# Video Card 101



## Praetor

*Video Card 101*
Revision History

 *v1.00* Nov 2005 Initial draft.
 *v1.10* Feb 2006 Rewritten with a different direction.
Contents

 *Section 01 - Preface*
 *Section 02 - Short and Sweet: What videocard should I get?*
 *Section 03 - Long and Painful: What do I need to know to pick out a good videocard?*
 *Section 04 - VFAQ*
 *Section 05 - When Stuff Goes Wrong*
 *Section 06 - The Encyclopedia*
 *Section 07 - A Look at ATi*
 *Section 08 - A Look at nVidia*
 *Section 09 - Official Crap*


----------



## Praetor

Section 01 - Preface
Ok, so the comments coming back about the first draft of this 101 guide were generally "freaking nice guide ... but too much all at once", so this guide is a lot more to the point with respect to addressing 'what should I get' and also adds an additional section dealing with overclocking and other neat things. This version is now split up so that it more effectively communicates what people need to make good product purchases without being nearly as overwhelming (and difficult to maintain) as the original release.


----------



## Praetor

Section 02 - Short and Sweet: What videocard should I get?
*Before Reading Further aka "How to Use this Suggestion Guide"*
 Figure out what type of card you need. For instance, if you are building a machine that is never going to play a single videogame, there is no reason to consider a top-of-the-line videocard that will weigh in at several hundred dollars. Once you have an idea of your performance requirements, have a look at the category titles to see which category those requirements fall into.
 Once you've picked a category, figure out how much money you have to work with. The suggestions are listed top-to-bottom in order of price tag. This list is not computer generated: I've hand-selected every entry here and made sure that as you go down the list, not only does the price tag increase, the performance of the card also increases (except where noted). The reasoning here is that there is no point in spending more money to get equivalent or lesser performance.
 Please keep in mind that the recommendations are grouped by ATi/nVidia and AGP/PCIE ... while picking an ATi- or nVidia-based card may only be a preference, picking an AGP or PCIE card is often a condition: you need to pick a videocard that will fit into your motherboard. For those who are trying to decide on ATi/nVidia or AGP/PCIE, these two issues are explicitly dealt with in the *VFAQ* section of this 101 guide.
 If this is all still way too overwhelming, at the bottom of every category I've picked out a few cards from those listed as "Praetor's Picks" and provided more detailed information on what you're getting and why I chose that particular card.
Now, once you've settled the above steps, pick a category and either scroll down or click on the links to see the appropriate recommendation block:
*Category A - "I'm Building a machine that does not have any gaming requirements, I just want a basic videocard"*
*Category B - "I want a video card that will let me play the occasional videogame: I'm not looking for the best of the best, just something that will let me play the occasional game."*
*Category C - "I'm building a gaming box, but I don't have a fortune to spend on the top-tier parts; I'd still like to play my games at near-max settings if possible"*
*Category D - "I want to play all the latest games at the highest settings, I'm willing to pay the premium for it"*


----------



## Praetor

*Category A - "I'm Building a machine that does not have any gaming requirements, I just want a basic videocard"*
For people falling into this category the options are pretty straightforward and the considerations few: there is no point in pumping several hundred dollars into a top-notch, high-performance gaming videocard, as all that graphical horsepower will never be realized. It will, however, be beneficial to look for a videocard that is inexpensive, provides the basic functionality required, is low maintenance, and does not compromise any other functionality of the system. As such, suggestions in this section will be subject to the following constraints:

 Not cost more than $60. For users wanting a basic card, there's no point in shelling out the big bucks for high-end videocards packing features that won't be utilized.
 Not have HyperMemory or TurboCache. These technologies might sound neato but their premise is to use some system memory as their own ... which reduces the amount of memory available to the rest of the system.
 Will have a preference for passively cooled solutions: being low-powered cards, their heat output is often more than readily handled by passive coolers, thus removing the possibility of having a cheap active cooling solution fail.

 *[AGP][ATi]* *Powercolor RV6DE-NA3 Radeon7000 32MB ($21.00)*, *Sapphire 100949L Radeon 7000 32MB ($23.99)*, *Connect3D Radeon7000 32MB ($24.99)*. With these three cards, performance is tossed right out the window in favor of low cost and zero frills. If you need a videocard that simply puts an image on the screen, these are the cards for you. They are the absolute bottom of the barrel but also come with the absolute lowest price tags.

 *[AGP][nVidia]* *MSI MX4000-T64 64MB ($22.99)*, *ASUS V9400-X/TD/64 GeForce MX4000 64MB ($23.99)*. Like the ATi cards in the previous point, these two cards are also targeted at buyers who just want a videocard that puts images up on the screen and aren't looking for all the bells and whistles.

 *[AGP][ATi]* *JetWay R9MX-AD-064C Radeon 9000 64MB ($25.99)* The Radeon 9000 lineup is a generational jump up from the primitive Radeon 7000s suggested above, and for users who may want to explore DirectX8 options, this is the cheapest DX8 card of its class. For all intents and purposes, this is the lowest-model ATi AGP card that should be considered, unless you are building a system where you want to squeeze as much performance as possible out of other areas and are down to counting pennies.

 *[AGP][ATi]* *MSI RX9250-T128 Radeon 9250 128MB ($29.99)*, *Sapphire 100583-GN-H Radeon 9250 128MB ($32.50)*, *ASUS A9250/TD/128 Radeon 9250 128MB ($33.00)* Offering more performance and functionality without a noticeable increase in price nor thermal/noise output, the Radeon 9250 based cards are a superior choice to all the previously suggested cards.

 *[AGP][nVidia]* *ASUS V9250-X/TD/64 GeForce FX5200 64MB ($33.00)*, *ASUS V9520-X/TD/128 GeForce FX5200 128MB ($34.99)*, *XFX PC-T34A-NT GeForce FX5200 128MB ($39.00)*. The cheapest cards packing DirectX9 hardware support, these cards offer the most functionality per buck of all the cards listed so far. Of the three, the last one has a 128bit memory interface, which doubles its memory bandwidth compared to the other two and makes it an ideal candidate for an uber-low-budget gaming card.
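As an aside, peak memory bandwidth is just the effective transfer rate multiplied by the bus width in bytes, which is where the "doubling" above comes from. A quick sketch of the arithmetic (figures taken from the FX5200 examples above):

```python
def bandwidth_gb_s(effective_mt_s, bus_width_bits):
    # Peak bandwidth = transfers per second * bytes moved per transfer.
    # effective_mt_s is the "DDR" number, e.g. DDR500 = 500 MT/s.
    return effective_mt_s * (bus_width_bits // 8) / 1000.0

# A 64bit FX5200 at DDR500 vs the 128bit XFX model at the same memory clock:
print(bandwidth_gb_s(500, 64))   # 4.0 GB/s
print(bandwidth_gb_s(500, 128))  # 8.0 GB/s -- the wider bus doubles the bandwidth
```

The same formula reproduces every bandwidth figure quoted in the spec tables below (e.g. 64bit at DDR500 is the 4GB/s listed for the GeForce 6200 pick).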

 *[AGP][ATi]* *Sapphire 100576 Radeon 9550SE 128MB ($47.00)*, *ASUS A9550/TD/128 Radeon 9550 128MB ($48.50)*. These are ATi's cheapest DirectX9 capable cards. Unless your selection is ATi-limited there's no reason to consider these cards: the XFX card mentioned previously is better performing as a whole and 20% cheaper.

 *[PCIE][nVidia]* *ASUS EN6200TC128/TD/16M GeForce 6200TC supporting 128MB ($42.00)*, *MSI NX6200TC-TD128ELF GeForce 6200TC supporting 128MB ($42.00)*. Both of these cards are included in the event that users need the absolute cheapest PCI-Express based discrete videocard options available. They should be avoided if possible as they leech off system memory (i.e., the cards physically come with 16MB of memory onboard but will leech up to 112MB of your system memory to provide "128MB" of memory). While this may seem like a cheap way of getting the video performance of a 128MB card (and for all intents and purposes, it is), a system featuring a card such as this will get better overall performance by spending the extra few dollars and avoiding a TurboCache-based card.

 *[PCIE][ATi]* *MSI RX300HM-TD128ELF Radeon X300SE HyperMemory supporting 256MB ($45.00)*, *Sapphire 100140L Radeon X300SE supporting 256MB ($46.00)*. Like the TurboCache-based cards in the previous point, these are ATi's absolute cheapest PCI-Express solutions and are included for that very reason. The difference between these cards and the nVidia solutions above is that these present a significantly reduced memory footprint since they feature 64MB of onboard memory (thus, as a "128MB card", they will only chew up 64MB of your system memory); furthermore, if more graphics performance is needed, these cards can use up to 192MB of your system memory to provide a "256MB card".
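To put numbers on the "leeching" described in the last two points: the system memory one of these cards borrows is simply the advertised framebuffer minus what is physically onboard. A quick illustration using the figures quoted above:

```python
def system_memory_borrowed(advertised_mb, onboard_mb):
    # A TurboCache/HyperMemory card fills the gap between its advertised
    # framebuffer and its physical onboard memory with your system RAM.
    return max(advertised_mb - onboard_mb, 0)

# GeForce 6200TC: 16MB onboard, marketed as a "128MB" card
print(system_memory_borrowed(128, 16))   # 112MB of system RAM gone
# Radeon X300SE HyperMemory: 64MB onboard, as a "128MB" or "256MB" card
print(system_memory_borrowed(128, 64))   # 64MB
print(system_memory_borrowed(256, 64))   # 192MB
```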

 *[PCIE][ATi]* *Sapphire 1024-2C50-04-SA Radeon X300SE 128MB ($47.99)*, *ASUS EAX300SE-X/TD/128 Radeon X300SE 128MB ($48.00)*. These are the cheapest no-compromise PCI-Express videocards: they feature 128MB of real physical memory and use none of your system memory.

 *[PCIE][nVidia]* *ASUS N6200/TD/128 GeForce 6200 128MB ($48.99)*. This is the cheapest no-compromise nVidia based PCI-Express solution: this card features 128MB of physical memory on board and will not use any of your system memory.

 *[AGP][ATi]* *JetWay 96MX-AT-128C Mobility Radeon 9600 128MB ($39.00 after $10.00 MIR)* After the rebate this card offers the best performance of all the AGP cards listed here; nothing else listed so far holds a candle to it. Even without the rebate, this card is still very competitively priced.

 *[AGP][nVidia]* *eVGA 256-A8-N313-LX GeForce FX5500 256MB ($39.00 after $15.00 MIR)* This is the best nVidia AGP-based solution presented so far, although two things eliminate it from being the best AGP solution overall: the JetWay Mobility 9600 listed above offers superior performance (and at a better pre-rebate price), and this card features active cooling. While active cooling may initially seem like a neat idea (i.e., keep your card nice and cool), a GPU like the GeForce FX5500 won't generate enough heat to warrant it ... the inclusion of active cooling just presents another device that can mechanically fail.

 *[AGP][nVidia]* *Apollo GeForce AGP6200AL GeForce 6200 128MB ($54.00)*, *MSI NX6200AX-TD128LF GeForce 6200 128MB ($54.50)* These are the cheapest AGP cards featuring DirectX 9.0c support.

 *[PCIE][ATi]* *Connect3D 3032 Radeon X550 128MB ($54.99)*. For all intents and purposes this is an X300SE with a higher clock, and it is the best PCI-Express based solution listed so far.

 *[AGP][ATi]* *Sapphire Atlantis Radeon 9600 128MB ($57.99)*. Featuring a full Radeon 9600 (as opposed to the Mobility 9600 in the JetWay card above), this card offers the best AGP performance of all the cards listed so far and does not require an active cooling solution.

 *[AGP][nVidia]* *eVGA 256-A8-N341-LX GeForce 6200 256MB ($49.00 after $10.00 MIR)* With the rebate this card is worth considering if your selections are nVidia-limited (if not, the Radeon 9600 solutions presented above are superior). Without taking the rebate into consideration, this card can probably be passed over as overpriced.


*Praetor's Picks - Category A*
*[PCIE][nVidia]*
Not a hard choice here, the  *ASUS N6200/TD/128 GeForce 6200 128MB ($48.99)*  is the only PCIE-nVidia based card here that does not require the user to compromise.
*Make * ASUS
*Model Name/# * N6200/TD/128
*Interface * PCI-Express x16
*GPU * nVidia GeForce 6200
*GPU Clock *350MHz
*Memory *128MB DDR
*Memory Clock *250MHz (DDR500)
*Memory Interface *64bit (4GB/s bandwidth)
*Pipelines * 4x2
*Cooling *Passive cool
*Shader Models *3.0
*Connectivity *VGA+DVI
*Product Link * http://www.asus.com/products4.aspx?l1=2&l2=7&l3=155&model=484&modelmenu=1​
*[AGP][nVidia]*
The *MSI NX6200AX-TD128LF GeForce 6200 128MB ($54.50)* gets the pick here because this card packs the broadest featureset without being excessively overpriced. In particular this card is capable of HDTV output.
*Make * MSI
*Model Name/# * NX6200AX-TD128LF
*Interface * AGP 4x/8x
*GPU * nVidia GeForce 6200
*GPU Clock *350MHz
*Memory *128MB DDR
*Memory Clock *250MHz (DDR500)
*Memory Interface *64bit (4GB/s bandwidth)
*Pipelines * 4x2
*Cooling *Passive cool
*Shader Models *3.0
*Connectivity *VGA+DVI, + HDTV/SVideo
*Product Link * http://www.msi.com.tw/program/products/vga/vga/pro_vga_detail.php?UID=707​
*[PCIE][ATi]*
I was originally going to pick the X550, however the performance improvement does not warrant the extra 12% price jump. Thus, the pick for this category is the *ASUS EAX300SE-X/TD/128 Radeon X300SE 128MB ($48.00)*. The ASUS model here has the same performance profile as the Sapphire except this model has a wider connectivity base.
*Make *ASUS
*Model Name/# * EAX300SE-X/TD/128M
*Interface * PCI-Express x16
*GPU * ATi X300SE
*GPU Clock *325MHz
*Memory *128MB DDR
*Memory Clock *200MHz (DDR400)
*Memory Interface *64bit (3.2GB/s bandwidth)
*Pipelines * 4x2
*Cooling *Passive cool
*Shader Models *2.x
*Connectivity *VGA+DVI+Composite
*Product Link *http://www.asus.com/products4.aspx?l1=2&l2=8&l3=14&model=399&modelmenu=1​
*[AGP][ATi]*
With the Sapphire 9600 being an extremely close pick, I eventually chose the *JetWay 96MX-AT-128C Mobility Radeon 9600 128MB ($39.00 after $10.00 MIR)* due to its significantly lower price. The sticking point is that, while this card packs the same performance profile as the Sapphire model, being a Mobility Radeon 9600 you will have to use ATi's Mobility drivers, which are outdated compared to the mainstream drivers. The saving grace is that this entire category is not focused on squeezing the absolute best out of every single card but rather emphasizes low cost over performance.
*Make *JetWay
*Model Name/# * 96MX-AT-128C
*Interface * AGP 4x/8x
*GPU * ATi Mobility Radeon 9600
*GPU Clock *325MHz
*Memory *128MB DDR
*Memory Clock *300MHz (DDR600)
*Memory Interface *128bit (9.6GB/s bandwidth)
*Pipelines * 4x2
*Cooling *Passive cool
*Shader Models *2.x
*Connectivity *VGA+SVideo
*Product Link *http://www.jetway.com.tw/jetway/system/productshow.asp?id=135&proname=96MX-AT-128C​


----------



## Praetor

*Category B - "I want a video card that will let me play the occasional videogame: I'm not looking for the best of the best, just something that will let me play the occasional game."*
As the description suggests, people falling into this category are looking for a videocard that will let them play the occasional game, perhaps not at maximum settings but appreciably well. So, in a sentence, the profile for cards falling into this bracket is "low-midrange gaming" and the specific constraints applied here will be:

 Pricing will fall in the sub-$150 range
 Dual video outputs are a requirement whether it be DVI+DVI or VGA+DVI etc
 No cards with 64bit memory interface will be selected


 *[PCIE][nVidia]* *BIOSTAR V6502SS26 GeForce 6500 256MB ($62.00)*. The cheapest half-decent PCIE card to start us off, this card falls dead between the 6200 and 6600s: it lacks the pipelines of the latter but has a superior memory interface to the former; this card provides respectable performance, particularly so for its pricing profile.

 *[AGP][ATi]* *Sapphire Radeon 9600Pro 128MB ($63.00)*.

 *[PCIE][ATi]* *ASUS EAX1300/TD/128MB Radeon X1300 128MB ($67.50)*. Offering higher clockspeeds as well as a superior API featureset, the X1300 handily beats out the X300/X550 for a spot on the recommendation list.

 *[AGP][ATi]* *Sapphire Radeon 9800SE 128MB ($69.00)*, *Sapphire 100566-RD Radeon 9800SE 128MB ($69.00)*, *Sapphire 100566L-RD BK-HS Radeon 9800SE 128MB ($60.00 after $10.00 MIR)*, *Sapphire 100132 Radeon 9800SE Advantage 256MB ($74.00)*. In its stock condition, the Radeon 9600Pro listed above outperforms these cards, however a clever user may/will find that these 9800SEs can be BIOS-flashed and overclocked quite readily ... and in that case, these cards are a steal. If you're just looking to run a card in its stock condition, save your money and grab the Radeon 9600Pro.

 *[AGP][ATi]* *JetWay 96XT-AD-256C Radeon 9600XT 256MB ($83.00)*. Featuring a significant clock jump from the 9600Pro, this card definitively offers the best AGP performance of the cards listed so far

 *[PCIE][nVidia]* *MSI NX6600-TD128E Lite GeForce 6600 128MB ($86.99)*, *Albatron PC6600Q GeForce 6600 256MB ($82.00 after $10.00 MIR)*. With the Radeon X1300 roughly matching the performance profile of the 6600LE cards (at a cheaper price), the 'full' GeForce 6600 offers significantly improved performance over all the PCIE cards listed so far.

 *[AGP][nVidia]* *ABIT FX5900 OTES GeForce FX5900 128MB ($89.99)*. Although a two-slot solution, to find a flagship card (albeit several generations back) in this price bracket is impressive; for non-DirectX9 games this card will handily come out on top among AGP cards, however if you're interested in playing DirectX9 games, the 9600XT still holds the recommendation.

 *[PCIE][ATi]* *Sapphire 100121L Radeon X700 128MB ($90.00)*, *ASUS EAX700-X/TD/128M Radeon X700 128MB ($96.00)*. The first 8-pipe ATi card to hit this recommendation list, the X700 pulls ahead of the X1300 and takes its place as the best ATi PCIE card so far. Of the two, the ASUS, featuring a higher memory clock, will provide better performance.

 *[PCIE][ATi]* *Sapphire 100125L Radeon X800GT 128MB ($99.00)*. This card features 8 pipes running on a 256bit memory bus; stock or modified, it handily outperforms all previously listed PCIE cards.

 *[PCIE][ATi]* *Sapphire 100139L Radeon X800GTO 128MB ($96.00 after $10.00 MIR)*. Adding an extra four pipelines to the X800GT, this card readily garners the top PCIE recommendation so far.

 *[AGP][ATi]* *PowerColor R98-PC3G Radeon 9800Pro 128MB ($109.00)*. Priced significantly higher than the 5900/9600XT which previously held the recommendations, this card offers no-compromise, better performance than both in the AGP category. However, purchasing this card should be approached with caution: compared to current-generation cards, the 9800Pro, as good as it might be, is incredibly outdated (not to mention, AGP is essentially dead and this would be an investment into a dead platform).

 *[PCIE][ATi]* *Sapphire 100147L Radeon X1600Pro 128MB ($111.00)*, *Sapphire 100144 Radeon X1600Pro 256MB ($116.00)* With only 55% of the memory bandwidth of the X800GTO, the only reason this card makes the list is that it has a higher core clock, offers newer API support, and features ATi's RingBus memory architecture. Whether or not the efficiency advantages of RingBus will overcome the lack of memory bandwidth is something benchmarks will show; this card is respectable nonetheless.

 *[PCIE][nVidia]* *eVGA 256-P2-N384 GeForce 6800 256MB ($99.00 after $30.00 MIR)*. This card's price and performance profile put it dead-smack in the same category as the X800GTO, albeit behind in both core and memory clockspeeds. However, for users looking for nVidia cards and/or considering pipeline unlocking, this card offers itself as an excellent buy.

 *[AGP][nVidia]* *AOpen 90.05210.616 GeForce 6600GT 128MB ($113.00 after $30.00 MIR)*, *Leadtek A6600GT TDH GeForce 6600GT 128MB ($139.00)*. Priced significantly higher than the 9800Pro above, these cards offer hands-down superior performance.

 *[AGP][ATi]* *Sapphire 100148 Radeon X1600Pro 256MB ($139.00)*. A current-generation chip with RingBus, higher clocks and more pipelines than the venerable 6600GT, this card takes the top position for AGP buyers.



*Praetor's Picks - Category B*
*[PCIE][nVidia]* 
Definitely an easy choice here, the *eVGA 256-P2-N384 GeForce 6800 256MB ($99.00 after $30.00 MIR)* offers a remarkable amount of performance for its price; you're getting a 12-pipe card running a 256bit memory bus for a remarkably low price.
*Make *eVGA
*Model Name/# * 256-P2-N384 
*Interface * PCI-Express
*GPU * nVidia 6800
*GPU Clock *325MHz
*Memory *256MB DDR
*Memory Clock *300MHz (DDR600)
*Memory Interface *256bit (19.2GB/s bandwidth)
*Pipelines * 12x5
*Cooling *Stock HSF
*Shader Models *3.0
*Connectivity *VGA+DVI+SVideo
*Product Link * http://www.evga.com/products/moreinfo.asp?pn=256-P2-N384-TX​
*[AGP][nVidia]*
Somewhat a difficult choice between the 6600GT and the FX5900, with the former offering significantly improved performance and the latter a significantly reduced pricetag, I ended up picking the 6600GT -- in particular, the *AOpen 90.05210.616 GeForce 6600GT 128MB ($113.00 after $30.00 MIR)* -- since it more accurately fits the category description. While the FX5900 would definitely win if the category were restricted to DirectX8 games and older, most major titles now are (and those to come definitively will be) DirectX9-based or even more advanced -- something that the GeForceFX architecture cannot handle very well.
*Make *AOpen
*Model Name/# * 90.05210.616 
*Interface * AGP 4x/8x
*GPU * nVidia 6600GT
*GPU Clock *500MHz
*Memory *128MB GDDR3
*Memory Clock *500MHz (DDR1000)
*Memory Interface *128bit (16.0GB/s bandwidth)
*Pipelines * 8x4
*Cooling *Stock HSF
*Shader Models *3.0
*Connectivity *VGA+DVI+SVideo
*Product Link * http://usa.aopen.com/products/vga/GF6600GT-DV128AGP.htm​
*[PCIE][ATi]*
A relatively easy decision here, the *Sapphire 100139L Radeon X800GTO 128MB ($96.00 after $10.00 MIR)* offers absolutely stunning performance for a $100 card. No regrets on recommending this card whatsoever.
*Make *Sapphire
*Model Name/# * 100139L
*Interface * PCI-Express
*GPU * ATi Radeon X800GTO
*GPU Clock *400MHz
*Memory *128MB GDDR2
*Memory Clock *350MHz (DDR700)
*Memory Interface *256bit (22.4GB/s bandwidth)
*Pipelines * 12x7
*Cooling *Stock HSF
*Shader Models *2.x
*Connectivity *VGA+DVI+SVideo
*Product Link * http://www.sapphiretech.com/en/products/graphics_specifications.php?gpid=119​
*[AGP][ATi]* 
A somewhat difficult choice, the *JetWay 96XT-AD-256C Radeon 9600XT 256MB ($83.00)* ended up with the pick due to its very good performance/price ratio. I was originally even considering the Sapphire 9600Pro, however the 9600XT offers a significant enough performance jump to warrant the pick.
*Make *JetWay
*Model Name/# * 96XT-AD-256C
*Interface * AGP 4x/8x
*GPU * ATi Radeon 9600XT
*GPU Clock *500MHz
*Memory *256MB DDR
*Memory Clock *300MHz (DDR600)
*Memory Interface *128bit (9.6GB/s bandwidth)
*Pipelines * 4x2
*Cooling *Stock HSF
*Shader Models *2.x
*Connectivity *VGA+DVI+SVideo
*Product Link * http://www.jetway.com.tw/jetway/system/productshow.asp?id=168&proname=96XT-AD-128C​


----------



## Praetor

*Category C - "I'm building a gaming box, but I don't have a fortune to spend on the top-tier parts; I'd still like to play my games at near-max settings if possible"*
As the title suggests, this category caters to those looking to build machines that will play all the current games at something in the ballpark of maximum settings. As a minimum, cards qualifying for this category:

 Must have at least 12 pixel shader pipes
 Must utilize a 256bit memory bus
 Must have at least 256MB of video memory
 Must fall in the $150 to $275 bracket
Note that recommendations for this category will take into account overclocking, unlocking etc.


 *[PCIE][ATi]* *Sapphire 100169L Radeon X800GTO 512MB ($157.00)*, *Sapphire 100129FBSR Radeon X800GTO 256MB ($159.00)*. With the former packing a whopping 512MB framebuffer and the latter being highly overclockable, both options are extremely competitive and of high value

 *[PCIE][ATi]* *Sapphire 100130L-BL X800GTO² 256MB ($149.00 after $20.00 MIR)*. Known to be unlockable to a 16-pipe card, this card offers exceptional value after unlocking. If you plan on running the card in its stock condition, I would advise purchasing either of the previously recommended cards.

 *[PCIE][ATi]* *Sapphire 100105SR-BL Radeon X800XL 256MB ($169.00)*. This is the cheapest 16-pipe card to make it onto this list: with clocks similar to the X800GTO's, the extra four shader pipelines will allow this card to pull ahead.

 *[PCIE][nVidia]* *eVGA 256-P2-N389-AX GeForce 6800GS CO SE 256MB ($154.00 after $15.00 MIR)*. Featuring a copper-based cooler, the best warranty in the industry and out-of-the-box overclocked components, this 12-pipe card will handily compete with all other 12-pipe cards and even some 16-pipe cards.

 *[PCIE][ATi]* *HIS Hightech HX80XLQ256-3TOEN Radeon X800XL IceQ II 256MB ($169.00 after $20.00 MIR)* Featuring an aftermarket cooler which also doubles as a rear exhaust fan, this card is profiled to be overclocked: based on a proven 16-pipe card, it's a very solid buy

 *[PCIE][ATi]* *Sapphire 100106-RD Radeon X850XT 256MB ($189.00)*, *HIS Hightech HX85XTTQ256-3TOEN IceQ II Radeon X850XT 256MB ($197.00 after $50.00 MIR)*. For its generation, this was easily the best card on the market (granted, there was the Platinum Edition, but that card is, for all intents and purposes, nonexistent). The HIS model features an exhaust-type aftermarket cooler: more than suitable for overclocking this already extremely impressive card.

 *[AGP][nVidia]* *PNY VCG6800SAWB GeForce 6800GS 256MB ($189.00 after $40.00 MIR)* Although significantly more expensive than its PCI-Express brethren, this card offers a (relatively) good performance/price pairing for users looking to buy an AGP solution.

 *[PCIE][nVidia]* *MSI NX7800GT-VT2D256E GeForce 7800GT 256MB ($264.00 after $30.00 MIR)*, *eVGA 256-P2-N516 GeForce 7800GT 256MB ($265.00 after $20.00 MIR)* These are the first cards able to usurp the performance and value of the X850XT. Of the two, the eVGA model features a superior warranty as well as a higher clock.

 *[AGP][ATi]* *ATI All-In-Wonder X800XT 100-714200 Radeon X800XT 256MB ($299.00)* For this level of graphical performance at this price, I would personally consider biting the bullet and shifting to a PCI-Express solution; however, this card offers VIVO and TV capabilities, all on an AGP solution.

 *[AGP][nVidia]* *eVGA 256-A8-N507 GeForce 7800GS 256MB ($299.00)*, *XFX PVT70KUAD7 GeForce 7800GS 256MB ($309.00)*, *eVGA 256-A8-N506-AX GeForce 7800GS CO 256MB ($309.00)*, *eVGA 256-A8-N508-AX GeForce 7800GS CO Superclock 256MB ($319.00)*. This is about as good as AGP products get: these 7800GS cards all come heavily overclocked from the reference clock speed, with the last two featuring copper-based heatsinks for even better heat dissipation (and thus potentially even more overclocking). If you're not interested in the additional video features offered by the X800XT AIW, these cards are the ones to get if you want to make the most of your AGP machine.


*Praetor's Picks - Category C*
*[PCIE][nVidia]*
Definitely not a hard choice here, the *eVGA 256-P2-N516 GeForce 7800GT 256MB ($265.00 after $20.00 MIR)* easily garners the pick due to its extreme performance/price value. Being protected by the best warranty in the industry helps too!
*Make *eVGA
*Model Name/# * 256-P2-N516
*Interface * PCI-Express
*GPU * nVidia GeForce 7800GT
*GPU Clock *460MHz
*Memory *256MB GDDR3
*Memory Clock *550MHz (DDR1100)
*Memory Interface *256bit (35.2GB/s bandwidth)
*Pipelines * 20x7
*Cooling *Copper HSF
*Shader Models *3.0
*Connectivity *DVI+DVI+SVideo
*Product Link * http://www.evga.com/products/moreinfo.asp?pn=256-P2-N516-AX&family=22

*[AGP][nVidia]*
A slightly closer and more difficult pick here; I ended up picking the *eVGA 256-A8-N508-AX GeForce 7800GS CO Superclock 256MB ($319.00)* since, if you're going to put a ballpark $300 into an effectively obsolete platform, you might as well maximize the performance if possible. Having the copper heatsink and the best warranty in the industry is a big plus for users who really want to maximize performance through overclocking.
*Make *eVGA
*Model Name/# * 256-A8-N508-AX
*Interface * AGP 4x/8x
*GPU * nVidia GeForce 7800GS
*GPU Clock *460MHz
*Memory *256MB GDDR3
*Memory Clock *675MHz (DDR1350)
*Memory Interface *256bit (43.2GB/s bandwidth)
*Pipelines * 16x6
*Cooling *Copper HSF
*Shader Models *3.0
*Connectivity *DVI+DVI+SVideo
*Product Link * http://www.evga.com/products/moreinfo.asp?pn=256-A8-N508-AX
*[PCIE][ATi]*
A close call between the two X850XTs; I ended up picking the *HIS Hightech HX85XTTQ256-3TOEN IceQ II Radeon X850XT 256MB ($197.00 after $50.00 MIR)* due to the inclusion of the exhaust-type aftermarket cooler. Either X850XT would have been more than satisfactory as the GPU is tried and true, however the aftermarket cooler on the HIS unit definitively adds to its value by increasing its overclocking potential.
*Make *HIS
*Model Name/# * HX85XTTQ256-3TOEN IceQ II
*Interface * PCI-Express
*GPU * ATi Radeon X850XT
*GPU Clock *520MHz
*Memory *256MB GDDR3
*Memory Clock *540MHz (DDR1080)
*Memory Interface *256bit (34.6GB/s bandwidth)
*Pipelines * 16x6
*Cooling *Exhaust-type HSF
*Shader Models *2.x
*Connectivity *VGA+DVI+SVideo
*Product Link * http://www.hisdigital.com/html/product_ov.php?id=176&view=yes
*[AGP][ATi]*
Since there really was only a single AGP card for this category, the selection here was pretty simple. Featuring an "older" X800XT, this card's saving grace is the significantly improved functionality by means of the All-in-Wonder components.
*Make *ATi
*Model Name/# * All-in-Wonder X800XT
*Interface * AGP 4x/8x
*GPU * ATi Radeon X800XT
*GPU Clock *500MHz
*Memory *256MB GDDR3
*Memory Clock *500MHz (DDR1000)
*Memory Interface *256bit (32.0GB/s bandwidth)
*Pipelines * 16x6
*Cooling *Stock HSF
*Shader Models *2.x
*Connectivity *VGA+DVI+VIVO+TV
*Product Link * http://www.ati.com/products/radeonx800/aiwx800xt/index.html


----------



## Praetor

*Category D - "I want to play all the latest games at the highest settings, I'm willing to pay the premium for it"*
Buyers in this category are looking for the best of the best and are willing to pay for it. Cards here are the cream of the crop either out of the box or are very readily overclockable. Due to market trends, AGP cards do not exist in this recommendation category. Products recommended here are subject to the following:

 No price restriction
 Only top tier cards will be chosen
 Only cards with 256bit memory interfaces will be chosen
 Only PCI-Express cards will be chosen
 Only cards with a minimum of 256MB of video memory will be chosen, preference will explicitly be given for 512MB+ cards
 Preference will be given for copper/exhaust coolers
Note that large points are given for overclockability and that this category is not focused on "reckless spending of money"


 *[PCIE][ATi]* *ATi 100-435703 Radeon X1800XL 256MB ($312.00)*. This is no X1800XT, however it is quite readily overclockable and thus has significant value there.

 *[PCIE][nVidia]* *eVGA 256-P2-N519-AX GeForce 7800GT CO 256MB ($349.00)* Also not the top-dog card, this card's value lies in its overclockability (thanks to the copper cooler) as well as the free SLI motherboard it comes with.

 *[PCIE][ATi]* *Gigabyte GV-RX18T512V-B Radeon X1800XT 512MB ($394.99)*, *HIS Hightech H180XT512DVN Radeon X1800XT 512MB ($399.99 after $50.00 MIR)* These are the flagship cards of ATi's X1800 series and offer the best performance that GPU series had to offer. Both models feature semi-exhaust type coolers

 *[PCIE][nVidia]* *eVGA 256-P2-N527-AX GeForce 7800GTX 256MB ($434.00)* The 7800GTX (A1/A2) has undeniable performance, and this eVGA model's custom copper cooling system allows for potentially even more overclocking past its already overclocked out-of-the-box speeds.

 *[PCIE][ATi]* *Powercolor 1900XT512OEM Radeon X1900XT 512MB ($479.00)* Clocked a hair slower than the X1900XTX, the X1900XT is based on the undeniably reigning GPU of the moment, guaranteeing it a recommendation.

 *[PCIE][nVidia]* *XFX PVT70FUND7 GeForce 7800GTX 256MB ($499.00)*, *Leadtek WinFast PX7800GTX TDH myVIVO Extreme 256MB ($499.00)*, *eVGA 256-P2-N529-AX GeForce 7800GTX 256MB ($489.00)* Coming significantly overclocked, and with the Leadtek model featuring a semi-exhaust cooler, these cards maximize the performance of the 7800GTX (A1/A2).

 *[PCIE][ATi]* *PowerColor 1900XTX512OEM Radeon X1900XTX 512MB ($579.00)*, *MSI RX1900XTX-VT2D512E Radeon X1900XTX 512MB ($589.99)*, *Connect3D 3055 Radeon X1900XTX 512MB ($579.00 after $20.00 MIR)* Literally the best videocards available on the market, the X1900XTX cards command a very high premium ... but if you're looking for the absolute best card on the market (at time of writing), here it is.


*Praetor's Picks - Category D*
*[PCIE][nVidia]*

Picking the *eVGA 256-P2-N529-AX GeForce 7800GTX 256MB ($489.00)* wasn't easy: the Leadtek model comes with a slightly lower clock speed on the memory but has an arguably superior cooler (heatpipe + semi-exhaust). In the end, the superior warranty provided by eVGA made the decision: for users who want to push their hardware to the max, the extra layer of protection afforded by eVGA's warranty is a huge bonus.
*Make *eVGA
*Model Name/# * 256-P2-N529-AX
*Interface * PCI-Express
*GPU * nVidia GeForce 7800GTX
*GPU Clock *490MHz
*Memory *256MB GDDR3
*Memory Clock *650MHz (DDR1300)
*Memory Interface *256bit (41.6GB/s bandwidth)
*Pipelines * 24x8
*Cooling *ACS³ custom cooler
*Shader Models *3.0
*Connectivity *DVI+DVI+VIVO
*Product Link *http://www.evga.com/products/moreinfo.asp?pn=256-P2-N529-AX​
*[PCIE][ATi]* 

Not a hard choice here! Although the X1900XTX is a superior card, paying the premium for the extra 25/50MHz overclock is borderline insane! Picking the *Powercolor 1900XT512OEM Radeon X1900XT 512MB ($479.00)* saved us $80 compared to the cheapest X1900XTX.
*Make *PowerColor
*Model Name/# * 1900XT512OEM 
*Interface * PCI-Express
*GPU * ATi Radeon X1900XT
*GPU Clock *625MHz
*Memory *512MB GDDR3
*Memory Clock *725MHz (DDR1450)
*Memory Interface *256bit (46.4GB/s bandwidth)
*Pipelines * 16x8
*Cooling *Semi-exhaust HSF
*Shader Models *3.0
*Connectivity *DVI+DVI+VIVO
*Product Link *http://www.powercolor.com/global/main_product_detail.asp?id=105​


----------



## Praetor

Section 03 - Long and Painful: What do I need to know to pick out a good videocard?
There are four steps to picking out a good videocard:

 _Define your requirements._ This one is pretty straightforward: what kind of features or performance do you need/want from your videocard? Do you want to be able to play the latest and greatest games on the highest settings? Do you want video capture capabilities? Do you want a super-silent computer? Or something else? By defining what your needs are you can cut down on the number of possible products to consider. 
 _How much money do you have to work with?_ Yes, we would all like to be able to spend a zillion dollars on every area of our computer to have the best system possible, but that's not an option generally available to us. By putting a limitation on the price tag we can further reduce the number of options to consider.
 _Indirect considerations._ There are three considerations here: (1) people who want silent computers, (2) people who want to do fancy things like overclock or unlock pipes, and (3) people who want to game on LCD monitors
 Do some more research

*Step 1: Defining your requirements*
The questions we are trying to answer with this step are:

  How much graphical horsepower do you need? Here are a few scenarios:
 A machine that will just be used for work, movies, email and chatting will have virtually zero graphical horsepower requirements
 A machine that will serve as a "general purpose" computer that will encounter the occasional game will have minimal gaming requirements
 A machine that will be playing the latest and greatest games will have a very high gaming requirement
 A machine that will be used for animation or rendering will have a very high workstation requirement
 A machine that will be used in a home theater type setup where the primary screen is the TV/projector will have a VIVO/TV/Media requirement and, as a secondary requirement, it would be beneficial if the videocard were relatively silent
 A machine where the user intends to maximize the performance of each and every component through unlocking or overclocking will have thermal requirements

 What type of horsepower do you need?
 Gaming?
 Professional?
 Media?
 Other?

 What other requirements do you need?
 Silent?
 Will thermal management be an issue?
 Power constraints?
 Connectivity constraints?


*Step 2: How much money do you have to work with?*
This step should be pretty self-explanatory.

*Step 3: Indirect Considerations*

 If you plan on gaming on an LCD, be aware that LCDs have something known as a _native resolution_. If you play a game (or do anything, really) at anything other than this native resolution then the image quality will be very poor. If you own an LCD with a relatively high native resolution of say 1280x1024, and you want to play a game such as FEAR (which is graphically intense in and of itself), then note that your videocard must be able to handle that game at 1280x1024. People on CRTs (aka "normal monitors") don't need to worry about this since their displays are not limited to a single resolution. 
 If you plan on buying a high powered videocard make sure your computer can actually power the card _reliably_. Videocard manufacturers and many people will say "get a 500W PSU" or something to that effect -- this is absolutely useless advice since a _bad_ 500W PSU will not be able to reliably power your videocard. This consideration is doubly important for people who are installing a new high-end videocard into an OEM machine (i.e., Dell, HP, Compaq, IBM etc) where the quality of the included PSU is questionable at best
 Although this is not nearly as much of an issue now as it was a few years ago, some games have certain hardware requirements (i.e., "You need a DirectX8 videocard to play this game"). The common scenario was people bought budget videocards (i.e., the GeForce4 MX, which is a DirectX7 card) and then wondered why they couldn't play a game that required DirectX8. Simply going to Microsoft.com and downloading the latest DirectX does *NOT* mean your hardware supports that version of DirectX
 If you're intending to build a shuttlebox or HTPC, please be aware of heat constraints: those small boxes are often not well enough ventilated to handle the heat output of the higher-end cards

*Step 4: Do some more research*
When you've narrowed down your cards, do some research on them! Don't make the mistake of trusting customer reviews: many times people will have a problem with a product (usually due to them skipping over step 3 and forgetting something) and they will blame the product. Instead, do your research by searching for comments from well established hardware review sites, aka people who know what they are talking about. 

An important step here is to recognize bias and fanboyism when you see it. For some reason kids (usually) tend to think that one company sucks or another company sucks without having any technical reason to back it up. Sure having opinions is cool, but just like you should not trust customer reviews, take people's experiences with a grain of salt: just because one person had a bad experience does not mean you will.

Now once you've done all this you should have a pretty small set of videocards to pick from: feel free to post your selection here (along with your budget and requirements) and you'll generally get a response or two from people indicating what route they think you should take. Again with the "grain of salt" ... look for _reasons_ so that you can deal with facts and not just opinions.

Lastly, have a look at the *VFAQ* to address some pretty common questions and concerns that people looking to buy videocards have.

_A bit more detail..._
*Gamer*
A good gaming card generally strives to feature the best of the best and to have the most of it; some stuff to shoot for when picking out a gaming card:

 *Pipelines*. Most commonly listed as "pixel pipelines" or "programmable shaders", the number of pipelines acts as the limiting factor on how many simultaneous shader programs can be run. Although cards do exist that do not have these shader pipelines, you are much better off getting a card with at least _some_ pipelines. Unless you are severely price limited, the absolute minimum number of pipelines you should shoot for in a gaming card is 4. Those looking for midrange cards should generally shoot for cards with 8-12 pixel pipelines, and high end gamers should be looking to buy cards with more than 12 pipelines.
 *Clockspeed*. Just like with processors, the higher the clockspeed, the faster the card and the smoother the gameplay so strive to get the fastest possible clock as financially feasible. 
 *Bandwidth*. All the GPU processing power in the world isn't going to be very useful if the processor is starved for data; users looking at high end cards will find that as complexity increases, performance is generally limited by memory bandwidth. Now how do you pick out a card with good/high bandwidth? There are two considerations: 
 The 'bit-size' (technically called memory addressing bus width) of the videocard. Budget cards often feature 64bit memory, meaning for each memory clock cycle, 64bits (8bytes) can be processed at a time. Midrange cards will generally feature a 128bit memory structure (meaning each clock cycle will allow for 16bytes to be processed) and high end cards almost always feature 256bit memory structures (for 32bytes per clock cycle). 
 The memory clockspeed -- for lower-end cards this information is notoriously hard to come by, and often a quick Google search will return a dozen [potentially] conflicting results; interpreting them correctly takes a bit of intuition/experience. The reason for the complexity is that marketing people know that 'the bigger number sells' and as such, they will often list the memory speed using the "DDR value" rather than the actual speed (what this DDR thing means is that for each clock pulse, instead of sending one signal per pulse, you send two per pulse and get twice the work done in the same amount of time; see *RAM 101* for more on this). The actual memory clock speed will always be the smaller of the two values and, to be specific, it will be exactly half (i.e., a videocard advertised with a 700MHz 'effective speed' will actually be running at 350MHz). In general, you'll want a higher memory clock speed.
Now to determine the memory bandwidth that a specific card has, simply perform the following calculation:
Bandwidth (MB/s) = BitSize x MemoryClockSpeed (MHz) ÷ 4 ​So a quick example before moving on: a video chip like the nVidia 6800GT, which features 256bit memory running at 500MHz (sometimes noted as "DDR1000", "1000MHz effective speed" or, even worse, "1000MHz"), will have 32GB/s of bandwidth (256 x 500 ÷ 4 = 32000MB/s, which is roughly 32GB/s).
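The formula above can be sketched in a few lines of Python (the function name is my own; the 6800GT figures are the ones quoted above). It also shows where the ÷ 4 shorthand comes from: bits ÷ 8 gives bytes, and DDR doubles the transfers per clock, so bits × clock × 2 ÷ 8 = bits × clock ÷ 4.

```python
def memory_bandwidth_mbs(bus_width_bits, actual_clock_mhz):
    """Peak bandwidth in MB/s for a DDR-type video memory bus."""
    bytes_per_transfer = bus_width_bits / 8  # e.g. 256bit -> 32 bytes
    transfers_per_clock = 2                  # DDR: two transfers per clock pulse
    return bytes_per_transfer * transfers_per_clock * actual_clock_mhz

# nVidia 6800GT: 256bit memory at 500MHz actual clock ("DDR1000")
print(memory_bandwidth_mbs(256, 500))  # 32000.0 MB/s, roughly 32GB/s
```

The same function reproduces the spec-table figures earlier in this guide, e.g. 256bit at 650MHz gives 41600 MB/s (41.6GB/s).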
 *Memory amount and type*: in addition to the bit-size of the memory, you'll also want to get the most, and most advanced type of, memory that you can for your gaming card. For the most part, even low-budget gamers should strive to avoid 64MB cards as much as possible (although that really should be your floor unless you are _really_ budget constrained -- something I'll deal with later); gamers looking for mid and high end parts should make 128MB of memory their minimum. Do note that videocards with 512MB of onboard memory do exist, however there have yet to be significant/noticeable performance gains from investing in that platform.

As for memory type and getting the most advanced type, video memory is slightly different than normal system memory (which marketing people love because there is more jargon to toss at the consumer). In the context of videocards there are currently three commonly found types of memory:
 DDR (sometimes denoted as GDDR, DDR1, GDDR1 etc)
 DDR2 (sometimes denoted as DDR-II, GDDR2, GDDR-II etc): the difference here is that DDR2 hit higher clockspeeds (thus offering better performance); however, the memory chips still ran at the same 2.5v as the original DDR and so heat became an issue.
 DDR3 (more commonly denoted as GDDR3 but other variants exist) solves the heat issue that DDR2 had by lowering the signalling voltage to 1.8-2.0v.
Performance-wise, clock-for-clock all the types of memories are the same (i.e., 500MHz DDR and 500MHz GDDR3 will yield pretty much the same results). To avoid confusion the remainder of this guide will make use of the notation: *DDR, DDR-II, GDDR3.*

 *Futureproof*. Although this is somewhat impossible given the fast turnarounds in the industry, you can make somewhat wise decisions by purchasing cards that may allow you to do more advanced things later on (i.e., SLI, overclocking-friendly cards etc). For the most part, this isn't nearly as big a consideration as the above points. For those looking to buy a new videocard, the only noticeable impact here is deciding between AGP and PCIE: you should strive to get a PCIE system where possible as AGP will be phased out (in fact, it is all but phased out).
 *API Support*. Marketing people like to play with words and one word they enjoy is "compatible"; an example of this is with the GeForce4MX, where the marketing will say "DirectX9 compatible". So far it may seem that there is nothing wrong with the statement, however the hardware on the GeForce4MX is DirectX7 class. So how does this work (and are they lying)?! What this means is that when a game issues a DirectX7 command the videocard will respond as expected, however when the game issues a DirectX8 (or better) command, the card is not capable of executing that command and, in the best case scenario, it will just sit there being dumb (more likely it'll crap out on you). The marketing people, however, aren't lying to you: the card is [literally] _compatible_ with DirectX9 -- what they mean is that you can install DirectX9 onto the computer and the graphics card won't have a problem with it. Translation? It's meaningless gobbledygook that marketing people throw at the consumer.
For gaming cards, expect to spend upwards of 75USD for anything passable. Upper limits are in excess of $1000 for complete configurations.

*Mainstream Cards*
By mainstream I mean videocards you would find in an office computer or a basic no-games computer. The line between very-low-budget gaming cards and mainstream cards is a thin grey one: for the most part, mainstream cards _are_ the low budget gaming cards. Since there is no gaming to be done on these machines, any cheap videocard will be sufficient. For the most part, expect to spend $30-50USD on a mainstream card. As an alternative that offers less fuss (but often less performance/flexibility), you might consider getting a motherboard with a built-in videocard: the motherboard will probably cost an extra $5-10USD but that is offset by not having to buy a videocard.
*Theater/HTPC*
Buying a videocard for a 'movie machine' generally places the emphasis on three points:

 *Featureset*. What you should generally be looking for here is video in/out features as well as TV-related features. The biggest, baddest GPU isn't gonna be very useful as a HTPC (home-theater PC) card if it can't interact with the TV (which for the most part is still the dominant video display in a home theater setup). Being able to interface with VCRs etc is also useful
 *Cool and Quiet* A fancy videocard, more often than not, is likely to generate tons of heat which will have to be dealt with. Normally this isn't much of an issue, however theater systems are often built in small cases where airflow is limited, and as such it's better to have a cool card to start with. For those concerned about noise: fans generate noise, and having a cool card to start with often means the fan can spin at a lower speed (and thus generate less noise).
*Workstation/Professional*
Unlike gamer cards where performance between various benchmarks can vary wildly, workstation cards are a bit more consistent (with much more well established benchmarks). For the most part the determining factors will be:

 *Amount and type of memory*. Same as for a gaming card, the more memory the better and the more advanced the memory, the better.
 *Fillrate* All other things being equal, more bandwidth will give a card a greater fillrate, both geometric and texture


----------



## Praetor

Section 04 - VFAQ

 *How much can I overclock my videocard to?* Nobody can tell you. Just because one dude hits a certain clock speed doesn't mean you will: you may match it, surpass it, or come up dismally short. The only way to find out how much you can overclock your videocard by is to try it yourself
 *If I overclock my videocard, how much performance increase should I expect?* Depends on the card, what you are overclocking and how much you are overclocking it by!
 As a trend, nVidia based cards seem to benefit more per clock cycle
 When overclocking you have the option of overclocking the core clock, the memory clock or both. Depending on what you are using to measure performance, you will notice varying changes in performance depending on which you overclock
 Overclocking a GPU core by 10MHz won't produce anything noticeable; neither will overclocking the memory by 25MHz. Also something to consider: a videocard which uses a 128bit memory path will benefit less from each memory clock increase than one that uses a 256bit memory interface
 Overclocking a low end budget card in hopes of making it perform like a high end part will only result in failure, disappointment and/or naivete. People who say they've overclocked their budget gaming card by some percentage and have gotten twice the performance generally don't know what they are talking about. The reason (yes, there is a reason) is that the bottleneck with budget parts is usually not the clock speed but something more crucial like the number of pipelines or the memory addressing bus width -- neither of which is affected by overclocking
 Some GPUs have frequency scaling where you won't notice any performance gains until you've overclocked by certain amounts. A bit is mentioned *here*

 *I've not bought a videocard yet but when I do, I want to overclock it, what should I look for when buying one?* Either look for a non-standard cooler (whether it be copper based, uses some form of exhaust or something else) or consider buying an aftermarket cooler. Increasing the cooling capacity (especially for the memory units) will help improve the amount you can overclock by (why? because if the card, or some of its components, overheats then it may throttle itself and/or artifacts/corruption will show up). While cooling is only half the battle, it's a fairly easy thing to look after.
 *What brand should I buy?* The general answer is, "if you're asking this then you probably won't notice all that much." The reasoning is that serious overclockers/performance nuts will have done their research, and everybody else probably won't be pushing their hardware enough to notice any difference between brands. Now to actually answer the question, a "better" brand depends on how you define "better". Here are some scenarios:
 BFG generally sells their cards overclocked out of the box. Generally speaking it's an insignificant overclock, however people see the word "overclocked" and are willing to shell out money for it. Lucky BFG. BFG parts are generally very well made, however there is a premium involved for their parts being "overclocked"
 ASUS products are generally very well made, however they make a very wide variety of products catering to everyone from budget users (-X series) to top tier users (-TOP, EXTREME series), so grouping all ASUS products into one specific description is generally unfair. It is fair to say their products are generally very well made, however they tend to do "weird things" (i.e., power connectors on ASUS videocards aren't standard etc). As a trend, ASUS parts are usually more expensive as well -- often worth it -- but expensive nonetheless
 XFX. What sets XFX apart from the rest (other than a difficult to open product box) is that their product warranty is single-sale transferable (i.e., if you bought the card originally, you can sell the card and the next owner is entitled to the warranty as well), and their cards are usually overclocked a bit. Their pricing is pretty much in line with their performance
 eVGA. Their selling point is an obscenely comprehensive warranty. Short of blatant damage to the card, it's covered. Naturally you need to read more into this before running off and buying a random eVGA card.
 HIS/Sapphire/Powercolor/Connect3D. These companies often ship cards with some form of custom cooler, which adds to the value of the card by reducing temperatures and improving overclockability
 Other..... read up on the company and see for yourself!

 *Should I get SLI/Crossfire?* If you're asking this, the answer is "no". People who will benefit from this neat (and expensive) technological marvel have already made up their minds and bought the hardware for it. If you're asking whether you should get it or not, then you've not maximized your hardware enough to benefit from it (sort of how people who haven't quite learned to use a basic calculator shouldn't be messing with a graphing calculator quite yet). Now for a technical reason as to why one shouldn't go the multi-GPU route, consider the following scenario:

Dude is looking to buy an SLI-capable machine with a not-quite-top-of-the-line videocard, planning to buy the second videocard a few months later. Let's do a financial analysis.
*Popular SLI Motherboards ... * [Min, Avg, Max] = [125,161,205]
- MSI K8N Neo4 SLI ... *Sep 05 @ 125USD*
- DFI Lanparty UT NF4 SLI-DR ... *Sep 05 @ 165USD*
- Abit Fatal1ty AN8 SLI ... *Sep 05 @ 205USD*
- ASUS A8N-SLI Premium ... *Sep 05 @ 175USD*
- Gigabyte GA-K8N Ultra-SLI ... *Sep 05 @ 135USD*

*Popular Non-SLI Motherboards ...*  [Min, Avg, Max] = [110,118,130]
- Gigabyte GA-K8N Ultra-9 ... *Sep 05 @ 115USD*
- DFI Lanparty UT NF4 Ultra-D ... *Sep 05 @ 130USD*
- ASUS A8N-E ... *Sep 05 @ 110USD*
- Abit AN8 Ultra ... *Sep 05 @ 115USD*
- MSI K8N Neo4 Platinum ... *Sep 05 @ 120USD*

*Popular 6600GTs*
- PNY GeForce 6600GT 128MB ... *Sep 05 @ 220USD, Feb 06 @ 150USD*
- BFG GeForce 6600GT OC 128MB ... *Sep 05 @ 190USD, Feb 06 @ 200USD*
- Gigabyte GeForce 6600GT 128MB ... *Sep 05 @ 175USD, Feb 06 @ 150USD*
- ASUS GeForce 6600GT 128MB ... *Sep 05 @ 185USD, Feb 06 @ 155USD*
- XFX GeForce 6600GT 128MB ... *Sep 05 @ 145USD, Feb 06 @ 120USD*
- Chaintech GeForce 6600GT 128MB ... *Sep 05 @ 140USD, Feb 06 @ 125USD*

*Popular 6800GTs*
- BFG GeForce 6800GT OC 256MB ... *Sep 05 @ 320USD, Feb 06 @ 385USD*
- MSI GeForce 6800GT 256MB ... *Sep 05 @ 325USD, Feb 06 @ 315USD*
- Gigabyte GeForce 6800GT 256MB ... *Sep 05 @ 380USD, Feb 06 @ 380USD*
- Leadtek GeForce 6800GT 256MB ... *Sep 05 @ 300USD, Feb 06 @ 300USD*
- XFX GeForce 6800GT 256MB ... *Sep 05 @ 290USD, Feb 06 @ 300USD*
- eVGA 6800GT 256MB ... *Sep 05 @ 290USD, Feb 06 @ 325USD*

*Some Current Cards*
- Gigabyte 7800GT 256MB ... *Feb 06 @ 295USD*
- MSI GeForce 7800GT 256MB ... *Feb 06 @ 265USD*
- eVGA GeForce 7800GT 256MB ... *Feb 06 @ 275USD*
- BFG GeForce 7800GT OC 256MB ... *Feb 06 @ 315USD*
- eVGA GeForce 7800GT CO SE 256MB ... *Feb 06 @ 260USD*
- XFX GeForce 7800GTX 256MB ... *Feb 06 @ 470USD*
- MSI GeForce 7800GTX 256MB ... *Feb 06 @ 435USD*
- eVGA GeForce 7800GTX ACS3 256MB ... *Feb 06 @ 435USD*
- ASUS Radeon X1800XL 256MB ... *Feb 06 @ 355USD*
- Sapphire Radeon X1800XL 256MB ... *Feb 06 @ $340USD*
- Connect3D Radeon X1800XL 256MB ... *Feb 06 @ 320USD*
- MSI Radeon X1800XT 512MB ... *Feb 06 @ 400USD*
- Gigabyte Radeon X1800XT 512MB .. *Feb 06 @ 395USD*
- MSI Radeon X1900XT 512MB ... *Feb 06 @ 530USD*
- Powercolor Radeon X1900XT 512MB ... *Feb 06 @ 480USD*
- Connect3D Radeon X1900XT 512MB ... *Feb 06 @ 510USD*​
Ok now that we have some figures lets do some math....
 *Configuration 1: 6600GT-SLI* ... SLI Motherboard (160USD) + Sep 05 6600GT (175USD) + Feb 06 6600GT (150USD) ... 485USD
 *Configuration 2: 6800GT-SLI* ... SLI Motherboard (160USD) + Sep 05 6800GT (320USD) + Feb 06 6800GT (330USD) ... 810USD
 *Configuration 3: 6600GT -> 7800GT* ... Non-SLI Motherboard (118USD) + Sep 05 6600GT (175USD) + Feb 06 7800GT (260USD) ... 553USD
 *Configuration 4: 6800GT -> 7800GT* ... Non-SLI Motherboard (118USD) + Sep 05 6800GT (320USD) + Feb 06 7800GT (260USD) ... 698USD
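For the curious, the arithmetic behind the four configurations can be reproduced in a few lines of Python (the prices are the averages and quotes used above; the variable names are my own):

```python
# Reproducing the configuration totals above using the quoted prices (USD).
SLI_BOARD, NON_SLI_BOARD = 160, 118  # average motherboard prices

configs = {
    "6600GT-SLI":       SLI_BOARD + 175 + 150,      # Sep 05 + Feb 06 6600GT
    "6800GT-SLI":       SLI_BOARD + 320 + 330,      # Sep 05 + Feb 06 6800GT
    "6600GT -> 7800GT": NON_SLI_BOARD + 175 + 260,  # Sep 05 6600GT, Feb 06 7800GT
    "6800GT -> 7800GT": NON_SLI_BOARD + 320 + 260,  # Sep 05 6800GT, Feb 06 7800GT
}
for name, total in configs.items():
    print(f"{name}: {total}USD")
```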

*Analysis*
I've provided a whole bunch of numbers that you can use to make your own comparisons and calculations (note that for the 7800GT I chose the cheapest part ... the reasoning is, because we are not forced to pick a specific part we can simply buy the cheapest one; the rest of the prices are included for completeness' sake). But anyways, the analysis:
 Comparing Configuration 1 to Configuration 3, the latter costs an extra $85 so it would seem that the SLI route is more cost effective -- until we realize that the 7800GT will more than outright destroy a 6600GT-SLI configuration. A quick Googling of the relevant benchmarks will quickly confirm this. As a quick proof, consider *Doom3 UQ @ 1600x1200*
 Comparing Configuration 2 to Configuration 4, the latter turns out to be cheaper and you'll also see that the 7800GT will outperform 6800GT-SLI (although the margin of victory is, as expected, significantly less). Again, Google up some benchmarks for yourself but as a quick example, consider *FarCry UQ @ 1600x1200* or *FEAR @ 1600x1200*

Also note that the above non-SLI configurations do not take into consideration any resale value of the originally purchased cards. We're not done with the analysis quite yet: consider that if, in the span of a few months, the video subsystem is obsolete enough to warrant a multiGPU configuration, then one has to wonder how obsolete the memory and processing subsystems are. Granted the majority of the burden is on the GPU, but one cannot deny the impact of the RAM/CPU -- adding a second videocard to the system will no doubt improve performance, but putting all that money into the video subsystem when the CPU/RAM is the bottleneck does not seem like a wise move.

Now as a final statement, if you can afford a multiGPU configuration .. by all means, go for it, there ARE gains in both performance and image quality however for someone who is honestly asking "should I?", they will most likely be better served by the normal upgrade route.

 *AGP or PCI-Express?*
A somewhat difficult question as it depends a lot on the amount of money you have to play with, how much you want to upgrade and when you plan to move onto the next upgrade. Some considerations:
 AGP is essentially dead. Although AGP cards can still be bought, quantity and availability will be limited; due to how the economy works, the pricing for AGP cards will also be higher.
 Since AGP is effectively dead, buying an AGP card would be an investment that will not take you anywhere past the current hardware. While this is ok for people who are looking to build a basic machine, users who are looking at Category C and Category D might be better off rolling the entire upgrade into a platform change.
 The reason budget is an issue is, well, if you want to avoid investing in a dead platform, you're going to have to buy, at the least, a new motherboard (i.e., an Athlon64 system using a K8T800Pro) or, at worst, a motherboard, CPU and RAM (a Pentium4 system using a P4C800). For all intents and purposes this is essentially akin to building a whole new computer.


----------



## Praetor

*How do I check my temperatures? How do I know if the videocard is overheating?* First, make sure your videocard has a thermal sensor (lower-end models do not generate enough heat in normal operation to warrant a thermal sensor and thus do not include one). If you do have a card with a thermal sensor, a tool like *RivaTuner* or *SpeedFan* can be used to view and log the temperature of your videocard. If you are trying to determine whether or not your videocard is overheating, make sure you start the logging before you start up the game: alt-tabbing back to Windows will drop the temperature and skew your results

 *How do I test my videocard both for stability and for performance?* There are tons of benchmarking tools available to test your videocard for stability. Pick a benchmark and let it run a handful of passes; if they run cleanly without any issues you can assume your videocard is running stable. The following are but a limited few:
 *FutureMark 3DMark 06*, *FutureMark 3DMark 05*, *FutureMark 3DMark 03*. *FutureMark 3DMark 01 SE*
 *AquaMark 3*
 *Quake4 Benchmark Utility*
 *FarCry Benchmarking Utility*
 *HOC Halflife 2 Benchmark Utlity*
 *Splinter Cell Chaos Theory Benchmark Utility*
Pretty much any major game title will feature some form of benchmarking: some games to consider are FEAR, Need for Speed Most Wanted, X2-The Threat, LockOn.

 *Is my card overheating? How can I tell? How hot is too hot?* Using a tool like *RivaTuner* or *SpeedFan*, you can see your videocard's core and memory temperature. To see if your videocard is overheating, simply set either program to start logging the temperatures, run a benchmark/game for a few passes and then check the log. It's important not to alt-tab back and forth to check the temperatures because switching to 2D mode (i.e., Windows) will result in a significant drop in temperatures and thus skew results. For both current generation nVidia and ATi cards, a core temperature below 80ºC at all times is ok; even at 80ºC there is tons of leeway (over 20ºC for both ATi and nVidia cards) before the temperatures are actually dangerous to the videocard, although naturally, the lower the better
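As a rough sketch of what checking such a log boils down to (the 80ºC figure is the guideline above, the ~20ºC of extra leeway gives the hard limit, and the sample readings are made up for illustration):

```python
# Hypothetical logged core temperatures (C) sampled during a benchmark run.
logged_temps = [58, 64, 71, 77, 79, 76, 74]

GUIDELINE_C = 80    # comfort threshold discussed above
HARD_LIMIT_C = 100  # roughly 20C of leeway beyond the guideline

peak = max(logged_temps)  # judge by the hottest reading, not the average
if peak >= HARD_LIMIT_C:
    print(f"Peak {peak}C: dangerously hot, fix your cooling now")
elif peak >= GUIDELINE_C:
    print(f"Peak {peak}C: above the comfort zone but still has leeway")
else:
    print(f"Peak {peak}C: within the comfort zone")
```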

 *I want to cool down my videocard, what can I do?* Well, if you're having heat issues or you just want to cool down the videocard, almost any aftermarket cooler will do the job (although the cost of most is that they will use up an additional slot in your motherboard). Choosing a cooler here depends on the model of videocard you've got; for some popular cards:
 *Arctic Cooling AVC-NV5R3 NV Silencer Rev 3 ($29.99)* 
 *Arctic Cooling AT4 Rev2 ($29.99)* 
 *Zalman VF700-CU VGA Cooler ($29.99)*, *Zalman VF700LED CU VGA Cooler ($34.99)* 
 *Arctic Cooling AVC-AT5 Rev2 ($36.99)*
There are tons more products on the market, just have a look.

 *I've heard some people talk about unlocking pipes. How can I do this?*

 *I have a GeForce 7800GT, I want to unlock the pipes to make it more like the GeForce 7800GTX* Here is a perfect response to that question


Guru3D said:
> *Unlocking disabled Pipelines and Vertex Units*
> I can imagine that certain geeks are already looking into a way to unlock the remaining pipes and vertex unit to make this product even faster. In fact .. just grab Rivatuner and let's have a look inside the G70:
> 
> At this point I have spend no more than a couple of minutes in figuring out if the disabled Pipelines and Vertex units should be able to get unlocked, unfortionately that's just not the case though.
> 
> As you can see, the pixel pipes are configured as quads (units of four), we see 6x4 (24 pipes) of them with one quad (4 pipes) disabled. When we look at the Vertex units we see that indeed one unit is disabled. Likely if you can enable the quad or vertex unit properly you have a high chance of it being damaged anyway. It's fun to play arround with in Rivatuner though, yet after a reboot and a third try to 'strap' the driver it still did not kick in at all. Maybe in a future version of Rivatuner, who knows.
> 
> I just had a quick chat with uber guru and RivaTuner programmer Alexey, and at this time it looks like you can not unlock the pipes by patching the drivers due to a new protection. NVIDIA uses a similar protection on NV41 and new revisions of NV43 (A4 and newer). I'm afraid that they use the same stuff on G70
> And hey. If you do not have a clue what I just wrote, don't even think of trying this please.


Something very important that should be noted is the last sentence: _If you do not have a clue what I just wrote, don't even think of trying this please._ That definitely applies here. If you really want to try unlocking anyway and are looking for a guide, here is a starter guide at *Anandtech Forums*

*I want to get <aHighEndCard> My friend has <aDifferentHighEndCard> and he says I should get his because his card gets so many more fps than mine! Should I?*
So long as the two high end cards being compared are roughly equal (i.e., they are competing products), I would recommend going with the cheaper of the two. The reasoning here is that, within the context of highend cards (whether we're talking about high-end cards of the current generation or _relatively_ high-end cards -- that is, a current generation card running a three-generation-old game), you're generally going to be looking at fairly high framerates -- and 99% of people (if not more) won't be able to discern the difference between, say, 100fps and 80fps ... so why pay the price premium? (Now, if the other card is cheaper, then by all means!)

*I'm building a work/no-gaming machine and my friend told me to get a <midRangeCard> or <highEndCard>, should I?*
Absolutely, positively not! Any real-world noticeable difference between cards will essentially only be exhibited in gaming, so if there is to be no gaming involved, there is simply no point in buying a gaming card.

*Support ... Compliance ... Hardware ... Software? What gives?!*
A classic case of marketing jargon being tossed around left and right; hopefully this breaks things down:

 *Support* This means that the hardware (i.e., your video card) _physically_ supports whatever extensions are being asked of it. For example, if a box says "Supports DirectX9", the hardware complies with the hardware requirements of DirectX9. This is the term you want to see. Do note that even though this is what 'support' is _supposed to_ mean, marketing people often use English definitions rather than technical ones, and as such you may find things to be inaccurate
 *Compliant/Compatible*. When a video card states that it is _compliant with_ or _compatible with_ something (say DirectX9), all this means is that you can install DirectX9 (or whatever) on the computer and the videocard will still work. It does *not* mean that the videocard will be able to execute DirectX9 instructions (which is what marketing people want you to think)
 *Hardware and Software [Requirements]*. When a video game states that it "requires DirectX9 [to be installed]", that's literally what it means: you need to install DirectX9 in order to be able to install/play the game. When the game states that it "requires DirectX9 hardware" or "requires DirectX9 compliant hardware", that means your videocard needs to physically be able to execute DirectX9 instructions.

*Ok so what 'version' of DirectX is my videocard?*
For ATi card owners,

 *DirectX7*. 
 Classic Radeons
 '7000' series Radeons

 *DirectX8*
 '8500' series Radeons
 '9000' series Radeons
 '9100' series Radeons
 '9200' series Radeons
 '9250' series Radeons

 *DirectX9.0*
 '9500' series Radeons
 '9600' series Radeons
 '9700' series Radeons
 '9800' series Radeons
 'X200' series Radeons
 'X300' series Radeons
 'X500' series Radeons
 'X600' series Radeons
 'X7x0' series Radeons
 'X8x0' series Radeons

 *DirectX9.0c*
 'X1x00' series Radeons

For nVidia Owners,

 *DirectX7*
 'GeForce' cards
 'GeForce2' cards
 'GeForce4MX' cards

 *DirectX8*
 'GeForce3' cards
 'GeForce4Ti' cards

 *DirectX9.0*
 'GeForceFX' cards

 *DirectX9.0c*
 'GeForce 6x00' cards
 'GeForce 7x00' cards


*I've noticed that you don't make a very big deal about benchmarks? Why not?*
Benchmarks serve to provide a _general_ overview of the performance capabilities of whatever is being benchmarked; however, I've noticed that way too many people think benchmarks are the be-all, end-all of videocard performance: if they can't get a certain benchmark score then the configuration isn't good enough -- even when they really won't notice the performance difference! (i.e., sure you can score a million 3DMarks with a fancy SLI rig, but are you really going to notice the difference between 100fps and 150fps in the games you play? Probably not.) Benchmarks _are_ useful; it's just necessary to take benchmark scores with a grain of salt.

*Should I use the drivers from my videocard manufacturer or from the chip manufacturer?*
As a general principle, unless there is a specific reason not to, it's always better to use the drivers made by the chip manufacturer: more frequent updates and direct performance improvements; 'sides, who knows the chip better than the maker themselves?

*I've read in this guide as well as heard around: why does the GeForceFX get such a bad rap?*
While nVidia did a good job of providing an entire platform of chips that are DirectX9 capable, the architecture of the chips was not suited to running in DirectX9 mode: while this was not nearly as noticeable with their low end models (think of it as 'the low end chips are so slow already that any inefficiency won't really be noticeable'), their high end chips did not perform competitively against their ATi counterparts.

In fact, the GeForceFX architecture is so dismal in DirectX9 mode that it's better to treat the cards as DirectX8 hardware (and run them in DirectX8 mode where possible). Running games in DirectX8 mode generally resulted in a marked and significant performance jump (often a 50% improvement or better).

*What is pipe unlocking?*
The difference between highend and midrange cards (or even among high end models) lies in the number of pixel/vertex shader pipelines present: for example, some cards feature 16 while others feature 12. In many cases, the card with 'only 12' pipelines will still _physically_ have 16 pipelines, with one of the pipeline-quads disabled -- and it may be possible to unlock them for a free performance boost.

*How do I see what version of drivers I've got?*

 Start --> Run
 Type *dxdiag*
 Go to the *Display* tab
 See the top-right corner where it says "Version"


----------



## Praetor

Section 05 - When Stuff Goes Wrong
*I bought a new videocard, plugged it in, computer turns on, fans spin but I got nothin on the monitor*
- Is the card firmly plugged in all the way?
- Does the card require additional power connectors? Are those connectors plugged in? If you're using molex->PCIE convertor... is the convertor plugged into the power supply? Does your PSU provide enough power on the 12V line for the videocard?
- Tried the other VGA/DVI connector? Is the monitor confirmed to be working?
- Does the fan on the videocard spin up? How about any diagnostic lights on the motherboard? Do you get any beeping?

*My computer shuts down/restarts (or slows down) when I'm playing games .. what gives? I know my CPU isn't overheating!*
Try logging your video card's temperatures. See the *VFAQ* for a link dealing with this


----------



## Praetor

Section 06 - The Encyclopedia
*GPU/VPU*
An abbreviation for _graphics processing unit_ or _visual processing unit_, this term refers to the chip that provides video functionality to the computer as a whole. This chip can be present on a discrete card or it can be embedded into the platform as an integrated solution. To avoid confusion, the rest of this guide will refer to this chip as the GPU.

*Video Card*
Technically speaking, this term refers to a physical expansion card that plugs into the motherboard and provides video capabilities to the system. Used more loosely, the term 'video card' generally refers to the GPU present in the system, whether it is on a discrete expansion card or not. To avoid confusion, whenever the term videocard is used in this guide, unless otherwise stated, it will refer to a discrete expansion card.

*Integrated Graphics/Onboard Video*
In order to support a discrete videocard, a computer must have existing expansion slots: providing these expansion slots, from the manufacturer's perspective, has two impacts: [1] it increases the cost of the motherboard and [2] it increases the cost of the system as a whole (since the system will then have to feature a videocard). To cut these costs, computer manufacturers resort to integrating the video controller directly into the motherboard: the end user still gets a video display, and the manufacturer cuts expenses. From a consumer perspective, an integrated video solution generally means less expandability and less performance as a whole (since the video memory will draw from the main system memory, thus reducing the amount of memory available to the rest of the system); however, there are sometimes reasons to opt for integrated video:

 Low end baseline machines: ideal for people who just need a basic computer to type documents, check email, etc
 HTPCs (home theater PCs): watching movies doesn't require an exhaustive amount of graphical processing power .. a simple onboard video solution is often sufficient
 Terminal/office type machines: secretarial type computers do not need massive processing power and nor do public access machines (i.e., library)

*Interfaces: PCI, AGP, PCIE*
The performance of a videocard is limited by the interface it is using: some interfaces provide more bandwidth than others and, as such, make a better choice for videocards (especially gaming cards, which are heavily bandwidth dependent). A quick breakdown of interfaces, past and present:

 *PCI*. An abbreviation for _peripheral component interconnect_, this is a 32bit interface (there are 64bit PCI slots available however they are generally used in server environments). Even as videocards moved to faster and better interfaces, PCI videocards still have a very useful purpose in troubleshooting.

The PCI interface operates at 33Mhz and thus offers 133MB/s of bandwidth (33MHz x 32bits ÷ 8bits/byte = 133MB/s). There are numerous revisions and spin-offs of the PCI interface however such innovations are generally not videocard oriented and as such wont be covered here.
 *AGP*. An abbreviation for _accelerated graphics port_, this 32bit interface was developed in order to better facilitate faster and more bandwidth intensive videocards. AGP operates at 66MHz and as such offers a baseline of 266MB/s of bandwidth (66MHz x 32bits ÷ 8bits/byte = 266MB/s). There does exist a 64bit specification for workstation class cards however that's outside of the scope of this 101.

For all intents and purposes, AGP is a dedicated PCI connection just for videocards: the advantage of AGP is that it eliminates the latencies in accessing the system memory and processor: whereas PCI devices need to negotiate their way to these resources, an AGP device has direct access to them. As a sidenote, many times people will refer to AGP as a type of BUS: this is, technically, incorrect, as a BUS is supposed to facilitate multiple devices and AGP only facilitates a single video device. This is, of course, a very trivial technicality and, in general conversation, overlooked.

Different AGP specifications provide varying amounts of bandwidth:
 *AGP 1X*. 266MB/s (66MHz x 32bits ÷ 8bits/byte)
 *AGP 2X*. 533MB/s (66MHz x 32bits ÷ 8bits/byte x 2, double-pumped)
 *AGP 4X*. 1066MB/s (66MHz x 32bits ÷ 8bits/byte x 4, quad-pumped)
 *AGP 8X*. 2133MB/s (66MHz x 32bits ÷ 8bits/byte x 8, oct-pumped)

Over the years, the AGP specification has been refined and newer revisions have been introduced, with increased performance at each stage:
 *AGP 1.0*. This specification supported 1X and 2X and used an AGP3.3v physical connector
 *AGP 2.0*. This specification supported 1X, 2X and 4X and used an AGP1.5v, AGP3.3v or AGP Universal physical connector
 *AGP Pro 1.x*. The Pro moniker designates that these videocards are destined for highend workstations and are often employed in drafting/animation/studio environments. This specification supports 1X, 2X and 4X and used an AGP Pro3.3v, AGP Pro1.5v or AGP Pro Universal physical connector
 *AGP 3.0*. This specification supports 1X, 2X, 4X and 8X and uses an AGP 1.5v physical connector
 *AGP Pro 3.0*. This specification is an expansion of AGP Pro 1.x and supports 1X, 2X, 4X and 8X; it uses an AGP Pro 1.5v physical connector

Something that may be confusing is the various voltage values being thrown around, along with various speed grades and the possibility of incompatibility. To clarify, there are two types of voltages used with AGP videocards: _key_ and _signalling_.
 The key-voltages are associated with the physical connector (i.e., in the above list, "AGP 1.5v physical connector" indicates to us that the key-voltage is 1.5v, etc); the only exception is "AGP Universal", which has no key-voltage.
 The signal voltage is the voltage that is associated with the speed ratings of the card:
 AGP8X uses 0.8v
 AGP4X uses 1.5v or 0.8v
 AGP2X and AGP1X use 3.3v or 1.5v


With the somewhat overwhelming set of voltages, differing types of voltage, keys, Pro vs non-Pro and various specifications, things can get a bit confusing: as such, there is a fallback: _all the devices are physically made such that you can only insert them into compatible sockets (with reasonable force)._
 *PCIE*. An abbreviation for _PCI-Express_, this specification (formerly known as 3GIO) is a very high-speed serialized interface which can be somewhat 'parallelized' by grouping 'lanes' of PCIE (or PCIEx1) together. Each lane provides 250MB/s, with video devices having access to either 8 lanes (2GB/s) or 16 lanes (4GB/s) (more on this later in *VFAQ*)
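As a quick sanity check on the figures above, the bandwidth arithmetic can be sketched in a few lines (a hypothetical helper, using the nominal 33/66MHz clocks, so the results land just under the rounded 133/266MB/s figures, which use the exact 33.33/66.66MHz clocks):

```python
# Peak theoretical bandwidth: clock (MHz) x bus width (bits) / 8 bits-per-byte,
# multiplied by the pumping factor (2X/4X/8X for the faster AGP modes).
def bus_bandwidth_mb(clock_mhz, width_bits, pump=1):
    """Return peak theoretical bandwidth in MB/s."""
    return clock_mhz * width_bits / 8 * pump

print(bus_bandwidth_mb(33, 32))      # PCI     -> 132.0 MB/s
print(bus_bandwidth_mb(66, 32))      # AGP 1X  -> 264.0 MB/s
print(bus_bandwidth_mb(66, 32, 2))   # AGP 2X  -> 528.0 MB/s
print(bus_bandwidth_mb(66, 32, 8))   # AGP 8X  -> 2112.0 MB/s
print(16 * 250)                      # PCIEx16: 16 lanes x 250MB/s -> 4000 MB/s
```

Note that PCIE is a serial interface, so its bandwidth comes from adding lanes rather than from a clock-times-bus-width product; hence the separate last line.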

*Shader Models: Pixel Shaders, Vertex Shaders*
Firstly, to preemptively clear up a very common misconception: a shader is *not* a hardware 'thing'; rather, a shader is simply _code_. It is a specific type of code that affects the pixels or vertices of a 3d object. To reiterate: video cards, in the context of pixel/vertex shaders, do not "come with shaders".

As just mentioned, a shader is simply a block of code that allows a game developer to add geometric and/or lighting transformations to a 3d object before it is finally rendered and seen by the end-user. Vertex shaders are available to augment the T&L features (consequently, vertex shader programs are run roughly when T&L effects are applied) as well as to perform geometric deformations. Pixel shaders run after all the geometry has been finalized and generally concern themselves with texture, lighting and other surface-related effects.

Over the years, several revisions of the shader models have evolved

 *Shader Model 1.x* In DirectX7 class games/hardware, game developers got their "flexibility" by means of a series of toggle-able special effects. DirectX8.x changed things slightly by allowing the game developer to do whatever they wanted between an input and an output point, so long as the hardware supported the code in question (i.e., there were enough hardware registers present to actually execute the desired shader op).
 *Shader Model 2.x* Introduced with DirectX9, the SM2.0 and later SM2.x feature set allows game developers to do pretty much everything that gamers have become used to over the last few years: SM2.0 increased the minimum number of registers required (thus increasing the possible length of shader programs and, indirectly, their complexity). The other major change SM2.0 brought was looping: no longer did game developers have to write massive procedural programs (which were limited by the low register counts), and some basic [static] branching was added as well. The later revisions (i.e., 2.0b) were, for all intents and purposes, SM3.0 without the extended maximum shader length (which hasn't yet been exploited by game developers anyway)
 *Shader Model 3.0*. The current shader model revision, SM3.0 is essentially a "flexibility extension" of SM2.0b. The major additions include dynamic branching, increased color precision requirements and texture lookups
 *Shader Model 4.0* At this point it seems that SM4.0 is looking to merge the two shader blocks into a single coherent block (so that shader enhancements will apply to both pixel and vertex shaders) as well as integer vs FP memory addressing adjustments

*Shader Pipelines*
A shader pipeline is essentially a dedicated hardware path for a shader program to run on: having more such paths allows more programs to run simultaneously (and thus there is an overall performance increase as the number of pipelines is increased). For the most part, the effects seen in games affect the scene after all geometric data has been processed (i.e., after the vertex shaders have done their work), so there is generally more benefit in increasing the number of pixel processing pipelines as cards become more advanced (this also explains why there are almost always more pixel pipelines than vertex pipelines).

*SLI/Crossfire and other Multigpu configurations*
3dfx: where it all started
SLI, originally introduced by 3dfx with their Voodoo2 cards, stood for _scan line interleave_ and it was just that: two video processors would work together to render a frame: one GPU would process the odd lines and the other would render the even ones. 3dfx implemented SLI both with two cards connected by a dongle (which suffered from timing issues since the scene reconstruction occurred after the video had been sent to the RAMDACs) and with two GPUs on the same board. For the most part, when 3dfx disappeared, SLI somewhat disappeared with it.

nVidia
More recently, nVidia has resurrected the idea of multi-GPU rendering (they weren't the only bunch to do so, just the only one really successful at it). As things currently stand, SLI stands for _scalable link interface_. The principle here is the same: take two identical and compatible video cards (that is, same make and model), link them up and have the two cards jointly render a scene. There is a theoretical performance increase of 100%; however, it will never hit that mark due to load-balancing overhead.

 A few intrepid manufacturers like Gigabyte and ASUS have gone a step further and put two GPUs and two sets of memory together on a single card (i.e., single card SLI). Also, with the latest driver revisions (ok, for some time now), SLI no longer requires that the cards be exactly identical: in fact, the individual cards can be run asynchronously of each other (also, the PEG link is no longer required per se; however, not using it comes at a roughly 5% performance cost).

ATi's competing technology, Crossfire, is essentially the same principle; however, ATi's Crossfire is significantly more 'open'. Whereas nVidia's SLI requires an NF4SLI chipset to be paired with two identical cards for SLI to be enabled, ATi's solution works across multiple chipsets (RX200 and i955X) and two identical cards are not required. In short, Crossfire is slightly more flexible and lenient as far as product selection goes, but at the end of the day, as far as the consumer is concerned, it's essentially the same deal. As noted above, SLI has become more flexible as of late; however, it's interesting to note that Crossfire was developed with this flexibility in mind.

*RingBus*
This is ATi's fancy new 512-bit memory controller architecture, designed with very high clockspeeds and futureproofing in mind. The quick and dirty explanation of how it works: there are two 256-bit memory paths that data can traverse, with four stop-points (at each of those points, the controller can access two memory modules). This means that moving data between the memory, the memory controller and the cache is done extremely efficiently -- which may be the explanation behind why the Radeon X1800 cards can compete so well against their GeForce7800 counterparts even though they suffer a pipeline deficit.




*HyperMemory & TurboCache*
The easiest way to cut costs on budget cards is to use less 'stuff' (i.e., fewer pipelines, a smaller memory bus, less memory etc). This works, however, only up to a point, after which the cost savings no longer outweigh the performance hits. One method of further reducing costs without dropping the performance significantly is to leech off the system memory (thus reducing the amount of memory that is physically included on the card).

For an AGP card this would result in a massive performance hit, since the bandwidth is optimized for one-way transfers; PCI Express, however, is a full-bandwidth bi-directional interface, so this [leeching off the system memory] is now possible with only minor architecture changes. To the end user, the performance of a card with HyperMemory/TurboCache is comparable to (although less than) the same card with the same amount of actual physical memory present. Generally speaking, it's advisable to avoid purchasing such cards, since comparable cards (which don't leech off the system memory) are available in the same price bracket.

*Video Memory: [G]DDR, [G]DDR2, [G]DDR3?*
To clarify a few things about naming conventions and such within the context of video memory,

 DDR=DDR1=GDDR=GDDR1
 DDR2 = DDR-II = GDDR2 = GDDR-II
 DDR3 = DDR-III = GDDR3
There seems to be something of a consensus to use DDR, DDR-II and GDDR3 to denote the different types of memory and as such we'll use them here. As a side note, the reason they tack on the 'G' is to denote that we are talking about _graphics_ memory rather than normal system memory (the reason for the distinction is because video and system memory are quite different and the extra letter allows us to avoid confusion)

As far as performance goes, there's no fundamental difference between the different types of memory: they are all "DDR", meaning they are all double-data-rate (for each clock pulse, data is sent on both the rising and falling portions of the pulse, as opposed to older types of memory which only sent data once per pulse). The difference is that DDR-II and GDDR3 are capable of higher clock speeds, with GDDR3 using a lower signalling voltage and thus not suffering the heat issues encountered with DDR-II. Of the three, as expected, GDDR3 is the most advanced and, for those concerned with overclocking and squeezing the most performance from the card, will be the memory type of choice.
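Since all three memory types are double-data-rate, the bandwidth arithmetic is identical across them; a minimal sketch (the 500MHz and bus-width figures below are hypothetical examples, not taken from any particular card):

```python
def memory_bandwidth_gb(mem_clock_mhz, bus_width_bits):
    """Peak memory bandwidth in GB/s: DDR transfers data twice per clock."""
    transfers_per_sec = mem_clock_mhz * 1e6 * 2        # double data rate
    return transfers_per_sec * bus_width_bits / 8 / 1e9

print(memory_bandwidth_gb(500, 256))  # 500MHz DDR on a 256-bit bus -> 32.0 GB/s
print(memory_bandwidth_gb(500, 128))  # same clock, 128-bit bus     -> 16.0 GB/s
```

This is also why a higher-clocked card on a narrow bus can still end up bandwidth-starved next to a slower card on a wide bus.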

*Refresh Rate/Response Time*
These are measures of the performance of a display device: the refresh rate measures the number of times per second a *CRT* redraws the screen, with higher values being superior. Response time is the equivalent measure of performance for *LCD* displays, with lower values being superior. Although not technically accurate, you can 'translate' a response time into a refresh rate using the following conversion:
Approximate Refresh Rate Equivalent = 1000 ÷ Response Time​
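The conversion above is trivial to apply; for example:

```python
def approx_refresh_hz(response_time_ms):
    """Rough CRT-refresh 'equivalent' of an LCD response time (see caveat above)."""
    return 1000 / response_time_ms

print(approx_refresh_hz(16))  # a 16ms panel -> 62.5 'Hz'
print(approx_refresh_hz(8))   # an 8ms panel -> 125.0 'Hz'
```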

*DSub15, DVI, RCA, Coax, SVideo*
*DSub15* and *DVI* are the two common connectors found on videocards which allow users to connect CRTs and LCDs to them. High end video cards often feature one or more DVI connectors; for users with CRTs that don't interface with DVI, a *convertor* needs to be used.

For videocards featuring videoIn and videoOut connectors, *RCA* or *S-Video* may be used. Both RCA and S-Video are established media connectivity formats, with S-Video being the superior of the two.

*Vsync*
An option present in many games, vsync or _vertical synchronization_ means that the frames being drawn on the screen will coincide with the actual refresh of the display device. Disabling vsync generally allows for higher framerates; however, since frames are being generated regardless of whether or not the user sees them, artifacts such as tearing can sometimes appear. Enabling vsync will limit these artifacts and flickering.

*VIVO*
An abbreviation for Video-In Video-Out, this simply means that the card supports both input and output to/from external video sources. Most videocards will feature some form of VideoOut, which allows you to output what you see to a TV or a VCR etc. VideoIn is the exact reverse: it allows you to capture video coming in from a TV, VCR or other similar device. VideoOut is often a standard feature on videocards and as such won't add much to the cost of the device; VideoIn, however, will add significantly to the cost and, as such, if you're on a budget, make sure you actually want these features if you select a card with VIVO.

*RAMDAC*
An abbreviation for RAM Digital-Analog-Convertor, these devices convert the internal digital signal to an analog form that analog monitors can interpret (digital displays do not need this extra processing). The performance of a RAMDAC is given in MHz with higher ratings allowing for higher resolution + refresh rate combinations. For the most part, avoid buying cards with RAMDACs rated less than 350MHz.
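As a rough sketch of why the 350MHz figure matters, a common rule of thumb estimates the maximum refresh rate a RAMDAC can drive at a given resolution (the 1.32 blanking-overhead factor below is an assumption, a rule of thumb rather than something from any spec):

```python
def max_refresh_hz(ramdac_mhz, h_res, v_res, overhead=1.32):
    """Approximate the max refresh rate (Hz) a RAMDAC can drive at h_res x v_res.
    The 1.32 blanking-overhead factor is a rule-of-thumb assumption."""
    return ramdac_mhz * 1e6 / (h_res * v_res * overhead)

print(round(max_refresh_hz(350, 1600, 1200)))  # 350MHz RAMDAC at 1600x1200 -> ~138 Hz
print(round(max_refresh_hz(350, 2048, 1536)))  # 350MHz RAMDAC at 2048x1536 -> ~84 Hz
```

In other words, a 350MHz RAMDAC comfortably drives high resolutions at flicker-free refresh rates, which is why slower RAMDACs are best avoided.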


----------



## Praetor

*API: DirectX, OpenGL*
APIs or _application programming interfaces_ are essentially just that: a code interface that a developer can make use of so that he does not have to code every tiny little thing over and over again (i.e., to avoid reinventing the wheel). In the context of 3D game engines, two major API packages exist, DirectX and OpenGL

 *DirectX* A Microsoft developed alternative to OpenGL (because, at the time, OpenGL required extensive hardware resources), the DirectX package has expanded to encompass all major aspects of modern consumer computing
 *DirectDraw*. For dealing with 2D scenes
 *Direct3D*. For developing 3D scenes and the related geometric and lighting aspects
 *DirectInput*. For interfacing with input devices
 *DirectPlay*. For network intercommunication with games
 *DirectSound*. For playing PCM based sounds; also supports positional audio
 *DirectMusic*. For playing MIDI based audio

DirectX has numerous revisions over the years, a quick summary:
 *DirectX 1.0, 4.02.0095*.  Released in September 1995 (after the launch of Windows95 but before Windows95 OSR2), this was the very first incarnation of DirectX and featured DDraw, DInput, DPlay and DSound
 *DirectX 2.0, 4.03.1095*. Featured with Windows95 OSR2 and Windows NT4.0, the DirectX 2.0 release was very small.
 *DirectX 2.0a, 4.03.1096*. Released August 1996, this featured with Windows95 OSR2 and Windows NT4.0
 *DirectX 3.0, 4.04.0068*. Released October 1996, expanded on DPlay, DSound (added positional sound capabilities via DSound3D) and a major step forward with DInput (added the joystick control panel applet)
 *DirectX 3.0a, 4.04.0069*. Released March 1997, this release addressed a few bugs revolving for MMX machines
 *DirectX 3.0b, 4.04.0070*. Released March 1997, this release addressed some minor issues with international versions of Windows95
 *DirectX 4.0*. This was an internal release and was never made available to the public
 *DirectX 5.0, 4.05.00.0155*. Released July 1997, DirectX5.0 added force feedback capabilities as well as improved support for MMX systems. From a graphical perspective, the step forward here is that DX5.0 added major support for z-buffers; at the time GPUs were essentially high speed rasterizers with z-buffers (this was the era where fill-rate was a meaningful benchmark)
 *DirectX 5.0, 4.05.01.1721*. Shipping with Windows98, this was a tweaked version of the previous release (which really was a beta for NT5.0)
 *DirectX 5.2, 4.05.01.1998*. Released in July 1997, this release fixed several security holes in DPlay and made yet some more minor changes
 *DirectX 6.0, 4.06.02.0436*. Released in December 1998, DirectX6.0 added support for stencil buffers, texture compression, environment-mapped bump mapping and software T&L. Furthermore, DirectX6.0 improved compatibility with firewalls in DPlay as well as performance in both DDraw and DSound.
 *DirectX 6.1, 4.07.00.0700*. Released in February 1999, this release improved rasterizer performance as well as improved AGP performance. DirectMusic was also added to the DirectX package at this time.
 *DirectX 7.0, 4.07.00.0700*. Released in September 1999, DirectX7 was a revolutionary step forward in computer graphics and introduced hardware T&L, support for hardware texture compression and interfacing with EAX for positional sound. With DirectX7, Microsoft set out a standardized specification for hardware platforms (prior to this, it was every manufacturer to their own).
 *DirectX 7.0a, 4.07.00.0716*. Released in December 1999, this release addressed compatibility and performance issues with force feedback devices
 *DirectX 8.0, 4.08.00.0400*. Released November 2000, DirectX 8.0 consolidated the DDraw and D3D interfaces, as well as consolidating DSound and DMusic. DirectX8.0, building on the concrete framework established by DirectX7, added support for Shader Models 1.0, 1.1 and 1.2. 
 *DirectX 8.1, 4.08.01.0810*, Launched November 2001, DirectX8.1 added support for Shader Model 1.3 and 1.4 as well as fixing some issues with DirectPlay. 
 *DirectX 9.0, 4.09.0000.0900*. Introduced December 2002, this release added support for Shader Model 2.0.
 *DirectX 9.0a, 4.09.0000.0901*. Launched March 2003, this release featured fixes and improvements with D3D and DPlay.
 *DirectX 9.0b, 4.09.0000.0902*. Released August 2003, this release featured improvements to performance across the entire package as well as security fixes (mostly in DPlay).
 *DirectX 9.0c, 4.09.0000.0904*. Released August 2004, this release added support for Shader Model 3.0 as well as adding security fixes to the previous installation.
 *DirectX 9.0l, 4.09.0000.0905*. As yet unreleased, this is suspected to be a Windows Vista release and will feature support for Shader Model 4.0. As it stands now, DirectX 9.0l adds an additional D3D interface.

 *OpenGL*. An abbreviation for the _Open Graphics Library_, OpenGL is platform independent and, for the context of this guide, competes with DirectX in the gaming industry; it also has applications in heavy-duty graphics. OpenGL is only concerned with visual output and as such does not have any mechanisms to deal with input, network, sound etc.

While not as revision-crazy as DirectX, there have been several releases to OpenGL
 *OpenGL 1.0*. Released April 1992, this is the baseline release of OpenGL
 *OpenGL 1.1*. Released in March 1997, this release adds vertex arrays, polygon offset, more support for different texture image formats, as well as minor changes to the baseline specification
 *OpenGL 1.2x*. Launched April 1999, this release adds support for 3D textures, texture LOD (level of detail) control, BGRA pixel format support, normal rescaling etc.
 *OpenGL 1.3*. Released in August 2001, this adds support for compressed textures, cube mapping, multisampling, multitexturing (in a single pass) as well as numerous texture and environmental transformations.
 *OpenGL 1.4*. Released in July 2002, this adds suport for mipmap generation, depth and shadow textures, normal mapping as well as defining a vertex shading framework.
 *OpenGL 1.5*. Launched in October 2003, this introduces the GLSL (GL Shader Language), removes the restriction of power-of-two textures for more efficient memory usage and improves shadow functions
 *OpenGL 2.0*. Released September 2004, this release adds support for MRT (multiple render targets), two sided stencils, improved GLSL and improved shader performance with particle systems


*Pixels and Texels*
Pixel is an abbreviation for _picture element_ and is the smallest unit of a digital display; each such dot is assigned a color and brightness value and a composite of such dots creates what we see as an image. The greater the number of pixels in a scene/image, the higher the resolution (i.e., higher quality).

In a similar fashion, texel is an abbreviation for _texture element_: where a pixel is the base unit of a 2D image, a texel is the base unit of a texture that gets mapped onto a 3D surface, defining not just the color/brightness but also contributing to the surface characteristics.

More commonly advertised with older (pre-2000) videocards, the pixel-fillrate was a useful benchmark which allowed potential customers to discern between good and bad videocards: with more modern videocards where the emphasis is more on 3D scenes and much more advanced transformations, the pixel fillrate is no longer a significant performance indicator.
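For reference, the classic fillrate figure was computed simply as core clock times the number of pixel pipelines; a sketch (the figures below are illustrative, not a specific product's specs):

```python
def pixel_fillrate_mp(core_clock_mhz, pixel_pipelines):
    """Theoretical pixel fillrate in Mpixels/s: one pixel per pipeline per clock."""
    return core_clock_mhz * pixel_pipelines

print(pixel_fillrate_mp(430, 24))  # e.g. a 24-pipe card at 430MHz -> 10320 Mpixels/s
```

Note that this says nothing about shader throughput, which is exactly why the figure stopped being a meaningful performance indicator.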

*Z-Buffer*
In a 3D scene where objects may appear in front of other objects, there is no point in wasting processing resources on rendering an object that will never be seen by the user. Whether an object (or part of one) is seen is determined by depth testing against the z-buffer: for each pixel, the buffer stores the depth of the nearest surface drawn so far, and anything that falls behind it is discarded. Z-buffers are specified by their precision: 8bit, 16bit, or 24bit
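The depth test itself is trivial, which is why hardware does it so fast. A minimal Python sketch (a 4x4 "screen" with hypothetical color names, not how any real GPU stores its buffers):

```python
# Minimal sketch of a z-buffer depth test: each pixel keeps the depth
# of the nearest surface drawn so far; farther fragments are discarded.
W, H = 4, 4
zbuffer = [[float("inf")] * W for _ in range(H)]
framebuffer = [[None] * W for _ in range(H)]

def draw_fragment(x, y, depth, color):
    """Write the fragment only if it is closer than what is already there."""
    if depth < zbuffer[y][x]:
        zbuffer[y][x] = depth
        framebuffer[y][x] = color

draw_fragment(1, 1, depth=5.0, color="red")    # drawn
draw_fragment(1, 1, depth=9.0, color="blue")   # behind the red pixel: discarded
draw_fragment(1, 1, depth=2.0, color="green")  # in front: overwrites
print(framebuffer[1][1])  # green
```

The precision figures (16bit, 24bit etc.) simply describe how finely those depth values are quantized; too little precision and two nearby surfaces "z-fight" over the same pixel.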

*T&L, Transform & Lighting*
T&L was a revolutionary step forward for computer graphics where the GPU took over the process of performing all the 3D calculations (geometric and lighting) that used to be performed by the CPU (thus allowing the CPU to concentrate on other tasks). Generally speaking, T&L refers to _Hardware T&L_: software implementations do exist and are used where hardware acceleration for T&L does not exist, but at the cost of a significantly increased burden on the CPU. Hardware based on the nVidia GeForce256 or ATi Radeon class cards or better supports T&L
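The "transform" half of T&L boils down to multiplying every vertex by a 4x4 matrix, millions of times per second. A rough Python sketch of that one operation (illustrative only; a T&L unit does this, plus lighting, in dedicated hardware):

```python
# Sketch of the "transform" in T&L: multiply each vertex by a 4x4 matrix.
def transform(matrix, vertex):
    """Multiply a 4x4 row-major matrix by a homogeneous (x, y, z, w) vertex."""
    return tuple(sum(matrix[row][i] * vertex[i] for i in range(4))
                 for row in range(4))

# A translation matrix that moves geometry +2 units along x
translate_x = [
    [1, 0, 0, 2],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
]
print(transform(translate_x, (1, 1, 1, 1)))  # (3, 1, 1, 1)
```

Doing this on the CPU for every vertex of every model each frame is exactly the burden that hardware T&L lifted.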

*Static/Dynamic Lights & Shadows*
Light sources in older games were stationary (even when you knocked over what should have been the light source, you could still see the room) in a somewhat "standard" light level. With dynamic lights, light sources can move (i.e. a swinging spotlight will constantly change the lighting in a room). Hand in hand with lighting are shadows: static shadows are just that; you create the same shadow regardless of the lighting intensity coverage: with dynamic shadows, physics calculations are done to determine what shape to draw a shadow as well as how dark to make it.

*Meshes, Models, Polygons, Skins, Bones, etc*
In a modern 3d game, most or all the "objects" that you can see and interact with are represented in three dimensions and in order to do so, require some form of representation.

 _Mesh_. A mesh, in simple terms, is a collection of vertices in 3-space that are interconnected in specific ways to represent the desired object. As a simple example, consider the eight points of a cube: those vertices, along with the lines that interconnect them, are a mesh for a cube object
 _Polygon_. A polygon is just that: any closed loop of vertices (the minimum number of vertices that can comprise a polygon is three and consequently, the simplest polygon is a triangle). As you increase the number of polygons in a mesh, you increase the complexity (and detail) of that mesh
 _Triangles_. Every polygon, no matter how many vertices it has, can be constructed from triangles and as such, the number of triangles comprising a mesh is sometimes used as an indication of its complexity. For older generations of videocards, the number of triangles/sec that a GPU could render was used as a measure of performance
 _Skin_. A mesh by itself is a very boring thing to look at -- just imagine running around in a first person shooter where all the bad guys are simply collections of dots and lines that move in some barely discernible pattern! By adding a skin to the model, the mesh becomes much more lifelike -- think of a skin as the fabric of a tent: the tentframe by itself (which represents the mesh) isn't very useful without the covering
 _Bones & Skeletons_. Suppose you are creating a mesh for an object and you want specific portions of that object to be interconnected and affected by other portions (i.e., head-neck). The bone structure behind the model tells the physics engine how to deal with that object when other objects collide with it etc. Entire groups of bones are referred to as skeletons
 _Model_. This term is sometimes used to refer to the mesh but sometimes refers to the entire final 3d object (i.e., mesh+skins+bones)
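The cube example above, written out as data, shows how little a mesh really is: vertices plus connectivity. A Python sketch (the exact layout varies by engine; this is just one plausible representation):

```python
# The cube from the text as mesh data: 8 vertices, and each of the
# 6 faces split into two triangles (the simplest polygon).
cube_vertices = [
    (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),  # back face
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),  # front face
]

# Index triples into cube_vertices; 6 faces x 2 triangles = 12 triangles
cube_triangles = [
    (0, 1, 2), (0, 2, 3),  # back
    (4, 6, 5), (4, 7, 6),  # front
    (0, 4, 5), (0, 5, 1),  # bottom
    (3, 2, 6), (3, 6, 7),  # top
    (0, 3, 7), (0, 7, 4),  # left
    (1, 5, 6), (1, 6, 2),  # right
]
print(len(cube_vertices), len(cube_triangles))  # 8 12
```

Those 12 triangles are what a "triangles/sec" figure is counting; skins, bones and so on are layered on top of exactly this kind of structure.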

*Soft Shadows*
In videogames we are often accustomed to seeing shadows being very hard-edged and concrete however in reality, the edge regions of shadows are actually blurred and softened. This is what soft-shadowing attempts to emulate albeit at a significant performance hit. For the most part (at the time of this writing), soft-shadowing techniques have just begun to emerge into the market and often, the image quality enhancement does not offset the performance drop.

*High Dynamic Range (HDR) & Blooming*
In real life, as we transition from a dark area to a very bright one, the light 'feels' brighter than it really is and consequently, what we can see (due to the brightness) is different from what we might have been able to see without such a dramatic transition. HDR rendering attempts to capture this by computing lighting across a much wider range of brightness values than the display can actually show, then mapping that range down to the screen.

Blooming refers to the spill-over effect (which is very similar to HDR and was used as a less intensive means of emulating proper HDR) where light from a bright object spills over to the surrounding space. As GPUs and video engines have become more and more advanced, HDR has (and will continue) to become more prevalent. For users not using high end hardware, it's often best to disable this option as it will result in a significant performance hit.

As a side note, nVidia cards often market something called HPDR, for High Precision Dynamic Range, which just means they use 64bit color rather than the customary 32bit color ... for the most part this is just a bragging point rather than anything significant in terms of final image quality (as perceivable by the human eye) or performance, although the potential for image quality improvement is definitely there due to the improved color precision.

Naturally, nothing quite explains all this fancy wording better than a picture (from FarCry):







*Antialiasing*
When a line is drawn at a non-90 degree angle, there will be "jaggies" (i.e., curves look blocky, slanted lines look chunky etc). This "jaggy" phenomenon is known as _aliasing_ and it frankly makes games look poor. A method for overcoming this is known as antialiasing (AA): by having a look at the data near the point being antialiased, the hardware/engine can smooth out the jaggies so that image quality is improved. Again, nothing like a picture to explain:





Recently ATi and nVidia have developed funky techniques for performing AA on stacked transparent objects (ATi calls theirs _Adaptive AA_, nVidia calls theirs _Transparency AA_). Again, a quick picture explains tons:



Often associated with AA is the term FSAA, which stands for Fullscreen AA: the videocard internally renders the scene at a higher resolution than displayed, performs a quick AA operation on that, and resizes the image down to the displayed resolution. This process gives a better quality/performance ratio than performing a stronger AA operation at the final resolution
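The render-big-then-shrink idea behind FSAA is easy to see in miniature. A rough Python sketch using grayscale values (real hardware uses far smarter sample patterns than this plain 2x2 box average):

```python
# Sketch of the FSAA/supersampling idea: render at double resolution,
# then average each 2x2 block down to one displayed pixel.
def downsample_2x(image):
    """Average 2x2 blocks of a grayscale image (list of lists) to half size."""
    h, w = len(image) // 2, len(image[0]) // 2
    return [[(image[2*y][2*x] + image[2*y][2*x+1] +
              image[2*y+1][2*x] + image[2*y+1][2*x+1]) / 4
             for x in range(w)] for y in range(h)]

# A hard black/white edge rendered at 4x4 ...
hi_res = [
    [0, 0, 255, 255],
    [0, 0, 255, 255],
    [0, 255, 255, 255],
    [0, 255, 255, 255],
]
# ... becomes a 2x2 image with an intermediate gray softening the edge
print(downsample_2x(hi_res))  # [[0.0, 255.0], [127.5, 255.0]]
```

The 127.5 is the "smoothed" pixel: the jagged step has been blended instead of snapping from black to white.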

*Texture Filtering: Bilinear, Trilinear, Anisotropic*
All of these texture filtering techniques attempt to deal with a type of artifact that occurs when the camera is far away from a textured surface and/or at a sharp angle to it. Bilinear and trilinear are both isotropic techniques, meaning that their texture interpolation is square based. Anisotropic filtering uses a non-square sampling block, which allows the filtering method to ensure that the filtering process itself does not introduce more blurring. As before, a picture explains this all very clearly


Like AA, anisotropic filtering (AF) is very taxing on system resources and shouldn't be enabled unless you're running decently high-end hardware. Even though isotropic filtering (i.e., bilinear and trilinear) can present problems (more so bilinear), for the most part you should have some form of filtering (preferably trilinear), and since this filtering costs almost nothing to implement, there is little reason not to.
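To make the "square based interpolation" concrete, here is a rough Python sketch of a single bilinear sample: the value between four texels is a weighted average based on the fractional position (trilinear additionally blends between two mipmap levels, and AF samples a skewed footprint instead of this square):

```python
# Sketch of bilinear filtering: a sample landing between four texels
# is a weighted average of them, based on its fractional position.
def bilinear(texture, u, v):
    """Sample a 2D grayscale texture (list of lists) at fractional (u, v)."""
    x0, y0 = int(u), int(v)
    fx, fy = u - x0, v - y0
    top = texture[y0][x0] * (1 - fx) + texture[y0][x0 + 1] * fx
    bottom = texture[y0 + 1][x0] * (1 - fx) + texture[y0 + 1][x0 + 1] * fx
    return top * (1 - fy) + bottom * fy

tex = [
    [0, 100],
    [100, 200],
]
print(bilinear(tex, 0.5, 0.5))  # 100.0 -- exactly halfway between all four texels
```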

*Texture Compression: 3Dc*
3Dc is an open compression technique developed by ATi that allows for up to 4:1 compression, meaning that game developers can include 4x as much texture information per memory block as before. As things currently stand, nVidia cards do not utilize 3Dc; instead they use a different texture compression algorithm known as V8U8, which allows for only 2:1 compression.
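The practical upshot of those ratios is just memory-budget arithmetic. A quick Python sketch (the 128MB budget and 4MB texture size are made-up illustrative figures, not benchmarks):

```python
# Back-of-the-envelope arithmetic for the compression ratios above:
# the same memory budget holds 4x the texture data at 4:1 (3Dc)
# versus 2x at 2:1 (V8U8).  Numbers are illustrative only.
def textures_per_budget(budget_mb, texture_mb, ratio):
    """How many textures of a given uncompressed size fit in a memory budget."""
    return int(budget_mb / (texture_mb / ratio))

budget = 128   # MB of texture memory (hypothetical)
texture = 4    # MB per uncompressed texture (hypothetical)
print(textures_per_budget(budget, texture, 1))  # 32 uncompressed
print(textures_per_budget(budget, texture, 2))  # 64 at 2:1
print(textures_per_budget(budget, texture, 4))  # 128 at 4:1
```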

*Framerate*
The framerate (fps or Hz) of a game or benchmark is literally the number of frames being rendered per second: the higher this value the better suited your video card is for that task. As a general rule, 30fps is the minimum bar for what is considered playable: if you're getting framerates below 30fps then you should lower the quality settings or even consider an upgrade. Many games and benchmarks have built in mechanisms for displaying the framerate however there is an application called *FRAPS* that will also do the same for any DirectX/OpenGL scene (and do other functions too).
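Counting framerate the way a tool like FRAPS displays it is conceptually simple: frames rendered divided by elapsed time. A rough Python sketch (the `render_frame` stand-in here just sleeps; a real counter hooks the game's frame presentation):

```python
import time

# Minimal sketch of a framerate counter: count frames rendered over
# an elapsed interval and divide.
def measure_fps(render_frame, duration=0.25):
    """Call render_frame repeatedly for `duration` seconds, return frames/sec."""
    frames = 0
    start = time.perf_counter()
    while time.perf_counter() - start < duration:
        render_frame()
        frames += 1
    return frames / (time.perf_counter() - start)

# Stand-in for a real frame: just burn ~10 ms per "frame"
fps = measure_fps(lambda: time.sleep(0.010))
print(round(fps))  # roughly 100 (sleep granularity varies by system)
```

By the 30fps rule of thumb above, anything this counter reports below 30 means it's time to drop quality settings.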


----------



## Praetor

Section 07 - A Look at ATi
*ATi* is one of the largest (in terms of marketshare) players in the videocard market, with their direct competition being nVidia. Their claims to fame are:

 *Image Quality*. Typically speaking, ATi cards are generally known to have superior image quality when compared to their nVidia counterparts
 *AA/AF Performance*. Related to their image quality, not only do ATi cards generally have better image quality, enabling performance hitting features like AA and AF generally results in a smaller performance hit on ATi hardware than on nVidia hardware (meaning that more often than not, ATi users can, at the same framerate as nVidia hardware, have AA/AF turned one level higher than their competition)
 *Texture Compression: 3Dc*. If one of ATi's general claims to fame is image quality, the other is memory efficiency, and 3Dc is a testament to that: allowing 4:1 compression of textures, it means game developers can cram 4x as much data into texture memory as usual. ATi's competition, nVidia, uses a different technique that currently allows only 2:1 compression, meaning ATi users enjoy a [potential] 2:1 advantage in terms of texture quality.
 *Unlocking and overclocking*. Especially with older cards like the 9500Pro, it was found that pixel processing pipelines could be activated with a simple hack (thus allowing people to buy the cheaper 9500Pro and get the performance of the 9700Pro). This trend is common with other ATi cards as well. Furthermore, ATi cards (until recently, with their X1000 series) run cooler, which in turn leaves headroom for overclocking.
 *Ringbus*. Again with memory efficiency, ATi's new 512bit RingBus allows for extremely efficient memory access and even allows for the controller itself to be programmed meaning it can adapt to the changing memory access requirements.

On the flip side however, there are some not-so-great things generally known about ATi:

 *Less than perfect drivers.* Until a year or two ago, ATi drivers were known to be unstable and to present a wide variety of "undesired effects" in various games and applications. Starting with Catalyst 4.x things began to shape up, and for the most part ATi drivers are now quite stable. The problem now is that dealing with ATi's drivers often means (a) potentially dealing with the Catalyst Control Center (which, requiring .NET, has its own issues, although users can still opt for the Classic setup) and (b) getting access to ATi's drivers can be a hassle: navigating ATi's website isn't nearly as friendly an experience as, say, nVidia's driver management
 *UDA.* ATi's unified driver architecture, an attempt to follow nVidia's footsteps in making a one-file-to-download-for-all driver package, is generally 'bloated'. While this isn't nearly as much of an issue as the .NET dependency or, worse, unstable drivers, it does irk quite a number of people
 *Paper launches.* By far one of the most annoying things about ATi are their paper launches: they announce that a zillion different products are available but none actually hit the market for months to come, and others are pulled after a very short lifespan (i.e., the X700XT)
 *Nomenclature.* It seems that with each new generation ATi adds half a dozen more suffixes to its cards (although nVidia is following suit) ... much to the dismay of casual consumers everywhere.

Now for a quick breakdown of the last three or so generations of ATi cards

 DirectX7 Class Radeons. These were the cards prior to the introduction of programmable shaders (i.e., before DirectX8); their competition was the GeForce2
 R100 - Radeon [SDR, DDR, DDR-AIW, KE ,VE]
 R100 - Radeon 7000
 R100 - Radeon 7200
 R200 - Radeon 7500 [Plain, AIW, LE]

 DirectX8 Class Radeons. These cards were generally inferior to their nVidia counterparts, the GeForce3/4Ti cards.
 R200 - Radeon 8500 [Plain, AIW, LE]
 R250 - Radeon 9100 [Plain, IGP, ProIGP]
 R280 - Radeon 9200 [Plain, SE, AIW, Pro]
 R280 - Radeon 9250

 DirectX9 Class Radeons - 9000 Series. nVidia proved to be the superior make for DirectX8 class cards; however, when the Radeon9700 hit the market things rapidly switched, leaving ATi as the dominant player for a very long time (i.e., pretty much until now). nVidia's response was the much-maligned GeForceFX lineup.
 R300 - Radeon 9500 [I, L, Pro]
 R350/R360 - Radeon 9550 [Plain, XT]
 R350/R360 - Radeon 9600 [Plain, AIW, Pro, SE, Pro-AIW]
 R360 - Radeon 9600 [XT, AIW-XT]
 R300 - Radeon 9700 [Plain, AIW, Pro, Pro-AIW]
 R350 - Radeon 9800 [Plain, SE, XL]
 R350/R360 - Radeon 9800 [Pro, Pro-AIW]
 R360 - Radeon 9800 XT

 DirectX9 Class Radeons - X-Series. This series, led by the X800/X850, was the direct competitor in the neck-and-neck race with nVidia's GeForce6 lineup
 R370 - Radeon X300 [Plain, SE, SE Hypermemory]
 R380 - Radeon X600 [AIW, Pro, XT]
 R410 - Radeon X700 [LE, Pro, XT]
 R410 - Radeon X740 XL
 R420 - Radeon X800 [SE, Pro,  XT, XT PE, XT-AIW]
 R423/R480 - Radeon X800 [GT, GTO, GTO²]
 R423 - Radeon X800 XT
 R430 - Radeon X800 [Plain, XL]
 R430Pro - Radeon X800 STD
 R481 - Radeon X850 [Pro, XT, XT PE]

 DirectX9.0c Class Radeons - X1000-Series. This is the current incarnation of the Radeon lineup, which is due to compete with the current GeForce7 lineup from nVidia
 R515 - Radeon X1300 [Plain, Hypermemory, Pro]
 R530 - Radeon X1600 [Pro, XT, XL]
 R520 - Radeon X1800 [XL, XT]
 R580 - Radeon X1900 [Plain, XT, XTX]



----------



## Praetor

Section 08 - A Look at nVidia
Aside from ATi, *nVidia* is the other corporate behemoth; their claims to fame are:

 *Driver stability* For as long as memories can stretch, we've heard of the horror stories with ATi drivers and while that's mostly been fixed by now, nVidia drivers have been, for the most part, stable all the way through. With constant betas and updates being released left, right and center, nVidia's driver team is constantly looking to improve functionality and performance.
 *Revolutionary*. While ATi has the Radeon9700 to its credit (and a hell of a credit it is: first to implement 256bit memory, 3Dc/3Dc+ and, more recently, the RingBus), nVidia has major landmarks like TNT (TwiN Texel, doubling the texture fillrate capabilities), the first 256bit GPU, the first to implement hardware T&L, FSAA and UltraShadow, the first to provide a top-to-bottom DX9 solution, the first to implement SM3.0 and support DX9.0c, and the first to successfully resurrect SLI and multi-GPU processing.
 *Product Launches*. When nVidia says a product is made and ready to go, it really is! Go to a dealer and odds are they've got the product ready to sell (albeit at potentially insane prices)
 *OpenGL*. While ATi may have superior performance in DirectX games, the advantage there is nothing compared to the class leading advantage that nVidia has enjoyed for the longest time (although it seems ATi is dealing with this issue via driver tweaks -- and successfully at that).

Although just like ATi, nVidia is known for a few shady things too:

 *Driver cheating*. A fairly well known issue was nVidia's drivers performing extensive 'optimizations' for the widely used benchmark 3DMark03; their refusal to admit it when they got caught was a bad moment both for the trustworthiness/effectiveness of benchmarks and for the true measure of nVidia's performance
 *GeForceFX*. Just as this lineup was known for being the first top-to-bottom DirectX9 solution, it was also known for absolutely _dismal_ performance in DirectX9 mode -- so poor that it was better off to treat GeForceFX hardware as DirectX8 hardware. All that and the excessively noisy two slot cooler used with the GeForceFX 5800Ultra. 
 *Power Requirements*. This isn't so much of an issue now with the GeForce7/X1000 lineup, but in the GeForceFX/GeForce6/X800 era nVidia's cards required an excessive amount of power without providing a linear return in performance. Having high thermals didn't help things either
 *GeForce4MX*. After a stellar job with the GeForce3/GeForce4Ti series, nVidia screwed up by releasing the GeForce4MX budget lineup of cards, which were DirectX7 parts -- a full step backwards.

Now for a quick breakdown of the last three or so generations of nVidia cards

 *DirectX7 class GeForce cards*. nVidia was the first to have a viable (and incredibly successful) DirectX7 platform:
 NV10 - GeForce [SDR, DDR]
 NV11 - GeForce2 [MX, MX200, MX400]
 NV15 - GeForce2 [GTS-V, GTS, Pro, Ultra, Ti]
 NV17 - GeForce4 [MX420, MX440, MX440SE, MX460, MX4000]

 *DirectX8 class GeForce cards* Up until the release of the Radeon9700, these cards were incredibly successful and the later models were still viable as budget options all the way until the GeForce6 line-up
 NV20 - GeForce3 [Plain, Ti200, Ti500]
 NV25 - GeForce4 [Ti4200, Ti4400, Ti4600]
 NV28 - GeForce4 [Ti4800SE, Ti4800]

 *DirectX9 class GeForce Cards - GeForceFX lineup*. To nVidia's credit, they released an entire platform of cards that were DirectX9 capable out of the box (to address the GF4MX debacle), but then again the performance of these cards was less than stellar
 NV30 - GeForceFX 5800 [Plain, Ultra]
 NV31 - GeForceFX 5600 [Plain, SE, XT, Ultra]
 NV34 - GeForceFX 5200 [Plain, Ultra]
 NV34 - GeForceFX 5500
 NV35 - GeForceFX 5900 [Plain, ZT, SE, XT, Ultra]
 NV36 - GeForceFX 5700 [Plain, VE, LE, Ultra]
 NV38 - GeForceFX 5950 [Ultra]

 *DirectX9.0c class GeForce cards - GeForce6 lineup*. To address the DirectX9 issues the GeForceFX lineup had, the GeForce6 lineup offered a generational leap in performance and was the first to support DirectX9.0c, a marketing point that it's competition, the X800 lineup, could not make.
 NV40 - GeForce6800 [Ultra Extreme]
 NV40/NV41/NV42 - GeForce6800 
 NV40/NV45 - GeForce6800 [GT, Ultra]
 NV41/NV42 - GeForce6800 [XT, LE]
 NV44 - GeForce6200 [Plain, Turbocache]
 NV44 - GeForce6600 [Plain, GT, XT, LE]
 NV45 - GeForce6800 [GTO]

 *DirectX9.0c class GeForce cards - GeForce7 lineup*. Yet another massive leap in performance, the GeForce7 lineup has had a 6 month marketshare advantage over ATi's competition, the X1000 series
 G70 - GeForce 7800 [GS, GT, GTX]


----------



## Praetor

Section 09 - Official Crap
*ATi Stuff*

 A whole crapload of *Catalyst* downloads
 ATi's Whitepaper on *3Dc*
 *ATi's OpenGL comeback*
 *ATi's Crossfire*
 *Hypermemory*

*nVidia Stuff*

 A whole crapload of *Forceware* downloads
 A very painful look at why *GeForceFX* is not a DX9 card
 *nVidia's SLI*
 *TurboCache*
 *CoolBits*, the registry tweak to enable hidden nVidia driver features

*General Video Stuff*

 *DirectX*
 *RivaTuner*, a very useful information/tweaking tool for both ATi and nVidia hardware
 A huge collection of *benchmarks*
 A look at what is happening *to OpenGL in Vista*


----------

