


It used to be that if one wanted a reasonably priced video card, be it for an upgrade or for use in a lower priced system, one would have to buy a video card based on a previous generation chip. For example, rather than shell out the $250 or more for a GeForce 256 last year, a good number of system builders and upgraders simply decided to go with a video card based on NVIDIA's previous product: the TNT2. The same has been true with other companies' offerings: many skipped the ATI Rage Fury MAXX to go with a Rage 128 Pro or held out from buying a 3dfx Voodoo3 3000 only to go for a lower cost Voodoo2.

This has been the way the video card industry has worked for some time. Rather than designing a new product targeted at the cost conscious market, many manufacturers have pushed older, usually cheaper and less desirable products on the budget consumer. The consumer on a budget has historically been pushed around, swept aside, and somewhat abused by the lack of cutting edge technology available at a reasonable price. Companies have been focused on producing bigger, better, and faster cards, a philosophy which has resulted in the $400-plus cards we see today.

The truth of the matter is that for every performance freak out there who has no monetary limit when it comes to getting the best, there are about 23 people looking to spend less than $300. These numbers do not come out of thin air: a recently conducted AnandTech poll showed that while 511 people were willing to pay $400 and up for a video card, 12,532 people were looking to spend less than $400. The same poll showed that 91% of those polled believe that video card prices are getting excessively high.

Well, thankfully, graphics chip manufacturers have finally realized that a large part of the market is untapped. With hype over old products dying with the release of new and more expensive ones, manufacturers realized that the production of a high performance "budget" chip could result in some massive sales. Thus, the idea of the crippled video card was born.



The Contenders

Rather than produce new cores for each budget card on the market today, NVIDIA, 3dfx, and ATI have all chosen to make less costly versions of their upper end products. By taking higher-cost parts and decreasing their complexity, companies are finally starting to produce the cutting edge, low cost chips necessary to power low cost video cards. This idea of taking a higher performing chip and "crippling" it to reduce cost is not so revolutionary: a very similar method is employed by Intel when differentiating their Celeron and Pentium III cores. In the case of Intel's chips, Coppermine cores with "bad" cache blocks are designated as Celeron parts, while Coppermine cores with a fully working 256KB of cache are labeled Pentium III processors.

In a similar fashion, NVIDIA, 3dfx, and ATI have all found ways to lower the cost of the cards they produce. Each manufacturer utilizes different methods to decrease the cost, but the same result is achieved: a lower cost product. Although modifications of the graphics processor core differ from manufacturer to manufacturer, there is one constant in the current field of budget video cards: SDR memory.

As recently as a year and a half ago, nearly every video card featured the same basic memory type. Sure, there were different varieties of this memory in use, mainly SDRAM and SGRAM, but all of it shared the same common principle: it could only transfer data once per clock cycle. The introduction of the GeForce 256 DDR changed all that.

NVIDIA's GeForce 256 DDR essentially revolutionized the graphics card industry. It is not that other manufacturers had not been working with double data rate memory, which can transfer data on both the rising and falling edges of a clock cycle; it is just that NVIDIA beat them to it.

Although the benefits that arose from using DDR memory were extensive, so were the costs. Even though we have gotten word time and time again that DDR memory only costs 3% to 5% more to manufacture than SDR memory, the price to consumers has been much greater. Memory manufacturers are able to sell DDR memory at such high prices simply because card manufacturers are willing to pay them. Since in many cases the memory bus becomes a limiting factor before the actual processor does, switching from SDR memory to DDR memory (effectively doubling your memory bandwidth) usually results in massive performance gains. However, as we said, this comes at a price.

It is therefore no surprise that all the cards we will be examining today utilize cheaper SDR memory. And while you may have heard of budget cards coming with more powerful DDR memory, things may not be as they appear. More on that later, but for now, let's take a look at the cards that are currently considered the budget gamer's best choices, an arena dominated by the same three powerhouses present in more expensive video card markets: 3dfx, ATI and NVIDIA.



The 3dfx Voodoo4 4500

Arriving nearly six full months behind its faster performing Voodoo5 5500 brother, 3dfx's Voodoo4 4500 was quite delayed. Powered by a single VSA-100 chip, 32MB of SDR memory, and a 128-bit memory bus, the Voodoo4 4500 set its sights on the sub-$150 video card market.

Unlike one of the other cards you will see in this comparison, the Voodoo4 4500's VSA-100 processor is completely unmodified from the version found on the more expensive Voodoo5 5500. Instead of modifying the core, 3dfx chose to make the Voodoo4 4500 a single processor counterpart to its dual processor Voodoo5 5500 brother. Therefore, unlike the 5500's dual chip configuration, the Voodoo4 4500 comes with only a single VSA-100 package.

Since the power of the Voodoo4 4500 hinges on the power of a single VSA-100 chip, it was important that a single unmodified version of the chip be powerful enough to compete with the other chips in this price range, a difficult feat considering that the VSA-100 was designed to be used in an SLI type setup, as can be determined by its name: VSA stands for "Voodoo Scalable Architecture."

Just how much power does each VSA-100 chip have? Based on an enhanced 0.25 micron process, the VSA-100 chip features two rendering pipelines, each capable of processing one texel per clock. The chip supports T-Buffer effects, full screen antialiasing, and FXT1/DXTC texture compression. This power is contained in a 14 million transistor package built on a 6-layer metal process. Perhaps the most crucial change for the VSA-100 chip over its Voodoo3 older brother is that the chip now supports 32-bit color rendering, 32-bit textures, 32/24-bit Z & W, an 8-bit stencil buffer, and 2048x2048 textures. The core is clocked at a conservative 166MHz, most likely as a result of the VSA-100's large 0.25 micron process. The 166MHz core clock translates into a dual textured fill rate of 166 megapixels per second, or 333 megatexels per second. And while the VSA-100 chip is able to support anywhere from 4MB to 64MB of memory, each Voodoo4 4500 comes outfitted with 32MB.
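Those fill rate figures follow directly from the clock speed and pipeline layout quoted above. Here is a minimal sketch of the arithmetic (our own illustration, not 3dfx data):

```python
# Fill rate arithmetic for the VSA-100 as described above (a simplified sketch).
CORE_MHZ = 166          # core clock
PIPELINES = 2           # rendering pipelines
TEXELS_PER_PIPE = 1     # texels each pipeline applies per clock

megatexels_per_s = CORE_MHZ * PIPELINES * TEXELS_PER_PIPE   # 332, rounded to 333 in the spec sheet
dual_textured_megapixels = megatexels_per_s / 2             # two textures per pixel -> ~166 Mpixels/s

print(megatexels_per_s, dual_textured_megapixels)
```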

As we briefly discussed above, the memory bus in a video card plays a crucial role. As with nearly all other cards found in the sub $150 price range, the Voodoo4 4500 has a 128-bit memory bus that is dependent on 32MB of SDR SDRAM chips. The chips are clocked at 166MHz, providing 2.7 GB/s of memory bandwidth.
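The 2.7 GB/s figure is simply the bus width multiplied by the memory clock; the quick calculation below (our own sketch) shows where the number comes from.

```python
# Peak memory bandwidth = bus width (bytes) x clock (MHz) x transfers per clock.
bus_bytes = 128 / 8            # 128-bit bus
clock_mhz = 166
transfers_per_clock = 1        # SDR memory transfers data once per clock

bandwidth_gb_s = bus_bytes * clock_mhz * transfers_per_clock / 1000
print(round(bandwidth_gb_s, 2))   # ~2.66 GB/s, commonly rounded to 2.7 GB/s
```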

Although the raw specs of the Voodoo4 4500 are dwarfed by some of the other value cards out there, some features of the VSA-100 chip make it special. You have probably heard of all the features 3dfx has been promoting with this chip: excellent full screen antialiasing, DXTC texture compression, T-Buffer effects, the works. It is here that we see the cutting edge of 3dfx's budget chip, proving once again that just because a processor is inexpensive does not mean it has to be short on features.



For more information regarding both the VSA-100 chip as well as the Voodoo4 4500 as a whole, please see our 3dfx Voodoo4 4500 AGP review.



The ATI Radeon SDR

Of all the budget cards out there, ATI's Radeon SDR has the finest line separating it from the more expensive Radeon DDR models. In fact, the core on the Radeon SDR is essentially identical to that of many Radeon DDR cards out there. Before we discuss how ATI chose to cut costs and differentiate between the upper end and lower end Radeons, let's see how powerful the Rage6C core is.

Operating at 166MHz in the Radeon SDR, the Rage6C core features two rendering pipelines, each capable of processing three texels per clock. This is unique, not only for value video cards but for performance cards as well. In the past, we have seen cards able to process two texels per pipe per clock. By including a third texel, the Rage6C is a perfect solution for tri-textured games. In situations where the video card is called upon to process three textures per pixel, the Rage6C is able to render the pixel in a single pass. In contrast, every other card at this price level is forced to use two passes to render the pixel, as each pipeline is only able to process two textures per pass.

The problem is that while the Rage6C core excels in tri-textured games, it takes a hit when a scene is only rendered in two textures. In this situation, the tables turn, with the Rage6C not able to take advantage of its extra texel per pipe. When rendering a dual textured scene, the third texel of the Rage6C's pipeline goes unused. Other dual texel cards, such as the aforementioned Voodoo4 4500 and the GeForce2 MX, are able to use both texels to render the scene.

As you can see, the Rage6C core would easily dominate in tri-textured games, as it is able to process 2 pixels per clock (using all three texels in each pipeline) while NVIDIA's GeForce2 MX, for example, would only be able to process 1 pixel per clock (using both texels in the first pipeline and only one texel in the second one). In principle, this is great, as more textures per pixel equates to a higher level of detail. We run into a problem, however, when putting this theory into practice. Since the vast majority of games out there are currently dual textured, the Rage6C core is not able to fully use its 3 texel per pipe feature. In the future, this may change as games move to tri-textured rendering, but for now many Radeon owners are not getting the most from their card. The good news is that the Radeon SDR is able to match the GeForce2 MX perfectly in dual textured games, as both the MX and the Radeon SDR are able to process a total of 2 dual textured pixels per clock cycle. The Voodoo4 4500 falls behind here, as it is able to process only one dual textured pixel per clock.
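To make the pass counting above concrete, here is a small sketch (our own illustration, built from the pipeline figures quoted in this article) of how many pixels per clock each chip can emit for a given number of textures per pixel.

```python
import math

# (rendering pipelines, texels each pipeline can apply per pass), as quoted above
CHIPS = {
    "Radeon SDR (Rage6C)": (2, 3),
    "GeForce2 MX":         (2, 2),
    "Voodoo4 4500":        (2, 1),
}

def pixels_per_clock(pipes, texels_per_pipe, textures_per_pixel):
    # A pixel needing more textures than a pipeline can apply takes extra passes.
    passes = math.ceil(textures_per_pixel / texels_per_pipe)
    return pipes / passes

for name, (pipes, texels) in CHIPS.items():
    print(name,
          pixels_per_clock(pipes, texels, 2), "px/clk dual-textured,",
          pixels_per_clock(pipes, texels, 3), "px/clk tri-textured")
```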

On top of the rendering pipelines, the Rage6C core also possesses quite an impressive T&L engine, named the Charisma engine. The Charisma engine is able to perform all transform, lighting, and clipping calculations, thus freeing up some of the CPU power to perform other tasks. On paper, the Charisma engine is able to process 30 million triangles per second; however, this number is hard to verify. The engine is also able to take advantage of quite a few DirectX 8 features, something no other card in this price range is able to do.

Combining the 166MHz clock speed with two pipelines processing three texels each per clock, we find that the Rage6C core is able to process roughly 1000 million texels per second, making it the equivalent of a "gigatexel shader." There is quite a bit of power behind the core.



The problem is that for all the power present in the Rage6C core, coupling it with nearly any existing memory bus cripples it. Even in the case of the Radeon DDR, the memory bus is a limiting factor in card speed. In order to help alleviate the immense stress encountered by the memory bus, ATI devised their HyperZ technology.

HyperZ is ATI's answer to the memory bandwidth bottleneck. Rather than rely on increasing memory speeds, ATI saw the opportunity to reduce some of the overhead normally present in 3D rendering. The result is what ATI calls HyperZ technology, which is made up of three parts: Hierarchical Z, Z-Compression, and Fast Z Clear. Each function of HyperZ acts upon the Z-buffer portion of memory, the part of memory dedicated to holding the z-values of rendered pixels. It is from these values that the scene is rendered, according to each pixel's position on the z-axis. ATI's HyperZ technology optimizes access to the Z-buffer, thus speeding up rendering by reducing the amount of data that needs to travel along the memory bus. For a more in-depth discussion of HyperZ and its components, see our ATI Radeon 32MB SDR review.
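As a rough illustration of the kind of overhead HyperZ attacks, consider Fast Z Clear. The sketch below is our own simplified model, not ATI's hardware logic, and the 8x8 block size is purely an assumption: instead of writing a clear value into every Z-buffer entry at the start of each frame, the chip can simply flag whole blocks as cleared and skip the writes.

```python
# Simplified illustration of the Fast Z Clear idea (our own model, not ATI's implementation).
WIDTH, HEIGHT = 1024, 768
BYTES_PER_Z = 4                     # 32-bit Z values
BLOCK_PIXELS = 8 * 8                # assumed block size, for illustration only

naive_clear_bytes = WIDTH * HEIGHT * BYTES_PER_Z       # rewrite every Z value: ~3 MB per frame
block_flag_bytes = (WIDTH * HEIGHT) // BLOCK_PIXELS    # one "cleared" flag byte per block: ~12 KB

print(naive_clear_bytes, block_flag_bytes)
```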

It is clear that HyperZ plays a crucial role in all Radeon incarnations; however, in the Radeon SDR it plays the most important role. The memory bus on the Radeon SDR is already "crippled" by using SDR memory instead of the DDR memory used in the Radeon DDR cards, which effectively halves the available bandwidth on the 128-bit memory bus, reducing it to 2.7 GB/s. In this case, however, the numbers do not tell all. Since HyperZ works to reduce the amount of data that would normally have to travel over the memory bus, the Radeon SDR actually has a higher effective memory bandwidth than the raw figure suggests. This is key to the success of the Radeon SDR, since in all value cards memory bandwidth plays the greatest role. Often, it is a bottleneck on the memory bus that prevents optimal performance, not a slow core.



By including both a T&L engine courtesy of the Rage6C core as well as using HyperZ technology, the Radeon SDR is on the cutting edge, be it a budget card or not.



The NVIDIA GeForce2 MX

Perhaps the most well known of the bunch, and in many incarnations the cheapest, NVIDIA's GeForce2 MX chip powers quite a variety of budget video cards on the market today. Since NVIDIA only makes the chips and relies on third party manufacturers to make the cards, we have seen quite a few GeForce2 MX based budget cards out there. Before we go into the two different types of GeForce2 MX cards out there, let's take a look at the core itself.

The GeForce2 MX core is very similar to its more powerful GeForce2 GTS brother, however there are some key differences that designate the MX as a budget chip. The MX contains the same advanced T&L engine found on the more expensive GTS and Ultra chips. This engine is NVIDIA's second generation T&L processor and this time gains the ability to perform not only transform and lighting calculations but also clipping calculations. Because of its lower clock speed, the engine boasts the ability to process 20 million triangles per second versus 25 million on the GeForce2 GTS. In addition, the MX is produced on a 0.18 micron process, making it cooler running and less power hungry than the 0.25 micron VSA-100 chip used in the Voodoo4 4500.

It is here, however, that the similarities between the GeForce2 GTS core and the GeForce2 MX core end. Perhaps the most disappointing cutback in the MX is the fact that two of the four rendering pipelines present on the GTS are lost. The GeForce2 MX contains a total of two rendering pipelines, each capable of processing two texels per clock. This produces a total of two pixels per clock in a dual textured game, exactly half of the GeForce2 GTS' four. By cutting out two of the rendering pipelines on the GeForce2 MX, NVIDIA is able to reduce die size as well as chip complexity, thus reducing overall cost.

The second most disappointing cutback present on the GeForce2 MX is its clock speed. In order to increase yields and thus decrease production costs, NVIDIA clocks all GeForce2 MX chips at 175MHz, 25MHz shy of its higher performing brother. This does two things for card producers. First off, it means that no fan or heatsink is required for proper operation. Since heatsinks and fans do not come cheap, not requiring any form of cooling keeps the cost of GeForce2 MX cards down. Although we do see some manufacturers including heatsinks and even fans, they are not necessary for normal operation. Secondly, lowering the clock speed secures the MX's place as a budget core, keeping it a safe distance from potential GTS buyers. The worst situation for NVIDIA would be to have two of its own products competing against each other, and by clocking the GeForce2 MX at 175MHz, it can be assured that the MX's performance falls well below that of the GTS core. Thankfully, not all of the news surrounding the lower clock speed is bad. Since the MX core is so similar to the GTS core, one can be almost certain that with some cooling and a bit of tweaking, nearly every MX will reach an overclocked speed of around 200MHz.

Although the value side of the MX may shine through when examining the list of features lost, one attractive feature is gained: TwinView. TwinView is NVIDIA's answer to Matrox's DualHead technology, allowing for the use of a second monitor on a single video card. This feature is targeted more at the workstation market, but is also quite nifty for users out there with a second monitor lying around. Most of the truly "budget" GeForce2 MX cards will not have a second VGA or DVI connector, but for a bit more one can get a card that can take advantage of this feature.

A second feature added to the MX core is what NVIDIA calls "digital vibrance control." According to NVIDIA, this technology allows the user to "manage the digital bitstream between the graphics pipeline and the display system," thus resulting in more accurate color management. In our testing, we found that digital vibrance control is nothing more than glorified gamma control that is packaged with the GeForce2 MX's drivers.

When both the reduced number of rendering pipelines and the decreased clock speed are taken into account, we can see that the GeForce2 MX can output 700 million texels per second, well below the 1600 million texels per second the GeForce2 GTS is capable of. Then again, some of what is lost in speed is made up for in features not found on any other NVIDIA card.
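Plugging the MX's numbers into the same fill rate arithmetic used earlier shows where the 700 and 1600 megatexel figures come from. The comparison below is our own sketch; the GeForce 256 row assumes that chip's 120MHz clock and four single-texel pipelines, figures not quoted in this article.

```python
# Peak texel fill rate = core clock (MHz) x pipelines x texels per pipeline per clock.
# GeForce 256 figures (120MHz, 4 pipes, 1 texel each) are assumptions for comparison.
cards = {
    "GeForce2 MX":  (175, 2, 2),
    "GeForce2 GTS": (200, 4, 2),
    "GeForce 256":  (120, 4, 1),
}

for name, (mhz, pipes, texels) in cards.items():
    print(name, mhz * pipes * texels, "megatexels/s")
```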



The memory bus of the GeForce2 MX is rather unique. As opposed to most cards out there, the GeForce2 MX is able to accommodate two different memory bus configurations: a 64-bit SDR/DDR bus and a 128-bit SDR bus. This means that manufacturers of GeForce2 MX based cards have a choice. They can use standard SDR memory on the 128-bit bus or they can choose to use DDR memory on the 64-bit memory bus. It is this property of the GeForce2 MX that has led to quite a bit of confusion regarding MX based cards.

The vast majority of GeForce2 MX cards out there use the standard 128-bit memory bus coupled with SDR memory clocked at 166MHz to provide 2.7 GB/s of memory bandwidth. This is essentially the same memory bus used by both the Radeon SDR and the Voodoo4 4500. Creative Labs, however, got quite a bit of attention when they announced that they would be producing a "DDR GeForce2 MX." For many users, the letters DDR equate to higher performance, as DDR memory can transfer twice as much data per clock as SDR memory. Therefore, quite a good number of users out there were ecstatic to hear of a DDR GeForce2 MX card, thinking that it would bring the MX up to the level of many GTS cards. This would be true, were it not for the fact that all DDR GeForce2 MX based cards use the narrower 64-bit memory bus, steering NVIDIA clear of any internal competition.

By halving the memory bus width and doubling the amount of data that can travel along the bus per cycle, the 64-bit DDR GeForce2 MX based cards should end up performing identically to the 128-bit SDR MX cards. Indeed, both types of cards feature the same memory bandwidth of 2.7 GB/s. The truth of the matter is that we may even see the 128-bit SDR cards performing slightly faster than the 64-bit DDR cards, since a wide SDR bus is theoretically a bit more efficient than a narrow DDR bus.
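The bandwidth math behind that parity is easy to check; the quick comparison below (our own sketch) runs both configurations through the same formula.

```python
def bandwidth_gb_s(bus_bits, clock_mhz, transfers_per_clock):
    # bytes per transfer x millions of transfers per second -> GB/s
    return bus_bits / 8 * clock_mhz * transfers_per_clock / 1000

print(round(bandwidth_gb_s(128, 166, 1), 2))   # 128-bit SDR @ 166MHz -> ~2.66 GB/s
print(round(bandwidth_gb_s(64, 166, 2), 2))    # 64-bit DDR @ 166MHz  -> ~2.66 GB/s, the same peak figure
```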

So why use a 64-bit DDR configuration in a GeForce2 MX based product? Well, first off, this setup may prove to be slightly cheaper to produce than the 128-bit SDR cards. It is true that DDR memory is pricier than SDR memory; however, the money spent on DDR memory is money saved by producing only a 64-bit wide memory bus. Since a narrower memory bus requires fewer traces in the board, production costs are decreased. The second reason for using DDR memory is its marketing value. As we mentioned above, users who see the letters DDR equate them with higher performance. If two cards were placed on a store shelf next to one another, one boasting DDR memory and the other using SDR memory, the vast majority of users would pick the "faster" DDR product, given that the two are the same price. Judging by the number of e-mails we get regarding the "DDR GeForce2 MX" cards, it seems that this marketing strategy is working. Even examining Creative Labs' website brings no mention of the narrower 64-bit bus, only the key phrase "DDR" all over the page, leading the uninformed consumer to believe that the DDR card is faster. Whether or not it was Creative Labs' intent to mislead consumers cannot be known, but rest assured that not only this "DDR GeForce2 MX" card, but all others touting the same name, will contain the subpar 64-bit memory bus.

Regardless of the type of memory bus used, GeForce2 MX based cards are quite powerful. Featuring a fill rate 220 million texels per second higher than that of NVIDIA's previous top performer, the GeForce 256, the GeForce2 MX core is not just a rehashed old processor. By including features not found on any other NVIDIA product, the MX may just find its way into non-budget systems as well.





The Test

 

Windows 98 SE Test System

Hardware

CPU(s): Intel Celeron 366, Intel Pentium III 550E, AMD Athlon "Thunderbird" 1.0GHz
Motherboard(s): ASUS CUBX (Celeron), ASUS CUBX (Pentium III), ASUS A7V (Athlon)
Memory: 128MB PC100 Corsair SDRAM (Celeron), 128MB PC133 Corsair SDRAM (Pentium III), 128MB PC133 Corsair SDRAM (Athlon)
Hard Drive: IBM Deskstar DPTA-372050 20.5GB 7200 RPM Ultra ATA 66
CDROM: Philips 48X
Video Card(s): 3dfx Voodoo4 4500 AGP 32MB, ATI Radeon 32MB SDR, NVIDIA GeForce2 MX 32MB SDR
Ethernet: Linksys LNE100TX 100Mbit PCI Ethernet Adapter

Software

Operating System: Windows 98 SE, Windows 2000 Professional
Video Drivers: 3dfx Voodoo4 4500 AGP 32MB - beta drivers v1.01.03 (Windows 98) and v1.01.01 (Windows 2000); ATI Radeon 32MB SDR - D714-0831a-62B-SPD; NVIDIA GeForce2 MX 32MB SDR - Detonator 6.31

Benchmarking Applications

Gaming: id Software Quake III Arena demo001.dm3, MDK2 Demo, GT Interactive Unreal Tournament 4.32 Reverend's Thunder Demo



Quake III Arena Performance

As we have noted many times in the past, two main issues factor into a card's 640x480x32 performance when tested on the same CPU: T&L performance and driver maturity. Since the cards we are looking at here all have a rather mature driver base (as they all use essentially the same drivers as their older, higher performing counterparts), we can narrow the differences between the cards down to the T&L engine. This serves to explain the rather lackluster performance of the Voodoo4 4500, as it is the only card on the list without any T&L support. It is for this reason that the GeForce2 MX and the ATI Radeon SDR differ by only 7.8% while the Voodoo4 4500 falls behind the Radeon SDR by a full 25.9%. The 7.8% speed difference between the ATI Radeon SDR and the GeForce2 MX suggests that the Radeon's Charisma T&L engine may not be up to par with NVIDIA's.

Although the 800x600x32 resolution is normally left out of our video card comparisons, in the case of budget video cards it is often the resolution of choice. Running on the 1.0 GHz processor, the cards had no problem getting up to speed at this resolution. We found that while the Voodoo4 4500 is crippled both by its lack of power and its lack of T&L support, the Radeon SDR and the GeForce2 MX perform similarly.

At 1024x768x32, the line between the second and first place cards is nearly nonexistent. The GeForce2 MX ends up beating the ATI Radeon SDR by a mere 0.3 FPS, a difference that is completely unnoticeable in any game. The reason that the Radeon SDR is able to catch up to the GeForce2 MX is unquestionably its HyperZ technology. Since increasing the resolution puts significant strain on the memory bus, both the GeForce2 MX and the Voodoo4 4500 begin to suffer from memory bandwidth limitations. ATI's HyperZ technology, on the other hand, allows the Radeon SDR to continue to scale well at higher resolutions simply because the memory bandwidth bottleneck is not as large a problem.

Although the difference between the Radeon SDR and the GeForce2 MX is gone, the difference between these two cards and the Voodoo4 4500 is quite large. Both the Radeon SDR and the GeForce2 MX outpace the Voodoo4 4500 by a full 55%. In fact, it is for this reason that it was necessary to include scores at 800x600x32, since many Voodoo4 4500 users will choose that resolution over 1024x768x32. The 38.6 FPS that the Voodoo4 4500 supplies here may not be enough to satisfy many gamers.

The final resolution we will investigate in this comparison is 1280x1024x32. Once again, this is not a resolution commonly included in our video card tests. The reason it is included here is that for nearly every value card currently out there, this resolution serves as the maximum playable resolution. At 1280x1024x32, the Radeon is able to take the top spot, once again thanks to HyperZ technology. The problem is that even this card performs at 37.9 FPS, a speed which will be unplayable for many.



MDK2 Performance

We saw at 640x480x32 in Quake III Arena how the Voodoo4 4500's lack of T&L resulted in a huge performance penalty. Once again, in MDK2, we find that the Voodoo4 4500 is quickly dominated at this low resolution. Although the GeForce2 MX performs 3.6% faster than the Radeon SDR, it beats the Voodoo4 4500 by 148%.

Since MDK2 is rather heavily dependent on a card's T&L engine, we continue to see the GeForce2 MX on top. The MX performs 23.4% faster than the Radeon SDR. The Voodoo4 4500 remains the slowest of the pack, performing 70.4% slower than the Radeon SDR.

Unlike in Quake III Arena at 1024x768x32, the Radeon SDR cannot catch up to the GeForce2 MX here. Although HyperZ helped the Radeon SDR along in Quake III Arena, it cannot bring the Radeon SDR up to GeForce2 MX speed in MDK2 at 1024x768x32. This is most likely a result of the Radeon SDR's subpar T&L engine, first hinted at in Quake III Arena at 640x480x32. Since our MDK2 tests are conducted at default settings with the T&L box checked, the game relies on the T&L engine more than other games do. Unfortunately, this problem really manifests itself in MDK2, where at 1024x768x32 the Radeon SDR performs 23% slower than the GeForce2 MX. In addition, the lack of T&L support on the Voodoo4 4500 prevents it from performing up to par as well, as it comes in 97% slower than the GeForce2 MX.

The results of the MDK2 demo at 1280x1024x32 mimic those found at 1024x768x32. In this case, with the amount of information passing through the memory bus growing, the Radeon SDR cuts its losses to 18%. The fact remains, however, that only the GeForce2 MX provides playable frame rates, with the Voodoo4 4500 rendering at almost unplayable speeds.



Unreal Tournament Performance

Unlike the aforementioned games, Unreal Tournament does not take advantage of any T&L abilities. It is for this reason that we see almost no difference between the GeForce2 MX and the ATI Radeon SDR. The Voodoo4 4500, on the other hand, falls to the bottom of the pack, running about 27% slower than the other cards.

In Unreal Tournament, the Radeon SDR quickly rises to the top of the charts, beating the GeForce2 MX by only 4 FPS, a difference that would not be noticed during gameplay. The reason the Radeon SDR finishes on top is its HyperZ technology. Unreal Tournament is very memory intensive, due to the fact that the textures in the game are very large. By using HyperZ, the Radeon SDR is able to minimize the memory bottleneck present on the other cards.

At 1024x768x32, with the amount of information being passed over the memory bus increasing, the Radeon SDR is able to gain a larger lead of 15%. Both the GeForce2 MX and the ATI Radeon SDR perform at playable speeds, with the Voodoo4 4500 lying on the outskirts of this definition.

At 1280x1024x32, the results mimic those at 1024x768x32. At this point, the game is becoming too slow to play under some circumstances even on our two top performing cards, and the Voodoo4 4500's 16.5 FPS is too slow to be counted as playable.



16-bit vs 32-bit Performance

Since not everyone plays in 32-bit color, it is also important to see how these video cards perform in 16-bit color mode. This matters for two reasons. First off, some users may choose to trade color depth for resolution, making gameplay possible at higher resolutions. Secondly, the 16-bit performance of a video card becomes important as a card ages. With game complexity and demands constantly increasing, it is important that a current generation video card be able to handle the next generation of games, and in many cases achieving a desirable resolution may require decreasing the color depth of a scene, thus relieving some of the stress on the video card.

In order to see how each card is able to adapt to the future by speeding up at lower color depths, we took the resolution of 1024x768 and tested each card at both 16-bit and 32-bit color. Although in an ideal world the 16-bit and 32-bit performance of a card would be identical, this is not the case due to memory bandwidth limitations. Therefore, what we look for is a large jump in performance when going from 32-bit color to 16-bit color, which gives us an idea of how much performance can be gained by reducing the color depth.
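To give a rough sense of why color depth matters so much for memory bandwidth, the sketch below (our own back-of-the-envelope estimate, ignoring Z-buffer traffic, texture fetches, and overdraw) compares the raw frame buffer write traffic at 1024x768.

```python
# Rough frame buffer write traffic at 1024x768 (a simplified estimate: ignores
# Z-buffer traffic, texture reads, and overdraw).
WIDTH, HEIGHT = 1024, 768
TARGET_FPS = 60

for label, bytes_per_pixel in [("32-bit color", 4), ("16-bit color", 2)]:
    mb_per_second = WIDTH * HEIGHT * bytes_per_pixel * TARGET_FPS / 1_000_000
    print(f"{label}: ~{mb_per_second:.0f} MB/s of color writes at {TARGET_FPS} FPS")
```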

In Quake III Arena, our OpenGL color depth comparison, we find that the GeForce2 MX responds best to a decrease in color depth. Gaining a full 26.3 FPS, the MX speeds up by almost 50% when dropping to 16-bit color. This speed increase just might be enough to make future games playable at desirable resolutions.

The Radeon SDR, as a result of its HyperZ technology, cannot gain as much when moving from 32-bit to 16-bit color. This is because even at 32-bit the card is performing at almost full speed, thanks to the effective memory bandwidth increase from HyperZ. Because the card does not hit as large a memory bottleneck in 32-bit color, not much performance can be gained by decreasing the color depth.

The Voodoo4 4500's numbers clearly show that it is memory bandwidth limited at 32-bit color, so there is quite a bit to gain by decreasing color depth. With a jump of 55%, the Voodoo4 4500 becomes playable at 1024x768x16.

In our Direct3D color depth test, we find a situation very similar to that of the OpenGL test, except with the Radeon SDR showing a slightly larger gain. The card continues, however, to show the smallest performance difference when going from 32-bit color to 16-bit color, with a gain of 40%. The GeForce2 MX shows a 47% improvement and the Voodoo4 4500 an 89% improvement.



CPU Scaling Performance

All the above tests were conducted on an Athlon "Thunderbird" 1.0 GHz processor in order to gain an understanding of how each card performs under the most ideal situation. The truth of the matter, however, is that many users will not be placing a value card such as these in a system of such speed. Although all of these cards make perfect sense for builders of budget systems (even 1.0 GHz Thunderbird based ones), they also make sense for people on the upgrade path.

Since each card pairs with processors differently, it is necessary to investigate how each scales across different CPUs. For the purposes of this review, we chose two other processors to compare the cards on: a low end Celeron 366 and a mid-range Pentium III 550E. To examine how each card scales with the CPU at various resolutions, we will take a look at each card individually and see how it performs under each circumstance.

In the case of the Voodoo4 4500, at resolutions above 640x480x32 the card really does not care which CPU it is on. This is a result of hitting a video card bottleneck before any CPU bottleneck is encountered. At 640x480x32, it is clear that the Voodoo4 4500 gains speed from a faster CPU. Once one looks at the 1024x768x32 and 1280x1024x32 numbers, it becomes clear that the CPU is not limiting game play: the Voodoo4 4500 is.

In the case of the Radeon SDR, the card is a bit more picky when it comes to CPU choice. Once again, the card shows dramatic speed fluctuations at 640x480x32, where the video card is not really being challenged at all. Unlike the Voodoo4 4500, the Radeon SDR does show that the CPU choice can matter in budget game cards. The Radeon SDR most certainly prefers any processor of 550 MHz or above, as the substitution of the 366 MHz Celeron decreases performance of the card by 21%. In this case, the CPU is acting as the bottleneck, not the video card. Keep in mind, however, that the lack of performance increase when going from the Pentium III 550E to the Athlon 1.0 GHz processor shows that in these cases the video card is our bottleneck once again. This trend continues, as at 1280x1024x32 the Radeon SDR performs identically on each platform, showing that the video card is the only bottleneck.

The results of our GeForce2 MX scaling tests reveal an outcome very similar to that of the Radeon SDR. Once again, the CPU that proves a bottleneck at 1024x768x32 is the Celeron 366, while the other two CPUs show the video card acting as the bottleneck. Performance drops about 12% when going down to the Celeron 366, showing that the GeForce2 MX is a bit less sensitive than the Radeon SDR when it comes to the CPU used. At 1280x1024x32, the card performs the same regardless of processor, showing that the video card is the bottleneck at this resolution.



Windows 2000 Driver Performance

One important aspect of a video card, and one that is oftentimes pushed aside by manufacturers, is the card's driver support under operating systems other than Windows 98/ME. One area where we have noticed many consumers having problems is Windows 2000, where some manufacturers ignore users' needs. In many cases, the driver support under Windows 2000 is poor enough to slow down gameplay by a noticeable amount.

Once again, we examine each card individually to see what kind of performance hit is taken when using Windows 2000 instead of Windows 98SE.

We are proud to announce that in the case of 3dfx's Voodoo4 4500, there is no longer a performance loss when going from Windows 98 to Windows 2000. When we originally looked at the Voodoo4 4500, we were a bit disappointed with its lack of Windows 2000 speed, as the card performed up to 33% slower under Windows 2000. With beta driver version 1.01.01 in Windows 2000, there was absolutely no performance decrease compared to driver version 1.01.03 in Windows 98. This is great news for all Voodoo4 4500 owners currently running Windows 2000, as it means that your card's full speed can be realized in Windows 2000.

One problem we originally noted upon the release of the Radeon SDR was its poor performance under Windows 2000. Unlike 3dfx, however, ATI has not yet fixed the problem, leaving Radeon SDR owners who run Windows 2000 wishing for a driver update.

Compared to the Windows 98 performance of the Radeon SDR, the performance of the card under Windows 2000 is unacceptable. The card performs 10% slower at 640x480x32, 16% slower at 800x600x32, and a full 19% slower at 1024x768x32. With a frame rate decrease of almost 10 frames per second, the Radeon's poor Windows 2000 performance comes down to one thing: poor driver support. We have noted time and time again that ATI's drivers are often not up to par, and with the Radeon SDR we find that this is once again the case. With 3dfx having fixed their Windows 2000 performance problems, there is no reason ATI cannot do the same.

In the case of the GeForce2 MX's drivers, we once again could not be any happier with the Windows 2000 performance. The Windows 2000 scores are identical to the Windows 98 SE scores. In fact, we have gotten the GeForce2 MX to produce higher scores under Windows 2000, proving once again that strong Windows 2000 support is not an impossible task.



Conclusion

With three main value video card choices out there, how can one decide on the perfect card for their needs? By examining the above results, there emerges a clear winner.

3dfx's Voodoo4 4500 can be discounted simply due to its poor performance. Although 3dfx does have some good ideas on their hands, including the T-Buffer effects and the idea of scaling down by using fewer chips, the fact of the matter is that the Voodoo4 4500 does not possess enough power to be a viable threat. If the card had come out when it was supposed to, around six months ago or earlier alongside the launch of the Voodoo5 5500, we might be telling a different story. Had the card launched then, the competition would have been less fierce and the Voodoo4 4500 could very well have come out on top. The problem is that it did not, and instead it was pushed into a rather crowded arena where NVIDIA and ATI products can walk all over it.

Although the decision to strike the Voodoo4 4500 from a budget gamer's shopping list may be easy, the choice between the Radeon SDR and the GeForce2 MX is not as clear cut. Only after testing each card and evaluating its features can we rule out ATI's Radeon SDR as the budget card of choice. This decision comes down to a few flaws in the Radeon SDR.

The first, and perhaps most devastating, flaw in the Radeon SDR is its poor driver support. As shown in the Windows 2000 driver performance section, ATI gives off the appearance that it couldn't care less about drivers. We know ATI is "working" to alleviate the poor Windows 2000 performance as well as a host of other driver related problems the Radeon series cards have, but we have yet to see any substantial improvement and therefore cannot expect the drivers to get better very quickly.

A second problem with the Radeon is its performance. Although the Radeon SDR coupled with HyperZ technology is not only a good idea but also the direction other manufacturers seem to be heading with future products, the fact of the matter is that the technology is not powerful enough to compete with the GeForce2 MX. Sure, at higher resolutions in Quake III Arena the Radeon SDR beats out the GeForce2 MX, but it does not do so by much. In addition, the tables turn when looking at the T&L intensive MDK2 demo, where the Radeon SDR can no longer keep up with the GeForce2 MX, even at higher resolutions. Finally, under Unreal Tournament the Radeon SDR gains quite a lead over the GeForce2 MX, but that does not make up for the poor MDK2 performance. Since the lackluster showing in MDK2 suggests a subpar T&L engine, we cannot be sure how the Radeon SDR will perform in the future as more and more T&L titles come along. Texture intensive, low polygon games similar to Unreal Tournament are quickly fading out, and with the departure of these games goes much of the attractiveness of the Radeon SDR.

The final straw on the Radeon SDR's back is its price. Selling for around $131 in white box OEM form only, the Radeon SDR runs $30-40 more than many GeForce2 MX based cards.

So, with some consideration, we can pick a GeForce2 MX based card as the budget card of choice. Not only does the card shine in T&L intensive games, but it also performs strongly in all of the benchmarks. In addition, many GeForce2 MX cards are currently on the market for around $100 or even a bit less, and for a bit more money you can get an MX based card with both a heatsink and a fan, making overclocking a cinch. Dual display, TwinView ready GeForce2 MX based cards are still rather hard to find and may end up costing up to $150, but for many users in a business type environment it may be worth it.

It won't be long before the current generation of high performance video processors becomes the basis for the budget gamer's card of choice. With GeForce2 GTS prices falling nearly daily, it will not be long before GeForce2 MX based cards are priced even lower. Do not expect, however, the GeForce2 MX to remain the budget card of choice for too long. Within a few months we should see new product announcements from both NVIDIA and ATI. Without question, modified versions of these new processors will soon find their way into new budget video cards, and the cycle will start all over once again.
