Original Link: https://www.anandtech.com/show/742
Almost a full month has passed since NVIDIA announced the GeForce3, their Infinite Effects GPU. Although benchmarks have cropped up here and there, for the most part there have been no complete reviews of the retail GeForce3 product that will soon be finding its way into stores. The question of why has come up more than a few times on message boards and discussion groups all over the web, including the AnandTech Forums. Before we dive into the benchmark comparison, we thought we would shed some light on exactly why, for the first time since NVIDIA started tending to the online community, benchmarks have taken this long to appear.
1) Drivers - Although NVIDIA's Unified Driver Architecture definitely helps keep driver development time to a minimum, with as radical a change to the core as the GeForce3 promised, the Detonator 3 drivers we had been using for so long weren't going to be the best solution. Until recently, the drivers available for the GeForce3 were in fact quite buggy. The driver NVIDIA supplied to all reviewers was built on March 15th (according to NVIDIA) and carries the version number 11.01. This will not be the driver shipping with the GeForce3, as it is still somewhat buggy, but it does fix a lot of the stability issues we noticed with the earlier drivers, including the 10.80 and 11.00 versions.
2) An expensive launch cannot be delayed - NVIDIA worked very hard to prepare for the launch of the GeForce3 at the most recent Intel Developer Forum (IDF). According to most, NVIDIA did in fact steal the show with the launch. If you haven't already noticed, it is generally advisable for companies to make major product announcements in conjunction with a major tradeshow, simply because the press and the enthusiasts are ready and waiting for information surrounding the show. Announcing a major product like the GeForce3 at IDF is mainly a PR tactic, and NVIDIA wasn't about to let their chance at IDF go to waste over buggy drivers that were fixable.
3) Honestly, there are no benchmarks - Had NVIDIA allowed reviewers to go live with benchmarks on February 27 to coincide with the GeForce3's technology launch, there would have been quite a bit of negative press regarding the GeForce3. As we mentioned in our 'NV20' Revealed article, the GeForce3's performance superiority in current games only appears at high resolutions (higher than 1024 x 768 x 32) or when enabling its Quincunx Anti-Aliasing. In many ways, the GeForce3 would have paralleled the Pentium 4's launch in that the current crop of benchmarks (in this case, games) would not have shown any performance increase worth the money. Now, with 3DMark 2001 out as well as a demo of an upcoming DX8 title, NVIDIA hopes that these two can soften the blow. We'll let you be the judge, as we're about to paint as complete a picture of the GeForce3's performance as possible.
We strongly suggest you take a look at our GeForce3 Technology Review, entitled NVIDIA's GeForce3: 'NV20' Revealed, before moving forward as we will not explain any of the architecture here. Our benchmark analysis will assume a thorough knowledge of the GeForce3's architecture as provided in our Technology Review.
Let's get to it.
The Cards
Here we have MSI's StarForce-822 GeForce3 Card:
and the NVIDIA Reference Design:
Incompatibilities
The only issue we had with the GeForce3 that we couldn't work around was that Mercedes-Benz Truck Racing would not run without missing textures, forcing us to remove it from our benchmark suite.
The Drivers
The drivers we used were version 11.01. Screenshots of the utilities can be found below:
The Test
| Windows 98 SE Test System | |
|---|---|
| Hardware | |
| CPU(s) | AMD Athlon-C (Thunderbird) 1.0GHz (133MHz) |
| Motherboard(s) | ASUS A7V133 |
| Memory | 128MB PC133 Corsair SDRAM (Micron -7E Chips) |
| Hard Drive | IBM Deskstar DPTA-372050 20.5GB 7200 RPM Ultra ATA 66 |
| CD-ROM | Philips 48X |
| Video Card(s) | ATI Radeon 64MB DDR, Hercules/Guillemot 3D Prophet 4500, NVIDIA GeForce3 64MB DDR |
| Ethernet | Linksys LNE100TX 100Mbit PCI Ethernet Adapter |
| Software | |
| Operating System | Windows 98 SE |
| Video Drivers | NVIDIA version 11.01 |
| Benchmarking Applications | |
| Gaming | id Software Quake III Arena demo127.dm3 |
OpenGL Performance - Quake III Arena
No video card review could possibly start off without Quake III Arena benchmarks. Using the latest 1.27g patch and the new, more intensive, demo127 benchmark we kick off the GeForce3 tests.
At lower resolutions the GeForce3 is consistently a couple of frames slower than the older GeForce2 cards, quite possibly because its T&L unit is now programmable and does not offer the same benefit as the GeForce2 cards' hard-wired T&L in games that support it.
It doesn't take too long for the GeForce3 to climb to the top. At 1024 x 768 x 32, the GeForce3 already begins to distance itself from the Ultra by about 7%. Remember, the GeForce3 has exactly the same amount of peak theoretical memory bandwidth as the GeForce2 Ultra; the difference is that the GeForce3's Lightspeed Memory Architecture makes it more efficient in its use of that bandwidth.
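For those curious where that ceiling comes from, here is a minimal sketch of the peak-bandwidth arithmetic, assuming the commonly quoted 230 MHz DDR memory clock and 128-bit bus shared by the GeForce2 Ultra and the GeForce3 (the figures are used for illustration, not taken from our test cards):

```python
def peak_bandwidth_gbs(mem_clock_mhz: float, bus_width_bits: int, ddr: bool = True) -> float:
    """Peak theoretical memory bandwidth in GB/s."""
    effective_clock_hz = mem_clock_mhz * 1e6 * (2 if ddr else 1)  # DDR transfers data twice per clock
    bytes_per_transfer = bus_width_bits / 8                       # a 128-bit bus moves 16 bytes per transfer
    return effective_clock_hz * bytes_per_transfer / 1e9

# 230 MHz DDR memory on a 128-bit bus works out to roughly 7.4 GB/s for both cards.
print(round(peak_bandwidth_gbs(230, 128), 1))
```

The Lightspeed Memory Architecture doesn't raise that number; it simply wastes less of it on pixels that are never seen.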
The gap widens yet again, as the GeForce3 is able to provide 70.5fps at 1600 x 1200 x 32 under Quake III Arena, a feat once thought impossible. The GeForce3 makes it quite possible, albeit at a very steep price.
OpenGL Performance - MDK2
MDK2 is an aging third-person shooter title that has a lot of FPS characteristics to it. At the same time, this is the very type of title that won't see a major performance increase from the GeForce3. Starting at 640 x 480 x 32, the GeForce3 is no better than a GeForce2 GTS, Pro or Ultra since at such a low resolution there are no memory bandwidth bottlenecks.
At such a low resolution the performance really comes down to CPU performance and driver issues. Since all of the cards are running on the same CPU, there is virtually no difference between any of the NVIDIA solutions, and there don't appear to be any driver-related issues either.
As we crank up the resolution, the GeForce3 begins to step ahead. Remember that the GeForce3 has the same amount of theoretical memory bandwidth as a GeForce2 Ultra; it is simply more efficient in using it. NVIDIA's Lightspeed Memory Architecture, including its HyperZ-like features, is helping its performance here.
The improvement isn't too great, since it only translates into about a 6% increase in frame rate over the GeForce2 Ultra.
What is worth noting is that the Kyro II hasn't changed in performance at all between 640 x 480 and 1024 x 768, while even the GeForce3 took a small performance hit. As a quick refresher, the Kyro II is a tile-based rendering chip which translates into extremely efficient management of memory bandwidth and fillrate power.
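As a purely conceptual sketch of why tiling is so frugal with bandwidth (this is an illustration, not PowerVR's actual implementation, and the 32-pixel tile size is an assumption): the screen is carved into small tiles, each tile collects the triangles that touch it, visibility is resolved on-chip per tile, and only the surviving pixels are ever textured and written out to memory.

```python
TILE = 32  # assumed tile size in pixels, purely illustrative

def bin_triangles(triangles):
    """Group triangles by the screen tiles their bounding boxes overlap."""
    bins = {}
    for tri in triangles:                  # tri = ((x0, y0), (x1, y1), (x2, y2))
        xs = [v[0] for v in tri]
        ys = [v[1] for v in tri]
        for ty in range(int(min(ys)) // TILE, int(max(ys)) // TILE + 1):
            for tx in range(int(min(xs)) // TILE, int(max(xs)) // TILE + 1):
                bins.setdefault((tx, ty), []).append(tri)
    return bins

# Each tile's triangle list is then resolved for visibility on-chip and only the
# visible pixels are shaded, which is why external bandwidth and fillrate go so far.
print(bin_triangles([((10, 10), (40, 12), (20, 60))]).keys())
```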
At 1600 x 1200 the GeForce3 is finally able to pull away from the GeForce2 Ultra, offering a 22% performance improvement courtesy of nothing more than the GeForce3's efficiency improvements. A 22% boost at 1600 x 1200 is impressive, but is it worth $500?
Real-World Fillrate
We debuted the Serious Sam Test 2 fillrate test in our recent Kyro II Review, using it as a good way to measure real-world fillrate instead of simply quoting manufacturer specs, which are generally very unrealistic. Since the GeForce3 has the same theoretical fillrate as the GeForce2 GTS, its performance here should be compared to that of the GeForce2 GTS; the comparison will show you how efficient the GeForce3's Lightspeed Memory Architecture, including its Visibility Subsystem, really is.
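If you want to reproduce the efficiency comparison yourself, the arithmetic is trivial. Here is a minimal sketch, assuming the commonly quoted 200 MHz core clock and four pixel pipelines shared by the GeForce2 GTS and GeForce3; the measured figure comes from the benchmark's own output, and the 400 Mpixels/s below is only a placeholder:

```python
CORE_CLOCK_MHZ = 200       # assumed core clock, common to the GeForce2 GTS and GeForce3
PIXEL_PIPES = 4            # assumed number of pixel pipelines
THEORETICAL_MPIXELS = CORE_CLOCK_MHZ * PIXEL_PIPES   # 800 Mpixels/s single-texture fillrate

def fillrate_efficiency(measured_mpixels: float) -> float:
    """Percentage of the theoretical fillrate actually achieved in the benchmark."""
    return 100.0 * measured_mpixels / THEORETICAL_MPIXELS

# A card reporting 400 Mpixels/s would be running at 50% of its theoretical peak.
print(fillrate_efficiency(400))
```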
While the GeForce3 doesn't boast the same incredible efficiency as the Kyro II, whose tile-based rendering algorithm gives it close to 100% of its peak theoretical fillrate, the GeForce3 does offer an impressive 93% improvement over the GeForce2 GTS in terms of real-world fillrate. It is this advantage that allows the GeForce3 to outperform the GeForce2 Ultra at very high resolutions, simply because it is more efficient at managing the memory bandwidth it has.
When comparing multitexture fillrates, the same picture is presented for the NVIDIA cards, since the performance hit taken when moving to a situation where two textures are being used is negligible. Again, we see that the GeForce3's architecture is quite efficient indeed.
Now that we've proven that the GeForce3 is more efficient, let's see if it translates into any real world performance gains under Serious Sam.
OpenGL Performance - Serious Sam Test 2
At 640 x 480, the performance difference is virtually nonexistent. Remember, at such a low resolution, the only real limitations are your CPU, platform and video drivers.
In our Kyro II Review, the $149 Hercules 3D Prophet 4500 embarrassed the GeForce2 Ultra by outperforming it at 1024 x 768 x 32. While the GeForce3 costs no less than 3.5x as much as the Kyro II based 3D Prophet 4500, it is able to beat that card by a few fps.
In comparison to the Ultra, the GeForce3 is 11% faster. Not bad, once again, considering that it doesn't have any more memory bandwidth than the Ultra.
As the resolution increases, the GeForce3's efficiencies are even more pronounced as it is able to offer 30% higher performance than the GeForce2 Ultra.
FSAA/HRAA Performance - Serious Sam Test 2
Enabling the GeForce3's Quincunx AA at 640 x 480 doesn't require enough memory bandwidth to exploit the GeForce3's architecture, and thus the performance with Quincunx AA enabled is lower than that of a GeForce2 Ultra with 2x2 FSAA turned on.
At 1024 x 768 the picture changes dramatically: while most of the cards take a few steps back, the GeForce3 jumps to the head of the pack thanks to not only very efficient management of memory bandwidth but also a very efficient AA algorithm.
DX7 Performance - Unreal Tournament
Unreal Tournament is far from the best game to showcase the GeForce3's capabilities. As you can tell, at 640 x 480 there are driver and other issues keeping it even below the levels of its GeForce2 siblings.
Unreal Tournament has never particularly stressed any of the video cards we've reviewed, mainly because the engine has limitations that prevent a great deal of variation in performance among comparable cards. Once again, the GeForce3's power isn't being used for much here; you don't even need an Ultra for high-performance Unreal Tournament.
Even at 1600 x 1200, the GeForce3 isn't necessary and continues to come out below the GeForce2 Ultra. This is the type of situation NVIDIA wants to see the least: a present-day game that not only doesn't stress the GeForce3 but also runs just fine on a number of cheaper video cards.
DX8 Performance - Aquanox
The fact of the matter is that in current games, the GeForce3 just isn't worth the $500 price tag. The technology behind it is wonderful and what will really allow it to shine will be future titles that take advantage of its programmable nature. NVIDIA realizes this and managed to get a benchmark made based on the upcoming DX8 title Aquanox.
As you would guess, the benchmark completely humiliates everything aside from the GeForce3 because it is a game highly optimized for DX8 graphics cards. Let's take a look at the game performance first.
Interestingly enough, the ATI Radeon would not complete any of the Aquanox tests. The card would simply drop back to the desktop before starting the benchmark.
While the GeForce3 is clearly the leader in all of the benchmarks, it is interesting to note that its frame rate never climbs above 45 fps, which is obtained at 640 x 480 x 32. It is also interesting to note that the Kyro II does extremely poorly in all of the benchmarks. If this is an indication of how the Kyro II will perform in future DX8 titles, it could be a major turnoff for those who were planning on keeping the $149 card for more than 6 - 9 months.
The most interesting part of the Aquanox benchmark is that it outputs the number of polygons displayed as well as the amount of time it took to display them, essentially giving us a way to measure the real-world throughput of the GeForce3's programmable T&L. You'll find the results below:
It isn't a surprise that the GeForce3 comes out on top again; what is interesting is that the three GeForce2 cards and the Kyro II are all capable of the same real-world T&L throughput. Keep in mind that the Kyro II has no hardware T&L, so the fact that all four cards land at the same number suggests they are all relying on the host CPU for T&L processing.
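For reference, turning Aquanox's output into the throughput figure we compare is simple division; a minimal sketch is below, with purely illustrative input values rather than measured results:

```python
def polys_per_second(polygons_drawn: int, seconds_elapsed: float) -> float:
    """Raw T&L throughput in polygons per second from the benchmark's output."""
    return polygons_drawn / seconds_elapsed

# Illustrative only: 150 million polygons rendered over a 60-second run.
print(f"{polys_per_second(150_000_000, 60.0):,.0f} polygons/sec")
```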
If this is an indication of what can be expected from future titles, are GeForce2 owners left in the lurch with a hard-wired T&L unit that will yield no tangible performance improvements in future games? If developers all move to support programmable T&L like that on the GeForce3, which they most likely will, will the T&L units on the GeForce2 series of cards be rendered completely useless?
There is the possibility that future games will be able to take advantage of both by providing support for the GeForce2's hard-wired T&L but also offering the option of taking advantage of a programmable T&L unit. It's too early to say for sure, but it's something definitely worth thinking about.
Final Words
We are still very impressed with the GeForce3, and applaud NVIDIA for architecting a wonderful chip and the technology behind it. However, the fact remains that in spite of how great the technology is, the GeForce3 isn't something we can recommend at this point in time.
The only real performance advantages the GeForce3 currently offers come in three situations: 1) at very high resolutions, 2) with AA enabled, or 3) in DX8-specific benchmarks. You should honestly not concern yourself with the last of these, simply because you buy a video card to play games, not to run 3DMark, although there will be quite a bit of comparing of GeForce3 3DMark scores regardless of what we think.
The AA quality and performance of the GeForce3 is quite attractive, but at a $500 asking price it quickly loses its appeal as does the charm of high frame rates at very high resolutions.
If you must purchase a card today and aren't going to replace it for the next two years, then the GeForce3 is not only your best bet, it's your only choice. But if you can wait, we strongly suggest doing just that. In 3 - 4 months ATI will have their answer to the GeForce3, and 3 months after that, NVIDIA will have the Fall refresh of the GeForce3 running on a smaller process at a higher clock speed, offering more performance and features at a lower cost.
Between now and the release of the GeForce3's successor, it is doubtful that there will be many games that absolutely require the programmable pixel and vertex shaders of the GeForce3. If you've got $500 to burn, the GeForce3 is a luxury item right now, but in 6 - 12 months you'd be foolish to get caught without a programmable GPU in your case. The best move here is to wait until games need it, and then pick up the best card you can at that point.
Until then, there are a number of cheaper alternatives to tide you over.