NVIDIA 680i: The Best Core 2 Chipset?
by Gary Key & Wesley Fink on November 8, 2006 4:45 AM EST - Posted in CPUs
LinkBoost
One of the features unique to the nForce 590SLI and 680i SLI MCP is a system called LinkBoost. If a GeForce 7900 GTX or GeForce 8800 is detected on either MCP then LinkBoost will automatically increase the PCI Express and MCP HyperTransport (HT) bus speeds by 25%. This increases the bandwidth available to each PCI Express and HT bus link from 8GB/s to 10GB/s.
Since this technology increases the clock speed of the PCI Express bus by 25% to the x16 PCI Express graphics slots, NVIDIA requires certification of the video card for this program to work automatically. In this case, the 7900GTX and 8800 series are the only compatible cards offered, although you can manually set the bus speeds and achieve the same results depending upon your components. We feel this feature is worthwhile for those users who do not want to tune their BIOS and go through extensive test routines to find the best possible combination of settings.
In essence, NVIDIA is guaranteeing their chipset's PCI Express and HT interconnect links are qualified to perform up to 125% of their default speeds without issue. While LinkBoost is an interesting idea, the 25% increase in PCI Express x16 slots and HT bus speeds yielded virtually the same performance as our system without LinkBoost enabled in most cases.
Its actual implementation did not change our test scores in single video card testing, but it did provide a 1-2% difference in SLI testing at resolutions under 1600x1200 in several game titles. The gains are minimal at best because the boost is applied in areas that have little impact on overall system performance: the link to the CPU/memory subsystem is left at stock speed, which negates most of the potential benefit of this technology.
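The bandwidth figures NVIDIA quotes fall straight out of the clock increase; a minimal sketch of the arithmetic (the constants are the figures quoted above, not NVIDIA code):

```python
# LinkBoost bandwidth math: a 25% clock increase on an 8GB/s link
# yields the 10GB/s per-link figure NVIDIA advertises.

STOCK_BANDWIDTH_GBPS = 8.0   # per-link PCIe/HT bandwidth at default clocks
LINKBOOST_FACTOR = 1.25      # 25% automatic clock increase

def boosted_bandwidth(stock_gbps: float, factor: float = LINKBOOST_FACTOR) -> float:
    """Return the effective link bandwidth after the clock increase."""
    return stock_gbps * factor

if __name__ == "__main__":
    print(boosted_bandwidth(STOCK_BANDWIDTH_GBPS))  # 10.0
```

Note that this only scales the chipset-side links; as observed in testing, the CPU/memory link stays at stock speed, so the extra link bandwidth rarely translates into frame rate.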
FirstPacket
As part of the overhaul of the networking features first introduced in the NVIDIA nForce 590SLI and now the 680i SLI series, FirstPacket is a packet prioritization technology that allows latency-sensitive applications and games to effectively share the upstream bandwidth of a broadband connection. Essentially, this technology allows the user to assign a higher transmit queue priority to network packets from latency-sensitive applications and games; it affects outbound traffic only.
FirstPacket is embedded in the hardware and offers driver support that is specifically designed to reduce latency for networked games and other latency-sensitive traffic like Voice over IP (VoIP). When network traffic saturates a connection, latency increases, which in turn can result in dropped packets that create jitter and delay in VoIP connections, or in higher ping times to the game server that produce stutters and degraded gameplay.
In the typical PC configuration, the operating system, network hardware, and driver software are unaware of latency issues and therefore unable to reduce them. The standard interfaces that applications use to send and receive data treat all traffic identically in a typical system. This design allows latency-tolerant, large-packet applications like FTP clients or web browsers to fill the outbound pipeline without regard to the needs of small-packet, latency-sensitive applications like games or VoIP.
FirstPacket operates by creating an additional transmit queue in the network driver. This queue provides expedited packet transmission for applications the user designates as latency-sensitive. Giving the designated applications preferential access to the upstream bandwidth usually results in improved performance and lower ping times. FirstPacket setup and configuration is available through a revised Windows-based driver control panel that is very easy to use.
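The two-queue behavior described above can be sketched in a few lines. This is an illustrative model of the scheduling idea, not NVIDIA's driver code; the class and packet names are hypothetical:

```python
# Sketch of FirstPacket's expedited transmit queue: outbound packets from
# user-designated latency-sensitive applications always drain before bulk
# traffic, regardless of arrival order.
from collections import deque

class TwoQueueTransmit:
    def __init__(self):
        self.expedited = deque()  # latency-sensitive apps (games, VoIP)
        self.bulk = deque()       # everything else (FTP, web, BitTorrent)

    def enqueue(self, packet, latency_sensitive=False):
        (self.expedited if latency_sensitive else self.bulk).append(packet)

    def dequeue(self):
        """Transmit expedited packets ahead of any queued bulk traffic."""
        if self.expedited:
            return self.expedited.popleft()
        if self.bulk:
            return self.bulk.popleft()
        return None

tx = TwoQueueTransmit()
tx.enqueue("ftp-chunk")
tx.enqueue("voip-frame", latency_sensitive=True)
print(tx.dequeue())  # voip-frame: sent first despite arriving second
```

This strict-priority design also explains the measured tradeoff: bulk uploads (here, the `bulk` queue) wait behind game traffic, which is why the test file's upload time increased while ping times improved.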
In our LAN testing, we witnessed ping time improvements of 22% to 36% while streaming video from our media server and playing Serious Sam II across three machines on our LAN. We noticed ping time improvements of 14% to 33% while uploading files via BitTorrent and playing Battlefield 2 on various servers.
The drawback at this time is that only outbound packets are prioritized, so if you spend more time downloading than uploading, FirstPacket will have little impact on your computing experience. Worth mentioning is that nearly all broadband connections have far more downstream bandwidth than upstream bandwidth, so focusing on prioritizing outbound traffic makes sense. Also, the upload time for our test file increased by 41% with FirstPacket turned on, but the overall gaming experience was significantly better. In NVIDIA's defense, they cannot control the behavior or quality of service of other networked clients, so FirstPacket addresses what NVIDIA can control: namely, uploading.
60 Comments
View All Comments
StriderGT - Thursday, November 9, 2006 - link
"Also, some Intel chipset fans believe that Intel chipsets are best for a rock solid system (for the record, I'm not one of these people), I guess we'll see if nVidia will change their minds."
No it won't. It's the same group of people that suggested the P4 was a more "stable" platform than the Athlon 64. It's simply a psychological state of denial: when someone has paid more for less, they need an excuse, namely "stability".
skrewler2 - Wednesday, November 8, 2006 - link
I agree with you on your two points. I also wish PM tech was standardized. I just went through a headache researching what was compatible with what chipset, etc.; imo it should just all work. From what I understand, the SATA II standard isn't even really a standard at all. Anyways, I agree that NV should start implementing Port Multiplier support!
yyrkoon - Wednesday, November 8, 2006 - link
Yeah, I recently bought a budget ASRock board that SUPPOSEDLY supported SATA II connections. As per the standard, SATA II is supposed to support native command queuing (NCQ) and up to 3Gbit/s throughput on each connector. Anyhow, this motherboard does not support NCQ... which is the majority of the reason to own a SATA II drive/interface; the rest is basically marketing hype.
Kougar - Wednesday, November 8, 2006 - link
Wanted to point out that all the tables on the Memory Performance page are mislabeled as "980i". Also, some power consumption figures would be good, even if not critical. With a chipset cooler that huge, it's almost a giveaway that it's hiding a nice and crispy chipset! ;) Thanks for the article!
Wesley Fink - Wednesday, November 8, 2006 - link
The perils of table cut-and-paste are now corrected. Please see the comments above about power consumption. That information will be added to the review since several have requested it.
Avalon - Wednesday, November 8, 2006 - link
I was much more interested in the 650i Ultra boards, specifically how well they overclock compared to the 680i you benched. Additionally, do you think an active fan cooling the northbridge is necessary when this chipset is highly overclocked, or does it run fairly cool?
Gary Key - Wednesday, November 8, 2006 - link
We will not have 650i boards for review until early December. The fan is required for upper-end 24/7 overclocking in my opinion; otherwise the board ran fine without it.
yzkbug - Wednesday, November 8, 2006 - link
Tables on page 10: NVIDIA 980i -> NVIDIA 680i
ShoNuff - Wednesday, November 8, 2006 - link
I'm impressed with the review. It was very thorough. In particular, I was amazed at your overclock with the X6800. I am looking forward to getting one of these boards in my hands.
It appears that NVIDIA has done it this time with respect to the on board memory controller. It is hard to imagine things getting better when the OEM's add their nuances to this board. If results are this good based upon the reference design, it is almost scary thinking about how good a board DFI would/could produce.
Oh…and btw…I like the new location of the front panel connectors. The new location will make it easier to "stealth" the wires.
hubajube - Wednesday, November 8, 2006 - link
These are ass-kicking OC's!!! Can't wait to own this board.