nForce 500: nForce4 on Steroids?
by Gary Key & Wesley Fink on May 24, 2006 8:00 AM EST - Posted in CPUs
New Feature: LinkBoost
One of the feature sets unique to the nForce 590 SLI MCP - and highly touted by NVIDIA - is LinkBoost. If a GeForce 7900 GTX is detected in an nForce5 system, LinkBoost automatically increases the PCI Express (PCIe) and MCP HyperTransport (HT) bus speeds by 25%, raising the bandwidth available on each PCIe and HT link from 8GB/s to 10GB/s. Because this technology raises the clock speed of the PCI Express bus by 25%, NVIDIA requires certification of the video card before the feature engages automatically. At present the 7900 GTX is the only qualified card, although you can set the bus speeds manually and achieve the same or better overclock depending upon your components.

In essence, NVIDIA is guaranteeing that the chipset's PCIe and HT interconnect links are qualified to run at up to 125% of their default speeds without issue. While LinkBoost is an interesting idea, its actual implementation did not change our test scores outside the normal margin of error: the 25% increase in PCIe and HT speeds yielded virtually the same performance as the system with LinkBoost disabled. The reason is that the extra bandwidth is being applied in areas that have minimal impact on system performance.
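The 25% and 10GB/s figures fall straight out of the link clocks. As a quick sanity check (our arithmetic, not an NVIDIA tool), a first-generation PCIe lane carries 250MB/s in each direction, and a 16-bit, 1GHz double-data-rate HT link moves 4GB/s each way:

```python
# Back-of-the-envelope LinkBoost math (our illustration, not NVIDIA's).
PCIE_LANE_GBPS = 0.25      # PCIe 1.x: 250MB/s per lane, per direction
pcie_x16 = 16 * PCIE_LANE_GBPS * 2   # x16 slot, both directions = 8.0 GB/s

HT_WIDTH_BYTES = 2         # 16-bit HyperTransport link
HT_CLOCK_GHZ = 1.0         # 1GHz base clock, double data rate
ht_link = HT_WIDTH_BYTES * HT_CLOCK_GHZ * 2 * 2   # DDR, both directions = 8.0 GB/s

LINKBOOST = 1.25           # 25% clock increase
print(pcie_x16 * LINKBOOST, ht_link * LINKBOOST)  # 10.0 10.0 (GB/s)
```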
LinkBoost is part of a package of easy-to-use automatic overclocking features on the nForce5 designed for the OC newbie. If you fit in that category and the 25% LinkBoost speed increase excites you, understand clearly that it yielded little to no real performance gain in our testing. The true potential of this technology would only be realized if the HT link between the chipset and the AM2 CPU/memory subsystem were also dynamically increased from its base 8GB/s, but NVIDIA does not control AMD CPU certification and thus leaves the CPU link at stock speed.
The end result is that the Northbridge to CPU HyperTransport link remains at 8GB/s, and only the link between the MCP and SPP, along with the PEG slots, gets the extra bandwidth. Feeding a 10GB/s link from an 8GB/s one means you are still effectively limited to 8GB/s. Increasing the Northbridge to CPU bandwidth might improve performance slightly, but HyperTransport is rarely the bottleneck in current systems, as you will see in our performance results.
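To see why, it helps to model the path from CPU to graphics card as a chain of links whose effective throughput is capped by the slowest hop. This is a simplification of our own, not anything from NVIDIA's documentation:

```python
# Simplified model: end-to-end throughput is bounded by the slowest link.
def effective_bandwidth(*links_gbps: float) -> float:
    return min(links_gbps)

cpu_to_mcp = 8.0    # Northbridge-to-CPU HT link stays at stock speed
mcp_to_spp = 10.0   # LinkBoost raises the chipset interconnect by 25%
spp_to_peg = 10.0   # ...and the PCIe graphics (PEG) slots

print(effective_bandwidth(cpu_to_mcp, mcp_to_spp, spp_to_peg))  # still 8.0
```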
New Feature: FirstPacket
As part of the overhaul of the networking features in the NVIDIA nForce 500 series, FirstPacket is a packet prioritization technology that lets latency-sensitive applications and games share the upstream bandwidth of a broadband connection more effectively. Essentially, the user assigns a higher transmit queue priority to the network packets of latency-sensitive applications and games; the prioritization applies to outbound traffic only.

FirstPacket is embedded in the hardware, with driver support specifically designed to reduce latency for networked games and other latency-sensitive traffic such as Voice over IP (VoIP). When network traffic saturates a connection, latency increases, which in turn can result in dropped packets that create jitter and delay in VoIP calls, or higher ping times to the game server with the accompanying stutters and degraded game play.
In the typical PC configuration, the operating system, network hardware, and driver software are unaware of latency issues and therefore unable to reduce them. The standard interfaces applications use to send and receive data all look identical to the OS, so latency-tolerant, large-packet applications like FTP clients or web browsers can fill the outbound pipe without regard to the needs of small-packet, latency-sensitive applications like games or VoIP.
FirstPacket operates by creating an additional transmit queue in the network driver. This queue provides expedited transmission for packets from applications the user designates as latency-sensitive; giving those applications preferential access to upstream bandwidth usually results in improved performance and lower ping times. Setup and configuration are handled through a new Windows-based driver control panel that is very simple to use.
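Conceptually, the result behaves like a strict-priority transmit scheduler. The sketch below is a toy model of that idea under our own assumptions - the two-queue structure and application names are illustrative, not NVIDIA's driver internals:

```python
from collections import deque

class PriorityTransmitQueue:
    """Toy model of a FirstPacket-style scheduler: two transmit queues,
    with user-designated latency-sensitive traffic always sent first."""

    def __init__(self, priority_apps):
        self.priority_apps = set(priority_apps)  # apps the user flags
        self.expedited = deque()                 # latency-sensitive packets
        self.best_effort = deque()               # bulk traffic (FTP, HTTP, ...)

    def enqueue(self, app, packet):
        if app in self.priority_apps:
            self.expedited.append(packet)
        else:
            self.best_effort.append(packet)

    def dequeue(self):
        # Strict priority: drain the expedited queue before any bulk packet.
        if self.expedited:
            return self.expedited.popleft()
        if self.best_effort:
            return self.best_effort.popleft()
        return None

q = PriorityTransmitQueue({"game.exe"})   # hypothetical application name
q.enqueue("ftp.exe", "bulk-1")
q.enqueue("game.exe", "ping-1")
print(q.dequeue())   # "ping-1" jumps ahead of "bulk-1"
```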
In our LAN testing, we saw ping improvements of 27% to 43% while streaming video from our media server and playing Serious Sam II across three machines on the network. We saw ping improvements of 15% to 23% while uploading files via BitTorrent and playing Battlefield 2 on various servers, with Skype VoIP conversations running during game play. The drawback at this time is that only outbound packets are prioritized, so if you spend more time downloading than uploading, FirstPacket will have little impact for you. In NVIDIA's defense, however, they cannot control the behavior or quality of service of other networked clients, so FirstPacket addresses what NVIDIA can control - namely uploads.
64 Comments
Olaf van der Spek - Wednesday, May 24, 2006 - link
<quote>These devices can be configured in RAID 0, 1, 0+1, and 5 arrays. There is no support for RAID 10.</quote>
That's probably because there's effectively no difference between 1+0 and 0+1 on a good controller.
Olaf van der Spek - Wednesday, May 24, 2006 - link
Doesn't this require support from the modem/router too?
The delay (usually) happens in the modem and not in the network card.
Zoomer - Saturday, May 27, 2006 - link
No, because you make the bottleneck your network card instead of the modem. :)
There will be a slight loss of throughput. Read some QoS articles. lartc.org is also a good resource. I bet it's the same principle. ;)
Trisped - Wednesday, May 24, 2006 - link
<quote>Multiple computers can to be connected simultaneously</quote>
http://www.anandtech.com/cpuchipsets/showdoc.aspx?...
take out the "to"
Gary Key - Wednesday, May 24, 2006 - link
Thanks, it is corrected.

Googer - Wednesday, May 24, 2006 - link
When benchmarking core logic, it should be a high priority to measure I/O performance, since that is the primary job of any AMD chipset. Where are the HDD, network, audio, and RAID benchmarks?
Gary Key - Wednesday, May 24, 2006 - link
I answered above, but we will have full benchmarks in the actual motherboard articles. Our efforts in the first three days were to prove out the platform and the features that were added or changed (still doing it; it feels weird to be up almost 72 hours). In answer to your question-
Foxconn Board
Network-
Throughput - 942 Mb/s
CPU utilization - 14.37% with the TCP/IP offload engine on, nearly 30% with it off.
HDD/RAID
No real difference compared to nF4, as we stated. The numbers are within 1% of each other. The interesting numbers will be in our ATI SB600 comparison.
Audio-
Dependent on the codec used on each motherboard; the Realtek ALC883 used in most of them has the same numbers as the nF4 boards. The only difference is the new 1.37 driver set we used. The comparison will be interesting, as Asus went back to ADI for HDA.
Pirks - Wednesday, May 24, 2006 - link
That's why AT is my favorite review site - 'cause you're a really crazy bunch :-) Just don't ruin yourselves completely, we need you!

Gary Key - Wednesday, May 24, 2006 - link
The ending should read NF4 Intel or ATI/ULi AMD boards. Where is that edit function? Hit enter too soon. :)

Gary Key - Wednesday, May 24, 2006 - link
They will be in our roundup comparison and ATI AM2 articles.