Intel Core 2 Extreme X6800 Preview from Taiwan
by Anand Lal Shimpi & Gary Key on June 6, 2006 7:35 PM EST - Posted in CPUs
Memory Latency and Bandwidth
Until now we haven't been able to look at the low-level characteristics of Intel's Core architecture, and although we didn't have enough time for a thorough run of low-level benchmarks, we were able to run ScienceMark 2.0 to get an idea of how the Core 2 Extreme stacks up against the FX-62 in terms of memory latency and bandwidth.
We had seen Conroe performance results showing that the new architecture can offer memory access latencies fairly competitive with AMD's, without the need for an on-die memory controller. Our ScienceMark 2.0 results confirm just that:
While AMD still offers lower memory latency, the Core 2 Extreme X6800 comes very close - especially considering that it has no on-die memory controller. Thanks to lower clock speeds than its Pentium D siblings and a faster FSB, memory access latency is reduced tremendously with Conroe. On a larger scale, through a very effective cache subsystem as well as memory disambiguation, Conroe offers significantly improved memory performance compared to its predecessors, and is competitive with the Athlon 64 X2/FX.
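The preview doesn't spell out ScienceMark's methodology, but latency tests of this kind are typically built around pointer chasing: a chain of dependent loads through a buffer too large for the caches, randomized so the hardware prefetchers can't hide the trip to main memory. Here is a minimal sketch in C, with buffer size and iteration count as illustrative assumptions rather than ScienceMark's actual parameters:

```c
/* Minimal pointer-chasing latency sketch -- in the spirit of, but not
 * identical to, ScienceMark's latency test. Buffer size and iteration
 * count are illustrative assumptions. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N     (32 * 1024 * 1024 / sizeof(void *)) /* 32 MB, dwarfs any L2 */
#define ITERS (64 * 1024 * 1024L)

int main(void)
{
    void **buf   = malloc(N * sizeof(void *));
    size_t *perm = malloc(N * sizeof(size_t));
    size_t i;

    /* Build a random permutation (Fisher-Yates) so the chain defeats
     * hardware prefetchers; rand() is good enough for a sketch. */
    for (i = 0; i < N; i++)
        perm[i] = i;
    srand(1);
    for (i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % (i + 1);
        size_t t = perm[i]; perm[i] = perm[j]; perm[j] = t;
    }

    /* Link the buffer into a single cycle following the permutation. */
    for (i = 0; i < N; i++)
        buf[perm[i]] = &buf[perm[(i + 1) % N]];

    /* Chase the chain: every load depends on the previous one, so
     * elapsed time / ITERS approximates a single memory access. */
    void **p = &buf[perm[0]];
    clock_t t0 = clock();
    for (long k = 0; k < ITERS; k++)
        p = (void **)*p;
    double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

    printf("%p, avg latency: %.1f ns\n", (void *)p, secs * 1e9 / ITERS);
    free(perm);
    free(buf);
    return 0;
}
```

Because every load depends on the previous one, the out-of-order machinery can't overlap them, and the per-iteration time converges on the true memory access latency.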
ScienceMark's memory bandwidth results offer a very telling story, showing us the bandwidth limitations of Intel's FSB architecture. While the FX-62's peak theoretical bandwidth is not achieved in the real world, you can see how AMD's Direct Connect Architecture offers higher limits for chip-to-chip communication.
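A sustained-bandwidth test is even simpler: stream as much data as possible through buffers far larger than the caches and count the bytes. The sketch below is a STREAM-style copy, not ScienceMark's actual kernel, and its sizes are assumptions:

```c
/* Minimal streaming-bandwidth sketch (STREAM-style copy), not
 * ScienceMark's actual kernel; sizes and repetitions are assumptions. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define N    (64 * 1024 * 1024) /* 64 MB per buffer, far beyond cache */
#define REPS 10

int main(void)
{
    char *src = malloc(N), *dst = malloc(N);

    memset(src, 1, N); /* touch the pages so they're actually mapped */
    memset(dst, 0, N);

    clock_t t0 = clock();
    for (int r = 0; r < REPS; r++)
        memcpy(dst, src, N);
    double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

    /* Each pass reads N bytes and writes N bytes: 2N of bus traffic. */
    printf("copy bandwidth: %.2f GB/s\n", 2.0 * N * REPS / 1e9 / secs);

    free(src);
    free(dst);
    return 0;
}
```

Numbers from a test like this land well below the theoretical peaks quoted on spec sheets, which is exactly the gap between the FX-62's rated bandwidth and what it achieves here.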
134 Comments
IntelUser2000 - Tuesday, June 6, 2006
So... when the Pentium 4 came out with the Willamette core, it was better since it was new, right?? I am sure that before the Conroe benchmarks, people would have thought a 20% performance gain from the CPU alone was crazy - like anyone claiming that would be from another planet.
bob661 - Tuesday, June 6, 2006
Hmmm... no mention of "better" here. Maybe a wormhole sucked that word right off of my page and transported it to the year STFU.
Maybe 10 years ago, but nowadays everything but Celerons is fast. Where did I say better?
saratoga - Tuesday, June 6, 2006
A 20% gain from the CPU is nothing. You get that every couple of months, usually. Anyway, you're missing the point. AM2 was not meant to improve performance; it was meant to cut costs. DDR1 has passed the inflection point relative to DDR2, and AMD needs to get off of it before it sinks. AM2 allows this to happen. Essentially, it maintains the status quo.
It'll be the K8L that saves AMD (well, assuming it ever comes out).
IntelUser2000 - Tuesday, June 6, 2006
You're definitely not understanding me.
20% before = architecture + clock speed.
Core's 20-30% is from architecture alone; clock speed will add on top of that. And that's over the FASTEST CPU out there now. Core is 50-70% faster than the higher-clocked Pentium D. Nothing has had that much of an improvement.
JarredWalton - Tuesday, June 6, 2006
Realistically, no... FSB performance has some impact overall, but generally not more than 5-10%, especially once you get past a certain point. FSB-533 to FSB-800 showed reasonable increases in quite a few applications. 800 to 1066 didn't help all that much, and I would wager 1333 is not truly necessary. Of course, Intel needs the higher FSB speeds due to the CPU-to-NB-to-RAM pathway, whereas AMD connects to the RAM directly and has a separate connection to the NB.
The only real question now is: when will K8L arrive, and how much will it help? I can't answer the latter, but the former looks to be late 2006/early 2007 AFAIK.
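Some quick arithmetic illustrates the point about the FSB steps. The bus is 8 bytes wide, so each speed grade's theoretical ceiling is just its transfer rate times eight; a trivial sketch:

```c
/* Back-of-the-envelope FSB ceilings: the bus is 8 bytes wide, so peak
 * bandwidth is simply transfers per second times eight. */
#include <stdio.h>

int main(void)
{
    const int fsb_mts[] = { 533, 800, 1066, 1333 }; /* FSB speeds in MT/s */

    for (int i = 0; i < 4; i++)
        printf("FSB-%d: %.1f GB/s theoretical peak\n",
               fsb_mts[i], fsb_mts[i] * 8 / 1000.0);
    return 0;
}
```

That works out to roughly 4.3, 6.4, 8.5, and 10.7 GB/s - all below the 12.8 GB/s peak of the dual-channel DDR2-800 that an AM2 chip talks to directly, which is why the FSB rather than the memory is the cap on the Intel side.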
classy - Tuesday, June 6, 2006
The scores aren't quite telling the whole story. I bet with AA/AF the FX-62's bandwidth starts to flex its muscle some. The FX is clocked a little slower as well, not to mention this is the very best from Intel. While I'll be buying a cheaper one :), truth is Core 2 is damn nice but far from distancing itself from AMD. It looks like with 65nm alone AMD may be able to challenge for the crown. Something many thought earlier wouldn't be possible.
Calin - Wednesday, June 7, 2006
AA and AF are entirely and completely video card dependent. As you increase graphics quality, the processor waits more and more for the video card, and benchmarking how fast a processor waits isn't so interesting.
Also, the FX-62 is the very best from AMD, and even at 65nm AMD would need to increase the clock speed by 25% to equal the top Conroe.
IntelUser2000 - Tuesday, June 6, 2006
Same BS over and over and over and over again for the doubters/skeptics. You guys will never learn.
This is a CPU test. If you want to see graphics benchmarks, don't get the highest end CPU, get the Semprons and the Celeron D's with X1900XT Crossfire.
Games at lower resolutions are used exactly to show that CPU performance is CONSISTENT over a variety of applications.
Plus, the people who really care about gaming and play competitively (even somewhat) will see that the CPU matters a lot for performance, since they play at 1024x768 precisely so they don't notice lag spikes. I have seen competitive gamers wonder why they have lag spikes, and they are pissed off about it even when they are getting 80+ fps in benchmarks.
classy - Tuesday, June 6, 2006
Hahahaha, I won't get into what my online name was when I was gaming; I've been in slight retirement. Let's just say I was one of the best damn railers on the net, and it is clear you have no f'in clue what you're talking about. Just so you know, most configs limit fps, and most mainstream CPUs can easily supply plenty of power, so your babble about CPUs is a joke. Lag is almost always related to RAM or the video card, except at the highest resolutions. The point is that in real world use you more than likely won't see a difference at all. And if you find a gamer today more concerned about the CPU than his graphics, it is clear he should step away from the keyboard and mouse.
IntelUser2000 - Tuesday, June 6, 2006
Yes, I do know what I'm talking about; it's you who doesn't. None of the real-world people I have seen use low-latency memory - they all use generic Samsung memory. There is a pattern I've noticed: the ones who know about hardware aren't really good gamers, and the hardcore gamers who play well enough to win prizes don't know much about hardware. I guess they don't have time for both.
The CPU will matter for a competitive gamer simply because they run at low resolutions (comparatively speaking - not 640x480 low) to avoid lag. Lag in competition = bad, so they do anything to avoid it. As I said, my friend has a Dell M170 laptop: Pentium M 2.0GHz/533MHz FSB/2MB L2, 1GB DDR2, a 5400RPM drive, and a GeForce Go 7800 GTX 256MB. He runs at resolutions so low that some newer games don't look any better than older ones, and he turns off effects like bloom since it interferes with his play. And he does play GOOD, in every first-person shooter.
They think it's the graphics card that matters, but if they notice lag on a laptop that good, it won't get much better with an X1900 or whatever is top end now; those top-end GPUs aren't much faster at 1024x768 anyway, because it becomes a CPU bottleneck.