Intel Core 2 Extreme X6800 Preview from Taiwan
by Anand Lal Shimpi & Gary Key on June 6, 2006 7:35 PM EST - Posted in CPUs
Business Application Performance
We start off with Business Winstone 2004, a benchmark that has since been discontinued by VeriTest but one we continue to use because of the relevance of its results. Business Winstone doesn't generally vary all that much with CPU speed as the benchmark itself is quite I/O heavy. As you can see below, this doesn't stop the Core 2 Extreme X6800 from maintaining a healthy lead over the FX-62:
With a 17.5% performance advantage, the Core 2 Extreme starts off by performing very well in an area where the Pentium 4 never could: general business applications. The Pentium D not only offered mediocre performance here, but also produced a lot of heat while doing it; Intel's Core architecture is a very different beast, and the results here show it.
We turned to SYSMark 2004's Office Productivity suite for another look at office application performance, and the results were no less impressive:
Overall Office Productivity performance with the Core 2 Extreme X6800 is just over 26% faster than the identically configured FX-62. The breakdown of the OP suite is below; as you can see, some individual tests are closer than others:
The Communication tests in particular are very close, most likely because of the I/O-bound nature of those benchmarks. The Communication suite was always great at showcasing hard disk performance, so it's no surprise that it barely shows any difference between the two CPUs.
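As a quick aside, the "X% faster" figures in this review are simply the ratio of the two overall scores. A minimal sketch of the math, in Python (the scores below are hypothetical placeholders, not our measured results):

    # How an "X% faster" figure is derived from two benchmark scores.
    # Both scores here are hypothetical placeholders, not measured results.
    x6800_score = 252.0  # assumed overall score for the Core 2 Extreme X6800
    fx62_score = 200.0   # assumed overall score for the Athlon 64 FX-62

    advantage = (x6800_score / fx62_score - 1) * 100
    print(f"Core 2 Extreme advantage: {advantage:.1f}%")  # -> 26.0%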
134 Comments
IntelUser2000 - Tuesday, June 6, 2006 - link
So... when the Pentium 4 came out with the Willamette core, it was better since it was new, right?? I am sure that before the Conroe benchmarks, people would have thought a 20% performance gain from the CPU alone was crazy, like anyone claiming that was from another planet.
bob661 - Tuesday, June 6, 2006 - link
Hmmm... no mention of "better" here. Maybe a wormhole sucked that word right off of my page and transported it to the year STFU.
Maybe 10 years ago, but nowadays everything but Celerons is fast. Where did I say "better"?
saratoga - Tuesday, June 6, 2006 - link
20% gain from the CPU is nothing. You get that every couple of months, usually. Anyway, you're missing the point. AM2 was not meant to improve performance, it was meant to cut costs. DDR1 has passed the inflection point relative to DDR2, and AMD needs to get off of it before it sinks. AM2 allows this to happen. Essentially, it maintains the status quo.
It'll be the K8L that saves AMD (well, assuming it ever comes out).
IntelUser2000 - Tuesday, June 6, 2006 - link
You're definitely not understanding me.
20% before = architecture + clock speed
Core's 20-30% is from architecture alone; clock speed will add on top of that. And that's over the FASTEST CPU out there now. Core is 50-70% faster than the higher-clocked Pentium D. Nothing has had that much of an improvement.
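To put numbers on how those two factors compound, here's a rough sketch in Python. The 25% architecture gain and 20% clock gain below are illustrative assumptions, not measurements:

    # Architectural (IPC) gains and clock-speed gains multiply, they don't add.
    # Both figures below are illustrative assumptions, not measured results.
    arch_gain = 1.25   # assumed speedup from the new architecture alone
    clock_gain = 1.20  # assumed speedup from a higher clock

    combined = arch_gain * clock_gain
    print(f"Combined speedup: {(combined - 1) * 100:.0f}%")  # -> 50%

Under those assumptions the combined gain lands at 50%, which is how a 20-30% architectural advantage plus clock headroom can end up well past what either delivers alone.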
JarredWalton - Tuesday, June 6, 2006 - link
Realistically, no... FSB performance has some impact overall, but generally not more than 5-10%, especially once you get past a certain point. FSB-533 to FSB-800 showed reasonable increases in quite a few applications. 800 to 1066 didn't help all that much, and I would wager 1333 is not truly necessary. Of course, Intel needs the higher FSB speeds due to the CPU-to-NB-to-RAM pathway, whereas AMD connects to the RAM directly and has a separate connection to the NB.

The only real question now is: when will K8L arrive, and how much will it help? I can't answer the latter, but the former looks to be late 2006/early 2007 AFAIK.
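For context, the theoretical peak bandwidth at each of those FSB grades is easy to compute; a quick sketch, assuming Intel's standard 64-bit-wide, quad-pumped front-side bus:

    # Theoretical peak bandwidth of Intel's 64-bit (8-byte wide) FSB.
    # Peak bandwidth = transfer rate (MT/s) * bytes per transfer.
    BUS_WIDTH_BYTES = 8  # 64-bit data bus

    for fsb_mts in (533, 800, 1066, 1333):
        gb_per_s = fsb_mts * BUS_WIDTH_BYTES / 1000
        print(f"FSB-{fsb_mts}: {gb_per_s:.1f} GB/s peak")
    # FSB-533: 4.3, FSB-800: 6.4, FSB-1066: 8.5, FSB-1333: 10.7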
classy - Tuesday, June 6, 2006 - link
The scores aren't quite telling the whole story. I bet with AA/AF the FX-62's bandwidth starts to flex its muscle some. The FX is clocked a little slower as well, not to mention this is the very best from Intel. While I'll be buying a cheaper one :), the truth is Core 2 is damn nice but far from distancing itself from AMD. It looks like with 65nm alone AMD may be able to challenge for the crown. Something many thought earlier wouldn't be possible.

Calin - Wednesday, June 7, 2006 - link
AA and AF are entirely and completely video card dependent. As you increase graphics quality, the processor will wait more and more for the video card, and benchmarking how fast a processor waits isn't so interesting. Also, the FX-62 is the very best from AMD, and even at 65nm AMD would need to increase the clock speed by 25% to equal the top Conroe.
IntelUser2000 - Tuesday, June 6, 2006 - link
Same BS over and over and over and over again for the doubters/skeptics. You guys will never learn.
This is a CPU test. If you want to see graphics benchmarks, don't get the highest end CPU, get the Semprons and the Celeron D's with X1900XT Crossfire.
Games at low resolutions are used exactly to show that CPU performance is CONSISTENT over a variety of applications.
Plus, the people who really care about gaming and play competitively (even somewhat) will see that the CPU matters a lot for performance, since they play at 1024x768 so they don't notice lag spikes. I have seen competitive gamers wonder why they have lag spikes, and they are pissed off about it when they are getting 80+ fps in benchmarks.
classy - Tuesday, June 6, 2006 - link
Hahahaha, I won't get into what my online name was when I was gaming, I've been in slight retirement. Let's just say I was one of the best damn railers on the net, and it is clear you have no f'in clue what you're talking about. Just so you know, most configs limit fps and most mainstream CPUs can easily supply plenty of power, so your babble about CPUs is a joke. Lag is almost always related to the RAM or the video card, except at the highest resolutions. The point is that more than likely, in real-world use you probably won't see a difference at all. And if you find a gamer today more concerned about the CPU than his graphics, it is clear he should step away from the keyboard and mouse.

IntelUser2000 - Tuesday, June 6, 2006 - link
Yes, I do know what I am talking about. It's you who doesn't. None of the real-world gamers I have seen use low-latency memory; they all use generic Samsung memory. There is a pattern I've noticed: the ones who know about hardware aren't really good gamers, and the ones who are hardcore gamers good enough to win prizes don't know much about hardware. I guess they don't have time for both.
The CPU will matter for a competitive gamer simply because they run at low resolutions (low comparatively speaking, not 640x480-low) to avoid lag. Lag in competition = bad, so they do anything to avoid it. As I said, my friend has a Dell M170 laptop with a Pentium M 2.0GHz/533MHz FSB/2MB L2, 1GB DDR2, a 5400RPM drive, and a GeForce Go 7800 GTX 256MB. He runs at resolutions so low that some newer games don't look better than older ones. He doesn't turn on graphics effects like bloom since it interferes with his play. And he does play GOOD, in every first-person shooter.
They think it's the graphics card that matters, but if they notice lag on a laptop that good, it won't get much better with an X1900 or whatever the top end is now; those top-end GPUs aren't much faster at 1024x768 anyway, because it becomes a CPU bottleneck.
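A toy model makes that bottleneck concrete: treat each frame's time as roughly the slower of the CPU's and the GPU's per-frame work. The per-frame times below are illustrative assumptions, not measurements:

    # Toy frame-time model: the frame rate is limited by whichever of the
    # CPU or GPU takes longer per frame. All numbers are illustrative.
    def fps(cpu_ms: float, gpu_ms: float) -> float:
        return 1000 / max(cpu_ms, gpu_ms)

    cpu_ms = 10.0  # assumed CPU work per frame (game logic, driver, physics)

    for res, gpu_ms in [("1600x1200", 18.0), ("1280x1024", 12.0), ("1024x768", 7.0)]:
        print(f"{res}: {fps(cpu_ms, gpu_ms):.0f} fps")
    # 1600x1200: 56 fps (GPU-bound), 1280x1024: 83 fps, 1024x768: 100 fps (CPU-bound)

Once the resolution drops far enough that the GPU finishes its work before the CPU does, a faster graphics card stops adding frames; only a faster CPU would.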