AMD in Consumer Electronics
The potential of Fusion extends far beyond the PC space and into the embedded space. If you can imagine a very low power, low profile Fusion CPU, you can easily see it being used in not only PCs but consumer electronics devices as well. The benefit is that your CE devices could run the same applications as your PC devices, truly encouraging and enabling convergence and cohabitation between CE and PC devices.
Despite both sides attempting to point out how they are different, AMD and Intel actually have very similar views on where the microprocessor industry is headed. Both companies have stated to us that they have no desire to engage in the "core wars", meaning we won't see a race to simply keep adding cores. The explanation for why not is the same one that applied to the GHz race: if you scale exclusively in one direction (clock speed or number of cores), you will eventually run into the same power wall. The true path to performance is a combination of increasing instruction level parallelism, clock speed, and number of cores, in line with the demands of the software you're trying to run.
AMD has been a bit more forthcoming than Intel in this respect, indicating that it doesn't believe there's a clear sweet spot, at least for desktop CPUs. AMD doesn't believe there's enough data to conclude whether 3, 4, 6 or 8 cores is the ideal number for desktop processors. From our testing with Intel's V8 platform, an 8-core platform targeted at the high end desktop, it is extremely difficult to find high end desktop applications that can even benefit from 8 cores over 4. Our instincts tell us that for mainstream desktops, 3 - 4 general purpose x86 cores appear to be the near term target that makes sense. You could potentially lower the number of cores needed by pairing them with other specialized hardware (e.g. an H.264 encode/decode core).
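A rough way to see why extra cores stop paying off is Amdahl's law: speedup is capped by the serial fraction of a workload. The short Python sketch below is purely illustrative; the 20% and 40% serial fractions are assumptions chosen for the example, not measurements of any real desktop application.

```python
# Minimal Amdahl's law sketch: speedup = 1 / (serial + parallel / cores).
# The serial fractions below are assumed for illustration only; they are
# not measurements of any real desktop workload.
def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    parallel_fraction = 1.0 - serial_fraction
    return 1.0 / (serial_fraction + parallel_fraction / cores)

for serial in (0.2, 0.4):  # hypothetical 20% and 40% serial portions
    s4 = amdahl_speedup(serial, 4)
    s8 = amdahl_speedup(serial, 8)
    print(f"serial={serial:.0%}: 4 cores -> {s4:.2f}x, 8 cores -> {s8:.2f}x")
```

With a 40% serial portion, going from 4 to 8 cores only moves the speedup from roughly 1.8x to 2.1x, which is why doubling the core count is so hard to notice on typical desktop software.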
What's particularly interesting is that many of the goals Intel has for the future of its x86 processors are in line with what AMD has planned. For the past couple of IDFs Intel has been talking about bringing to market a < 0.5W x86 core that can be used for devices that sit somewhere in size and complexity between a cell phone and a UMPC (e.g. the iPhone). Intel has committed to delivering such a core in 2008, called Silverthorne, based around a new micro-architecture designed for these ultra low power environments.
AMD confirmed that it too envisions ultra low power x86 cores for use in consumer electronics devices, areas where ARM or other specialized cores are commonly used. AMD also recognizes that it can't address this market by simply reducing the clock speed of its current processors, and thus mentioned that it is working on a separate micro-architecture for these ultra low power markets. AMD didn't attach any timeframe or roadmap to its plans, but knowing what we know about Fusion's debut we'd expect a lower power version targeted at UMPC and CE markets to follow.
Why even think about bringing x86 cores to CE devices like digital TVs or smartphones? AMD offered one clear motivation: the software stack that will run on these devices is going to get more complex. Applications on TVs, cell phones and other CE devices will grow complex enough to require faster processors. Combine that with the fact that software developers don't want to target multiple processor architectures when they deliver software for these CE devices, and using x86 as the common platform between CE and PC software creates an environment where the same applications and content can be available across any device. The goal of PC/CE convergence is to give users access to any content, on any device, anywhere - if all the devices you're trying to access content and programs on happen to be x86, the process becomes much easier.
Why is a new core necessary? Although x86 can be applied to virtually any market segment, the usefulness of a particular core only extends across roughly an order of magnitude of power. For example, AMD's current desktop cores can easily be scaled up or down to hit TDPs in the 10W - 100W range, but they would not be good for hitting something in the sub-1W range. AMD can address the sub-1W market, but it will require a different core than the one it uses to address the rest of the market. This philosophy is akin to what Intel discovered with Centrino: in order to succeed in the mobile market, you need a mobile specific design. To succeed in the ultra mobile and handtop markets, you need an ultra mobile/handtop specific processor design as well. Both AMD and Intel realize this, and now both companies have publicly stated that they are doing something about it.
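To get a feel for why a desktop core can't simply be clocked down into the sub-1W range, consider the standard dynamic power relation (power scales roughly with capacitance x voltage^2 x frequency) plus a leakage floor that doesn't shrink with clock speed. The sketch below uses entirely hypothetical numbers - a 65W part split into 50W dynamic and 15W static power - chosen only to illustrate the shape of the problem; they are not AMD or Intel specifications.

```python
# Rough power-scaling sketch. Dynamic power scales with V^2 * f; static
# leakage is assumed roughly constant here for simplicity. All figures are
# hypothetical, not published AMD or Intel numbers.
def scaled_power(base_dynamic_w: float, base_static_w: float,
                 freq_scale: float, volt_scale: float) -> float:
    dynamic = base_dynamic_w * freq_scale * volt_scale ** 2
    return dynamic + base_static_w

# Hypothetical 65W desktop part: 50W dynamic + 15W leakage at nominal V/f.
for freq_scale, volt_scale in [(1.0, 1.0), (0.5, 0.85), (0.25, 0.75), (0.1, 0.7)]:
    watts = scaled_power(50.0, 15.0, freq_scale, volt_scale)
    print(f"clock x{freq_scale:<4} voltage x{volt_scale:<4} -> ~{watts:.1f} W")
```

Even at a tenth of the clock and a sharply reduced voltage, this hypothetical part still sits well above 1W, because leakage and the width of the core itself don't scale away - which is the argument for a purpose-built low power micro-architecture rather than a down-clocked desktop core.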
55 Comments
strikeback03 - Friday, May 11, 2007 - link
This implies that actual performance numbers would make Barcelona more visible. But for it to factor into a buying decision, people have to know Barcelona is coming, and anyone who knows that can probably guess it will be a significant step forward, based on its need to compete with Intel. So either you don't know Barcelona is coming, in which case performance numbers don't matter; or you do know it is coming, in which case the only reason to buy AMD before then is because it's cheap.
At least they stated that the new processors will be usable in the AM2 motherboards.
TA152H - Friday, May 11, 2007 - link
You are using pretzel logic here. If you know Barcelona is a significant step forward, why do you need the results posted beforehand?
Actually, performance numbers would make Barcelona more visible, and if it were better than expected, you'd kill current sales. You can speculate on performance, but you really don't know. The only place you'd really want people to know beforehand is the server market, because people plan those purchases. And guess what? AMD released those numbers, and they were pretty high.
It's also completely different to know something is coming out and guess at its performance than to actually see the numbers and be thoroughly disgusted with the old part's performance. I could live with any of the processors today, but once I see one get crushed by the next generation, I don't want it. It hits you on a visceral level, and after that, it's difficult to go back to it. Put another way, say there is a girl you can go out with today that's fairly attractive and would certainly add to your life. You could wait for one that will be more attractive later on, but you don't really need to since this one is more than adequate. Now say you see this bombshell. Do you think you'd really want to go back to the one that wasn't so attractive?
We're human, we respond to things on an emotional level even when we know we shouldn't. The head never wins against the heart. I'm not sure that's a bad thing either, life would be so uninteresting were it not so.
blppt - Friday, May 11, 2007 - link
"AMD's reasoning for not disclosing more information today has to do with not wanting to show all of its cards up front, and to give Intel the opportunity to react."

Come on.... I'm sure Intel already has a pretty good idea of what they are up against. I'm sure Intel has access to information on their competitors that the general tech public doesn't.
michal1980 - Friday, May 11, 2007 - link
All they said is there is new stuff coming. Trust me, if the CPUs they had right now were beating the pants off of Intel, they would post the numbers. I'm not saying give us the frequency the CPUs run on, but if they knew that games run 50% faster, they would at least hint at it.

Nice things: looks like the new mobo chip runs cool; look at how small the HSFs are on those chips.
Not nice: how hot are these new CPUs? Look at all those fans; it's like a tornado in the case.
Not nice: no dates? All that means is it's even easier to push things back. Winter 2007 becomes early 2008.
Ard - Friday, May 11, 2007 - link
Excellent article as always, Anand. It's nice to finally get some info on AMD and find out that they're not throwing in the towel just yet. Some performance numbers would've been nice but I guess you can't have everything. I did have to laugh at the slide that said S939 will continue to be supported throughout 2007 though, considering you can't even buy new S939 CPUs.

Beenthere - Friday, May 11, 2007 - link
It's a known fact that Intel has had to try and copy the best features of AMD's products to catch up in performance to AMD. Funny how when Intel was secretive and blackmailing consumers for 30 years that was fine, but when AMD doesn't give away all of their upcoming product technical info for Intel to copy, that's not good -- according to some. With Intel being desperate to generate sales for their non-competitive products over the past 2-3 years, they decided to really manipulate the media - and it's worked. The once secretive Intel is the best friend a hack can find these days. They'll tell a hack anything to get some form of media exposure.

I find AMD's release of info just fine. If it were not for AMD, all consumers would be paying $1000 for a Pentium 90 CPU today and that would be the fastest CPU you could buy. People tend to forget all that AMD has done for consumers. The world would be a lot worse off than it is if it were not for AMD stepping to the plate to take on the bully from Satan Clara.
Many in the media are shills, and most of the media is manipulated by unscrupulous companies like Intel, Asus, and a long list of others. Promise a hack some "inside info" or an insiders' tour so they can get a scoop, or a prototype piece of hardware that has been massaged for better performance than the production hardware, and the fanboy hacks will write glowing opinions about a company's products and chastise the competition every chance they get.
Unfortunately what was once a useful service - honest product reviews -- is now a game of shilling for dollars. You literally can't believe anything reported at 99% of websites these days because it's usually slanted based on which way the money flows... It's no secret that Intel and MICROSUCKS are more than willing to lubricate the wheels of the ShillMeisters to get favorable tripe.
TA152H - Friday, May 11, 2007 - link
Beenthere,

What are you talking about? Intel invented the microprocessor (4004), invented the instruction set used today (8086) and has been getting copied by AMD for years.
The Athlon was certainly nothing to copy, you could just as easily say they copied the Pentium III (and did a bad job of it, whereas the Core is much better than the Athlon). What's so unique about the Athlon that could be copied anyway? It's a pretty basic design. It worked OK, I guess, but the performance per watt was always poor until the Pentium 4 came around and redefined just what poor meant.
x86-64 is straightforward, and you can be sure Microsoft designed most of it. I'm not saying this as anything bad about AMD, because who better to design the instruction set than Microsoft? Intel and Microsoft do enough software to understand what is best, AMD is allergic to software, so I think this is a good thing.
I agree, only slightly, that these review sites are ass-kissers by nature, because they need good relationships with the makers. I doubt they are getting kick-backs, but say Anand is more honest with his opinions (he always is about a lousy product, after the company comes out with a good one), he'd get cut off from some information or products from that same company. So, they kiss ass because if they write scalding and honest reviews they lose out and can't function as an information site as well. I don't like it, but can you blame him? In his situation, you'd have to do exactly the same thing - give a review in a delicate way without offending the hand that feeds you, but trying to get your point across anyway with the factual data. Tom Pabst was funny as Hell in his old reviews, he took a devil may care attitude, but nowadays even that site has accepted the reality of being on good terms with technology companies whenever possible. In the long run, it's worth it.
Viditor - Saturday, May 12, 2007 - link
Actually, most of the work was done at Fairchild Semiconductor...that's where both Gordon Moore (founder of Intel) and Jerry Sanders (founder of AMD) worked together.
Moore left FS in 1968 to form Intel (along with Bob Noyce) and Sanders left in 1969 to form AMD.
Intel began as a memory manufacturer, but Busicom contracted them to create a 4-bit CPU chip set architecture that could receive instructions and perform simple functions on data. That CPU became the 4004 microprocessor... Intel bought back the rights from Busicom for US$60,000.
Interestingly, TI had a system on a chip come out at the same time, but they couldn't get it to work properly so Intel got the money (and the credit).
You're kidding, right??
1. Athlon had vastly superior FP because of its super-pipelined, out-of-order, triple-issue floating point unit (it could operate on more than one floating point instruction at once)
2. Athlon had the largest level 1 cache in x86 history
3. When it was first launched, it showed superior performance compared to the reigning champion, Pentium III, in every benchmark
4. Three generalized CISC to RISC decoders
5. Nine-issue superscalar RISC core
Just look at the reviews during release (you might think it's similar to the C2D reviews...)
Aces Hardware: http://www.aceshardware.com/Spades/read.php?articl...
That's just silly... while I'm sure MS had plenty of input, there are no chip architects on their staff that I'm aware of (in other words nobody there COULD design it).
It's like saying that when a Pro driver gives feedback to the engineers on what he wants, he's the one who designed the car...don't think so.
TA152H - Sunday, May 13, 2007 - link
What's your point about the 4004? You're giving commonly known information that in no way changes the fact that Intel invented the first microprocessor. It wasn't for themselves, initially, but it was their product. AMD didn't create it, and they didn't create the other microprocessors either; they were a second source for Intel. Look at their first attempt at their own x86 processor to see how good they were at it, the K5. It was late, slower than expected, and created huge problems for Compaq, which had bet on them. Jerry Sanders was smart enough to buy NexGen after that.

You are clearly clueless about microprocessors if you think any of those things you mention about the Athlon are in any way anything but basic.
The largest L1 cache is a big difference??? Why that's a real revolution there!!!! They made a bigger cache! Holy Cow! Intel still hasn't copied that, by the way, so even though it's nothing innovative, it was still never copied.
The FP unit was NOT the first pipelined one; the Pentium's was pipelined, and the Pentium Pro line's was as well, or "superpipelined" as you misuse the word. Do you know what superpipelined even means? It means lots of stages. Are you implying the Athlon was better in floating point because it had more floating point stages? Are you completely clueless and just throwing around words you heard?
Wow, they had slightly better decoding than the P6 generation!!!! Wow, that's a real revolutionary change.
You're totally off on this. They did NOTHING new on it; it was four years later than the Pentium Pro, barely outperformed it, and in fact was surpassed by the aging P6 architecture when the Coppermine came out. It was much bigger, used much more power, and had relatively poor performance for the size and power dissipation. The main problem with the P6 was the memory bandwidth; if it had the Athlon's bandwidth it would have crushed it, despite being much smaller. I don't really call that a huge success. Although, it does have to be said the Athlon was capable of higher clock speeds on the same process. Still, it was hardly an unqualified success like the Core 2, which is good by any measure.
The Core 2 is MUCH faster than the Athlon 64, and isn't a much larger and much more power hungry beast. In fact, it's clearly better in power/performance than the Athlon 64. The Athlon was dreadful in this regard.
I was talking about the instruction set with regards to Microsoft, which should have been obvious since x86-64 is an instruction set, not an architecture. And yes, they did design most of it, if not all. Ask someone from Microsoft, and even if you don't know one, use some common sense. Microsoft writes software, and compilers, and have to work with the instruction set. They are naturally going to know what works best and what they want, and AMD has absolutely no choice but to do what Microsoft says to do. Microsoft is holding a Royal Flush, AMD has a nine high. Microsoft withholding support for x86-64 would have made it as meaningless as 3D Now! They knew it, AMD knew it, and Microsoft got what they wanted. Anything else is fiction. Again, use common sense.
hubajube - Friday, May 11, 2007 - link
Dude, WTF are YOU talking about? Allergic to software? Is that an industry phrase? YOU have NO idea what AMD did or didn't do in regards to X86-64 so how can you even make a comment on it?