Intel's 32nm Update: The Follow-on to Core i7 and More
by Anand Lal Shimpi on February 11, 2009 12:00 AM EST - Posted in CPUs
The Manufacturing Roadmap
The tick-tock cadence may have come about at the microprocessor level, but its roots have always been in manufacturing. As long as I’ve been running AnandTech, Intel has introduced a new manufacturing process every two years. In fact, since 1989 Intel has kept up this two year cycle.
We saw the first 45nm CPUs with the Penryn core back in late 2007. Penryn, released at the very high end, spent most of 2008 making its way mainstream. Now you can buy a 45nm Penryn CPU for less than $100.
The next process technology, which Intel refers to internally as P1268, shrinks transistor feature size down to 32nm. The table above shows you that first production will be in 2009 and, after a brief pause to check your calendars, that means this year. More specifically, Q4 of this year.
I’ll get to the products in a moment, but first let’s talk about the manufacturing process itself.
Here we have our basic CMOS transistor:
Current flows from source to drain when the transistor is on, and it isn’t supposed to flow when it’s off. Now as you shrink the transistor, all of its parts shrink. At 65nm Intel found that it couldn’t shrink the gate dielectric any more without leaking too much current through the gate itself. Back then the gate dielectric was 1.2nm thick (about the thickness of 5 atoms), but at 45nm Intel switched from a SiO2 gate dielectric to a high-k one using Hafnium. That’s where the high-k comes from.
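To see why a higher-k material helps, it can be useful to treat the gate as a simple parallel-plate capacitor (a textbook approximation, not Intel's published device model). Drive current tracks gate capacitance, and capacitance per unit area rises with the dielectric constant k and falls with film thickness, so a higher-k material can be made physically thicker while holding capacitance constant, and a thicker film leaks far less current via tunneling:

```latex
% Parallel-plate view of the gate: capacitance per unit area
C_{ox} = \frac{k\,\varepsilon_0}{t_{ox}}
% Holding C_{ox} constant, a higher-k film can be made physically thicker
% by the ratio of dielectric constants, which suppresses tunneling leakage:
\frac{t_{\mathrm{high\text{-}k}}}{t_{\mathrm{SiO_2}}} = \frac{k_{\mathrm{high\text{-}k}}}{k_{\mathrm{SiO_2}}}, \qquad k_{\mathrm{SiO_2}} \approx 3.9
```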
The gate electrode also got replaced at 45nm with a metal to help increase drive current (more current flows when you want it to). That’s where the metal gate comes from.
The combination of the two changes to the basic transistor gave us Intel’s high-k + metal gate transistors at 45nm, and at 32nm we have the second generation of those improvements.
The high-k gate dielectric gets a little thinner (equivalent to a 0.9nm SiO2 gate, but presumably physically thicker since it’s Hafnium based; down from 1.0nm at 45nm), and we’ve still got a metal gate.
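To put a rough number on what "equivalent" means here, the sketch below converts the 0.9nm SiO2-equivalent figure back into a physical film thickness. The dielectric constant used for the hafnium-based film is an assumed textbook value, not something Intel has disclosed:

```python
# Back-of-the-envelope sketch of equivalent oxide thickness (EOT).
# K_HIGH_K is an assumed value for a hafnium-based dielectric (roughly 20-25
# in the literature); Intel has not published the exact figure.
K_SIO2 = 3.9     # relative permittivity of SiO2
K_HIGH_K = 20.0  # assumed relative permittivity of the hafnium-based film

def physical_thickness_nm(eot_nm: float, k_high_k: float = K_HIGH_K) -> float:
    """Physical thickness matching the gate capacitance of an SiO2 film of eot_nm."""
    return eot_nm * k_high_k / K_SIO2

print(physical_thickness_nm(0.9))  # ~4.6 nm for the 32nm gate (0.9nm equivalent)
print(physical_thickness_nm(1.0))  # ~5.1 nm for the 45nm gate (1.0nm equivalent)
```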
At 32nm the transistors are approximately 70% the size of Intel’s 45nm hk + mg transistors, allowing Intel to pack more in a smaller area.
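If that ~70% figure refers to linear dimensions (32/45 ≈ 0.71), the usual full-node arithmetic says the area per transistor roughly halves, which is what allows the higher packing density. A quick sketch, not Intel's published cell sizes:

```python
# Ideal full-node shrink arithmetic, assuming ~70% is a linear scaling factor.
linear_shrink = 32 / 45          # ~0.71x in each dimension
area_shrink = linear_shrink ** 2 # ~0.51x area per transistor

print(f"linear: {linear_shrink:.2f}x")
print(f"area:   {area_shrink:.2f}x  -> roughly twice the transistors per mm^2")
```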
The big change here is that Intel is using immersion lithography on critical metal layers in order to continue to use existing 193nm lithography equipment. The smaller your transistors are, the higher resolution your equipment has to be in order to actually build them. Immersion lithography is used to increase the resolution of existing lithography equipment without requiring new technologies. It is a costlier approach, but one that becomes necessary as you scale below 45nm. Note that AMD also made the switch to immersion lithography at 45nm.
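The resolution gain from immersion comes from filling the gap between the final lens element and the wafer with water, which has a refractive index of roughly 1.44 at 193nm and therefore raises the effective numerical aperture of the optics. The sketch below plugs illustrative k1 and numerical-aperture values into the Rayleigh criterion; these are not Intel's actual tool parameters:

```python
# Rayleigh criterion: minimum printable half-pitch for an optical scanner.
# k1 and the numerical aperture below are illustrative, not Intel's numbers.
def half_pitch_nm(k1: float, wavelength_nm: float, numerical_aperture: float) -> float:
    """Smallest half-pitch resolvable for a given process factor k1, wavelength and NA."""
    return k1 * wavelength_nm / numerical_aperture

dry = half_pitch_nm(k1=0.30, wavelength_nm=193, numerical_aperture=0.93)
wet = half_pitch_nm(k1=0.30, wavelength_nm=193, numerical_aperture=0.93 * 1.44)  # water-filled gap

print(f"dry 193nm tool:       ~{dry:.0f} nm")  # ~62 nm
print(f"immersion 193nm tool: ~{wet:.0f} nm")  # ~43 nm
```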
Intel reported significant gains in transistor performance at 32nm; the graphs below help explain:
We’re looking at a comparison of leakage current vs. drive current for both 32nm NMOS and PMOS transistors. The new transistors showcase a huge improvement in power efficiency: you can either run them faster, or run them at the same speed and cut leakage current by more than 5 - 10x compared to Intel’s 45nm transistors. Intel claims that its 32nm transistors boast the highest drive current of all reported 32nm technologies at this point, of which, admittedly, there aren’t many.
The power/performance characteristics of Intel’s 32nm process make it particularly attractive for mobile applications. But more on that later.
64 Comments
Jovec - Wednesday, February 11, 2009 - link
Take a look at your Program Menu and tell me what apps today that are not multithreaded would receive serious benefit from being multithreaded? Besides gaming? Single-thread apps do receive benefits from multiple cores in typical usage scenarios because they can be run on a (semi) dedicated core and not interfere with other apps.
philosofool - Wednesday, February 11, 2009 - link
Interesting thought. I'm hoping that with the mainstreaming of the dual core, multi-threaded apps become more common and that the single to dual jump turns out to be the biggest leap. But it's really just a hope on my part, don't know if it will happen. Isn't there a multitasking advantage with 4 core machines? Also, once we start ripping 720 and 1080p files, 6 cores is gonna be hot.
7Enigma - Thursday, February 12, 2009 - link
There are definite multitasking advantages with quadcore if you are heavily multitasking (I'd argue tri-core is probably used more effectively currently than that final 4th core). Single to dual, however, was a much greater difference for multitasking on the whole. I just don't see the quad-hex jump being more beneficial than quad-juiced quad in this case.
strikeback03 - Wednesday, February 11, 2009 - link
Yeah, can't say I'm real happy about the lack of a 32nm quad-core for 1366. If my motherboard supported Penryn I'd probably just buy one of those cheap, get an SSD, and wait for Sandy Bridge. Since it doesn't, the decision is more difficult. Probably depends how much business I get this year.
Pakman333 - Wednesday, February 11, 2009 - link
DailyTech says Lynnfield will come in Q3? Hopefully it will have ECC support.
iwodo - Wednesday, February 11, 2009 - link
SSE 4.2 doesn't bring much useful performance to consumers. There is no Dual Core Westmere or Nehalem. Not without Intel Sh*test Graphics On Earth.
No wonder why Unreal Dev and Valve are complaining that Intel GFX is basically Toxic....
And i cant understand why Anand is excited, Macbook with Intel Graphics all over again?
And just before anyone says Intel Gfx will improve: please refer to history, from G965 to their X series they are so full of Marketing BS.... And never did they deliver what they promised.
ssj4Gogeta - Wednesday, February 11, 2009 - link
noone's forcing you to use G45. you can still use discrete gfx cards.
Daemyion - Wednesday, February 11, 2009 - link
Actually, they fully delivered on the marketing. It's just that when Nvidia/ATI delivered products in the same space Intel's product looked rubbish. There is nothing wrong with the G45 other than it not being a 9400 or a 790GX.
Spoelie - Wednesday, February 11, 2009 - link
wasn't yonah the first processor out at the 65nm node? if so intel did perform the same stunt earlier, only at 45nm did they not release a laptop version first.
IntelUser2000 - Wednesday, February 11, 2009 - link
No, the Pentium 955XE based on Pentium D was.