
Dual-Core Processors

by Lee LeClair
02/17/2006
As seen in Inside Tucson Business

Long ago, the hot topic was the clock speed of your new PC’s processor (“I just got a 3.2 GHz system”). Though it was not an accurate measure of overall system power, it was easy to understand. Then AMD rained on the parade by marketing processors of roughly equal capability but running at lower clock speeds. To help the public understand its products’ relative power, AMD had to adopt a model-numbering scheme to show the approximation (hence, an Athlon 3000 was roughly equivalent to a 3 GHz Pentium even though the AMD chip ran at about a 2 GHz clock speed). Next, 64-bit processing became the rage. Intel based its strategy on its Itanium CPU – a 64-bit processor built on a wholly new architecture that was not backwards compatible. AMD chose a very different approach that extended and improved the x86 architecture and thus could still run 32-bit code natively (without slow emulation). By now, it should be clear that AMD placed the smarter bet. Intel’s huge investment in the Itanium has been a boat anchor and an embarrassment; the CPU is relegated to niche status whatever its technical merits might be. Intel has instead been forced to eat crow and follow AMD’s 64-bit approach by using a modified form of its Pentium line for 64-bit processing. All this despite the fact that mainstream 64-bit software support from Microsoft and other vendors has been slow to appear in the form of operating systems, drivers, and applications.

Now, the hot topic of PC parlance is “dual core” processors – essentially two CPUs in one physical chip that looks unchanged from its single-core counterpart. In fact, many are dual-core 64-bit processors. But what does that mean? Will a dual-core processor be twice as fast as a single-core CPU? Does 64-bit processing mean anything to the average PC user yet? Again, the landscape is confusing and product marketing doesn’t help. As usual, hardware development is far ahead of software development. To really use a dual-core CPU, software must be written to be aware of multiple CPUs, i.e., written as multi-threaded applications. Most office applications (e.g., MS Office, browsers, email clients) are not multi-threaded and so do not really take advantage of the extra CPU core. A notable exception is Adobe’s Photoshop, which is multi-threaded. If you multi-task a lot, though, there are advantages, because a dual-core CPU lets you get several things done at once more efficiently (e.g., running a game while you burn a DVD or rip songs from a CD).
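
To make “multi-threaded” a little more concrete, here is a minimal sketch, in C with POSIX threads, of how a program can split its work so the operating system can run it on both cores of a dual-core chip. The summing task and all of the names are purely illustrative, not drawn from any of the products mentioned above.

    #include <pthread.h>
    #include <stdio.h>

    #define N 1000000

    static double data[N];

    struct chunk { int start; int end; double sum; };

    /* Each thread sums its own half of the array; the operating system is
       free to schedule the two threads on separate cores. */
    static void *sum_chunk(void *arg)
    {
        struct chunk *c = arg;
        c->sum = 0.0;
        for (int i = c->start; i < c->end; i++)
            c->sum += data[i];
        return NULL;
    }

    int main(void)
    {
        for (int i = 0; i < N; i++)
            data[i] = 1.0;                          /* dummy workload */

        struct chunk halves[2] = { { 0, N / 2, 0.0 }, { N / 2, N, 0.0 } };
        pthread_t threads[2];

        for (int t = 0; t < 2; t++)                 /* start one worker per core */
            pthread_create(&threads[t], NULL, sum_chunk, &halves[t]);
        for (int t = 0; t < 2; t++)                 /* wait for both to finish */
            pthread_join(threads[t], NULL);

        printf("total = %.0f\n", halves[0].sum + halves[1].sum);
        return 0;
    }

A single-threaded version of the same program would run its loop on one core while the second core sat largely idle, which is why ordinary office applications see little benefit from a dual-core chip today.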

So, is a dual-core CPU twice as fast as a single-core CPU? No. Software and operating systems aren’t yet built to really take advantage of multiple cores; that will come, but probably not until 2007, since Microsoft won’t likely ship its next OS (Vista) until late this year. Similarly, 64-bit software support is also forthcoming with Microsoft’s next generation of operating system and office products and the attendant tail of driver and application developers. Right now, 64-bit processing is mostly a benefit in server environments, where the larger address space allows direct access to far more memory. I’m not talking much about Apple, Linux, BSDs, Sun, and other systems (most of which already support 64-bit processing to a greater degree) only because the vast majority of people are familiar only with Windows stuff.
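
The server-side memory advantage comes down to simple arithmetic on address widths rather than anything exotic; the small C sketch below (illustrative only) shows the 4GB ceiling that a 32-bit address space imposes and that 64-bit processors remove.

    #include <stdio.h>

    int main(void)
    {
        /* Width of a pointer on the machine that compiles this program. */
        unsigned bits = (unsigned)(sizeof(void *) * 8);

        /* A 32-bit address can reach at most 2^32 bytes, about 4GB; a 64-bit
           address space is 2^32 times larger, which is why memory-hungry
           database and server workloads benefit first. */
        unsigned long long limit32 = 1ULL << 32;

        printf("pointers here are %u bits wide\n", bits);
        printf("32-bit address space limit: %llu bytes (4GB)\n", limit32);
        return 0;
    }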

Software (operating systems and applications) that truly takes advantage of 64-bit, multi-core systems is about two years off at the desktop level. The first desktop software to really push the hardware will likely be intensive gaming applications. Even this landscape may change with totally new CPU architectures arriving in the form of consoles like the Xbox 360 and Sony’s PlayStation 3. Sony’s development (along with IBM and Toshiba) of the Cell processor in particular may affect the 3-to-5-year landscape, because the consortium is supporting open development for the Cell platform and IBM is incorporating the Cell into blade servers.

Software development that takes advantage of threading is new and complex, and it requires updated compilers and updated thinking. In fact, software development will drive the success or failure of the next decade’s hardware architectures far more than hardware performance on theoretical benchmarks will. So predicting the future in information technology is as difficult, slippery, and treacherous as ever. But at least hardware costs have come down to commodity levels – take heart in that and be careful with technology stocks!

Lee LeClair is the CTO at Ephibian. His Tech Talk column appears the third week of each month in Inside Tucson Business.