Everyone knows about the 3G iPhone, but what about Snow Leopard?

I’m going to guess – and I think this is a fair assumption – that just about everyone has been focusing on the recent announcement of the 3G iPhone. Indeed, the announcement is no small event, and a good deal of coverage should be directed at the new smartphone. But for computer enthusiasts like me, there were some other announcements at the Worldwide Developers Conference that didn’t receive much press, yet were still of significant interest. In retrospect, maybe Apple should have renamed the event the “WWiDC” – short for “Worldwide iPhone Developers Conference.”

So what were the announcements that missed out on all the fanfare? First of all, the next version of OS X, code-named “Snow Leopard.” It’s still unclear whether the next iteration of the Mac OS will retain this moniker as its official release name, but given the details surrounding its feature set, it wouldn’t be surprising. At least according to the announcements so far, it looks like Apple will forgo any major feature additions in 10.6 and instead focus on optimization and stability.

Of course, one would hope that Leopard was already optimized and stable, but further gains there would give OS X an additional edge against Windows Vista (not exactly the benchmark for optimized system performance, though it is getting better in the stability arena). So if Snow Leopard really doesn’t have any major feature additions to distinguish it from the current version of OS X (Leopard), then “Snow Leopard” might be an appropriate name for a new version of the OS that isn’t quite a completely different cat.

But perhaps the most exciting announcement regarding Snow Leopard is the new “Grand Central.” It has the potential (at least given the vague description so far) to really turn the entire computer industry on its head. Seriously. This could be really, really important to the future of computing and the continuation of Moore’s Law.

Which raises the question: what is “Grand Central,” and why is it so special? In a nutshell, it’s OS-level functionality for fully utilizing multiple CPU cores. Here’s a little background on why that is such an important step in computing technology:

In the heyday of the Pentium 4, the race was on between Intel and its main competitor AMD to reach the fastest clock speed on a single-core processor. The idea was simple: the faster a CPU could complete its clock cycles, the faster the computer could run, and the more demanding a workload it could handle in a set amount of time. Intel pretty much won the speed contest, but AMD introduced (and eventually surpassed Intel with) the marketing idea of “performance per watt,” in which sheer clock speed wasn’t the goal, but rather the most CPU bang for the energy buck.

This shifted the focus a bit away from the clock speed wars, but another issue arose that pretty much forced Intel’s hand in the matter. CPU makers were running into fundamental thermal barriers (with the then-current 90nm chips) that largely put a cap on clock speeds. Issues of stability, heat dissipation, and energy consumption were starting to block any effort (outside of exotic cooling solutions that were hardly economical or energy efficient) to increase CPU clock speed.

Improving the efficiency of chip design is one way to combat this limitation. But redesigning and optimizing CPUs is very hard work, to be sure. What else could be done? Well, Intel (and AMD shortly thereafter) figured out that while a single core was essentially capped in clock speed, there was no such limit on the number of cores a CPU could have. So Intel and AMD released CPUs with more than one core. Dual-core came first, and it is still the most popular CPU type today. Intel released a quad-core CPU (actually two dual-core dies packaged on the same chip), while AMD has released a “true” quad-core CPU and a triple-core CPU (a quad-core die with one faulty core disabled).

OK, so we understand how the hardware went from single-core Pentium 4s to quad-core Core 2s and Phenoms. But that’s only half the story. While it was (relatively) easy for CPU manufacturers to add more cores to their processors, it’s quite another matter on the software side. Most software is written as a single sequence of instructions. All modern operating systems do “multitask” between different core system functions and application processes, but each of those processes is typically written to run on just one core at a time. Having more than one core starts to really complicate things: which core does what?
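To make the “which core does what?” problem concrete, here’s a minimal sketch of what a developer has to do today to spread even a trivial loop across cores. It uses plain C and POSIX threads (nothing Apple-specific, and the four-thread split is just an assumption for a quad-core machine): the programmer creates the threads, carves up the data, and stitches the results back together by hand.

```c
#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 4          /* assumed: one thread per core on a quad-core */
#define N 1000000

static double data[N];
static double partial[NUM_THREADS];

/* Each thread sums its own slice of the array. */
static void *sum_slice(void *arg) {
    long t = (long)arg;
    long begin = t * (N / NUM_THREADS);
    long end = (t + 1) * (N / NUM_THREADS);
    double s = 0.0;
    for (long i = begin; i < end; i++)
        s += data[i];
    partial[t] = s;
    return NULL;
}

int main(void) {
    pthread_t threads[NUM_THREADS];

    for (long i = 0; i < N; i++)
        data[i] = 1.0;

    /* The programmer, not the OS, decides how the work is divided. */
    for (long t = 0; t < NUM_THREADS; t++)
        pthread_create(&threads[t], NULL, sum_slice, (void *)t);

    double total = 0.0;
    for (long t = 0; t < NUM_THREADS; t++) {
        pthread_join(threads[t], NULL);
        total += partial[t];
    }
    printf("sum = %f\n", total);
    return 0;
}
```

And that’s the easy case; once the threads have to share data or coordinate with each other, the bookkeeping gets much worse.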

There has been some progress in getting individual applications and system functions to use more than one core, but as of yet the hardware of multi-core CPUs is vastly under-utilized. Grand Central, at least according to the brief announcement at WWDC, is supposed to make writing and running OS X applications across multiple cores much, much simpler and more streamlined. If it works, this could be a boon to computing performance. Right now, parallel processing (running a single process or application simultaneously across multiple cores) sees very limited use in practice. But if Grand Central succeeds, it could open up a huge performance gap between OS X and Windows.
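Apple hasn’t published any API details yet, so the following is only my guess at the shape of the thing, not anything shown at WWDC. The idea would be that the program describes the work per item and hands the whole loop to a system-managed queue, and the OS decides how many cores to throw at it (the dispatch_* names and the block syntax here are assumptions on my part):

```c
#include <dispatch/dispatch.h>   /* hypothetical system header; names assumed */
#include <stdio.h>

#define N 1000000

static double data[N];

int main(void) {
    for (size_t i = 0; i < N; i++)
        data[i] = 1.0;

    /* Hand the loop to a system-managed queue; the OS, not the programmer,
     * decides how to spread the iterations across the available cores. */
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_apply(N, queue, ^(size_t i) {
        data[i] *= 2.0;          /* per-item work; iterations may run in parallel */
    });

    printf("data[0] = %f\n", data[0]);
    return 0;
}
```

Compare that with the hand-rolled pthread version above: no thread creation, no slicing, no joining, and the same code could scale from a two-core MacBook to an eight-core Mac Pro without changes.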

With Mac Pros shipping with two quad-core Intel processors (a total of eight cores), there’s a lot of largely unused potential there. Now, to be fair, video encoding is one of the few areas of computing that regularly makes use of parallel processing and multiple cores, because it scales well to them. So for video work, the eight cores in a Mac Pro are already put to good use. But extending the performance gains of multiple cores to ALL applications (and the operating system itself) could really make for some astounding results.

Better yet, by the time Snow Leopard is released, Apple will have had time to adopt Intel’s new architecture (Nehalem) and make use of its Simultaneous Multi-Threading (SMT), the successor to the Hyper-Threading of the Pentium 4 days. In essence, Mac Pros of the near future will have access to 16 simultaneous hardware threads (2 threads per core × 4 cores per CPU × 2 CPUs = 16 threads). And having any application able to make full use of that capability will likely blow previous generations of OS X (and any version of Windows) right out of the water.
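If you want to check how many hardware threads the OS actually sees, a quick POSIX query (nothing Snow Leopard-specific) will report it; on a dual quad-core Nehalem Mac Pro with SMT enabled, the expectation is 16:

```c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Logical processors the scheduler can use:
     * CPUs x cores per CPU x threads per core (with SMT). */
    long n = sysconf(_SC_NPROCESSORS_ONLN);
    printf("logical processors online: %ld\n", n);
    return 0;
}
```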

In case you can’t tell, I’m excited about this . . .

Also of note among the lesser-publicized announcements at WWDC was OpenCL (Open Computing Language), whose job will be to tap the general-purpose processing potential of GPUs (which are themselves powerful parallel processors). If it catches on with developers, it could finally find a use for all those 9800 GTXs out there beyond just playing games. 😛
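The OpenCL spec itself isn’t public yet, so the snippet below is only an illustration of what GPU-style data-parallel code generally looks like, with the qualifiers and the get_global_id call being my assumptions rather than anything Apple has shown: a tiny C-style “kernel” that the runtime launches once per data element, with thousands of elements in flight across the GPU at once.

```c
/* Hypothetical data-parallel GPU kernel (C-style; names and qualifiers assumed).
 * The runtime invokes this once per element and spreads the invocations
 * across the GPU's many stream processors. */
__kernel void scale(__global const float *in,
                    __global float *out,
                    const float factor)
{
    size_t i = get_global_id(0);   /* which element is this invocation handling? */
    out[i] = in[i] * factor;
}
```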

Could Snow Leopard herald the dawn of a new age of parallel computing and many-core CPUs? Time will tell, but I sure hope it turns out even half as well as it looks right now . . .
