Those of you with a business background or familiarity with operations terminology will recognize the phrase “The Theory of Constraints.” Outside the business world, it’s not a novel concept either: certain parts of any system will become bottlenecks to performance or throughput. In the case of computers, one of the biggest bottlenecks has been storage, or more specifically, spinning magnetic hard drives. So with the move to incredibly fast solid state drives, that bottleneck should be removed, right?
Well, that’s actually a funny story. And that story begins with an explanation of how “computer interface buses” have evolved over time. Regardless of the composition or mechanics of the storage device itself (spinning magnetic hard drive, solid state drive, tape drive, etc.), the bits have to flow somehow between the storage device and the rest of the computer. That mechanism is called a “computer interface bus” and has gone through many iterations over time. In the early 2000s, the old PATA interface (Parallel ATA: big ol’ ribbon cables) was replaced – or rather, joined – by a new type of interface, Serial ATA (or “SATA” for short).
The first SATA interface ran at 1.5Gbps (gigabits per second) or the equivalent of 150MBps (megabytes per second). This was plenty fast at the time for spinning magnetic hard drives, and times were good. Within a few years, SATA II was launched, which effectively doubled the throughput of SATA (I) to a hefty 3Gbps or the equivalent 300MBps. And until recently, this provided enough throughput for even the highest performing spinning magnetic hard drives.
But then came SSDs. The first few generations didn’t really get close to those SATA II speed limits, but subsequent generations quickly evolved and improved in performance to max out the SATA II interface. Luckily, that crisis was averted with the advent of SATA III, with its repeat of the bandwidth doubling to 6Gbps or the equivalent 600MBps. However, SSD performance was progressing at breakneck speed, and within short order (2 years) the latest SSDs were saturating their SATA III interfaces. The SATA interface had become the bottleneck.
The writing had been on the wall for some time, and so new interface designs were developed to move past the quickly obsoleted SATA standards (or at least their physical connectors). To future-proof the storage interface bus – or at least leave some headroom for major future bandwidth needs – some of the higher-end and enterprise-class SSDs moved to PCI-Express, also known as PCIe.
The PCI (Peripheral Component Interconnect) interface has been the staple of computer expansion cards for years. But it has only been in the last decade that PCI-Express (PCIe) has risen to the challenge of providing massive amounts of fast, low-latency connection bandwidth for demanding components like high-end graphics cards. The PCIe standard has also gone through numerous revisions (we are now on the third generation), but that standard was being pushed forward for target devices (graphics cards) that consume orders of magnitude more bandwidth than any hard drive. PCIe 3.0 can push 8 GT/s (gigatransfers per second), which effectively delivers 985 MB/s per lane.
That last emphasis is critical: one lane of PCIe 3.0 is roughly 64% faster than a SATA III interface. Almost every motherboard on the market today has at least one x16 PCIe 3.0 slot, which means those 16 lanes combine to provide a whopping 15,750MBps (roughly). Compare that to the measly 600MBps of SATA III, and you can quickly see why PCIe is the ideal interface bus for the future of high-performance solid state drives.
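The per-lane figure has its own encoding story: PCIe 3.0 switched to the much leaner 128b/130b scheme (130 bits on the wire carry 128 bits of data), which is how 8 GT/s nets out to about 985MBps per lane. A small sketch of the lane math, and the comparison against SATA III:

```python
# PCIe 3.0 per-lane bandwidth: 8 GT/s with 128b/130b encoding,
# i.e. 130 bits on the wire carry 128 bits of data (~98.5% efficient).
RAW_TRANSFERS_PER_SEC = 8e9   # one bit per transfer, per lane
ENCODING = 128 / 130

def pcie3_mbps(lanes: int = 1) -> float:
    """Usable megabytes per second for a PCIe 3.0 link of N lanes."""
    return RAW_TRANSFERS_PER_SEC * ENCODING / 8 / 1e6 * lanes

print(f"x1:  {pcie3_mbps(1):.0f} MBps")    # ~985 MBps per lane
print(f"x16: {pcie3_mbps(16):.0f} MBps")   # ~15,750 MBps for a x16 slot
print(f"x1 vs SATA III: {pcie3_mbps(1) / 600:.2f}x")  # ~1.64x a 600MBps link
```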
Granted, this is the absolute highest for the current PCIe iteration using a x16 slot (read as “by 16”), and graphics cards will likely still take up those slots for massive bandwidth needs. And there is a practical limit to the total number of PCIe lanes for any platform (40 PCIe 3.0 lanes for Sandy Bridge-E processors). But the combination of existing interface design, low latency, expansion options (just add more lanes!), and massive bandwidth (even from a single lane) make PCIe the best option going forward for solid state drives.
Outside of Apple, there have not been a great number of adopters of PCIe SSDs in consumer-grade products (enterprise-class products almost exclusively use PCIe interfaces these days). As one of the first manufacturers to really push for SSDs in mainstream devices (such as the 2010 MacBook Air), it’s not surprising that Apple would be among the first to take it to the next level. A note on connection standards, though: there’s been a fragmented approach to PCIe SSD interfaces, with a few new standards proposed but not really taking off. mSATA looks a lot like the PCIe SSD interface design on Apple’s products, but is distinctly different: it uses the SATA communication standard (designed for hard drives, not SSDs) and falls under the same 600MBps SATA III limit.
Perhaps the most interesting development in the world of PCIe SSDs has been the release of the 2013 MacBook Air, which boasts an impressive 800MBps read/write speed. Let’s put that into context: an ultra-portable laptop has a much faster SSD than any SATA III SSD you can buy for your powerhouse gaming desktop. And the announcement of the new MacBook Pro (slated for release “sometime in 2013”) is even more exciting, because it really opens the throttle on PCIe SSD performance.
So while SATA III SSDs have certainly leapfrogged their spinning magnetic hard drive ancestors in a relatively short amount of time, the rise of the PCIe SSD will soon eclipse even the fastest of those SATA III SSDs out there today (and already has in some cases). I can’t wait for the next generation of these speed demons to arrive, and hope the PC vendors can at least keep pace with Apple. It is for SSDs, one might say, the way of the future.