Maximum PC

SOLID STATE OF THE NATION

With SSD prices at an all-time low, Jeremy Laird evaluates your solid-state storage options


ARE SSDS FINALLY cheap enough to qualify for mass storage? As we go to press this issue, that’s surely an affirmative. Compared to the prices of the first mainstream consumer SSDs of a decade ago, you now get over 10 times the storage for about one quarter of the money. Back in 2008, an 80GB drive from a major brand would set you back around $600. Today? Prices for solid-state drives in the 240GB to 256GB class kick off around $35. For 480GB to 512GB models? Make that about $60.

Granted, you’d be brave to bag one of the cheaper options from lesser-known brands. But big-brand, full-featured 1TB SSDs can currently be had for as little as $140. For many PC users, that’s both cheap enough and large enough to be considered viable for the mass storage of data. Of course, if you need major quantities of storage or simply want to maximize your capacity for as little money as humanly possible, traditional magnetic drives remain your weapon of choice. But a terabyte or three is enough for most of us, and in that context, solid state is now a serious option.

If that sounds like a simple sales pitch for solid state, the practical reality of choosing a drive is way more complicated. SSD technology has progressed beyond recognition since the first mainstream consumer drives hit the market 10 years ago or so. With that progression has come complication. Put simply, there’s way more to a good SSD than impressive peak read and write numbers. In terms of the end user experience, other factors, including latency and random access, can be more important. Endurance and longevity, and how those issues map to both drive technology and usage scenarios, matter, too.

Even at the component level, let alone full retail drives, there are far more vendors of SSD controllers and memory chips than there are makers of graphics chips and CPUs. It’s tough to keep up with all the options. Likewise, working out how to actually judge the performance of an SSD is far more complicated than for other core components. But that’s OK, because we’re here to give you the low-down on the current solid state of the nation.

FLASH SALE

Wind the clock back to fall 2008, and the arrival of Intel’s hot new X25-M solid-state drive. It wasn’t the first SSD aimed squarely at the consumer market, but it did mark the arrival of SSDs based on NAND flash memory as a consumer technology, albeit at a price. The aforementioned 80GB drive for $600? Yup, that was an Intel X25-M.

Anyway, what matters is that the X25-M and its early SSD siblings seemed like the final piece of the puzzle for achieving the holy grail that is the solid-state PC. Kiss goodbye to magnetic hard drives with spinning platters.

It’s easy to forget just what a revolutionary leap it was to move from traditional hard drives to solid state. Moving parts, put simply, mean latency and unreliability. As it turned out, however, the step change to solid-state storage didn’t go smoothly. The early drives basically blew everybody’s socks off with HDD-battering headline performance numbers, but it quickly became apparent that something was rotten in the solid state of Denmark. To be fair, the Intel X25-M wasn’t the first problematic SSD, and it wouldn’t be the last. But it was a very high-profile representative of the breed.

The problem is that the performance of SSDs configured with NAND flash memory degrades over time. The reason goes something like this: NAND flash memory is made up of cells. These cells are arranged in a hierarchical fashion. Individual cells are arranged into strings and, in turn, arrays. These arrays make up pages, at which point the overall number of cells typically clocks in at 32,000–128,000 per page. Those pages are arranged into yet larger blocks, which ultimately measure in megabytes.

So what, you ask? Here comes the kicker. While NAND memory can be read and written in pages, it can only be erased in whole blocks. The reason for that is complex, and involves voltage levels and minimizing errors. But the impact on performanc­e is sobering. Writing to empty memory cells is a breeze. But overwritin­g existing data is far more laborious. All data from the relevant block must be copied to a cache memory, the full block is erased, and then rewritten with the modified data.
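To make that asymmetry concrete, here’s a minimal, hypothetical Python sketch of a single NAND block. The page and block sizes are illustrative rather than taken from any real drive, but the flow is the one described above: programming an empty page is simple, while changing one page forces the whole block through a cache, erase, and rewrite cycle.

# Toy model of one NAND block: pages are programmed individually,
# but erasure only happens a whole block at a time.
PAGE_SIZE = 4096          # bytes per page (illustrative)
PAGES_PER_BLOCK = 256     # pages per block (illustrative)

class NandBlock:
    def __init__(self):
        self.pages = [None] * PAGES_PER_BLOCK  # None = empty (erased)

    def program(self, page_index, data):
        """Writing to an empty page is cheap."""
        if self.pages[page_index] is not None:
            raise ValueError("cannot overwrite a programmed page in place")
        self.pages[page_index] = data

    def erase(self):
        """Erasure works only on the whole block."""
        self.pages = [None] * PAGES_PER_BLOCK

    def overwrite(self, page_index, data):
        """Modifying one page means caching, erasing, and rewriting the lot."""
        cache = list(self.pages)            # 1. copy every page to cache
        cache[page_index] = data            # 2. apply the modification
        self.erase()                        # 3. erase the entire block
        for i, page in enumerate(cache):    # 4. reprogram all live pages
            if page is not None:
                self.program(i, page)

block = NandBlock()
block.program(0, b"hello")
block.overwrite(0, b"world")   # one changed page, a whole block's worth of work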

As a drive approaches full capacity, and the availability of empty cells, arrays, pages, and blocks dwindles, it’s not hard to see how this impacts performance. But even when a drive has significant free capacity, performance can be dramatically compromised. Again, it comes down to the fact that only full blocks can be erased. The consequence is that, in the first instance, when pages within a block are deleted at operating system level, on the drive they’re only marked as dead or redundant. To actually erase those pages would require erasing the whole block, and thus caching and rewriting any pages that retain live data. It’s therefore expedient in the short term to simply mark pages as containing redundant data rather than erase them.

The long-term result is that, as an SSD fills up, juggling data becomes ever more laborious. A nearly full drive involves lots of long-winded read, cache, erase, modify, and write processes. But the same applies—or at least did apply—to a heavily used drive with lots of spare space. The upshot of it all back in 2008 involved very expensive SSDs that became stuttering, almost useless lumps of silicon.

Several mitigating technologies have since emerged, including intelligent garbage collection and the TRIM command, that largely solve the problem of extreme SSD performance degradation. But the saga remains significant in the sense that it presaged a broader problem. Assessing SSD performance is much more complicated than merely measuring peak throughput from a box-fresh drive.
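In rough outline, those mitigations look something like the hypothetical Python fragment below: deleting data just marks pages as stale, TRIM is the operating system telling the drive which pages it can treat that way, and garbage collection quietly picks a victim block, relocates its live pages, and erases it in the background. Real firmware is far more sophisticated; this is only the shape of the idea.

# Toy flash translation layer: deletes just mark pages stale;
# garbage collection reclaims whole blocks in the background.
PAGES_PER_BLOCK = 256

class ToyFTL:
    def __init__(self, num_blocks):
        # Each page is "empty", "live", or "stale".
        self.blocks = [["empty"] * PAGES_PER_BLOCK for _ in range(num_blocks)]

    def write(self):
        # Out-of-place write: grab the first empty page anywhere on the drive.
        for b, block in enumerate(self.blocks):
            if "empty" in block:
                p = block.index("empty")
                block[p] = "live"
                return b, p
        raise RuntimeError("drive full: garbage collection needed")

    def trim(self, locations):
        # TRIM: the OS tells the drive which pages no longer hold live data,
        # so they can simply be flagged as stale rather than erased right away.
        for b, p in locations:
            self.blocks[b][p] = "stale"

    def garbage_collect(self):
        # Pick the block with the most stale pages, relocate its live pages
        # (a real FTL copies the data; here we just re-place them), then erase.
        victim = max(range(len(self.blocks)),
                     key=lambda b: self.blocks[b].count("stale"))
        survivors = self.blocks[victim].count("live")
        self.blocks[victim] = ["empty"] * PAGES_PER_BLOCK
        for _ in range(survivors):
            self.write()
        return victim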

Hold that thought. We’ll come back to the question of how SSD performance, and indeed SSD benchmarking, has developed since those early days. First, let’s consider the current state of the SSD market, and have a look at all the technologies and options available.

The very basics of SSDs involve form factors and interfaces. Currently, two options dominate. The first comprises drives contained in 2.5-inch enclosures and hooked up via the legacy SATA interface, courtesy of cables. More recently, drives presented on 22mm-wide bare circuit boards, which communicate predominantly over the PCI Express interface, have become popular, and are known as M.2.

SATA-based drives are most obviously limited in terms of bandwidth. The latest SATA III spec tops out at 600MB/s, with real-world transfers a little below that. M.2 and indeed other PCI Express-powered drives are limited by the number of PCIe lanes they utilize. M.2 drives currently max out at four PCIe 3.0 lanes, or just under 4GB/s of raw bandwidth. Lesser-seen options include SSDs configured as pure PCIe cards and U.2 drives, the latter theoretically combining the best of both SATA and PCIe, but they haven’t caught on in desktop systems.
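Those headline figures are simple arithmetic: SATA III signals at 6Gb/s with 8b/10b encoding, while each PCIe 3.0 lane runs at 8GT/s with the leaner 128b/130b scheme. A quick back-of-envelope check in Python, using only the standard encoding overheads:

# Back-of-envelope interface bandwidth, before protocol overhead.
sata3 = 6e9 * (8 / 10) / 8              # 6Gb/s line rate, 8b/10b encoding
pcie3_lane = 8e9 * (128 / 130) / 8      # 8GT/s per lane, 128b/130b encoding
m2_x4 = 4 * pcie3_lane                  # typical M.2 slot: four PCIe 3.0 lanes

print(f"SATA III    : {sata3 / 1e6:.0f} MB/s")     # ~600 MB/s
print(f"PCIe 3.0 x4 : {m2_x4 / 1e9:.2f} GB/s")     # ~3.94 GB/s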

Further nuance involves control protocols. Long story short, the legacy AHCI protocol used in conjunction with the SATA interface was never conceived with solid-state drives in mind. However, the newer NVMe (Non-Volatile Memory Express) protocol solves that problem, and helps make the most of the theoretical advantages of solid-state tech, especially random access performance. Most but not all M.2 drives support NVMe, so bear that in mind when shopping.
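If you’re not sure what a given system is actually using, the operating system will usually tell you. As a rough, Linux-only illustration (the sysfs paths below are the standard ones on modern kernels, but this is a sketch, not a diagnostic tool):

# Rough check (Linux): NVMe devices register under /sys/class/nvme,
# while SATA/AHCI (and SAS or USB) drives appear as plain sd* block devices.
from pathlib import Path

nvme_dir = Path("/sys/class/nvme")
nvme = sorted(d.name for d in nvme_dir.iterdir()) if nvme_dir.exists() else []
sata = sorted(d.name for d in Path("/sys/block").glob("sd*"))

print("NVMe devices:", ", ".join(nvme) or "none")
print("SATA/other sd* devices:", ", ".join(sata) or "none")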

TAKE IT ON BOARD

Of course, motherboard support is central to much of this. Older boards do not have M.2 slots. The good news is that converter cards for M.2 SSDs that plug into standard PCI Express slots are available. But remember that legacy motherboards may not play nicely with newer SSDs, especially those that utilize the NVMe protocol. Our advice for any motherboard that does not have an M.2 slot is to check support for booting from NVMe drives before assuming it’s present.

It’s also worth considering platform support for PCIe-powered drives generally. With Intel’s mainstream and mobile platforms, M.2 slots are hooked up via the chipset, rather than directly to the CPU’s own PCIe lanes. That means the drives ultimately communicate with the CPU via the DMI bus. As it happens, the current implementation of the DMI bus is effectively a quad-lane PCIe 3.0 connection by another name. The problem, of course, is that the DMI bus carries all chipset traffic, including USB, legacy storage, networking, and potentially multiple M.2 SSDs.
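A little arithmetic shows why that matters. Treating DMI as a four-lane PCIe 3.0 link gives roughly 3.9GB/s to share; the traffic figures below are purely hypothetical, but two fast NVMe drives alone can ask for more than the link can deliver:

# Illustrative only: all chipset traffic funnels through one ~x4 PCIe 3.0 link.
dmi_gbs = 4 * 8e9 * (128 / 130) / 8 / 1e9      # ~3.94 GB/s of raw bandwidth

# Hypothetical peak demand from chipset-attached devices.
demand = {"NVMe SSD #1": 3.5, "NVMe SSD #2": 3.5, "USB + network + SATA": 0.8}
total = sum(demand.values())

print(f"DMI capacity ~{dmi_gbs:.2f} GB/s, peak demand {total:.1f} GB/s")
if total > dmi_gbs:
    print("Chipset-attached devices would have to contend for the link.")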

You can get around that limitation by using a pure PCI Express SSD or a PCI Express M.2 adapter card that drops into one of the available x16 PCI Express slots, and thus connects directly to the CPU. However, because Intel’s mainstream desktop processors only have 16 available PCI Express lanes, using both x16 slots means dropping any installed graphics card from 16 lanes to eight, which could impact gaming performance in extreme circumstances. By contrast, both Intel’s high-end desktop and all of AMD’s existing desktop platforms have enough direct-to-CPU lanes to maintain a full 16-lane graphics interface alongside a quad-lane PCI Express storage solution.

The final motherboard-related concern is to ensure you have an optimal configuration. When boards offer multiple M.2 slots, they may not be equal in terms of PCI Express lanes. What’s more, not only will slots share the DMI link on a mainstream Intel board, as mentioned above, but they can also conflict with the SATA storage bus. Put simply, with any given motherboard, there will be both limitations to and optimal configurations for connecting a mix of M.2 and SATA drives. Consult your motherboard manual for more information.

Next up is the actual technology used for the memory chips in an SSD. For better or worse, conventional NAND flash memory remains by far the dominant non-volatile memory technology in SSDs. Intel’s Optane and Samsung’s Z-SSD drives, it’s true, offer potentially revolutionary non-volatile memory alternatives in the shape of 3D XPoint and Z-NAND, respectively. However, it’s early days for both technologies (see “The Future of SSDs” boxout on page 43).

But even among NAND SSDs, there is now a dizzying array of memory cell technologies. Just for starters, you’ve got SLC, MLC, TLC, and QLC NAND. Respectively, each technology can store one, two, three, and four bits of data per memory cell. The benefits of increasing per-cell capacity are obvious enough: more capacity for less die space, and so lower cost. But as cell density increases, there are trade-offs.
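The trade-off scales as a power of two: each extra bit per cell doubles the number of charge levels the controller has to tell apart. A trivial illustration:

# Bits per cell vs. the voltage levels the controller must distinguish.
for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
    levels = 2 ** bits
    print(f"{name}: {bits} bit(s) per cell, {levels} voltage levels, "
          f"{bits}x the data of SLC in the same cell count")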

Firstly, endurance falls off a cliff. SLC, or single-level cell, NAND flash memory is good for as many as 100,000 program-erase cycles before a given memory cell loses its ability to store data. That falls to 1,000 for the latest QLC NAND. That’s a huge drop-off in endurance.
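A common rule of thumb puts a drive’s lifetime writes at roughly capacity multiplied by program-erase cycles, divided by the write amplification the controller incurs. The figures below are illustrative assumptions, not specifications, but they show how quickly endurance shrinks as cell density climbs:

# Rough endurance rule of thumb; all inputs are illustrative assumptions.
def lifetime_writes_tb(capacity_gb, pe_cycles, write_amplification=2.0):
    return capacity_gb * pe_cycles / write_amplification / 1000

print(lifetime_writes_tb(1000, 100_000))  # hypothetical 1TB SLC drive: 50,000 TB
print(lifetime_writes_tb(1000, 1_000))    # hypothetical 1TB QLC drive: 500 TB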

Then there’s performance. Reading and writing a QLC memory cell means distinguishing between, and applying, 16 possible voltage levels. Long story short, reading and writing to multi-level NAND cells becomes an increasingly laborious process as per-cell density increases.

ALGORITHM SECTION

Inevitably, mitigating technologies that offset both the loss of endurance and performance as cell density increases have emerged. Wear-leveling algorithms ensure that writes are spread evenly across the drive, while over-provisioning of memory means any given drive can tolerate a certain amount of cell failure before available capacity begins to shrink. Such measures are good enough for Crucial to rate its new QLC-powered drive, the P1, with a mean time to failure of 1.8 million hours, and an overall write endurance of 200TB, which should be plenty for most consumers.
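To put that 200TB figure in context, spreading it over a five-year span (an assumed horizon, not Crucial’s warranty terms) still leaves a hefty daily write budget:

# How far a 200TB write-endurance rating stretches over an assumed five years.
tbw = 200                                # rated terabytes written (from above)
years = 5                                # assumed usage horizon
per_day_gb = tbw * 1000 / (years * 365)
print(f"~{per_day_gb:.0f}GB of writes per day, every day, for {years} years")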

Performance, meanwhile, is much improved by allocating a large chunk of the memory as a cache running in much faster SLC mode. But as the drive fills up and eventually eats into that SLC cache, performance begins to tail off. It’s another example of how SSD performance isn’t entirely straightforward. It also helps to explain why big, cheap drives are inevitably slower.
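A toy model makes that tail-off easy to picture: sustained writes run at SLC-cache speed until the cache is exhausted, then drop to the native speed of the dense NAND underneath. Every number here is invented for illustration; real drives vary their cache size dynamically.

# Toy SLC-cache model: fast until the cache fills, then native-speed writes.
SLC_CACHE_GB = 40       # hypothetical cache size
SLC_SPEED_MBS = 1800    # write speed into the cache (illustrative)
QLC_SPEED_MBS = 100     # write speed once data goes straight to QLC (illustrative)

def write_speed(gb_written_in_burst):
    return SLC_SPEED_MBS if gb_written_in_burst < SLC_CACHE_GB else QLC_SPEED_MBS

for gb in (10, 39, 41, 100):
    print(f"{gb:3d}GB into the burst: {write_speed(gb)} MB/s")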

As for 3D NAND memory, that’s available in multiple per-cell densities just like standard 2D NAND, but involves stacking cells vertically in many layers to pack more capacity into a given die area.

M.2 is now the default option for high-performance SSDs.

Intel’s X25-M marked the arrival of consumer SSDs.

SATA limits SSD performance, but remains a popular choice.

Samsung Evo 860 (left); Samsung Pro 860 (right). The SSD was the final piece of the solid-state PC puzzle.

Intel’s Optane drives are setting new standards for responsiveness.

Real-world SSD performance is about much more than raw throughput.

Samsung’s Z-NAND technology is promising, but mysterious.
