Tweaking Our Take on SSDs
It’s time to adjust our aim at an ever-moving target
SSDs ARE A PAIN in the ass. At least, they are from a lab-testing perspective. Of all the major components, they’re the trickiest to test. For instance, it’s simply not viable to assess the long-term performance of an SSD when you might only be loaned the darned thing for a few days.
Likewise, the metrics by which SSDs are measured have shifted dramatically over time. At first, it was all about raw bandwidth. Then it became apparent that drive performance could degrade over time. The latter issue has largely been resolved, at least with well-engineered drives. More recently, it’s become clear that what really matters for desktop PCs is 4K random access at low queue depths.
You can read more about that in our feature on page 36, but suffice to say it’s time to tweak our testing methodology. We were already measuring 4K performance, but now we’re increasing granularity by testing 4K reads and writes at queue depths of one, two, and four. We’re keeping sequential reads and writes to provide a headline view of peak performance.
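For the curious, here’s a minimal sketch of what a 4K random-read pass at those queue depths might look like, written in Python rather than taken from our actual benchmark suite. The device path, test span, and run time are placeholder assumptions, and queue depth is approximated with one thread per outstanding request; a proper harness would issue asynchronous I/O, as dedicated tools such as fio do.

```python
import mmap
import os
import random
import threading
import time

# Hypothetical test parameters -- adjust for the drive you want to probe.
DEVICE = "/dev/nvme0n1"    # raw device, opened read-only (needs root on Linux)
BLOCK = 4096               # 4K transfers
SPAN = 8 * 1024**3         # confine random reads to the first 8GB
DURATION = 10              # seconds per queue depth
QUEUE_DEPTHS = (1, 2, 4)

def worker(fd, stop, results, lock):
    # An anonymous mmap gives a page-aligned buffer, which O_DIRECT requires.
    buf = mmap.mmap(-1, BLOCK)
    count = 0
    while not stop.is_set():
        offset = random.randrange(SPAN // BLOCK) * BLOCK
        os.preadv(fd, [buf], offset)
        count += 1
    with lock:
        results[0] += count

for qd in QUEUE_DEPTHS:
    # O_DIRECT bypasses the OS page cache, so we measure the drive, not RAM.
    fd = os.open(DEVICE, os.O_RDONLY | os.O_DIRECT)
    stop, lock, results = threading.Event(), threading.Lock(), [0]
    threads = [threading.Thread(target=worker, args=(fd, stop, results, lock))
               for _ in range(qd)]
    start = time.time()
    for t in threads:
        t.start()
    time.sleep(DURATION)
    stop.set()
    for t in threads:
        t.join()
    elapsed = time.time() - start
    os.close(fd)
    iops = results[0] / elapsed
    print(f"QD{qd}: {iops:,.0f} IOPS ({iops * BLOCK / 1e6:.1f} MB/s)")
```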
Likewise, our 30GB internal file copy test remains a useful real-world test. Admittedly, any half-decent NVMe drive will chew that benchmark up, but cheaper SATA drives that lack a DRAM cache or rely on a small chunk of SLC cache to post decent-looking sequential numbers can be caught out. An example of that is Crucial’s new entry-level BX500 drive, particularly in its 120GB capacity.
That drive can only sustain its advertised performance for about 1.5GB of sequential data, at which point throughput falls off a cliff. Taken together, our tests can’t quite provide a full long-term view of how a drive will stand up to years of use, but the mix of sequential and random synthetics, plus a real-world sense check in the form of our internal file copy test, gives a good overall picture of what you can expect.
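To illustrate what “falls off a cliff” looks like in practice, here’s a rough Python sketch of the kind of sustained sequential write that exposes a small SLC cache. The file path and sizes are placeholder assumptions, and calling fsync after each chunk is a blunt stand-in for direct I/O, but on a drive with roughly 1.5GB of fast cache you’d expect the reported throughput to drop sharply partway through the run.

```python
import os
import time

# Hypothetical target file on the drive under test -- make sure there's room for it.
TARGET = "/mnt/testdrive/slc_probe.bin"
CHUNK = 64 * 1024 * 1024        # 64MB per write
TOTAL = 8 * 1024**3             # 8GB in total, well past a ~1.5GB SLC cache
buf = os.urandom(CHUNK)         # incompressible data, so compression can't flatter the result

fd = os.open(TARGET, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
written = 0
while written < TOTAL:
    start = time.time()
    os.write(fd, buf)
    os.fsync(fd)                # push the chunk out of the OS cache before timing it
    elapsed = time.time() - start
    written += CHUNK
    print(f"{written / 1024**3:5.2f} GB written: {CHUNK / elapsed / 1e6:7.1f} MB/s")

os.close(fd)
os.remove(TARGET)               # clean up the probe file
```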
Cutting-edge drives demand new metrics.