Benchmark your SSD
Check your speed the APC way.
YOU’LL NEED THIS
CrystalDiskMark 6 https://crystalmark.info/en
Anvil’s Storage Utilities 1.1.0 https://anvils-storageutilities.en.lo4d.com/windows
Beyond that, around 30GB of mixed files, ideally from a game install, is required.
Surely the most mercurial of PC components, solid-state drives have long presented a moving target when it comes to benchmarking. Early SSDs were all about headline transfer speeds. Then it became apparent that performance fell off rapidly as the drive’s capacity was used. As solid-state technology matured, random access to small data sets emerged as a more relevant measure of real-world performance. But even now, the best test regimes can’t fully capture the character of an SSD. After all, there’s no substitute for hammering a drive with day-to-day storage for years.
However, it is possible to capture a pretty good idea of the performance of a drive in relatively short order using freely available software. Our approach involves testing both peak throughput and arguably more relevant 4K performance, along with an added real-world element in terms of file transfers and prepping the drive for testing. The latter involves filling the drive entirely before running benchmarks. That can be quite time-consuming with very large drives, in which case you can opt to skip it. Where practical, and especially with a brand-new drive, however, it can help to make benchmarking more realistic as well as shake out any serious flaws in sustained performance.
Initial setup
To mimic our benchmarking, the target SSD should be a secondary rather than primary OS drive. However, results don’t vary hugely when testing an SSD as the main system drive.
We typically test box-fresh SSDs, so a critical part of our regime is to first fill the drive to the brim and then delete all the data and format the volume. It’s an easy way to make sure that a drive’s basic garbage collection routines are functional. For an SSD that’s already heavily used, this is probably redundant.
It doesn’t really matter what kind of data you use for the pre-test fill. We favor large files like high-resolution video for simplicity. While you’re filling the drive, keep an eye on the Windows Explorer progress graph – it will show the transfer rate. Should the drive’s performance dip or stall due to exhausting any cache [Image A] or the controller chipset overheating, you’ll get a feel for how long that takes in terms of time or data quantity.
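If you’d rather script the pre-test fill than drag files around in Explorer, a minimal Python sketch along these lines can do the job while printing per-file throughput, so any cache exhaustion or thermal throttling shows up as a dip in the numbers. The target path and the file and chunk sizes here are hypothetical placeholders – adjust them for your drive.

```python
import os
import shutil
import time

CHUNK = 64 * 1024 * 1024        # 64MB write chunks
FILE_SIZE = 4 * 1024 ** 3       # 4GB per dummy file

def fill_drive(target, file_size=FILE_SIZE, chunk=CHUNK, max_files=None):
    """Write incompressible dummy files until the drive is nearly full,
    printing per-file throughput so cache exhaustion shows as a dip."""
    os.makedirs(target, exist_ok=True)
    buf = os.urandom(chunk)     # random bytes defeat controller compression
    index = 0
    while shutil.disk_usage(target).free > file_size:
        if max_files is not None and index >= max_files:
            break
        path = os.path.join(target, f"fill_{index:04d}.bin")
        start = time.perf_counter()
        with open(path, "wb") as f:
            for _ in range(file_size // chunk):
                f.write(buf)
            f.flush()
            os.fsync(f.fileno())   # make sure data actually reaches the drive
        elapsed = time.perf_counter() - start
        print(f"{path}: {file_size / elapsed / 1e6:.0f} MB/s")
        index += 1
    return index

# Example (hypothetical folder on the SSD under test):
# fill_drive(r"E:\fill")
```

Once the drive is full, delete the fill folder and format the volume as described above before running any benchmarks.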
Peak sequential throughput
To measure peak performance we use CrystalDiskMark 6. It’s the tool we use for testing sequential read and write performance. It’s worth noting that this type of benchmark typically shows the drive off at its best. The quantity of data transferred is limited and won’t exhaust features like high-speed caches, which can mask the underlying performance of the flash memory used in a drive.
CrystalDiskMark uses incompressible data in default mode, so it won’t produce overly optimistic results as a consequence of the clever compression algorithms that often can’t be used with real-world data sets. Also, for SATA drives, sequential performance is often limited by the interface. In practice, you won’t see much more than 550MB/s in either direction.
Once downloaded, fire up CrystalDiskMark and select the correct drive in the drop-down menu. Then hit the “Seq Q32T1” button [Image B] and let it rip. Results in CrystalDiskMark should be close to the peak sequential performance claimed by your drive manufacturer. If not, you may have a faulty drive or a setup issue.
Random access performance
Peak sequential performance involves the biggest numbers. But when it comes to that all-important subjective sense of snappiness and response, peak throughput is largely irrelevant. Instead, measuring random access performance provides more insight.
The standard test in this context is usually known as 4K random reads and writes. “4K” refers to the file size, and “random” to the idea that the files will be written to or read from multiple locations on the drive, in a process that mimics typical OS and application drive traffic.
A further layer of distinction involves queue depth. That’s a measure of the number of parallel storage requests or threads in operation at any one time. In a desktop context, a typical application will generate a single thread, requesting a read or a write, waiting for the result, then following with another. Server systems often support multiple applications or clients and operate with queue depths of 32 or more. But for desktop PCs, queue depths in the range of one to four are more realistic. The same performance is sometimes quoted as IOPS, or input/output operations per second, rather than MB/s.
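Converting between IOPS and MB/s is simple arithmetic – multiply the IOPS figure by the block size (4,096 bytes for a 4K test). A quick sketch, using a hypothetical spec-sheet figure of 90,000 IOPS as the example:

```python
def iops_to_mbps(iops, block_bytes=4096):
    """Convert an IOPS figure to throughput in MB/s (decimal megabytes)."""
    return iops * block_bytes / 1e6

def mbps_to_iops(mbps, block_bytes=4096):
    """Convert a throughput in MB/s back to IOPS at a given block size."""
    return mbps * 1e6 / block_bytes

# A hypothetical spec-sheet claim of 90,000 4K IOPS works out to:
print(iops_to_mbps(90_000))   # 368.64 MB/s
```

This is handy for comparing a manufacturer’s IOPS claims against the MB/s figures that benchmark tools report.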
Anvil’s Storage Utilities
Our weapon of choice for measuring 4K random performance is Anvil’s Storage Utilities, as it allows for tests at the queue depths most relevant to desktop computing.
Download Anvil version 1.1.0, fire it up, and then select “Benchmarks” from the top menu. Choose “Threaded IO / write.” At the top of the pop-up window is a slider, with a selection of queue depths. Choose queue depth one, then hit “Start” below. Note the result in MB/s, then repeat the process at QD2 and then QD4.
Next, go through the same process with reads [Image C], selecting “Benchmarks” and then “Threaded IO / read,” before once again testing performance at QD1, QD2, and QD4. You should find that QD1 is a worst-case scenario, with performance increasing as you increase the queue depth. Inevitably, most drive manufacturers quote 4K random or IOPS performance at QD32. That’s a best-case scenario that isn’t hugely relevant to desktop as opposed to server applications. Consequently, we focus our testing on shorter queue depths.
File transfer
When it comes to emulating our SSD test routine, the trickiest element is our 30GB internal file copy. The idea is to provide a real-world snapshot of sustained performance and uncover any glaring limitations, such as cache exhaustion or chipset overheating.
For our testing, we use a 30GB tranche of game files, chosen for a mix of large and small file sizes. For obvious copyright reasons, we can’t share the precise files we use. However, select 30GB of mixed files from any modern game installation and you will have something that’s reasonably similar, albeit not accurate enough for direct comparisons of performance.
The test routine is straightforward. It’s conducted internally on the drive to exclude the influence of a secondary drive. First, copy your test files to the target drive. Then create a new folder on that same drive and paste a new copy of those files into the folder. Record the time it takes to complete the copy. Again, it’s worth keeping an eye on the Windows Explorer progress graph [Image D] to look for any obvious drops in performance.
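If you’d prefer a repeatable timing to a stopwatch, a short Python sketch can drive the copy and work out the average transfer rate for you. The paths in the example call are hypothetical placeholders for your test folder and its destination on the same drive.

```python
import os
import shutil
import time

def timed_copy(src, dst):
    """Copy a folder tree and report elapsed time and average MB/s."""
    # Total up the payload size first so we can compute a rate afterwards.
    total = sum(
        os.path.getsize(os.path.join(root, name))
        for root, _, files in os.walk(src)
        for name in files
    )
    start = time.perf_counter()
    shutil.copytree(src, dst)   # dst must not already exist
    elapsed = time.perf_counter() - start
    print(f"Copied {total / 1e9:.2f}GB in {elapsed:.1f}s "
          f"({total / elapsed / 1e6:.0f} MB/s)")
    return total, elapsed

# Example (hypothetical paths, both on the SSD under test):
# timed_copy(r"E:\TestFiles", r"E:\TestFiles_copy")
```

Note that the copy is cached by Windows to some extent, so run it two or three times and take the average for a steadier figure.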
Comparing drive performance
So, you’ve installed the software and run the numbers. But what does it all mean? As we implied earlier, much depends on the type of SSD in hand. SATA-based drives don’t just have the peak sequential limitations mentioned earlier. They’re also compromised by the AHCI protocol, which was originally conceived for mechanical drives with magnetic platters.
Newer M.2 drives benefit from both the PCI Express interface for improved peak bandwidth, and the NVMe protocol, which was expressly optimized for use with modern solid-state storage tech. For quad-lane M.2 drives operating on the PCIe 3.0 standard, maximum theoretical read and write speeds are in the region of 4GB/s, with real-world performance topping out at around 3.5GB/s. For the latest PCIe 4.0 drives, you can double that to 8GB/s as a maximum theoretical, with early drives hitting around 5GB/s in practice. As for 4K performance, that’s more of a leveller, with few drives of any type scoring above 200MB/s at QD1.
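Those headline figures fall straight out of the PCIe link arithmetic: each PCIe 3.0 lane signals at 8GT/s with 128b/130b line encoding, and PCIe 4.0 doubles the transfer rate. A quick sketch of the sums:

```python
def pcie_bandwidth_gbps(transfer_rate_gtps, lanes=4):
    """Theoretical one-way PCIe bandwidth in GB/s, assuming the
    128b/130b line encoding used by PCIe 3.0 and later."""
    return transfer_rate_gtps * lanes * (128 / 130) / 8

print(round(pcie_bandwidth_gbps(8), 2))    # PCIe 3.0 x4: 3.94 GB/s
print(round(pcie_bandwidth_gbps(16), 2))   # PCIe 4.0 x4: 7.88 GB/s
```

Protocol overheads eat into those theoretical ceilings, which is why real-world drives top out closer to 3.5GB/s and 5GB/s respectively.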