APC Australia

Benchmark your SSD

Check your speed the APC way.


YOU’LL NEED THIS

CrystalDiskMark 6 https://crystalmark.info/en

Anvil’s Storage Utilities 1.1.0 https://anvils-storageutilities.en.lo4d.com/windows

Beyond that, around 30GB of mixed files, ideally from a game install, is required.

Surely the most mercurial of PC components, solid-state drives have long presented a moving target when it comes to benchmarking. Early SSDs were all about headline transfer speeds. Then it became apparent that performance fell off rapidly as the drive’s capacity was used. As solid-state technology matured, random access to small data sets emerged as a more relevant measure of real-world performance. But even now, the best test regimes can’t fully capture the character of an SSD. After all, there’s no substitute for hammering a drive with day-to-day storage for years.

However, it is possible to get a pretty good idea of the performance of a drive in relatively short order using freely available software. Our approach involves testing both peak throughput and the arguably more relevant 4K performance, along with an added real-world element in the form of file transfers and prepping the drive for testing. The latter involves filling the drive entirely before running benchmarks. That can be quite time-consuming with very large drives, in which case you can opt to skip it. Where practical, and especially with a brand-new drive, it can help to make benchmarking more realistic as well as shake out any serious flaws in sustained performance.

Initial setup

To mimic our benchmarking, the target SSD should be a secondary drive rather than the primary OS drive. However, results don’t vary hugely when testing an SSD as the main system drive.

We typically test box-fresh SSDs, so a critical part of our regime is to first fill the drive to the brim and then delete all the data and format the volume. It’s an easy way to make sure that a drive’s basic garbage collection routines are functional. For an SSD that’s already heavily used, this is probably redundant.

It doesn’t really matter what kind of data you use for the pre-test fill. We favor large files like high-resolution video for simplicity. While you’re filling the drive, keep an eye on the Windows Explorer progress graph – it will show the transfer rate. Should the drive’s performance dip or stall due to exhausting any cache [Image A] or the controller chipset overheating, you’ll get a feel for how long that takes in terms of time or data quantity.
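If you’d rather not drag files around by hand, the fill can be scripted. Below is a minimal Python sketch, assuming the target SSD is mounted as E: and that a folder called E:\fill is acceptable – both are placeholders to adjust for your setup. It writes 1GB files of incompressible (random) data until around 5GB of free space remains, printing the speed of each file so any cache exhaustion or thermal throttling shows up as a dip, much like the Explorer progress graph.

```python
# fill_drive.py – a rough, optional way to script the pre-test fill.
# TARGET and the sizes below are placeholders; point TARGET at a folder
# on the SSD under test and adjust to taste.
import os
import shutil
import time

TARGET = r"E:\fill"            # hypothetical folder on the drive under test
CHUNK = 64 * 1024 * 1024       # write in 64MB chunks
FILE_SIZE = 1024 ** 3          # 1GB per filler file
KEEP_FREE = 5 * 1024 ** 3      # stop while ~5GB is still free

os.makedirs(TARGET, exist_ok=True)
buf = os.urandom(CHUNK)        # one incompressible buffer, reused so the CPU isn't the bottleneck

index = 0
while shutil.disk_usage(TARGET).free > KEEP_FREE + FILE_SIZE:
    path = os.path.join(TARGET, f"filler_{index:04d}.bin")
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(FILE_SIZE // CHUNK):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())   # make sure the data hits the drive, not just the OS cache
    elapsed = time.perf_counter() - start
    print(f"{path}: {FILE_SIZE / elapsed / 1e6:.0f} MB/s")
    index += 1
```

Once the run finishes, delete the fill folder and format the volume as usual before moving on to the benchmarks below.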

Peak sequential throughput

To measure peak performance, we use CrystalDiskMark 6 to test sequential read and write speeds. It’s worth noting that this type of benchmark typically shows the drive off at its best. The quantity of data transferred is limited and won’t exhaust features like high-speed caches, which can mask the underlying performance of the flash memory used in a drive.

CrystalDiskMark uses incompressible data in default mode, so it won’t throw out overly optimistic results as a consequence of the clever compression algorithms that often can’t be used with real-world data sets. Also, for SATA drives, sequential performance is often limited by the interface. In practice, you won’t see much more than 550MB/s in either direction.

Once downloaded, fire up CrystalDiskMark and select the correct drive in the drop-down menu. Then hit the “Seq Q32T1” button [Image B] and let it rip. Results in CrystalDiskMark should be close to the peak sequential performance claimed by your drive manufacturer. If not, you may have a faulty drive or a setup issue.
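CrystalDiskMark does the heavy lifting here, but if you’re curious what a sequential test boils down to, the Python sketch below is a rough approximation: write a large file of random data, sync it, then read it back, timing both directions. The drive letter and file size are assumptions, and the read figure should be treated as an upper bound because Windows may serve part of it from the file cache.

```python
# seq_check.py – a crude sequential read/write check, not a CrystalDiskMark replacement.
import os
import time

TEST_FILE = r"E:\seq_test.bin"   # hypothetical location on the drive under test
CHUNK = 64 * 1024 * 1024
TOTAL = 4 * 1024 ** 3            # 4GB test file
buf = os.urandom(CHUNK)          # incompressible data, as CrystalDiskMark uses by default

start = time.perf_counter()
with open(TEST_FILE, "wb") as f:
    for _ in range(TOTAL // CHUNK):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())
write_time = time.perf_counter() - start
print(f"sequential write: {TOTAL / write_time / 1e6:.0f} MB/s")

start = time.perf_counter()
with open(TEST_FILE, "rb") as f:
    while f.read(CHUNK):
        pass
read_time = time.perf_counter() - start
print(f"sequential read:  {TOTAL / read_time / 1e6:.0f} MB/s (optimistic if served from the cache)")

os.remove(TEST_FILE)
```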

Random access performanc­e

Peak sequential performance involves the biggest numbers. But when it comes to that all-important subjective sense of snappiness and response, peak throughput is largely irrelevant. Instead, measuring random access performance provides more insight.

The standard test in this context is usually known as 4K random reads and writes. “4K” refers to the size of each data block transferred, and “random” to the idea that those blocks will be written to or read from scattered locations on the drive, in a process that mimics typical OS and application drive traffic.

A further layer of distinction involves queue depth. That’s a measure of the number of parallel storage requests or threads in operation at any one time. In a desktop context, a typical application will generate a single thread, requesting a read or a write, waiting for the result, then following with another. Server systems often support multiple applications or clients and operate with queue depths of 32 or more. But for desktop PCs, queue depths in the range of one to four are more realistic. IOPS, or input/output operations per second, is another way of expressing the same performance.
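Converting between the two is simple arithmetic: each random operation moves a 4KiB block, so MB/s and IOPS are tied together by the block size. A quick Python sketch (the example figures are illustrative, not from any particular drive):

```python
BLOCK = 4 * 1024  # each random operation moves a 4KiB block

def iops_from_mbs(mb_per_s: float) -> float:
    """Turn a 4K MB/s figure into IOPS."""
    return mb_per_s * 1_000_000 / BLOCK

def mbs_from_iops(iops: float) -> float:
    """Turn a quoted IOPS figure back into MB/s."""
    return iops * BLOCK / 1_000_000

print(f"{iops_from_mbs(200):,.0f} IOPS")      # 200MB/s at 4K ≈ 48,828 IOPS
print(f"{mbs_from_iops(500_000):,.0f} MB/s")  # a 500,000 IOPS claim ≈ 2,048 MB/s
```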

Anvil’s Storage Utilities

Our weapon of choice for measuring 4K random performance is Anvil’s Storage Utilities, as it allows for tests at the queue depths most relevant to desktop computing.

Download Anvil version 1.1.0, fire it up and then select “Benchmarks” from the top menu. Choose “Threaded IO / write.” At the top of the pop-up window is a slider with a selection of queue depths. Choose queue depth one, then hit “Start” below. Note the result in MB/s, then repeat the process at QD2 and then QD4.

Next, go through the same process with reads [Image C], selecting “Benchmarks” and then “Threaded IO / read,” before once again testing performance at QD1, QD2, and QD4. You should find that QD1 is a worst-case scenario, with performance increasing as you increase the queue depth. Inevitably, most drive manufacturers quote 4K random or IOPS performance at QD32. That’s a best-case scenario that isn’t hugely relevant to desktop as opposed to server applications. Consequently, we focus our testing on shorter queue depths.
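Anvil does the measuring for you, but the sketch below illustrates what a QD1 4K random read test actually does: issue one 4KiB request at a time at random offsets within a large file, then count how many complete per second. The file path is a placeholder (a filler file from the pre-test fill works), and because standard Windows file caching still applies, the numbers will flatter the drive – treat it as an illustration rather than a substitute for Anvil.

```python
# random_read_qd1.py – illustration of a QD1 4K random read workload.
import os
import random
import time

TEST_FILE = r"E:\fill\filler_0000.bin"   # hypothetical large file on the drive under test
BLOCK = 4 * 1024
REQUESTS = 20_000

size = os.path.getsize(TEST_FILE)
# Pick 4KiB-aligned random offsets within the file.
offsets = [random.randrange(0, size - BLOCK) & ~(BLOCK - 1) for _ in range(REQUESTS)]

start = time.perf_counter()
with open(TEST_FILE, "rb", buffering=0) as f:
    for off in offsets:        # queue depth one: each request waits for the previous one
        f.seek(off)
        f.read(BLOCK)
elapsed = time.perf_counter() - start

print(f"QD1 4K random read: {REQUESTS / elapsed:,.0f} IOPS, "
      f"{REQUESTS * BLOCK / elapsed / 1e6:.0f} MB/s")
```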

File transfer

When it comes to emulating our SSD test routine, the trickiest element is our 30GB internal file copy. The idea is to provide a real-world snapshot of sustained performance and uncover any glaring limitations, such as cache exhaustion and chipset overheating.

For our testing, we use a 30GB tranche of game files, chosen for a mix of large and small file sizes. For obvious copyright reasons, we can’t share the precise files we use. However, select 30GB of mixed files from any modern game installation and you will have something that’s reasonably similar, albeit not accurate enough for direct comparisons of performance.

The test routine is straightforward. It’s conducted internally on the drive to exclude the influence of a secondary drive. First, copy your test files to the target drive. Then create a new folder on that same drive and paste a new copy of those files into the folder. Record the time it takes to complete the copy. Again, it’s worth keeping an eye on the Windows Explorer progress graph [Image D] to look for any obvious drops in performance.
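If you’d rather not time the copy with a stopwatch, the Python sketch below times an internal copy and works out the average transfer rate. The folder names are placeholders; point SOURCE at the ~30GB test set already sitting on the drive under test.

```python
# copy_test.py – time the internal file-transfer test and report the average speed.
import os
import shutil
import time

SOURCE = r"E:\test_files"        # hypothetical folder holding the ~30GB of mixed files
DEST = r"E:\test_files_copy"     # new folder on the same drive (must not already exist)

total_bytes = sum(
    os.path.getsize(os.path.join(root, name))
    for root, _, names in os.walk(SOURCE)
    for name in names
)

start = time.perf_counter()
shutil.copytree(SOURCE, DEST)
elapsed = time.perf_counter() - start

print(f"copied {total_bytes / 1e9:.1f}GB in {elapsed:.0f}s "
      f"({total_bytes / elapsed / 1e6:.0f} MB/s average)")
```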

Comparing drive performance

So, you’ve installed the software and run the numbers. But what does it all mean? As we implied earlier, much depends on the type of SSD in hand. SATA-based drives don’t just have the peak sequential limitations mentioned earlier. They’re also compromised by the AHCI protocol, which was originally conceived for mechanical drives with magnetic platters.

Newer M.2 drives benefit from both the PCI Express interface for improved peak bandwidth, and the NVMe protocol, which was expressly optimised for use with modern solid-state storage tech. For quad-lane M.2 drives operating on the PCIe 3.0 standard, maximum theoretical read and write speeds are in the region of 4GB/s, with real-world performance topping out at around 3.5GB/s. For the latest PCIe 4.0 drives, you can double that to 8GB/s as a theoretical maximum, with early drives hitting around 5GB/s in practice. As for 4K performance, that’s more of a leveller, with few drives of any type scoring above 200MB/s at QD1.
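Those headline figures fall out of simple per-lane arithmetic. Assuming roughly 0.985GB/s of usable bandwidth per PCIe 3.0 lane after encoding overhead, and double that per lane for PCIe 4.0:

```python
# Rough interface ceilings for quad-lane M.2 drives.
PCIE3_PER_LANE_GBS = 0.985            # approximate usable bandwidth per PCIe 3.0 lane
LANES = 4

for name, per_lane in (("PCIe 3.0 x4", PCIE3_PER_LANE_GBS),
                       ("PCIe 4.0 x4", PCIE3_PER_LANE_GBS * 2)):
    print(f"{name}: ~{per_lane * LANES:.1f}GB/s theoretical maximum")
# PCIe 3.0 x4: ~3.9GB/s, PCIe 4.0 x4: ~7.9GB/s – in line with the 4GB/s and 8GB/s
# ballparks above, before protocol overhead and real-world limits take their cut.
```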

