Linux Format

Benchmark Linux

So, you think your system’s fast? Faster than Jonni Bidwell’s? Almost certainly. And now you can prove it with our awesome guide to speed testing.


Testing stuff is hard – let us show you how to make it easy(ier) with our benchmark guide.

The different hardware components in your computer all run at given speeds or have easily accessible speed limits. If your hard drive or SSD is attached to a SATA 3.0 bus then it has a theoretical maximum transfer rate of 600MB/s, while a fancy M.2 SSD (connected to a fast enough PCIe slot) will easily manage 2.5GB/s on a good day, and the bus itself (using four PCIe 3.0 lanes) can manage 3.9GB/s.
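That 3.9GB/s figure can be sanity-checked from first principles. This little sketch (variable names are ours) does the sum with plain POSIX shell integer arithmetic: PCIe 3.0 signals at 8GT/s per lane, 128b/130b encoding means 128 usable bits per 130 transferred, and there are 8 bits to the byte:

```shell
# Back-of-the-envelope check of the four-lane PCIe 3.0 figure quoted above.
GTS=8       # gigatransfers per second, per PCIe 3.0 lane
LANES=4
# usable MB/s = GT/s * 1000 * (128/130 encoding efficiency) / 8 bits, per lane
MBS=$(( GTS * 1000 * 128 / 130 / 8 * LANES ))
echo "${MBS} MB/s"   # prints 3936 MB/s with integer truncation, ie ~3.9GB/s
```

Integer division truncates a little, but it lands on the same ~3.9GB/s the spec sheets quote.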

Yes, your CPU will change frequency according to load (likewise your GPU), and yes, these things can be overclocked, but all these numbers can be looked up or otherwise calculated. The trouble is, most of the time they don’t correlate with real-world performance, since most real-world operations use a variety of different aspects of the system. For example, given all the specs of all the hardware involved, it’s still hard to say how quickly a system will boot a vanilla install of the latest Fedora. Likewise, what kind of FPS you’ll see if you turn everything up to 11 in Shadow of Mordor. These real-world measurements are tricky because they involve all kinds of intangibles – overheads introduced by the filesystem, code paths used by the graphics driver, latencies introduced by scheduling in the kernel. Tricky to predict, but, minus a few caveats, not so tricky to measure.

Benchmarking is the dark art of performing this measurement. It involves running standard programs that can be compared across different machines, each program testing some particular aspect of the system. But nothing’s ever simple and it’s easy to get benchmarking wrong. Background programs, thermal throttling, laptop power-saving and display compositors can all interfere. Games in particular tend to do better on one processor manufacturer or GPU driver, so using a particular title and assuming the results will give an objective ranking is foolhardy.

“These real-world measurements involve all kinds of intangibles.”

For all the controversy surrounding systemd, it gave us a simple one-shot way of measuring boot time – just running systemd-analyze critical-chain will give a good measure of the wait from GRUB to the login screen. For more detail try systemd-analyze blame, which shows how long each individual service takes to complete. Bear in mind that services start in parallel, and long-running tasks (eg, updating the mlocate database) happen in the background, so they don’t get in the way as much as one might suspect. A cool feature of systemd’s boot profiling is its ability to make pictures – for example, systemd-analyze plot > ~/boot.svg will graph the data from the blame command above, emitting a file in your home directory which can be viewed in a web browser or vector graphics program (eg, Inkscape).

Finally, a more detailed graph can be generated using systemd-bootchart. This is a little more involved, and requires modifying grub.cfg, so if you’re not au fait with how this file is laid out you may want to skip this part. Some distros (eg, Ubuntu and Fedora) have split this functionality into a separate package, systemd-bootchart, which you’ll need to install for this to work. The bootchart module is invoked from the kernel’s init parameter, usually used to specify an alternative init system. In this case, though, the usual init process (ie, /sbin/init) is forked off while the bootchart times, probes and measures a plethora of variables. So to summon the bootchart on next boot, edit /boot/grub/grub.cfg, adding init=/usr/lib/systemd/systemd-bootchart to the line that loads your kernel (it begins with linux). All going well, boot should complete and an SVG image named bootchart-$DATE.svg should have popped up in the /run/log/ directory. This typically isn’t very well drawn (axis labels end up all atop one another), but does give more information than systemd-analyze plot, such as CPU usage and disk I/O.

Optimising the boot process isn’t generally a thing that’s done anymore – systemd starts units in parallel, and knows which units depend on which, so there isn’t much to be gained from shuffling these things about. In general, if you want your system to boot faster, replacing that old hard drive housing your OS with a shiny new SSD is probably the best way to go.
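For reference, the edited kernel line in grub.cfg ends up looking something like the following. The kernel version, UUID and other parameters here are invented – yours will differ, and only the init= addition at the end is new:

```
linux  /boot/vmlinuz-4.10.8 root=UUID=0123abcd-feed-beef-cafe-000000000000 ro quiet splash init=/usr/lib/systemd/systemd-bootchart
```

Remember that grub.cfg is regenerated by tools such as update-grub, so this edit only persists until the next time that happens – fine for a one-off profiling run.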

Storage speedgun

And once you’ve done that, why not benchmark your new acquisition? This can be done with the humble dd utility (the same one you use to write Linux ISOs to USB sticks), subject to a couple of gotchas. The method we demonstrate here writes to a regular file, so it depends on the filesystem and partition alignment, but should (so long as there’s no sneaky hardware-based caching going on) provide an accurate measure of sequential read and write speeds. First mount the drive and cd to a directory on it where you have read/write access (or become root if you’re feeling powerful). We’re going to write a 1,000MiB file full of zeros, of which the /dev/zero device has an interminable supply:
$ dd if=/dev/zero of=testfile bs=1M count=1000 conv=fdatasync

Note the fdatasync option supplied through the conv parameter. It ensures data is actually written out, rather than being left in a buffer, before the process finishes. On our test system, which is nothing special, dd reports as follows:
1048576000 bytes (1.0 GB, 1000 MiB) copied, 8.61276 s, 122 MB/s
which is fairly typical for an old spinning-rust drive (speed varies over the platter, too). We can now recycle this file to measure the drive’s read speed, but we must be careful – the file, or bits of it, may be cached, so we first instruct the kernel to drop those buffers before doing the read test:
$ echo 3 | sudo tee /proc/sys/vm/drop_caches
$ dd if=testfile of=/dev/null bs=1M count=1000

Rerun the second command to see why the first is necessary – our system reported an amazing (and wrong) 7GB/s without having first dropped the cache.
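The whole recipe can be wrapped up in a short script. This is a sketch of the same dd procedure – the file location, size and messages are our own choices, not gospel: point TESTFILE at the drive under test, and note that the cache drop is skipped when you’re not root (in which case the read figure will be cache-inflated):

```shell
#!/bin/sh
# Sequential write/read benchmark sketch using the dd recipe above.
# TESTFILE and SIZE_MB are illustrative; we use a smaller file than the
# article's 1,000MiB purely for brevity.
TESTFILE="${TMPDIR:-/tmp}/dd_benchmark_testfile"
SIZE_MB=256

# Write test: conv=fdatasync forces the data to disk before dd exits
WRITE_REPORT=$(dd if=/dev/zero of="$TESTFILE" bs=1M count="$SIZE_MB" \
    conv=fdatasync 2>&1 | tail -n 1)
echo "write: $WRITE_REPORT"

# Drop the page cache so the read test hits the device, not RAM (root only)
if [ "$(id -u)" -eq 0 ]; then
    echo 3 > /proc/sys/vm/drop_caches
else
    echo "not root: skipping cache drop, read speed may be wildly optimistic"
fi

# Read test
READ_REPORT=$(dd if="$TESTFILE" of=/dev/null bs=1M 2>&1 | tail -n 1)
echo "read: $READ_REPORT"

rm -f "$TESTFILE"
```

Run it a few times and take the median – a single run can be skewed by whatever else the system happens to be doing.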

Results obtained from dd may be a little slower than the device’s true capabilities, since filesystems and fragmentation get in the way. Reading directly from the device circumvents this delay, and GNOME’s Disks utility lets you do just this. Disks is included in Ubuntu and most GNOME-based distros. The package is usually named gnome-disk-utility should you need to install it manually. Start the program and select the drive you wish to test, then click on the two small cogs (additional partition options) below the Volumes diagram. Select Benchmark Partition and then Start Benchmark.

“Results obtained with ‘dd’ may be a little slower than the device’s true capabilities.”

At this point, you’ll see that a write test is available, but this requires the device to be unmounted, so it won’t work on the partition containing your OS. Furthermore, it should not be used on a device containing data that hasn’t been backed up – the test writes random data all over the place, and while in theory it writes back the original data afterwards, a power outage or crash would preclude this, so don’t shoot yourself in the foot here. The non-destructive read test is easy to instigate – the default sample numbers and sizes are fine, so just hit the Start button. The scatter plot and graphs will be updated as the test progresses, showing access time and read rates respectively.

So far we’ve used ‘grassroots’ tools to do benchmarking. Windows users, by comparison, have a variety of tools, suites, demos and other fully featured programs to put their systems through their paces. Cinebench, CrystalDiskMark, Catzilla, FurMark and the Futuremark suite (3DMark, PCMark and VRMark) are often used to test the latest hardware on snazzy tech websites, but alas none have Linux versions available. Some of these can be made to work in Wine, but that’s not likely to give you anywhere near a fair measurement. Cinebench and 3DMark in particular are quite DirectX-centric, so if you wanted to get reasonable data, you’d have to mess around with the CSMT and/or the Gallium Nine patches for Wine and your graphics drivers. This is beyond the scope of this feature. Happily, there are some splendid (and free) programs available for Linux.

Perhaps the most exciting of these is the one that’s only just been released as we type this: Unigine Superposition. This uses the Unigine 2 engine to test VR capabilities, render beautiful scenes and give you pretty minigames to play. Unigine’s other famous benchmarks, Valley and Heaven, can be downloaded for free from https://unigine.com/en/products/benchmarks/. Advanced and Pro versions are available for a fee (a substantial one in the latter case), offering extra features such as report generation and commercial usage.

Despite the complexity of the scenes they render, getting Superposition, Heaven and Valley running is easy. Just download the .run file from the website, extract it and run it. For example, to run Heaven, do as follows:
$ sh Unigine_Heaven-4.0.run
$ cd Unigine_Heaven-4.0
$ ./heaven

A menu will pop up allowing some settings to be configured. It’s tempting to go straight for Ultra quality and x8 anti-aliasing, but these really will make your graphics card hot under the collar, so choose something gentle to begin with, and then hit the Run button. You’ll be transported to a magical steampunk realm, all beautifully rendered and with fps displayed in the top-right corner. Wireframing and tessellation can be toggled with F2 and F3, and pressing Esc will open a menu. To begin the actual benchmark, which measures min and max fps over all scenes, press F9. Once it completes you can save an HTML report of the proceedings.

Phoronix Test Suite

The pinnacle of Linux benchmarking, however, is Michael Larabel’s Phoronix Test Suite (PTS). It’s extendable, automatable, reproducible and open source. If there’s something you want it to test that it doesn’t test already, then you can write your own tests in XML and share them with the community. Results can be uploaded to the openbenchmarking.org website so you can compare your results with other systems on prettily rendered graphs. Installing PTS on Debian-based systems is easy – just grab the .deb file, following the download links on http://phoronix-test-suite.com. It can be installed with Gdebi or a good old-fashioned dpkg -i phoronix-test-suite_7.0.1_all.deb. There are some PHP dependencies, so if dpkg complains these

“You’ll be transported to a magical steampunk realm, all beautifully rendered.”

can be installed with apt install -f. There are a number of pre-packaged tests that can speed test pretty much everything you can think of in a largely reliable and robust manner. This includes classic Linux stuff like compiling a kernel (which depends both on disk I/O and processor power), synthetic benchmarks like the Unigine family mentioned before, and games – both open source ones and those available on Steam.

Suppose we want to perform the aforementi­oned kernel compilatio­n benchmark. That’s just a matter of:

$ phoronix-test-suite benchmark build-linux-kernel

PTS will grab the kernel sources (4.9 at present), install any needed dependencies (gcc, make, etc – it can do this for a variety of distros) and ask you if you want to upload your results. Then it will dutifully compile the kernel three times (or more if there’s significant discrepancy between timings) and report an average. A humble LXF staffer’s machine takes about five minutes, but the fancy Ryzen machine we played with in LXF223 managed it in a mere 78s.

There are many tests available (just run phoronix-test-suite list-available-tests), but some of them haven’t been updated or otherwise don’t work. For CPU benchmarking, fftw (the Fastest Fourier Transform in the West) is a good one – it’s vital for digital signal processing. John the Ripper (password cracking), encode-flac (audio encoding) and c-ray (raytracing) are also good. For measuring disk I/O we’d recommend IOzone or Dbench. PTS also supports testing a number of games, both FOSS titles (eg, Xonotic, SuperTuxKart and OpenArena) and proprietary ones from your Steam library (BioShock Infinite, Mad Max, Civilization VI and many more). PTS seems to get confused by any of the Steam-based benchmarks if Steam isn’t already running in the background, so start it first and then run PTS from the command line.

The reproducibility of PTS’s benchmarks makes them useful for comparing your system to others. For example, to compare your awesome rig to this writer’s decrepit and dust-covered device in the raytracing, audio-encoding, public key and PHPBench quadrathlon, just run the following:

$ phoronix-test-suite benchmark 1704127-RI-LXFBENCHM5­1

Or to just view the results, visit http://openbenchmarking.org/result/1704127-RI-LXFBENCHM51. Something went awry when we added the PHPBench results, and PTS decided that we were using a different system. We most assuredly were not, so compare your results to ours, and feel free to write to our superiors and tell them that we need faster machines. Lots of them.

Believe it or not, we’ve really only scratched the surface of what can be benchmarked and how it can be done reliably. It’s worth looking into creating your own tests and suites in PTS, but there just isn’t room to write about it here. It’s also worth looking at programs such as hardinfo.

For basic stress-testing, just running CPU-intensive benchmarks over an extended period of time does pretty much the job. The Great Internet Mersenne Prime Search is still going strong (making it the longest-running distributed computing project in history) and the associated Prime95 software (available under a free licence) has a non-participation mode which is ideal for testing your system’s stamina. LXF

Our machine didn’t do very well at Unigine Heaven – 5fps on low quality with no tessellation. Most underwhelming.
Disks tells us that our write rate is less than half our read rate, which is the usual way of things.
The Gallium HUD can be made to work with Steam as well. We don’t really know what ps invocations are either.
Glxgears is not very useful as a benchmark anymore, but it’s good for testing the Gallium HUD with appropriate hardware and drivers.
