SUPER LINUX!
Let’s change tack and talk about software, because the majority of the world’s servers and virtual machines do run Linux. But why?
This sounds like a contradiction, but it’s not because the software is free of charge (technically it is, but it still costs money to deploy and support). It’s because the software has been freed under the GNU GPL (General Public License).
Back in the day, Microsoft required a Windows licence for every physical processor in a system. Think about that for a moment. For your single-core desktop, who cares? But even modest workstations had dual CPUs. And what about enterprises that run warehouse-sized server installations? Even worse, what do you do if you want to spin up infinities of virtual hosts on the fly?
The open-licence nature of Linux, and the associated open-source software ecosystem, just makes everyone’s lives easier when it comes to deploying it in complex environments. Even if there were no cost savings, that alone would make it compelling for business. However, that was just one driving factor.
The same issues, and more, applied to academia and research: not only the freedom to use the software, but to study, research and modify the source code at will, with no restrictions other than the GPL’s stipulation that you share your changes back.
This creates a virtuous feedback loop of development. Students study and enhance the Linux kernel, businesses utilise and optimise the kernel, and researchers explore and develop entirely new systems. The students move on to the professional world, taking their open-source work and ideas with them, and so on. It’s this type of progress that eventually led to every single supercomputer in the world’s top 500 running Linux (www.top500.org/statistics/overtime). Perhaps RISC-V will experience the same…?
Consider Acorn, maker of the BBC (the UK’s public broadcaster) Microcomputer. In 1983, it took the radical decision to develop its own 32-bit RISC processor architecture, called ARM, to power its next computer model. If that name isn’t familiar, Arm now powers tens of billions of mobile devices.
A reduction of choice
At this point, no one would blame you for thinking this was a feature on RISC, rather than open hardware. What we’ve been doing is looking at the background of why open hardware is needed. And that background is one of ever-reducing options – not just for consumers, but for developers, manufacturers, and innovators. The last decade of the 20th century was vibrant for the processor market, with multiple vendors selling multiple architectures. So what happened? Intel kept driving its
fab technology, which eventually gave it enough transistors to optimise its x86 architecture – in post-Pentium Pro designs, x86 instructions are decoded into internal RISC-like micro-ops – and pushed out all the competition.
The main downside of this pursuit of power is that it consumes more energy, which is fine when your target is servers, workstations, desktops or even bulky laptops. But jump forward four decades, and the emphasis is on low-power architectures for cell phones and ultra-mobile laptops, and Intel’s x86 simply can’t compete with Arm and its RISC design. A lazy way of putting this is that it requires more transistors to decode an x86 CISC instruction than an Arm RISC one, so x86 consumes more power per instruction. It isn’t the only reason why Intel has failed to make an impact in the ultra-mobile market, but it’s a major one.
So, all we’ve done is swap one monopolising architecture for another – big whoop. You can argue that Arm offers a better competitive landscape, in that anyone who wants to and can afford it can license the Arm architecture and sign its required NDAs. With x86, only Intel, AMD and VIA can design and sell x86 processors; we believe IBM can produce processors based on the 80486 architecture, but it’s legally complicated and there have been lawsuits.
An open hardware processor promises to do away with licences and processor monopolies. It might sound like a pipe dream, but a solution that you can buy already exists. It’s called RISC-V (http://riscv.org) and it was developed at Berkeley, the birthplace of RISC.
Established during the summer of 2010, RISC-V (the fifth-generation Berkeley design) sets itself apart from other processor designs for a number of key reasons. The first we’ve already covered: it’s an open design, licensed under the open-source BSD licence. Another key difference is that previous open architecture designs focused on simplicity, to ease understanding for academic teaching, rather than being optimised for practical commercial deployment. RISC-V also has all the required commercial-level software tools in place. The instruction set architecture, for example, is implemented in the open-source compiler GCC. All the components to boot an OS, such as Debian Linux or FreeBSD, have been in place since 2016, including the important U-Boot and UEFI specification support.
These are the required firmware and software tools you need to compile an operating system and software for the platform, with the firmware to bootstrap a system from cold, load the UEFI environment and hand over to an OS kernel. Boom! An entire open system.
Analysis of RISC-V
So what is RISC-V, and when is it going to be running our desktop/laptop/robot-overlords? At its heart, RISC-V is an open, extensible processor architecture. There’s a base fixed set of definitions, with various open extensions. It supports 32-, 64- and (as yet unused) 128-bit address widths.
We can’t emphasise enough that benchmarks are largely irrelevant at this point, because RISC-V implementations are going to be targeting embedded, low-power, controller and enthusiast boards. No one is going to be releasing a consumer-level laptop or desktop at this early stage. There is a question, though, of how efficient the ISA is versus other commercial implementations.
A thorough micro-op analysis of, say, x86-64 versus RISC-V compiled binaries is the sort of thing good PhD theses are made of. In fact, that was exactly what was done back in 2016; you can see an outline of the report on YouTube (www.youtube.com/watch?v=ii_pexkkyug), which shows RISC-V binaries are competitive in micro-op density, and can outperform x86 and ARMv8 code, with a number of potential compiler optimisations highlighted by the study.
For a simpler look, we’ll take the ancient integer-based Dhrystone benchmark. It may be getting on a bit, but it’s been tweaked to mitigate hardware and compiler cheats. It also acts as a rough guide for performance across differing hardware architectures.
The standard measure is Dhrystone MIPS per MHz, aka DMIPS/MHz. It roughly scores how much work a single core can do per MHz of clock. It obviously doesn’t count accelerated vector or SIMD instructions; think of it as a standard program benchmark. We’ve listed a range of processors – the one we find of interest is the Atom N455, as this is a modern in-order x86 architecture. It clearly has work to do to catch up with Arm; the Cortex-A53 is an eight-stage, dual-issue in-order architecture and, running in the Raspberry Pi 3, is significantly faster.
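To make the metric concrete, here’s a minimal sketch of the conversion. The 1,757 figure is the canonical VAX 11/780 baseline that defines one DMIPS; the score in the example is a made-up, hypothetical number, not a measurement of any processor mentioned here.

```python
# Convert a raw Dhrystone result into the DMIPS/MHz figure used above.
# One DMIPS is defined relative to the VAX 11/780, which scored
# 1,757 Dhrystone iterations per second.
VAX_11_780_DHRYSTONES_PER_SEC = 1757

def dmips_per_mhz(dhrystones_per_sec: float, clock_mhz: float) -> float:
    """Normalise a Dhrystone score to DMIPS per MHz of core clock."""
    dmips = dhrystones_per_sec / VAX_11_780_DHRYSTONES_PER_SEC
    return dmips / clock_mhz

# Hypothetical core: 2,636,000 Dhrystones/sec at 1,000MHz
# works out at roughly 1.5 DMIPS/MHz.
print(round(dmips_per_mhz(2_636_000, 1000), 2))
```

Two cores can then be compared on efficiency per clock, independent of how fast each one happens to be clocked.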
SiFive’s HiFive Unleashed board, for example, is built around the 64-bit RISC-V RV64GC architecture. The interesting part is that you can download every element of the design: the schematics, bill of materials and processor design files. SiFive has gone on to offer a custom RISC-V design service, offering solutions from the smallest embedded packages up to Arm Cortex-A72-level cores.
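That RV64GC name encodes the modular, base-plus-extensions nature of the ISA: a 64-bit base integer set, with G shorthand for the I+M+A+F+D “general” bundle and C for compressed instructions. Here’s a rough sketch of expanding such a name; the descriptions are our own summaries, not official tooling, and current specs fold Zicsr/Zifencei into G as well, which we omit for brevity.

```python
# Expand a RISC-V ISA string such as "RV64GC" into its standard extensions.
# "G" is shorthand for the general-purpose bundle I+M+A+F+D.
EXTENSIONS = {
    "I": "base integer instructions",
    "M": "integer multiply/divide",
    "A": "atomic operations",
    "F": "single-precision floating point",
    "D": "double-precision floating point",
    "C": "compressed 16-bit instructions",
}

def expand_isa(isa: str) -> dict:
    """Split e.g. 'RV64GC' into its register width and extension letters."""
    prefix, exts = isa[:4], isa[4:]       # "RV64", "GC"
    exts = exts.replace("G", "IMAFD")     # expand the G shorthand
    return {
        "width": int(prefix[2:]),
        "extensions": {e: EXTENSIONS[e] for e in exts},
    }

info = expand_isa("RV64GC")
print(info["width"])               # 64
print(sorted(info["extensions"]))  # ['A', 'C', 'D', 'F', 'I', 'M']
```

The same scheme is why vendors can ship anything from a tiny RV32I microcontroller core to a full RV64GC Linux-capable part and still call both RISC-V.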
At this early stage of development, it seems silly to compare performance. The single-issue and in-order design is going to peg the performance below any Arm Cortex-a5x processor, which is out of order with branch prediction. Indeed, tests of the Hifive Unleashed show it running from four to 10 times slower than the 2GHZ Nvidia Jetson TX2. The Nvidia SOC shows the sort of competition RISC-V, as an architecture, is up against. It’s not enough to deliver a working processor core; these days, people expect a host of connectivity to come with it, from memory controllers and PCIE buses to Bluetooth and wireless and wired networking. All of these come with their own controllers and patents, so the dream of building a truly open hardware platform is an uphill struggle, but it’s one that can be overcome.
Commercial interest
RISC-V is certainly attracting industry attention. Current Platinum members of the RISC-V Foundation include Google, HP, IBM, Oracle, Microsoft, Nvidia and Qualcomm. At the seventh RISC-V workshop at the end of November 2017, Western Digital announced that it was planning to transition a billion cores per year to RISC-V designs, for data centre and edge computing. It released its first SweRV EH1 core at the end of 2018, and plans for the EH2 are underway; see https://github.com/chipsalliance/cores-swerv. Samsung is using RISC-V in its 5G modems.
It’s a bold statement, given that its implementations will be, at best, low-end CPUs running in embedded controllers – though that could deliver a cost advantage to WD down the line. These are early days for RISC-V, but like the Linux kernel, which was also dismissed at first, once academia, researchers and businesses start enhancing your open design, world domination is just a decade away.
If we were Intel, we’d be worried, and it seems Arm is already circling its wagons. But RISC processors are weak, aren’t they? You forget how powerful they were in the early 90s. Up until 2018, China had the fastest supercomputer in the world, built around its homegrown Sunway RISC processors. It’s planning three new models for 2020, again x86-free. Japan has a similar design using Arm processors for 2021. Even the recent US Summit supercomputer gets the majority of its power not from its IBM Power9 RISC processors, but from Nvidia’s Tesla V100 units. While Intel or AMD will undoubtedly be powering your desktop through the 2020s, with Arm inside your mobile devices, we’d be surprised if RISC-V wasn’t appearing in all manner of embedded devices, even some mobile ones, and perhaps moving to the data centre. Your desktop could be next.