CELEBRATE 30 YEARS OF LINUX!
How a 21-year-old’s bedroom coding project took over the world and a few other things along the way.
Linux only exists because of Christmas. On January 5, 1991, a 21-year-old computer science student, still living with his mum, trudged through the (we assume) snow-covered streets of Helsinki with his pockets stuffed full of Christmas gift money. Linus Torvalds wandered up to his local PC store and purchased his first PC, an Intel 386 DX33, with 4MB of memory and a 40MB hard drive. On this stalwart machine he would write the first-ever version of Linux. From this moment on, the history of Linux becomes a love story about community collaboration, open-source development, software freedom and open platforms.
Before walking into that computer store, Linus Torvalds had tinkered on the obscure (UK-designed) Sinclair QL (Quantum Leap) and the far better-known Commodore VIC-20. Fine home computers, but neither was going to birth a world-straddling kernel. A boy needs standards to make something that will be adopted worldwide, and an IBM-compatible PC was a perfect place to start. But we’re sure Torvalds’ mind was focused more on having fun with Prince of Persia at that point than specifically developing a Microsoft-conquering kernel.
Let’s be clear: a 21-year-old, barely able to afford an Intel 386 DX33, was about to start a development process that would support a software ecosystem, which in turn would run most of the smart devices in the world, a majority of the internet, all of the world’s fastest supercomputers, chunks of Hollywood’s special effects industry, SpaceX rockets, NASA Mars probes, self-driving cars, tens of millions of SBCs like the Raspberry Pi and a whole bunch of other stuff. How the heck did that happen? Turn the page to find out…
LINUS TORVALDS’ ACHIEVEMENT
“A 21-year-old, barely able to afford an Intel 386 DX33, was about to start a development process that would support a software ecosystem…”
To understand how Linux got started, you need to understand Unix. Before Linux, Unix was a well-established operating system standard, developed from the end of the 1960s into the 1970s. It was already powering mainframes built by the likes of IBM, HP, and AT&T. We’re not talking small fry, then – they were mega corporations selling their products around the globe.
If we look at the development of Unix, you’ll see certain parallels with Linux: freethinking academic types who were given free rein to develop what they wanted. But whereas Unix was ultimately boxed into closed-source corporatism, tied to a fixed and dwindling development team, eroded by profit margins and lawyers’ fees, the community that grew up around Linux embraced a stricter open approach. This enabled free experimentation, development and collaboration on a worldwide scale. Yeah, yeah, you get the point!
Back to Unix, an operating system standard that started development in academia at the end of the 1960s at MIT and Bell Labs, then part of AT&T. The initial uni-processing OS, spawned from the Multics OS, was dubbed Unics, and came with an assembler, an editor and the B programming language. At some point that “cs” was swapped for an “x,” probably because it was cooler, dude.
At some point, someone needed a text editor to run on a DEC PDP-11 machine. So, the Unix team obliged and developed roff and troff, the first digital typesetting system. Such unfettered functionality demanded documentation, so the “man” system (still used to this day) was created with the first Unix Programming Manual in November 1971. This was all a stroke of luck, because the DEC PDP-11 was the most popular minicomputer of its day, and everyone focused on the neatly documented and openly shared Unix system.
In 1973, version 4 of Unix was rewritten in portable C, though it would be five years until anyone tried running Unix on anything but a PDP-11. At this point, a copy of the Unix source code cost almost $100,000 in today’s money to license from AT&T, so commercial use was limited during the 70s. However, by the early 80s costs had dropped rapidly, and widespread use at Bell Labs, AT&T and among computer science students propelled the adoption of Unix. It was considered a universal OS standard, and in the mid-1980s the POSIX standard was proposed by the IEEE, backed by the US government. This makes any operating system that follows POSIX at least partly, if not largely, compatible with other versions.
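The practical upshot of POSIX is portability. As a rough illustration (the file name and word-counting task are our own invention, not from the Unix history above), a script that restricts itself to POSIX-specified shell syntax and utilities will run unchanged on Linux, the BSDs and commercial Unixes alike:

```shell
#!/bin/sh
# Hypothetical demo: everything below - sh, printf, wc, rm, $(...)
# command substitution and $((...)) arithmetic - is specified by POSIX.
set -eu
printf 'free as in freedom\n' > /tmp/posix_demo.txt
words=$(wc -w < /tmp/posix_demo.txt)    # wc is a POSIX utility
printf 'word count: %s\n' "$((words))"  # arithmetic strips any padding wc adds
rm -f /tmp/posix_demo.txt
```

The same script, byte for byte, behaves identically on any compliant system, which is exactly the compatibility the IEEE standard set out to guarantee.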
At the end of the 1980s, the Unix story got messy, with commercial infighting, competing standards and the closing-off of once-open standards, a period often dubbed the Unix Wars. While AT&T, Sun Microsystems, Oracle, SCO, and others argued, a Finnish boy was about to start university…
We GNU that
Before we dive into the early world of Linux, there’s another part of the puzzle of its success that we need to put in place: the GNU Project, established by Richard Stallman. Stallman was a product of the 1970s development environment: a freethinking, academic, hippy type. One day, he couldn’t use a printer, and because the company refused to supply the source code, he couldn’t fix the issue – supplying source code was quite normal at the time. He went apoplectic and established a free software development revolution: an entire free OS ecosystem, free software licence and philosophy that’s still going strong. Take that, proprietary software!
The GNU Project was established by Stallman in 1983, with GNU being a hilarious (to hackers, at least) recursive acronym for “GNU’s Not Unix.” Geddit? Its aim was to establish a free OS ecosystem with all the tools and services a fully functioning OS requires. Do keep in mind that most of the tools created then are still being used and maintained today.
By 1987, GNU had established its own compiler, GCC, the Emacs editor, the basis of the GNU Core Utilities (basic file manipulation tools such as list, copy, delete and so on), a rudimentary kernel and a chess engine (See LXF273). But more importantly, Stallman had cemented his ideal of software freedom with the 1989 “copyleft” GPL software licence, and his manifesto setting out the four software freedoms enabling users to run, study, modify and distribute any software – including the source – for any reason.
The GPL remains the strongest copyleft licence, and while it has perhaps fallen out of vogue, it’s still regarded as the best licence for true open-source development, and cements most Linux distros. GCC is still an industry standard, Emacs remains a feature-rich development environment, and the GNU Core Utilities are still widely used in certain POSIX systems and most Linux distros.
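To make those Core Utilities concrete, here’s a small, hypothetical terminal session (the paths are our own invention) exercising the list, copy and delete basics the GNU Project reimplemented as free software:

```shell
# Sketch of three GNU Core Utilities in action on a scratch directory.
mkdir -p /tmp/coreutils_demo
printf 'hello\n' > /tmp/coreutils_demo/original.txt
cp /tmp/coreutils_demo/original.txt /tmp/coreutils_demo/copy.txt  # copy
listing=$(ls /tmp/coreutils_demo)                                 # list
rm -r /tmp/coreutils_demo                                         # delete
```

Unremarkable commands, but that’s the point: these everyday tools on virtually every Linux box are GNU code dating back to the 1980s.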
You could argue that without the GNU Project being established, Linux would never have taken off. The GPL licence (adopted early on in Linux development) forces all developers to share back their enhancements to the source code. It’s a feedback loop that promotes shared improvements. Alternative open-source licences enable corporations to take source code and never share back improvements, meaning the base code is more likely to remain static. This was backed by a generation of developers that grew up studying and using Unix, looking to contribute to a truly free open-source OS.

RICHARD STALLMAN’S GNU WORK
“He established a free software development revolution: an entire free OS ecosystem, free software licence and philosophy that’s still going strong.”
Let’s all Freax out!
We’re getting ahead of ourselves. Linus Torvalds had his Intel 386, was studying computer science at the University of Helsinki, and was using the MINIX 16-bit OS and kernel. MINIX is a POSIX-compatible Unix-like OS and micro-kernel. In 1991 its licence was relatively liberal: $69 bought you the source code, but modification and redistribution were restricted.
We imagine the 16-bit limitation spurred Torvalds to create his own 32-bit kernel, but he states the licence restrictions were also key. So, on 25 August 1991, he posted to comp.os.minix that he was developing his own free OS. He said that it was “nothing professional like GNU,” and that it would only support AT hard disks, as that’s all he had.
This was developed on a MINIX system, compiled on GNU GCC, and he’d ported GNU bash. Torvalds had planned to call his OS Freax, combining “Free,” “Freak,” and “Unix,” but once he’d uploaded it to ftp.funet.fi, a volunteer admin (Ari Lemmke) renamed it Linux, as he thought it was better. So, version 0.01 of Linux was released to the world in September 1991.
One telling part of the release notes states: “A kernel by itself gets you nowhere. To get a working system you need a shell, compilers, a library, etc… Most of the tools used with Linux are GNU software and are under the GNU copyleft. These tools aren’t in the distribution – ask me (or GNU) for more info.”
Importantly, this outlines Linux’s reliance on other GPL-licensed tools, and shows the use of the term “distribution,” now shortened to “distro.” As Torvalds points out, an operating system isn’t a kernel alone; it’s a collection of tools, scripts, configs, drivers, services and a kernel, lumped together in a form that’s easier for users to install and use.
As for the licence, Torvalds initially used his own, which restricted commercial use, but by January 1992 he’d been asked to adopt the GPL, and stated that the kernel licence would change to align it with the other tools being used. With the release of v0.99 in December 1992, the Linux kernel became GPLv2-licensed. This cemented the legal requirement that anyone distributing the kernel source has to contribute back any changes used in published code.