The Daily Telegraph - Saturday - Review

From sci-fi to Wi-Fi to my-wi

In just 50 years, a vision of the future became the phones in our pockets. Now, are we accelerating towards the post-human era?

By Jeanette Winterson

In 1965 my dad brought home a transistor radio from the television factory where he worked. At home, we still had the stately valve-amp radiogram that took up half the parlour, where my mother had listened to Churchill’s radio broadcasts while my dad was fighting in the war. As a small child in the 1960s, I liked to sit behind the humming radiogram, watching the orange glow of the glass valves. It was fairy-like and warm.

Those valves, as the Brits called them, were vacuum tubes. They were invented in Britain, in 1904, by John Ambrose Fleming – really as a spin-off from the incandescent light bulb, a filament inside an evacuated glass container. When hot, the filament releases electrons into the vacuum; it’s called the Edison Effect (technical term, thermionic emission). Thomas Edison had invented the light bulb in 1879, and Fleming realised that if he put a second electrode into a similar evacuated envelope, like a light bulb, this second electrode (the anode) would attract the electrons released from the heated cathode filament, and create a current. Vacuum tubes become easy to imagine if you think about old-fashioned filament light bulbs.

Remember (no, probably not, but I am old) how light bulbs used to get really hot? That was wasted energy generated as heat, not light, hence the term “more heat than light”, and the wonderful expression reminiscent of my entire childhood – “incandescent with rage”. Gentle low-energy bulbs just don’t offer the same opportunity for third-degree burns or social commentary.

But back to the vacuum tube. The vacuum tube was the early enabler of broadcast signals, whether the telephone network, radio or TV, and of course, early computers. Vacuum tubes do their job, but the glass is easy to break, and they are bulky and energy-inefficient, as the whole tube heats up along with the cathode. Early computers were huge because vacuum tubes and miles of connecting wires take up a lot of space, as well as using masses of electricity. The pretty orange glow they give off is waste.

In 1947, at Bell Labs in New Jersey, it was observed that when two separate point contacts, made of gold, were applied to a crystal of germanium (atomic number 32), a signal was produced where the output was greater than the input. Energy was not being wasted as heat loss. The guys described the discovery as transconductance within the varistor family (varistors are electronic components with a varying resistance depending on the input). This was a great description for jubilant electrical engineers, but it was never going to sell anything. An internal competition at Bell Labs championed the suffix ISTOR as being sci-fi-like and futuristic, and TRANS was clear and simple, so the brand-new world-changing product soon became known as the transistor.

By the mid-1950s, in America, Chrysler was offering an in-car all-transistor radio – which was better than your wife sitting on the passenger seat underneath a 20lb set of glowing valves. But it was Sony, in 1957, who manufactured the world’s first pocketable mass-production transistor radio, the TR-63. These came in funky colours like green, yellow and red. They looked modern. (Radiograms were brown or cream and looked like your parents’ wardrobe.) Best of all, the Sony could fit in a pocket – well, depending on the size of your pocket. The story went that Sony reps had special shirts with an oversized breast pocket.

But, whatever the outfit, the device was cool and neat and contemporary. No cathode meant no glow and no heat-up time. No longer would it take a few minutes, after the familiar click of the Bakelite switch, for the BBC World Service to crackle out of our set. The TR-63 ran on a 9-volt battery and boasted six transistors. Take off the back and here’s the circuit board, looking like a badly packed 1950s suitcase. This, though, is the beginning of the future – with the buzzwords we all know and love: instant, portable, personal.

By the early 1960s, transistors were replacing vacuum tubes at the cutting edge of technological development. Best of all, they were small – and their property of smallness changed everything. The first transistor measured around half an inch. They were placed on a printed circuit board. The integrated circuit followed at the end of the 1950s – transistors etched not onto germanium, but silicon – and by 1971 Intel had squeezed an entire processor onto a single chip. And then they got smaller and smaller and smaller, like something out of the genie world. So small that your iPhone 12 has 11.8 billion of them.

I think that needs a pause. Six transistors on the 1957 Sony portable TR-63; 11.8 billion in your hand right now. But in between then and now, quite a bit has happened – including the moon landing.

In 1969 Apollo 11 landed on the moon. Michio Kaku, theoretical physicist and author, put it like this: “Today, your cell phone has more computer power than all of NASA back in 1969, when it placed two astronauts on the moon.” That doesn’t mean your phone can fly you to the moon – but it is a useful comparison when thinking about the exponential increase in computing capacity in such a small amount of time.

So, what are we doing with the 100,000 times faster processing speeds in our iPhones? Well, mainly, playing games. We’re smart but we’re still apes. Pass the banana.

Thinking of bananas, remember the banana-shaped phone in The Matrix movies? The movies that make it seem inevitable that our world is only a simulation? That banana was a Nokia 8110, made by the firm that was then the world leader in mobile phones. But not a smartphone. The 1996 Nokia 9000 Communicator was the first mobile phone with internet connectivity – in a really limited way. Smartphones – digitally enabled devices that can do more than make a call – came into the world via IBM in 1994 with the Simon Personal Communicator. It was clunky, but alongside calls it could manage emails, and even faxes.

Almost thirty years earlier, in 1966, in her novel Rocannon’s World, sci-fi writer and general genius Ursula K Le Guin had devised the ansible – really a texting/email device that worked between worlds.

One end was fixed, the other end was portable. We would be waiting a while for that to hit Planet Earth.

In 1999 BlackBerry released its first handheld with the Qwerty keyboard. Like an ansible, with its keyboard and screen, the BlackBerry would come to handle calls, but its main function was email. We had to get into the 21st century for the Apple iPhone.

In 2007, when Apple was already making megamoney with its iPod, Steve Jobs was persuaded to “do” a phone that would handle everything the iPod did, plus make calls, send emails and texts, and access the internet. To do that Apple turned the humble phone into what Apple did best – computers. Safari-enabled, the iPhone wasn’t really a phone at all – it was a pocket computer.

A year later, in 2008 – the year of the global economic crash – Apple added the App Store, which is the beginning of what we think of as a truly smart phone: a phone that is globally connected, and that can be customised (personalised) by the user. It was a prescient move – a move driven by hackers and developers, who realised that what a phone is for isn’t making calls. Since the revolution in communication that is Facebook, a phone has become primarily a social-media device. Now we go on Instagram, Snapchat, WhatsApp, Twitter, YouTube, play games, check BuzzFeed, order food and cabs. Google the internet, ask Siri, click on Spotify or Sonos, and sometimes, maybe, make a call.

When is a phone not a phone? Google’s soon-to-be-realised dream of ambient computing – really the internet of things, where all smart devices, from fridges to phones, are connected – includes, at a later date, connecting humans directly to its services, and to one another, via a nanochip implant in our brain. This will be the ultimate, and planned, end of staring at your phone – an activity that presently involves 97 per cent of Americans and 37 per cent of the world. The timeline of the smartphone 2007–20?? may be one of the shortest in the history of any world-changing invention.

In 1964, when Arthur C Clarke made his predictions of a future where “we can be in instant contact wherever we may be [with] our friends anywhere on earth, even if we don’t know their actual physical location,” he saw the exponential impact of the transistor, but he also understood that network communication depends on satellites. The first man-made satellite in space was Sputnik 1 in 1957. It looked like a steel beach ball with feelers. Today, there are thousands of satellites in space – mostly put there by nation states for scientific research. Others are for mutual co-operation, such as telecoms, and the global GPS system that tells you (and others) where you are. TV and phone signals depend on our satellite network; signals are sent up to a satellite, and instantly relayed back down to earth again. This avoids annoying signal-blockers, like mountains, and saves thousands of miles of land-routed cable network.

Elon Musk’s SpaceX programme, Starlink, controls more than 25 per cent of all satellites in space, and he is seeking permission to get 12,000 up there by 2025, and eventually 42,000. There are risks to all this, including light pollution and energy guzzling. As with so much of tech, most of us just don’t know what is going on, and by the time we find out it will be too late to regulate. Musk is aggressively anti-regulation.

And who owns space? Not Elon Musk. This is another kind of land-grab. Another kind of enclosure. Governments will have to regulate space – if they don’t, it’s already been stolen. The 1967 Outer Space Treaty declared space to be a common good of mankind. By 2015 the Commercial Space Launch Competitiveness Act had a different wording: “to engage in the commercial exploration and exploitation of space resources”. New technology. Old business model.

A satellite is crazily simple – as well as being enormously complex. Sputnik 1 really is the size of a beach ball. Like every satellite, Sputnik 1 has antennae and a power source. The antennae send and receive information. The power source can be a battery or solar panels. On the journey from sci-fi to Wi-Fi – when a vision of the future becomes the phone in your pocket – it is transistors and satellites that join the dots. We think of the computer as the ultimate invention of the 20th century, yet without transistors and satellites your home computer would still be running on vacuum tubes, taking up the whole of your spare bedroom, and you’d be dialling up via your landline.

Are you old enough to remember scrabbling to connect via the telephone line and hearing the wheecracklebuzzbuzzbass of the slow-motion dial-up modem? Actually, it’s not that long ago. I live in the countryside, and even in 2009 I had no broadband. I was trying to conduct a love affair with a cool New Yorker living in London. She was fully connected. I was pretending to be.

Most mornings saw me propping my laptop on the bread board and running an extension to the solitary phone socket in the understairs cupboard. I made the mistake of leaving the extension in place and a week later the mice had chewed through everything. Mice love cable. I had no phone and no internet. Progress wasn’t on my side.

But what is Wi-Fi? What it’s not is wireless fidelity. Wi-Fi started out as “IEEE 802.11b Direct Sequence”. It’s radio waves. Plain old radio waves with a geeky label. Nobody was going to buy into that except a Dalek. So, in 1999, the brand consulting firm Interbrand made a pun on hi-fi, which really is high fidelity, and came up with the catchy name and icon we all know so well.

In that same millennial-moment year, when we were partying with Prince like it’s 1999, Apple launched the first Wi-Fi-enabled laptop. That is so recent. So near in time. Broadband internet was rolling out across the world’s cities by 2000. That felt like a true new beginning for a true new century. And look what happened next.

Google had started out as a small search engine in 1998. The telephone directory-style headline-only internet searches were boring and slow. Stanford students Sergey Brin and Larry Page thought they could do better – and by 2003 Google had become the default search for Yahoo. Google went public in 2004, the same year that Facebook joined the world – or the world joined Facebook. Those first 10 years of the new century were incredible: Wikipedia, 2001; YouTube, 2005; Twitter, 2006; Instagram, 2010.

Even old forms, like reading, caught the revolution as the iPad and Kindle kicked off mega-sales of ebook publishing. Those sales and those devices didn’t destroy the book though, any more than the car destroyed the bicycle. A physical book, like an apple or an egg, seems to me to be a perfect form. But a perfect form that is still evolving – like the bicycle. Not everything in this world is destined to be replaced by something else.

What about humans, though? Are we going to be replaced – or at least become less and less relevant – or are we evolving? In the next decade – 2020 onwards – the internet of things will start the forced evolution and gradual dissolution of Homo sapiens as we know it. But before we get to the internet of things and a world of connected devices – and some directly connected humans – let’s go back to the internet itself, to see how far we have come, and where we might be going.

Back in late-1960s America, soon after the Summer of Love, the Advanced Research Projects Agency Network (ARPANET) adopted a British packet-switching system to transmit limited data between research institutions. The more familiar term, INTERNET – really just internetworking – came into use in the 1970s, to describe a collection of networks linked by a common protocol.

It was Tim Berners-Lee, an Englishman working at the physics lab CERN, in Switzerland, who developed HTML. HTML (hypertext mark-up language) linked up an information system accessible from any node (computer) in the network. In 1990 the World Wide Web as we know it came into existence. Think of the internet as the hardware and the web as the software.
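
For the curious, that division of labour can be sketched in a few lines of code. What follows is only an illustration, not anyone’s official example – it assumes nothing beyond Python’s standard library, and uses example.com, an address reserved precisely for demonstrations – but it shows the internet doing the plumbing (moving bytes between nodes) while the web’s mark-up supplies the links that make those bytes worth following:

    # Illustrative sketch only: the internet moves the bytes; the web's mark-up makes them navigable.
    from html.parser import HTMLParser
    from urllib.request import urlopen

    class LinkFinder(HTMLParser):
        """Collect the href of every <a> tag - the 'hypertext' in hypertext mark-up language."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self.links.extend(value for name, value in attrs if name == "href" and value)

    # Step one, the internet: fetch raw bytes from another node on the network.
    page = urlopen("https://example.com").read().decode("utf-8")

    # Step two, the web: read the mark-up and see where it points next.
    finder = LinkFinder()
    finder.feed(page)
    print(finder.links)  # at the time of writing, a single onward link to iana.org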

By 2010, the web had become the way for billions of us across the world to access the internet. And, of course, we had Google as our search engine. The bigger the internet, the more sophisticated the search needs to be. The question now, though, is: are we being nudged as we search? Do we really want advertising in our face whenever we type in a word? Do we want our data tracked and traced and repackaged? Do we want to be profiled by an algorithm? Why can’t I buy something online without clicking ACCEPT on their privacy policy – which really means that what I’ve just bought is not private at all?

Personalising the web is where the money is. Your web – where everything is tailored to “help” you navigate faster, get to what you want, often via what you might be persuaded to want – is the new consumer model where the customer pays twice: with cash for the goods, and with the free gift of information about ourselves.

That information is valuable. Even when we aren’t buying stuff, when we are browsing around or using social media, we are being strip-mined for our data. Ads aren’t just selling you any old stuff – they are trying to sell you stuff your cookie trail tells them you might be persuaded to buy. More worrying than selling you stuff, your newsfeed is algorithmically tailored to what you “want” to hear about. Our clicks and likes determine the so-called Editors’ Picks, making sure that the little we know – and all our personal bias – will be looped back to us again and again, ensuring more clicks and likes in the echo chamber of “choice”.

Access to different ideas and a wider world view just disappears. It’s censored. Not by a censor, of course, because that would be totalitarian – but by what looks like personal choice, your very own personal choices, nudged a little, just for you.

Most of life is about being wrong, making mistakes, changing our minds. Web profiling means you need never be wrong, never seem to make a mistake, never have to change your mind. You’ll be sold what you have already bought. You will read what you have already read. Amplified.

This will get more interesting/worrying as Siri and Alexa grow up – or if Google figures out how to develop a genuine personal assistant for each of us. Siri and Alexa are fun, but all they really do is connect

When the wireless looked like your parents’ wardrobe: a 1940s family tunes into the radiogram
