The Daily Telegraph

Fujitsu’s role in the Post Office scandal is proof that we trust technology too much

ITV’s drama highlighted the consequences of believing that computers cannot make mistakes: ‘The law presuming that the computer is correct must now be revised’

- ANDREW ORLOWSKI

In September 1983, Lieutenant Colonel Stanislav Petrov chose not to believe what his computer was telling him, and probably saved the world. The system had assured him that the very convincing blips on the Soviet early warning radar scopes were five incoming intercontinental ballistic missiles, launched from the United States as a first strike.

The Soviet Union’s strategy was to order a massive counter strike as soon as it saw such an attack. In reality, the blips were just high-altitude clouds. Five also seemed a small number, Petrov reasoned, with which to start a surprise thermonuclear exchange.

Sometimes the computer generates a fiction, but why would we choose to believe it? That’s what a lot of us were wondering last week, watching ITV’s Mr Bates vs The Post Office.

The drama has shocked Britain, leaving us astonished at how long the Post Office was able to persecute the innocent. But the scandal began with a computer-generated fiction, just like the one that confronted Petrov.

At Fujitsu, staff wrote very poor-quality code that generated a false reality: widespread cash shortfalls at Post Office branches.

Theft was the logical inference. The prosecutors and courts also chose to believe that the computer couldn’t be lying, so the postmasters must be.

To cap it all, the innocent were for many years deprived of the reality-based evidence they needed to prove their innocence.

Only in 2018 and 2019 did Mr Bates and his correspondents receive that proof, as the Post Office was finally obliged to disclose records: specifically, the KEL (Known Error Log) and audit data. Both had been held centrally by Fujitsu.

It seems absurd that anyone would want to think that computers give us a truer reality than what we know and have experienced. But it’s a problem that’s more subtle and widespread than you might suppose.

For example, the study of how humans behave is now conducted through computer metaphors, the workings of our brains given terms such as “information processing”.

One of the greatest minds of the past century, John von Neumann, concluded in 1958 that the human nervous system was “prima facie digital” – so why not study a computer instead of a subject, who may not even turn up on time?

When psychology professor Robert Epstein, a former editor of Psychology Today, challenged researchers at one of the leading institutes to come up with non-computational metaphors for the brain instead, they were completely stumped.

“They saw the problem. But they couldn’t offer an alternative,” Epstein later reflected. Digital had become such a pervasive metaphor that it was the only metaphor that mattered.

Another reason we may trust computers too much is that we want them to perform magic. Our political realm is messy and dysfunctional, so perhaps technology can fix things that we can’t seem to fix ourselves? Things such as poor productivity, or poor social relations.

The political Left has been seduced by this Utopian desire many times. In Edward Bellamy’s 1888 book Looking Backward, describing life in the year 2000, a world of Deliveroo and Ocado-style deliveries awaited us, along with streaming media on demand – albeit of religious sermons. In 2019, Aaron Bastani’s Fully Automated Luxury Communism offered a similar fantasy of post-scarcity plenty.

From Harold Wilson’s White Heat of Technology to Tony Blair, leaders have sought to yoke themselves to new technology. But the very use of phrases such as “information society”, or “networked economy”, casts us in a subservient role. They imply that we’re the nuisance if we get in their way, just like Mr Bates.

A more immediate problem is that the better systems become at impersonating us, the more likely people are to believe them. Generative AI poses this challenge today.

So it’s worth remembering the response of the MIT professor Joseph Weizenbaum, who in the 1960s wrote a modest, interactive program called Eliza – one of the first chatbots, a robo-psychotherapist. Weizenbaum was shocked when users believed that Eliza was really intelligent, and poured out their hearts to the software for many hours.

The professor became a prominent popular voice warning about the dangers of over-trusting technology.

We had forgotten, he wrote, how catastrophically computers fail, “when their rules are applied in earnest”.

Of course, in a different legal system, the Post Office persecutions could never have happened at all: the US discovery process would have exposed the truth much sooner. And some good may yet come from the scandal, if the standards for computer evidence are re-examined.

For a decade, Stephen Mason, a retired barrister and expert on electronic evidence, has demanded that the law presuming the computer is correct be revised.

He and other barristers have warned that without the group action by Bates, the Horizon bugs would never have been revealed.

“The presumption has ruined too many lives already,” IT expert James Christie told Karl Flinders of Computer Weekly last week. “It must go, the sooner the better.”

Petrov received no credit for disobeying his computer system, and was later reprimanded for keeping poor paperwork.

Had he been recognised, the Soviet officials reasoned, then the bug would have been discovered, and the designers of the system punished. That would have been too embarrassing.

ITV’s drama ‘Mr Bates vs The Post Office’ could lead to the standards of computer evidence in court cases being re-examined
