The Mail on Sunday

Would you trust YOUR life to Artificial Intelligence?

The Health Secretary says cutting-edge computing will transform the NHS. But as a robot surgeon kills a heart patient...

- By Robert Chamberlain

PICTURE the scenario: a ‘robo-doc’ Artificial Intelligence program has examined your scans, read your medical records, taken into account your habits, your genes, and crunched through global population data and the latest medical research. All this has allowed it to correctly identify an early-stage cancer long before it could ever become a true threat.

All that’s left is for your GP to deliver the news with skilled compassion. The doctor has ample time now, liberated by legions of automated systems that cut through a once-impossible workload.

The hospital, should you ever need to attend, is now a model of efficiency with cleaners, nurses and doctors all guided by apps to wherever care is needed next. Your medications are bespoke: a single pill combines every drug you need.

Far-fetched? Very possibly, but it’s the revolutionary dream of Health Secretary Matt Hancock. The Minister has pledged that within the next two to five years, effective and efficient use of Artificial Intelligence – or AI – will be routine and widespread in the NHS. ‘This technology has the potential to revolutionise care by introducing systems that speed earlier diagnoses, improve patient outcomes, make every pound go further and free up clinicians so they can spend more time with patients,’ says Hancock.

According to new Department of Health and Social Care figures, AI could cut GPs’ and nurses’ workloads by a third, and hospital doctors’ by a quarter; the department also claims it could save the NHS as much as £12.5 billion a year – a tenth of its budget.

Few would dispute that all this sounds great in theory. Yet experts have raised concerns about the safety of leaving machines to make life-and-death decisions.

Just last week, an inquest into the death of Stephen Pettitt, 69, who died after a heart operation aided by a revolutionary surgical ‘robot’, found the machine ultimately responsible for his death.

In her damning verdict, coroner Karen Dilks warned of the ‘risks of further deaths’ brought about by the increase in robot-assisted surgery. Others warn the Government is jumping the gun in a rush to adopt new technology that is, largely, untested.

So what is the truth?

IT’S ALREADY HAPPENING…

NOT a day goes by, it seems, without an announcement that AI is going to revolutionise our health. Last week the Government announced a £50 million programme to build five medical technology centres across the UK for developing AI systems that analyse medical evidence and spot cancer.

There have also been announcements that AI could aid the diagnosis of heart disease and dementia. But how can a computer program do all this? The idea is simple – AI is, essentially, software. In a medical setting, staff collate and input information about you, your symptoms, test results, and the like.

The program – or algorithm, as it’s called – then compares this with information from medical textbooks, studies, and the outcomes of previous cases similar to yours. It then gives the probability that you have a particular condition or disease, or suggests a course of treatment.

As more cases are processed, so the machine’s knowledge base grows. This is how it ‘learns’.
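That loop – compare a new case against past cases, output a probability, then fold confirmed outcomes back into the case history – can be sketched in a few lines of Python. This is a toy illustration only: the features, data and model below are invented for the example and bear no relation to any real NHS system.

```python
# A toy sketch of the 'learn from past cases, output a probability' loop.
# All features, figures and thresholds here are synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic 'past cases': columns are age, pack-years smoked, biomarker level
X = rng.normal(loc=[60.0, 10.0, 1.0], scale=[12.0, 8.0, 0.4], size=(500, 3))
# Synthetic 'confirmed diagnoses': disease more likely with age and biomarker
y = ((X[:, 0] > 65) & (X[:, 2] > 1.2)).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# A new patient's record: the output is a probability, not a verdict -
# a clinician still has to interpret and act on it
patient = np.array([[72.0, 20.0, 1.5]])
print(f"Estimated probability of disease: {model.predict_proba(patient)[0, 1]:.0%}")

# 'Learning' in this simple setting just means retraining once the patient's
# true outcome is confirmed and added to the growing case history
X = np.vstack([X, patient])
y = np.append(y, 1)  # suppose the diagnosis was later confirmed
model.fit(X, y)
```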

A more primitive version of this technology is used by Amazon to predict that if you bought the latest Jamie Oliver book, you might also be interested in Nigella Lawson – so it advertises her books to you.

A medical algorithm predicts your risk of cancer or heart disease, not your taste in celebrity chefs.
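At its simplest, that kind of recommendation is little more than counting which items have appeared together in past purchases. A minimal sketch, with invented purchase data:

```python
# Toy 'customers who bought X also bought Y' recommender; the purchase
# histories below are made up for illustration.
from collections import Counter

purchases = [
    {"Jamie Oliver", "Nigella Lawson"},
    {"Jamie Oliver", "Nigella Lawson", "Mary Berry"},
    {"Jamie Oliver"},
    {"Nigella Lawson", "Mary Berry"},
]

def also_bought(item: str) -> list[str]:
    """Rank other items by how often they co-occur with `item`."""
    counts = Counter()
    for basket in purchases:
        if item in basket:
            counts.update(basket - {item})
    return [other for other, _ in counts.most_common()]

print(also_bought("Jamie Oliver"))  # ['Nigella Lawson', 'Mary Berry']
```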

One of the most advanced AI health projects currently in use is at Moorfields Eye Hospital in London, where doctors have partnered with the Google-owned AI company DeepMind on an algorithm that examines patients’ scans for early signs of the eye diseases age-related macular degeneration and diabetic retinopathy – the two leading causes of blindness.

At Imperial College London, an AI program has been developed to help doctors spot deadly post-surgical complications, while cardiologists at the Royal Liverpool Hospital are using the technology to aid decisions on heart attack victims’ treatment.

GPs in Sutton are working to slash cancer deaths with an AI database called ‘C the Signs’, which facilitates early cancer diagnosis by cross-referencing NHS treatment guidelines with combinations of symptoms and risk factors.
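As described, the tool’s core job is a rules lookup: match a patient’s combination of symptoms and risk factors against guideline referral criteria. A minimal sketch of that idea follows – the rules and pathway names are invented for illustration and are not taken from the real ‘C the Signs’ database or from NHS guidance.

```python
# Hypothetical guideline cross-referencing; these rules are NOT real guidance.
REFERRAL_RULES = [
    # (required symptoms, required risk factors, suggested pathway)
    ({"rectal bleeding", "weight loss"}, {"age over 50"},
     "urgent lower-GI two-week-wait referral"),
    ({"persistent cough"}, {"smoker", "age over 40"},
     "urgent chest X-ray"),
]

def suggest_pathways(symptoms: set[str], risk_factors: set[str]) -> list[str]:
    """Return every pathway whose symptom and risk-factor criteria are met."""
    return [pathway
            for needed_symptoms, needed_risks, pathway in REFERRAL_RULES
            if needed_symptoms <= symptoms and needed_risks <= risk_factors]

print(suggest_pathways({"persistent cough", "fatigue"},
                       {"smoker", "age over 40"}))
# -> ['urgent chest X-ray']
```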

And in South Yorkshire, staff are working with Huddersfield University to develop AI to cut suicide deaths by spotting mental health patients most at risk.

IS THE NHS READY?

DESPITE the excitement, many within the NHS fear implementing these programmes will not be straightforward.

Dr Julian Huppert at the University of Cambridge, who led an independent review of the Moorfields Eye Hospital project, warns: ‘The infrastructure of the NHS is not ready. A lot of patient data is not available for computerisation, as it is held on paper or is incomplete.’

Health policy expert Nicola Perrin, of medical research body the Wellcome Trust, agrees, saying: ‘These new projects are fantastic, but it doesn’t mean much if your hospital consultant doesn’t have the results of your test – done a week ago at a different hospital.’

A streamlined, hassle-free NHS AI system seems beyond the realm of imagination, given that, according to Perrin, technology in many parts of the NHS is out of date.

More recently, an IT survey of 900 nurses found that many are hindered by ‘depressingly mundane’ barriers such as obsolete systems. One told the Royal College of Nursing study: ‘We are upgrading our PCs to run Windows 7 – which is already a decade out of date.’

The NHS’s technology track record isn’t good: a six-year initiative to create a single electronic records system collapsed in 2011, followed by a failed £12.4 billion attempt to upgrade NHS computer systems in 2013, branded by officials as one of the ‘worst and most expensive contracting fiascos’ in public sector history.

And in May, it was revealed an IT failure led to 450,000 women not being sent a letter inviting them to a mammogram – as many as 270 women are feared to have died of breast cancer as a result.

CAN MAN AND MACHINE WORK TOGETHER?

THE use of AI in critical life-and-death situations raises the question of whether clinicians and AI can be trusted to work together safely.

This year it was reported that ‘development problems’ had slowed the progress of the American computer giant IBM’s AI cancer diagnosis tool, Watson for Oncology.

An investigation by the online journal Stat concluded that, three years after IBM began selling it, the supercomputer is still struggling with the basic step of learning about different forms of cancer.

Danger also lies in health professionals becoming reliant on AI and failing to challenge its decisions – a phenomenon known as ‘automation bias’.

Perhaps the most tragic example dates back to the 1980s.

In several US and Canadian hospitals, computerised radiotherapy machines – the infamous Therac-25 – issued error messages that none of the staff understood, so they ignored them: a lethal mistake. The machines were massively overdosing patients with radiation, severely burning six people, at least three of whom died.

The British Medical Association (BMA) is adamant that clinicians must keep the upper hand in all AI decision-making.

In June, the chairman of the BMA’s GP committee, Dr Richard Vautrey, rejected a claim made by the health technology firm Babylon that its AI tool was as good at giving health advice as a GP.

‘AI may have a place in the tools doctors use, but it cannot replace the essential elements of the doctor-patient relationship which is at the heart of medicine,’ he said.

BEWARE DATA LEECHES

COULD patients’ intimate health data be exploited by social media giants? Many think so, and with good reason. Last year, the NHS lost more than half a million computerised confidential medical letters sent between GPs and hospitals between 2011 and 2016. Months later, a cyber-attack locked computers and cut phone lines in at least 40 NHS Trusts, leaving doctors unable to access patients’ records.

Most recently, an AI collaboration between DeepMind and the Royal Free London NHS Trust to produce an app aiding diagnosis of kidney injury was judged in breach of UK data protection law. The app may have saved nurses up to two hours every day, but it shared 1.6 million patients’ health data with a Google-owned company. The concern is that should insurance companies get hold of such data, they may refuse to insure people based on information obtained unlawfully.

WHAT IF IT GOES WRONG?

AND what happens if an AI tool makes a ruinous decision that badly affects your health? Who do you sue? No one yet has the answer.

It is a concern held by Professor Sir Bruce Keogh, former Medical Director of the NHS in England.

He says: ‘The worry with AI is that we may not know what is going on in its “mind” and why it makes the decisions it does. This means that if it starts to make bad decisions, it becomes hard to rectify. These issues will pose difficult questions for regulators, lawyers and policy makers.’

Such problems are on the Health Secretary’s radar. He has produced a draft code of conduct on AI and data-driven technologies in health care. However, Sir Bruce believes we must not forget the fundamentals of what doctors do: ‘The art of medicine is the judicious application of science,’ he says. ‘We need to take the same approach with AI.’
