The House

Brave new world

Does the use of artificial intelligence in healthcare create a new equalities challenge? Analysis by Alexandra Ming of Dods Monitoring


Artificial intelligence (AI) presents the NHS with an exciting opportunity to revolutionise patient care and aid recovery from the pandemic. Experts warn, however, that unless an ethics-based approach is taken to AI, new innovations could lead to greater health inequalities. Researchers have cautioned that AI trained on insufficient datasets could lead to marginalised groups being excluded from the benefits of the technologies. AI designed to diagnose skin cancer, for example, could lead to misdiagnosis if trained predominantly on images of lighter-skinned people.

In September, the government published its long-awaited national AI strategy to make the UK a “global AI superpower”. The strategy aims to bolster the growth of innovation across the public and private sectors in key areas such as health and social care.

In light of growing healthcare challenges, the NHS has responded in kind, led by NHSX, a joint unit of NHS England and the Department of Health and Social Care which aims to improve patient care by digitising and connecting health services.

Through programmes such as the Artificial Intelligence in Health and Care Award, NHSX has been instrumental in helping develop AI-driven technologies which can help meet the aims of the NHS Long Term Plan. To date, innovators who have won a share of the £140m award have developed a range of applications, from expediting cancer diagnosis, to turning smartphone cameras into clinical-grade tools to detect kidney disease, and addressing operational challenges within the NHS to save clinicians’ valuable time. However, while new AI technologies promise to advance healthcare provision in many areas, they have also raised ethical questions for the government to consider.

“The danger of developing AI tools without any kind of responsible innovation framework is that developers end up creating tools that may lead to unforeseen negative individual or societal consequences,” Dr James Wright, a researcher working on intercultural AI ethics and care robots at the Turing Institute – the national institute for data science and AI – tells The House. One of the challenges for policymakers to consider is how to avoid inequalities becoming embedded in algorithms, leading to discrimination or exclusion, as AI learns from human experience. “The data you have from clinical practice can be flawed because our own assumptions as clinicians can go into the data,” says Mavis Machirori, a senior researcher at the independent Ada Lovelace Institute, which works to ensure the benefits of data and AI are justly and equitably distributed.

NHSX’s NHS Artificial Intelligence Laboratory has teamed up with the Health Foundation, an independent healthcare charity, to help support research to advance AI and data-driven technologies in ways that better meet the needs of minority ethnic populations. They have awarded £1.4m to four projects, which include an automated chatbot providing advice about sexually transmitted infections for minority ethnic groups, and the use of AI to investigate factors that contribute to adverse maternity incidents involving Black mothers.

When considering what the future of AI ethics in the UK might look like, the UN’s Educational, Scientific and Cultural Organization (UNESCO), which adopted the first-ever international agreement on the ethics of artificial intelligence in November, could indicate a direction of travel.

Its recommendation on the ethics of artificial intelligence recognises how AI can “deepen existing divides and inequalities”. And at its base sits a foundation of “international law, focusing on dignity and human rights”.

The UK’s independent Equality and Human Rights Commission has also made AI and emerging digital technologies a focus of its strategic plan for 2022 to 2025, with a view to developing new guidance for public and private organisations. In the meantime, it was announced on 12 January that an AI Standards Hub would be developed to increase UK contributions to the development of AI technical standards. And, as set out in the national AI strategy, the government says it will work with the Alan Turing Institute over the next 12 months to update guidance on AI ethics and safety in the public sector.

“AI trained on insufficient datasets could lead to marginalised groups being excluded”
