The Guardian (Nigeria)

Advances in Artificial Intelligence top seven technologies to watch in 2024

- Read the remaining part of this article on www.guardian.ng

FROM protein engineering and 3D printing to detection of deepfake media, here are seven areas of technology that the journal Nature will be watching in the year ahead.

Deep learning for protein design

Two decades ago, David Baker at the University of Washington in Seattle, United States, and his colleagues achieved a landmark feat: they used computational tools to design an entirely new protein from scratch. ‘Top7’ folded as predicted, but it was inert: it performed no meaningful biological functions. Today, de novo protein design has matured into a practical tool for generating made-to-order enzymes and other proteins. “It’s hugely empowering,” says Neil King, a biochemist at the University of Washington who collaborates with Baker’s team to design protein-based vaccines and vehicles for drug delivery. “Things that were impossible a year and a half ago — now you just do it.”

Much of that progress comes down to increasingly massive data sets that link protein sequence to structure. But sophisticated methods of deep learning, a form of artificial intelligence (AI), have also been essential.

‘Sequence-based’ strategies use the large language models (LLMs) that power tools such as the chatbot ChatGPT. By treating protein sequences like documents comprising polypeptide ‘words’, these algorithms can discern the patterns that underlie the architectural playbook of real-world proteins. “They really learn the hidden grammar,” says Noelia Ferruz, a protein biochemist at the Molecular Biology Institute of Barcelona, Spain. In 2022, her team developed an algorithm called ProtGPT2 that consistently comes up with synthetic proteins that fold stably when produced in the laboratory. Another tool co-developed by Ferruz, called ZymCTRL, draws on sequence and functional data to design members of naturally occurring enzyme families.
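The sequence-as-language idea can be sketched with a toy model. The snippet below is a minimal illustration, assuming a handful of invented ‘training’ sequences and simple bigram statistics; it is nothing like the transformer behind ProtGPT2, but it shows how sampling one residue at a time from learned statistics yields novel sequences.

```python
import random

# Toy illustration only: a bigram 'language model' over the 20 amino-acid
# letters. The training sequences below are invented for this sketch and
# have no biological meaning.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
training = ["MKTAYIAKQR", "MKVLGAYKQA", "MKTLAYQGAR"]  # hypothetical sequences

# Count bigram transitions to 'learn' which residue tends to follow which.
counts = {a: {b: 1 for b in AMINO_ACIDS} for a in AMINO_ACIDS}  # +1 smoothing
for seq in training:
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1

def sample_protein(length, start="M", seed=0):
    """Emit a synthetic sequence one residue at a time, the way an LLM
    emits 'words', but with bigram statistics instead of attention."""
    rng = random.Random(seed)
    seq = [start]
    while len(seq) < length:
        weights = counts[seq[-1]]
        seq.append(rng.choices(list(weights), weights=list(weights.values()))[0])
    return "".join(seq)

print(sample_protein(12))
```

A real model learns far longer-range patterns than adjacent pairs, which is why transformers, not bigram tables, power tools such as ProtGPT2.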

ChatGPT? Maybe next year

Readers might detect a theme in this year’s technologies to watch: the outsized impact of deep-learning methods. But one such tool did not make the final cut: the much-hyped artificial intelligence (AI)-powered chatbots. ChatGPT and its ilk seem poised to become part of many researchers’ daily routines and were feted as part of the 2023 Nature’s 10 round-up (see go.nature.com/3trp7rg). Respondents to a Nature survey in September (see go.nature.com/45232vd) cited ChatGPT as the most useful AI-based tool and were enthusiastic about its potential for coding, literature reviews and administrative tasks.

Such tools are also proving valuable from an equity perspective, helping those for whom English isn’t their first language to refine their prose and thereby ease their paths to publication and career growth. However, many of these applications represent labour-saving gains rather than transformations of the research process. Furthermore, ChatGPT’s tendency to issue misleading or fabricated responses was the leading concern of more than two-thirds of survey respondents. Although worth monitoring, these tools need time to mature and to establish their broader role in the scientific world.

Sequence-based approaches can build on and adapt existing protein features to form new frameworks, but they’re less effective for the bespoke design of structural elements or features, such as the ability to bind specific targets in a predictable fashion. ‘Structure-based’ approaches are better for this, and 2023 saw notable progress in this type of protein-design algorithm, too. Some of the most sophisticated of these use ‘diffusion’ models, which also underlie image-generating tools such as DALL-E. These algorithms are initially trained to remove computer-generated noise from large numbers of real structures; by learning to discriminate realistic structural elements from noise, they gain the ability to form biologically plausible, user-defined structures.
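The training idea described above (corrupt real examples with noise, then learn to undo the corruption) can be sketched in a few lines. This toy version is an assumption-laden stand-in: 2-D points replace 3-D protein structures, and a least-squares linear map replaces a neural network, but it shows a learned denoiser pulling samples back toward the data distribution.

```python
import numpy as np

# Minimal sketch of the denoising idea behind diffusion models, on toy
# 2-D points instead of protein structures. Real tools such as RFdiffusion
# use learned networks over 3-D atomic coordinates.
rng = np.random.default_rng(0)

# 'Real structures': points on the line y = 2x, a stand-in for the
# manifold of plausible protein geometries.
x = rng.uniform(-1, 1, size=(500, 1))
clean = np.hstack([x, 2 * x])

# Forward process: corrupt the data with Gaussian noise.
noisy = clean + rng.normal(scale=0.3, size=clean.shape)

# 'Training': fit a linear map that predicts clean points from noisy ones
# (least squares stands in for gradient descent on a neural denoiser).
W, *_ = np.linalg.lstsq(noisy, clean, rcond=None)

# Denoising a fresh noisy sample pulls it back toward the data manifold.
test_noisy = clean[:10] + rng.normal(scale=0.3, size=(10, 2))
denoised = test_noisy @ W
err_before = np.mean((test_noisy - clean[:10]) ** 2)
err_after = np.mean((denoised - clean[:10]) ** 2)
print(err_after < err_before)  # the learned map reduces the noise
```

Generative diffusion runs this in reverse over many small steps, starting from pure noise and repeatedly denoising until a plausible structure emerges.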

The RFdiffusion software developed by Baker’s lab and the Chroma tool from Generate Biomedicines in Somerville, Massachusetts, exploit this strategy to remarkable effect. For example, Baker’s team is using RFdiffusion to engineer novel proteins that can form snug interfaces with targets of interest, yielding designs that “just conform perfectly to the surface,” Baker says. A newer ‘all-atom’ iteration of RFdiffusion allows designers to computationally shape proteins around non-protein targets such as DNA, small molecules and even metal ions. The resulting versatility opens new horizons for engineered enzymes, transcriptional regulators, functional biomaterials and more.

Deepfake detection

The explosion of publicly available generative-AI algorithms has made it simple to synthesize convincing but entirely artificial images, audio and video. The results can offer amusing distractions, but with multiple ongoing geopolitical conflicts and a US presidential election on the horizon, opportunities for weaponized media manipulation are rife.

Siwei Lyu, a computer scientist at the University at Buffalo in New York, says he’s seen numerous AI-generated ‘deepfake’ images and audio related to the Israel–Hamas conflict, for instance. This is just the latest round in a high-stakes game of cat-and-mouse in which AI users produce deceptive content and Lyu and other media-forensics specialists work to detect and intercept it.

One solution is for generative-AI developers to embed hidden signals in the models’ output, producing watermarks of AI-generated content. Other strategies focus on the content itself. Some manipulated videos, for instance, replace the facial features of one public figure with those of another, and new algorithms can recognize artefacts at the boundaries of the substituted features, says Lyu. The distinctive folds of a person’s outer ear can also reveal mismatches between a face and a head, whereas irregularities in the teeth can reveal edited lip-sync videos in which a person’s mouth was digitally manipulated to say something that the subject didn’t say.

AI-generated photos also present a thorny challenge — and a moving target. In 2019, Luisa Verdoliva, a media-forensics specialist at the University of Naples Federico II, Italy, helped to develop FaceForensics++, a tool for spotting faces manipulated by several widely used software packages. But image-forensic methods are subject- and software-specific, and generalization is a challenge. “You cannot have one single universal detector — it’s very difficult,” she says.
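The watermarking idea can be illustrated with a simplified sketch inspired by published ‘green list’ schemes for language-model text; the vocabulary and generator below are invented for the example, and real image or audio watermarks embed signals in pixels or waveforms instead. The principle is the same: the generator leaves a hidden statistical bias that a detector can count.

```python
import hashlib
import random

# Simplified sketch of statistical watermarking of generated content.
# Each token is deterministically assigned to a hidden 'green list';
# a watermarking generator prefers green tokens, so their frequency
# betrays machine-generated text.
def is_green(token: str) -> bool:
    """Secretly assign ~half of all tokens to the green list."""
    return hashlib.sha256(token.encode()).digest()[0] % 2 == 0

def generate(vocab, length, seed=0):
    """A watermarking 'generator': strongly prefer green-list tokens."""
    rng = random.Random(seed)
    out = []
    for _ in range(length):
        candidates = rng.sample(vocab, 5)
        green = [t for t in candidates if is_green(t)]
        out.append(green[0] if green else candidates[0])
    return out

def green_fraction(tokens):
    """Detector: unwatermarked text is green only ~50% of the time."""
    return sum(is_green(t) for t in tokens) / len(tokens)

vocab = [f"w{i}" for i in range(1000)]  # hypothetical toy vocabulary
marked = generate(vocab, 200)
unmarked = random.Random(1).choices(vocab, k=200)
print(green_fraction(marked), green_fraction(unmarked))
```

A detector that knows the secret assignment can flag the watermarked sample from its implausibly high green fraction, without seeing the generator itself.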

And then there’s the challenge of implementation. The US Defense Advanced Research Projects Agency’s Semantic Forensics (SemaFor) programme has developed a useful toolbox for deepfake analysis but, as reported in Nature (see Nature 621, 676–679; 2023), major social-media sites are not routinely employing it. Broadening access to such tools could help to fuel uptake, and to this end Lyu’s team has developed the DeepFake-o-meter, a centralized public repository of algorithms that can analyse video content from different angles to sniff out deepfake content. Such resources will be helpful, but the battle against AI-generated misinformation is likely to persist for years to come.
