Farmer's Weekly (South Africa)
Using AI in agriculture could boost global food security, but we need to anticipate the risks
Asaf Tzachor, a research affiliate at the Centre for the Study of Existential Risk at the University of Cambridge in the UK, outlines the findings of a recent paper on the risks involved in the roll-out of advanced and autonomous technologies in agriculture.
As the global population has expanded over time, agricultural modernisation has been humanity’s prevailing approach to staving off famine.
A variety of mechanical and chemical innovations delivered during the 1950s and 1960s represented the third agricultural revolution. The adoption of pesticides, fertilisers and high-yield crop breeds, among other measures, transformed agriculture and ensured a secure food supply for many millions of people over several decades.
Concurrently, modern agriculture has emerged as a major contributor to global warming, responsible for roughly one-third of greenhouse gas emissions, notably carbon dioxide and methane.
Meanwhile, food price inflation is reaching an all-time high and malnutrition is rising dramatically. Today, an estimated two billion people are afflicted by food insecurity (meaning access to safe, sufficient and nutrient-rich food is not guaranteed), and some 690 million people are undernourished.
The third agricultural revolution may have run its course, and as we search for innovation to usher in a fourth agricultural revolution with urgency, all eyes are on artificial intelligence (AI).
AI, which has advanced rapidly over the past two decades, encompasses a broad range of technologies capable of performing human-like cognitive processes, such as reasoning; these systems learn to make decisions from vast amounts of data. In assisting humans in fields and factories, AI can process, synthesise and analyse large volumes of data steadily and ceaselessly. It can outperform humans in detecting and diagnosing anomalies, such as plant diseases, and in making predictions, including those about yield and weather.
Across several agricultural tasks, AI may relieve growers of their need for labour entirely, automating tilling, planting, fertilising, monitoring and harvesting. Algorithms already regulate drip-irrigation grids, command fleets of topsoil-monitoring robots, and supervise weed-detecting rovers, self-driving tractors and combine harvesters. A fascination with the prospects of AI creates incentives to delegate to it further agency and autonomy.
This technology is hailed as the way to revolutionise agriculture. The World Economic Forum, an international non-profit organisation promoting public-private partnerships, has set AI and AI-powered agricultural robots (called ‘agbots’) at the forefront of the fourth agricultural revolution.
But in deploying AI swiftly and widely, we may increase agricultural productivity at the expense of safety. A recent paper published in the journal Nature Machine Intelligence considered the risks that could come with rolling out these advanced and autonomous technologies in agriculture.
FROM HACKERS TO ACCIDENTS
First, given that these technologies are connected to the Internet, criminals may try to hack them.
Disrupting certain types of agbot would cause hefty damage. In the US alone, soil erosion costs US$44 billion (about R642 billion) annually. This cost has been a growing driver of demand for precision agriculture, including swarm robotics, which can help farms manage and lessen erosion's effects. But these swarms of topsoil-monitoring robots rely on interconnected computer networks and are thus vulnerable to cyber sabotage and shutdown.
Similarly, tampering with weed-detecting rovers would allow weeds to spread unchecked, at considerable cost. We might also see interference with sprayers, autonomous drones or robotic harvesters, any of which could cripple cropping operations.
Beyond the farm gate, with increasing digitisation and automation, entire agrifood supply chains are susceptible to malicious cyberattacks. At least 40 malware and ransomware attacks targeting food manufacturers, processors and packagers were registered in the US in 2021. The most notable was the US$11 million (R161 million) ransomware attack against the world's largest meatpacker, JBS.
Then there are accidental risks. Before a rover is sent into the field, it is instructed by its human operator to sense certain parameters and detect particular anomalies, such as plant pests. Whether because of its own mechanical limitations or by command, it disregards all other factors.
The same applies to wireless sensor networks deployed on farms, designed to notice and act on particular parameters, for example, soil nitrogen content. By imprudent design, these autonomous systems might prioritise short-term crop productivity over long-term ecological integrity. To increase yields, they might apply excessive herbicides, pesticides and fertilisers to fields, which could have harmful effects on soil and waterways.
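The design flaw described above can be sketched in a few lines. The following is a minimal, hypothetical illustration (the targets, cap and function names are all invented for this example, not drawn from any real farm system): a dosing rule that optimises only for short-term yield will prescribe whatever closes the gap to its target, while the same rule bounded by an ecological limit stops at a safer level.

```python
# Hypothetical sketch only: values and names are invented for illustration.
SOIL_N_TARGET = 40.0   # hypothetical yield-maximising soil nitrogen, kg N/ha
ECOLOGICAL_CAP = 25.0  # hypothetical per-application ecological limit, kg N/ha


def naive_dose(soil_n: float) -> float:
    """Prescribe whatever closes the gap to the yield-maximising target."""
    return max(0.0, SOIL_N_TARGET - soil_n)


def constrained_dose(soil_n: float) -> float:
    """Same goal, but never exceed the ecological per-application cap."""
    return min(naive_dose(soil_n), ECOLOGICAL_CAP)


# On a depleted field, the naive controller prescribes a heavy dose;
# the constrained one respects the cap.
print(naive_dose(5.0))        # 35.0
print(constrained_dose(5.0))  # 25.0
```

The point is not the arithmetic but the design choice: unless an ecological constraint is written into the objective, an autonomous dispenser has no reason to stop short of its productivity target.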
Rovers and sensor networks may also malfunction, as machines occasionally do, sending commands based on erroneous data to sprayers and agrochemical dispensers. And there’s the possibility we could see human error in programming the machines.
SAFETY OVER SPEED
Agriculture is too vital a domain for us to allow hasty deployment of potent yet insufficiently supervised and often experimental technologies. If we do, the result may be that they intensify harvests but undermine ecosystems.
As the paper emphasises, the most effective way to treat these risks is prediction and prevention. We should be careful about how we design AI for agricultural use, and should involve experts from different fields in the process. For example, applied ecologists could advise on possible unintended environmental consequences of agricultural AI, such as nutrient exhaustion of topsoil or excessive use of nitrogen and phosphorus fertilisers.
Also, hardware and software prototypes should be carefully tested in supervised environments (called ‘digital sandboxes’) before they are deployed more widely. In these spaces, ethical hackers, also known as ‘white hat hackers’, could look for vulnerabilities in safety and security.
This precautionary approach may slightly slow down the diffusion of AI. Yet it should ensure that those machines that graduate from the sandbox are sufficiently sensitive, safe and secure. Half a billion farms, global food security, and a fourth agricultural revolution hang in the balance.
• This article was originally published on The Conversation. To read the original article, visit bit.ly/3NOQBZ9.