Can machine learning and AI predict the future of regional conflicts?
In January 2014, most Iraq analysts knew security in the country was rapidly deteriorating. However, few predicted exactly how bad the situation would become.
Within six months, the terrorist group ISIS would seize a third of the country, embarking on a brutal campaign of violence. By August, the UN had declared Iraq a “level 3 emergency” (its most severe designation) as 3 million people fled their homes.
What if this could have been predicted? Not a general warning that things were getting worse but a detailed outline of the severity of potential conflict and its likely timeline.
Forecasting the temporal and spatial aspects of conflict is now the task of researchers at the Alan Turing Institute, the UK’s national institute for data science. To date, the complexity of conflict prediction has meant most efforts succeed only in broad warnings, ranking states by their risk of violent episodes within the coming year. To provide a risk analysis, some analysts focus on environmental factors like drought and food security. Others look at the interplay of governance and living standards. But the data underpinning such analyses must be reliable, and in conflict-affected areas accurate data is hard to come by.
To tackle this reality, the Turing Institute has been harnessing artificial intelligence to give policymakers specific warnings. The project is known as Global Urban Analytics for Resilient Defence – or Guard.
In 2015, Dr Weisi Guo, the lead analyst on Guard, looked at a map of the ancient Silk Road and was struck by how many of today’s most conflict-ridden areas lie along the historical web of trade routes that once connected East and West through the Middle East. He then created an algorithm splicing publicly available databases of violent incidents with overland routes, placing physical geography at the heart of his research.
Land routes are not the only component of Guard. But concentrating on historic “chokepoints” in the flow of goods and people led his team in 2017 to accurately predict 76 per cent of the cities in which terror attacks occurred.
Indeed, it is in those “junction cities” that most violence takes place. Mosul, for example, has historically held this status – the city’s name loosely translates as “junction” in English. The geography-violence nexus is particularly relevant in the Middle East.
For millennia, the flat expanse of terrain in the vast Tigris and Euphrates river valleys that cross Iraq and Syria has been both a blessing and a curse for its inhabitants. Flat terrain enabled the rapid movement of goods, accelerating the development of some of the world’s earliest city settlements. But the same routes were coveted and contested: open ground also allowed rival groups to move cavalry quickly into enemy territory.
Dr Guo’s focus on geography may hold weight in modern times, too. Eight hundred years after the Mongols ransacked Baghdad, ISIS’s fleets of Toyotas exploited flat terrain to attack cities from unpredictable approaches. Clearly, geography makes certain areas more prone to conflict, regardless of state borders.
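Dr Guo’s actual models are far richer, but the intuition behind “chokepoints” can be sketched with a toy road network: cities that sit on many shortest overland paths between other cities score highest, and those are the “junction cities” where routes converge. Everything below – the cities chosen, the links between them and the scoring – is an illustrative assumption for this article, not the Guard algorithm.

```python
from collections import deque
from itertools import combinations

# Hypothetical overland-route network (links invented for illustration only).
routes = {
    "Aleppo":   ["Mosul"],
    "Raqqa":    ["Mosul"],
    "Sinjar":   ["Mosul"],
    "Mosul":    ["Aleppo", "Raqqa", "Sinjar", "Baghdad"],
    "Baghdad":  ["Mosul", "Damascus", "Basra"],
    "Damascus": ["Baghdad"],
    "Basra":    ["Baghdad"],
}

def shortest_path(graph, start, goal):
    """Breadth-first search returning one shortest path between two cities."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return []

def chokepoint_scores(graph):
    """Count how often each city sits *between* two others on a shortest path."""
    scores = {city: 0 for city in graph}
    for a, b in combinations(graph, 2):
        for city in shortest_path(graph, a, b)[1:-1]:  # interior stops only
            scores[city] += 1
    return scores

scores = chokepoint_scores(routes)
for city, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(city, score)
```

In this toy network Mosul sits between the most city pairs, echoing its historical role as a junction – though a real model would weight routes by terrain, traffic and much else besides.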
At Uppsala University in Sweden, another project called the Violence Early-Warning System, or Views, is under way to harness AI to predict war. Like Guard, Views works on the basis that without regional detail, statistical models are of little use.
Views’s lead analyst, Professor Håvard Hegre, says the project has had some early success, correctly predicting a high risk of violence in the Somali region of Ethiopia in July 2018.
Given all this data, it seems only a matter of time before forecasting potential conflict could be completely automated. But looking at various forecasting efforts, there is disagreement on the value of social media. This seems strange at first – after all, the Syrian conflict has been dubbed the first “smartphone war.” This is not lost on Professor Hegre, who believes social media is an important input for modelling, but has clear limitations.
“We are working on a Twitter model, so we are trying to identify tweets that are geotagged and refer to events,” he remarks. But a significant problem is that social media content has to be verified. Automated programmes also struggle with “sentiment analysis”; they may fail to detect sarcasm, for example.
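Professor Hegre’s point is easy to demonstrate. The sketch below – the field names, keywords and posts are all invented for illustration, and this is not the Views Twitter model – keeps only geotagged posts matching conflict-related keywords. Note that the sarcastic post sails straight through, which is precisely the sentiment-analysis gap he describes.

```python
# Hypothetical event keywords for filtering (illustrative, not from Views).
EVENT_KEYWORDS = {"explosion", "shelling", "airstrike", "gunfire", "checkpoint"}

def relevant_posts(posts):
    """Keep posts that carry coordinates and mention at least one event keyword."""
    hits = []
    for post in posts:
        if post.get("geo") is None:
            continue  # no location, so the event cannot be placed on a map
        words = set(post["text"].lower().split())
        if words & EVENT_KEYWORDS:
            hits.append(post)
    return hits

# Invented sample posts, each with text and optional coordinates.
sample = [
    {"text": "Heavy shelling near the market this morning", "geo": (36.34, 43.13)},
    {"text": "Another 'quiet' night, just one explosion", "geo": (36.19, 37.16)},  # sarcasm slips through
    {"text": "Traffic is terrible today", "geo": (33.31, 44.36)},  # no keyword: dropped
    {"text": "Gunfire reported downtown", "geo": None},  # no location: dropped
]

for post in relevant_posts(sample):
    print(post["text"])
```

The keyword filter has no notion of tone: the sarcastic second post is flagged as an event just as readily as the first, and separating the two is exactly where automated sentiment analysis still falls short.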
For Dr Guo, AI will likely remain a force multiplier for human analysts, rather than a stand-in. “Human beings are great at ingesting diverse data, experience and what other people summarise and articulating that in a reasoned manner,” he says. “So I do not see AI replacing humans. I see AI providing a nuanced surrogate to human reasoning, reducing personality bias, explaining to humans via explainable AI interfaces and helping them draw conclusions.”
The human challenge of foreseeing conflict brings us back to the central problem: if policymakers had had a better grasp of the emerging disaster in Iraq, would it have changed their calculations?
Iraq expert Michael Knights is sceptical. A senior fellow at the Washington Institute for Near East Policy, he envisages a system that could have monitored locations of mobile phones of Iraqi forces in the years prior to the fall of Mosul. He says: “If such a system could predict systemic security collapse in northern Iraq in the second quarter of 2014, then-president Barack Obama’s administration would still have had to face the unpalatable choice of re-joining the war he campaigned to get us out of.”
According to Jack Watling, a research fellow at Britain’s Royal United Services Institute, a better use of AI could be to rally analyst resources to troublesome places at the earliest stage of crisis.
“AI monitoring of incidents in fragile states, while imperfect, can flag potential trouble spots and anomalies that human analysts might have missed.”
Like Dr Knights, Dr Watling is keen to stress there will always be the challenge of finding political will and co-ordinating elements of government. Even the best predictions will be no silver bullet.
“A red light flashing on a computer program won’t necessarily mobilise resources,” he cautions.
Professor Hegre agrees and sees value in raising accountability for policymakers, especially if there are warnings of mass violence of the kind perpetrated by ISIS. “What we are doing in Views will only complement things we observe. But this will be in the public domain and it will be harder for everyone to say, ‘Well, we didn’t know what was going on,’ if a major crisis occurs.”
But in the end, what will matter above any calculation or warning from a machine is political will. And that quality, or a lack thereof, is all too human.