The National - News

Can machine learning and AI predict the future of regional conflicts?

- Robert Tollast is a freelance security and political risk analyst, focused on Iraq

In January 2014, most Iraq analysts knew security in the country was rapidly deteriorating. However, few predicted exactly how bad the situation would become.

Within six months, the terrorist group ISIS would take over a third of the country, embarking on a brutal campaign of violence. By August, the UN had declared Iraq a “level 3 emergency”, its most severe designation, as 3 million people fled their homes.

What if this could have been predicted? Not a general warning that things were getting worse but a detailed outline of the severity of potential conflict and its likely timeline.

Forecasting the temporal and spatial aspects of conflict is now the task of researchers at the Turing Institute, the UK’s national institute for data science. To date, the complexity of predicting conflict means most efforts have succeeded only in broad warnings, ranking states at risk of violent episodes occurring within a year. To provide a risk analysis, some analysts focus on environmental factors like drought and food security. Others look at the interplay of governance and living standards. But whatever the approach, the data underpinning such analyses must be reliable, and in conflict-affected areas accurate data is hard to come by.
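
None of the projects named here publishes its models in this article, so the sketch below is purely illustrative: a minimal country-level ranking that fits a simple model to hypothetical indicators (drought, food insecurity, governance) and ranks invented countries by estimated risk of violence within a year.

```python
# Illustrative only: rank countries by estimated risk of conflict onset within a year,
# using invented structural indicators. Real early-warning projects use far richer data.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical country-year training data (all values are invented).
train = pd.DataFrame({
    "drought_index":   [0.8, 0.2, 0.6, 0.1, 0.9, 0.3],
    "food_insecurity": [0.7, 0.1, 0.5, 0.2, 0.8, 0.2],
    "governance":      [0.2, 0.9, 0.4, 0.8, 0.1, 0.7],  # higher = better governance
    "conflict_onset":  [1,   0,   1,   0,   1,   0],    # violence within the next year?
})

features = ["drought_index", "food_insecurity", "governance"]
model = LogisticRegression().fit(train[features], train["conflict_onset"])

# Score current country-years and rank them by estimated risk.
current = pd.DataFrame(
    {"drought_index": [0.85, 0.15], "food_insecurity": [0.75, 0.10], "governance": [0.25, 0.85]},
    index=["Country A", "Country B"],
)
current["risk"] = model.predict_proba(current[features])[:, 1]
print(current.sort_values("risk", ascending=False))
```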

To tackle this reality, the Turing Institute has been harnessing artificial intelligence to give policymakers specific warnings. This project is known as the Global Urban Analytics for Resilient Defence – or Guard.

In 2015, Dr Weisi Guo, the lead analyst on Guard, looked at a map of the ancient Silk Road and was struck by how many of today’s most conflict-ridden areas correspond with the historical web of trade routes that once connected East and West, passing through the Middle East. He then created an algorithm splicing publicly available databases on violent incidents with overland routes, placing physical geography at the heart of his research.
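
The article gives only this broad description of the approach, so the sketch below is not Dr Guo’s actual algorithm. It illustrates the general idea: model cities as nodes on a route network, score them by how often they sit on paths between other cities, and combine that with recorded incidents. All place names and counts are invented.

```python
# Illustrative sketch: cities as nodes, overland routes as edges; betweenness centrality
# flags "junction" cities that sit on many shortest paths between other cities.
import networkx as nx

routes = [
    ("CityA", "CityB"), ("CityB", "CityC"), ("CityC", "CityD"),
    ("CityB", "CityE"), ("CityE", "CityD"), ("CityA", "CityE"),
]
incidents = {"CityA": 2, "CityB": 14, "CityC": 1, "CityD": 3, "CityE": 9}  # invented counts

g = nx.Graph(routes)
centrality = nx.betweenness_centrality(g)

# Combine "how much of a junction is this city?" with recorded violence.
for city in g.nodes:
    score = centrality[city] * (1 + incidents.get(city, 0))
    print(f"{city}: centrality={centrality[city]:.2f}, "
          f"incidents={incidents[city]}, score={score:.2f}")
```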

Land routes are not the only component of Guard. But concentrating on historic “chokepoints” in the flow of goods and people led his team in 2017 to accurately predict 76 per cent of the cities in which terror attacks occurred.

Indeed, it is in those “junction cities” that most violence takes place. Mosul, for example, has historically held this status – the city’s name loosely translates as “junction” in English. The geography-violence nexus is particularly relevant in the Middle East.

For millennia, the flat expanse of terrain in the vast Tigris and Euphrates river valleys that cross Iraq and Syria has been a blessing and a curse for its inhabitants. Flat terrain enabled rapid movement of goods, accelerating the development of some of the world’s earliest city settlements. These routes were coveted and contested, allowing rival groups to quickly move cavalry into enemy territory.

Dr Guo’s focus on geography may hold weight in modern times, too. Eight hundred years after the Mongols ransacked Baghdad, ISIS’s fleets of Toyotas exploited flat terrain to attack cities from unpredictable approaches. Clearly, geography makes certain areas more prone to conflict, regardless of state borders.

At Uppsala University in Sweden, another project called the Violence Early-Warning System, or Views, is under way to harness AI to predict war. Like Guard, Views works on the basis that without regional detail, statistical models are of little use.

Views’s lead analyst, Professor Havard Hegre, says the project has had some early success, correctly predicting a high risk of violence in the Somali region of Ethiopia in July 2018.

Given all this data, it seems only a matter of time before forecasting potential conflict could be completely automated. But looking at various forecasting efforts, there is disagreement on the value of social media. This seems strange at first – after all, the Syrian conflict has been dubbed the first “smartphone war”. This is not lost on Professor Hegre, who believes social media is an important input for modelling, but has clear limitations.

“We are working on a Twitter model, so we are trying to identify tweets that are geotagged and refer to events,” he remarks. But a significant problem is that social media content has to be verified. Automated programmes also struggle with “sentiment analysis”; they may fail to detect sarcasm, for example.
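
Professor Hegre does not describe the pipeline in detail. The sketch below is only a hedged illustration of that kind of filtering, using invented tweet records rather than the real Twitter API, and a keyword-based sentiment score of exactly the sort that sarcasm defeats.

```python
# Illustrative only: filter hypothetical tweet records for geotags and event keywords,
# then apply a crude keyword-based sentiment score. Real systems must verify content,
# and simple scoring like this is easily fooled by sarcasm.
import re

EVENT_KEYWORDS = {"explosion", "clashes", "protest", "checkpoint"}
NEGATIVE_WORDS = {"killed", "attack", "fear", "fleeing"}

def tokens(text):
    # Lower-case and strip punctuation so "checkpoint," matches "checkpoint".
    return set(re.findall(r"[a-z']+", text.lower()))

tweets = [  # invented examples, not real Twitter data
    {"text": "Explosion near the checkpoint, people fleeing in fear", "geo": (36.34, 43.13)},
    {"text": "Oh great, another perfectly calm night at the checkpoint...", "geo": (36.35, 43.12)},  # sarcasm
    {"text": "Match day! Big win for the local team", "geo": None},  # no geotag, no event
]

def is_event_candidate(tweet):
    """Keep only geotagged tweets that mention an event keyword."""
    return tweet["geo"] is not None and bool(tokens(tweet["text"]) & EVENT_KEYWORDS)

def crude_sentiment(text):
    """Count negative keywords; the sarcastic tweet scores as falsely calm."""
    return -len(tokens(text) & NEGATIVE_WORDS)

for t in tweets:
    if is_event_candidate(t):
        print(t["text"], "| crude sentiment:", crude_sentiment(t["text"]))
```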

For Dr Guo, AI will likely remain a force multiplier for human analysts, rather than a stand-in. “Human beings are great at ingesting diverse data, experience and what other people summarise and articulating that in a reasoned manner,” he says. “So I do not see AI replacing humans. I see AI providing a nuanced surrogate to human reasoning, reducing personality bias, explaining to humans via explainable AI interfaces and helping them draw conclusions.”

The human challenge of foreseeing conflict brings us back to the central problem: if policymakers had had a better grasp of the emerging disaster in Iraq, would it have changed their calculations?

Iraq expert Michael Knights is sceptical. A senior fellow at the Washington Institute for Near East Policy, he envisages a system that could have monitored the locations of Iraqi forces’ mobile phones in the years before the fall of Mosul. He says: “If such a system could predict systemic security collapse in northern Iraq in the second quarter of 2014, then-president Barack Obama’s administration would still have had to face the unpalatable choice of re-joining the war he campaigned to get us out of.”

According to Jack Watling, a research fellow at Britain’s Royal United Services Institute, a better use of AI could be to rally analyst resources to troublesome places at the earliest stage of crisis.

“AI monitoring of incidents in fragile states, while imperfect, can flag potential trouble spots and anomalies that human analysts might have missed.”

Like Dr Knights, Dr Watling is keen to stress there will always be the challenge of finding political will and co-ordinating elements of government. Even the best predictions will be no silver bullet.

“A red light flashing on a computer program won’t necessarily mobilise resources,” he cautions.

Professor Hegre agrees and sees value in raising accountability for policymakers, especially if there are warnings of mass violence of the kind perpetrated by ISIS. “What we are doing in Views will only complement things we observe. But this will be in the public domain and it will be harder for everyone to say, ‘Well, we didn’t know what was going on,’ if a major crisis occurs.”

But in the end, what will matter above any calculation or warning from a machine is political will. And that quality, or a lack thereof, is all too human.

What if AI could forecast where and when chaos will strike?

Data science is becoming a major tool of war and peace. Getty
