The variables that create conditions for political unrest are understood. The data are available. The technology and analytical capabilities are robust.
There is growing demand to be more proactive about anticipating this sort of behavior. The chief qualm is how such tools might be used by authoritarians. JL
Steven Zeitchik reports in the Washington Post:
Unrest prediction takes a promising approach that applies the complex methods of machine learning to the mysterious roots of political violence. The field's systems are being retooled with a new goal: predicting the next Jan. 6. The premise is that an AI model that can quantify variables (a country's democratic history, democratic "backsliding," economic swings, "social-trust" levels, transportation disruptions, weather volatility and others) can make predictions of political violence more accurate. The science is sufficiently strong and the data now robust enough to etch a meaningful picture. "We now have the data — and opportunity — to pursue a very different path than we did before," said Clayton Besaw, who helps run CoupCast, a machine-learning program now connected to the University of Central Florida that predicts the likelihood of coups and electoral violence for dozens of countries each month.
The efforts have acquired new urgency with the recent sounding of alarms in the United States. Last month, three retired generals warned in a Washington Post op-ed that they saw conditions becoming increasingly susceptible to a military coup after the 2024 election. Others have worried about other forms of subversion and violence.
The provocative idea behind unrest prediction is that an artificial-intelligence model that can quantify variables — a country's democratic history, democratic "backsliding," economic swings, "social-trust" levels, transportation disruptions, weather volatility and others — can predict political violence more accurately than ever.
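To make that idea concrete, here is a minimal sketch of what quantifying such variables into a model might look like. Nothing below is CoupCast's or ACLED's actual code: the feature names, the random stand-in training data, and the choice of a simple logistic-regression classifier are all illustrative assumptions.

```python
# Illustrative sketch only: a country-month is reduced to a numeric
# feature vector, and a classifier maps it to an unrest-risk probability.
# All feature names and data here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = [
    "years_of_democracy",    # democratic history
    "backsliding_index",     # change in a democracy score
    "gdp_growth_shock",      # deviation from trend, not the level
    "social_trust",          # survey-based trust measure
    "transport_disruption",  # strikes, blockades, outages
    "weather_volatility",    # anomalies vs. seasonal norms
]

rng = np.random.default_rng(0)
# Stand-in for decades of labeled country-months (1 = unrest occurred).
X_train = rng.normal(size=(5000, len(FEATURES)))
y_train = rng.integers(0, 2, size=5000)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score one hypothetical country-month.
country_month = np.array([[24.0, 0.8, -1.9, 0.3, 1.2, 0.7]])
risk = model.predict_proba(country_month)[0, 1]
print(f"Estimated unrest risk: {risk:.0%}")
```

The output is a monthly probability per country, which is the form CoupCast's published forecasts take; the real systems use far richer inputs and stronger models, as described below.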
Some ask whether any model can really process the myriad, often local factors that play into unrest. To its practitioners, however, the science is sufficiently strong and the data now robust enough to etch a meaningful picture. In their conception, the next Jan. 6 won't come seemingly out of nowhere as it did last winter; the models will give off warnings about the body politic as chest pains do for actual bodies.
“Another analogy that works for me is the weather,” said Philip Schrodt, considered one of the fathers of unrest prediction, also known as conflict prediction. A longtime Pennsylvania State University political science professor, Schrodt now works as a high-level consultant, including for U.S. intelligence agencies, using AI to predict violence. “People will see threats like we see the fronts of a storm — not as publicly, maybe, but with a lot of the same results. There’s a lot of utility for this here at home.”
CoupCast is a prime example. The U.S. was always included in its model as a kind of afterthought, ranked on the very low end of the spectrum for both coups and election violence. But with new data from Jan. 6, researchers reprogrammed the model to take into account factors it had traditionally underplayed, like the role of a leader encouraging a mob, while reducing traditionally important factors like long-term democratic history.
Its prediction of electoral violence in the U.S. has gone up as a result. And while data scientists say that America’s vulnerability to election violence is still far below that of, say, a fragile democracy like Ukraine or a backsliding one like Turkey, it’s not nearly as low as it was.
“It’s pretty clear from the model we’re heading into a period where we’re more at risk for sustained political violence — the building blocks are there,” Besaw said. CoupCast was run by a Colorado-based nonprofit called One Earth Future for five years beginning in 2016 before being turned over to UCF.
Another group, the nonprofit Armed Conflict Location & Event Data Project, or ACLED, also monitors and predicts crises around the world, employing a mixed-method approach that relies on both machine learning and humans working with software tools.
“There has been this sort of American exceptionalism among the people doing prediction that we don’t need to pay attention to this over here, and I think that needs to change,” said Roudabeh Kishi, the group’s director of research and innovation. ACLED couldn’t even get funding for U.S.-based predictions until 2020, when it began processing data in time for the presidential election; in October 2020 it predicted a potential attack on a federal building.
Meanwhile, Peacetech, a D.C.-based nonprofit focused on using technology in resolving conflict, will in 2022 relaunch Ground Truth, an initiative that uses AI to predict violence associated with elections and other democratic events. It had focused overseas but now will increase efforts domestically.
“For the 2024 election, God knows we absolutely need to be doing this,” said Sheldon Himelfarb, chief executive of Peacetech. “You can draw a line between data and violence in elections.”
The science has advanced rapidly. Past models used simpler constructs and were regarded as weak. Newer ones use algorithmic tools such as gradient boosting, which folds many weak models together in a weighted way that makes the combination more useful. They also run neural networks that study decades of coups and clashes all over the world, refining risk factors as they go.
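As a rough sketch of the boosting idea, the snippet below fits a gradient-boosted ensemble in which each successive shallow tree corrects the weighted combination built so far. The data are synthetic and the hyperparameters arbitrary; it shows the technique, not any group's production model.

```python
# Sketch of gradient boosting on synthetic country-month features:
# each shallow tree is fit to the errors of the ensemble built so far.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 6))
# Synthetic label loosely tied to an "economic shock" column (index 2),
# echoing the kind of signal the article describes.
y = (X[:, 2] + 0.5 * rng.normal(size=5000) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

gbm = GradientBoostingClassifier(
    n_estimators=300,    # number of weak learners folded in
    learning_rate=0.05,  # weight given to each new tree
    max_depth=3,         # keep each individual tree weak
).fit(X_tr, y_tr)

print(f"Held-out accuracy: {gbm.score(X_te, y_te):.2f}")
```

The key design choice is deliberately weak individual trees: the learning rate controls how much weight each new one receives when it is folded into the ensemble.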
“There are so many interacting variables,” said Jonathan Powell, a UCF professor overseeing CoupCast. “A machine can analyze thousands of data points and do it in a local context the way a human researcher can’t.”
Many of the models, for instance, find that income inequality does not correlate strongly with insurrection; drastic changes in the economy or climate are more indicative.
And, paradoxically, social-media conflict is an unreliable indicator of real-world unrest. (One theory is that when actual violence is about to take place, many people are either too busy or too scared to unleash screeds online.)
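One way such findings could surface in practice is permutation importance, which measures how much a model's performance drops when a given feature is shuffled. The sketch below uses synthetic data deliberately constructed so that shock variables carry signal while inequality and social-media conflict do not, echoing the findings above; the real models' inputs are far richer.

```python
# Sketch: checking which hypothetical predictors a trained model relies on.
# Synthetic data where economic/climate shocks drive the label and the
# other columns are noise, mirroring the findings described above.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
names = ["income_inequality", "economic_shock", "climate_anomaly",
         "social_media_conflict"]
X = rng.normal(size=(4000, len(names)))
y = (X[:, 1] + 0.7 * X[:, 2] + rng.normal(scale=0.5, size=4000) > 1).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Rank features by how much shuffling them degrades the model.
for name, score in sorted(zip(names, imp.importances_mean),
                          key=lambda p: -p[1]):
    print(f"{name:>22}: {score:.3f}")
```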
But not all experts are sold. Jonathan Bellish, One Earth Future’s executive director, said he became disenchanted, which led him to hand the project off to UCF. “It just seemed to be a lot like trying to predict whether the Astros would win tomorrow night. You can say there’s a 55 percent chance, and that’s better than knowing there’s a 50 percent chance. But is that enough to interpret in a meaningful policy way?”
Part of the issue, he said, is that despite the available data, much electoral violence is local. “We ran a set in one country where we found that the possibility of violence could be correlated to the number of dogs outside, because worried people would pull their dogs in off the streets,” Bellish said. “That’s a very useful data point. But it’s hyperlocal and requires knowing humans on the ground. You can’t build that into a model.” Even ardent unrest-prediction advocates say that forecasting highly specific events, as opposed to general possibilities over time, is unlikely.
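Bellish's 55-versus-50 complaint can be made concrete with a proper scoring rule. The toy computation below uses the Brier score (the mean squared error of probability forecasts) to show how thin the edge of a 55 percent forecast over a coin flip is, even for events that really do occur about 55 percent of the time.

```python
# Toy illustration of Bellish's point using the Brier score
# (mean squared error between forecast probability and outcome).
import numpy as np

rng = np.random.default_rng(3)
outcomes = rng.random(10_000) < 0.55  # events that occur ~55% of the time

def brier(p, y):
    """Mean squared error of a constant probability forecast p."""
    return np.mean((p - y.astype(float)) ** 2)

print(f"Forecast 0.50: {brier(0.50, outcomes):.4f}")  # always 0.25
print(f"Forecast 0.55: {brier(0.55, outcomes):.4f}")  # only ~0.0025 better
```

The better forecast wins, but by a margin that is hard to translate into a policy decision, which is roughly his objection.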
Bellish and other skeptics also point to a troubling consequence: Prediction tools could be used to justify crackdowns on peaceful protests, with AI used as a fig leaf. “It’s a real and scary concern,” Powell said.
Others concede the real world can be too dynamic for models. “Actors react,” said ACLED’s Kishi. “If people are shifting their tactics, a model trained on historical data will miss it.” She noted as an example the group’s tracking of a recent Proud Boys strategy of appearing at school-board meetings.
“One problem with the weather comparison is it doesn’t know it’s being forecast,” Schrodt conceded. “That’s not true here.” A prediction that a coup wasn’t imminent could, for instance, prompt those mulling one to act while they still had the element of surprise.
But he said the main challenges stem from generational and professional resistance. “An undersecretary with a master’s from Georgetown is going to think in terms of diplomacy and human intelligence, because that’s what they know,” Schrodt said. He imagines a very slow transition to these models.
“I don’t think we’ll have this in wide use by January 6, 2025,” he added. “We should, because the technology is there. But it’s an adoption issue.”
The Pentagon, CIA and State Department have been moving on this front. The State Department in 2020 created a “Center for Analytics,” the CIA hires AI consultants and the military has embarked on several new projects. Last month, commanders in the Pacific announced they’d built a software tool that seems to determine in advance which U.S. actions might upset China. And in August, Gen. Glen VanHerck, NORAD and NORTHCOM commander, disclosed trials of the “Global Information Dominance Experiment,” in which an AI trained on past global conflict predicts where new ones are likely to happen.
But the FBI and the Department of Homeland Security — two agencies central to domestic terrorism — have shown fewer signs of adopting these models.
Advocates say this would be a mistake. “It’s not perfect and it can be expensive,” said Peacetech’s Himelfarb. “But there’s enormous unrealized potential to use data for early warning and action. I don’t think these tools are just optional anymore.”