In 2020, cybersecurity company Mandiant's computer systems were compromised by an intruder exploiting a seemingly innocuous channel: routine software updates pushed out by another company, SolarWinds. Mandiant was one of nearly 18,000 organizations that received the compromised updates.
The attack - a supply-chain hack by a Russian intelligence agency - demonstrates the trade-off between system coordination and vulnerability to attack, according to science and technology scholar Rebecca Slayton.
"Standards to enable coordination and reduce coordinative uncertainty could potentially be exploited by an attacker," said Slayton, associate professor in the Department of Science and Technology Studies, in the College of Arts and Sciences and the Judith Reppy Institute for Peace and Conflict Studies.
"Analysts must navigate a tension between the need to trust in standards for collaboration, and the need to remain wary that their trust may be exploited by sophisticated actors," Slayton argues in a new article, "Coordinating Uncertainty in the Political Economy of Cyber Threat Intelligence," published online Jan. 10 in Social Studies of Science.
The research won the Randolph H. Pherson Innovative Paper Award from the Intelligence Studies Section of the International Studies Association, presented March 2 at the association's annual meeting in Chicago.
Slayton studies uncertainty, which is pervasive in the world of cyber threat intelligence. In this paper, she and co-author Lilly Muller, a former Cornell postdoctoral researcher now at King's College London, describe the two types of uncertainty that play important roles in the cyber threat intelligence industry - coordinative uncertainty and adversarial uncertainty - and analyze the relationship between them.
In the cyber threat intelligence industry, analysts work collaboratively across multiple organizations in different countries to transform digital traces into marketable products and services. Cybersecurity firms collect and analyze data to give customers, including both government and nongovernmental organizations, insight into potential threats.
These threats come from nations and criminal groups. Stolen industrial secrets could provide economic advantages to an attacker; information about individuals also could be used for blackmail or propaganda campaigns, such as the leaks from the Democratic National Committee during the 2016 U.S. presidential campaign.
More ambitious attacks might gain control over the computers that operate industrial systems. For example, Russian actors shut down the power grid in parts of Ukraine in December 2015, again in December 2016 and yet again in 2022.
"In each case, function was restored quickly," Slayton said. "But these attacks have the potential to erode trust of civilians in Ukrainian institutions, one of Russia's favorite things to do, and to signal to the Ukrainians, 'We have the goods on you.'"
There's something fundamentally uncertain about dealing with an intelligent adversary, Slayton said. But the process of collecting, analyzing and sharing data presents a different kind of uncertainty - coordinative uncertainty - especially across different organizations and countries.
"When you're trying to coordinate measurements across space and time with different communities who have different assumptions, how do you know that what they're measuring over there is the same thing you're measuring over here?" Slayton said.
The research is based on interviews and ethnographic research within organizations in the U.S., Europe and Australia. The scholars also studied public events, published reports and academic papers, and examined recent cybersecurity incidents in which attackers took advantage of trusted systems to breach cyber threat intelligence companies, such as the 2020 SolarWinds attack.
"A supply-chain hack is one of the most difficult to detect because this trusted relationship has been established between companies," Slayton said. "When they push out an update, particularly a security update, you just install it; you don't ask too many questions. It might even be automated. That's all part of a coordinated process that's designed to improve efficiency, and at some level it's designed to reduce uncertainty, but those standardized practices become targets for attack."
Slayton hopes her research may help people in the threat intelligence community reflect on the assumptions that guide their everyday work, and on the risks embedded in those assumptions, and consider how they can coordinate better without becoming more vulnerable to adversarial exploitation.
The research also points out that knowledge about cyber threats is inevitably shaped by geopolitical context.
"When a decision is made to focus on a particular threat actor or vulnerability," Slayton said, "we risk missing a different threat actor or different vulnerability. You're always going to miss something, and adversaries are always looking to be unpredictable."
This research was supported by a grant from the National Science Foundation.
Kate Blackwood is a writer for the College of Arts and Sciences.