As artificial intelligence and drones become increasingly ubiquitous, Zachary Kallenborn—a policy fellow at George Mason University's Schar School of Policy and Government—stands at the cutting edge of researching how terrorists might use these emerging technologies to wipe out humanity.
In the last few days, he’s received coverage in Forbes and Newsweek for an essay he co-wrote that dissects the threat of existential terrorism to our national security—and the world.
Kallenborn is also featured in a new documentary series debuting on Netflix in July, Unknown: Killer Robots, which examines whether AI poses a threat to mankind.
The peer-reviewed essay, which has circulated rapidly in the media, describes existential terrorism as the desire to inflict damage so catastrophic that it threatens humanity’s survival. Even if the likelihood of such an event is minor, Kallenborn said it is vital to ask, “What if terrorists wanted to destroy all of humanity—what would that look like?” Most terrorists, he noted, are likely not interested in humanity’s annihilation “because [they] have constituencies and things they want to achieve.” Still, the possibility lingers.
As an officially proclaimed Army Mad Scientist—one of a network of experts exploring advanced warfare capabilities—and an authority on drone swarms and killer robots, it’s his job to be “interested in the highest risk scenarios that could have the biggest effect on society,” he said. Yet, Kallenborn maintains a flair for the comedic. He describes himself in his Twitter bio as an “analyst in horrible ways people kill each other.”
Before graduating from high school, Kallenborn was a published mathematician, but he was still looking for “something that would be directly useful for people’s day-to-day lives,” he said. Studying the “horrible” ways we kill one another was something he stumbled on in college. “My notion of national security was like Jack Bauer-style guys,” he said, referencing the television action series 24. “I didn’t really see it as an academic thing.”
But after hearing a talk on cybersecurity in his freshman year at the University of Puget Sound in Washington, he was hooked. Promptly adding a major in international relations, Kallenborn interned in chemical and biological warfare at the James Martin Center for Nonproliferation Studies in Monterey, California, where he developed a fascination with cataclysmic terrorism. (Later, by coincidence, he discovered drone swarm technology was not science fiction and quickly set out to become an expert on the topic.)
While drones are not a new technology, Kallenborn noted that their capacity for integration with artificial intelligence could form drone swarms capable of communicating and collaborating autonomously. Moreover, their affordability and versatility make them a formidable weapon of war.
In the war in Ukraine, for example, drones offer an inexpensive alternative to conventional air-combat weapons. Russia, he said, has “been using cheap drones to essentially saturate Ukrainian defenses.” Likewise, Kallenborn said, Ukrainian forces have used drones to guide artillery strikes.
“The problem is that current artificial intelligence is very brittle,” he said. “Researchers have shown that a single pixel change can cause a machine vision system to confuse a stealth bomber with a dog.” In the event of an error, “you get not only death to civilians … but we also have escalation dynamics that come along with that.”
So, will drones and AI eventually destroy us all? After a chuckle, Kallenborn calmly replied, “No. But I think it’s a plausible enough scenario that it’s worth taking seriously.”