Is Superintelligent AI Already Waging World War X?

[Illustration: two AI heads facing each other with US and China flags, a glowing chip symbolizing artificial intelligence dominance]

There’s reason to be scared of AI. There are bigger reasons to be scared of the people who control AI: their greed, their vanity and lust for power, their narrow ideas of what “intelligence” is—and of what the good of humanity is. There is nothing inevitable about how these technologies will develop and what they will do. Everything is at the discretion of those who have built these machines in their own image and the politicians and financiers who have made a pact with them, staking our future on their speculations.

Sam Altman, CEO of OpenAI, was recently asked what things about AI most worried him. “I think there’s three sort of scary categories,” he said. The first is:

A bad guy gets super intelligence first and misuses it before the rest of the world has a powerful enough version to defend [us]. So an adversary of the US says, I’m going to use this super intelligence and design a bio-weapon to take down the United States’ power grid [or] break into the financial system and take everyone’s money. [That’s] something that would just be hard to imagine without significantly superhuman intelligence, but with it, it becomes very possible.

Altman named China. The US and China are in an AI arms race. America is building AI without guardrails because it believes China is developing a hostile AI. That may well be true, but it doesn’t need to be accepted as a fact of life. It’s like the British-German naval rivalry before World War One and the nuclear arms race during the Cold War. Both were self-fulfilling cycles of escalation. Looking back, both should have been nipped in the bud, not escalated in a frenzy of speculation and (profitable) arms-building.

Normal accident theory tells us that, at some point, a technology will be misused or will malfunction. Given the exceptional capabilities of AI in biochemistry and synthetic biology, bio-weapons or rogue new forms of synthetic DNA are a catastrophe waiting to happen.

Instead of negotiating a control regime for weaponized AI now, while it’s still possible, the US and China are following a new version of deterrence theory—without any clear idea of what deterrence actually means in this totally new context.

An immediate risk is that Altman’s “good guys” aren’t so good after all. For example, Israel has been using AI systems for targeted killings in its war in Gaza. Systems such as Lavender, The Gospel, and Where’s Daddy? have made the Israeli killing machine more relentless and, for Israel, more efficient. Big tech companies such as Palantir are part of this apparatus.

The war in Ukraine has seen extremely rapid development of drone technologies on both sides. The Ukrainians have created, from scratch, a vast array of startups building drones, deploying them on the battlefield, upgrading them, and merging them with AI technology. Each cycle of innovation, combat testing, and enhancement takes just a few months. When the war is over, this deregulated industry will surely find many markets for its products.

If an algorithm were to control the launch of nuclear missiles, it could mean the end of the world, in a Dr. Strangelove “doomsday machine” scenario. Zachary Burdette and co-authors recently examined these risks in the Bulletin of the Atomic Scientists, concluding that “dystopian visions overstate the risk that AI will ignite a new wave of international conflict. Decisions to start wars are fundamentally about politics, not technology.” That may be true. But a risk higher than zero is cause for alarm.

And we can’t separate AI from the worldview of its champions, its financing, its entanglement in politics—and its jingoism.

This is where we need to be really worried. Altman framed the question as whether AI will be controlled by “us” (democracies) or “them” (China and authoritarians). This mischaracterizes the choice. It’s a rivalry among geo-kleptocrats over the tools of global counterinsurgency. Many of the tech bros don’t even pretend to be democrats; Peter Thiel of Palantir is one example. In DOGE, we see Elon Musk’s political business model applied to “democratic” governance.

The Ur-text of their philosophy is Davidson and Rees-Mogg’s The Sovereign Individual. This isn’t about defending democratic values; it’s about power over unlimited money-making, which I described in a recent post as the true winnings of World War X.

The tech bros’ narrow worldview begins with the technology itself, especially LLMs, which emerged from an undertheorized strand of psycholinguistic theory that took for granted the superiority of alphabetic written languages (notably English) over ideographic scripts (notably Chinese and Japanese). Ever since writing was invented, cognitive prostheses have extended our power in the world, but they have also colonized our thinking, reshaping our world in their image. AI is already doing the same.

But the alarm really must be sounded at the level of political economy. Big tech politics and financing have become central to the American project of maintaining global stock market supremacy.

Category two of the fears that keep Altman awake at night is “loss of control incidents”, as in sci-fi movies where superintelligent AI becomes uncontrollable and goes rogue. This was the feared “singularity”: AI evolving into a rival being that decides to eliminate humankind. A dozen years ago it animated a “doomer” strand of thinking, and spurred Altman, Musk and some others into founding OpenAI “for the good of humanity.” Framed that way, it was always an improbable threat. As Altman points out, AI is a tool, not a creature. But it nonetheless spurred a race to develop “Artificial General Intelligence”, a goal that no one has yet defined.

In his Washington presentation, Altman said his biggest worry was that “the models accidentally take over the world” without any actual malevolence. He suggested that this could happen if people in authority—up to the president of the United States—come to rely so much on ChatGPT-7 to make decisions that they cannot do without it, and “society has collectively transitioned a significant part of decision making to this very powerful system.”

I think this has already happened, though not in the technical sense that Altman implied. It happened with the political and financial pact between the Trump Administration and the AI companies making a huge collective bet on unregulated American AI. They haven’t handed over the world to AI as a technology, but as a political economy. They have entrusted the American imperium to the logic that AI ascendancy over human society is inevitable, that there’s a race with China to get there first, and that it is too late to stop the torrent of transformation, so we should surrender any choice over our future and ride the current.

Altman acknowledged the risks, and the fact that he didn’t know their scope: “I don’t know what else we can do there, but it’s like this is a very big thing coming.”

Notably, what didn’t figure among Altman’s worries was that a small group of ultra-wealthy people were developing the most powerful technology ever built, at breakneck speed, with the explicit goal of eliminating the livelihoods of hundreds of millions of people, and with no restraint except their own.

In Empire of AI, Karen Hao shows how these earth-shattering decisions were taken by a handful of people with a particular worldview and a fierce competitiveness. Large language models (LLMs) were developed at an extraordinary pace, overriding intellectual property law, environmental concerns over their vast data centers, and labor standards, not to mention concerns over the safety of the technology itself.

Hao’s characterization of OpenAI (along with its rivals) as “empires” is apt. The conquistadores of European empire were so convinced of their mission that they destroyed entire civilizations in their march. An Indian scholar, artist or craftsperson visiting London’s Great Exhibition of 1851 would have seen how the skills and treasures of millennia had been looted by colonial adventurers, soldiers and profiteers, shipped overseas and repackaged as British triumphs. That describes how tech companies scrape every available document of human intelligence and creativity, turning them into digital quanta for machine learning and violating the basic principles of intellectual ownership, at which point their CEOs announce that the march of progress is unstoppable and all those who work with their brains will shortly be redundant.

The tech empires, like the maritime empires of the 18th century and the railroads of the 19th, are bubble-financed. Tech companies now account for 30-40 percent of the capitalization of the US stock market, and in a political system run on money, the political imperative for sustaining them is unstoppable. They’re too big to fail. More and more money and human resources must be fed into this machine to keep it expanding, including some of the world’s best talent, which could otherwise be devoted to solving the world’s problems. And even if the tech giants do fail, as historical speculative bubbles eventually burst, the world will be left with a generations-long legacy.

In short, the immediate danger of AI lies in the unchecked ambitions of those who seek to develop it and profit from it, demanding that the beast be fed because failing to do so would be worse.

Alex de Waal is a Research Professor at The Fletcher School, Tufts University, and leads the WPF research programs on African Peacemaking and Mass Starvation.

Considered one of the foremost experts on the Horn of Africa, his scholarly work and practice have also probed humanitarian crisis and response, human rights, pandemic disease, and conflict and peace-building. His latest book is New Pandemics, Old Politics: Two Hundred Years of War on Disease and its Alternatives. He is also the author of Mass Starvation: The History and Future of Famine and The Real Politics of the Horn of Africa (Polity Press, 2015).

Following a fellowship with the Global Equity Initiative at Harvard (2004-06), he worked with the Social Science Research Council as Director of the program on HIV/AIDS and Social Transformation, and led projects on conflict and humanitarian crises in Africa (2006-09). During 2005-06, de Waal was seconded to the African Union mediation team for Darfur and from 2009-11 served as senior adviser to the African Union High-Level Implementation Panel for Sudan. He was on the list of Foreign Policy’s 100 most influential public intellectuals in 2008 and Atlantic Monthly’s 27 “brave thinkers” in 2009 and is the winner of the 2024 Huxley Award of the Royal Anthropological Institute.
