The World Is Not Prepared for an AI Emergency

Picture waking up to find the internet flickering, card payments failing, ambulances heading to the wrong address, and emergency broadcasts you are no longer sure you can trust. Whether caused by a model malfunction, criminal use, or an escalating cyber shock, an AI-driven crisis could move across borders quickly.

In many cases, the first signs of an AI emergency would likely look like a generic outage or security failure. Only later, if at all, would it become clear that AI systems had played a material role.

Some governments and companies have begun to build guardrails to manage the risks of such an emergency. The European Union AI Act, the United States National Institute of Standards and Technology's AI Risk Management Framework, the G7 Hiroshima AI Process, and international technical standards all aim to prevent harm. Cybersecurity agencies and infrastructure operators also have runbooks for hacking attempts, outages, and routine system failures. What is missing is not the technical playbook for patching servers or restoring networks. It is the plan for preventing social panic and a breakdown in trust, diplomacy, and basic communication if AI sits at the center of a fast-moving crisis.

Preventing an AI emergency is only half the job. The missing half of AI governance is preparedness and response. Who decides that an AI incident has become an international emergency? Who speaks to the public when false messages are flooding their feeds? Who keeps channels open between governments if normal lines are compromised?

Governments can, and must, establish AI emergency response plans before it is too late. In upcoming research based on disaster law and lessons from other global emergencies, I examine how existing international rules already contain the components for an AI playbook. Governments already possess the legal tools, but now need to agree how and when to use them. We do not need new, complicated institutions to oversee AI—we simply need governments to plan in advance. 

How to prepare for an AI emergency

We have seen the general model of governance before. The International Health Regulations allow the World Health Organization to declare a global health emergency and co-ordinate action. Nuclear accident treaties require rapid notification when radiation could spread across borders. Telecommunications agreements clear legal barriers so emergency satellite equipment can be switched on quickly. Cybercrime conventions set up 24/7 contact points so police forces can co-operate at short notice. The lesson is consistent: pre-agreed triggers, named co-ordinators, and fast communication channels save time in an emergency.

An AI emergency needs the same foundations. Begin with a shared definition. An AI emergency should be an extraordinary event caused by the development, use, or malfunction of AI that risks severe cross-border harm and outstrips any single country's capacity to cope. Crucially, it must also cover situations where AI involvement is only suspected, or where AI is one of several plausible causes, so that governments can act before forensic certainty arrives, if it arrives at all. Most incidents will never reach that level. Agreeing the definition in advance helps avoid paralysis during the first critical hours.

Next, governments need a practical playbook. Its first element should be a common set of triggers and a basic severity scale, so officials know when to escalate from routine incident to international alert, including criteria for cases where AI involvement is credibly suspected rather than conclusively proven. The second should be a named global co-ordinator who can convene quickly, supported by technical experts, law enforcement partners, and disaster specialists. The third should be interoperable incident-reporting systems, so countries and companies can exchange essential information in minutes, not days. The fourth should be crisis communication protocols built on authenticated, analogue channels such as radio. Finally, the playbook should set out a clear list of continuity and containment measures, which might include slowing high-risk AI services or switching critical infrastructure to manual control.

Structuring AI emergency preparedness

So, who should oversee these AI emergency preparedness initiatives? My answer: the United Nations. 

Placing this system within the UN structure matters for several reasons. One is that an AI emergency will not respect alliances. A UN-anchored mechanism offers wider inclusion and reduces duplication among rival coalitions. It provides technical help to countries without advanced AI capacity, so the burden is not carried by a handful of major powers. And it adds legitimacy and constraint: extraordinary powers must be lawful, proportionate, and reviewable, especially when they touch digital networks used by billions of people.

This international layer needs to be matched by domestic steps governments can take now. Every country should name a 24/7 AI emergency contact point. Emergency powers should be reviewed to see whether they cover AI infrastructure. Sector plans should be aligned with basic incident-management and business-continuity standards. Joint exercises should rehearse disinformation waves, model failures, and cross-sector outages. Migration to post-quantum cryptography should be prioritised before a hostile attack forces such an update. Governments should also register trusted senders and alert templates, so messages can still reach citizens when systems are unstable.

These precautions are necessary right now. Reported AI-related cyberattacks are rising, and many countries have already experienced smaller-scale outages, data manipulation attempts, and disinformation surges that hint at what a larger event could look like. What's more, a fast-moving AI failure could combine with today's hyper-connected infrastructure to produce a crisis that no single country can handle alone.

This is not a call for a new global super agency. It is a call to stitch together what already exists into a coherent response. We need an AI emergency playbook that borrows these tools and rehearses them.

The measure of AI governance will be how we respond on our worst day. Currently, the world has no plan for an AI emergency—but we can create one. We must build it now, test it, and bind it to law with safeguards, because once the next crisis has begun it will already be too late. 
