The Early Death of International AI Governance
Why the stars might never align on preventative international governance of AI, and what to do instead.
In 1922, four years had already passed since the end of World War I – but a destructive arms race for naval power was still going on. No amount of aligned incentives or reasonable argument had stopped that race, its great cost to the Allies’ economies notwithstanding. A policy window opened only when political incentives shifted: The British could no longer afford the race, and the Americans had little need for further naval power. Only then was the Washington Naval Treaty signed and the arms race halted, for the moment.
Among the most high-effort, high-quality AI policy proposals are those advocating for international governance frameworks. In particular, suggestions often aim at what I’ll call a priori international governance: The creation of institutions, organisations, treaties or conditional commitments before the development and deployment of powerful advanced AI systems. A well-motivated pitch underlies these proposals. Many outsized harms of advanced AI are best prevented in an international setting: Substantial externalities, rampant destabilizing races, or misuse of below-frontier open-source systems are unlikely to be addressed at a purely national level.
But one has to recognise that the real world is moving further and further away from international AI governance by the minute. Many of the most promising avenues to a priori institution-building have led into quagmires at best. The summits didn’t get there, the G7 is paralysed by the unclear triangulations between the US, Russia and Europe, and very few other platforms to host top-level dialogue with all important participants exist. And with how close the USG came to accidentally closing its AI Safety Institute, it seems difficult to believe that it considers AISI collaboration a platform for meaningful agreements. That reality provides few actionable paths forward – and as the policy world grows used to AI progress, the odds of changing that through a ‘wake-up call’ continue to decrease.
Still, research on and advocacy for this kind of a priori governance remain a major part of many policy institutions’ agendas; for the most recent example, just this month saw a call for an international AI organisation modeled after Intelsat.
What explains that disconnect? I suppose some of it may be chalked up to AI policy researchers simply seeking to propose a good solution, however unlikely its adoption might be. But I believe there is a deeper misunderstanding at the heart of this divergence, too: Many people, in particular AI safety advocates, suppose that international governance of AI can be modeled as a coordination problem – that there is a complex set of incentives and idiosyncratic preferences to navigate, but that there is some governance regime that can in principle satisfy these preferences in an optimized manner. I believe much policy work still chases the holy grail of a satisfying solution to that tricky problem. As with much AI policy, many of these proposals have shifted their rhetoric to appeal to the current administration; but the underlying content remains directionally quite similar.
But well-framed technical solutions only go so far. I take a fundamentally different view, and believe that national interests, political incentives and insurmountable timing problems render meaningful a priori international governance nearly impossible. As a result, I’ll argue that work on international AI governance should largely pivot toward focusing on a posteriori measures; that is, on trying to improve international reactions to the deployment of transformative AI.

Lack of Incentives
It’s worth taking a more thorough look at the unfavorable political economy that has stopped most meaningful progress toward international governance. That starts with the fact that no large AI power actually has meaningful incentive to advance a solution that’s at all tenable to the other major parties.
United States
Without the US, most international agreements would be a farce. Insofar as they’re aimed at the development of frontier models, they need US opt-in because that’s where most frontier models are built; but even if they mainly aim to touch on deployment or diffusion, the US is a key player, as it controls a major part of the compute supply chain as well. But the US isn’t particularly interested in any AI treaty — even less so than it is generally interested in international diplomacy nowadays.
That’s fundamentally because the US doesn’t need much from the rest of the world on AI governance: If there were substantial political momentum in the US to change the way that current or near-term AI systems are being deployed or developed, on most issues it could quite easily be channeled through state or federal institutions instead. In the past, there had been hopes that the US would leverage and therefore support international governance to enact US standards globally, making it easier for US firms to operate internationally. But that, too, seems unlikely: Neither does the US have any regulatory standard it could be trying to export, nor does it seem particularly interested in addressing overseas attempts to regulate its AI firms through shrewd diplomacy. It instead chooses a harsher touch of transactional rhetoric to decry – and, to its credit, somewhat successfully stymie – attempts like the EU AI Act.
What the US might want is concessions on non-AI-related matters, from manufacturing to participation in export controls to maintaining base locations to intel; but it doesn’t want anyone to do more on anything AI-related, especially not on regulation. You might think this is not too dissimilar from past US positions, e.g. in the space race, where the US tried to get its allies on board, too – and indeed, that’s the point the authors of the Intelsat proposal make.
But there has been a real doctrinal shift in US foreign policy since then. Whether you deem this realist or irrational, it is plain to see that the US is currently unwilling to make even relatively minor concessions to allies in return for marginal contributions to its overall agenda – a stance reflected in its trade agenda as well as its break with European partners. In the 20th century, a very different doctrine of global presence and free-world leadership drove US foreign policy: through the WTO, through NATO, and through organisations like Intelsat – all of which would face a drastically less favourable US political economy today. And so it seems unlikely that the US would be interested in any international dealings on AI beyond tough-love bilateral horse-trading and ‘deal-making’.

China & Great Power Competition
I’m not a China expert, so I won’t speak to its internal incentives. What I will say is that a China-led, or even chiefly China-supported, AI governance push that does not have equally strong US backing is very likely to fail. It goes without saying that the US wouldn’t join a Chinese proposal, but it goes beyond that: The USG’s hawkish position on China, especially as it relates to AI, will cast suspicion on anyone who follows a Chinese-driven international process. And in an AI age, no ‘AI middle power’ really wants to be outside the USG’s good graces, lest it lose unfettered access to US compute and models.
To make matters worse, both great powers currently face politically untenable costs for ensuring the verifiability of governance provisions. Standards harmonisation currently comes with a real cost: Third-party access to highly confidential weights and training processes – or at least to confidential institutions that carry out that oversight – and oversight of the import and deployment of compute and, in China’s case, even of compute smuggling. At the very least, it introduces a longer pre-deployment feedback loop and the requirement to share certain model features, like parameter count or training compute, either of which you might have a strategic incentive to keep confidential. That makes enforceable governance genuinely costly to the great AI powers, who are currently very much waking up to the strategic nature of their leading models and starting to guard them jealously. There has been real progress on verifiable measures, but it has been progress on making verification technically possible, not on placing it above reproach in a highly paranoid international environment. Past successful harmonisation of civil technology has managed to steer clear of requiring shared access to security-relevant information. It does not seem like international AI governance would be able to do that, putting it much closer to disarmament treaties, which have both been notoriously difficult to enforce and required dire circumstances and acute precedent to even come into existence.
Middle Powers
Any hope that a major international push could emanate from any other country seems misplaced, too. The UK’s ultimately unsuccessful push to establish a more permanent series of AI safety summits failed as its institutional backers lost ownership of the issue; too many other participants started caring and stopped playing along with the UK’s idiosyncratic, safety-focused take on frontier policy, right up until the French took it off the table entirely. Today, we find ourselves in the worst possible setting for middle-power-driven international governance: middle-power governments care less and know less about AI than the major governments they’d want to constrain, namely China and the US. So they have no leverage and no technocratic edge in negotiations. And to make matters worse, any middle-power-driven attempt might be read as a ploy to help those middle powers ‘catch up’ – and so appear to the great AI powers as an attack on their leadership that must be thwarted, not indulged.
Bridging these diverging incentives seems exceedingly difficult, and compared to the advancements made in developing specific institutional proposals to channel the alignment of these incentives, I see very little progress toward fostering that alignment itself.
Different Degrees of AI Belief
The second major obstacle to a priori international governance is the wide and fluctuating divergence in assessments of AI’s role and importance.
No One Is On The Same Page
Underlying these specific incentive mismatches is an even broader issue: nations disagree quite strongly on how important and strategic AI will really be. That’s a problem, because it will make them disagree quite strongly on how seriously to take work on an agreement, what trade-offs to accept, and what issues to prioritize.
The US diffusion framework from late last year is a particularly telling example. The framework splits the world into three tiers, with only tier 1 receiving unrestricted access to top-tier compute imports from the US. If the foreign policy apparatuses of major global players were aware of the outsized importance of AI capacity to their respective national security and economies, they’d have dealt with the establishment of this framework quite differently. There would have been negotiations, concessions, threats, counter-offers, and top-level meetings around which tier to be slotted into. India would have tried its best to prevent its classification as tier 2, and the EU would have vigorously appealed its non-uniform treatment, which draws a line right through the continent. It’s hard not to take that negligent attitude toward the diffusion framework as core evidence that diplomatic channels and foreign policy platforms are simply not remotely aware of what’s happening in AI.
What does that mean? Drastically different budgets, drastically different levels of ‘taking this seriously’ among partners. The founding story of just about any major international organisation begins with one thing above all others: Shared awareness of the underlying issue and its broad implications. With AI, no such shared awareness exists.

AI Moves Fast, Treaties Don’t
Even if you got everyone on the same page right now, you’d face another, even larger issue: nations’ perceptions of AI will radically change over the course of the months and years that negotiating international governance would require.
If tomorrow, a talented AI policy researcher figured out the ideal international governance proposal that reflected everyone’s current interests well, the process might still be doomed to fail. I’m sure most readers of this piece are aware just how many instances of “waking up” to AI’s transformative effects might still be ahead for many governments. With every such change of heart, their priorities will radically shift: the risks they think are real, the benefits they think they can’t miss out on. And so a consensus on agreements making certain trade-offs between these risks and benefits is prone to reconsideration at every wake-up point. If the process starts with deepfakes, copyright or jobs, and ends on misuse and strategic potential, it will not go smoothly at all.
The EU AI Act is a great example of this effect: It started as a technocratic piece of legislation on a marginal issue, but once AI reached some mainstream salience, incentives shifted radically, the structure was revised, and the act almost failed. There were three intertwined effects at play here, all of which would similarly apply to any a priori international governance. First, the perceived policy area of the issue shifted: from consumer market regulation to broader economic policy with some security implications. As a result, compromises and hard-fought agreements reached on regulatory merits were worth little in the later negotiations, as the previous agreement was radically reevaluated in terms of unrelated policy areas. Second, the perceived importance of the issue shifted: AI turned from a peripheral policy issue into an (albeit niche) political issue that touched on electoral incentives beyond sound policymaking. Third, as a result of both trends, the decisions were bumped up the ladder: from technical ministry advisers to ministers and heads of government. That political leadership had little reason to trust the early achievements, because it recognised that those agreements had been reached following different priorities, by people whom it did not trust to have considered important novel dimensions.
You can see how this might quite easily happen with a priori AI regulation as well: Even if we get everyone in the same room today, the agreements they inch towards will not matter much by the time a treaty might be signed. In the meantime, AI will pick up political steam, the policy debate will shift – maybe to economic policy, or labor policy, or security policy – and any progress will be re-evaluated under completely novel terms that might give rise to new and conflicting sets of incentives.
Because of how fast progress is and how slow diplomacy tends to be, it’s very unlikely that any governance gets done in time to avoid these shifts. And because of how unpredictable AI progress is, it’s hard to pre-empt that trend: If you staff the negotiations with national security types, but the labor side of AI blows up first, you’re back to square one, and vice versa. And if you hope you can start work only once the picture on AI is entirely clear to everyone involved, you might very well be way too late.

A Posteriori International Governance
Given these challenges, it seems much more prudent to conceptualise international AI governance as fundamentally narrow, transactional, and reactive – in short, as ‘a posteriori’ governance that responds to politically noteworthy developments and deployments of advanced AI systems. Under this notion, international governance is reactive rather than anticipatory: It arises from a sudden alignment of political incentives and policy necessities, not from abstract prudence. I can imagine many settings for such moments:
As a reaction to untenable levels of AI misuse that threaten nation states, such as pathogen development or cyberattacks – aligning incentives to prevent that misuse from being carried out from anywhere in the world.
As a reaction not to the emergence of truly destabilizing superintelligence, but to its proliferation to rogue actors, like in the case of the JCPOA – aligning incentives against the enormous volatility that would introduce.
As a reaction to an AI arms race that begins to impose real costs on those in the driver's seat, like in the case of the Washington Naval Treaty – aligning incentives to resolve a manifestly costly coordination problem.
As a reaction to economic disruption and global inequality leading to mass migration, as has been dealt with in a number of regional treaties in the past – facing the reality that mass migration cannot feasibly be stopped only at the borders of target countries.
These are not mutually exclusive – in fact, I suspect there will be not one big moment for international AI governance, but many small ones. What all those scenarios have in common is that they leave little time to build new institutional channels or to set conditional, abstract terms. But above all, they don’t prompt an international approach to AI at large, but instead action on very narrowly delineated intersections. That also means many issues will never be the subject of international governance, depending on how the policy windows play out.
Wherever a posteriori governance does happen, I think it is likely to arise in hours of crisis diplomacy: eleventh-hour summits on things that only very recently reached political salience, playing whack-a-mole with the most urgent issues. Political leadership teams supported by hastily scrambled ad-hoc groups of advisors in the lead, not technocratic officials. Rounds of pre-summit meetings that go nowhere, and long top-level sessions where the actual decisions get made in the room. None of the triggers for a posteriori governance I lay out above allow for anything else; they all come with political urgency for the leaders participating, policy urgency in the face of mounting threats and pressures, and sufficiently high stakes to all but require heads of government to make all the important calls.
Such a trajectory is dangerous ground. The Treaty of Versailles was forged in a very similar setting and made grave mistakes that destabilised the continent for decades: Experts and their input were sidelined as their ideas did not fit the political appetite for punishing Germany, and highly political backlash to World War I dominated negotiations. The result was untenable destitution and offense to Germany that led to the Weimar Republic’s fatal collapse and contributed to the atrocity that followed. Work needs to be done to ensure that a posteriori action on AI and its harms does not go the same way – and since this action might often come at politically charged times, that work faces an uphill battle.
In that sense, maybe the most serviceable example of how this governance could go is the JCPOA, the deal negotiated in 2015 to keep the Iranian nuclear programme at bay. Its nature was preventative, but I think it’s fair to characterise it as a posteriori governance, since it arose long after the destructive power of nuclear weapons and the volatility of Iran’s decision-making had become clear. It was an exceedingly narrow treaty aimed at one particular risk emanating from one particular actor, with surprisingly broad buy-in: In a US-driven treaty, China and Russia got on board, and Iran mostly complied. And it has been leagues more effective than many other nuclear non-proliferation efforts, on whose watch nuclear weapons found their way into some of the most geopolitically volatile theatres, including Israel as well as India and Pakistan. But there’s a warning lesson in that, too – even the JCPOA did not last. Reactive governance ought to be well-prepared.
Preparing for Reactive Governance
If what I suggest is in fact the most likely pathway to international governance, then it needs very different preparation, fast. My main hope with this essay is to convince you that this is the most likely path ahead – but for good measure, I think there are at least two highly addressable gaps in the current set-up.
Homogenous Awareness
Governments should be on broadly the same page when bigger AI disruptions hit. A lot of work on AI policy has been targeted at the most important AI jurisdictions. But many pathways to international agreement run through middle powers and their effectiveness in negotiating, contributing and providing neutral ground. Given the current level of AI awareness both in the great majority of smaller AI powers overall and in historically strong negotiating powers in particular – say, Switzerland, Germany, Qatar or India – their enthusiastic participation seems unlikely.
Once opportunities for narrow international governance arise, a lot will depend on these countries’ capacity to react to what’s happening. If they remain in the dark on AI, agreements will have to be made with a lot of wildcards in the room. Whether it’s coordinated expert outreach, information arbitrage from strong state capacity on AI in the US or UK, or more on-the-ground policy work – much more national-level work in AI middle powers might help.
Workable Proposals ‘In the Drawer’
Many international governance proposals being made right now fit the current sense of urgency and political attention – to the extent that they are overfit to a political environment that won’t last for long: They usually include a long institutional run-up and assume a global political environment that’s only half awake to the implications of fast AI progress. When the windows for a posteriori governance come around, they’ll require narrow, quick, easy fixes that cover at least some low-hanging fruit and can be implemented smoothly and with little friction.
That means that measures should match closely with the language and processes of the policy areas they interface with (whether that’s labor or security policy), and that they should be somewhat modular: Policymakers shouldn’t have to buy the entire package to take some useful suggestions away, so stipulations should ideally be robust across degrees of opt-in and methods of enforcement. Suggestions like these won’t look as good on paper – but they’ll be much more easily adopted by a 2am top-level crisis meeting.
This kind of language and policy can be developed well in advance. It’s less satisfying work in some respects: It will be incompletely theorised, and a lot of people will argue that it fails to address their most-feared edge case. It’ll also require some compromise, especially if it is to find endorsement from major players in the respective intersecting policy fields. But I think preparing for these moments of ad-hoc reaction can be very valuable: Policymakers will scramble for expertise to get it right, and one can get into position for that now.
Outlook
It’s hard to predict the exact circumstances of a posteriori governance. That makes it worth prioritising breadth over depth. A lot more people spending a lot more time brainstorming triggers for international treaties like the ones I suggested above, and then developing narrow, modular policy solutions that these treaties could include, would be a very valuable exercise. More sophisticated governance solutions might still emerge from that – in fact, many temporary treaties have set the scene for the emergence of broader institutions. But if you agree that the likely initial setting for international governance will be narrow and reactive, then you might also agree that much more preparatory policy work should go into its crucial first phases.
The stars might not align on comprehensive, preventative international AI governance – but much could be achieved in AI policy if we could more readily accept the second best and prepare for it accordingly. Let’s get the reaction right.