What AI Summits Are For
The best international AI policy is to fix national strategies
Ahead of the AI Impact Summit in Delhi this week, I’ve published two longer-form pieces on the fate, prospects, and strategies of AI middle powers. I hope you give them a read:
with Dean W. Ball for the Foundation for American Innovation: ‘The Race Worth Winning’, a long-form report on the future of middle powers & their institutions;
with Sam Winter-Levy in Foreign Affairs: ‘The A.I. Divide’, an essay on strategic challenges and pathways for these same powers.
Today’s post places them in the context of what to expect from the Delhi summit.
En route to the AI Impact Summit in Delhi, there’s pessimism in the air. It’s shared by many of my fellow travelers to this fourth installment of the AI summit series: what began as a slightly premature safety forum at Bletchley and continued as a competition in naive boosterism in Paris now risks overextending itself into an attempt to cover an impossibly large spectrum of AI-related questions.
But I remain hopeful: instead of chasing international governance far outside the Overton window, we might be able to use the summit as a forum for a badly needed international conversation about national AI strategies.
We’re Past Peak Summit
The governance advocates’ pessimism is warranted: it should be clear to everyone that this summit—this summit series, this cast of governments, this arrangement of technological reality—will not culminate in ‘international governance’.
That’s first because AI is outgrowing any issue-specific international forum as it bleeds into core areas of domestic and international politics: in many ways, the Munich Security Conference and the Republican National Convention are just as much AI summits as this one, and perhaps more so. The AI Impact Summit, by contrast, has no obvious mandate to grapple with governance questions. This ties into a second reason: the governance of the underlying technology itself will happen in America and China for the foreseeable future, unaffected by the rest of the world. Now that the great powers have awoken to the national importance of AI, they’ll fight off any attempt to dictate its pace and shape from the outside.
Some of the summit’s attendees already know this, some will express their dismay at the revelation, and some will carry on anyway. By the standards of what many had hoped for when the summit series was inaugurated—binding treaties, global safety standards, convergence on safety principles—not much will come of it in the current environment. Instead, the summit serves as a platform for conversations, if things go well; announcements, if they go as expected; empty words, if they go badly.
A Multipolar Moment?
The confused status of the AI summits links back to a deeper geopolitical uncertainty. Especially in the last few months, middle powers have started to argue that we might be headed for a more multipolar order, or at least away from the clear unipolar moment of the post-Cold War era. I think the underlying aspiration for greater geopolitical self-determination on the part of the middle powers, perhaps most visibly articulated by Canadian Prime Minister Carney in Davos, is appropriate in the broader strategic context. Yet it’s clearly in tension with what’s happening in AI specifically: the concentration of frontier AI in one, maybe two, great powers could herald a technological gap indicative of a strongly bi- or even unipolar order.
These findings pull in opposite directions: the multipolar thesis invites joint sovereignty efforts, perhaps even the rejection of deeper engagement with the great powers—but if you realised too late that the world was in fact turning bipolar again, that path would leave you cut off from either great power’s AI ecosystem. In the face of that risk, it’s hard to justify a self-assured middle-power project, an attempt to secure a more multipolar world order.
But none of this means the opportunity for international engagement must be wasted altogether. In fact, I believe quite strongly that now is the moment for worldwide AI policy—not ‘inter’-national, but national: nations need to start actually grappling with the transformation they face. Events like the summit still provide an opportunity to advance this conversation: for those in the know to share what they know and those in control to share what will happen; for those set to bear the brunt of disruption to share what they require; and for all of them to get to the same table. These are not conversations about governance per se, but about information sharing and narrow transactions around national strategy and economic and strategic leverage. Their outputs won’t be treaties and papers, but export-import deals, ambitious industrial policy, and aggressive adoption roadmaps. And the very first thing to get right for this kind of international AI policy is to close the gap in awareness and strategic capability around AI. For that, a summit like this can serve as a venue.
Middle Powers Between Worlds
To lay out what I think this conversation should look like, I’ll share two recent pieces I’ve written on AI middle powers—countries that lack frontier AI development but still have sufficient economic, institutional, or strategic capacity to be live players far beyond their borders in the years to come. I feel strongly that their prospects should be a central theme of international AI policy, because they are where the greatest variance lies: middle powers are united in facing the greatest gap between where they’re headed by default and where they could be with the right strategy today.
The first piece, co-authored with Dean W. Ball, is a report titled ‘The Race Worth Winning’. In it, we try to paint a picture of what’s at stake for middle powers and their future as nation states. We argue that clear-eyed middle powers will realise that the race worth winning is to be the first to use AI to reform the concepts of the nation state and the national economy. If they embrace that change and back it up with hard leverage and clever import deals, they can get ahead of the curve. If they do not, they’ll bleed capacity and legitimacy until they fade into irrelevance.
The second piece is ‘The A.I. Divide’ in Foreign Affairs, co-authored with Sam Winter-Levy. It argues that middle powers risk ‘capturing the risks while minimising the benefits’ of AI: while it will be hard for them to access frontier AI themselves, it will be much easier for their adversaries and competitors to wield it against them. We develop strategic pathways to address that prospect on two levels: first, securing access to frontier AI through ‘bandwagoning’, ‘playing both sides’, or ‘sovereignty moonshots’; and second, entrenching a strategic position along the AI supply chain to retain an economic stake in AI progress.
The pieces share a sense of urgency: whenever I contrast the conversations I have in the U.S. with those I have in middle powers, the rest of the world looks far behind the curve. What’s more, the questions these middle powers face seem almost intractably difficult and interwoven. Time and again, a question that starts as one of narrow AI strategy collapses into more fundamental issues of statecraft. It turns out the answer to a lot of middle-power problems would be ‘better economic policy’ and ‘a better geostrategic position’. Even now that the problem is clear, it’s sometimes hard to know where to begin.
But I think we have some promising ideas, and they’re gradually being backed up by political capital: as tired as the cliché of the wake-up call may be, many of the middle powers really are getting there. Beyond the many inevitable discussions about communiqués and their absence, I think we could make some progress on this when we meet in Delhi.
Do please reach out if you think we should chat at the summit!