The Delhi Gap
Billions of people or trillions of dollars are catastrophically wrong
My experience of the AI Impact Summit in Delhi has been characterised by a bewildering gap: on one side, there’s the summit as a trade show, an admittedly energetic tech conference with a lack of depth that would have you think it might just as well be about anything else: the internet or solar panels1 or robot vacuum cleaners. People sell new versions of the technology, strike narrow business deals, parade their local champions and go home with a few more contracts. On the other side of this gap, there’s the summit as a meeting of leaders on frontier AI—serious and in touch with Silicon Valley’s rising realisation that progress toward advanced AI is fast. But while hundreds of thousands attended the former conference, almost no one but the AI companies and the U.S. delegation made it to the latter. This gap puts the world in danger of capturing all the risks of AI while forgoing most of its benefits.
It’s easy to simply bemoan this disconnect and write a sneering post about normies not getting what’s going on. In fact, I suspect that doing so will become a genre of its own in the aftermath of the summit, just as it became a genre to despair at the last AI summit in Paris cutting safety-related issues from the agenda. But I think we should do better than that. My Delhi reflections come a little late because I believe we should explain why this gap matters: what it means for AI strategy, and where we should go next. I want to make two claims today: at this technological juncture, the gap is wider than most of us would have expected. And it has grown wide enough to seriously endanger robust solutions.
Geography of a Gap
In one corner, the world. By numbers alone, the optimists have it. Their beliefs are difficult to characterise in detail, but I think some common features are the belief that capabilities at the very frontier have plateaued or will soon plateau; that the great majority of economic value will accrue to those who diffuse models through the economy; and that domestic ‘good enough’ AI beats internationally leading ‘frontier AI’ for almost all use cases. That set of views reflects what I believe to be two broader underlying notions: that the American frontier developers are in some important sense mistaken about the expected value of pushing the frontier, and that appropriate national strategy fundamentally deals with software-forward questions of prosaically implementing AI systems not unlike today’s.
In the other corner: the technocapital machine. A minority disagrees—but what a minority it is! The camp of true believers consists of those closest to the AI revolution. It’s made up of the representatives of the three actual frontier AI companies, some members of the U.S. delegation, and a ragtag group of policy types in middle powers. They, and I with them, believe that the rest of the world is fundamentally wrong: that the great powers are the only live players in this race, that the vast majority of AI’s impact on the world will come from systems and form factors yet to come, and that any diffuse implementation strategy with no recourse to technical and geopolitical leverage and resilience is doomed to fail. I’ve written repeatedly and in some detail about the middle power policy this would imply.
The rest of this essay takes elements of this view for granted: that frontier capability will be strategically mandatory, that AI will be transformative, and that middle powers can’t build frontier AI themselves. If you disagree, I’d still ask you to bear with me for the following—if not from conviction, then as an intellectual hedge. What I posit to you is the consensus view of the frontier AI developers, on which investment flows in the trillions hinge. Join me in suspecting it’s not entirely mistaken.
If you’re in search of a piece that elucidates the exact nature of the broader Delhi Gap, I recommend Dean W. Ball’s latest: next to a more thorough account of the positions held, it provides a causal model connecting the rejection of superintelligence to a wish of the world to emancipate itself from American hegemony.
Buyer’s Markets?
What I’ll add is that, if you asked the middle powers, they’d give you a different reason than the ones Dean names. They’d say their instinctive rejection of the American message of dawning superintelligence has to do with the obvious financial incentive motivating those who believe in outsized AI progress. The frontier companies are seeking customers, looking to sell to governments and local businesses, and charging a hefty premium over weaker capabilities; and the American government seeks to get the world hooked on its stack for purposes of market share and leverage. Middle powers feel comfortable rejecting these advances because they believe the underlying dynamic is still that of a buyer’s market: AI companies are competing for government contracts, and not vice versa.
This incentive overlap is concerning and unfortunate, but it doesn’t make the true believers wrong in any important way. And the buyer’s market dynamic is prone to flip quickly and decisively: currently, the world is more constrained in buyers of advanced AI than in providers of compute, and therefore of tokens of artificial intelligence. As more and more gainful implementations of AI systems are found, inference compute will become scarce—especially given the raw compute requirements of advanced agentic systems—and so will chips to export and plug into datacenters. Countries that reject deals today could well find themselves strapped for exclusive AI access in a few years, scrambling for second-best GPUs and insecure access to inference from unreliable partners abroad.
The market seems so open because right now, developers need to rush to get the next years’ tranches of compute online—but once they are, the infrastructural substrate of the economic transformation is locked, especially since globally available compute looks likely to run into hard constraints in a few years. Misreading a temporary boom as a guarantee of future availability would be a big mistake.
Why To Mind the Gap
Many on my side of this conversation have not been all that concerned by middle powers’ lack of strategic awareness for some time: they—we—had long been convinced that directionally, things would get better. Increased capabilities would break through to public awareness, and thereby to policy action in middle powers. If there’s one thing I think the juxtaposition of the summit and the Silicon Valley conversation on coding agents and recursive self-improvement implies, it’s that this hope was misplaced.
Capabilities themselves, reported by select groups of AI insiders, will always have the makings of a Rorschach test: true believers will treat them as confirmation of their bullishness, and skeptics will see empty hype. In my view, it will be real effects that will change minds: safety-related warning shots, economic disruptions, and so on. But I think both warning shots and economic effects are likely to lag the capability frontier so substantially that waiting for them prompts strategic action far, far too late.
Some agree with that view, but suggest the awareness gap itself is not that big of a problem—that directional progress toward ‘low-hanging fruit’ is possible even on the minimal consensus that AI is vaguely important. I’m not convinced. I’ve argued in the past that middle power strategies should aim not at the core of AI development itself, but at the broader supply chain: to gain leverage up- or downstream of advanced AI and use this leverage to secure access to frontier capabilities and a financial share in AI-driven economic growth.
But you need a minimum level of awareness to make effective supply chain plays, or you are headed for at least one of three failure modes. You risk…
Underpowering your economy. If you don’t share the specific belief that frontier capabilities are what you need, the temptation is strong to favour ‘sovereign’ or open-source solutions, perhaps even by regulatory fiat—forcing your government and economy to use AI that doesn’t make you directly reliant on American imports, but also leaves you exposed and uncompetitive with adversaries and economic rivals.
Wasting time on fake sovereignty. Middle power initiatives to reach the frontier might fail painfully slowly. They’ll spend money and time chasing alternative approaches or attempts to scale, and by the time governments realise they are unlikely to succeed, they will have lost valuable time to secure imports and bottlenecks instead—or might even be bound by political path dependencies to see the attempt through to the bitter end.
Selling early. If you run the bottleneck strategy, but underestimate the future importance of AI, you run the risk of underpricing your assets. In a world with AI, selling off your semiconductor manufacturing equipment company, or your datasets, or your manufacturing plants, to a great power for a nice lump sum sounds like a great play. But if they represent future bottlenecks for a truly transformative technology, your appraisal of their value should change: you should treat them as assets fundamentally capable of powering the entirety of your economy in a few years, and only ever consider a sale under very specific circumstances. Doubling down on bottlenecks while expecting a normal world risks just building up capacity that will be swept up and flipped in Delaware or purchased in Guangzhou.
Beyond the core of the supply chain strategy, the gap also endangers middle powers’ ability to do well on political economy. A recent essay by advisory firm Citrini has made the rounds this week, predicting a derailing of the political economy as the result of a harsh reallocation of value and revenue and a collapse of aggregate demand. Much as this scenario might seem highly contingent in America, it could well unfold in localised forms: most countries in the world are sleepwalking into the fiscal and labour effects of advanced AI, and unlike the US, they have no easy levers to mitigate them. How are you going to get the fiscal and social capacity in place to deal with these prospects—edge cases as they might be—if you’re operating under fundamentally mistaken assumptions about the future of the technology?
Endure, Export, or Engage?
Realising the gap persists and treating it as a substantial strategic risk for middle powers allows action along three pathways.
Endure, i.e. just accept that things will be bad in middle powers, and the window for action only arises once actually transformative effects manifest. Focus on the ways you might mitigate the harms, and begin thinking about the ways to rebuild—a view perhaps endorsed by an interestingly ‘AGI-pilled’ recent post by Tyler Cowen. You might be more inclined toward this view if you thought the ‘wake-up’ moment will reliably happen fairly early, or that the effects will be somewhat kind to middle powers. I share neither of these views, and so I’d seek to avoid a world where we mostly play for ‘endure’.
Export, i.e. try to build a mutually beneficial solution in America: through lab export programs and government export promotion frameworks, advanced AI capabilities along strategically valuable lines can be exported to middle powers. There is even a U.S. incentive to encourage the buildup of strategic capabilities in allied governments in an attempt to match Chinese scale. I’m excited about these programs, and have written much about them in the past. But they’ll only ever go so far with ignorant buyers: as long as it’s Americans who make the pitch, middle powers will distrust them and perceive them as extractive salesmen. The breakthrough moment for these programs, I believe, will come when at least some trustworthy middle powers embrace them, experience them as highly valuable, and the word spreads between allies. But that requires us to break through in at least some important and widely trusted allied markets.
And so that leaves us with engage, i.e. to renew pushes to raise awareness. I’d like to do that, if you’ll join me, but essays will no longer do. The work has to be more specific, more trustworthy, and in much greater depth. I think that most who have the view from San Francisco and want to think about the international dimension have been too content to take the broad view.
I’m certainly guilty of this, having written much more about the admittedly vague category of middle powers than the idiosyncratic specifics that might be required to make progress on any one of the important powers. In some ways, I was a bit too optimistic—I thought I might plant a flag of general strategic considerations, and the interest would come as AI progress continued. Instead, and to enable more effective versions of ‘export’, I think future work should be surgical. The broad strategic toolkit exists, and I think it now ought to be translated into the policy processes of the most high-leverage middle powers. South Korea and Japan as East Asian allies with comparatively little concern about U.S. dependency, as well as Canada as a likely leader of joint middle power efforts, immediately come to mind.
Outlook
My thoughts on the comparative attractiveness of these strategies have somewhat changed over the last few weeks. Enduring seems to be even more of a gamble than I assumed, because if the wake-up comes from effects, not capabilities, the initial shock will be even greater. The export response is even more politically fraught than I thought (but still worth trying); and I suspect ‘engage’ needs much more surgical country-level focus. And so the right next step might be to zoom in: on helping promising countries willing to take the first step to get their strategy right, and then to export the success case throughout other liberal democracies. I’ll think more on where we should start, and I hope I’ll have more on it soon. But in the meantime, I think Delhi should send us all back to the drawing board.
1. All the credit for that one goes to an observant colleague.