Press Play To Continue
‘Pausing AI’ is bad policy and worse politics
The zealous wing of AI safety advocacy is riding high on a string of recent PR successes: demonstrations in favour of a ‘pause’ of AI development have expanded in recent months, and U.S. lawmakers have begun engaging with the idea. Last week, this culminated in Senator Bernie Sanders and Representative Alexandria Ocasio-Cortez introducing a bill formalising one version of a pause: a moratorium on the construction of data centers. These data centers are a prerequisite for frontier AI development, so a ban on their construction would crash the AI market and pause the progress it is driving. Advocates for such a move are Luddites, of course—but on the eve of profound and often scary technological transformation, many feel that now is the time for some measured Luddism.
I believe these advocates are mistaken about the politics even if we grant their view of the risks: pauses and moratoria would likely sabotage our progress along a narrow path toward beneficial and safe advanced artificial intelligence. And in the likely event of their political failure, they’ll leave behind a much worse environment for AI politics.
It’s worth spelling this out because a pause of some kind is clearly something some policymakers are asking for. Worse, it’s something a government could enact. We find ourselves in an AI paradigm that depends on the most complex value chain in history, culminating in huge data centers—shutting down this supply chain would be costly, but not strictly speaking impossible. And so despite all caveats, pause proposals are now on the streets of San Francisco, in the U.S. Senate, and on the minds of many AI policy advocates. That makes them worth engaging with: the ‘pause’ is on its way to becoming the canonical bad idea in AI policy.
Many others have made the all-things-considered point that a pause introduces prohibitive strategic and economic costs. But to an ardent safetyist, these costs are often bearable: they see us on the path to existential catastrophe, and will pay a high price to avoid it. I believe that, even if you are principally and perhaps exclusively concerned with reducing catastrophic risks, you should oppose the notion of a pause. The idea’s current uptake is not indicative of lasting political traction; its most likely implementations would be a huge setback for safety; and it is making AI politics lastingly worse.
The Golden Path
It’s first worth clarifying the backdrop against which this debate takes place. That backdrop is one of an AI revolution that is going better than we had any right to hope for. The type of AI progress we have right now is—both by the standards of AI paradigms people had predicted years ago and by the standards of past waves of technological revolution—highly democratic and in the hands of forces for good. These features are not in themselves guarantees that things are going well; they do not justify laissez-faire, and they don’t mean the work will be easy. But they should call into question the idea of dramatic, pivotal action. In particular, three features of the current environment strike me as fortunate:
We’ve discovered the necessary technical breakthroughs about as early as we could have; they’re always just at the edge of what’s feasible on today’s compute. That means we face little to no ‘compute overhang’: no innovation has suddenly broken away from infrastructure constraints, and we keep facing hardware and infrastructure limits to further capability jumps. So far, that has meant a gated pace of progress, which in principle allows for iteration on risks. It also reduces the purview of AI policy to trackable, infrastructure-rich major entities: you cannot build AGI in a cave with a box of scraps.
The frontier is led by multiple private companies—neither by governments nor by a single monopolist. That’s a double-edged sword, but I believe it ultimately cuts in our favour: competition for talent provides realistic checks and balances, and the need to justify market share creates incentives to deliver consumer value. And despite all shortcomings, we have seen races to avoid harm play out, incentivising some responses to child safety and energy use issues.
Liberal democracies control most of the frontier AI supply chain. This, too, is extraordinary for an emerging technology in the 21st century. Autocracies are leading innovation or controlling the supply chain on drones, digital opium, robotics, modern missiles, and neon-bright city skylines—but most of the critical supply chain nodes for advanced AI are under the control of liberal democracies. That gives the citizens of democracies legislative leverage over the technology.
This path, however, is highly volatile. If it is to continue, the investments have to keep working out; if the providers of private or political capital get burnt once, others might pick up the torch. The path also grew out of an awareness gap: you could, in fact, see the future first in San Francisco, but now that everyone has made the trip, they won’t forget about it again. If we stop the trajectory now and some version 2.0 of the AI industry regroups in a new technical and political reality, the same favourable trends might not hold.
All these are the reasons I don’t want this to stop. I don’t want to reshuffle the deck—I like our hand. I say that at this point because it clarifies the stakes of dramatic intervention: if we pass policy that destabilises the current trajectory too much and resets the race, I’m not sure how the next iteration will play out. What happens when industry no longer needs governments that are paying attention, when the free world no longer leads autocracies in the relevant infrastructure? When the barriers to development and deployment are no longer necessarily linked to consumer preferences and actual usefulness? I’d rather not find out, and so I’d much prefer to get this right by working on the margins of our current trajectory. If you’re in favour of rocking the boat instead, you’re putting a lot at stake.
Everyone But The Center
These days, many feel that the current paradigm is headed for disaster soon, and so they advocate decisive intervention now. And at first glance, their idea of a pause is having a moment. Images of one of the bigger protests to date are spreading on social media, and Sanders in particular is starting to sound like ‘the good timeline’ in arcane AI safety posts on LessWrong from 2013: he’s invoking hardcore safetyist talking points, publicly engaging with some of the most prolific safety advocates, and indulging in what used to be called doom-posting on the floor of the U.S. Senate. The view through the mists that separate Berkeley from the world outside is: there’s something real happening, the start of a trend toward the fabled moment of policymakers ‘waking up’.
One specific feature of the current moment that has made pause advocates hopeful is its bipartisan nature. This is not just coming from one AI-pilled lawmaker (though Sanders is doing his best), but is driven by a fairly heterogeneous crowd of advocates worried about jobs, environmental effects, power concentration, big tech’s alignment with Democrats in the 2010s or tech CEOs’ alignment with the Trump administration in 2025, and so on. While this entire coalition isn’t exactly aligned on all versions of a ‘pause’, its members do share a motivation to intervene drastically in the pace of AI progress in order to prevent its risks. This alignment has been visible in recent public discussion, and it has even survived the bill’s introduction by left-wing outliers of the Democratic party, with Republican Senator Josh Hawley expressing sympathy for their concerns shortly after.
And while many safety advocates aren’t naive about the coherence of this coalition, they do feel they’ve found a tiger they can ride to legislative success.
Yet there’s a difference between the anti-AI horseshoe bipartisanship employed by the pause movement and the moderate bipartisanship that usually portends successful policy. Not a lot of ideas that have started on the radical flanks of both parties have seen their time come. All past areas of overlap between characters like Sanders and Hawley—from interest rate caps on credit cards to helicopter money in an inflationary environment—have seen little electoral or legislative success. Clown cars rarely fit 60 U.S. Senators.
By now, most of Congress knows this, too. A policy platform that brings together the populist left and the populist right is very difficult to sell to the moderate lawmakers who can stall and block federal legislation. Even those sympathetic to the idea that their party’s respective flank sometimes comes up with interesting new ideas will be particularly apprehensive of the horseshoe alignment. ‘Yes, it’s a Bernie idea, but it’s one of the good ones—see, Steve Bannon’s go-to guy for fire and brimstone endorses it too!’ can’t be a great pitch to the majority of Congress. The congressional origin of this policy idea is doubly counterproductive: it’s bad PR that leaves supporters at risk of being branded as radicals, and it’s read as evidence of a populist lack of sophistication.
In fact, that coalitional structure may well have the opposite effect: lawmakers seeking to carve out their own position are looking for ways to make tangible the old adage of ‘maximising the benefits while minimising the risks’. There will be no cheaper signal than disagreeing with the data center moratorium idea specifically, to clarify why your own proposal is not anti-growth or anti-tech. ‘Condemning the pause’ could easily become the legible way to put distance between yourself and the maximally Luddite position while balancing powerful donor interests and popular appeal. To a pause advocate, that can’t possibly have been the point.
Of course, ideas that start on the fringes usually have a different purpose: they’re a way to put a concern into policy language, start a conversation, and expand the window of discussion for moderate voices to propose policy better fitted to similar ends. We’ll return to the pause movement as a political Overton window-spreader shortly; but if we judge it as a policy movement, it seems unlikely to succeed.
Against ‘Directionally Good’ Pauses
Though things aren’t quite black and white. It’s now of course somewhat more likely than before that a very specific version of pause policy garners meaningful support: one advanced by a momentary anti-AI coalition. It is that version—not merely the whitepaper version—that we must scrutinise when discussing this new movement.
Their version of a pause would be the one that satisfies the horseshoe I described above. The success story, if it could ever be written, would be that of a political syzygy: groups all over the political spectrum aligning into one broad anti-AI omnicause and settling on something like a pause as the lowest common denominator of their policy interests. We already know this coalition from past fights around AI preemption—but with a crucial difference: on preemption, the coalition worked because stopping legislation is easy to rally around. Here, it’s supposed to be leveraged in favour of something substantive, which invites much more complexity. Such alignment would, in my view, be the only way a pause proposal gets close to the votes it needs in the current political environment—it’s the only way around the political forces I’ve described in the previous section. It’s unlikely to assemble fully, but stranger things have happened in the aftermath of the kinds of upheaval AI might soon cause.
The version of a pause that would result from this coalition seems particularly bad, even by the standards of a pause. This is for both structural and political reasons.
Structurally, when the details get ironed out, safety-motivated pause advocates will not be the most powerful people in the room. This is coming together at a rare moment of alignment between many interests—anxieties about jobs, wealth concentration, humanity, the environment, and existential risks—and it will need to tap into all of them to get through. Basically all of these interests have bigger lobbies and bigger constituencies than catastrophic risks, so when there are trade-offs, they’ll cut against the ability of safety advocates to implement their version of the details. You’ll get a pause, but perhaps not the export ban; you’ll get your deployment frictions, but perhaps not the restrictions on internal development; and so on.
Politically, this is still a domestic conversation. The incentives of many policymakers driving this are to make national policy, at the very best. Cynically, you might think many mainstream political actors are only engaging to introduce bills and brand themselves as thought leaders—but let’s assume they’re in it to pass at least some legislation. For any political operator, this would have to happen on a short timeframe, ideally ahead of the presidential primaries. If you get to a point where the domestic moratorium language is done and has a majority ready to go, are you really going to stop because some of the arcane details are not in place? Or are you going to take the win, campaign on the achievement, and write into the bill a pinky promise to take care of the hard questions later? If you look at the legislative record of election-year policymaking, especially by the characters involved, I think you’ll know the answer.
Second Best Is Worst
Now of course this dynamic is not exclusive to AI policy; and oftentimes in policy, a near-win is already good progress, so it’s worth riding the tiger anyway. Perhaps we shouldn’t let the perfect be the enemy of the good? Not so in this case. The pause proposal hinges on its most complicated and least politically feasible element: an enforceable international treaty. Any suboptimal version therefore likely backfires; but it’s the suboptimal version, not the whitepaper, that’s gaining support.
The logic is simple, and acknowledged by pause advocates: if only the U.S. introduces a pause policy, the compute, capital, and talent will eventually regroup elsewhere and restart the same progress—just under different stewardship and, having learned the lesson of being paused once, outside of democratic jurisdiction. This strips democratic activism of most feasible levers and upends the favourable paradigm I’ve described before. To get around that problem, pause advocates usually take an international view: domestic pauses would have to be aligned and agreed upon through an international treaty.
However, that is an immensely complicated task. The safety literature has brought forward some technically sound ideas for how to approach it, but there’s very little by way of political strategy to achieve it. As someone who spends much of my time on international AI policy, I’ll say that whatever political progress you make in America is not enough to get to such a treaty quickly. The Sanders-AOC proposal handles this unilaterally through restrictive export controls on chips—a solution that provides no answer to the obvious issues of supply chains migrating over time, of extant compute being consolidated elsewhere, and so on. As Dean Ball has pointed out in his many attempts to engage with pause advocates on the substance, a unilateral pause has unconscionable consequences for the rights and liberties of American citizens.
And so a treaty or even an international organisation of some kind would have to be the vehicle of choice. I believe this, too, could not be brought about unilaterally: in the current setting, the U.S. lacks the soft power over allied democracies and the hard leverage over China to force through the fast creation of any international institution. This is doubly true because a single miss, a single nation you cannot induct, suffices to derail the policy: one sovereign country decides it doesn’t like the pause idea, and it becomes the lead candidate for a future compute haven and AGI development hub. And there’s plenty of economic and political incentive not to participate: economic, to attract all the AI investment; and political, to position against an America that has grown wildly unpopular with many electorates around the world.
America, with its diplomatic reach into increasingly confident middle powers growing thin, seems unlikely to stamp out each and every defection—not by trusted advice and not by threatened aggression. The way to get this done, then, would be slow progress in aligning an enormously heterogeneous set of international players with very different views on AI, America, and what to do about either. The world as it stands is very far from aligned on this, and I’ve seen no serious suggestion for furthering this alignment.
It’s not outright impossible to get around these problems; it has been done before, though under the circumstances of a much more united world. But recall that for this to fall apart, the treaty doesn’t have to be outright impossible. It just has to be hard enough that the pro-pause coalition, under political and electoral urgency, takes the easy win and delays the hard part. Given the enormous gap in complexity and, more importantly, in timing between passing a domestic moratorium and negotiating an international treaty, I believe it’s very likely we get the second-best version first—inverting the purported gains and jeopardising safety and progress alike.
I think by far the most likely version to come out of the political moment that pause advocates are seeking to exploit is a bill that speedruns the elements satisfying the lowest common denominators—spiteful anti-tech measures that make for good rhetoric and allow the chosen champions to enter the primaries as the ones who took down big tech. Expecting the political forces involved to hold off past critical political windows in order to reflect catastrophic-risk-motivated treaty nuances strikes me as outright naive.
That should, in my view, be the biggest reason even for hardcore safety advocates to be sceptical of summoning these spirits and pointing them at the pause: you don’t control what comes of any of it, and even the best pause ideas are too close to bad pause ideas.
Jumping Out Of The Overton Window
Point these things out to an ardent pause advocate, and the response usually retreats to gesturing at broader political dynamics: perhaps the current, ill-fated versions of the pause idea lay the groundwork for better policy and politics in the future? This might be your response to the previous section: you could think I’m misguided in assuming a familiar political environment, and you’d actually want to see these pause proposals enacted after a big ‘warning shot’ or near miss on AI risk of some kind. Or you’re in fact counting on the pause discourse moving the Overton window—though while that’s a reasonable defence of coalition-building, it does not justify selecting what seems to be an actively harmful policy vehicle to do so.
I’ve written about this before, in a piece that touched upon the merits and flaws of building a popular AI safety movement more broadly. In this case specifically, two arguments apply.
First, I believe the logic of the ‘radical flank’ boosting associated-but-not-allied moderates does not apply here: there are no friendly moderates who correspond to this particular radical flank. The radical flank effect requires a moderate wing that shares the movement’s broad goals but advocates softer means. The radical makes the moderate look reasonable by contrast. But this coalition’s horseshoe structure means there is no natural moderate counterpart waiting to benefit: Democratic moderates like Warner and Fetterman are not proposing a gentler version of the pause but actively repudiating the entire premise, branding it “idiocy” and “China First” within hours of introduction. And moderates on similar beats are not touching the Sanders language at all—other than using the term AI, Senator Slotkin’s recent proposal does not even give the impression of being about the same set of issues and concerns. No one trades on a moderate version of the flank’s premise, and no one’s swooping in through the window it opens.
Second, the radical flank effect works by introducing a pared-down version into moderate awareness—pared down either by adopting softer means, or by agreeing on only some of the ends. In climate policy, that’s a fairly robust play: as a climate activist, you’d appreciate a moderate passing almost any instrument that reduces carbon emissions, and you’d appreciate moderates accepting most versions of your problem statement. Not so in AI: if much gets lost in translation between the flank and the moderates, you end up with a lot of bad ideas.
As discussed above, the second-best means are likely to radically backfire for safety advocates; and a random sampling of the motivations expressed by the anti-AI omnicause would likely include some jobs doomerism, some pedestrian anti-tech sentiment, and none of the concerns pause advocates consider exceptionally important. In fact, that latter trend seems fairly likely: if I’m a moderate borrowing from my radical flank, I’d much rather adopt the far more salient jobs rhetoric and leave the fringe-y catastrophic risk concerns aside than vice versa.
To be fair, something like this was always going to happen. No good policy happens without adjacent bad ideas to moderate between, and you always need overreaching solutions to contrast effective interventions against. This is especially true because past congressional debate has mostly pitted fairly moderate safety positions against straightforwardly nihilistic applications of anti-regulatory sentiment. But the play for a discursive shift would be a lot more convincing if it made the radical position a little bit more sound, a little bit less salient, and further removed from the association with the political fringes. If we are to judge this as a play for political communications rather than policy strategy, I think it’s likely to backfire.
The Hard Answer
I suspect a true believer in the maximalist safety position might still reject all these arguments, suggesting that any movement is better than heading for doom by default. I don’t think that’s our trajectory, but I do concede that we’ll need to come up with some good policy to handle the risks. That said, I don’t think that means I need a fleshed-out agenda in response. In fact, one of my main points of disagreement with the pause advocates is that I don’t think the seriousness of the challenge means action needs to be pivotal; and, a bit more broadly, that I do not believe we know enough about the future contours of this technology to make that determination.
From where I stand today, though, my favoured bet is this alternative plan, taken in three steps:
First, we pluck the low-hanging procedural fruit. At the state, federal, and international levels, we introduce overlapping provisions to ensure transparency, safety reporting, industry coordination, and whistleblower protections. Simultaneously, we ramp up state capacity to engage with the resulting information (this is going pretty well).
Second, building on what we learn from that, we start holding the ecosystem to its promises by incubating a functional and tightly overseen market for independent third-party assessment (we’re getting started on this, and I feel much better about it than I did a year ago).
Third, building on what this ecosystem identifies as the shortcomings of an effectively audited frontier development space, we determine the kinds of surgical policy interventions that fix the safety-relevant market failures.
I believe this plan could work, that we can deploy it with decent political robustness within months to years, and that we’re just a little bit behind the curve on realising it at the appropriate level of technocratic rigour and political salience. I also believe that the greatest threats to this plan are hasty disruptions to the politics of AI that drag good policy work into the crossfire and force it to justify itself not on technical merits, but by the twisted standards of an American presidential primary. That said, I’m under no illusions here: getting this right will still be hard, and it’s harder still in the face of unproductive political spending on AI policy matters. But in general, I have faith that there is progress to be made within the current political and technical constraints. Enough, to my mind, not to upset the gameboard and start anew far away from the lucky trajectory we find ourselves on.
Until then, the point of today’s piece is simple: movement toward a pause puts too much at stake, all while likely achieving less than nothing. Whether for safety or for progress, you should resist it.