Failing the Future
In search of better AI accelerationism
Slowly but surely, accelerationists are losing ground. An administration that was supposed to cement the ‘tech right’ in D.C. has instead mostly held a draw in domestic politics, and its super-PACs intended to create a climate of fear have given rise to an air of anti-tech defiance. And so 15 months after inauguration, the panels at Hill & Valley are looking tense and SFO-DCA seems like quite the long flight again. Many pro-tech voices realise that 2028 is coming fast, and that there is no good answer yet for stabilising the presidential primaries against the rising anti-AI populist tide. The current playbook isn’t working—accelerationism needs to moderate.
If you squint and tilt your head, you can see a spark of that in the recent release of OpenAI’s new Industrial Policy Blueprint. The document is a marked departure from OpenAI’s previous posture, proposing sweeping institutional changes to prepare society for unabated AI progress. The optimistic read is that we’ve learned a lesson: in the face of rising political salience of AI, maximalist accelerationism is more luxury belief than policy platform. That realisation could be the first step toward a new pro-tech agenda, built on the understanding that the real risks and perceived perils of AI have to be acknowledged and managed, or politics will stop the progress.
But if some have already learned that lesson, word hasn’t reached the political arena yet. In something of a crude reenactment, the political bruisers employed by the accelerationist camp are spending their time repeating yesterday’s battles against the ‘doomers’: fighting the Pentagon’s war against Anthropic, trying to pass large regulatory carve-outs and squash vaguely safety-flavoured state bills. And so the pessimistic interpretation seems more likely: policy documents and talks of national frameworks alike are unachievable by design, a convenient excuse that provides comms cover for pretending you’re for something instead of against everything.
If that’s true, it’s an ill-fated political strategy. For every month it staves off political action, it makes the policies that will inevitably come that much worse. And by 2028, the pro-AI project will be left without allies, outflanked on the left and on the right by the populist backlash it failed to contain. But there is still time for better pro-tech politics that could find durable allies in the center, among the friends of the future.
Can’t Fight Something With Nothing
I know many readers disagree not just with my view of this issue, but with my interest in it; they think the regulatory nihilism exhibited in recent months is by accelerationist design, and that one should be happy to simply let the accelerationists fail and deal with whomever comes after. On that, I’d refer you to a piece I wrote last fall—for many AI policy interests, you should want the accelerationist project to stay around, even if you’re less ‘pro-AI’ than I am. If you’re a safetyist, don’t make the mistake of thinking you’re winning just because your favorite rival is losing. Right now, we’re all losing.
I still believe more cooperative AI politics between those who get what’s happening in AI are possible. At the risk of turning this publication into a Peanuts cartoon, this essay will take one more swing at arguing for that. That feels worth doing today because there’s a growing dissonance to explore: while the accelerationist project trades on its claim to represent ‘tech’, I believe many pro-tech voices should be frustrated with its record and should ask their political representatives to do better.
That’s mostly because the current accelerationist strategy is becoming more and more visibly untenable. While their opponents have brought forward a potpourri of asks, some good, many bad, the keystone accelerationist ask has long been for government to keep its hands off. No policy symbolised that more clearly than early attempts to achieve ‘empty’ preemption, i.e. a moratorium on state laws with no corresponding federal legislation. The tradition continues in the White House’s repeated attempts to lean on state legislatures to get them not to pass AI legislation. In a sense, this has ‘worked’ in fighting the anti-AI sentiment to a draw: no moratorium, but no substantive regulation either. But never mind the fact that a domestic draw isn’t a particularly impressive use of a trifecta, this stalling tactic itself is on borrowed time.
One early indicator is that the political spending strategy executed by the super-PAC ‘Leading the Future’, a key accelerationist political vehicle, seems less promising than expected. The hope, perhaps expectation, was that LTF would be able to duplicate the successful playbook of cryptocurrency PAC Fairshake. Fairshake, recall, relied on surgical electoral spending against perceived anti-crypto candidates, bootstrapping visible defeats into a general congressional climate of fear. For LTF, the situation is a bit more difficult. The problems started with early target selection: by singling out Alex Bores, a candidate in NY-12 responsible for New York’s moderate safety law, LTF misstepped. Bores, by all accounts a fairly reasonable and likeable politician with policy proposals well within what even many in the industry consider acceptable, has been able to use the attacks to raise his own profile. Worse, resistance to LTF’s attacks has also led to an influx of support and donations into the Bores campaign.
Safetyists, to be sure, have a vested interest in exaggerating that trend—Bores might well lose, and the impact of spending on this race will remain hotly contested. But that’s the point: if what you want is a climate of fear that stops policymakers from even touching AI regulation, you don’t want the electoral impact of your spending to be an endlessly contested open question with a valid alternative story. Post-Bores, moderates might even think that picking a fight with the accelerationists is a quick way to free media and safety-aligned political spending. And the counter-spending precedent will matter so much more once safety-sympathetic AI developer employees start spending big after the OpenAI and Anthropic IPOs.
All in all, LTF’s early strategy seems to have cornered its opponents into raising salience and assembling a proto-populist alliance, compelled the pro-regulation side to close ranks and counterspend, and demonstrated its uncompromising attitude toward comparably moderate policymakers—all while doing nothing to stem the coming populist wave. By focusing their fight on a well-resourced lineup of secondary threats, accelerationists are at risk of spending a fearsome amount of political money on an incubator for pro-regulation martyrs.
But even if anti-regulatory pressure by super-PAC were working well, you’ve got to ask how far you can kick this can down the road. No one seriously thinks that there will never be comprehensive AI regulation, that the pendulum will never swing back. Yet the nihilist’s treadmill never leads to actual policy influence: you started by asking for broad and empty moratoria; you went on to ask for narrow carve-outs; then you said there was a ‘framework’; then you had to actually come up with a framework; and now you find yourself advocating-but-not-really for sweeping societal changes. This is a stalling game, but a losing game regardless.
High Interest Rates On Borrowed Time
It’s a losing game because dragging their feet on the way to eventual comprehensive policy is ultimately not in the accelerationists’ policy interests. It fails on all three timeframes.
In the long run, they’re failing to shape the content of eventual policies. The delaying strategy performs very well on the question of whether to regulate. But it does not make any progress on the question of how to regulate: because stalling commits you to arguing against the necessity of policy action, it also prevents you from acknowledging the problems and proposing your own solutions to them. If you’re stuck arguing against the need for development-focused interventions, you don’t have a great pitch to argue for resilience-forward interventions instead.
So when the pro-regulatory sentiment eventually breaks through, whether from irrational anxiety or rational risk intolerance, the actual policy ideas in the drawer will have been crafted without accelerationists in the room. Or do you really believe that, in the face of a dramatic poll or a major incident, moderates will reach for the White House’s AI framework first—a document that, whatever its merits, was advertised by its author as being about stymying ‘a growing patchwork of 50 different state regulatory regimes’?
In the medium term, accelerationists are also failing to get ahead of actual harms that give rise to anti-industry sentiment: without minimal regulation in place that stops the least safety-conscious AI companies from taking egregious risks and harmful product decisions, there will be more and more instances of irresponsible AI deployments that further sharpen public opinion against this technology. Widely available capabilities usually lag the frontier by 9 months—the recent case of Mythos makes it quite plausible that we’ll see genuinely harmful AI misuse before the next presidential election. And while AI harms are visible and exploitable, its very real benefits are harder to attribute and do not seem to leave a dent in public perception so far—so the expected PR effect of further capability increases is strongly negative.
This is a threat to any accelerationist project, and one that must be managed if policymakers are not to run away from the technology soon. If your political strategy doesn’t give politicians a record on the risks that they can defend on TV when the need arises, it will not survive the moment AI reaches the real world.
And in electoral time, they’re failing to provide a desperately-needed anchor for the 2028 presidential primary. On either flank of the political spectrum, the anti-AI message will reign supreme—it’s too strong an attractor for populist forces, too good a projection surface for all kinds of political concerns. And since primaries pull to the flanks, right- and left-wing populists alike will have a field day nailing their respective moderates to tech funding and their fuzzy messages. Since a nihilist regulatory agenda never equips these moderates with a sufficiently strong message to defend their record and position, this political dynamic puts it under threat: moderates might abandon ship if they can, or lose if they cannot.
Accelerationists might contend that Vice President Vance is the frontrunner, and he’ll struggle to put distance between himself and the administration’s record. But first, this administration has executed more dramatic shifts in political messaging than this before; if Vance reads the political risk in time and decides it’s worth risking some tech funding on the margin, he could still correct course between now and 2028 even if the admin pulls the other way—his Iran positioning should be instructive here. And second, there isn’t even that much of a substantive domestic record to hold anyone accountable to. Yes, there have been hiring decisions and foreign AI policy, but it seems less clear that there will be any ‘accelerationist’ domestic policy passed during Trump 2 to tie any presidential hopeful to.
The likely outcome of the current strategy, then, is that the accelerationists manage to stall their way until the midterms. But some time around 2028—perhaps in the leadup, when candidates look to build a legislative record on AI to campaign on, perhaps in the election itself—the position seems likely to crumble. Politicians will search for solutions to the risks and answers to the negative public sentiment. They’ll scramble, because accelerationists will have stalled them until the 11th hour; and because accelerationists have never credibly offered them policies, they’ll search elsewhere: in idiosyncratic improvisations if we’re lucky, on the radical flanks if we are not. Nothing good comes of that dynamic: not for accelerationists, not for safetyists, not for anyone who takes AI seriously.
Pivot Potential
What to do instead? My sense is the industry has started saying the right things, and now needs to act on them.
The OpenAI blueprint hints at some plausible contours for future policy advocacy. It asks policymakers to build a society that can handle rapid AI development: through minimal guardrails and a lot of institutional and societal resilience. This framing is still fundamentally accelerationist in that it seeks to create the conditions for maximum speed; and it’s unapologetically self-interested in that it shifts the burden of regulation away from OpenAI. Yes, it’s progressive in a way that takes seriously some ideas popular on the populist flanks—transfer payments, shorter work weeks, and so on—but I think that is a fair prediction of 2028’s political attractors. Most importantly, the framework is not in itself obstructionist or nihilistic: it does propose a substantive way of handling things.1
In trying to break with that, the framework slots into a trend that accelerationists have soft-launched through their pivot from a ‘moratorium’ to a ‘national framework’ in preemption discussions earlier this year, and continued even through the rhetorical twist of asking for deployment-focused regulation as a code for liability shields in Illinois. In a recent interview with Politico, OpenAI’s Chris Lehane has doubled down on this framing of being ‘for something’ now.
The difference between the framework and these earlier instances is that the framework actually advances the conversation, whereas previous accelerationist proposals have often been selective endorsements of ideas that emerged on the pro-regulation side of the conversation. That’s usually an epistemically healthy conservative position, but not an effective way to get ahead of the policy conversation. This is also my main strategic grievance with accelerationists’ renewed advocacy for the White House framework: it corrects course on some important issues, but mostly responds to the question of ‘which bullets are we willing to bite’, not ‘where do we want this to go’. This is what the OpenAI framework promises to do differently.
But saying you’ll change course is not the same as meaning it. In fact, I think that many critics are right: just because this kind of rhetoric has made its way into accelerationist communications does not yet mean it is now the accelerationist political strategy. A likely interpretation is still that this is a fig leaf: sufficiently high-profile to mollify concerned employees who aren’t close enough to the lobbying to realise the dissonance, sufficiently visible to assure some policymakers for some time that you aren’t anti-everything. It’s a good explanation for what would then simply be passable comms work with no real political strategy behind it.
But it’s noteworthy nevertheless that OpenAI even feels the need to do this. It speaks to a change in the political incentives, both as they arise from discontent voices within the company and political pressures from without. In that light, there might be room for movement. And I think accelerationists should seriously make this pivot, not just in word, but in deed: move messaging and political spending alike in support of a deployment-focused policy agenda and the legislators that could get it passed.
Put Your Money Where Your Mouth Should Be
Accelerationists are well-positioned to achieve this goal: currently, they still have plenty of influence on the politics of AI, especially through political spending. Moderate policymakers on either side would like to get on the good side of tech money, if only you gave them a face-saving way of doing so. It’s just that the current environment of AI politics only gives them a few choices: align all the way with the accelerationists, yet risk making yourself a target of counter-spending and populists; align all the way with the populist attractor; or stay out of it. Right now, there’s no money and only political risk in coming out as a congressional friend of the future.
That term, by the way, will require a less zealous definition: not so much someone who shares a proclivity to tweet about polyamorous doomers, but someone who is simply more pro-tech than a replacement-level member of their party who cares about AI. Look at Bores again, and think about what kinds of things you expect the median Democrat from a blue district to say about AI in 2028. Do you really think you’re creating a better terrain for pro-tech policy by getting rid of him?
The consequence of a successful pivot, then, would be a moderate position that’s built on more than pillars of sand. It’ll never be as much of a vote-winner and catalyst as the anti-AI position, but it might be strong enough for the pro-tech sentiment to survive the quickly shifting politics of AI, to anchor them to something more durable than the increasingly volatile political conflict that has dominated this congress.
What would that look like in practice? The details of that are usually not best hashed out in public discussion, and are better-suited for politicos to figure out. In truth, I’d rather leave you with the takeaway that someone else should really grapple with how to do this—but I’ll outline some starting points anyway:
First, you settle on a substantive message and stick to it. Part of that message is the fabled ‘better story’ for AI—the way you make tangible why a future with advanced AI is good for the average voter. But more importantly, the question is: what should the serious response to the risk case be? Practically speaking, that means you need to find out right now what you’d want a moderate politician—not a tech booster, not a fringe accelerationist—to say when they’re in a primary debate and your choice of Josh Hawley or Alexandria Ocasio-Cortez has just said ‘I’m in favour of a pause because AI is putting American lives and jobs at risk’.
That answer has to take seriously the anxieties that the public will have: it needs to explain why the trade-off is worth it, and what the mitigations will be other than a pause. I don’t know what it is either, but that’s the point: you need to workshop, test, and poll it ahead of time. If you only deploy it once the anxieties require it, you’re again chasing the news, so you must build the moderates’ profile now.
Second, you use the window after the midterms to pass some politically effective legislation. The easy way to think about this legislative window is to ask ‘how do we get the least invasive AI policy possible in 2027’. That justifies something like the current strategy: fight sweeping proposals tooth and nail, try to pass a preemption-focused narrow bill that appeases child safety advocates and not much more. But on my view of the actual threats, the correct question is instead ‘how do we pass a policy that arms moderates with the ability to stave off populists in 2028’. The answer to that question is different: in terms of policy, it means broader regulation, more substantively responsive to political anxieties, more visibly an act of ‘regulating tech’—otherwise, the appetite will remain unchanged. At minimum, I’d expect the final result to be a bill that includes an active labor policy and a third-party oversight element.
In terms of messaging, that means you cannot fight this policy at every turn. Of course you’ll start with a less sweeping suggestion than what you expect will be the result—everyone should understand that’s a frequent element of initiatives like the Illinois bill or the White House framework. But if the policy is supposed to provide cover for your moderates to stay pro-tech in 2028, they need to be able to endorse it, lobby for it, sponsor it. Whenever you create conflicts, demand fealty to the anti-regulatory position, force your friends to go on the record to water down whatever happens, you undermine their credibility in campaigning on the compromise later on. In the service of later political success, that must not happen.
And third, you refocus your political spending: away from highly adversarial spending, toward nudging moderates to adopt the political messages and policy vehicles you have developed in steps one and two. Destructive, adversarial PAC spending has failed to work, and has tilted the dynamic toward a two-horse race where millions of pro- and anti-regulation dollars get burned in fighting over the same few districts. This is plainly counterproductive: the marginal value of political spending is much lower when it’s spent in a highly contested market, and a feverish conflict between two PAC camps condenses what is fundamentally not a single issue into a binary for-or-against.
The upshot for sensible accelerationist or safety policy is that you don’t get compromises until political will violently breaks through one way or the other, which is not usually the time for measured policymaking on the technical merits. Which we will need, soon—and just as much as I’ve argued safety advocates have their part to play in creating that environment, I’m now making the same ask of accelerationists.
Outlook
Now I’m not that naive—I know that this agenda is a non-starter for an organisation like Leading the Future and its committed funders on the accelerationist fringe. That’s fair game: every political movement needs its bruisers and zealots, and that’s the role they’ve decided to play. But a flank alone is not enough. That’s why I think the way to actually do the above is to broaden the political funding landscape to include credibly distinct ‘accelerationist moderate’ poles. The money exists—from coast to coast, plenty of people have much to lose from pro-tech voices going under, but have few sympathies for the bluster.
The pivot, then, would not be for incumbent accelerationists to change course but for others to enter the same arena and diversify the portfolio instead of doubling down on the current approach. By the time the next Congress fights over inevitable AI legislation, the pro-tech project would be more strategically stable, and able to steer policy through compromise before the reckoning.
Still, I understand this is a frustrating list of suggestions for an ardent accelerationist aching for a fight. To them, the radical path will of course remain open, and I expect they’ll insist there’s still time, that the efforts to pass the framework and the ramping-up of the PACs will help to win. But on a personal note, these choices are making it harder and harder to argue for a genuine alignment in the center. And it’s my impression that many accelerationists in the broader sense—those who want to reap the rewards of advanced AI, and soon—are also frustrated with the political odds and rhetorical bluster of the current approach. Their interests are not represented well, and to them, I want to say: you can do something better with your money.
If this is actually where accelerationist strategy is heading, I’ll have lots to say on what I think about this, and there will be many disagreements to be hashed out. But it would make for a negotiating position worth engaging with.