The default trajectory of AI politics is grim. As political salience rises, entrenched groups with a spurious grasp of the issue will grow in power, exposing the once-sophisticated field of AI policy to vociferous debate and cheap politics. Yesteryear’s major AI policy factions – the so-called ‘accelerationists’ and ‘safetyists’ – can quickly become marginalised within their respective coalitions. As their influence wanes, less and less policy will be based on a realistic view of this transformative technology. In striking a preemption deal, the field could find a chance to change that fate.
Right now, faced with that threat, accelerationists and safetyists should have more in common than they’d like to admit. They share a fundamental understanding of the technology itself, and a desire to see its potential realised. Yet rather than reconciliation, we’re headed for escalation: on one side, an accelerationist camp faces increasing political headwinds, but is now armed with a $200 million super-PAC set to snipe away at safety advocates. On the other side, a safety movement is forced to mount expensive defences and undertake increasingly risky attempts to garner public salience. Both sides carry a mistaken sense that they can win the ensuing fight. In reality, they’ll both lose to the broader political dynamics of increased salience, which will render them marginal elements of their respective coalitions.
Their resources would be better spent on keeping the frontier AI policy conversation on track, and there is a rare window to do just that. Dean Ball, not an orthodox safetyist by a long shot, laid out the case and the mechanics from his vantage point last week. I agree with its general thrust: the practical first step is to move toward a deal on preemption. That deal should attempt to trade broader preemption of state AI legislation for narrow frontier safety measures. It might or might not happen in this Congress. But here is why we owe it to our many shared goals to try.
Political Trends and Undercurrents
My conviction in this proposal stems from my certainty that things in AI politics are about to get a lot worse for everyone on the current spectrum of frontier policy. I suspect that warrants some explanation. Two trends drive the deterioration: First, as AI enters people’s lives, they start to notice and care about its effects. Some of these effects register more obviously than others, and the effects that intersect with voters’ deepest concerns are the most salient. In recent months, this has included the threat to human jobs and to child safety. It has not included any issue closely or obviously related to frontier safety.
The second trend is less visible, but already taking root: policymakers notice the electorate’s anxiety around new technology and play to those fears, connecting them to more salient issues. They portray even current AI as a threat to dignity, jobs, and health and safety, and exaggerate harmful trends. This will get worse as party politics settle: the populist right is getting ready to tap into anti-tech sentiment, and the left wing of the Democratic party could find AI to be a promising campaign issue if salience trends continue.
Not all of these trends have to do with frontier AI policy. Many of them will manifest as discussions around application-layer product design, or rules on the adoption of prosaic AI systems far downstream. But they will still shape the policy windows for frontier action. To understand how, we should distinguish between two levels of debate: the narrow frontier debate and the broader AI politics we’re heading for.
Two Levels of Debate
The first level of debate is the one along the old battle lines: safetyists against accelerationists. This is a constellation of forces that has not been at the heart of the most recent fights, but still features prominently in people’s minds – perhaps because these were the battle lines of the fight around SB-1047, the first major policy skirmish in frontier AI policy. In this narrow debate, and compared to where we were in California two years ago, the trends above absolutely favour the safetyists: safety regulation fundamentally feels like regulation, and anti-AI sentiment favours regulation. This is particularly clear where safetyists don’t advocate for one narrow policy but oppose deregulation, which is where we were in the moratorium fight a few months ago. No one who cared about any of the harms wanted broad preemption with nothing in return, so everyone could be rallied to defeat it. Accelerationists may soon find it harder to resist regulatory momentum, as the coalition against blanket deregulation grows stronger. That’s why accelerationists should – and perhaps already do – seek a deal on preemption today: this is the most publicly powerful they’ll be for some time.
But then there’s the second level of debate: AI policy that goes beyond anything to do with frontier safety and instead has much more to do with mainstream sentiment on tech. Rising salience fuels this level, but it does not make frontier AI policy any more likely. It’s very easy to get child safety groups and labor unions to agree that a 10-year moratorium is a bad idea, because it preempts everyone’s favorite ideas. It will be much harder to get them behind specific policy asks – especially since, as AI policy gets more specific, asks increasingly diverge. There is no good reason for an organisation or policymaker who is mainly concerned about, say, labor impacts to endorse a policy that addresses frontier risks when a version of the policy is available that narrowly responds to the most salient aspect of the issue.
I wrote about this in much more detail with regard to child safety last week. The high-level point is: most AI regulation that emerges from this broader, high-salience debate does little for frontier risk. For any salient area like jobs or child safety, there’s an abundance of mediocre and easily dodged policy that cannot conceivably move the needle on something as complex as frontier safety. Especially now that CAISI already exists and minimal transparency standards are already on the state books, there is very little natural convergence.
As ever, frontier safety advocates will need their own points of leverage to squeeze their ideas in on the margins. But the trends above do not confer that leverage on them: there is no obvious mechanism by which the political salience of frontier safety policy increases. So I remain convinced that neither warning shots nor movement-building will suffice. Counting on salience to directly increase the odds of frontier policy is a mistake.
Against Marginal Contributions
Safetyists might defend remaining a member of a powerful coalition along these lines: ‘The marginalisation point might be true, but it’s the best we’ve got. By hanging on and aligning with the current-harms coalition, we retain some say in its policy positions. That helps nudge policy that is chiefly about other things in the right direction, so that it’s also helpful for frontier safety.’
For instance, the argument goes, when a child safety law is being passed, safetyists could make sure it empowers CAISI more generally; or when a jobs law is passed, safetyists could make greater transparency into developers’ business practices part of the deal. But this undersells the price of nudging policy. Passing any law is a complex process, and any line of a deal that does not have substantial leverage behind it is liable to be cut at any time. In triangulating lobbied interests and policymaker idiosyncrasies, many things quickly fall by the wayside. If a bill is close to the finish line, and cutting a frontier safety concession is necessary to get industry on board, or to keep an idiosyncratic policymaker from jumping off, can safety advocates really be sure that other groups will go to bat for them and make a sacrifice or risk progress at large? I see few reasons to believe so today: safetyists are capable and well-funded, which makes them a valuable coalition member – but as the balance of power continues to shift, this contribution will matter less and less. Being a comparatively low-leverage member of the coalition is unlikely to translate into even marginal policy wins – especially given the super-PAC dynamics I’ll discuss below.
The distinction between the two levels of debate is a frequent source of confusion: many safety advocates who think they are headed for better policy prospects overindex on the fact that safetyism’s power relative to accelerationism is rising. They see that accelerationists’ power will decrease, and see no reason to give them a win by accepting broad preemption now. But that does not account for the fact that the influence of this entire sub-debate on AI policy as a whole will radically diminish. As a result, safetyists overestimate their likely gains from public salience. Safetyists and accelerationists alike should instead treat the shift in the broader debate as a forcing function: because both camps risk getting swept away soon, they can and should compromise now and get narrow safety legislation in exchange for some preemption on the books.
The Other Forces Bearing Down
Before fleshing out what that might look like, I should address two important exogenous factors: the upcoming midterms, and the influx of money into the debate through super-PACs. Both already cast a shadow over today: the former makes safetyists hesitant to strike a deal, while the latter might make accelerationists miscalculate.
Midterm Prospects
The first is the upcoming midterms. Some safety advocates are excited about the prospect of safety-sympathetic Democrats winning; others feel fatalistic about an even more split Congress. I don’t want to discuss partisan politics at length; I’ll just point out that even if Democrats are sympathetic, a partisan bill is unlikely to pass the Senate and become law anyway. My main concern with any midterm-focused strategy is instead that the 120th Congress will not be sworn in until January 3rd, 2027. For one, that’s a long time from now if you measure it in global compute supply, AI model generations, or progress toward the automated software engineering that safety advocates dread. But more importantly, it’s much further down the road of politicisation. The midterms themselves can make AI politics a lot more vociferous: maybe because anti-AI rhetoric around whatever issue is then salient emerges as a promising campaign strategy before the election, maybe because political strategists identify its potential right after and make politically tailored action on AI a strategic priority in the lead-up to 2028. Either way, after the midterms, the relative power of the accelerationist-to-safetyist spectrum will be lower, the politics trickier, and the windows for frontier policy smaller. It’s not worth rolling the dice on that prospect.
A Tale of Two PACs
The second is the recently announced super-PAC ‘Leading the Future’ (LTF), backed by a16z and OpenAI’s Greg Brockman – squarely an accelerationist vehicle. Meta’s state-level American Technology Excellence Project might act in a similar vein, but for a simple model of what happens next, I’ll focus on LTF. With an opening salvo of $200 million, it is exceedingly well-funded, and will reportedly draw on the highly successful tactics of the crypto super-PAC Fairshake. By many accounts, it is very likely that LTF will default to setting its sights on fighting safetyists.
This super-PAC can rapidly erode safetyists’ standing in Congress. It’s hard to overstate how devastating the effects of that might be. In the AI policy setting leading up to the midterms, all LTF really needs to do is call Congressional offices and say ‘We have $200 million of PAC money, and we’re happy to spend it on ugly primaries and heavy attack ads – remember Fairshake? So we just wanted to make sure you were not talking to any of the following groups’ – at which point they simply read a list of policy organisations that have been vocally opposed to moratoria in the past. And because frontier safety policy is clever and reasonable, but not particularly salient, it simply doesn’t make sense for most offices that receive that call to keep in touch with safety organisations.
Safety advocates sometimes argue that public salience insulates against PAC tactics. This is clearly true: Fairshake has worked so well because nobody really cares about crypto, so defying the PAC is never worth the risk for any individual lawmaker. Money in politics has diminishing returns once it brushes up against higher public salience, and it is usually not enough to get policymakers to pass deeply unpopular policy. That is a very good reason why LTF can’t prevent policies that draw on very high public salience.
But that likely makes things even worse for the safety movement. On the level of organisations, it means that LTF can’t actually keep offices from talking to groups that directly stand for high-salience issues like child safety or labor. And on the level of policy, imagine a big AI regulation Christmas tree bill that deals with some salient issue and also has frontier-related provisions tacked on. LTF might not be able to defeat that bill altogether, because it draws on too much salience. But it can pick off individual clauses that are not as politically vital – and the frontier-related provisions are a prime target for that. On the safetyist logic of being a marginal member of a powerful coalition, LTF makes it very unlikely that your marginal contributions get through.
This makes for a particularly insidious secondary effect: it can wedge the safetyists away from their broader coalition. In the worst case, increasingly powerful groups around child safety or jobs could come to believe that staying close to frontier safety organisations might attract accelerationist lobbyists on the other side – lobbyists that perhaps know to stay away when the easy-target safetyists aren’t in the room. How long after that realisation until more and more coordination calls happen without a safetyist in the room? None of this is happening just yet, and it can be averted – paradoxically, prudent action on preemption might actually keep safetyists in the room by satisfying LTF’s most reasonable demands and deflecting the rest of its spending elsewhere. Increased salience insulates other policy agendas against LTF, but leaves safetyists comparatively even more vulnerable.
The Limits of Counter-Money
The second response safetyists sometimes give is that they might counter money with money. Forgive me for being vague, but the overall rationale will be familiar to many readers – see this post, for instance. Deploying safetyist money is a good way to reduce the immediate influence of LTF in some direct respects: it can help fight out ugly elections, champion individual policymakers, and reassure on-the-fence lawmakers who feel threatened. But it can’t offset the amount or breadth of money LTF can deploy. At any realistic ratio of accelerationist to safetyist money, safetyists cannot be everywhere the accelerationists are. They can’t offer to protect everyone, and there will be plenty of lawmakers who would simply prefer their elections not to become a battleground of AI money – no matter who eventually wins. Most to the point, there is no truly cancelling the lingering threat of a super-PAC with plenty of room to escalate its funding and a track record imported from the Fairshake days.
But much more importantly, using any potential safetyist money for mitigation is a highly inefficient use of resources. This is money that could be used to stay in the room as the politics intensify, money that could be a valuable contribution to any future coalition that emerges. In the face of tomorrow’s politics, both accelerationist and safetyist money could surely find much better uses than escalatory infighting. Both camps should endeavour to free their money from a cycle of defensive and offensive spending – a deal does just that. The funding will be sorely needed to stave off many of the worst impulses that will result from a broader debate on AI politics.
The Path Forward
What, then, is the alternative? On the face of it, the proposal on the table is a specific policy deal: federal frontier safety laws in exchange for preemption of some state AI legislation. The safety laws would get to be deep; the preemption would get to be somewhat broad, i.e. it would cover both frontier safety and some other issues that seem particularly likely to produce a burdensome patchwork.
That deal has something safetyists like, in that it would regulate frontier AI at the federal level. And it has something accelerationists like, in that it would preempt the worst instances of a state-level patchwork. Whatever headwinds the future holds, this would frame the next years of AI policy – in essence establishing for both sides an acceptable backstop that otherwise seems under threat. Getting it right will take some triangulating.
Process & Progress
None of the progress toward this deal can happen publicly, at first: committing to the idea of a deal is tricky territory. First, for coalitionary reasons: if the deal falls apart but safetyists have shown willingness to negotiate with accelerationists, they’ll pay a price within their coalition. Second, for reasons of negotiation capital: already, safety advocates might read the fact that Dean is floating the idea of a deal as a sign that the accelerationists aren’t strong enough to push through preemption alone. A similar public push from the safety side might lead the accelerationists to read the safetyists as desperate. Both camps can and should stay subtle about this prospect right until it can actually happen.
But a process can start behind the scenes. Less coalitionary-bound safety organisations can communicate openness. People can get in a room and hash out some details. Safety-affiliated policy researchers can make a public counterproposal, and suddenly we have an option space between that and Dean’s suggestion on the table. Then negotiations can begin, people can talk to their favorite Congressional offices, and the people getting into rooms slowly become more senior and take things more seriously. And perhaps, if it looks like the stars align, sponsors can declare and Congress can start moving. More coalitionary-bound safety organisations or a16z would never have to publicly commit: they can voice mild alibi reservations but call offices and signal they aren’t fighting anything, in much the same way that the accelerationists ultimately shrugged and accepted SB-53. It’ll cost them in their respective coalitions, but it stays short of visibly throwing established allies under the bus. Quietly, away from the spotlight, a still-nimble policy environment, backed by the threat of super-PACs, might just get this through before the midterms.
The frontier safety element would have to be a bit more extensive than Dean suggests; I see no way around that. One of the main contributions of Dean’s proposal is codifying SB-53, and that will not be enough to get safetyists to risk their coalition: the risk of SB-53 being preempted does not currently seem high enough to make safetyists move. Accelerationists are seeking substantial concessions here – so I suspect that, at minimum, something on the scale of fairly hands-off pre-deployment testing or minimal entity-based regulation would need to be included.
Safetyists, in turn, must commit to keeping the scope of regulation narrower than the scope of preemption. Any favour-currying Christmas tree attempt to tack on the standard current-harms language makes the deal much less enticing for the accelerationists. The frontier safety elements must be narrow and strong; the preemption must be selective, but broader than just narrow frontier issues. This sets a mutually acceptable frame for the 120th Congress – where we’ll all see each other again as we debate new federal laws on everything the deal has not covered, with both sides’ worst-case outcomes already averted because we seized the moment today.
A Second-best Aftermath
But of course, even given a successful deal between safetyists and accelerationists, a law might very well not happen. Congress is highly unpredictable, and frequently just does not get things done even when interests align. The current-harms issues are also already a political sleeping giant: they might well wake up, and plenty of lawmakers might catch on to what’s going on and try to stop state-level preemption. I’m less worried about this than others for two reasons: the safetyist-accelerationist deal can be expanded ad hoc to cover the pet issue of a lawmaker or two while still being mutually beneficial; and some concerns can be staved off by offloading them into parallel, unrelated but ongoing federal action on issues like child safety.
But still, a risk remains. Some safety advocates see that risk and conclude it’s not worth the downsides for coalitionary cohesion. I think that if the maneuvering is done right and the scoping is done carefully, no coalition-critical commitments would need to be made before the prospects seem clear – so we can return to the risk assessments then.
But more importantly, as I’ve argued, coalitionary cohesion alone doesn’t buy policy wins anyway. And on the other hand, even a failed attempt can move us toward a helpful realignment of battle lines. An attempt to cooperate on this, some reestablished channels of coordination, and a demonstration that safetyists aren’t necessarily accelerationists’ worst enemies would go a long way – especially in the context of the PAC dynamics discussed above. A credible effort could defuse some big underlying assumptions: that safety advocates are the most ardent opponents of any preemption, that they believe any regulation is good regulation, and that they’re captured by partisan and group-specific politics.
In practical terms, that would save us from ugly fights and unnecessary spending: safety efforts would no longer be solely committed to fending off LTF, and LTF would no longer have to go hunting for safetyists. Both sides might actually be well-advised to release the other from this battle: in staving off the worst of AI politics, they might find themselves on the same side of the fight more often than not.
The few of us who have a serious understanding of the scale of this technology are on borrowed time. Uglier politics are coming, but we can still frame them today: we can get a floor for frontier safety and a limit on the policy patchwork on the books before salience takes root. And we can take a step away from the brink and stop fighting each other in an increasingly marginal corner of AI policy. We have much to gain and little to lose.