The Night Before Preemption
Three dynamics to watch as the fighting begins
If you want to understand AI policy today, you have to look back at the SB-1047 debate, where a growing safety movement inducted lawmakers and allies into its coalition, an unlikely heterogeneous opposition assembled and stuck together, and many of today’s most influential voices rose to prominence.
If you want to know what tomorrow’s AI policy will look like, you should watch the next few weeks closely. After a hasty scramble in July and subsequent weeks of debate, the fight around federal preemption is now taking off in earnest. There’s a draft executive order, at least three congressional initiatives, and a rapidly closing window to act. The next few weeks could get messy: they’ll feature closed-door negotiations about the NDAA, maximalist public rhetoric, $10 million on one side and a coalition from Steve Bannon to Joseph Gordon-Levitt on the other. Coalitions and grudges will emerge; promises broken and kept will change the terrain of AI policy.
A few weeks ago, I wrote two long pieces on the substance of this issue: the first arguing for the policy merits of a deal exchanging narrow frontier AI regulation for broader preemption of state legislation, the second arguing for rapprochement between the safetyist and accelerationist factions. Now that the fight is on, I won’t relitigate the substance – that’s for the politicos now.
Instead, I want to leave you with three observations from the before-times: on the trajectory of political spending, on who can and can’t afford to play for time, and on the curious silence of the AI developers. We’ll be revisiting all of them soon, no matter how this ends.
Here Come the Super-PACs
If there’s one lasting effect of this debate beyond legislation, it’ll have to do with the spending of not one, but two massive sources of political money. One, already infamous and frequently featured here and elsewhere, is ‘Leading the Future’, an accelerationist vehicle worth $100 million. Yesterday, the existence of its counterpart was made public: a pro-AI-safety super-PAC with close ties to leading organisations in safety policy and enough money to at least offset a lot of LTF spending. What happens now informs where this money will go in the future, and early signs point in a bad direction.
One of my main reasons to advocate for détente between safetyists and accelerationists last month was the prospect of a bitter fight between these two PACs. Their combined committed volume of $150 million could seriously change AI policy for the better – informing policymakers about the merits and the risks of the technology, and forming an uneasily unified front against the uglier politics that threaten to derail more sensible approaches. If they instead fight each other, much of that money might go to waste: their strategies will converge on the same key races and candidates, supercharging both sides of an already contentious AI policy debate and largely cancelling each other out. Countering each other’s spending is a prudent reaction to the threat of being unilaterally outspent – but regrettable all the same.
LTF’s opening salvo has, frankly, made it much harder to avoid this deepened conflict. In a somewhat puzzling move, they picked Alex Bores as their first target – a congressional candidate for NY-12 who, as an Assemblyman, advanced the RAISE Act in New York. In doing so, they have made Bores the centerpiece of a national media story, handed him as much free national coverage as he wants, and given him the opportunity to position himself as a defender of citizens against big tech. They have also quite plausibly guaranteed that the RAISE Act gets signed into law by elevating it into a matter of anti-big-tech resistance. The decision has further disillusioned accelerationists’ opponents about their willingness to engage in good faith: Bores and the RAISE Act are generally perceived as reasonable and moderate, and LTF’s decision to open hunting season on them anyway did not exactly signal willingness to compromise.
If LTF postures similarly in this fight, it’s hard to see a path away from the brink before the midterms. The jury is still out: LTF will be spending $10 million on a three-week push for preemption, and one of its tactics so far has been calling a major AI safety organisation ‘the Germans’ based on X’s inaccurate location feature. But the actual talking points are not unreasonable: they recognise the need for a federal framework (though they have not proposed any language for it), and they have somewhat departed from the squarely anti-regulatory position. It’s not quite ‘balancing the risks and benefits’, but given where they’re coming from, it’s a shift. It’s also a prudent one: unless you want your shiny new super-PAC to be counter-spent by the safety PAC in every future race (raising the salience of that race to your disadvantage in the process), there’s something to be said for not going all out.
Of course, preemption opponents don’t trust this shift at all, and suspect it’s mostly political deception. I guess we will see, but I think the political realities have changed enough since July that a slightly more conciliatory position is possible. While there’s no avoiding the fight, there are ways to get it right: if the issue moves out of the public eye and into congress fairly quickly, and a somewhat tenable solution can be found in somewhat good faith, there are still many paths back from the brink — especially now that the safety PAC gives accelerationists a more obvious incentive to de-escalate. But if there’s an all-out fight now, the current battle lines will entrench, and it will be difficult to pivot the PACs away from the safetyist-accelerationist conflict. I still think that would be a shame.
Who’s Running Out What Clock?
This entire debate is not happening on neutral ground, but amidst an accelerating trend of AI entering the political mainstream. That makes for a time asymmetry, where accelerationists face a closing window, while safety advocates think time is on their side.
The politically salient intersections are increasingly occupied by major political voices – economic populists on either side speaking to the jobs implications, anti-tech politicians identifying AI as a projection surface for their broader concerns, and so on. Amid that shift, AI technology continues to be unpopular, and its regulation remains popular. Even if you think that polling is the result of coordinated campaigns, it signals a broader shift: heading into an election year, salience and public opinion will make policymakers more likely to engage with the issue on its political rather than its policy merits.
Broadly, that makes for bad news for the pro-preemption camp. They might, sometimes convincingly, argue that they’re not actually trying to preempt laws that would address citizens’ concerns and relevant harms. But the trade-offs are hard to communicate, and the case itself is not particularly intuitive. Accordingly, accelerationist messaging has changed since the summer: it now actively invokes the prospect of federal frameworks and leans on America’s chances of winning the race with China, rather than primarily attacking state-based legislation. The smart accelerationists have read the room – and I suspect they’re also growing frustrated with their less subtle allies.
More elegant messaging on its own won’t do. Agreeing to empty preemption is also quickly becoming an electoral liability: media-salient AI harms will happen, regulation will remain popular, and no policymaker wants to be on record as having voted against legislation.
The political attack ads write themselves: stories of harms to children, exploitative businesses, jobs at risk – interspersed with statements and votes against the state laws that purport to address them. This is doubly true now that the pro-safety PAC exists to actually pay for ads like these, which is why leaking the PAC’s existence strikes me as very effective – particularly since a reported key figure behind the PAC has described a vote for preemption as an electoral liability akin to a vote for the Iraq War.
Using A Closing Window
But the accelerationists know all this, and are responding strategically. First, by deploying a substantial share of the committed PAC funding right now, in a campaign to get this through while the safety PAC is still assembling. My suspicion is that the announcement of the $10 million LTF push is what prompted the safety side to leak its PAC before it was fully operational. But as it stands, the safety side is still fighting money with the quickly announced (near-term) prospect of money – that’s a disadvantage. Second, I suspect this push will see much more White House involvement than previous attempts. Whereas in July, White House resources were tied up in many different places, the administration seems better prepared for a fight this time around. With a still-obedient Republican caucus, that can make a world of difference – at least as long as proposals remain reasonable enough not to offend the President’s political instincts.
This is why I think you should understand the leaked draft Executive Order as a forcing function. The EO – a document that would compel agencies to prevent state AI legislation through a number of legally contentious mechanisms – is neither plan A nor primarily a desperate attempt to make something stick. By all accounts, the President’s political preference, too, is to preempt by federal framework rather than by empty litigation. To my mind, the EO is instead an attempt to move congress into action. It flows from the recognition that congress is not sufficiently motivated to move on preemption if the alternative is the cozy status quo of state-level legislation — that in an environment where preemption opponents were ready to run out the clock, it would have been hard to get anything done in congress. Why else leak the EO while congressional action was still coalescing?
I think preemption opponents’ reactions to the EO have missed that point. Among other things, that has given rise to misguided expectation management: of course the outcome will not be broad and empty preemption. Everyone knows this, to the point that even accelerationist operatives concede as much in public. The EO’s success is better measured by congressional appetite to tackle the issue rather than wait it out – and along those lines at least, it seems to have gone fairly well. Following that logic, I wouldn’t be surprised to see another (draft) EO early next week, pursuing a similar goal but slightly weakened so as not to inconvenience Republican lawmakers who might disagree with its specific provisions.
The White House, too, has to thread a needle here: using the EO as a stick only works if there is legislative language to serve as a carrot, and the problem in last week’s news cycle was that there was no public language to push congressional Republicans toward. I suspect next week will be different.
To end, it also bears repeating that a worsening accelerationist position does not necessarily imply an improving safetyist position. As I’ve argued before, the relative sway of safetyists within their coalition diminishes as political salience rises. Running out the clock works as a mechanism to stop accelerationists from getting what they want, but it works much less well as a mechanism to get what the safetyists want. There is no clear path to passing safety-relevant policy on the back of broader anti-AI political sentiment: the legislative asks that matter for frontier safety are narrow, while the range of politically favourable anti-AI laws is much broader. The next AI policy fight in congress won’t be yet another version of this one. It will happen along different battle lines, and those might not be favourable to any major player today. That doesn’t mean the safety side should take a loss without a fight, but it does mean a pyrrhic victory now might be worth taking over simply running the clock out to adjournment, if one is on the table.
Where Are The Labs?
The major AI developers have not been vocal on this issue: few statements from leadership, few public positions, neither official remarks nor offhand comments on podcast appearances. To some extent, that’s understandable: it’s a politically volatile moment, and before the chips are down, aligning with one position can be politically risky. OpenAI in particular is not currently in a position to exert its legislative influence: after the backstop debacle, it is under general suspicion of leveraging its size and importance, and entering a policy conversation means painting a target on the back of any coalition it joins. But still, the developers’ absence from this debate runs a substantial risk in three directions at once.
First and most obviously, OpenAI in particular is susceptible to being painted as part of the preemption push, because its President Greg Brockman is personally involved in LTF. Media reports frequently portray LTF’s actions as representative of the industry as a whole, leaning both on the Brockman link and on the scarcity of public statements to the contrary. I wouldn’t expect the coverage to be nuanced enough to distinguish OpenAI from the other developers, so the association likely extends to GDM, xAI and Meta at least. This is a very risky game for the developers to play: their political opponents will not hesitate to attack them by drawing a line from Brockman to unpopular preemption. Back-channel dealing and closed-door conversations will only do so much to assuage the skeptics’ worries – as long as the labs have not stuck their necks out for any particular position, the post-game analysis will be that Brockman’s PAC money speaks louder than Chris Lehane’s words. Beyond the obvious political ramifications, this also matters internally: many employees at the labs believe in the original, mission-led approaches and are dissatisfied working for an employer that seems to be running the standard anti-regulatory playbook.
Second, the ambiguity cuts the other way, too. Opponents on the accelerationist side have every reason to paint the major developers as secretly pro-regulation, given the shadow cast by Anthropic’s involvement in the safety super-PAC – and in fact, they frequently have. In public, the other labs may insist on their distance from Anthropic’s policy positions, but in the absence of clear statements to the contrary, skeptics will make their own assumptions. That’s especially true if the safety PAC also receives donations from within OpenAI, which strikes me as highly plausible. Silence in that environment is a canvas for projection, and the labs risk ending up in the worst of both worlds: blamed by safety advocates for bankrolling LTF through Brockman, and blamed by accelerationists for giving cover to regulation advocates over their own industry.
And third, the AI developers risk eventually losing their position as something more than just another industry lobby. At no small investment of time, talent, and political capital, all the major labs have cultivated a role as trusted actors and authoritative voices on policy, whose input isn’t sought merely as stakeholder testimony. I’m convinced this dynamic is a boon to our policy ecosystem. But that standing is at risk amid increasing politicisation, as congressional offices trust fewer and fewer outside voices and put a growing premium on showing up to the fights.
All in all, I don’t think the current ambiguity is doing the developers any favours. I know that many people leading these companies have strong opinions on these matters – opinions that often diverge from the standard industry-versus-regulators frame. They might come to regret passing up the opportunity to voice them on the record, and might be surprised by how quickly they get cast in political roles that have little to do with their own convictions. But even if silence is the correct risk-averse strategy for the labs, I’m not sure the rest of the debate should let them run it. These are major players with major stakes in the decision and deep insight into its potential ramifications, and we should be interested in their positions. Commentators should press them for public positions, and policymakers shouldn’t be content to let them maneuver behind the scenes. If ill preparation keeps them from having a voice in this fight, the labs’ standing as major and somewhat trusted voices on AI policy is at risk from all three directions.
Soon, all these questions will take a back seat. There’ll be fights over line items, maximalist rhetoric, and anyone who has called for reasonable compromise will look a little foolish as things escalate. But above that noise, those who find themselves before a closing window to act should still keep the trend lines in mind: despite every temptation to treat each battle like the last, this is far from the final AI policy debate.


