The Next Three Phases of AI Politics
Narrow windows worth preparing for
We left last year’s AI politics unfinished: an executive order promising legislative action that never came, and rising political salience with no policy vehicle to attach it to. Now, we’re set for a year heavy in politics and light in policy – and are looking at three upcoming phases of AI politics in the US: attempts and blockades before the midterms, a brief policy window right after, and political chaos once the primaries commence.
This moment is hard to read, and a lot of countervailing factors are at play. Super-PACs and political funding on both sides are ramping up and won’t show their effects until after the midterm season; political salience is increasing but can fizzle out without popular policy ideas to latch on to. As a result, you can just as easily paint a picture of an upcoming anti-AI techlash as of a Congress paralysed by fear of the AI industry’s unprecedented political spending. That is how we have ended up with a lot of different factions all thinking that things can only get better for them.
But while most of the underlying effects are real, they occur at different speeds and time scales. That means the most valuable way to think through them is to look at how their timings line up: when is politics worst, and when is it best? When is it time to play defence, and when to play offence? I think: if you want to get something done in AI policy, spend your time preparing for the window right after the midterms, and get something through before the primaries close the window again.
One Last Preemption Push
2026 begins with an accelerationist coalition compelled to make good on a cheque that David Sacks has written in the form of EO 14365. This executive order, controversial both among the broader American public and within the GOP itself, is the latest attempt at federal preemption of state laws on frontier AI. It came after failed attempts to push preemption through Congress, and after it had become clear that SB-53 and (a weakened version of) the RAISE Act would be signed into law.
But while the signing of the EO is an intra-coalition victory for the accelerationists, who have won over the president in spite of populist-right objections, it is not yet a complete policy victory. The EO is widely understood not to be a standalone achievement. Experts debate the legality of its provisions, and drawn-out court battles could take away the EO’s teeth and leave only a vague chilling effect. And politically, even supporters have contextualised it as an immediate stopgap preceding a national legislative framework – Congress and the American public alike feel AI should ultimately be addressed legislatively.
So, the reigning coalition is looking for legislative codification. Just yesterday, OSTP Director Kratsios reaffirmed his intent to produce a proposal this year. It seems hard to believe that Congress would simply pass a White House proposal as a standalone law, but it’s also hard to find another way to get a law done. The administration is short on clever ways to get the framework through: a last-minute attempt to pass preemption as part of the NDAA failed last fall, and there are no other similar must-pass bills left. Now, accelerationists might be tempted to find another convenient vehicle. But would leadership attach contentious preemption to a continuing resolution and risk a shutdown fight? Would GOP legislators jeopardise priority reforms like permitting with a controversial AI rider? The story of the NDAA push suggests that attempts like these can be sunk by a single influential legislator, and thus remain unlikely – and more unlikely still every time a last-minute attempt burns goodwill.
What’s left? The most promising accelerationist play is to attach their framework to a standalone bill on a specific AI harm, most probably child safety. That’s doubly attractive: because it’s easiest to rally the GOP behind, and because it’s hardest for Democrats to say no to. Many preemption opponents within the GOP would very much like to take credit for a child safety law. And even the AI companies have realised the importance of doing something on child safety, recognising the associated PR risks. There are several child safety bills in committee right now, and any of them could make for an attractive preemption vehicle.
Nothing Ever Happens
It’s just not clear any of this is enough to move the Democrats. Right now, Congressional Democrats are in a great position to play for time: any law they can get right now, they can probably also get next Congress, on even better terms and with much more credit to them. That’s especially true given that previous accelerationist pushes have left them skeptical of good-faith offers. And in the meantime, they can continue to run anti-tech and anti-AI campaigns against the GOP, keeping the issue alive for the elections. It might not win any races, but it can’t hurt.
Getting something done on AI this Congress mostly comes down to moving these Democrats. How might preemption proponents try to dislodge them? There are three potential pieces of political leverage:
First, Congressional Democrats would rather not be at odds with tech-affiliated donors and their super-PACs, especially the accelerationist ‘Leading the Future’. By all accounts, Democratic leadership was already somewhat sympathetic to a preemption deal in the fall – this could be a reason, and one that might persist through this year’s congressional calendar.
Second, it will be hard for Democrats to explain publicly why they’re in no rush on AI legislation. The object-level reason might be simple: they don’t think federal legislation would get them anything beyond what SB-53 and RAISE already provide. But that can’t be their public reason, lest they admit they’re happy to let Gavin Newsom and Kathy Hochul run national AI policy.
Third, accelerationists might manage to offer something too good to refuse publicly. If they come out with a substantively strong bill with provisions serious enough to convince child safety advocates, it will be hard to kill. Preemption or not, no one wants to go into an election year with a vote against child safety on the record and a PAC in the field that’s willing to exploit that vulnerability.
Still, I wouldn’t be too optimistic. Between pro-regulation super-PACs and increased salience, Democrats might not be too worried about the LTF threat; and given accelerationist messaging so far, I don’t think accelerationists will manage to put together a legislative package that’s good enough to make Democratic opposition look completely unreasonable. And so my best guess is that we’re headed for deadlock and litigation over the EO, and not much else will happen in policy until the midterms.
A Post-Midterm Window
The midterms, then, will change things in two ways. Most obviously, they remove the political barriers to compromise that come from wanting to keep an issue alive to campaign on. But they’ll also provide the debate with a lot more data on the politics of AI policy: drawing on increased polling attention and on the performance of candidates and spending, we’ll have much more to go on early next year. On all sides of the debate, analysts are placing a lot of hope in this prospect – they hope that more data will clarify what they are certain is true: that the public is already on their side.
I don’t think we’ll be afforded the luxury of that sort of clarification. Perhaps the most interesting case study is NY-12, where RAISE Act sponsor Alex Bores is competing in a crowded primary. There has been a lot of safetyist triumphalism about the admittedly puzzling decision of LTF to effectively buy Bores a lot of free media by publicly targeting him and letting him make his case on national television. But still, on base rates alone, Bores is likely to lose in his crowded field – and if he does, the super-PAC still gets its visible win. The same effect likely applies to a lot of ostensibly AI-specific electoral indicators: because AI is still not an election-deciding topic, much of the signal from the midterms will be drowned out by broader political noise that defies simple analysis.
The Salience Story
But while the midterms might not give us much clarity about what version of AI policy is in the voters’ interests, they will once again clarify the need for some version of federal AI policy. As is now widely acknowledged, the perceived political salience of AI will continue to increase: yesterday’s frontier systems are entering mainstream use and being put to both impressive and harmful purposes. That generates public attention, media reporting and policymaker interest. And even if all this happens less dramatically than boosters expect, I believe the associated meme has already reached escape velocity – everyone already ‘knows’ AI will be politically big, which can quickly become self-fulfilling.
Critics of this ‘AI salience’ notion sometimes point at issue polling, which still sees ‘advances in the capabilities of computers’ relegated to marginal positions. But I think that is mistaken. There’s a much-quoted sentiment that ‘as soon as it works, we don’t call it AI anymore’ – in much the same way, I believe that ‘once it’s salient, we don’t call it AI anymore’. Where it will matter, it might instead poll as part of the actual big-ticket issues – as a symptom of ‘tech oligarchy’, an issue of economic equality, of job prospects, of environmental harms, and so on – all of which steadily poll as important. Once that reality shows up in polls around the midterms, and once it’s paired with the very clear policy-level polling suggesting voters want legislation, Congress will identify AI as an issue it could act on but has remained silent on. No self-respecting lawmaker will pass on the chance to put their name on a bill regulating something they and the electorate feel is important, and so legislative appetite will increase on the back of the salience discussion.
A Brief Window…
As a result, a policy window for substantial congressional action on AI may open shortly after the new Congress is sworn in. Policymakers of all stripes will sketch out their priorities for the term, many of which will be contradictory. But critically, far fewer lawmakers will be content to do nothing on AI. That removes the greatest current barrier to action – that too many people are satisfied to wait. And when everyone wants some action and no one is satisfied with a legislature that has nothing to say on a transformative technology, a process opens up.
Any law that emerges from that process would be the result of much triangulation and negotiation, including a host of provisions aimed at giving every important voice a win. Broader preemption, national security provisions, narrow substantive rules on current harms, federal codification of SB-53 and more could all be components; there might just be enough vaguely compatible ideas to get a law through Congress. And once it’s out of Congress, it seems likely enough that the president would sign it: it would be too much of a political liability for VP and presidential hopeful Vance if the administration vetoed a rare congressional consensus on AI.
Whether the law that makes it through that window will be good is less clear. I have my concerns: the dynamic I describe is principally driven by a sense of having to do something rather than by the quality of any one specific policy proposal. And the final contours of a deal would be shaped by rounds and rounds of haggling over language, making it easy to lose legislative nuance in the process. The result could easily be unsatisfying provisions from one end of the spectrum – say, broad preemption without substantive stipulations – traded for equally myopic provisions that only address incidental current harms.
…And How To Use It
What can you do? Shaping that prospect is less about coming up with a particularly clever one-size-fits-all legislative framework than about identifying the best versions of bad ideas. Policymakers will try to push their political priorities through this window, and those priorities will remain fairly immutable; so you have to ask what the best legislation built on each of them would look like. What’s a stipulation on AI and labor that doesn’t just sound good? What measure on child safety actually helps us get at the underlying and scalable problems of deception and sycophancy? The answer to these questions can’t be ‘do these things and also do very clever frontier policy on top’. That risks the clever frontier policy being thrown out of the negotiations as soon as it conflicts with other, more immediately politically salient goals.
To insulate good policy against these politics, the response to the political driver and the actual policy merit have to be closely interwoven. In practice, that means isolated frontier safety policy is rarely effective; you have to link specific areas of frontier policy to specific areas of near-term public salience. In child safety, that might mean making a case against age gates and for evals-based solutions that get at underlying sycophantic and deceptive tendencies. In labor, that might mean going to the extra effort of making labor market policy actually scalable by defining appropriations mechanisms for expandable safety nets early, and so on.
These solutions sit between two unfortunate attractors: being happy with piecemeal solutions that don’t further medium-term policy goals – which would be dismissive of the fact that good policy windows are rare and require actual progress; and making purely horse-traded policy that doesn’t anchor frontier-focused policy in politically important issues – which would be dismissive of the true political drivers making AI policy possible in the near future.
Politics At Last
That window, too, will pass quickly. I suspect that AI politics will truly intensify once we enter the presidential primary season – which you’d usually expect to start around late 2027, when candidates start building national profiles, testing messages, and considering policies. The incentives for outright politicisation of AI will be much sharper even than in the midterms. That partly has to do with further increases in salience that make more people think they ought to talk about AI. But it also has to do with how primaries work – they reward candidates who manage to carve out a niche within their own party and distinguish themselves from the mainstream and leading candidates. And current party politics offer some such opportunities:
On the Republican side, frontrunner and current Vice President JD Vance will be left holding the bag for the Trump administration’s record on AI policy, for better or for worse. That leaves a gap for tech-skeptical voices to attack him if the administration remains accelerationist, or for unapologetic techno-optimists to contest his support from Silicon Valley if he ever pivots to a more skeptical position himself. Already today, Senator Josh Hawley and Florida Governor Ron DeSantis are lining up for the tech-skeptical primary lane.
On the Democratic side, AI seems like a likely point of contention between moderates and left-wing populists, with the latter already making headlines with deeply anti-AI views and proposals that intersect with their general distaste for big tech, billionaires, environmental harms and labor disruption. It gets even more complicated because the leading moderate, California Governor Gavin Newsom, can’t completely pivot to an anti-AI position without upsetting his base of donors and supporters in his home state.
Even if you don’t put much stock in the increasing-salience story, these dynamics make AI politicking very attractive – they’re excellent wedges to drive party bases apart and to secure a foothold in each of the two crowded and contentious primaries. And once that happens, the legislative windows will once again close: when people want to campaign on an issue, they have an incentive not to let policy happen beforehand. Any compromise, especially a bipartisan one, reduces the profile of the issue and calms down the debate. And with majorities as thin as they are these days, even a few electorally motivated defectors can jeopardise any legislative attempt. In terms of actionable lessons, there’s not much to be done about the primaries today – but the prospect of primary season means that the post-midterm window will be short and precious.
What Follows
We face three phases: attempts to break the blockade through the midterms, a brief window after, and then political chaos through the primaries, making for a narrow opportunity worth preparing for and capitalising on. What follows from that depends on where you sit:
For accelerationists, it means you need to think hard about how to make use of borrowed time. If you think you can actually win on legislation before the midterms, you need a better lever to move the Democrats. I know I’m a broken record on this, but I still believe the best path is a compromise trading deep and narrow regulation for broader preemption. But if you think you won’t get a law this year, you might rather ease off the posturing to retain some capital and good faith for negotiations in the next Congress.
For safety advocates and regulation proponents, it seems to spell a comfortable next few months of playing defence. But those months need to be used well to prepare for what comes after: the political drivers of whatever AI policy push we can expect will not be perfectly aligned with any reasonable advocate’s policy priorities – so there’s translation and groundwork to be done to develop solutions that harness these political drivers for actually good policy. If this opportunity is squandered, safety advocates might find themselves supporting AI laws that don’t do much beyond confirming their reputation as unabashedly pro-any-regulation.
More generally, this all means it’s worth recognising that the most politically likely moments for policy action rarely guarantee good policy. In principle, that opens two avenues: try to make good policy more likely in the unlikely moments, or try to make bad policy better in the likely ones. The mistake is optimising for policy quality when politics are prohibitive, or for political viability when politics are already favourable. Plan accordingly, or else we're headed for a 2026 with little policy and much political theater.


