AI politics feels at a knife’s edge. A week after I argued that safety advocates and the ‘tech right’ ought to strike a deal around federal preemption and frontier safety, the politics seem as volatile as ever. I’m reassured by the depth and breadth of positive response to Dean Ball’s proposal and my endorsement, and really do feel that movement is possible. But much strategic uncertainty remains, and it continues to erupt in rhetoric. On the safety side, people are wary of the tech right and hope for its imminent political failure. On the tech right side, people wonder: why compromise, when we hold all the cards?
So a week later, some on both sides mistakenly think themselves above a deal. As safetyists weigh what to make of Sriram Krishnan’s measured articulation of grievances, and as the tech right considers heeding its allies’ warnings to make a deal, they both have a devil on their shoulder. Its destructive logic goes, ‘let’s take the fight and take our chances with the next guys’. Against that voice, I’ll defend and expand my case for deals and détente in two arguments:
First, that the tech right is already politically embattled, and would suffer from having to focus its stretched political resources on fighting the safety movement.
Second, that AI policy under the Trump administration is better with the tech right around – especially compared to any realistic alternative.
At the end of a more vicious fight well into midterm season, we could end up with worse AI policy for everyone. We would see the tech right supplanted by a politicised populist alternative with a poor grasp of the technology and its promise — which in turn spells worse international diffusion, less AI-driven growth and progress, and a push of safety-relevant development further into secrecy and beyond public oversight.
There is still another path. It requires some careful rapprochement: a safety movement that looks past the rhetoric to understand how it can prove its detractors wrong; and a tech right that realises that neither its political position nor its policy goals are served best by fighting safety advocates. Progress on this path begins with understanding the tech right’s current position.
The Political Arithmetic of Tech-Right Influence
You, too, might have heard the tales of the tech right’s ascendancy; of a successful capture of the US government and the steady passage of tech-friendly policy at every step. And much of that is true: the tech right elements within the White House, David Sacks in particular, have held on far longer and far better than observers had predicted – though Sacks’ impressive media presence also plays a part in that. And by all accounts, important decisionmaking on technology policy has successfully been centralised around a few tech right decisionmakers at Commerce and the White House. From that center, they have defied pushes for stricter controls on exports to China, authored the AI Action Plan, and executed a controversial mercantilist foreign policy that sees US datacenters rise in the Emirati deserts and global leaders rushing to get in on tech deals with the President.
Soft Power Only
But this past presence and influence is still different from sticky political power. The tech right’s pitch to the President and MAGA’s political leadership has been to provide expert talent, commercial ties, economic credibility and an intellectual underpinning (as well as campaign funding, though that matters less on the margin than many think). This contribution was rewarded with outsized say on politically marginal, comparatively low-salience policy. That is a rare dynamic; the Christian right or jobs populists, for instance, come with a contingent of voters in states important for Republican majorities. That implicit leverage makes them powerful in Congress: giving them wins can be spun as electorally important, and it makes them essential in election years. The tech right, by comparison, is strongest when it is furthest from electoral math: there aren’t all that many software engineers, and they all live in San Francisco – you’re not going to win California because of the tech right, and Twitter gets no electoral votes.
That contribution logic explains why the tech right is primarily empowered to ask for low-salience policies in return for its support: higher-salience policy wins require fights or deals with politically more influential groups. For crypto, this is good news: no one other than people with crypto cares much about crypto, and so the broader coalition is mostly happy to let the tech right do whatever it wants. This used to be the case for AI as well, but AI is becoming very political very quickly. Intersections with many salient areas are starting to emerge: jobs and child safety are the current big-ticket items, but others will follow.
As a result, the tech right’s modus operandi in AI policy has often been to take marginal wins where they are available, shift policy decisions at the margin on issues of lower salience, and carefully pick fights where they pose no outsized risks: around export controls, against an atrophied national security community far out of favour with the administration, or around AI buildouts foreign and domestic that fit well into a general deregulatory, business-friendly, deal-forward political agenda. That might be one reason it didn’t go to bat for the moratorium in July: as soon as that debate reached high salience, the pro-moratorium side seemed likely to lose. Because the tech right ducked out early, the crusade ended there, and the anti-moratorium momentum evaporated before it could go searching for culprits.
Why the AI Safety Antagonism?
This dynamic, I believe, is also instructive for understanding the kinds of fights the tech right does pick. Many have been puzzled by Sacks’s repeated decrying of effective altruists and AI safety advocates, even though you’d think the more obviously luddite tendencies of jobs- and current-harms-related populists should offend him even more. In fact, the AI safetyists are perhaps the least left-aligned, most pro-technology camp among the tech right’s adversaries – so why do they seem like the primary target? One explanation surely lies in hard-to-parse intra-Silicon-Valley disputes. Another lies in some genuine mistakes, overreaches and hasty political alignments the safety movement has committed in the past, about which I’ve written extensively before.
But an underrated factor is this: there are no effective altruists in the GOP, no AI safety senators – so identifying safetyists as the culprits is politically safe. If Sacks turned the same rhetoric against more obviously technophobic ideologies, this would inevitably prompt an inner-party conflict. If he categorically dismissed concerns around jobs and mental health the same way he dismisses concerns around frontier risks, powerful Republican lawmakers would turn their scrutinising eye to the tech right’s presence in the President’s orbit. The tech right has an interest in avoiding that, and thus looks for enemies outside of the coalition. I understand that this explanation might not pacify safety advocates – but I do think it adds important strategic nuance for interpretation.
Political Headwinds Can Rise To A Storm
This general political logic means that, as AI becomes more and more salient, the tech right’s influence is at risk. The maneuvering space afforded by dodging around the most salient issues is shrinking by the day, as mainstream politics bleeds into AI policy and mainstream policymakers begin taking over the discussion. Most obviously, for any little issue in AI policy, you’ll soon find vocal opposition on the other side. Once increasing adoption spells even momentary labor market impacts, lawmakers will fight you on statist, job-protectionist grounds. Once datacenter buildouts’ impact on electricity prices becomes visible, local lawmakers will fight you on cost-of-living grounds. And while the tech right can dodge away from any one single policy fight, at some point there won’t be much policy left to make without taking losing fights. That trend will put the tech right in more and more direct opposition to other members of the coalition. In that conflict, the safetyists are yet unaccounted for – their sway and funding could strengthen the resistance the tech right faces at every juncture.
A Burden Come 2028
There’s also the potential for more direct electoral upheavals: the election in 2028 casts a long shadow over the present day. For the GOP’s electoral prospects, the tech right – not as a policy platform, but as a group of people – could quickly turn out to be a burden. If AI becomes more and more important, it makes for a glaring vulnerability in the GOP’s 2028 pitch: with the tech right in good graces and visible positions, the party will inevitably be cast as having sold out American workers to tech billionaires. Tech-right-specific angles will abound – casting the protagonists as out-of-touch, as transhumanists and successionists.
There are two potential drivers of this trend. It might be a Democrat line of attack, especially if the populist left clinches the candidacy, which could then force GOP leadership to visibly distance itself from the tech right. Or it could come from inside the party, as the presence of the tech right opens presidential hopefuls up to an anti-AI primary challenge. Already today, Senator Josh Hawley seems to be gearing up for an even more explicitly anti-AI next stage of his political career. If the administration – and with it the likely presidential candidate Vance – stays associated with the tech right, it might prove quite vulnerable over an administrative record of supporting and endorsing it. That is a much stickier risk for the tech right than any policy position: you can dodge away from losing policy battles, but if who you are becomes the problem, little room to dance remains.
The midterms might be one obvious catalyst for that trend to pick up speed. Right now, AI as an issue is only suitable for extraordinarily vague polling, owing to its low salience. But post-midterms, a lot more information about the political viability of different platforms will come into view; campaigns will start thinking strategically about issue viability, and start considering which policies have been boons and which burdens. My suspicion is that AI as a technology, economic input and cultural artifact will remain deeply unpopular, and correlations around that fact will start showing up in the data more prominently.
If all this comes to pass, the GOP coalition can be vicious to its own. Beyond these reasons of electoral strategy and coalitional dynamics, the tech right is currently still keeping a lid on latent animosities within the rest of the GOP coalition, many of whose members are suspicious of its purity of faith and allegiance. By all accounts, David Sacks in particular has maneuvered very well around the White House, its senior staff and the MAGA faithful. But spurred by political volatility, these things can all still come crashing down suddenly – just ask the many well-connected administration officials who found themselves ‘Loomered’ on the tail end of falling out of favour with the core MAGA movement.
The Limits of PAC Money
The tech right, too, is reading this writing on the wall. Perhaps partially in response, it has decided to ‘PAC up’ – it is now armed with a near-unprecedented vehicle for political spending, a $100 million super-PAC called ‘Leading the Future’ (LTF). This will be a force to reckon with, no doubt. But it might not be effective in staving off the most important political threats, especially if it is spent on fighting safetyists.
People like to compare LTF to Fairshake – the super-PAC that has seen astonishing successes in crypto policy by following a playbook of aggressive, broad, highly political spending that advanced Congressional crypto champions and disincentivised would-be critics from interfering. But in large part, Fairshake worked so well because the price of acquiescing to its demands was always very low: few policymakers had genuinely strong feelings on crypto, and even more importantly, there was never a big electoral incentive to go against Fairshake. So with just a few high-profile victories, Fairshake managed to create a climate of fear that no one was incentivised to test.
But replicating the Fairshake playbook in AI policy will be difficult. For one, salience is so much higher: voters will actually care about many AI-related issues. That makes the trade-off much less one-sided, because the threat of PAC money alone might not be enough to stave off anti-AI positions if they’re politically lucrative enough. To move the needle on AI, you actually have to spend on the races you care about. That approach is far more limited: You can target some Democrats for defeat, but that doesn’t solve the problem of intra-party pressures. Targeting Republicans is harder, because it presumably restricts your spending to primaries if you don’t want to get into coalition trouble by bankrolling Democrats. And it can backfire, because markedly anti-tech figures can get a lot of political mileage out of portraying themselves as ‘targeted’ by big tech. This issue is even further complicated by the real prospect of counter-money; push too hard, and you’ll find that a lot of political money cares about the tech issue – and might find your favoured candidates going up against both populist right and tech-Democrat money.
Now, rumour has it the super-PAC will devote quite some time and effort to targeting AI-safety-aligned policymakers and initiatives. That sounds more likely to be effective, but I do not see how it addresses the tech right’s main challenge, which comes from the political pressures within its own party. In fact, it further exposes it: by deepening fights that are peripheral to the main challenge, and by inviting safety advocates to focus on defeating the tech right. You might still think this is worth it, if you thought the safetyists an existential threat. But as I argued in depth last week – and as Dean Ball has stated much more eloquently – that does not need to be the case, and so the tech right could still pivot away.
In short: if you understand PACs as the main vehicle to save the tech right, then spending them on fighting safety advocates means wasting that money on a peripheral threat. You’d consider the deepening conflict with safetyists a liability – and think about how to free up precious money and resources from it. But right now, the tech right is at risk of running the Fairshake playbook where it will not work, and of beating yesterday’s enemies instead of tomorrow’s. That won’t do to address the deeper political threats.
As Good As It Will Get
What should we make of this trend? Beyond animosities on X, intelligent observers do have some substantial reasons to disagree with tech right statements and policies on exports to China, on frontier risks, and on the prospect of fast progress toward advanced capabilities. Now that they see the political future laid out above, they might suspect a window to strike and supplant the tech right. But they would do well to investigate the alternatives first.
Who Else If Not The Tech Right?
Right now, I am not convinced the ‘tech right’ would be replaced by something better: not by the standard of wanting to get AI right very generally, and not even by the standards of ardent safetyists.
That is in large part because the tech right’s most likely replacements are populist types with a spurious-at-best grasp of the technology and its ramifications. To understand why that’s the realistic alternative, look back at the conditions that would see the tech right pushed out: a political environment supercharged by economic, cultural and social anxieties around AI. In that scenario, the administration would be incentivised to make visible concessions to this politically motivated crowd – if not outright by shifting the issue to the direct jurisdiction of Miller-type MAGA faithful, then at least by ceding much more influence over the issue to congressional Republicans with a track record of inane rhetoric on AI. At minimum, they won’t be willing to engage with the technical discussion in the way the tech right has.
Put another way: the only currently realistic way that other ‘reasonable’ forces could take over the tech right’s mantle within the administration would be if that shift was prompted by a low-salience shift in technocratic preference.1 But due to shrewdness, money and influence, the tech right will not be easily displaced in a low salience environment – so in any environment in which the tech right is displaced, AI policy could well get worse. That’s for reasons of policy and of political dynamics.
Reasons of Policy
International diffusion. In its market-share-driven approach to selling AI systems abroad, the administration has committed itself to a fundamentally export-friendly approach to AI trade. It is hard to overstate just how contingent this outcome was – especially as the Trump administration is moving to retreat from some of its deeper-integrated trade relations. It makes for bright prospects for many middle powers, who can now, in principle, buy frontier AI capabilities. This incentivises them to find their own economic contribution to an AI-driven economy, but does not put them on the death ground that would have been implied by a highly securitised or isolationist paradigm. If you compare this AI foreign policy with many other areas of the administration’s trade policy (which might take its place after removing the tech right), I think you’ll find it extraordinarily mutually beneficial. I rate this issue very highly for reasons I’ve described in greater depth elsewhere – permissive exports cut the world in on AI-driven progress and growth, and they lock out China from its path to AI-strategic victory through international diffusion. Looking at the track record of the broader GOP coalition on these matters, a populist replacement would almost certainly reverse this stance.
Domestic economic diffusion. For many reasons, I believe it is important to diffuse AI technologies through the American economy quickly and seamlessly, even at the cost of some transitory disruptions. I of course think this is good and valuable for often-explained reasons of growth, progress and American competitiveness that don’t need restating. But I also believe this because of the labor market politics specifically: hastening diffusion of augmenting and productivity-boosting AI technology is essential to insulate the labor market against displacement, either from full automation or from augmented workers outside US borders. I do not believe that most job-concerned members of the populist right appreciate this nuance: from everything they have said and shown, I strongly suspect they’ll veer toward friction-inducing regulation that comes back to hurt the workforce down the road.
Frontier safety. This is the trickiest one – and it depends greatly on how much you think safety would be helped by slowing down deployment in general: if you do, you’ll favour putting broadly anti-AI forces in charge. I don’t share that view, and would instead point out some other factors. For one, the AI Action Plan is the playbook for the next years, and it genuinely delivers on some safety-relevant topics. I understand it’s hard to look past many of the theatrics, and plans are not yet policy – but it does count for something that OSTP has put pen to paper and that the Action Plan (and not the Techno-Optimist Manifesto) is what came out. It’s not enough on safetyist metrics, but I believe it counts as evidence of future potential. To add, the tech right does have a fundamental understanding of the technology and its drivers – which might count for a lot in reacting to serious safety-relevant developments.
Now, I understand that safety advocates are excited about the populist forces on the basis of safety-focused proposals in the Hawley-Blumenthal bill; and that the ‘AGI-pilled’ nature of the CCP Select Committee has revived some hopes in a national-security-led approach to frontier safety. But congressional self-promotion through floating unlikely bills is a categorically different beast from actual lawmaking. Right now, lawmakers like talking about AI in these terms because it fills a gap the administration is leaving, because the political salience and upside are high, and because they face fewer constraints. But if they moved from that role into the very different incentive space of actually governing, I’m not sure their current commitments would survive. Until then, my high-level view remains that empowering populists based on their current, incidental commitments always carries a risk. If there was an alternative path to frontier safety compromise with the tech right, it would be worth taking instead – and I still think there is.
Reasons of Political Strategy
Beyond the policy reasons, you should be mindful of the strategic counterfactual.
First, I do not think you are sure to get H20 export restrictions back even if you supplant the tech right. This is a big crux: if you push many of the most reasonable voices in frontier AI hard enough to justify their dislike for the tech right, the conversation ends up at the H20 decision and fears of subsequent B30A export permissions. I’m a bit less sure of this myself – but let’s grant the argument for now. While Nvidia might initially have found a buyer for the pro-export argument in the tech right, I don’t think the influence still runs through those channels. Nvidia is the most valuable company in the world, its business model drives a sizable part of US economic and stock market growth, and its continued success is essential to staving off volatile economic developments. By all accounts, Jensen Huang has leveraged this position to deepen his relationships with top political leadership. By economic necessity and political relationship, the influence of Nvidia now extends deep into the Oval Office, well past the niche areas of AI and tech policy. Tomorrow’s OSTP would face the same constraints, and might not be so likely to restrict inference exports to China after all.
And last, the tech right sustains an important equilibrium: as long as it’s around, the ruling coalition will remain fundamentally divided on matters of domestic AI policy. No matter how far administrative consolidation proceeds, congressional Republicans, driven by electoral incentives, still skew more sympathetic to the populist case against AI. That puts any legislation, but also much executive action, at the end of a tug of war. And that gives others – dear readers – room to maneuver, politics to make. It means you can sometimes tap into one camp to derail the efforts of the other; it means things take longer, so you can mobilise opposition; and it means there is some incentive for coalition-building outside strict GOP party lines. Replacing the tech right elements would align the majority of electorally motivated congressional Republicans far more closely with the White House, creating a unified front and force for legislation that makes policymaking from the outside that much more difficult.2 Even if you dislike the tech right, I suggest you appreciate its contribution to a malleable environment. I’m not sure all of this ultimately reconciles any ardent critic with the tech right. But I do think it should inform your judgement of the risks and rewards of plotting to remove it.
Back From The Brink
All in all, I think the above logic reaffirms last week’s point: a deepening fight between safety advocates and the tech right turns out to both sides’ detriment. The most immediate implication of that is still this: there is mutual incentive for safety advocates and the tech right ‘accelerationists’ to pursue a narrow deal exchanging federal preemption for frontier safety regulation. But it might be valuable to sketch out the mechanics of the underlying rapprochement in greater detail.
What Safety Advocates Would Need To Do
On one hand, there will soon come points where others in AI policy will have opportunities to seriously weaken the tech right – to hasten the trends I describe, and to empower the populist forces. This is particularly true of safety advocates, whose political power within this narrow trade-off is set to rise: with increased public salience and skepticism, safety advocates will find themselves momentarily more influential in their coalitions, and able to point money, public outrage and political attention at the tech right. I don’t know that much policy will come of it, but it would definitely cause real damage.
Past conflict will motivate safety advocates in particular to deepen that fight, and the promise of narrow success will do the rest. But even if you like the odds, the ugly fight might not be worth the outcome: even in low-level skirmishes today, battle lines solidify in ways harmful to the safety movement, such as when Twitter fights break out and fairly uncontroversial legislation in California gets caught in the crossfire. And still, an all-out PAC-funded fight around the midterms would be a good way to waste both sides’ political power on mutual neutralisation.
Whether for that reason or for the substantive policy concerns above, I’d hope the safety advocates show some restraint: do not overreact to social media outbursts, however offensive, that are explained by many factors; work toward deals that can actually deliver on narrow frontier safety priorities; and resist hastily entrenching factional battle lines between parties, companies and issues that do not need to harden yet.
What The Tech Right Would Need To Do
But for this to even be remotely viable, the tech right, too, would need to move. I know this publication frequently asks a lot of safety advocates – but I can’t reasonably suggest they could refrain from attacking the tech right while the tech right remains on the hunt for AI safetyists. To be able to act on this advice, safety advocates will need more solid evidence that the tech right’s attacks are contingent: right now, they see themselves at risk of becoming the frog carrying the scorpion across the water.
By the same token, I think that tech right hostilities pointed at the safety ecosystem specifically are counterproductive – not least because they invite increasingly destructive retaliation. From many conversations I’ve had since laying out my case for the preemption deal last week, I’ll reaffirm that the willingness for compromise and disarmament exists on both sides – please do not discount that fact based on Twitter vibes alone. But if the tech right does not heed the writing on the wall and instead turns more vicious under pressure, the paths toward reconciliation will close. Yes, the tech right can turn LTF and its public profile against safety advocates and lock them out of the halls of power; in some narrow sense, that is a position of power, and the dynamics of any deal will have to reflect the asymmetry.
But against the political headwinds it’s facing, the tech right needs to reconsider who its primary enemies are. Well-established friends of the tech right – Neil Chilson and Dean Ball, for instance – have suggested criteria for meaningful distinction. If not my warnings, the tech right should heed its friends’ calls for caution. Doing so means realising the safety movement is, in many meaningful ways, different from the pro-regulatory left-wing forces at the gates. And it means searching for ways to reconcile the movement’s best ideas with the tech right’s agenda, instead of for reasons to dismiss them out of hand. I read Sriram Krishnan’s recent post as a first step: an enumeration of how the safety movement would have to change. It’s worth taking that post very, very seriously, both as a matter of substance and of communicated openness. Safety advocates should give serious responses, and the tech right should take those responses seriously. From the tech right’s position of relative power, this will be all safetyists can get for now. I think it’s a better start than any alternative, but I do understand why some would disagree.
Détente
The safety movement and the tech right can both unilaterally choose to carry each other deeper into a political fight that will see them both marginalised. This essay puts to both of them the extraordinarily difficult ask of not throwing another blow at a staggering opponent. Getting overexcited at the prospect of beating old enemies can very quickly lead to missing the bigger picture, and overindexing on Twitter rhetoric risks obscuring the pitfalls of upheaval. An existential conflict around the tech right’s future might feel impossible for some, inevitable for others, and satisfying for many. It is none of those things.
There is no safe path for the tech right to get rid of the safetyists without compromising its own position, and no safe path for the safetyists to replace the tech right with something better. And so I believe both sides are stuck with the devil they know, and had better make the best of it.
1. Or, I suppose, by a security-relevant warning shot shifting ownership of the AI issue back to NSC and DoD, but you can’t plan for that.
2. Perhaps this changes with the midterms. But hoping for the midterms as a core political strategy has many flaws.