7 Comments
Matthijs Maas

Thanks for writing this up; I'm finding it useful for working through some high-level questions about what the appropriate overall approach for the community even is. More thoughts to follow, but briefly: the dynamics around the fledgling value of 'independent' expert testimony also remind me of some of the dynamics in the US nuclear test ban debate of the 1950s:

https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1467-7709.2010.00950.x

> "Although inspired by moral concerns, these scientists found that in order to influence policy they had to uphold, rather than question, the nuclear deterrent. Physicist Edward Teller, meanwhile, sought to cement the link between nuclear weapons and national security. Congressional debate over the Limited Test Ban Treaty revealed that the scientific expertise that scientists relied on to argue for a test ban in the end proved a weakness, as equally credentialed scientists questioned the feasibility and value of a test ban, consequently weakening both the test ban as an arms control measure and scientists' public image as objective experts."

Anton Leicht

Yes, I think this is a very nice parallel; there's a lot to be said about the Teller parallel in today's AI policy space (even more so since I sometimes get the sense that people have been willfully emulating the dynamics around the Manhattan Project ever since Oppenheimer...).

Past a certain point of politicisation, scientific expertise really doesn't do much anymore to change hearts and minds. It's probably best deployed to define an Overton window of sorts, and to brief curious-but-not-yet-determined policymakers; I go into more detail on that in the follow-up here: https://writing.antonleicht.me/i/157796941/insulating-expert-consensus

Cyril

"So if Paul, head of AI security, goes to Congress and argues in favour of limitations on open-sourcing and cautions against loss of control, people will quickly realise he looks a lot like Saul, head of AI safety, who had been a mainstay of the last years of AI policy. And at that point, he needs a very good answer to ‘so what’s the difference between AI safety and AI security’, and it cannot be ‘security sounds great to Republicans’."

I agree. While I definitely opted for a more polemical take in my response to recent events, 'It should have been the Paris AI Security Summit',[0] I think we arrive at similar conclusions, particularly that security is not the same as safety:

"... if AI safety fears are well-founded, then AI is not merely a tool to be wielded, but a force that, unchecked, may slip from human control altogether. This must be communicated differently. AI safety asks, ‘What harm might we do to others?’ AI security asks, ‘What harm might be done to us?’ The former is a moral argument, the latter is a strategic one, and in the age of realism, only the latter will drive cooperation. AI must be counted among the threats to a winning coalition and as a direct threat to states. AI is not just another contested technology — it is a sovereignty risk, a security event, and a potential adversary in its own right."

[0] https://cyril.substack.com/p/the-paris-ai-action-summit-2025

I.M.J. McInnis

I think PauseAI has a lot of message discipline, actually. (And they're fringe with respect to EA / AI safety etc., but with respect to the general public, they're certainly less fringe than CFAR/MIRI/etc.)

Jonathan Gibson

Really interesting. Do you think that if tech causes job displacement and further dislodges people's sense of self, the safety movement could find an ally in the 'populist' movement? Many people vote for populists because they dislike the rapid changes (economic and cultural) that have unsettled their sense of 'self'. Whilst this might be a slightly unpalatable option for some, I suspect many Trump voters distrust 'big tech', and although nationalism and US-China 'race' sentiment may hinder these efforts, they could be potential allies if the debate is framed in a particular way: 'tech elites' vs. 'the people'. Otherwise, I really like the framing of this piece!

Anton Leicht

I think they could! I also think this can quickly backfire: a lot of these backlash reactions can end up on the wrong side of history, and if the safety movement allies with them too quickly, it suffers the counter-backlash itself. I commented a bit on this in my follow-up post, basically arguing that to pursue these alliances, you need a much less monolithic movement: https://writing.antonleicht.me/p/unbundling-the-ai-safety-pitch?r=1va1wu

Jonathan Gibson

Thank you! To put it very crudely, though: isn't the main divide in society simply populist vs. non-populist nowadays, so is there really a choice? Either way, these parties are gaining power, or have already gained it, so appealing to those elements which distrust Big Tech (whilst not stopping other bad things they might be doing) could be necessary, even if these parties cause all sorts of problems overall. I can't really see them going away anytime soon.

Also, I'd be surprised if the backlash to these populist parties were aimed specifically at their tech policies, and there is sometimes continuity in certain policies as administrations transition from populist to non-populist. Would it be a salient enough issue to actually muster a backlash, or a change of course from an incoming administration? Particularly as AI safety currently has more connections with Democrats, and there may be some overlap between left-wing politics and concern about AI safety (as opposed to libertarian folk).
