3 Comments

Thanks for writing this up -- I'm finding it useful for working through some high-level questions about what the appropriate overall approach for the community even is. More thoughts to follow, but briefly: the dynamics around the fledgling value of 'independent' expert testimony also remind me of some of the dynamics in the US nuclear test ban debate of the 1950s --

https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1467-7709.2010.00950.x

> "Although inspired by moral concerns, these scientists found that in order to influence policy they had to uphold, rather than question, the nuclear deterrent. Physicist Edward Teller, meanwhile, sought to cement the link between nuclear weapons and national security. Congressional debate over the Limited Test Ban Treaty revealed that the scientific expertise that scientists relied on to argue for a test ban in the end proved a weakness, as equally credentialed scientists questioned the feasibility and value of a test ban, consequently weakening both the test ban as an arms control measure and scientists' public image as objective experts."


"So if Paul, head of AI security, goes to Congress and argues in favour of limitations on open-sourcing and cautions against loss of control, people will quickly realise he looks a lot like Saul, head of AI safety, who had been a mainstay of the last years of AI policy. And at that point, he needs a very good answer to ‘so what’s the difference between AI safety and AI security’, and it cannot be ‘security sounds great to Republicans’."

I agree. While I definitely opted for a more polemical take in my response to recent events, 'It should have been the Paris AI Security Summit',[0] I think we arrive at similar conclusions -- particularly that security is not the same as safety:

"... if AI safety fears are well-founded, then AI is not merely a tool to be wielded, but a force that, unchecked, may slip from human control altogether. This must be communicated differently. AI safety asks, ‘What harm might we do to others?’ AI security asks, ‘What harm might be done to us?’ The former is a moral argument, the latter is a strategic one, and in the age of realism, only the latter will drive cooperation. AI must be counted among the threats to a winning coalition and as a direct threat to states. AI is not just another contested technology — it is a sovereignty risk, a security event, and a potential adversary in its own right."

[0] https://cyril.substack.com/p/the-paris-ai-action-summit-2025


Really interesting - do you think that if tech causes job displacement and further dislodges people's sense of self, the safety movement could find an ally in the 'populist' movement? Many people vote for populists because they dislike the rapid economic and cultural changes that have unsettled their sense of 'self'. Whilst this might be a slightly unpalatable option for some, I suspect many Trump voters distrust 'big tech', and though nationalism and US-China 'race' sentiment may hinder these efforts, they could be potential allies if the debate is framed in a particular way: 'tech elites' vs the 'people'. Otherwise, I really like the framing of this piece!
