The Self-Fulfilling Prophecy of AI Securitisation
The rapid shift of AI into the domain of national security is frequently portrayed as a foregone conclusion. It's not, and we shouldn't treat it as one.
Last week saw the release of the Superintelligence Strategy: the second time in as many years that a seminal essay has made sweeping claims about the geopolitical and military implications of upcoming advanced AI systems. The authors are Dan Hendrycks, a leading AI safety advocate; Eric Schmidt, the former CEO of Google turned tech policy mainstay; and Alexandr Wang, CEO of the very ScaleAI that was recently awarded a landmark AI defense contract. Together, they lay out a mechanism along the lines of Mutually Assured Destruction that aims to prevent destabilizing superintelligent systems through the mutual threat of sabotage by great AI powers. This is framed as a doctrine to prevent adversaries from deploying destabilizing tech, but it's also a measure to prevent the development of these systems anywhere, by invoking a national security dimension that invites the threat of sabotage.
There has been some good discussion on the merits of this strategy, but I believe the underlying assumptions are more interesting still. Namely, both upshots of this new contribution hinge on the assumption of securitisation: that because AI systems will be so impactful, they will be of such strategic relevance that the national security apparatus of major powers will soon play a leading role in their development and deployment – up to interfering with that development through sabotage and bombardment.
AI Securitisation
Securitisation is an admittedly fuzzy term when it comes to key technology. In this post, I take it to describe a spectrum, ranging from quite obviously securitised projects like the Manhattan Project or the construction of aircraft carriers to somewhat securitised projects like energy production to mostly non-securitised areas like microelectronics, computers or the internet. Whenever AI capabilities, or decision-making over these capabilities' development, diffusion or deployment, move from civilian and commercial circles to the national security space, I take AI to become more securitised.
This new paper is not the first major contribution to the AI policy debate that makes an assumption of impending rapid securitisation. Last year's 'Situational Awareness' treated the nationalisation of advanced AI as a foregone conclusion as a result of its projected strategic nature. Leadership of major AI developers frequently invokes similar predictions of securitisation in their characterisations of the next years in AI. On top of that, many recent major policy contributions take a decidedly national security-focused perspective. The general trend in capabilities, combined with some recent increases in government awareness, seems to support that: AI is getting bigger and bigger, so why would the security apparatus not take a strong interest?
All in all, you quickly get a picture of an irrevocably impending securitised AI ecosystem. In part as a result, more and more policy work is drifting from asking ‘whether to securitise’ toward asking ‘how best to securitise’. I believe the conviction in that securitisation trend is not fully justified, and the resulting drift is regrettable. This post looks at the very real technical and political barriers to securitisation, the political motivations for some players to invoke securitisation anyways, and the reasons why these invocations should not go unopposed.
Evidence to the Contrary
To begin with, plenty of evidence points the other way. The US does not look like it's securitising all that much just yet. Sure, there are some limited security collaborations, such as the recent ScaleAI contract and some previous indirect engagements. But these are a tiny part of the defense budget, on par with plenty of other exploratory endeavours and niche technologies. Maybe the DoD is pursuing securitised AI in private, but no indications point to that: No major researchers, no massive GPU scale-up, no broader public-private contracts, not even a strong push to make existing command structures or capabilities compatible with an AI-driven military paradigm.
Even more telling is the political communication around government policy on AI. Both the outgoing government and the current Trump administration would have had plenty of incentive to stress the military value of AI, and their respective successes in harnessing it. One of the most-quoted sources on US securitisation is the former National Security Advisor, Jake Sullivan, who has frequently used public appearances to voice his conviction of escalating AI capabilities and the resulting security risks. But even his warnings have been very careful not to simply suggest a government-driven approach. On the other side of the aisle, President Trump would have had a slam-dunk opportunity to make a securitisation point when he presented Stargate, $500b of private investment into compute for OpenAI. In fact, OpenAI stresses the possible geopolitical upsides of the deal; but Trump focused his remarks and questions almost entirely on the markedly civilian risks and benefits of AI. If securitisation were coming and enjoyed political support, why did the US president pass up the chance to frame a landmark investment under his aegis as a strategic and military win? I'm unconvinced by an answer that relies on his sense of subtlety and subterfuge.

On the other side of the Pacific, the CCP does not seem very involved with Chinese AI labs just yet. Of course, by and large, the CCP does seem to increasingly recognise that AI is an important technology, as reflected in strategic funding, high-level involvement, etc. But that is a level of involvement that a lot of other industries in China, from infrastructure to car manufacturing, see as well; it's markedly different from the highly hands-on approach that the CCP pursues with regard to its major military contractors. The evidence is still entirely compatible with a market-first approach, in which Chinese AI would be supposed to win market share, cultural influence and economic hegemony on its technical merits – in some sense, DeepSeek already provided a blueprint for that. That version of the China-US race could still be closer to the competition between Huawei and Apple than between Lockheed Martin and Chengdu; and the CCP might think their cards are much better for a civilian AI race. All that is to say: There are many different ways for AI to be contested, hugely impactful, world-changing technology, and not all of them necessarily end up at immediate securitisation.
Barriers to Securitisation
But maybe my notes from above really just characterise superficial political misses or deliberate obfuscations. Or defense policy just moves very slowly before it moves very fast all of a sudden, and there is limited value in looking at present trends. The evidence in the open is quite compatible with the notion that securitisation is really happening in the far backrooms. But if you look at incentives, there are some real structural barriers that could explain why securitisation might come much later and much more slowly than many people assume.
Technical & Commercial Factors
First are technical and commercial background reasons why securitisation is not obviously prudent: The current paradigm has proven valuable for commercial use and benefits greatly from competition between US labs. There are a couple of contingent factors at play favouring a commercially driven environment:
The most promising path to advanced AI systems runs through useful consumer technology. The LLM paradigm in particular provides for many generally useful deployment cases that make for a worthwhile and revenue-generating product. That means the private sector is highly incentivized to pursue unlocking further capabilities without government involvement.
The models are being developed in the context of a rising tech market fueled by monumental amounts of venture capital, which means that private funding is bankrolling the US' climb up the tech tree. Move frontier development to government involvement, and the promise of returns gets fuzzier, the market becomes more opaque, and the bankrolling might stop.
The US AI ecosystem has provided for competition at the top, which ensures enduring market incentives to explore niches and push the capability frontier. A more monopolistic setting might have motivated national involvement to break complacency, but not here.
Dare I say the military value of prosaic progress on today's general AI systems seems just not that high compared to their economic effects? The path to economically deployable agents that need to be somewhat secure and somewhat reliable seems pretty clear. Reducing those error margins to the standards of a safeguarded military operation seems much more convoluted. So while economists and the job market seem set for major disruptions, as reflected in current rhetoric, the military use still seems comparatively marginal: US Secretary of Defense Hegseth, for instance, tellingly frames the immediate value of AI in terms of data analytics. There are very plausible applications of offshoots of current systems, or further-future applications, of course, and securitising either seems valuable. But progress on the current paradigmatic main branch: much less so.
All these factors combined make for a strong case to maintain the well-running commercial engine turning US economic power and tech dominance into a smooth capability build-up. Government involvement is necessary neither to incentivize further build-up nor to bankroll further scaling. Securitising and nationalizing AI development provides a much smaller benefit than doing the same to building nuclear bombs or advanced weaponry, where progress might simply not have happened without government-provided direction and funding. Maybe you get higher security standards, or slightly better coordination. But you might be doubtful of these benefits, too – after all, it's striking that Chinese non-AI capabilities are rapidly catching up to many militarized US capabilities, but that the non-securitized US AI environment still seems ahead. Why disrupt that now?
Political Factors
Second are political considerations that make securitisation difficult. Further securitisation is costly along two dimensions: It constitutes a heavy-handed intervention into private sector activity in favour of the government; and it costs a lot of money, whether that’s through procuring outrageously expensive training runs, building datacenters, acquiring stakes in AI labs, or nationalizing developments. Neither political will for government involvement nor federal funds are in particularly high supply at the moment. In fact, the part of the current coalition that cares the most about AI – the tech right – advocates for the exact opposite platform: hands off the tech ecosystem, and stop wasting taxpayer money. Even the recently announced strategic cryptocurrency reserve, which stood to directly benefit a part of that coalition, ran into plenty of intracoalitional opposition and subsequent watering-down from the get-go. Appetite for the government taking expensive charge is just not very high.
In the eyes of the reigning coalition, the current AI ecosystem might be doing fairly well: Pacing ahead of all geopolitical rivals, driving growth and progress, providing leverage through model diffusion and export controls. Even on the hardcore China hawk's view of the world, this seems to be one of the precious few technological races the US is winning right now. Why go the unpopular route of costly government overreach now? This model also explains why some instances of federal action so far have been possible, but might not be evidence of further involvement. Take, for instance, the deregulation of datacenter building, e.g. regarding energy supply through nuclear power, or construction on federal land. These actions are markedly in line with the political trends described above, and carry none of the costs of going against the grain. But to assume they would neatly escalate into steps in the opposite direction – constraining activity or raising costs to the government instead – does not go without saying.
It’s worth noting that there is plenty of precedent for highly impactful technology that did not end up securitised, sometimes for very similar reasons. Electronic consumer technology from phones to computers is an absolute cornerstone of the modern economy and also fuels a lot of government capacity, but is mostly provided through private companies; the internet itself is of massive strategic relevance, but has largely avoided a securitised or nationalised setting – in a lot of cases, the benefits of a market-based order have prevailed. You might think that AI capabilities will quickly eclipse even these fields in relevance; but those aggressive trend projections are still bound to feel fairly alien to the policymakers whose political incentives I describe here. To them at least, for the current and near-future state of the art, there would be plausible non-securitised avenues.
Where does that leave us? I don’t think these reasons are binding constraints on a securitised trajectory. The national security interest could easily outweigh them, especially driven by some external shock. But I do think they imply that securitisation is at least not a foregone conclusion.
A Self-Fulfilling Prophecy
Spirits called by him, now banished // my commands shall soon obey.
Amidst that uncertainty, we see a lot of statements and actions assuming a hard and fast securitised trajectory. From essays on nationalisation to developer statements stressing the geopolitical dimensions, much AI policy is framed in terms of a reaction to an ongoing trend toward securitisation. Why is everyone treating securitisation as such a certainty? One very plausible response is that I’m just wrong, and the national security relevance is actually much higher and more immediate. But another response might be that there is a substantial dose of trying and succeeding to manifest a trend into existence. Predictions around AI securitisation risk becoming a self-fulfilling prophecy.
There are plenty of reasons why that might be happening that go beyond honest concern for security risks and a stable geopolitical environment: The Cassandras of these prophecies all have decent incentives to push for a more nationalised environment. Others have commented on this in greater detail, but for a brief overview:
The leading AI developers can use strong securitisation to create a moat against new entrants by building up exclusive rapport with the government and scaling up security capabilities in ways that are entirely inaccessible to newer players – it's much harder to disrupt an entrenched defense contractor than a SaaS start-up, after all.
Commercial actors at the intersection of defense and AI simply stand to benefit from the perception of rapid securitisation. It's long-held wisdom in the military-industrial complex that once you have secured a healthy base of government contracts, the sometimes-glacial speed of DoD decision-making means your business is set for life. If you're the CEO of a company that takes aim at just these contracts, you might be incentivized to manifest a rapidly commencing securitisation – and your investors might be very happy to hear about it.
Safety advocates might be looking for a framing and message that resonates well with the incumbent administration. There will be very little Republican appetite to regulate a commercial technology; there might be much more appetite to make a strategic, securitised capability more safe and reliable. The way to the quite sought-after GOP buy-in into the safety platform might very plausibly run through securitisation.
I suspect some actors might consider securitisation predictions as an effective way to communicate upcoming capability gains. In that sense, the claim ‘AI will soon get securitised, just you wait until the USG steps in’ is a shorthand for ‘This is going to be a really big deal’ – and people say the former as an attempt to find a novel framing to say the latter. I’m somewhat sympathetic to that – there’s some extent of capability gain that’ll make a government intervention near-necessary. But in the time before, this imprecise shorthand makes it much harder to separate facts from fiction on the pace of national involvements.
These vocal predictions themselves have the power to reshape discussion on AI policy. A lot of actors bound by these incentives are critical opinion leaders in the AI field. As a reaction to their strong claims, many others might be nervously trying to anticipate or follow a trend: They see labs and leading policy organisations talking about securitisation, and they want to fit their pet issues and ideas into what they perceive as the upcoming policy framework. After all, if everyone is saying there will be securitisation, then one ought to say something about security, too. Quick, rename the safety institute!
Less facetiously, getting the right messaging lined up in anticipation of a policy shift is genuinely good political strategy. But at some point, this trend becomes self-reinforcing: more and more money gets spent, policy gets drafted, and discussion shifts toward a security-forward environment that, bit by bit, comes to be. That way, the fact that some are trying to manifest a policy shift and many are trying to anticipate it ultimately accelerates that shift and cuts corners on crucial debates along the way.
Fueled by the accelerating driver of a self-fulfilling prophecy, AI policy might be shifting away from asking very important questions on whether securitisation is even a good idea, and what might be done to stop it if it isn’t. I think treating a rapid pace of ever-increasing national security involvement as a foregone conclusion could be a mistake.
Beware Rapid Securitisation
Oh, the spirits that I’ve summoned // I cannot banish now!
This post is mainly about encouraging a debate around securitisation, and not so much about making a major contribution to it. But I think it’s valuable to outline some of the very good reasons why you might be sceptical of a securitised AI paradigm:
First, securitisation removes policy from a lot of otherwise-available levers. It hurts expert advice and disagreement because the government can say experts have no insight into the actual secret capabilities; it hurts democratic oversight because the government can declare important policy decisions around the balance of risks and benefits a matter of national security and remove them from public accountability; and it hurts lively engagement with a broader ecosystem of commentators, policy organisations, etc., because genuine participation in the debate requires privileged access to sensitive operations.
Second, securitisation is irreversible. Once some information has become privileged, some companies have built a moat from defense contracts, and the Pentagon has another budget item to cling to, it's very hard to get the genie back into the bottle – so you ought to be really sure it's a good idea to let it out.
Third, securitisation raises the risk of geopolitical conflict and inequitable AI access. I've commented on that extensively elsewhere – the gist is: Securitisation leads to much less democratic global access to frontier AI capabilities, in turn leading to major inequalities that could create global instability and motivate armed conflict before decisive advantages through securitised AI set in. Interestingly enough, Eric Schmidt, one of the authors of the Superintelligence Strategy, warns of exactly that destabilising nature in a recent book co-written with Craig Mundie and Henry Kissinger. I genuinely do wonder whether the same destabilization would occur were AI capabilities not understood as a strategic and military capacity, but as a fundamentally civilian achievement. If you think it would, I think you'd have good reason to dismiss my points, as we'd be in for destabilization that needed securitised responses anyway.
Fourth, securitisation could hurt the highly effective US AI ecosystem that has achieved global dominance. For the very reasons detailed as commercial factors above, securitisation could pose a threat to that dominance: by hamstringing and constraining competition, by drying up revenue streams and venture capital, and by introducing new cultural and strategic variables that have not performed very well recently.
Fifth, securitisation brings new safety risks. You might think the above is all worth it because it means that, at least, the adults will get a handle on the safety stuff. That might not be so: Racing dynamics can be exacerbated by overt government meddling that moves the competition from commercial to military. And between particularly dire multi-agent settings and corner-cutting from frantic racing, that does not make for a safety-conducive environment.
What to do instead?
If you look at all these factors and say 'that all sounds somewhat reasonable, but the USG will do what the USG does, and AI is just a bit too important to go un-securitised', I think you might be overestimating a nebulous security deep state that, in this administration particularly, is not as insulated from civilian political concerns as you might think. And inversely, you might be underestimating the breadth of the potential coalition that could be motivated by these arguments. Between avowed free-market advocates in the powerful tech right coalition, who might not be too enthusiastic about regulatory capture and shutting down new players; de-escalatory isolationists not out for an arms race anywhere; and the safety ecosystem, including some of the leading labs, the makings of an anti-securitisation coalition really do exist. It could anchor the next years of AI governance in today's admittedly messy AI policy reality, instead of ushering in a new paradigm at a rapid pace.
What would the success of such a coalition mean for the Superintelligence Strategies of the world? It would mean that the AI sector gets to be commercial-first for a bit longer. Some procurement here, some strategic use cases there, but much less security apparatus involvement. Risks would be legislated through the perspective of civilian technology policy, races would be decided through private ingenuity and regulatory conditions, and the policy debate would have to remain out in the open. Less certainty, fewer perceived adults in charge, no moats through government contracts and no mitigations through military mutual assurances – some more chaos, more uncertainty, but more openness, and maybe more advantageous proliferation and genuine freedom, too.
If you think that securitisation is right, and we should pursue it, I think that is a very reasonable position to take. But if you think that rapid securitisation is inadvisable, you should stop treating it as inevitable. The current moment decides on the degree and pace of national security involvement, and there still is leeway left. We should refrain from making self-fulfilling prophecies, and from hastily summoning spirits we might never get rid of again.
re. the internet not being securitised: you could even make the point that digital network technologies were born military, but became gradually de-securitised over time even as their military benefits became more obvious (e.g. Network-Centric Warfare in Desert Shield), simply because their commercial and civilian benefits ballooned even more rapidly.
The same desecuritizing dynamic applied to e.g. early internet systems trialled by the Soviets, which shifted from military trials in the 1950s and 1960s to broader tech programs intended to help manage the economy, supported by cybernetics enthusiasts and mathematical economists – before eventually fizzling ( https://doi.org/10.1080/07341510802044736 )
Moreover, there are a bunch of other technologies that were abandoned for various reasons by military planners who were more conservative and cautious about the theatre-readiness of moonshot technologies (see also the overview at: https://verfassungsblog.de/paths-untaken/ , or pg 27 onwards in this talk https://drive.google.com/file/d/1Oo2sHGYN5os5VoSBgP8wDXBQThSIzrFh/view?usp=drive_link ).
the securitisation dynamic here is also interesting to consider against previous instances of self-fulfilling dynamics affecting international politics, some in (arguably) beneficial directions: e.g. Democratic Peace Theory and Commercial Peace Theory only developed an empirical record in their support after the theories became popular (for likely political and ideological reasons);* in the early '00s, there were similar debates over whether Huntington's 'Clash of Civilizations' might have become one (perhaps it did).**
That may get especially tricky when these predictions are made at the interface of political and technological trends, though.
---
* https://academic.oup.com/isr/article/11/3/552/1798000
** http://cadmus.eui.eu//handle/1814/17458
(explored some of this in chapter 5 of my undergraduate thesis https://www.academia.edu/8007823/The_Forging_of_Our_Futures_The_Temporal_Construction_of_Political_Logics_the_Performative_Self-Fulfillment_of_Prophetic_Visions_and_the_Need_for_a_Post-Positivist_Transformational_Idealism_in_IR )