The Most Dangerous Time in AI Policy
If the deployment of human-level AI systems disrupts our political institutions, we won't be prepared for further progress.
Damming a great waterway is one of the most high-stakes moments in modern engineering. Once the concrete is poured and the river is diverted, we can only watch and face the consequences. When, in the 1960s, Italian civil engineers faced pressure to ignore early warning signs in order to keep hydroelectric power fueling the ongoing economic miracle, disaster struck: a landslide into the Vajont reservoir sent a wave over the dam that laid the Piave valley to waste, killing around two thousand people.
AI capabilities are growing rapidly, and recent paradigm shifts in how leading models are built have renewed a general optimism that breakneck progress can be sustained. A somewhat clear trajectory is slowly coming into focus, leading first to AI systems that provide essentially human-level economic activity at scale (‘AGI’). But the most drastic and important consequences of this growth might only manifest once AI capabilities move beyond that, toward superhuman feats of long-term planning, research and innovation (‘ASI’). This transition will be the most dangerous time in AI policy.
Today, many researchers agree that all the pieces for AGI are, in principle, in place: the necessary architectures and algorithms exist, and iterating on them for a few years will get us there. The trajectory from AGI to ASI is much less clear: it might be a simple continuation of prosaic progress; it might be a rapid growth curve kickstarted by AGI itself accelerating AI development; it might be a much longer path, owing to frictions in AGI deployment and the need for further technical paradigm shifts. ASI might not happen at all; but conditional on AGI being possible, I wouldn’t count on that.
On balance, I tend to believe we’re due a somewhat slower process, best measured in years at least: between economic frictions in deploying AGI systems, necessary paradigmatic breakthroughs, and delays in building out compute capacity, the road to superintelligence is plausibly quite rocky. But I think my argument suits a broad range of views on the matter; as long as you’re not in the camp of a rapid intelligence explosion, most of what follows will apply.
So, I stipulate that there will be a non-negligible period between the widespread, disruptive deployment of AGI systems and the development of ASI. This will be the most important and most dangerous time in AI policy. Shaped by the public and political salience of economically and politically disruptive AI systems, political institutions will be much more receptive to a whole host of drastic measures – some of them prudent, some of them potentially disastrous. While we do not know enough about the substance of these future policy questions just yet, we can work to shape the volatile moment in which those decisions will be made.
Superintelligence Is Policy Contingent
It’s difficult to talk about the effects of a superintelligent AI system without digressing into science fiction. That is not so much because these systems are implausible in and of themselves, but because their nature and outcomes could differ radically as a result of even minor technical and environmental details. Hence, ASI futures are currently best described as stories, each exploring one of these branches: any single branch is so implausibly contingent that it is best presented as a narrative, and the branches range from blooming utopias to bleak dystopias. That means two things for this piece: I’ll refrain from speculation on specific ASI futures; and the overall range of outcomes is broad enough that we should care quite a lot about any tractable nudge to the trajectory.
One nudge, and maybe the most important, is policy. In principle, nation states do have the power to meaningfully shape the development of superintelligent AI systems. Particularly given the compute-intensive, power-hungry, difficult-to-hide nature of modern systems, a capable state might have a lot to say about what happens. This influence ranges from enforcing stipulations on how ASI is developed (say, on alignment); to who develops it and under what structure (privately, nationally or internationally; unipolar or multipolar); to whether it is pursued at all. Charting many of these potential courses will take a political fight, to be sure. But in the face of the downsides, these fights seem worth taking: hasty, securitised nationalisation can entrench concentrations of power, race dynamics could drive dangerously premature deployments of superintelligent systems, and botched regulation, censorship and restrictions in the initial training of these advanced systems could well have long-lasting ripple effects on whether truth and important values prevail. You can get this really right or really wrong, in a great number of ways.
So if that period between AGI and ASI is where we pour the concrete into our dam, how do we best avoid the flawed incentives that led to disaster at Vajont?
Shape Terrain, Don’t Fight Just Yet
It is only after the deployment of AGI that discussing policy on superintelligence will become viable and effective. Once human-level AI systems are widely deployed, the political salience of AI will skyrocket; only then will policy proposals that grasp the scale of the coming change be viable to voice. And only once human-level AI systems have been successfully developed will we get a clearer glimpse of the technological trajectory beyond them; only with that look past the horizon will we be able to introduce somewhat informed policy ideas.
That means the mere fact that ASI policymaking is likely to be important does not by itself provide any action guidance. We know very little about these speculative systems, we know very little about the state capacity that will encounter them, and we know even less about the exact nature of the risks they will pose. Much policy work vaguely aimed at superintelligent AI is bound to miss the point; and I believe it has been a fatally flawed takeaway to suggest that, just because ASI is policy contingent, it is worth working out specific ASI policy ideas today. Such proposals have taken up a prominent place in external perception, have been particularly susceptible to misinterpretation, and have ultimately turned out mostly inapplicable anyway.
There is a more robust class of interventions available today. A central requisite of sound policymaking is a political environment conducive to making good choices. Such an environment starts with a well-configured democratic process that keeps policymakers accountable to the electorate’s preferences, but it also includes the ability of policymakers to identify policies likely to serve those preferences, and the capacity of states to implement those policies effectively, both nationally and, if necessary, through global agreement. This environment is under threat from far more tractable and likely impacts of AGI deployment.

AGI Disrupts the Political Arena
Very briefly, I’ll note three highly disruptive elements of AGI deployment that could reshape the political terrain in a way that makes most sound, forward-looking policy unlikely. A past post goes into some more detail on this.
Labor Politics
First are labor market disruptions: human-level agents that can be readily deployed as drop-in remote workers could displace a large portion of advanced economies’ white-collar jobs. This would be a cross-sectional hit to employment that cannot be easily dismissed: it would hit young people as well as well-enfranchised and wealthy groups, and it would manifest across regions and countries. The political turmoil that might follow this displacement could easily escalate into radical policies, from crackdowns on AI technology to ultimately unsustainable social policy.
In the true sense of the word, this could spell a Luddite political environment, complete with idiosyncratic and politically supercharged notions of AI. Just as it was very hard to make sound policy that balanced fostering industrial growth, preventing destitution and safeguarding social structures amid the upheaval of the mid-1800s, sound policy around ASI development in a terminally disrupted labor environment might be near-impossible.
Geopolitics & War
Second is geopolitical upheaval from unequal global access to AI: if access to human-level AGI proves to be a supercharger for advanced economies, rifts will open up wherever that access is unevenly distributed. Today’s almost-competitive middle powers unlucky enough to miss out on frontier access might quickly fall behind; sectors of their economies reliant on providing digital remote work might be wiped out altogether. Poverty and destitution for some could easily be in the cards.
Once AGI systems become central to security considerations, these rifts will grow more consequential still: some nations might think themselves backed into a corner, get nervous, and strike first before their adversary reaches higher AI capabilities; others might think they face a last chance to move against a victim before it is protected by advanced AI. From Japan and Russia in 1904 to Germany in 1914 to the Manhattan Project of the 1940s, the perceived threat of a rival’s impending technological advancement has often fueled escalation.
Whether this new and exacerbated global inequality manifests in the economic or security domain, it would make for a volatile environment, ripe for economic conflict, political disruption, mass migration, and war. Some of the most reckless, callous, uninformed policy has been made in response to disruptions like these.
Institutional Failure
Third is a failure of institutions to keep up with technology. A couple of years ago, the inability of some US lawmakers to grasp the technology and business models behind social media platforms was a source of widespread amusement. In hindsight, we should consider ourselves lucky that those hearings did not have to inform legislation on a sweeping technological revolution. In a world rapidly changed by AI systems, slow-moving governments, their bureaucratic bodies and their senior policymakers will have a hard time keeping up. Already, many people ‘clued in’ on the pace of AI capabilities rightly bemoan the lack of technological awareness in parts of their governments, and the resulting lack of capacity to formulate, pass and enforce even directionally reasonable regulation. And much of the current decade’s legislative baggage will still apply to even the most advanced AI systems – with paradoxical effects.
If that rift widens, even an otherwise calm political environment might not be receptive to sound policy interventions in the face of ASI development. Some of this might be corrected by ‘wake-up calls’ that saliently demonstrate the scale of technological change. But once the deployment of AGI rolls around, the phones will be ringing with wake-up calls on all kinds of issues all day, and it will be hard to tell apart the ones that really matter. And even if a wake-up call did motivate legislators to move on the issue, an external shock might come too late to equip states with the capacity to meaningfully act.
All this is a serious problem for anyone who takes the technical trajectory of AI capabilities seriously. It has been a recurring theme of my writing that the most relevant line of conflict in AI policy runs not between AI safety advocates and ‘accelerationists’, but between those who take AI seriously and those who don’t. This is another such case: no matter your favored ASI-related intervention, if you believe it is likely to prevail in reasonable debate on its merits, you should not wish for a disruptive and disrupted environment. Concrete should be poured with a steady hand.

What to do?
There is work to be done on safeguarding and bolstering our policymaking environment amidst AGI disruptions. This frequently includes work that those convinced of the transformative power of AI dismiss, because they believe it is not commensurate with the scale of the ultimate problem of superintelligence. Most saliently, I believe it could include:
Work on predicting and shaping the labor market effects of AI, especially as it relates to smoothing over the rampant inequality and social erosion that rapid, unmitigated proliferation might cause. This plausibly includes academic research as well as (substantially more) engagement with political parties, labor unions and related stakeholders, to get the relevant political forces to the table as early and as well-informed as possible. Being early to this debate matters a great deal: once the inevitable politicisation occurs, having been the first to suggest sound interventions helps preempt counterproductive reactions.
Work on ensuring globally equitable access to frontier AI capabilities, whether through de-securitisation and advocacy for an open, market-based AI order that allows widespread participation in AI benefits; through international agreements that reduce the imminent threat of securitised AI, as has been done with some success for nuclear and biological weapons; through permissive access agreements that allow broad participation in US- or China-driven AI capabilities; or through technical work on competitive open-source products that alleviate asymmetries. If access to frontier AI is widely distributed, far fewer countries will feel imminently threatened by the technological trajectory.
Work on ensuring the enduring capacity of policymakers and governments to understand and navigate AI progress, whether through substantial reform and reprioritisation, through the ongoing provision of context and up-to-date information to a broad range of policymakers, or through those particularly concerned and informed about AI seeking, and being offered, government roles themselves.
To the most ardent augurs of superintelligence, this might not be an entirely satisfying response: Many are very certain that even greater disruptions are on the horizon, and are eager to work on them now. The most effective way to do that might still be to work on these prosaic issues of today. Today, we cannot do much to plot the path of future development in detail. But there is great merit in making sure we’re equipped to meet the high-stakes moment when it’s time to act.
"Work on ensuring global equitable access to frontier AI capabilities" - On the contrary, insofar as you're including open-source in this, if we have no idea what the strategic situation will look like, open sourcing would be one of the last things we'd want to do b/c once a capability has open-source that's pretty much irreversible.
Amazing work. Thanks for taking the time to write this.