Do You Need A Wake-Up Call?
Big-ticket AI impacts might soon reshape AI policy. No one is prepared for that, but some people think they are.
The increasingly jingoistic rhetoric and expansionism of 1930s Germany seemed like a much-needed wake-up call for a somewhat complacent French defense establishment. Work on the Maginot Line of fortifications along the French-German border intensified, and the line was largely complete by the late 1930s – ready to repel a World-War-I-style broad infantry assault. But when Nazi Germany attacked France in 1940, its motorised forces circumvented the line through the Ardennes. Even the loudest wake-up call proved useless – it only motivated an ill-advised and outdated countermeasure.
A realization is spreading within AI policy circles: It’s really happening. The idea that rapid AI progress will change the world is going mainstream, and soon, the political landscape might follow. The vision of such a moment has long been a mainstay of AI politics: People frequently pitch politically untenable proposals, but when challenged, they reassure you that current political realities are of minor concern, because a wake-up call – an AI-driven event that establishes their favoured kind of political salience – is inbound. It’s after that wake-up call, they’d have you believe, that the time for their policy will come. So as AI capabilities and AGI awareness grow, many groups' anticipation of an external shock intensifies.
In principle, this is a savvy political perspective: If you hold strong assumptions about the trajectory and impact of AI capability growth, it makes sense to expect policy windows to open along that trajectory. In practice, however, these hopes for a wake-up call are frequently underspecified, leading to inadequate preparation to shape and capitalise on the resulting windows. The actual shape of upcoming sudden, external shocks deserves much closer prediction and examination. Otherwise, today's frontier AI policy – from open-source advocates to hawkish safetyists – risks being overwhelmed when AI policy suddenly enters mainstream politics.

Progress Without Shocks
The first concern with the wake-up call hope is that the connection between rapid progress and politically impactful shocks is not all that robust.
Insulation Through Prevention
First, we are about to cross a threshold in making AI models safe. In the past, AI-enabled harm was a matter of capability: You could get the models to do most things, but they rarely turned out genuinely helpful for causing harm. In the future, preventing AI-enabled harm will be a matter of propensity and prevention: The model could help cause harm, but safeguards at many stages will be in place to prevent that. At the moment, the equilibrium still holds, and models are safe enough. Maybe that changes if open-source models catch up to the frontier once again, but if it doesn’t, a different problem arises, putting a spin on the broader prevention paradox.
In an environment where capability is high, but propensity is low and prevention is effective, smaller-scale harms might be unlikely. That is to say: If substantial, but not insurmountable, barriers to misuse exist, we might not see the equivalent of school shooters or local troublemakers marginally uplifted (either because this will not happen, or because it will not be identified). Instead, the first salient harmful applications might well come from groups that take the matter seriously enough to find some idiosyncratic loophole, or even run stolen, modified versions of frontier models. These would have to be dedicated and motivated actors, and the harm they cause would likely be proportional to the resources they invested in causing it. On that trajectory, no wake-up call of acceptable scale occurs, and we’re left relying on a shaky balance between barriers and capability progress. This is bad for the politics of wake-up calls and a really unfortunate place for policy: In the absence of any empirical evidence before the fact, calibrating the margin of error between Bostromian crackdowns and reckless abandon is difficult.
Near-Misses Don’t Count
Second, and relatedly, the political salience of near-misses is exceedingly low. There are a lot of ways that AI can almost go wrong: Evals catching unreliable models before they are deployed, an advanced lab assistant model giving almost-correct instructions, an AI agent ordering pathogens from sellers who grow cautious at the eleventh hour – or conversely, a major lab almost losing a court case over some arcane application of a law that would stop the industry’s progress dead in its tracks (to some, the NYT vs. OpenAI case is an example of that). A lot of these will make very concerning examples to include in a very nice PowerPoint presentation, but very few of them will make national headlines; and hence very few of them will meaningfully change the political economy of decisive AI policy. If the trajectory toward hugely impactful AI effects is not ‘AI effects get larger and larger’, but ‘AI effects get more and more likely’, the wake-up call mechanism is far less likely to work.
Frog-Boiling Economic Effects
Other transformative AI effects can feel surprisingly marginal until they’ve changed everything. In the case of labor impacts, the ultimate political impact once we reach a critical amount of potential displacement could be quite high. But that does not count as a wake-up call: That very displacement is the kind of thing we’d ideally prevent and address with policy passed in a policy window before the fact! But until the critical mass of displaced or endangered job market participants is big enough to be tapped into, the effect might be slow, lagging and steady; not substantial enough on a month-to-month basis to warrant a headline. There will simply be fewer jobs from cohort to cohort at the big consulting and software engineering firms; a little less online gig and task work available; and a few fewer new jobs per newly founded company. There will be no empty villages, no desolate regions like in the 19th century; everywhere, there will just be fewer and fewer new jobs. Quite conceivably, on one of the biggest issues in frontier AI, there will be no wake-up call until it’s far too late.

All that might very well mean that the landscape of AI policy will not change until the lagging, impactful effects of progress actually manifest in the public eye. If your work today is trying to shape these effects, you might not get a better policy window than today. Any policy platform contingent on wake-up calls takes a substantial risk.
Sleeping Through The Wake-Up Call
The second concern is that, even if events that should register as shocks do happen, they can very easily be missed.
That’s because AI harms might just not register as related to AI policy at all: Often, AI harms will present as AI-enabled only upon closer, delayed examination, well past the window of political opportunity. If you ask AI safety advocates for plausible near-term pathways to catastrophic AI harm, they’ll mention two threat models: engineered pandemics and cybercrime. Early instances of either are very unlikely to create a substantial policy window for AI policy.
On cybercrime, the extent of AI use in the execution of a catastrophic cyberattack might only become apparent very late. An AI-uplifted cyberattack might look very similar to a ‘normal’ cyberattack. Maybe the spearphishing was a bit more sophisticated and individualised – but that could look a lot like very good social engineers and scammers. Maybe the botnet was a bit more expansive – but that might look a lot like a very capable group. Maybe the amateur hacker had access to remarkable programming capability – but that could just have been a particularly capable group of attackers. The blue screens, infrastructure failures, grid collapses and extortionary threats will come first. Questions about how the hacker group got there will, if anything, come much later. So many possible policy responses will miss the forest for the trees: They will tighten cybersecurity rules specifically, raise firewalls, and regulate the security practices of critical infrastructure providers, instead of addressing model propensities to assist in cybercrime or the availability of such systems.
On bio risk, pathogens move very fast, and information about their origin moves very slowly. Serious discussion of the lab leak hypothesis and its implications for future prevention started only years after Covid had cost millions of lives and caused massive economic damage. Even now that some compelling evidence in favour of the lab leak hypothesis has surfaced, no meaningful measures have been taken to reduce that risk. The very same might happen with a not-quite-existential AI-empowered bioweapon: We’ll have reasonably strong prevention and reasonably quick vaccines, there will be death, self-isolation and economic harm, and once it’s over, we’ll be happy to stop talking about it. Whenever the intelligence commissions publish their comprehensive reports on its likely origins, we’ll be way past the point of actual political salience, and arguably past the point of making necessary policy corrections as well, as AI will have progressed much further in the meantime.

Maybe this sort of resilience-building is good AI policy; but if any area-specific wake-up call only leads to area-specific improvements, there’ll be too much snoozing to get up in time. One central priority for policy organisations looking to patch this risk could be to invest much more in the ability to identify the AI origins of harms, from small-scale crime to big-ticket catastrophes. The policy window coincides with the moment of harm much more than with the moment of identifying its causes; so one important goal for anyone counting on wake-up calls should be to identify the AI contribution to harms in time to leverage the resulting policy windows.
Getting Up On The Wrong Foot
The third concern is that wake-up calls can very quickly create the wrong kind of political salience, by uplifting previously niche policy issues and debate subsections and drawing power and capital away from debate incumbents. This could be costly for the diverse group of voices that currently think themselves ‘early’ to the debate: They are well-informed technocratic advisors in a debate that still takes place in the political shadows. They could quickly get swept away.
It’s quite plausible that a landmark early wake-up call will have very little to do with the actual major policy challenges related to AI. First, because there’s always a bit of luck involved in which issues get picked up in the media and carried, through public salience or market reactions, to policymakers. A slow news cycle can elevate a niche issue; a busy day in the markets can drown out big events. See, for instance, the broader release of DeepSeek’s R1 model (as opposed to e.g. its initial development) for an example of a self-reinforcing media reaction to a fairly marginal technical event. And second, because frontier developers, hounded by liability and encouraged by safety proponents in their own ranks, are particularly incentivised to prevent the biggest sources of AI-driven harm. For that reason, if an AI-enabled harm vector does slip through, it might not be one of the big ones – it might instead be a comparatively prosaic niche effect no one expected. There are many transitory AI harms: serious issues, no doubt, but ones that are fairly continuous with other technologies, and not particularly instructive regarding the transformative effects inbound.
Two Examples
For instance, deepfake media are a somewhat concerning technological application that could cause a lot of genuine harm. Much of their privacy-violating effect is not necessarily new, but they certainly lower a threshold that previously required advanced software skills or yielded worse results. But many current AI models do not do a great job of preventing convincing deepfakes, and a well-placed deepfake could cause enough harm to reach media salience. Between a stock market panic caused by a believable but insane statement attributed to a senior policymaker and an election purportedly upended by deepfake-driven misinformation, there are still some plausible pathways from deepfake tech to the front page of X and the NYT. These vulnerabilities are somewhat easy to patch, with remedies ranging from platform regulation to social inoculation against deceptive video content; but they’ll make headlines first.
A similar story might, again, be told about labor market effects. Even if there is a wake-up call from some publicly salient wave of displacements, current debate incumbents don’t stand to benefit. A lot of people in frontier policy also consider themselves ahead of the curve on predicting complex labor market effects and disruptions, and there is some genuine crossover. But by and large, neither the safety advocates nor the more libertarian-minded AI policy wonks will be called upon once unemployment starts rolling in; it will be the unions, employers, workers’ collectives, and all the other mainstays of labor policy.
Such developments would not help frontier policy at large. The current safety platform, which has in parts overextended into a pivot to national security policy, does not stand to benefit from political salience conferred by so-called near-term harms: The experts and advocates on those issues have their own idiosyncratic conflicts with much current frontier AI policy work, and rifts have deepened due to the rightward turn of AI policy and the subsequent political losses for the AI ethics faction. And on jobs, the stakeholders called to the table will be those who regard today’s frontier AI policy people with a healthy dose of skepticism, if they do not outright perceive them as the cause of the AI trajectory that got workers into hot water in the first place. Many similar routes for AI at large to gain political salience don’t stand to benefit anyone who works on ‘making AI go well’ today. They could instead change the cast and content of the public debate, rendering everyone involved today a peripheral actor at best.
Only One Shot
But once one wake-up call has happened, the patient is awake, and subsequent wake-up calls might not have the same effect as the first catapult to political salience. By the time another, more policy-relevant harm hits, the fronts will already be drawn, positions will be taken, drafts and laws will be on the books. The Germans have tanks now? Not to worry, work on the fortresses has already commenced. Of course, some policymakers might still change their minds due to future external shocks; but I still suspect the primary impact on the overall political environment happens the very first time a policymaker calls their staffers together at 10:30pm on a Thursday evening and asks them ‘what do we even think about AI’. And if that moment happens in the shadow of an AI issue only tangentially related to the frontier, the chances of an effective wake-up call drop tremendously. There will, at the very best, be one AI Maginot Line – where we build it counts.
The answer to this conundrum might lie in broader coalition building. A lot of purportedly niche actors, from unions to AI ethicists, that have been readily abandoned by frontier policy groups can quickly get catapulted back to central prominence by the luck of the draw on wake-up calls. If these interest groups are closely integrated with, allied with, and informed by those ‘in the know’ on major future implications, frontier policy advocacy can ride many future waves – I’ve written much more on this here. This holds whether you chiefly care about safety or about opportunity; the rise of an idiosyncratic platform can upset both perspectives. If the current policy environment shuts the door on momentarily uninteresting, but fundamentally related, causes, the ‘wake-up call’ strategy relies on a very narrow trajectory.
Less Reliance & Better Preparation
External shocks can effectively catalyse political salience and public support in favour of effective frontier policy, whether that ultimately means reasonable safety measures or correcting against sweeping government involvement. But for that to work, these shocks have to be contextualised and prepared for. Currently, they are not. More capacity to link AI-enabled harms to AI in the first place, and more effective coalition building with advocates focused on different harms, can help. But to some extent, the unreliability of wake-up calls means we ought not rely on them too much.