Nice post! On the tendency of AI policy to produce grand international visions for cooperation/novel international organization designs, I think this is mostly just driven by the strong selection effects in the field. AI policy disproportionately attracts those whose thinking bends towards left-leaning visions of harmonious international cooperation, globalism, and a belief in more centralized, "top-down" governance.
I would love the field to attract more talent with different views, since I currently think most of the areas with remaining "alpha" are to be found by approaching the problem from a different angle and considering more decentralized (or at least, nationalist not internationalist) ways to govern AI. Dean Ball's recent proposal for a private governance scheme is a good example.
Thanks! I think I agree with that, but even beyond these selection effects, early theorising probably always gravitates toward grand theories of everything, and once things start happening, you have to go into the weeds and things get much messier. I just feel like AI is moving so much faster that we don't have the usual 10 years of feedback loops we had with other significant developments to learn and apply the lessons that I try to anticipate.
I also definitely agree that there's a lot of alpha in working on nation-focused or even private governance - especially in actually novel policy proposals that don't just transfer old proposals to more nationalist rhetoric, which seems like a particularly unfortunate 'current thing' that crowds the space.
Not sure on 'most areas with remaining alpha', though - I still think we can do a very good or a very bad job at setting up useful international frameworks, and there's a lot of work still to be done here. Especially on the diffusion side of things, I'm very worried about the 'jagged diffusion' world in terms of models and compute that I've written about elsewhere, and think you don't get around that through purely nation-level policy (though probably a lot of the solutions are bilateral as opposed to multilateral).
Great article! Agreed that a greater fraction of the AI policy world needs to further satisfice based on the world we have, though I think there is value in continuing to pursue both approaches in parallel.
You make the point that international AI regulation without the US is nearly pointless because this is where the frontier labs are, and that reminded me of a loosely related question that's been rolling around my head - why aren't other liberal democracies trying to entice frontier labs to relocate their operations there? We're probably in the last part of the window where a company like Anthropic might be able to change where it's headquartered without the US government blocking it. You would think that if a country's leaders believed in the economic and strategic importance of AI, they would offer huge incentives, friendly AI regulation, dedicated compute, etc. to try to convince one of the major players to move so that they would have a chance in the AI race. Given the sluggish and likely patchwork regulatory environment in the US, I would expect some AI safety groups to advocate for such moves as well. But the only times I've really seen the idea of labs moving to other countries mentioned is as a hypothetical response to heavy regulation or taxation (e.g. "if we had an AI/compute tax, the companies would just flee somewhere less burdensome"). Any thoughts?
Thank you! Political will is definitely one part - at least enough political will to counter the huge agglomeration effects like those in SF / the US in general. It probably gets even worse now that the US seems to be getting somewhat serious about racing - there's an unfortunate (for a middle power, at least) feedback loop here where the nation with all the frontier AI labs receives the most policy advice to be friendly to frontier labs, and so it becomes even more clearly the runaway place to be.
That being said, I think it's still worth trying for a middle power (imagine if the UK had kept GDM!). But there's a real question as to what to offer that actually works and keeps the attracted lab competitive. Providing compute sounds nice, but these countries don't really have the datacenters or energy infrastructure to provide it, etc. There's a reason Mistral isn't really getting off the ground despite French govt support, for instance. From a political POV, though, it seems much more likely that a national govt would give the same concessions to a local player instead, to build its own national champion. And that's probably even less effective, if we look at track records.
These are probably also all reasons why the "lab-drain effect" from taxes you mention is quite overstated, fwiw.
Yeah. It's hard to quantify the Bay Area network effects on AI development, but talent is key, and that's certainly where the talent is concentrated. DeepSeek and, more weakly, Manus do offer some counterevidence, though.
Alas, the dream of Norway launching a national AI project in partnership with Anthropic or SSI is remote. (It's a pet hot take of mine that Norway would be the best country to develop ASI first: socially liberal, democratic, low corruption, advanced knowledge economy, used to distributing wealth from oil and its endowment, well-regulated capitalism, strong privacy law, etc.)
I found this article to be fascinating, but narrow and modular policies are unlikely to save humanity.
Much better to bet on crises potentially massively opening up the Overton window such that policies that can actually move the needle become viable.