Great post - you really hit the nail on the head with the paradigm-independent point.
I'm a bit suspicious that going back to the drawing board will actually result in better ideas. What I wish we had is the ability to test out laws and see what actually happens. One of the nice things about common law systems is that precedents get set, judgments propagate between courts, and the overall system gets to grow as it learns. Unfortunately, there may not be enough time to get this right, given how soon ASI might be here.
I was opposed to the moratorium because it seems like it would be very handy to have state bills like RAISE or SB-813 (which I now think is likely to be axed) become law, so you could see in, say, 1-2 years whether these laws were actually doing meaningful things that were net positive.
My sense is that there's a frustrating dynamic (which you somewhat point at): our ideas are not perfect, political appetite is turbulent, and tech companies that are against regulation are very powerful. This whole dynamic feels like you could keep trying and trying and just end up getting nowhere. Of course, I'm very much against accepting defeat here, and I do think you need to actually form a hypothesis from the armchair about what kinds of laws or policies would be useful and then test it. But it feels like we'd need some safe, aligned AGI system to work through what seems like an impossible challenge: coming up with the right regulation that one-shots everything.
Thank you! Agreed, that's a big reason why a state moratorium is a tricky idea; experimentation is great, and I'd love for there to be an 'AI Delaware'. That said, I'm a little unsure whether the current situation actually points toward this sort of policy experimentation, or whether we just get redundant, almost-overlapping laws instead. If it's the latter, I really don't know whether we learn much from that.
Also, it's a bit unclear whether state bills can be ambitious enough to give valuable insight on big policy targets. RAISE specifically seems to fall into the low-hanging-fruit category, and SB-813 has its range of practical limitations, too.
Lastly, I'm unsure whether we'll ever have the right conditions for experimentation. The true stress test for these policies only occurs once we reach really advanced capabilities, and I'd think results from before that capability level don't tell us much?
That said, if there is some way to move the state legislation stuff toward more, not less, experimentation, I'd be very happy to see that.
Another relatively paradigm-independent suggestion: building institutional capacity. Perhaps it might be possible to establish an international oversight agency with monitoring and enforcement powers for AI, even before we have a good idea of what an effective comprehensive treaty on AI development and deployment would look like? It could start out with responsibility for transparency policy.
Definitely agree, state capacity work is a big area of 'solid' AI policy. We probably have to look at the national level before the international, but besides that, I agree.
But for all the talk of state capacity, I sometimes wonder how useful a state institution without a clear mandate is. We see this with CAISI and arguably even BIS in the US: for the longest time, people have been saying 'just staff them well', etc. But they seem like fairly strong institutions now, and they're still not doing very much, and they're still politically vulnerable.
Maybe the right way to build state capacity is instead to pass legislation that gives these institutions something they're actually supposed to do?
Completely agree, that's why I suggested transparency.