Interesting post, but I think it can easily be misunderstood ;-)
After reading, it feels like the main idea (correct me if I'm wrong) is: if you can't do it perfectly, don't do it at all. If that's the point, I disagree with it:
Nothing will ever be perfect.
The important thing, I think, is not to make it left vs. right, and not to demonize or lose big chunks of people who can do things - that's important and tricky. It's also important, yep, to have no anxiety (the Worry Workbook by Beck is the best, according to meta-analyses) and no anger-management problems.
P.S. We're not just about AI at e/uto - we grow the best ethical ultimate futures for all probability - p(best) - basically building the simulated multiverse plus the best things from it outside, and it's tricky to build all that if there is a dystopia or two ;-)
Anton - respectfully, I could not disagree more strongly with your piece.
Most simply: how dare you? How dare you say that I and everyone else do not have the right to understand and have an opinion on AI risk? Are the anointed deciders of my children's fate upset if I try to rally my friends and neighbors to fight for our survival? Who knows and talks about AI risk is not for some small group in SF to decide. I find this attitude shocking.
I came to AI safety completely from the outside. There is zero case to be made that the existing AI safety brain trust has it all figured out—least of all in how to get regulations that might slow the race to suicide.
Worst AI Safety idea ever: "keep the public out of it!"
Are you aware of any successful mass movements that occurred without the "mass" part? Without the people, and the simplified messages they will require, the politicians will do nothing and we will lose control of AI and all die.
And people believe their neighbors far more than experts, so you can protect your experts all you want, it won't matter, we'll just all die.
In the framework of what you wrote, I am probably your worst nightmare. I came into the AI safety world entirely from outside. I'm not outside anymore. And now I have the specific purpose of making AI risk simple and understandable to everyone via The AI Risk Network on YouTube.
It appears you are genuinely coming from a place of wanting to help AI safety be more effective. I can appreciate that. But I think you are just completely wrong. Wrong about what would work better, and wrong about this morally. Who knows and talks about the greatest risk to their families is not a decision for some small group in SF to make. That is just wrong, and clearly, thus far, ridiculously ineffective.
Respectfully,
John Sherman
Thank you for taking the time to comment. I strongly disagree with your characterisation of my opinion, though - I don't think I've argued anyone who wants to engage with these issues shouldn't engage. I might have thoughts about how helpful it is generally, but democratic engagement with these issues is clearly important. What I'm opposed to is building and incubating this popular engagement strategically.
I won't get far into the ways we disagree about the state and effectiveness of current AI safety policy, but I will say: I think work like you describe suffers from concerted movement-building efforts, too. Genuine grassroots engagement - some of which happens on genuine AI safety grounds, much more of which happens at intersections with more politically salient areas - can quickly lose credibility when it suffers backlash against incubation efforts or allegations of astroturfing.
That's all to say: if what you say is true, then a popular movement for AI safety doesn't need to be built at all. And to the extent that it must be built through coordination and incubation, I think it should probably not be built.
Very interesting piece, and I am sympathetic to much of it (especially the parts on astroturfing and the reductive nature of movements). One point of disagreement, though: I'm not convinced that changing policy asks weakens a populist movement. Indeed, this is often a feature rather than a bug of these movements. The Brexit case is instructive - the same coalition has reassembled for similar reasons (migration, 'control'), led by the same person, less than a decade later, arguing that elites 'thwarted' Brexit. I can see a world where a populist pause-AI movement similarly has continuously expanding goals and blames failure on AI elites. If anything, though, this strengthens your point - this kind of goal expansion is usually not a recipe for good policy.