6 Comments
Dave Friedman

Hm, AI seems like a classic dual-use technology, akin to cryptography or biotech. Which makes me think that a president Vance would use the apparatus of the national security contingent to assert control over AI in a way that Trump or Biden has not. Asserting such control would dovetail well with Vance's professed nationalist populism.

Kevin

It seems pretty clear that many normal layoffs in the near future will be blamed on AI, rather than typical business problems, because that makes company leaders seem like innovators, rather than losers.

Oliver Sourbut

(generally appreciate your writing and I'm following the grand tradition of only commenting with a bone to pick)

With reference to natsec, you seem to skip over the elephant in the room: the relative costs of destroying vs. protecting, which AI could upset! Surely something that both Right and Left are concerned about. (The Schmidt and Hendrycks piece you referenced is at least as much about limiting proliferation of destructive tech as it is about military supremacy.) It's not just about the effects on jobs, and natsec folks are in a great position to accelerate defensive measures (in anticipation of a proliferated AI world) while making the case for brakes on proliferation.

Anton Leicht

Thank you! I fully agree on the substance of what you write, and agree that it's both a priority and grounds for bipartisan agreement on the natsec level. (I also think it actually *is* bipartisan consensus in that area, and that natsec is where we see perhaps the least polarised or partisan AI politics environment.)

I'm a little less sure about how well this translates to political saliency & sticking power. I suspect that's also why the Hendrycks/Schmidt pieces and many others try to marry the proliferation angle to the supremacy angle — trying to make it a package deal with a politically more opportune framing. I'm just not so sure it's working: my sense is that the old natsec guard isn't very influential in the kinds of politicised discussion this piece gets at. They have few champions at the top of the party at the moment; their values trade off unfavourably against some of the anti-gov/deep-state sentiment; they're on the politically difficult side of discussions, e.g. on Ukraine/Russia; and their prescriptions for AI sometimes clash both with the economic-populist impulse to cash in on proliferation and with the SV impulse to keep the government very far away from most of this, other than for reducing barriers.

I'll think & write more on what I believe should be done to fill these gaps, though. And on the normative side, I do agree with you that there are very reasonable natsec-focused framings that I hope will play a role.

Oliver Sourbut

Awesome, looking forward to your further writings.

Yeah... national security is often somewhat 'technocratic' in a way that maybe doesn't align well with wider saliency, or populism in particular. Nevertheless, a lot of government *isn't* about theatre, and people really do care about security. So security (in the defence and safety sense, as opposed to supremacy and projection of influence) does need to be grappled with as an additional possible tension with laissez-faire.

Oliver Sourbut

(I think there's something Right-ish about 'quiet enjoyment without interference', and getting a bioweapon to the face is very much interference in most people's books.)
