Three Notes on Dean Ball's 'Private AI Governance'
In this shorter piece, I discuss a recent proposal's limitations regarding transformative AI systems, securitised environments, and the role of liability.
This piece is the first of my ‘dispatches’ — more frequent, shorter-form pieces with a closer connection to current debates & events and a shorter intended shelf life than my main essays. Please let me know what you think!
Dean Ball's latest is a strong proposal on private AI governance. I agree with his overall vision of a market-driven AI paradigm, a societas in the Oakeshottian sense he invokes: Where market forces drive AI development and deployment and calibrate the guardrails we set around them, we can forgo heavy-handed government intervention that could erratically narrow the range of AI opportunities or prevent the technology's equitable diffusion to everyone who would use it well. At the same time, as long as AI is developed in the crucible of an incentive-laden market environment and not behind closed doors in service of some strategic goal, I believe careful work on the nature and extent of these market incentives is our best bet to steer clear of some of the most serious risks — much more on that soon in this publication.
Because I believe in a private-first AI ecosystem, it seems particularly worthwhile to flag three concerns I have with Dean's specific proposal that I feel need answering: Shortcomings in the face of ostensibly transformative AI deployments, a vulnerability to securitisation, and the coercive force of retaining current liability laws.
On Advanced AI, Ex-Post Incentives Might Not Be Enough
The proposal provides few incentives for situations in which AI companies think they are about to deploy transformative technology. Granted, the market is tight and there are few runaway scenarios, but the pace of technological progress, combined with the sometimes-feverish culture at leading developers, could lead labs to believe they'll soon have something on their hands that will change the world. The deployment of such a system might not be meaningfully constrained by liability laws, because any ex-post governance would be conducted in a world the developer expects to have fundamentally changed. Why would an AI developer in such a case not opt out of private regulation entirely? You might call these situations edge cases; and in terms of their share of plausible outcomes, they certainly are. But my strong sense is that a lot of the plausible ways for AI to go really wrong are concentrated exactly within these edge cases, and so I feel it's worth examining regulatory proposals with a strong focus on their ability to deal with them.
This incentive issue is a problem with many regulatory approaches, but a governmental oversight regime has a much clearer path to escalating to preventive, pre-deployment measures if necessary. Governments in principle have the authority and standing to be empowered to take appropriate preventive measures. The agencies in this proposal have little standing to interact adversarially with AI companies before deployment; so if you concede that this kind of interaction could someday be justified, you'd need to keep government capacities in place all the same. You might disagree about when such government intervention might be required, but looking at technical trajectories, I don't think you'd be justified in assuming it won't be required at all.
That introduces a meaningful inefficiency: The capabilities required for effective governance would have to be built up in the private agencies, but very similar capacities would still have to be maintained in government. This is especially true because there is no clear dividing line between tomorrow's models that might require a priori oversight and today's commercial champions; the way for a government agency to identify the former is to examine the latter — i.e. to do quite similar things to what private agencies would do. Though inefficient, I think this is theoretically acceptable: Outsource the more prosaic parts of regulating AI as an albeit transformative consumer technology to a well-functioning private system, and retain a strong and narrow mandate for government beyond that.
The inefficiencies get truly worrisome where they manifest as adverse incentives for state capacity: If a private regulatory approach takes over the more immediately pressing questions of AI governance, there might be much less incentive for the government to build and maintain the capacity to intervene later. If both tasks are carried out within one synergistic agency, the political incentives to regulate AI now have positive spillovers for our capacity to prevent particularly harmful deployments later on. Without that synergy, I worry that oversight capacity could atrophy.
Securitisation Threatens Regulatory Market Freedom
Second, I'm very unsure how all this interfaces with the increasing tendency to characterise AI models and their development as strategic and security-relevant, and hence (a) exempt from burdensome regulation in service of a race and (b) required to maintain a high level of security against espionage and sabotage. I can conceive of two particularly concerning modes of interaction between that trend and Dean's proposal.
The first is exemptions and exceptions for models that are deemed security-relevant. A government might argue that such models, e.g. those developed beyond a certain threshold of capability and with a certain degree of government cooperation, should not be allowed to partake in this mode of private governance: the information they might need to share to demonstrate compliance might not be in secure hands with private organisations, or their development might be slowed down by the pressure to comply. I'm not quite sure what alternative the government would offer developers in that case: a third form of oversight for securitised models, a blanket presumption of compliance in favour of certain developers in the government's good graces, and so on. Just this sort of carve-out is already beginning to happen, and I worry it could hit this private governance approach particularly hard, simply because it is so much easier for developers to argue that their security-relevant information cannot be entrusted to private actors than to argue that direct government oversight is a security risk.
I fear these incentives could quickly lead either to a de facto defunct private system, in which most meaningful development happens under securitised government oversight anyway, or, arguably even worse, to a two-tier ecosystem in which some developers are privileged to operate outside the private system while the rest are comparatively encumbered by the reasonable standards that system would set.
The second concerning interaction is the government introducing security stipulations on the side of the private regulatory organisations: just as it might expect developers to handle the security-relevant secrets around their systems safely and securely, the government might also expect private regulators to comply with similar security standards. I think this is preferable to the alternative of securitised exemptions, but it still throws a wrench into the friction-free regulatory markets of Dean's suggestion: If a new regulatory organisation needs to comply with an expensive, government-evaluated security standard before it can even start offering its oversight to leading AI developers, entry into the regulatory market becomes much, much harder and once again far more dependent on government fiat; empowering incumbent agencies, reducing the pressure toward reasonable standards, and giving the government an open backdoor to choose, de facto, which private agencies get to exist after all.
The Legal System As A Worst-Case Backstop
The proposal relies on the overwhelming threat of tort liability to an extent that makes reforming that liability system unattractive: without an unreasonable liability threat, there would be no sufficient stick to incentivise opt-in. Dean calls his approach all-upside for developers, but I think I am somewhat more optimistic than he is on the counterfactual: Without a private governance approach to introduce a liability shield, I still think political incentives could align to motivate some changes to liability. Dean's approach removes this possibility and reverses the incentives to reform liability — keeping it burdensome and ineffective, and thereby just as threatening to the next big technological innovation that might come along. Of course, we could regulate all future technology along the lines of Dean's proposal as well; but I suspect we won't be licensing private regulatory regimes for all kinds of novel tech quickly enough, and in the meantime, keeping the old liability system around would be very costly.
In a comment, Dean says he ‘rejects the casino’ of seemingly random enforcement of liability — that’s a fair enough perspective to take on theoretical grounds, but I don’t see how his proposal does that: It condemns the casino to never change in order to incentivise private governance; it continually confronts market participants (in the US, and only there!) with an unknown threat of being sent back to the casino; and it places all new players, on new fields, in the casino again anyway.
Keeping an unworkable liability doctrine on the government’s books so that people escape government oversight in favour of private oversight seems inelegant, especially given the rest of the piece’s grounding in appealing political theory. Even the invocation of Oakeshott doesn’t provide a satisfying response: Would a societas really sustain laws that must be avoided at all costs? And should it really incentivise the emergence of private governance by brute force? In the end, the Nozickian defender of Dean’s view might say, there is a lot of free choice involved: Developers choose their own regulator, and a well-calibrated market pushes the standards toward the Pareto frontier of effectiveness and justifiability. But if the state-provided alternative of liability is as untenable as Dean describes it, no (at least no US-internal) market pressure acts on the regulatory market at large. If this is to work particularly well and prevent the emergence of oligopolistic tendencies in the regulatory market, I suspect the government-sponsored alternative would have to be at least a somewhat tenable backstop. As Dean characterises it, the applicability of liability has to be avoided at basically all costs. That would make its existence a quasi-coercive government measure in favour of private governance and would take away much of the proposal’s theoretical appeal.
I think all of these are issues that can — and should — be addressed, and I genuinely hope for lively debate around what I consider an excellent and underrated direction of thought laid out by Dean. It's worth taking proposals like these quite seriously: they might still save us from some of the conflict, jagged diffusion, and restrictive regimes that many likely AI futures seem to hold.