RSS · Spotify · Apple Podcasts · Pocket Casts · YouTube
Below are some highlights from our conversation as well as links to the papers, people, and groups referenced in the episode.
Some highlights from our conversation
On AI as a force multiplier for political power
“I got much more into thinking about the political philosophy of AI because I realized that AI based on machine learning was the most significant means of extending the capabilities of those who have power that has been invented since, I guess, the invention of law — so, a really significant force multiplier for those who govern. And as we were talking about before, we see that with the AI companion and summaries in Zooms: the ability to take all of the recording that we’re doing and translate that into actionable insights that you can then use to shape people’s behavior, it’s bananas.
So I end up focusing on that. But the cool thing is that with the kind of natural language capabilities of LLMs, there’s a sense in which you can kind of go back to some of those more top-down-type approaches to ethics for AI that were kind of closed off when you had to find a way of mathematizing complex moral concepts. Now you can actually leverage natural language understanding and sort of an underlying moral understanding of those concepts.”
On overlooking moral nuance
“Because of that desire for certainty, I think a lot of folks have focused on a particular normative framing that offers that certainty at the expense of nuance. So there in particular, people have been worried about existential risk from future AI systems. And one of the reasons why people are worried about that or why they focus on that is because it removes all of these difficult questions about uncertainty, because we all know it would be bad for the whole human race to be wiped out. There’s no debate — I mean, a little bit. Some people might debate it, but only on the margins and in sort of obscure philosophy papers. But for almost everybody else, wiping out humanity sucks — and you don’t need to have these complicated questions about, like, what do we really want to do?”
On premature regulation
“I also think that the appetite for regulating foundation models due to motivations coming out of concern about existential risk, to my mind, has led to some bad decisions in the last year, where there’s been a sort of an apparent alignment between folks who are concerned with the present and folks who are concerned with the further future. But I think that’s led to kind of just rushing through regulations for systems that we don’t really understand well enough to regulate successfully. So, for the most part, with the EU AI Act or with the Executive Order, it’s stuff that is intentionally designed to be fairly malleable — so regulations that will be susceptible to change over the next year or two. But I do, on the whole, think that there’s been a bit of a mad dash to regulate for the sake of regulating, which I think is probably going to have adverse near-term consequences, whether through becoming irrelevant or through limiting the decentralization of power.”
On legitimate power
“The question of who gets to exercise power is really important. Like, is it appropriate that an unelected, unaccountable executive at a company far away from your country is making these significant decisions about how you’re able to communicate online, how you’re able to use your AI tools? Or should that be something that is a decision that is made by people within your country? If it’s people within your country, it’s not enough that it just be your compatriots, right? It needs to be the case that they are exercising power with the appropriate authority to do so.”
On the limits of human and generative agents
“That’s something that you wouldn’t want to happen with generative agents, that basically they get to kind of do things on your behalf that you wouldn’t be permitted to do for yourself. That would be a real risk. And if we just talk about alignment, then that’s what we’re going to get because they’ll just be aligned to the user’s interest, and damn everybody else. But I think also a lot of the constraints that apply to us are fundamentally conditional on the kinds of agents that we are. A lot of morality is about dealing with the fact that we’re not able to communicate instantaneously with one another in a way that is perfectly transparent. If we could do that, if we could coordinate in that way, where we could communicate, be perfectly transparent, and then stick to it, so much of morality would be so different.”
Referenced in this podcast
- Waging War on Pascal’s Wager by Alan Hájek
- What’s Wrong with Automated Influence by Claire Benn and Seth Lazar
- On the Opportunities and Risks of Foundation Models by Stanford University’s Center for Research on Foundation Models
- Frontier AI Regulation: Managing Emerging Risks to Public Safety (OpenAI)
- “The US is racing ahead in its bid to control artificial intelligence – why is the EU so far behind?” by Seth Lazar (The Guardian)
- The Age of Surveillance Capitalism by Shoshana Zuboff
- Constitutional AI: Harmlessness from AI Feedback (Anthropic) by Yuntao Bai et al.
- Jamie Susskind
- Democratic inputs to AI (OpenAI)
- Digital Switzerlands by Kristen E. Eichensehr
- Legitimacy, Authority, and Democratic Duties of Explanation by Seth Lazar
- Power and AI: Nature and Justification by Seth Lazar
- Communicative Justice and the Distribution of Attention by Seth Lazar
- Toolformer: Language Models Can Teach Themselves to Use Tools by Timo Schick, Jane Dwivedi-Yu, et al.
- Specific versus General Principles for Constitutional AI (Anthropic) by Sandipan Kundu, Yuntao Bai, Saurav Kadavath, et al.
Thanks to Tessa Hall for editing the podcast.