In short: Could a “public option” large language model (LLM) – developed by the government – solve some of AI’s larger problems around trust and democratic resilience? Bruce Schneier and his co-authors make that case in this piece in Slate.
In How Artificial Intelligence Can Aid Democracy, Bruce Schneier and his co-authors outline a vision for how AI could actually be used to bolster democracy:
A.I. could advance the public good, not private profit, and bolster democracy instead of undermining it. That would require an A.I. not under the control of a large tech monopoly, but rather… available to all citizens.
[A.I.] could plausibly educate citizens, help them deliberate together, summarize what they think, and find possible common ground. Politicians might use [A.I.] to better understand what their citizens want.
The article details a few of the many ways in which, properly applied, today’s powerful language models (and future systems built on them) might be used to create a far more democratic world.
This is not a call to create some massive AI system fully controlled by the government, but rather a recognition that such systems could be very helpful in the day-to-day business of governing, and that building them will be hard but important work:
…we should apply A.I. through piecemeal democratic engineering, carefully determining what works and what does not…
…building and fielding a democratic A.I. option will be messy and hard. But the alternative—shrugging our shoulders as a fight for commercial A.I. domination undermines democratic politics—will be much messier and much worse.
While it’s easy to be distracted by the hype and get-rich-quick schemes, we believe it’s ideas like these that will ultimately matter, and on which we should collectively focus.