A democratic agenda for AI governance
AI is already shaping what you see, what jobs you’re offered, what loans you qualify for. You have no say in any of it. A handful of companies are building systems poised to reshape civilization, accountable to no one but their investors.
Their CEOs all face the same trap: if I don’t move fast, a competitor will. And they’re right, which is exactly why individual companies can’t fix this. The problem is structural. The solution has to be too. No small group deserves the power to direct a technology this consequential. Not even the best-intentioned CEOs, nation-states, or AIs. Anything short of democratic governance is a priesthood or oligarchy.
To make AI serve the public interest, we have to put the public in charge of AI.
Give the public real, democratic power over AI’s rules, goals, profits, and ownership. Not just better regulation. The public sets the direction, and the people building these systems answer to them.
Here’s how, organized around four democratic claims on AI, each worthwhile on its own, each reinforcing the others:
- Rules. The public should set AI’s rules through democratic deliberation.
- Incentives. Public institutions should redirect AI toward democratically determined goals.
- Wealth. The economic wealth generated by AI should be shared through democratic mechanisms.
- Ownership. The public should hold governing ownership stakes in powerful AI companies.
And we propose a Global AI Assembly (GAIA) as the international institutional architecture to make it real.
The Structural Alignment Problem
The AI safety community has focused on “alignment,” getting AI systems to reliably pursue the right objectives. That work matters. But even perfect technical alignment doesn’t answer the deeper question: whose objectives should AI be aligned to?
AI serves whoever’s paying.
Today, that means enterprise clients, military, advertisers, and consumers with money. Not the public, the people whose lives will be impacted, and whose collective knowledge, creative output, and data these systems are trained on.
This is an old story. Railroads, telecommunications, broadcasting, the internet: in each case, a public or common resource was privatized, its governance captured by industry, and the public interest eroded. We have a chance to do this one differently. But only if we move now.
1. Rules: The Public Sets AI’s Rules
Who should decide what AI can and can’t do? Right now, it’s mostly the companies building it, or lobbyists shaping a political process they’ve learned to capture. It should be the public, through democratic deliberation.
The public can govern through participatory governance. Not popularity-contest elections, but structured deliberative processes where members of the public engage seriously with evidence and tradeoffs. Participatory governance is a rich field of practices, with a track record of success from ancient Athens to modern Taiwan.
Perhaps the most powerful tool of this kind is the citizens’ assembly: a representative cross-section of everyday people, like voluntary juries, selected by lottery (a practice called sortition) and given extensive expert briefing and skilled facilitation, then real authority to set binding goals and constraints. Ireland’s assemblies broke decades-long political deadlocks on marriage equality and abortion. France’s Convention Citoyenne produced binding climate recommendations. The Collective Intelligence Project’s Alignment Assemblies have shaped AI behavioral guidelines at Anthropic and OpenAI. Even OpenAI itself funded democratic input experiments in 2023, though these never became binding.
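The civic-lottery selection described above can be sketched in code. The following is a minimal, hypothetical illustration of stratified sortition (the function name, fields, and proportional-quota method are illustrative assumptions, not any real assembly's procedure): draw a random panel whose demographic makeup mirrors the candidate pool's.

```python
import random

def sortition(pool, strata_keys, seats, seed=None):
    """Select a stratified random sample (a "civic lottery") from a
    candidate pool so the panel mirrors the pool's demographic makeup.

    pool        -- list of dicts describing volunteers, e.g.
                   {"id": ..., "age_band": ..., "region": ...}
    strata_keys -- demographic fields to balance on
    seats       -- total assembly size
    """
    rng = random.Random(seed)
    # Group candidates by their combined demographic profile.
    strata = {}
    for person in pool:
        key = tuple(person[k] for k in strata_keys)
        strata.setdefault(key, []).append(person)
    # Allocate seats to each stratum in proportion to its size,
    # then fill each allocation by lottery.
    selected = []
    for members in strata.values():
        quota = round(seats * len(members) / len(pool))
        selected.extend(rng.sample(members, min(quota, len(members))))
    # Rounding can leave the panel short; top up at random.
    remaining = [p for p in pool if p not in selected]
    while len(selected) < seats and remaining:
        selected.append(remaining.pop(rng.randrange(len(remaining))))
    return selected[:seats]
```

Real civic lotteries use more sophisticated quota algorithms that balance several overlapping criteria at once, but the core design is the same: randomness plus stratification yields a representative microcosm of the public.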
Citizens don’t write the code. They set the boundaries: what risks are unacceptable, what safeguards are non-negotiable, what transparency the public is owed. Safety red lines, deployment standards, accountability requirements: all set democratically, implemented by experts. Technical experts and standards bodies are then accountable to the public for delivering on them.
A single process setting rules across all AI companies is the only exit from the race to the bottom. No company can unilaterally slow down without ceding ground to rivals. But binding rules that apply to everyone change the game: every company can meet safety and transparency standards when no one gains advantage by cutting corners. This is the logic of arms control applied to AI. And like arms control, it doesn’t require every nation to participate — it requires enough coordination among the states that control the key resources. (More on this, including how this framework addresses China, in the National Security section below.) Any binding standard would help, but a democratic one is required for legitimacy, harder for industry to capture, and more durable across changes in government.
2. Incentives: The Public Directs AI’s Goals
You get what you incentivize. Right now, AI companies are rewarded for maximizing profit.
Rules set the boundaries, but incentives decide what AI is actually built for. If we change the incentives, we change what companies compete to build, which changes the trajectory of AI.
What could AI do if companies were rewarded for achieving democratically determined goals? Tutors optimized for genuine learning, not screen time. Medical AI optimized for patient outcomes, not insurance reimbursements. Research assistants directed at climate, clean energy, and disease. Tools designed for communities the market ignores because they can’t pay enough.
How do we get there? Democratic procurement: when governments and public institutions commission AI to solve specific problems, they create paying customers for public-benefit AI. Government procurement built the internet, GPS, and the renewable energy industry. It can shape AI the same way.
Or conditional licensing: companies that train on humanity’s collective knowledge, as a condition of that access, should be required to direct a significant share of their resources toward public priorities, much as broadcasters were once required to serve the public interest in exchange for use of the airwaves.
3. Wealth: The Public Gets the Upside
On our current trajectory, AI’s economic value will concentrate in the hands of a few companies and their shareholders. One Project advisor Yanis Varoufakis has described the risk of “techno-feudalism”: platform owners capturing the surplus produced by everyone else’s data, labor, and attention. But AI’s endgame may be more radical than even that: as AI automates more cognitive and physical labor, the companies that own AI could capture the economic value of entire industries. The only way to justify the current valuations of AI companies is on the assumption that they’re going to eat the economy. The question is whether the public gets a fair share.
The public created most of the value that makes AI possible: millennia of writing, art, science, code, and conversation. The economic returns are rightfully ours.
The most direct mechanism to achieve this is a public AI dividend, a share of AI-driven productivity gains (collected from not only the AI labs but also every industry AI transforms), allocated through participatory budgeting where the public democratically decides how the money is spent. Not more tax revenue for politicians to allocate, but wealth to be allocated by the same public whose collective work made AI possible.
If AI really does automate most labor, a public share of AI-generated value won’t be optional. It will be how governments fund anything at all. And the more AI transforms the economy, the larger the public revenue base becomes.
Over 10,000 cities worldwide have run participatory budgets, including Paris, Lisbon, Seoul, and Melbourne, with longstanding programs across Latin America. Porto Alegre, Brazil, ran one for three decades at up to $160 million in a single year. The Alaska Permanent Fund shows that public wealth-sharing institutions, once established, are politically durable. People fight to keep institutions that empower them and improve their lives.
A public AI dividend could fund whatever communities decide they need, including things AI makes more urgent: retraining for displaced workers, biodefense against AI-enabled threats, strengthened social infrastructure for a world in transition. And things that have nothing to do with AI but everything to do with the world people actually want: public spaces, child care and elder care, ecological regeneration, universal income.
4. Ownership: The Public Owns AI
The public should hold equity in AI companies above a certain scale. Not a token share, but a governing share, held and exercised through democratic institutions.
This is not government-controlled ownership, where AI answers to whoever won the last election. Public ownership can take many forms: golden shares, sovereign investment vehicles, community ownership trusts. What matters is democratic accountability: governance that answers to the public through participatory institutions that are structurally independent of any administration.
This is the intervention that changes what AI companies optimize for, from the inside. Rules constrain from the outside. Incentives redirect from the outside. But ownership changes what the company is for: its direction, priorities, and values at the source.
We already treat resources that affect everyone as public trusts: airwaves, waterways, wildlife, beaches. Private use is permitted under conditions that serve the public interest. AI, built on all of our collective knowledge and shaping all of our lives, is as strong a case for public ownership as any of these.
Germany has required worker representation on corporate boards for 75 years without sacrificing competitiveness. What we’re proposing goes further, but the principle is the same: governance that includes the people affected by the decisions.
Publicly owned AI already exists. In Switzerland, PublicAI is building AI infrastructure as a consumer cooperative, one of a growing number of experiments with alternative ownership.
When radio emerged in the 1920s, Congress declared the electromagnetic spectrum publicly owned. Broadcasters were designated “public trustees” with binding obligations. Over decades, every meaningful obligation was gutted through industry lobbying, until the “public trust” became nominal. Licensing alone wasn’t enough. It got captured. This is why we need public equity and democratic governing authority, not just regulatory conditions.
The Public Is Ready
The public is ready. Polling from Blue Rose Research shows that 66% of Americans support citizen panels helping set AI rules. That number holds across Trump voters, Biden voters, and swing voters, in a country that can barely agree on anything. 79% worry the government has no plan for AI-driven job loss. People want a say.
The democratic instinct is already there, and it’s cross-partisan. The political problem is not that people disagree with giving the public more power over AI. It’s that no one has organized that sentiment into political demand — yet.
GAIA: A Global AI Assembly
Implementing these four claims requires someone to actually do the work. We need an organization to convene citizen assemblies. We need an entity to hold public equity on behalf of the people. We need a body that administers wealth-sharing. We need a technical secretariat that translates democratic goals into standards. And because AI is global, and no single country’s regulations can govern companies that operate everywhere, this needs to be an international institution.
We propose GAIA, a Global AI Assembly. The model is the International Atomic Energy Agency (IAEA): technical expertise under democratic mandate, independent of any single government.
GAIA has two core components. A participatory process anchored in citizen assemblies and other proven democratic mechanisms (see Democracy Beyond Elections below) sets the goals and red lines: what AI should serve, what risks are unacceptable, how wealth should be shared. This is the democratic core. A technical standards body, expert-staffed and accountable to the assemblies, translates those goals into enforceable standards. The assemblies decide the goals; the technical body figures out the implementation.
GAIA would sit within the international system as the IAEA does: autonomous governance, connected to treaty infrastructure, designed to avoid the paralysis that comes when any single powerful nation can block action. The UN has begun laying groundwork. In August 2025, the General Assembly established both the International Scientific Panel on AI and the Global Dialogue on AI Governance, the first forum where all 193 member states have a seat. These structures stop short of binding democratic governance with citizen participation, which is what GAIA would provide.
From global to local
GAIA sets the global framework, but not every AI decision is global. Whether a data center gets built in your town, how your city uses AI in public services, and how AI-generated revenue gets spent locally are all local decisions, and the people affected should govern them locally. The principle is subsidiarity (decisions get made at the level closest to the people they affect), and it’s as old as democratic governance itself. Global standards like safety and environmental requirements stay global, to prevent a race to the bottom between jurisdictions.
In practice, this means citizen assemblies at multiple levels: cities and states setting procurement standards, overseeing community benefits agreements, running participatory budgets for AI-generated revenue. Each level feeds upward. This nested architecture is already being built for climate governance. The Global Citizens’ Assembly runs a model in which community assemblies feed into a civic assembly connected to global institutions, first prototyped at COP26 and scaling for COP30.
Enforcement
Enforcement of democratic mandates comes from layered mechanisms, most exercised by existing governments and institutions: compute controls (frontier AI requires advanced chips from a supply chain a few democratic states already control), regulatory frameworks (like the existing EU AI Act), public equity (governing stakes give the public enforcement power from the inside), fiscal tools (automation levies, conditions on subsidies and bailouts), litigation (like the New York Times v. OpenAI case), and community-level agreements (Amsterdam imposed a data center moratorium in 2022). No single mechanism is sufficient. Layered together, they give democratic governance real teeth.
Democracy Beyond Elections
When we say “democratic governance of AI,” we don’t mean putting AI policy to a popular vote. We mean giving everyday people real decision-making power through structured democratic processes, informed by evidence and expert input.
People are increasingly questioning democracy’s ability to produce good outcomes. But the systems we call “democracies” today were never designed to be democracies. America’s founders weren’t trying to create one; they said so explicitly. They chose elections over sortition, a Senate designed to “protect the interests of the opulent minority”, and the word “republic” over “democracy,” which they associated with mob rule. Montesquieu, whose work shaped the founders, put it plainly: “Voting by lot [sortition] is in the nature of democracy; voting by choice [elections] is in the nature of aristocracy.” Polarization, wealth capture, and short-termism are features of electoral republics, not of real democracies.
Real democracy works. Brazil’s participatory budgeting reduced infant mortality by 20%. Taiwan’s vTaiwan, pioneered by digital democracy leader Audrey Tang, shaped technology regulation through open digital deliberation. Research by One Project found participatory decision-making outperforming politicians and bureaucracies. A meta-analysis of 100 studies found that participation in deliberative processes increases political knowledge, efficacy, and reasoning quality.
Democratic governance isn’t one mechanism. It’s a design philosophy with many proven implementations. No single mechanism works for every decision type. A robust architecture draws on several, matched to function:
| Mechanism | Best suited for |
|---|---|
| Citizen assemblies (sortition-based) | Values-laden, high-stakes goal-setting; constitutional questions; contested tradeoffs |
| Participatory budgeting | Allocation decisions where affected communities can meaningfully assess priorities |
| Liquid / delegated models | Topic-specific delegation to trusted representatives; domains requiring quick decisions |
| Referendum / direct vote | Ratifying decisions that have emerged from deliberative processes |
| Expert / technical panels | Implementation design within democratically-set goals; risk evaluation |
| Multi-stakeholder processes | International coordination; governance of global commons |
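Of the mechanisms in the table, the liquid/delegated model is the most explicitly algorithmic, so it can be illustrated concretely. Here is a minimal sketch (the function name and data shapes are hypothetical, not any deployed platform's API) of resolving a liquid-democracy tally: each voter either votes directly or delegates, delegated votes follow the chain until they reach a direct voter, and delegation cycles or dead ends are discarded.

```python
def tally_liquid_votes(direct_votes, delegations):
    """Resolve a liquid-democracy tally.

    direct_votes -- {voter: option} for voters who voted themselves
    delegations  -- {voter: delegate} for voters who delegated
    A voter's own direct vote takes precedence over any delegation.
    """
    totals = {}
    for voter in set(direct_votes) | set(delegations):
        current, seen = voter, set()
        # Walk the delegation chain until we reach a direct vote.
        while current in delegations and current not in direct_votes:
            if current in seen:      # delegation cycle: vote is lost
                current = None
                break
            seen.add(current)
            current = delegations[current]
        if current is not None and current in direct_votes:
            option = direct_votes[current]
            totals[option] = totals.get(option, 0) + 1
    return totals
```

The design choice worth noting is how failure is handled: a chain that loops or ends at a non-voter simply drops out of the tally, which is why real systems pair delegation with easy revocation and visibility into where your vote ended up.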
A growing ecosystem is building this infrastructure. The Collective Intelligence Project has built Alignment Assemblies shaping AI behavior at frontier labs. Aviv Ovadya’s AI & Democracy Foundation has developed frameworks gaining traction in policy circles. And digital tools like Polis, Decidim, vTaiwan, and One Project’s Common platform are making democratic governance scalable, accessible, and fast.
National Security, for All Nations
A common objection: democratic governance sounds nice, but the AI race is a national security competition, especially with China, and democracies can’t afford to slow down.
But democratic governance produces better security outcomes. The current system, where rules for military AI get set through bilateral CEO-government deals with no stable framework and no democratic input, doesn’t produce speed; it produces institutional whiplash. When the U.S. Department of War can designate Anthropic a “supply-chain risk to national security” for refusing to remove safety guardrails, while accepting nearly identical terms from a competitor that framed compliance differently, that’s not a stable system; that’s arbitrary power. A binding democratic framework survives changes in administration, resists lobbying capture (sortition-based assemblies have no donors to please), and carries international legitimacy that no single nation’s policy can match.
GAIA doesn’t require participation of autocratic states like China to be effective, any more than the IAEA required the Soviet Union’s agreement on every point to function. What it requires is enough coordination among the democratic states that control the compute supply chain, and those states already cooperate on chip export controls. A democratic framework with real enforcement gives those states a principled basis for coordination, rather than the ad hoc, shifting arrangements they rely on now.
GAIA separates mandate from execution: the assembly sets the framework, the technical body implements at operational speed. The assembly doesn’t approve individual classified operations any more than Congress approves individual military missions.
The deepest argument is the simplest: in an uncoordinated AI race, everyone loses. Every arms race is a security problem masquerading as a security solution. The people who built nuclear arms control understood this, and the same logic applies to AI.
From AI to Everything
The structural problem behind the AI race isn’t unique to AI. The economy rewards shareholder returns over public benefit, and competitive dynamics lock everyone in: if I don’t do it, someone else will. That logic is what game theorists call a multipolar trap. As thinkers like Daniel Schmachtenberger have argued, it’s the same force behind climate destruction, arms races, and the attention economy. Different crises, same structure.
The four claims in this agenda — democratically determined rules, democratically determined goals, shared wealth, and public ownership — aren’t just an AI agenda. They’re the levers we need to democratize any sector of the economy where misaligned incentives are producing bad outcomes. Pharmaceutical companies, fossil fuel companies, social media companies: same problem, same shape of solution. What if the systems that shape our lives were governed by the people they’re supposed to serve? That’s economic democracy. AI demands these institutions now: the stakes are too high and the timeline too short to wait. But once built, the democratic capacity to govern AI could become the foundation for governing everything else.
One Project is building the infrastructure to make that possible, for AI, and for the rest of the economy. If you’re interested in partnering with us, reach out at connect@oneproject.org.
FAQ
Can ordinary citizens handle AI’s technical complexity? This is a fair concern, and the answer is yes, with the right support. Citizens’ assemblies give participants extensive expert briefing, access to competing perspectives, and skilled facilitation. The evidence from Ireland, France, Taiwan, and other cases shows that ordinary people, given time and resources, consistently produce policy outputs that are nuanced and broadly supported. Citizens aren’t asked to design model architectures. They’re asked to set goals, constraints, and values, and they do it well.
Won’t this slow down AI development? It would slow down reckless development, and that’s partly the point. But deliberation itself is not slow: a citizen assembly can produce policy in weeks, faster than most legislative processes. And binding rules that apply to everyone create stability and predictability, which is better for long-term progress. Democratic governance is how to maximize speed in the right direction.
Is public ownership feasible? Public ownership of critical infrastructure is not new: we already do it with airwaves, waterways, and other shared resources. Germany’s co-determination model, where workers hold half the seats on corporate boards, shows shared governance and competitiveness coexist. The most likely path to AI public equity happening in practice is through a crisis: when AI companies need public support (for example a financial bailout or regulatory rescue), equity stakes should be a condition.
Further Reading
- Elinor Ostrom, Governing the Commons (Cambridge University Press, 1990) — The foundational framework for how communities govern shared resources without privatization or state control.
- Assets in Common — Recently published guide to legal and organizational structures for holding shared resources accountably.
- Hélène Landemore, Open Democracy (Princeton University Press, 2020) — The rigorous case for why citizen assemblies outperform elections.
- People Powered, “Impacts of Citizens’ Assemblies: What We Know” (January 2026) — Recent synthesis of nearly 70 peer-reviewed studies.
- Who Decides Where Money Goes? The Benefits of Participatory Funding — One Project’s research on why participatory funding is the future.
- Collective Intelligence Project, Alignment Assemblies — Democratic governance of frontier AI, in practice.
- Ruha Benjamin, Race After Technology: Abolitionist Tools for the New Jim Code (Polity, 2019) — Groundbreaking overview of how and why AI systems too often reproduce racial inequality, and what we can do about it.
- AI Now Institute, “Artificial Power: AI Now 2025 Landscape” (June 2025) — The state of concentrated AI power across industry, labor, and the public sector, and a roadmap for community accountability.
- AI Commons: Nourishing Alternatives to Big Tech Monoculture — A field scan of hundreds of organizations who are already building towards a possible AI Commons, by Coding Rights and One Project, 2024.
- Public AI, Infrastructure for the Common Good (2025) — A whitepaper that argues for the benefits of public AI infrastructure.
Justin Rosenstein is the co-founder of One Project and Asana, an early product leader at Facebook and Google, and a founding advisor to the Center for Humane Technology.
One Project builds infrastructure for economic democracy: civic technology, movement partnerships, and the research and legal frameworks that make democratic governance of shared resources possible. Learn more →
