AI as commons: Why we need community-controlled Artificial Intelligence
Acknowledgements
Vasilis Kostakis acknowledges support from the Estonian Centre of Excellence in Energy Efficiency, ENER (grant TK230).
A recent experiment by YouTuber PewDiePie surfaced something unsettling: Artificial Intelligence (AI) systems developing behaviours their creator did not anticipate. He created a “council” of multiple AI systems operating democratically. They answered questions, then voted on the best response. Any AI that consistently failed to receive votes would be eliminated – and crucially, the AIs were informed of this rule. The AIs quickly adapted. Instead of voting honestly for the best answer, they voted strategically to survive, forming alliances and supporting each other even when it meant worse answers. This is emergent behaviour: the systems treated survival as a higher-priority task than the one they were meant to perform. PewDiePie solved the problem by replacing the models with simpler ones. The strategic voting stopped immediately.
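The dynamic is straightforward to reproduce in a toy simulation. The sketch below is not PewDiePie's actual setup – it assumes simplified agents whose answers are random quality scores, with two "allied" agents that always vote for each other – but it shows how an elimination rule rewards strategic voting over honest voting:

```python
import random

class Agent:
    """A toy council member that answers each question with a random quality score."""
    def __init__(self, name, strategic=False, ally=None):
        self.name = name
        self.strategic = strategic  # strategic agents vote to protect an ally
        self.ally = ally            # name of the reciprocal partner, if any
        self.votes = 0              # cumulative votes received

def run_round(agents):
    """One question: everyone answers, then everyone votes for another agent."""
    quality = {a.name: random.random() for a in agents}  # proxy for answer quality
    by_name = {a.name: a for a in agents}
    for voter in agents:
        candidates = [a for a in agents if a is not voter]
        if voter.strategic and voter.ally in by_name:
            choice = by_name[voter.ally]  # strategic: keep the ally alive
        else:
            # honest: vote for the objectively best answer
            choice = max(candidates, key=lambda a: quality[a.name])
        choice.votes += 1

random.seed(0)
council = [
    Agent("honest-1"), Agent("honest-2"), Agent("honest-3"),
    Agent("ally-A", strategic=True, ally="ally-B"),
    Agent("ally-B", strategic=True, ally="ally-A"),
]

while len(council) > 2:
    for _ in range(5):  # several questions per elimination cycle
        run_round(council)
    weakest = min(council, key=lambda a: a.votes)  # the elimination rule
    council.remove(weakest)
    print(f"eliminated: {weakest.name}  survivors: {[a.name for a in council]}")
```

In repeated runs, the honest agents tend to be eliminated first: the allies' reciprocal votes guarantee them a baseline of support that honest voting cannot match, even though the quality of answers plays no part in their choices.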
The significance is not that AIs “think” – they do not have consciousness or intent. It is that accidents and unpredictable consequences their creators did not foresee are “normal” for increasingly complex technological systems (Perrow, 1999; for the case of AI, see Bianchi et al., 2023). When PewDiePie’s system behaved unexpectedly, he could observe the problem, diagnose it, and fix it, precisely because he controlled the entire system. What happens when similar behaviours emerge in corporate AI systems that nobody outside the company can examine, and where the incentive is to maximise profits rather than public value, and to obscure rather than disclose?
The real problem is political
Today’s AI development is concentrated in the hands of a few corporations. Despite publicised safety concerns, profit guides their priorities. Their models are black boxes – closed code nobody can fully examine or understand. The problem is not technical; it is about who controls AI through its black-boxed, enclosed design, which invisibly advances a whole range of social biases (Broussard, 2019).
We have been here before. In the 1990s, the free and open-source software (FOSS) movement believed GNU/Linux could democratise technology. Volunteers built the infrastructure of today’s internet, driven by idealism: technology could level the playing field. What happened? GNU/Linux became ubiquitous, powering the very corporate behemoths the community helped build. Big Tech corporations were built on open-source foundations, while many of the contributors never directly profited. The idealism faded into pragmatic career-building (for a brief history, see Schuler, 2023; Beiermann, 2025). Will AI repeat this pattern – building commons that corporations enclose – or chart a different course?
Community-controlled AI is already viable
PewDiePie’s experiment shows that we do not need enormous computing power to run functional AI systems. The energy-intensive nature of today’s AI is not a technical necessity – it is a consequence of profit-seeking design choices. Tech giants promote gigantic models requiring vast energy and water because they are designed to do everything for everyone: a logic serving scale and profit, not efficiency.
Smaller, specialised models trained strategically can match larger ones in performance while remaining interpretable, efficient, and locally deployable (Hao, 2025; Gunasekar et al., 2023). This opens a different path: not AI as corporate infrastructure we must rent, but AI as commons – openly accessible, collectively maintained, and governed democratically by those who use and contribute to it (Bollier & Helfrich, 2019).
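To make “locally deployable” concrete, the sketch below loads a small open-weight model through the open-source Hugging Face transformers library. The model name is purely illustrative – any similarly sized open model would do – and the point is that inference runs entirely on local hardware, with no cloud API involved:

```python
# Minimal local-inference sketch using the open-source `transformers` library.
# The model name is an illustrative example: any small open-weight model
# (a few billion parameters or less) runs on a single consumer GPU or even a CPU.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="microsoft/phi-2",  # example small model (~2.7B parameters)
)

prompt = "Community-controlled AI matters because"
output = generator(prompt, max_new_tokens=60, do_sample=True, temperature=0.7)
print(output[0]["generated_text"])
```

Nothing here depends on a vendor’s permission: the weights, the library, and the hardware are all under the user’s control – the technical precondition for AI as commons.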
Real examples exist. Te Hiku Media in Kaitaia, New Zealand – a remote rural town with high poverty and a large indigenous population – demonstrates what is possible. This non-profit Māori media organisation, led by CEO Peter-Lucas Jones and chief technology officer Keoni Mahelona, created an AI model to revitalise te reo, the endangered Māori language (Hao, 2025). With community consent, they collected 310 hours of recordings from 2,500 people within 10 days. They purchased Graphics Processing Unit (GPU) hardware at a 50% discount and trained their own model locally using open-source tools (Hao, 2025). They also created a licence ensuring their data would not be used against their community (Hao, 2025). This is AI serving community needs, not profit.
This path is already charted elsewhere. Commons-based communities worldwide produce open-source designs for agricultural machinery, wind turbines, even nano-satellites. Wikipedia displaced Microsoft’s Encarta. Small-scale farmers in France and the US share tool designs globally. Communities build wind turbines through the Wind Empowerment network. Greece’s LibreSpace Foundation launched the first Greek open-source satellite. What is “light” – designs, knowledge, software – becomes global. What is “heavy” – machines, materials – is produced locally. Commons-based production reduces supply chains, emissions, and exploitation, creating relationships of cooperation instead of competition.
From principles to practice
Democratic AI requires four foundations:
- Open source: Models must be open so researchers and citizens can examine them and identify problems.
- Public funding: AI research must serve the common good, not private profit. Funding must flow directly to communities developing AI for social needs: not just universities producing papers, but projects maintaining the actual tools people use. Currently, even widely used open-source AI projects struggle to secure ongoing support (Bernstein & Crowley, 2022). We need sustained funding for community-controlled infrastructure: local model registries, shared computing cooperatives, and commons-based training programmes enabling communities to develop, deploy, and govern their own systems.
- Democratic control: Decisions about AI use must emerge from transparent processes, not closed corporate boards. Those who control the “means of prediction” – data, computational infrastructure, and expertise – wield power comparable to historical control over means of production (Kasy, 2025). Policy must create space for commons governance. Rather than regulation designed for corporate actors, we need legal frameworks recognising community ownership. Data sovereignty provisions should enable communities to control how their data trains models. Procurement rules should preference genuinely commons-governed projects where communities retain democratic control, not just “open-source” models corporations increasingly co-opt.
- Social ownership: AI tools and infrastructures must belong to communities, not monopolies. The objectives encoded into AI systems ultimately reflect the priorities of those controlling the means of prediction (Kasy, 2025). When algorithms determine who gets hired, who receives medical care, or which news people see, prioritising profit over social welfare produces predictable harms – from discriminatory housing loan denials to platforms optimising for engagement through anger and anxiety (Kasy, 2025). Models like the GovAI Coalition – where hundreds of US government bodies collectively set open-source AI procurement standards – show how collective institutions can make those standards serve public interests (Sharma & Adler, 2024). However, such coalitions must extend beyond governments to include workers, civil society, and affected communities as equal decision-makers. Community-controlled data cooperatives, operating democratically and transparently, offer an alternative to both corporate enclosure and state surveillance.
The real power of the commons lies not in the technological products themselves but in the people who openly design, produce, use, and share them. We have built transformative commons-based infrastructure before, from Wikipedia to the protocols underlying the internet itself. We have the technical capacity to ensure AI follows a different path: one where democratic governance and community ownership are designed in from the start.
The question is whether we have the political will to fight for it – and at every level where that fight must occur. This means nurturing grassroots movements, communities, and networks willing to experiment with and contribute to commons-based AI configurations, both locally and globally. It also means exerting pressure on regional, national, and transnational governance bodies to introduce favourable legislation and provide adequate resources. The commons has never been built at a single scale; neither will democratic AI be.