Data centres or data colonies? Energy, water, and Latin America’s sovereignty test
Acknowledgements
I thank my advisor, the Postdoctoral Scholarship Programme, and the Coordinación de Humanidades at UNAM for their support during my postdoctoral stay at PUEDJS.
Latin America knows the scene: projects that promise modernisation and arrive as private infrastructures with public impacts – on water, energy, and territory. The novelty is not technical; it is political. Hyperscale computing exists only insofar as the state enables it: permits, grid interconnections, water rights, licences. If that intermediation is reduced to “attracting investment”, dependence grows; if it is organised as an architecture of power, capacity grows.
Digital sovereignty is not a slogan, nor a single technical layer. It is a legal–political capacity to order, with real effects, a multidimensional ensemble of dependencies: rights and their remedies (due process, privacy, freedom of expression), data governance (access, portability, public reuse), competition and taxation vis-à-vis intermediaries with market power, platform-mediated labour, security and service continuity, standard-setting and international insertion. And, as the material floor of possibility, a state’s control of energy, water, and access to silicon – the preconditions that make computation physically feasible.
None of these dimensions substitutes for the others; in public practice they are co-produced. The task, therefore, is not to add adjectives but to close an operational triangle – the practical alignment of legal requirements, technical standards, and actual infrastructure operation: what the law commands must be measured with verifiable standards and must produce material consequences in how infrastructures and platforms run. Without that coupling, rights remain declarations; without rights and competition, infrastructure dictates outcomes; without interoperability, vertical integration neutralises public direction.
Chile sets the precedent: jurisdiction over critical resources
Chile shows the method in action. Consider Google’s $200 million Santiago data centre project, whose planned water use – equivalent to 24% of municipal supply, in a region suffering exceptional drought – was only revisited after court intervention. In February 2024, the Second Environmental Court partially reversed the project’s approval, ordering a reassessment that explicitly accounted for climate-change effects on the Central Santiago Aquifer. In September 2024, Google announced it would take the project “back to square one” and redesign it with air cooling. This was not anti-technology posturing; it was effective jurisdiction over a critical resource. Without water and robust evaluation, there is no computing infrastructure, regardless of corporate timelines. That is the difference between climate policy that conditions operations and corporate narratives that presume them.
The contrast with Mexico is instructive. Where Chile used judicial review to condition infrastructure deployment, Mexico has accelerated it without equivalent environmental safeguards. Querétaro has emerged as the country’s de facto hub for data centres, with over half of installed or under-construction capacity attributed to the state in recent market reporting. The point is not to romanticise scarcity but to underline public leverage: permits, water rights, and grid connections are public power – and Latin American governments should use them upfront to orient deployment toward public purpose rather than to accelerate capital absent conditions.
From permits to public power: energy, water, and continuity as leverage
In energy policy, Latin American governments hold planning authority, permitting power, and control over grid access. What is often missing is hourly traceability and verifiable additionality at the point of connection – ensuring that “clean” megawatts actually match consumption hour by hour, and that new renewable capacity is built in response to demand rather than existing generation being repackaged as accounting certificates purchased elsewhere.
The pragmatic move is not to reject corporate commitments but to convert them into binding permit conditions – turning voluntary pledges into legally enforceable requirements. Google’s pledge to operate on 24/7 carbon-free energy by 2030 offers a common language that public authorities can translate into such conditions, backed by independent audits and automatic sanctions when systems fall short. This flips the traditional script: rather than treating corporate sustainability commitments as public-relations exercises, states can take them as the baseline for regulation, not an aspirational extra. The state’s role is to specify what “24/7” means operationally as a condition – hourly matching, not annual averaging; local generation, not remote certificates; independent audits, not self-reporting – and to attach automatic consequences when commitments fail. The point is not to mimic corporate branding but to extract verifiable operational standards from voluntary pledges and make them legally binding.
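To make that distinction concrete, here is a minimal sketch – in Python, using hypothetical hourly figures rather than data from any real facility – of the two accounting rules a permit could name: annual averaging, which can report 100% even when half the hours run on the grid mix, and hourly matching, which only credits clean energy in the hour it is produced.

```python
# Minimal sketch of the difference between annual averaging and hourly ("24/7")
# carbon-free matching. All figures are hypothetical; a real audit would use
# metered data at the point of connection.

consumption_mwh = [10, 10, 10, 10]   # facility load in four sample hours
carbon_free_mwh = [25, 15, 0, 0]     # contracted clean generation in the same hours

# Annual-average accounting: total clean MWh against total load, regardless of timing.
annual_match = min(sum(carbon_free_mwh) / sum(consumption_mwh), 1.0)

# Hourly (24/7) accounting: clean energy counts only in the hour it is produced.
matched = sum(min(load, clean) for load, clean in zip(consumption_mwh, carbon_free_mwh))
hourly_match = matched / sum(consumption_mwh)

print(f"Annual-average match: {annual_match:.0%}")  # 100% – looks fully clean on paper
print(f"Hourly 24/7 match:    {hourly_match:.0%}")  # 50% – half the load runs on the grid mix
```

The gap between those two numbers is precisely what an independent audit clause, written into the permit, would be asked to police.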
In water governance, the toolbox is likewise public: concessions, environmental impact assessments, extraction charges, discharge standards, and, increasingly, open hydrological data. Technical efficiency metrics like Power Usage Effectiveness (PUE, the ratio of total facility energy to computing energy) and Water Usage Effectiveness (WUE, litres consumed per kilowatt-hour of computing) do not settle environmental-justice questions, but they function as shared vocabulary for mandatory reporting, stress-linked caps, and waste-heat reuse where district networks exist.
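As a point of reference, a minimal worked example of how the two metrics are calculated, with illustrative numbers rather than measurements from any specific facility:

```python
# Worked example of the two reporting metrics defined above, with illustrative
# numbers (not measurements from any specific facility).

total_facility_energy_kwh = 1_500_000  # IT load plus cooling, power conversion, lighting
it_equipment_energy_kwh = 1_000_000    # energy delivered to the computing equipment
water_consumed_litres = 1_800_000      # cooling water consumed on site over the same period

# PUE: total facility energy divided by IT energy (1.0 is the theoretical ideal).
pue = total_facility_energy_kwh / it_equipment_energy_kwh

# WUE: litres of water consumed per kWh of IT energy.
wue = water_consumed_litres / it_equipment_energy_kwh

print(f"PUE: {pue:.2f}")        # 1.50 – 50% overhead on top of the computing load
print(f"WUE: {wue:.2f} L/kWh")  # 1.80 litres consumed per kWh of computing
```

A stress-linked cap, in this vocabulary, is simply a maximum admissible WUE during declared scarcity periods, verified against the same mandatorily reported figures.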
The European Union offers a usable precedent: the revised Energy Efficiency Directive establishes a reporting obligation and a European database for data-centre performance; the 2024 Delegated Regulation details specific KPIs – including energy performance and water footprint. Latin America need not import bureaucracies; it can adopt the principle – auditable data, comparability, automatic consequences – and adapt it to local administrative frameworks. That is regulation that governs, not greenwashing.
Continuity and competition add a further edge. The problem is not privatisation per se but lock-in produced by vertical integration – when providers control multiple layers of the stack and bundle compute, storage, Domain Name System (DNS), and Content Delivery Network (CDN) services by design. Here too, Europe provides leverage. The EU Data Act, which entered into force in January 2024, mandates fair access to and use of data across the European Union. Its Article 29 phases out all provider switching charges – including the data-egress fees that cloud providers historically used to lock in customers – with a full prohibition starting 12 January 2027. This converts what was a commercial barrier to switching (prohibitive transfer costs) into a legally protected right (zero-cost exit).
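A back-of-the-envelope illustration of why egress fees functioned as an exit barrier; the workload size and per-gigabyte rate are illustrative assumptions, not any provider’s actual pricing:

```python
# Back-of-the-envelope illustration of egress fees as an exit barrier.
# The workload size and per-gigabyte rate are illustrative assumptions,
# not any provider's actual price list.

workload_tb = 500                 # hypothetical volume of state data to migrate
egress_rate_usd_per_gb = 0.08     # illustrative egress fee

exit_cost_usd = workload_tb * 1_000 * egress_rate_usd_per_gb
print(f"One-off exit cost under egress fees: ${exit_cost_usd:,.0f}")  # $40,000
print("Exit cost under a zero-egress clause: $0")
```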
Latin American governments can write mirror clauses now into public contracts: tested multi-cloud exit plans, binding migration timelines, zero egress for state workloads. They can also require functional separation (ensuring that essential infrastructure services are provided on open, non-discriminatory terms) for essential services in licensing frameworks. This aligns technical openness with legal enforceability and moves public buyers from aspirational exit rights to executable ones.
Brazil’s REDATA moment: acceleration or dependency?
This is also where the region’s current policy debate risks sliding into dependency by design. Brazil’s newly announced REDATA regime (Provisional Measure 1,318/2025, enacted September 2025) seeks to accelerate hyperscale investment through tax incentives and sustainability conditions. Acceleration without structural reciprocities – technology transfer with real interface licensing, training of public cadres under recognised certification, and the availability of a public option that disciplines the market – can lock in dependency: more megawatts and buildings, but less national capacity. The question is not whether to accelerate; it is what acceleration is tied to.
None of this denies the role of litigation. But relying on courts alone to discipline hyper-concentrated providers invites institutional exhaustion. The cheaper path – in time, money, and legitimacy – is to bring enforceability forward: set material and service conditions ex ante in permits, contracts, and licences, with clear metrics, independent audits, and automatic consequences. In other words, fewer last-minute heroics; more clauses with teeth.
A sovereignty agenda worth the name – both for Latin America and for Europe – does not fetishise self-sufficiency; it curbs privatisation of essential functions and self-regulation that empty the law of material effect. It re-centres public authority over evidence, obedience, and exit: orders must be enforceable locally; evidence must be preserved under public governance; exit must be technically and economically real. It also demands technology transfer, not wishful thinking: escrow of critical artefacts (secure third-party storage of source code and technical documentation), open interfaces for publicly funded deployments, co-ownership where public money underwrites development, and domestic engineering and maintenance over a multi-year horizon.
This is not a promise of instant emancipation. In concentrated compute markets, under hard physical constraints of water and energy, law opens verifiable margins of situated autonomy rather than full independence. But that difference matters. It separates a state that governs with binding conditions from one that administers dependencies while hoping for corporate goodwill. Latin America has this window now, while Big Tech races to build AI infrastructure and still needs permits, water rights, and grid connections. The decision is not whether data centres will exist, but under what national project.