Will Liquid Cooling Become Mandatory in Future Data Centers?

Liquid cooling is not becoming mandatory everywhere, but it is becoming hard to avoid at the highest densities. ASHRAE says its liquid-cooling guidance was updated because climbing rack heat loads mean air cooling can no longer handle a growing number of high-performance, high-density data centers.
The pressure is structural, not cosmetic. Data centers accounted for about 1.5% of global electricity use in 2024, or 415 TWh, and the IEA projects a rise to 945 TWh by 2030 in its base case. Cooling itself can represent roughly 7% of consumption in efficient hyperscale facilities and more than 30% in less-efficient enterprise environments.
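To see what those percentages imply in absolute terms, the figures above can be combined in a back-of-envelope calculation. This is an illustrative sketch only: it applies the 7% and 30% cooling shares uniformly, which real fleets do not.

```python
# Back-of-envelope estimate of cooling energy from the IEA totals cited above.
# The 7% (efficient hyperscale) and 30% (less-efficient enterprise) shares
# are treated here as simple bounds, not as a model of any real portfolio.

def cooling_energy_twh(total_twh: float, cooling_share: float) -> float:
    """Cooling energy as a fraction of total data center consumption."""
    return total_twh * cooling_share

for year, total in [("2024", 415.0), ("2030 (IEA base case)", 945.0)]:
    low = cooling_energy_twh(total, 0.07)
    high = cooling_energy_twh(total, 0.30)
    print(f"{year}: cooling roughly {low:.0f}-{high:.0f} TWh")
```

Even at the efficient end of that range, the 2030 projection puts tens of terawatt-hours on the table for cooling alone, which is why thermal design choices now attract policy attention.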
The short answer: a de facto requirement in some projects, not a universal law
The most accurate answer is no: liquid cooling is not becoming a blanket requirement for every data center. It is, however, becoming a practical necessity for some of them, especially the densest AI and HPC deployments. Uptime Institute says direct liquid cooling remains concentrated in applications where air cooling is no longer practical, and its 2026 analysis argues that it is still unlikely to become mainstream across mission-critical workloads within the next three years. (journal.uptimeinstitute.com)
If you want the thermal and availability side of that shift, our article on data center reliability and GPU density shows why rack power is often the real trigger point. The market data explains the pace: the Uptime Institute’s 2024 direct liquid cooling survey found that 22% of respondents already used some form of DLC, 61% were considering it, and nearly half of users still kept it below 10% of racks.
That is why the transition looks hybrid rather than binary. Microsoft’s 2024 datacenter cooling update said all new datacenter designs began using next-generation cooling technology in August 2024, while its current fleet still uses a mix of air-cooled and water-cooled systems. (microsoft.com)
Why the question is accelerating now
The economics and policy signals are moving at the same time. The IEA’s 2025 analysis of energy demand from AI shows why operators are under pressure to improve cooling efficiency, while the European Commission says data centers also carry a substantial environmental burden because of cooling water use and CO2 emissions tied to electricity consumption.
The ecosystem is also standardizing its interfaces. The Open Compute Project’s liquid-cooling overview covers both direct-to-chip and immersion techniques, which suggests the market is converging on shared hardware patterns rather than a single universal architecture.
ASHRAE’s updated data-center handbook guidance is another clear signal: it says new liquid-cooling classifications were needed because air cooling can no longer handle rising rack heat loads in a growing number of high-performance, high-density facilities.
Cooling approaches at a glance
| Approach | Best fit | Why operators choose it | Main trade-off |
|---|---|---|---|
| Air cooling | Traditional enterprise halls and lower-density workloads | It is familiar, widely deployed, and still works well for many applications. | It reaches practical limits as rack heat loads rise in high-density deployments. |
| Direct-to-chip liquid cooling | GPU-heavy AI and HPC racks | It moves heat closer to the source and fits the high-density direction of the market. | It adds CDUs, plumbing, water quality management, and maintenance complexity. (handbook.ashrae.org) |
| Immersion cooling | Specialized deployments with extreme thermal density | It can handle very high heat loads and is part of the liquid-cooling toolkit. | Adoption remains niche, and it has not broken into the mainstream of enterprise and colocation facilities. (intelligence.uptimeinstitute.com) |
| Hybrid cooling | Mixed estates during transition periods | It lets liquid-cooled AI zones coexist with conventional halls and shared infrastructure. | It demands careful planning, especially for redundancy and operational ownership. |
What regulations could change the trajectory?
The most likely regulatory path is not “liquid cooling by law,” but reporting, performance, and water-footprint rules that make inefficient thermal designs harder to defend. The European Commission’s data center energy-performance framework says the Energy Efficiency Directive already requires monitoring and reporting, uses a European database for energy and water-footprint data, and points toward a future package that could introduce minimum performance standards. That does not mandate liquid cooling today, but it creates a policy environment where efficiency-first design matters more and more.
ASHRAE’s 90.4 standard is also revealing: the handbook update describes it as a non-prescriptive “sister” standard that provides metrics for assessing data center energy efficiency at the design stage. That points to performance-based regulation and design guidance, not a global technology order.
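The most familiar of these design-stage efficiency metrics is PUE (Power Usage Effectiveness), the ratio of total facility energy to IT equipment energy. The sketch below shows the kind of comparison a performance target forces; the kWh figures are illustrative assumptions, not measured values, and 90.4 itself uses its own component metrics rather than PUE.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT energy (>= 1.0)."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical comparison: the same IT load in an air-cooled hall
# versus a direct-liquid-cooled pod with lower cooling overhead.
print(pue(1_500, 1_000))  # air-cooled hall -> 1.5
print(pue(1_150, 1_000))  # liquid-cooled pod -> 1.15
```

Under a regime of minimum performance standards, it is gaps like this, rather than any technology mandate, that would push dense deployments toward liquid cooling.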
So, if liquid cooling ever becomes “mandatory” in some markets, it will most likely be mandatory by outcome rather than by name. In other words, a project may have to meet a thermal, water, or energy target that air cooling can no longer satisfy, and liquid cooling becomes the easiest way to comply. That is an inference from the current standards and policy direction, not a formal global rule.
How operators should plan the transition
The safest roadmap is incremental: target the hottest rows first, keep liquid loops serviceable, and preserve an air-cooled path for lower-density workloads. Uptime says many installations will remain hybrid, and Microsoft’s 2024 update shows how quickly mixed portfolios are becoming normal. (intelligence.uptimeinstitute.com)
- Start with the hottest racks or pods instead of converting an entire hall at once.
- Specify leak detection, isolation, and service access from day one.
- Keep cooling, power, and monitoring aligned so operational ownership is clear.
- Track both energy and water impacts, because regulators increasingly ask for both.
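The "hottest rows first" triage above can be sketched as a simple per-rack decision. The kW thresholds below are illustrative assumptions for the sketch, not ASHRAE or Uptime guidance; real limits depend on airflow design, supply temperatures, and containment.

```python
# Sketch of a per-rack cooling triage for a phased retrofit.
# Thresholds are assumed values for illustration only.

AIR_LIMIT_KW = 20    # assumed practical ceiling for air cooling per rack
DLC_TARGET_KW = 80   # assumed comfort zone for direct-to-chip cooling

def cooling_strategy(rack_kw: float) -> str:
    """Pick a cooling approach from rack power density alone."""
    if rack_kw <= AIR_LIMIT_KW:
        return "air"
    if rack_kw <= DLC_TARGET_KW:
        return "direct-to-chip"
    return "immersion or custom"

# Hypothetical mixed estate: a web rack, a mid-density GPU rack, a dense GPU rack.
racks = {"web-01": 8, "gpu-a-03": 45, "gpu-b-01": 120}
plan = {name: cooling_strategy(kw) for name, kw in racks.items()}
print(plan)
```

A real plan would also weigh redundancy, CDU placement, and who owns the liquid loop, which is exactly why the hybrid phase needs clear operational ownership.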
One important caveat remains: liquid cooling solves thermal density, not every operational problem. Uptime’s 2026 analysis notes that deployment is still constrained by resiliency design, maintenance complexity, and the division of labor between facilities and IT teams.
What this means for Score Group projects
At Score Group, our Noor ITS division deals with data centers and IT infrastructure, while Noor Energy focuses on energy management and building systems. That matters because liquid cooling projects are never only about heat rejection; they also touch electrical design, monitoring, maintenance, and continuity planning.
If you are still comparing thermal strategies, our guides on free cooling limits and best practices and on adiabatic and liquid cooling for GPU farms are useful complementary reads. For broader project design, Noor ITS data center solutions are a natural entry point when cooling, power, and resilience need to be aligned from the outset.
FAQ
Will liquid cooling become mandatory in data centers in the future?
Probably not as a universal rule. The most likely outcome is a split market: conventional halls keep using air cooling, while very dense AI and HPC rooms move to liquid cooling because air alone is no longer enough. ASHRAE says air cooling can no longer handle a growing number of high-performance, high-density data centers, and Uptime still sees DLC as a niche rather than a mainstream default. So if a mandate appears, it will more likely be workload-specific or project-specific than a blanket global requirement.
Is liquid cooling required for AI data centers with high-density racks?
Not for every AI site, but often for the densest GPU racks. AI workloads span a wide range of power densities, and the closer racks get to the thermal limits of air, the more liquid cooling becomes the practical choice. ASHRAE’s updated guidance explicitly links new liquid-cooling classifications to climbing rack heat loads. In practice, that means many AI facilities will stay hybrid, with liquid cooling reserved for the racks that genuinely need it.
Are data centers moving toward immersion cooling as a standard practice?
Not yet. Uptime says water cold plates lead the current direct liquid cooling mix, while immersion remains a smaller slice and has not broken into the mainstream of enterprise and colocation facilities. That does not mean immersion has no future; it means the market is still choosing the simplest liquid option first, especially where operational familiarity, maintenance access, and vendor support matter.
What regulations could mandate liquid cooling in data centers globally?
The most plausible regulatory path is not “liquid cooling by law” but performance, reporting, and water-footprint requirements that make inefficient designs harder to approve. The EU already has mandatory reporting and a common rating scheme for data centers, and the Commission says a future package could introduce minimum performance standards. If other regions follow the same logic, operators may adopt liquid cooling to meet targets, not because a rule names the technology directly.
When will liquid cooling become the standard cooling method for hyperscale data centers?
For the hottest parts of hyperscale estates, it is already becoming standard in new designs, but not across every hall. Microsoft said new datacenter designs began using next-generation cooling in August 2024, while its current fleet still mixes air- and water-cooled systems. That is the best snapshot of where the industry is headed: liquid cooling first in AI-heavy zones, then more broadly as portfolio redesigns mature.
And now?
If you are planning an AI-ready build or a phased retrofit, the next step is to map cooling strategy to workload density, resilience goals, and energy constraints. Start with Score Group's homepage, then explore how Noor ITS and Noor Energy can support the design, integration, and efficiency decisions behind the project.



