Behind The $500 Billion Data Center Boom: Here's Who Makes All The Key Components
by Tyler Durden
Wednesday, Oct 08, 2025 - 11:00 AM
One year ago, we published a primer on the beating heart of the AI bubble - the data center - which in September 2024 represented a $215 billion investment opportunity. Not surprisingly, it was one of our most popular premium articles of 2024. Fast forward to today, when the addressable market size of data centers has more than doubled, to a cool half a trillion dollars, but the question remains: who stands to benefit the most from this unprecedented build out, which will one day come to a crushing, thunderous halt as the AI bubble implodes... but until then the music is playing. And so, here is the data center primer, updated for 2025.
Introduction
A global view of the $500+bn data center market
Understanding the data center end market has grown in importance given its high growth rates and scale. According to BofA estimates, global data center spending will reach $506bn in 2025, comprising $418bn of IT equipment and $88bn of infrastructure spending. This is up 25% y/y, and looking ahead BofA forecasts a remarkable 23% CAGR for the market over 2024-28, including a 19% CAGR for infrastructure spending. This report provides historical context on the size, shape, and ownership of the global data center market.
Key product lines for data center infrastructure
The report focuses on 12 product and service categories: chillers, construction firms, cooling towers, computer room air handlers, coolant distribution units, engineering firms, generators, networking equipment, power distribution equipment, servers, switchgear, and uninterruptible power supplies (UPS). The average content per megawatt (MW) and market shares for each of these categories are reported. The BofA estimate for the all-in cost of building a data center is $39mn/MW, and the bank anticipates next generation AI architectures will be significantly more capital intensive at $52mn/MW.
Implications of AI semiconductor evolution: from AC to DC
The report explains the reasons why artificial intelligence (AI) semiconductor manufacturers are switching to “rack scale” architectures, with ever increasing density of chips per rack. These industry trends have already driven rapid growth in liquid cooling in thermal equipment. However, there is an emerging shift to high voltage direct current (DC) architecture for electrical equipment. We size the potential for electrical equipment content and costs to change as the industry pivots away from low voltage alternating current (AC) designs.
From air to liquid: a closer look at CDUs
Rising rack density is driving adoption of liquid cooling solutions. The industry is coalescing on single-phase, direct-to-chip solutions. Coolant distribution units (CDUs) are the key equipment needed to power these offerings. Larger format, in-row CDUs have outgrown smaller in-rack variants. We show 30 vendors and highlight the five we think currently have the most market share.
AI data centers and electricity demand
Over the last few months, there have been several announcements of gigawatt-scale data centers. Using multiple academic forecasts, BofA projects AI electricity demand growing at a 40+% CAGR, with low- to mid-teens growth for total data center demand. The report provides case studies of how these large projects have progressed and are obtaining power. Additionally, it is projected that the electricity demand for AI inference (i.e., running previously trained models) will overtake AI training before the end of the decade. While efficiency gains are possible, increased adoption will more than offset them. The political and populist blowback to soaring electricity prices, which have now been correctly attributed to the data center explosion, is a major wildcard and a potentially huge negative risk factor.
The Data Center opportunity
Data center market to hit $900+bn in ‘28E
BofA estimates data center capex was more than $400bn globally in 2024, rising to more than $500bn in 2025E. The bank also estimates that AI adoption will drive a 23% market-wide CAGR over 2024-28E. The Electrical and Thermal equipment markets are sized at $18bn and $10bn, respectively, in 2024.
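To see how the bank gets from a ~$400bn base to the “$900+bn” headline, here is a quick compounding check (a sketch in Python; the 2024 base is inferred from the $506bn 2025 estimate being up 25% y/y, not quoted directly):

```python
# Sanity check on BofA's headline figures. The 2024 base is implied,
# not quoted directly: $506bn in 2025 was "up 25% y/y".
base_2024 = 506 / 1.25   # implied 2024 market size, $bn (~$405bn)
cagr = 0.23              # BofA's market-wide 2024-28E CAGR

market_2028 = base_2024 * (1 + cagr) ** 4
print(f"Implied 2024 base:   ${base_2024:.0f}bn")
print(f"Implied 2028 market: ${market_2028:.0f}bn")  # ~$926bn, i.e. "$900+bn"
```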
The percentage mix varies from the market size as the total market includes replacement/refresh spending. BofA estimates the all-in cost of building a traditional data center to be $39mn per megawatt.
As discussed later in this report, there are significant infrastructure changes ahead for the next generation of AI chips (e.g., NVIDIA’s proposed Rubin chip architecture). The cost of this future state data center will rise by a third to $52mn per megawatt.
The largest difference is higher server costs for next generation chips. Higher rack density leads to a lower number of racks and less square footage per megawatt, reflected in lower building and power distribution equipment costs. This future state data center assumes direct-to-chip liquid cooling and high voltage direct current electrical systems.
Vendor shares on broad infrastructure product categories
Across data center electrical products, Schneider is the share leader in this $18bn market. Across all thermal products, Vertiv is the share leader in this $10bn market.
Types of data centers
Enterprise: a facility owned by a single organization housing its IT infrastructure. Typically, they are owned by large corporations, financial institutions, or government agencies. Over the past ten years, the square footage growth of these data centers has been flat. However, upgrade and modernization projects have increased the capacity.
Single-tenant colocation: a facility owned and managed by a third party and leased to a single tenant. Historically, these came from sale leasebacks (i.e., enterprise-owned data center is sold to an investor and then leased back). Over the past ten years, cloud service providers have used this method to expand in new geographies. Initial lease terms are typically long with options to extend (e.g., a 10-year initial lease with two additional 10-year options).
Multi-tenant colocation: a facility owned and managed by a third party and leased to multiple tenants. Rents are generally based on a combination of power usage and number of racks. Tenants benefit from shared services (e.g., network connectivity, physical security). Can be subdivided into retail (smaller space commitments; less flexibility) and wholesale (requires larger commitments; more flexibility in design).
Hyperscale: a very large data center engineered to provide maximum uptime (i.e., Tier 4 ranking in Uptime Institute’s classification), support distributed computing (e.g., sharing workloads across servers and sites), and scalability. The definition of “large” varies, but typically greater than 20 megawatts. Distributed computing and hyperscale data centers are closely associated with cloud service providers, such as Amazon Web Services and Microsoft Azure. However, not every hyperscale data center is used for cloud services and not all cloud servers are in hyperscale data centers.
As measured by electrical capacity, global data centers grew at a 14% CAGR over 2014-24 and a 17% CAGR over 2019-24. While corporations continue to run and maintain a significant number of data centers, hyperscale and colocation firms have made nearly all the capacity additions since 2013. Small colocation companies (e.g., <10 data centers) collectively comprise a meaningful portion of data center space (20-25%). These firms typically own smaller sites (e.g., <20 megawatts) outside major markets.
Evolution of cloud growth
In 2005, Nicholas Carr authored an article entitled The End of Corporate Computing predicting enterprises would stop building their own data centers and use third-party services. Amazon Web Services launched the next year, driving a boom in cloud services. In 2017, cloud service providers and colocation companies surpassed enterprise-owned data centers (as measured by electrical capacity).
Cloud service providers are profitable. Amazon Web Services generated $40bn of GAAP operating profit on $108bn of revenue, or a 37% operating margin. For most IT workloads, colocation provides a lower total cost of ownership. However, it requires upfront capex (for servers & related IT equipment), multi-year commitments (to colocation firms), and higher levels of IT management and support. In contrast, cloud services are flexible and offer higher uptime levels.
Regional breakout: Americas home to over half of capacity
Data centers, as measured by electrical capacity, grew at a 17% CAGR over 2019-24. By region, EMEA has been a relative laggard (16% CAGR), Asia Pac a touch better (20%), and the Americas region right in line (17%).
Hyperscaler capacity is largely concentrated in the same key regions globally. The largest single hyperscaler location globally is US Northern Virginia, which represents almost 15% of global capacity for hyperscalers. The second largest is Beijing, with ~7% of capacity.
Colocation economics
Below is a walkthrough of typical per megawatt project economics for a new build wholesale colocation project (i.e., one leased to a small number of clients on long-term leases).
$2mn per MW. Land costs, utility connections, and site works. This would vary by location/site.
$11mn per MW for the powered shell. Turner & Townsend’s Data Center Cost Index uses real construction data from 300 projects. These costs include the building shell, mechanical, electrical, thermal, equipment and installation labor costs, and general contractor margin and contingency.
Typical annual rent is $2-3mn per megawatt (assume $2.5mn). Current occupancy rates in the US are 96-97% (assume 90% over the 20 years). The largest operating expense is electricity ($0.08 per kilowatt hour is the average US industrial cost, implying $0.7mn at 100% utilization, or $0.63mn at the 90% occupancy level). Staffing levels are around two full-time employees per megawatt (assume $0.25mn in wages). Property taxes are assumed at 1% of property value. Typical EBITDA margins are 40-50% (assume 45%).
For free cash flow, assume maintenance capex of 1.5% of original $11mn powered shell cost. Assume project financing of $7mn in equity (at 10% cost) and $6mn in 20-year amortizing mortgage debt (at 6% rate), yielding a weighted average cost of capital of 8.5%. This would be a 46% loan-to-value, in line with recent data center financings. A 21% corporate tax rate for cash income taxes is used.
After the 20-year holding period, the data center is expected to be sold. Recent data center transactions have been at around 5-7% capitalization rates (assume 7.0% for the exit at the end of year 20, i.e., 14.3x net operating income). These assumptions yield an 11.0% internal rate of return (IRR) and a $2.8mn net present value (relative to the original $7mn equity investment).
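To make the walkthrough concrete, below is a stripped-down Python sketch of the per-megawatt model. It uses only the assumptions stated above and deliberately omits rent escalation, depreciation tax shields, and other details of the bank's full model, so its simplified IRR comes out well below the quoted 11.0%; treat it as scaffolding for the mechanics, not a replication.

```python
# Simplified per-MW wholesale colocation model using the assumptions above.
# Omits rent escalation and depreciation, so the IRR lands below BofA's 11%.

def amortizing_payment(principal, rate, years):
    """Level annual payment on a fully amortizing loan."""
    return principal * rate / (1 - (1 + rate) ** -years)

# --- Assumptions from the text (all figures per megawatt, $mn) ---
equity, debt      = 7.0, 6.0      # project financing
debt_rate, years  = 0.06, 20      # 20-year amortizing mortgage at 6%
rent, occupancy   = 2.5, 0.90     # annual rent; assumed average occupancy
ebitda_margin     = 0.45          # midpoint of the 40-50% range
maint_capex       = 0.015 * 11.0  # 1.5% of the $11mn powered-shell cost
tax_rate          = 0.21
exit_cap_rate     = 0.07          # exit at a 7.0% cap rate in year 20

ebitda  = rent * occupancy * ebitda_margin
payment = amortizing_payment(debt, debt_rate, years)

flows, balance = [-equity], debt
for yr in range(1, years + 1):
    interest = balance * debt_rate
    balance -= payment - interest
    tax = max(0.0, (ebitda - interest) * tax_rate)  # crude: no depreciation shield
    cf = ebitda - maint_capex - payment - tax
    if yr == years:  # sale proceeds; the mortgage is fully repaid by now
        cf += (ebitda - maint_capex) / exit_cap_rate
    flows.append(cf)

def irr(cash_flows, lo=-0.5, hi=1.0, tol=1e-7):
    """Solve NPV(r) = 0 by bisection (NPV is decreasing in r here)."""
    npv = lambda r: sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if npv(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

print(f"Annual debt service:   ${payment:.2f}mn")
print(f"Simplified equity IRR: {irr(flows):.1%}")
```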
Evolution of chips and rack density
1. Rising watts per chip
The power consumption per chip has increased 4x from Nvidia’s first-generation Volta architecture to the current Blackwell. Many of the ways to increase computing performance require additional power. Put simply, supply voltage cannot scale down proportionally with node sizes. Putting more transistors on each chip requires more power. In addition, power consumption rises linearly with faster clock speeds.
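The physics behind both claims is the standard CMOS dynamic-power relation, P ≈ α·C·V²·f (activity factor × switched capacitance × voltage squared × clock frequency), which the text alludes to but does not spell out. A toy illustration with hypothetical chip parameters:

```python
# Standard CMOS dynamic-power relation: P ≈ alpha * C * V^2 * f.
# All parameter values below are hypothetical, for illustration only.

def dynamic_power_w(alpha, cap_farads, volts, freq_hz):
    return alpha * cap_farads * volts ** 2 * freq_hz

baseline      = dynamic_power_w(0.2, 1.2e-6, 0.9, 1.5e9)  # ~292 W
doubled_clock = dynamic_power_w(0.2, 1.2e-6, 0.9, 3.0e9)  # power doubles with f
doubled_xtors = dynamic_power_w(0.2, 2.4e-6, 0.9, 1.5e9)  # and with capacitance

print(f"baseline {baseline:.0f} W | 2x clock {doubled_clock:.0f} W | "
      f"2x transistors {doubled_xtors:.0f} W")
```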
2. Massively parallel processing
GPUs perform calculations in parallel. When thousands of GPUs work together in an AI cluster, if even one GPU lacks the data it needs, all other GPUs stall. This means that network latency delays can reduce overall performance significantly. Putting more GPUs within a single rack reduces the need for networking and high-speed interconnects.
These trends drive increased rack density...
In 2021, the average rack density was less than 10 kilowatts (kW) per rack. A reference Hopper rack (H200 chips) would draw 35kW. A reference Blackwell rack (B200 chips) would draw 120kW. Based on released statistics from Nvidia, we estimate a reference Rubin Ultra rack would reach 600kW in a single rack.
While we highlight Nvidia’s roadmap, other chip firms are following a similar trajectory. Given the increasing importance of scale-up capabilities in AI data centers, each major accelerator vendor is developing its own protocol. On 6/12, AMD (Advanced Micro Devices) announced a reference rack infrastructure for its Instinct MI350 GPUs. These racks feature up to 128 GPUs/rack, with each GPU drawing up to 1,400 watts, suggesting a 180+ kW rack density. AMD also announced its next generation Helios rack infrastructure, planned for release in 2026. This will feature 72 MI400 GPUs. In January 2025, Intel announced it was also developing a “system-level solution at rack scale” for its Gaudi data center accelerator chips.
All these firms are optimizing AI model performance across dimensions – chip-level, chip-to-chip bandwidth, and network throughput. This results in increased rack density, not as a goal, but as an outcome.
…which existing data centers are unprepared for
According to a 2024 Uptime Institute survey, only 5% of data centers have average rack densities greater than 30 kW. In other words, only 5% of data centers are designed to house even Hopper (H200) chips.
Looking at average rack density over time, there was a clear inflection upward in 2024 as AI data centers began to go live. With rack densities 5-10x higher, even a small number of AI data centers drives the overall survey average up.
The Uptime Institute surveys data center operators annually on their facilities’ average rack density. The average response has more than doubled since 2017, from less than 6kW in 2017 to ~12kW in 2023. According to JLL, the typical rack density among hyperscale facilities is ~36kW, and expected to continue to rise. Hyperscalers are contributing more to the square footage pipeline at likely above-average rack densities.
Energy efficiency curves for data centers
Power usage effectiveness (PUE) measures the total electricity used by the data center divided by the electricity used by IT equipment. 2025’s average PUE of 1.54 means that cooling, electrical, lighting, and other devices consumed an additional 54% on top of the electricity going to IT equipment. By definition, the lowest possible PUE is 1.00 (all electricity goes to IT equipment).
Average PUE has declined since 2007, but has remained in the 1.5-1.6 range since 2016. Among cloud providers, Google has the lowest fleet-wide PUE at 1.10 in 2024. This shows that there is considerable room for improvement.
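What the spread between the 1.54 average and Google's 1.10 means in practice, for a hypothetical 10 MW IT load:

```python
# PUE = total facility electricity / IT equipment electricity.
it_load_mw = 10.0  # hypothetical IT load
for pue in (1.54, 1.10):  # 2025 survey average vs Google's 2024 fleet
    total = it_load_mw * pue
    print(f"PUE {pue:.2f}: total draw {total:.1f} MW, "
          f"overhead {total - it_load_mw:.1f} MW")
```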
EU regulations: reporting is step one
In September 2023, the European Union passed the Energy Efficiency Directive. The first (mandatory) data reporting was due September 15, 2024, and is due annually by May 15th thereafter. Required data includes floor area, installed power, network traffic, electricity consumption, temperature set points for cooling, and water usage.
In June 2025, the EU’s Commissioner for Energy and Housing announced plans for further data center energy regulation by March 2026. While details have yet to be announced, regulations will aim at increasing energy efficiency.
Rack architecture evolution
NVL72 increases infrastructure content per MW
The first liquid-cooled server was introduced by IBM (the IBM System/360 Mainframe) in 1964. However, advances in semiconductor design (i.e., complementary metal-oxide semiconductors) enabled a step-function reduction in the electrical current required by chips. The last widespread commercial liquid-cooled design dates to 1995. Since then, air-cooled server designs in open racks have dominated data centers.
Nvidia’s NVL72 rack design was announced in March 2024. It consists of 36 Grace CPUs and 72 B200 GPUs, all liquid cooled. To reduce network transmission lag, engineers brought more GPUs into the rack, connecting them with Nvidia’s proprietary NVLink, which offers up to 130 terabytes per second of bandwidth. Increased power density is an outcome (not a goal) of reducing latency. To compensate for the increased power density, Nvidia’s engineers opted for liquid cooling.
The NVL72 also has innovation in power delivery. Rather than use rack PDUs (power distribution unit), the NVL72 uses a 1,400-amp busbar to deliver electricity to servers. Included in each rack are eight power shelves (similar to rack PDUs, but controlling the busbar).
Comparing electrical and thermal content per megawatt in the NVL72 relative to Nvidia’s prior DGX SuperPOD air-cooled architecture, we find four differences:
CDU: Most obviously, the NVL72 requires a coolant distribution unit (CDU) to drive its liquid cooling system.
Lower air-cooling content: We estimate a one-third reduction in the number of computer room air handlers (CRAHs) as a result of the heat captured by the cold plate and liquid cooling system.
More power shelves: Nvidia’s reference architecture for the DGX SuperPOD has three power shelves per rack, while the NVL72 has eight. Even with the lower number of racks per megawatt in the NVL72 configuration, it still requires more power shelves.
UPS for CDUs: Given the importance of the CDUs, they will need a separate back-up uninterruptible power supply (UPS) system.
On balance, we estimate the NVL72 architecture increases content/MW for both the electrical (+7%) and thermal (+18%) equipment relative to the DGX “SuperPOD” configuration. The overall infrastructure content/MW rises to $3.1mn from $2.8mn.
The existing power distribution architecture
Typically, data centers receive three-phase alternating current (AC) electricity at 13,800 volts or 34,500 volts. Through a series of transformers, switchgear, power distribution units, busways, and uninterruptible power supplies (UPS), this electricity is stepped down and converted to 48-volt direct current (DC), which powers IT servers and other equipment.
The example below shows a double-conversion UPS. Incoming AC power is converted to DC power to charge the batteries. Then an inverter converts the DC power back to AC for further distribution. Double-conversion UPS are considered the best solution given they are always “on” and offer power conditioning features (e.g., managing over/under voltage, decreasing frequency variation).
Raising the voltage to save on wiring
As GPUs for AI applications have increasingly higher computational needs and demand more electricity, existing power distribution systems will struggle to cope. Large data center operators are proposing changes to existing power distribution systems.
In October 2024, Microsoft and Meta, as part of the Open Compute Project, announced a reference rack architecture called Mt. Diablo. This uses 400-volt direct current (DC), which is significantly higher than the current 48-volt.
On 5/28, Nvidia announced that it would develop a new power infrastructure with an 800-volt direct current (DC) architecture to deliver power requirements of 1+ megawatt server racks. The company has plans to deploy it by 2027.
Electrical power (measured in watts) can be broken down into voltage (measured in volts) and current (measured in amps). The carrying capacity for wiring is determined by the current (amps). Thus, higher voltage can carry more electrical power using the same diameter wire. This reduces the amount of copper needed within the rack. Compared to a 208-volt system, a 400-volt system would reduce the copper wire weight by 52% per Schneider Electric.
The Mt. Diablo/Open Compute Project proposal would allow equipment to be installed by electricians with low voltage certifications (e.g., less than 600 volts). Nvidia’s proposal would require installation by electricians with medium voltage certification, which would limit the potential workforce of installers.
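The copper math follows directly from P = V × I: for a fixed power delivery, the current (which sizes the conductor) falls as voltage rises. A rough sketch, treating every case as simple DC (real 208V distribution is three-phase AC, which changes the constants but not the conclusion):

```python
# Current required to deliver a fixed rack power at different voltages.
# Rack power is illustrative; conductor sizing scales with amps.
rack_power_w = 120_000  # e.g., a ~120 kW Blackwell-class rack
for volts in (48, 208, 400, 800):
    amps = rack_power_w / volts  # P = V * I, treated as simple DC
    print(f"{volts:>4} V -> {amps:>7,.0f} A")
```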
How will UPS architecture change?
There are four main parts of a UPS: (1) a rectifier to convert AC to DC, which powers the battery; (2) the battery; (3) an inverter to convert DC electricity back to AC; and (4) the static bypass switch, which allows power to continue to flow even if the UPS itself fails. In the high voltage direct current example, the UPS no longer needs an inverter to convert the DC electricity back to AC. The UPS continues to need a rectifier, battery, and static bypass switch.
This is not the first time that DC architecture has been tried in data centers. In 2011, ABB acquired a majority stake in DC power distribution firm Validus DC, and several DC-based data centers were built globally. The absence of standards and equipment meant that the initial cost to deploy was much higher.
Direct current has no frequency (no variation) and therefore no harmonics. However, DC systems can still have variation in output power and current. In theory, a DC UPS system should cost 10-20% less than an AC UPS. However, the higher voltage requires more expensive safety equipment versus lower voltage. Net-net, we do not expect high voltage DC UPS pricing to be lower than current AC UPS, particularly in the early years with limited capacity.
From an operator’s perspective, the main benefit of an 800-volt direct current (800V DC) UPS versus the 208-volt alternating current (208V AC) UPS is a slight uptick in power efficiency. According to Schneider, removing the AC-to-DC conversion and DC-to-AC reconversion and increasing the voltage can reduce the total electricity consumption of a data center by ~1%. This would be an annual savings of ~$7,000 per megawatt (assuming 100% utilization and an $0.08 per kilowatt-hour cost).
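The ~$7,000 figure is easy to reproduce:

```python
# Annual electricity cost per MW of IT load, and the ~1% saving quoted above.
hours_per_year = 8760
price_per_kwh = 0.08                                  # US industrial average, $/kWh
annual_cost = 1000 * hours_per_year * price_per_kwh   # 1 MW = 1,000 kW -> $700,800
print(f"Annual cost per MW:  ${annual_cost:,.0f}")
print(f"~1% efficiency gain: ${annual_cost * 0.01:,.0f}")  # ≈ $7,008
```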
Moving power equipment outside the rack to save space
Power supply units (PSUs), which convert AC to DC, take up valuable space within the rack. At higher voltage, these will take up even more space. This is why Nvidia and the Open Compute Project are proposing moving electrical equipment to a “side car” next to the rack containing servers.
Details on the increase in Rubin Ultra infrastructure
Nvidia’s Rubin Ultra GPU and its NVL576 Kyber racks were initially unveiled as a mockup in March 2025. Rubin Ultra will follow the Rubin and Blackwell chips and is intended to ship in 2H27. Current Blackwell B200 server racks can use up to ~120kW per rack. The first Vera Rubin rack (the name for the combination of the Vera CPU and Rubin GPU), to launch in 2H26, will use the same infrastructure as Grace Blackwell (the Grace CPU and Blackwell GPU combination).
However, the next iteration of Rubin – Rubin Ultra – will have 2x the number of GPUs per rack. The single rack solution, dubbed Kyber, will be able to handle 600kW. Each rack will consist of four “pods” with 18 blades in each pod.
Early thoughts on Kyber rack
The Kyber rack (and 800V direct current architecture) will require several changes. First, the power shelves would come out of the rack and go into a power side car. The floor power distribution units (PDUs) would be eliminated. We assume that direct current UPS pricing would be similar to current alternating current pricing.
Nvidia’s mockup included one power sidecar, one CDU, and one networking/storage rack for each server rack. This likely increases the CDU costs per MW, as a dedicated CDU is needed for each rack. However, according to press interviews with Nvidia executives, the Kyber rack is intended to be “100% liquid cooled.” This implies custom cold plates that could cover the entire server blade (not just GPUs/CPUs), reducing the amount of air-cooling content. Net-net, we expect electrical and thermal content/MW to be above the current DGX SuperPOD configuration, but similar to the NVL72.
ASIC chips following similar infrastructure evolution
Application-Specific Integrated Circuit (ASIC) semiconductors are customized for a particular use. ASIC chips can be designed to lower electricity requirements for certain tasks relative to more general-purpose chips, such as CPUs. However, we see strong evidence that ASIC chips are following a similar infrastructure development path as GPUs.
The largest buyers of ASICs for data centers are cloud services firms, including Amazon Web Services, Microsoft Azure, and Google Cloud. All three of these firms have introduced liquid-cooling architectures (in chronological order):
Google Cloud is now on its sixth generation Tensor Processing Unit (TPU), which is an ASIC chip for AI applications. Despite the potential for lower electricity draw, Google Cloud has been using liquid cooling for TPUs since 2019, according to press reports.
In 2024, Microsoft Azure announced its AI-specific ASIC chip (Maia 100) would use a liquid-cooled system.
Amazon Web Services is introducing its third generation Trainium chips in 2026. These are ASIC chips for AI applications. Amazon Web Services VP of Infrastructure Prasad Kalyanaraman has stated that Trainium3 chips would require liquid cooling. Here again, despite the potential for lower electricity usage, these ASIC chips are moving to a liquid cooled architecture.
Implications for incumbent equipment vendors
Given that Eaton, Vertiv, nVent, and other IT infrastructure companies manufacture equipment on and around the rack, investors have asked whether these architectural shifts will disrupt market share. We argue the data center industry prioritizes uptime/reliability, which historically has benefited incumbents. Service capabilities are another barrier to entry, particularly for operators adopting new equipment.
When NVIDIA announced its 800-volt direct current architecture, it listed three of the largest incumbents as partners for the development of the power system. Vertiv (May 2025) and Eaton (July 2025) have both announced plans to release compatible products.
However, the proposed direct current architecture would result in simplified uninterruptible power supplies (UPS). The hypothetical cost of a direct current UPS is 10-20% lower than an alternating current UPS. However, we see this being largely offset by additional costs for higher-voltage switchgear and rectifiers (AC-to-DC converters). Net-net, industry participants expect content per megawatt for electrical equipment to remain relatively similar.
Overview of data center thermal systems
Key components:
Computer room air conditioners (CRACs): used in small data centers, these are full air conditioning units located inside the data hall. They are tied to a condenser located outside of the building. CRACs have small cooling capacities.
Computer room air handlers (CRAHs): used in larger data centers. These blow air over a coil with chilled facility water to lower the temperature inside the data hall. They are connected to a chiller. Often used in data centers with raised floors, where cold air is blown under the IT equipment and hot air rises to the top of the data hall.
Fan walls: A type of high-capacity CRAH. Large fans blow air over chilled water in coils. Designed to lower air temperatures by 15-25 degrees (with water temperatures rising 15-25 degrees). Often used in data centers without raised floors.
Chillers: A high-capacity refrigeration unit that removes heat from facility water for distribution inside the building. Heat is absorbed from the facility water into the chilled coolant through the evaporator heat exchanger. The coolant is then run through a compressor, increasing the pressure and temperature. The condenser heat exchanger transfers the heat to either outside air or water. Finally, the refrigerant runs through an expansion valve, lowering the pressure and temperature before going into the evaporator heat exchanger. There are two varieties:
Air-cooled chillers are typically located outside the building. The condenser transfers heat to the outside air.
Water-cooled chillers are typically located inside the building (mechanical space). It uses a liquid-to-liquid heat exchanger to transfer heat to a dedicated cooling loop. That loop is connected to a cooling tower outside the building.
Cooling towers: Heat rejection equipment that dissipates heat into the outside air. Cooling towers are giant heat exchangers used to reject heat created by IT equipment inside the data hall. Cooling towers take hot condenser water and cool it using outside air. They come in two varieties:
Wet cooling towers use water evaporation to create additional cooling capacity; however, this “open circuit” design consumes a large amount of water. Wet cooling towers are efficient in hot and dry regions, but the costs are higher for the equipment, installation, and water consumption.
Dry cooling towers are closed-circuit (i.e., no water loss) towers where there is no direct contact between the ambient air and the fluid being cooled. Heat from the facility water transfers to the air through radiators. Dry towers have lower initial and maintenance costs and can work in most climate conditions; however, they have lower capacity and can’t cool below a certain temperature.
Compressors: equipment that increases refrigerant pressure by reducing the volume. Compressors are the most critical part of the chiller and largely determine its capacity, efficiency, and power usage. There are many different types of compressors, but the largest chillers tend to use centrifugal compressors. These compressors pull refrigerant using centrifugal force and compress it using an impeller. They are more energy efficient, particularly in large capacity applications.
Coolant Distribution Units (CDUs): circulate and pump coolant in a closed-loop system to row manifolds, rack manifolds, and through either cold plates or rear door heat exchangers. The coolant (typically a water-glycol mix) returns to the CDU, where it runs through a heat exchanger. The heat is then transferred either to air (“liquid-to-air CDU”) or to facility water/a dedicated cooling loop (“liquid-to-liquid CDU”) for heat rejection. CDUs typically come with two pumps, offering redundancy against mechanical failure. CDUs also have sensors to monitor and control temperature, pressure, and flow rate. CDUs come in two forms:
In-rack CDUs: small scale CDUs that fit within a single rack and pump coolant through rack manifolds. By design, they have limited capacity.
In-row CDUs: larger format CDUs that sit outside of the rack and typically serve multiple racks.
Liquid cold plate: a metal block designed to sit on top of a chip, with microchannels through which coolant flows. The liquid cold plate facilitates heat exchange from the semiconductor into the cooling fluid.
Quick disconnects: couplings designed for zero coolant leakage. Used to connect the liquid cold plates to the rack manifold and to connect the rack to the row manifold. Quick disconnects in data center applications typically have a latch to secure the connection.
Chillers
Every one megawatt of power supplied to a data center requires approximately 285 tons of cooling, similar to the requirements for a 115,000 square foot commercial building. A 285-ton chiller costs roughly $300-400,000. Based on the 9 GW of data center capacity added in 2024, BofA estimates a $3.1-3.5bn market size.
Cooling Towers
Cooling towers cost ~$300,000 on average, but these costs vary with size. A single cooling tower can provide heat rejection for 3-4 MW of supplied electrical power (e.g., 1,000 tons of cooling). BofA estimates that the cooling tower market for data centers was $0.7-0.9bn in 2024.
The choice between wet and dry cooling towers is a key consideration for data centers. Wet cooling towers are open-circuit cooling towers where water comes in contact with the ambient air. Dry cooling towers are closed-circuit cooling towers where there is no direct contact between the ambient air and the fluid being cooled. Wet cooling towers are reliable in hot ambient temperatures and have high cooling capacity; however, the costs are higher for the equipment, installation, and water consumption. Dry towers have lower initial and maintenance costs and can work in most climate conditions; however, they have lower capacity and can’t cool below a certain temperature.
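A back-of-envelope check on the chiller and cooling tower market estimates above, using the quoted ratios and the 9 GW of capacity added in 2024:

```python
# Market-size arithmetic from the ratios quoted in the two sections above.
capacity_mw = 9_000  # data center capacity added in 2024

# Chillers: ~285 tons of cooling per MW; a 285-ton chiller runs $300-400k,
# so roughly one such chiller per MW of capacity.
lo, hi = capacity_mw * 300_000 / 1e9, capacity_mw * 400_000 / 1e9
print(f"Chillers: ${lo:.1f}-{hi:.1f}bn")        # brackets the $3.1-3.5bn estimate

# Cooling towers: one ~$300k tower serves 3-4 MW of supplied power.
lo, hi = capacity_mw / 4 * 300_000 / 1e9, capacity_mw / 3 * 300_000 / 1e9
print(f"Cooling towers: ${lo:.2f}-{hi:.2f}bn")  # matches the $0.7-0.9bn estimate
```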
Computer Room Air Handling Units and Other
Computer room air handling units (CRAHs) use chilled facility water and blow air over a radiator. CRAHs are a major part of the thermal equipment located in the “white space” of a data center (i.e., where IT equipment is located). Other equipment includes aisle containment systems, in-rack cooling fans, and related sensors & controls.
We estimate CRAHs and related equipment comprise a $5-6bn market. A portion of this total is driven by replacement demand; the market for new construction is likely $4-5bn.
In 2021, Johnson Controls acquired Silent-Aire, a manufacturer of CRAHs and other equipment with $0.7bn in revenue. Other large HVAC manufacturers, such as Carrier, Trane, and Daikin, also make CRAHs.
A deeper dive into CDUs
Breaking down the cooling market
Traditionally, racks are cooled with air. Liquid cooling includes direct-to-chip applications (~11% of the market), immersion cooling (~1%), and rear-door heat exchangers (1%).
Importantly, liquid cooling is additive to existing air-cooling equipment. While liquid cooling is transferring heat from the chip itself, other IT equipment and power equipment still needs to be air cooled.
Direct-to-chip emerging as preferred choice for industry
Historically, liquid cooling was used largely for high-performance computing applications. Rising rack density is driving increased interest in adoption of liquid cooling. Even with ideal containment and airflow management, air cooling reaches its maximum limit at 60-70kW per rack, and that is a maximum, not an average.
Direct-to-chip cooling has been the leading alternative cooling method. Liquid cooling solutions can be retrofit into the existing infrastructure with relatively little disruption.
Liquid cooling demand among colocation companies is more likely to accelerate versus decelerate, as AI chip availability broadens out. Absent a “pause” by cloud service providers on AI build outs, demand for CDUs will remain strong.
Existing racks can be retrofitted for liquid cooling by adding cold plates and connecting the plates with couplings and tubing. Heat is transferred from the chip to the cold plate and into the fluid, which is then circulated back to a coolant distribution unit (CDU). Most liquid cooling systems use two cooling loops: a primary cooling loop (also known as an external loop) and a secondary cooling loop (also known as an internal loop). A CDU is used to thermally couple the external and internal loops; heat is transferred from the internal loop to the external loop within the CDU. The CDU external loop connects to the data center infrastructure, while the CDU internal loop connects to the piping and manifolds.
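What the CDU has to do at that interface is governed by Q = ṁ·c_p·ΔT: the pumps must move enough coolant to carry the rack's heat load at an acceptable temperature rise. A sketch with assumed values (the water-glycol specific heat and the 10°C loop rise are illustrative, not from the report):

```python
# Secondary-loop flow needed to carry a rack's heat load: Q = m_dot * c_p * dT.
rack_heat_kw = 120.0   # e.g., a 120 kW liquid-cooled rack
cp_kj_per_kg_k = 3.8   # assumed specific heat of a water-glycol mix
delta_t_k = 10.0       # assumed coolant temperature rise across the loop

m_dot = rack_heat_kw / (cp_kj_per_kg_k * delta_t_k)  # kg/s of coolant
liters_per_min = m_dot * 60 / 1.03                   # assumed density ~1.03 kg/L
print(f"Required coolant flow: {m_dot:.1f} kg/s (~{liters_per_min:.0f} L/min)")
```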
Rear-door heat exchangers
Rear-door heat exchangers help manage densities from 20kW to ~75kW. These technologies do not bring liquid directly to the server, but the infrastructure is similar to direct liquid cooling. Passive or active heat exchangers replace the rear door of the IT equipment rack with a liquid heat exchanger. Passive heat exchangers rely on server fans to push heated air through a liquid-filled coil in the rear door of the rack; the coil absorbs the heat before the air re-enters the data center. Active heat exchangers add their own fans to pull air through the coils.
What’s in liquid cooling?
A liquid cooling system includes:
A CDU, which isolates a fluid loop from the rest of the cooling system. This is typically a single enclosure with all parts integrated within it. A CDU provides temperature control, flow control, pressure control, fluid treatment, and heat exchange and isolation. The CDU takes the heat from the fluid loop and exchanges it out of the system. It also must isolate the fluid in the loop from the rest of the cooling system.
A redundant pump system (i.e., one more pump than needed) with filtration for the cooling fluid.
A heat exchanger, responsible for passing the heat to a secondary cooling loop.
A controller, which autonomously controls the pump system and gathers information from the different sensors.
Immersion cooling
Single-phase immersion cooling uses a pump to circulate coolant around immersed server racks. In two-phase cooling, server heat turns the coolant into vapor, which then rises, condenses through coils, and returns. Both methods result in better power usage effectiveness (PUE) ratios.
Technically, there is ~2 GW of capacity using immersion cooling, but the majority relates to crypto mining rather than cloud, colocation, or enterprise data centers. One major barrier to adoption is that immersion cooling voids the chip OEM (original equipment manufacturer) warranty. Another barrier is the immersion fluid itself. Immersion cooling involves chemistry with per- and polyfluoroalkyl substances (PFAS). 3M (MMM) announced in December 2022 that it would exit its PFAS manufacturing business (~$1.6bn in sales), with an official phaseout by the end of 2025. While Illinois Tool Works (ITW) and others offer alternatives to 3M’s Novec fluid, we still see some reluctance from data center operators to adopt immersion.
Pros and Cons Part I: Liquid-to-air or liquid-to-liquid?
There are three main ways to reject heat from a server. Below we show the differences between existing heat rejection systems and dedicated heat rejection systems. Rejecting heat to air in the IT space (closed-loop heat rejection) utilizes a liquid-to-air CDU; the other two formats utilize liquid-to-liquid CDUs. The most energy-efficient approach is rejecting heat to an independent water system, which makes the most sense for large-scale AI server deployments. Closed-loop heat rejection is costly to operate at large scale, but the upfront investment is much smaller and the time to deployment is shorter.
Pros and Cons Part II: Rack mounted or floor mounted?
Below, we show the advantages and disadvantages of a rack-mounted solution as opposed to a floor-mounted solution. An in-rack solution has the CDU within the IT rack space, typically mounted at the bottom of the rack. The CDU includes a pumping unit, filtration, and controls. Heat is transferred to the data center air via a fan-assisted rear-door heat exchanger (liquid to air) or to a facility loop via a liquid-to-liquid heat exchanger.
A floor-mounted CDU is dedicated to a row or multiple rows of racks, sharing an IT fluid loop. This can be placed at the end of the row or further away from the cluster. Similar to the in-rack unit, heat is transferred to the data center either via a fan-assisted rear-door heat exchanger (liquid to air) or to a facility loop via a liquid-to-liquid heat exchanger.
Pros and Cons Part III: Single-phase or two-phase?
Two-phase direct-to-chip liquid cooling involves the coolant going from a liquid to a gas. This phase transition can absorb more heat, but requires specific coolants that boil within the semiconductor’s safe operating temperature range. The vapor then returns to the condenser for recirculation.
Two-phase direct-to-chip cooling remains a nascent approach (see more details in the COOLERCHIPS: research beyond D2C section below). Third-party research firms suggest a market size of less than $50mn in 2024. Vertiv has tested a prototype two-phase system (see Maturation of Pumped Two-Phase Liquid Cooling to Commercial Scale-Up Deployment, Nov. 2024).
It is important to note that both single-phase and two-phase approaches will need CDUs, cold plates, access to facility water supply, and heat rejection (e.g., cooling tower). The key difference is that two-phase systems use specialized coolants/refrigerants, while single-phase systems tend to use a water-glycol mix.
Practical difficulties with a two-phase approach include:
Managing the difference in density (and hence pressure) as the coolant/refrigerant enters the cold plate as a liquid versus exiting the cold plate as a gas. The variation in pressure in a single-phase approach is far less.
A water-glycol mix is cheap; the customized refrigerants needed for two-phase systems add to the overall cost. Manufacturers of cold plates and quick disconnects have optimized their products for single-phase systems; customized accessories for a two-phase system would add to the cost.
Variations in the level of vaporization can result in a wider disparity of heat transfer versus single-phase systems. Vaporization (e.g., the creation of bubbles) can vary based on minor differences within the microchannels of cold plates. This can be overcome through system-wide simulation and testing, but adds an additional level of complexity.
CDU competitive landscape
Given the high-growth prospects, it is not surprising that the liquid cooling market has seen many new entrants. There are 30 vendors offering more than 100 CDU variants in the market today. Given the conservative nature of data center operators, reputation and service capability will play a major factor in decision making. This bodes well for Vertiv, which has more than 440 service centers globally offering same-day service in most locations.
Given the nascent nature of the CDU market (~$1.2bn in revenue in 2024), we do not have the same level of confidence in market shares relative to larger, more established product categories. The table below groups the vendors into three tiers (with vendors listed alphabetically in each tier). Tier 1 vendors have at least $100mn in CDU-related revenue and offer multiple variants. Tier 2 vendors have strong products and existing thermal offerings, but we do not believe they have more than $100mn in CDU-related revenue.
For reference, the ten CDU manufacturers listed as Nvidia partners at the 2025 Computex conference were: Auras, Boyd, Cooler Master, CoolIT Systems, Delta, Flex/JetCool, LiteOn, Motivair/Schneider, Nidec, and Vertiv.
New entrants, such as Carrier, JetCool, Munters, Nautilus, Nortek, and Trane, may have garnered more revenue in 2025, but the focus is on 2024 given data availability.
Below is a brief view of the Tier 1 CDU vendors (in alphabetical order):
Delta Electronics
Delta Electronics (ticker: 2308 TT) is a manufacturer of power supplies and video display products. Delta offers liquid-to-liquid in-rack and in-row CDUs and liquid-to-air in-row CDUs.
nVent
nVent (ticker: NVT) is a US-based manufacturer of electrical products. nVent manufactures both in-rack and in-row solutions. nVent introduced its first standardized liquid cooling unit, RackChiller CDU800, in November 2020. The company also offers smaller, in-rack coolant distribution units. nVent has been building up CDU service capabilities as well.
Schneider Electric / Motivair
Schneider (ticker: SU FP) is a manufacturer of electrical and automation products. Within the data center business, the company offers a full range of electrical and thermal products. Within thermal, Schneider offers traditional air-cooling products, as well as in-row, in-rack, and floor-mounted liquid cooling solutions. Schneider closed on the acquisition of US-based Motivair in February 2025. We estimate combined pro forma CDU sales in excess of $100mn.
Vertiv
Vertiv (ticker: VRT) has a large portfolio of liquid cooling products. This includes coolant distribution units (CDUs), active and passive rear-door heat exchangers, and heat rejection systems. The company acquired CoolTera in 2023, which added to its CDU manufacturing capabilities and intellectual property position.
Overview of data center electrical systems
Tracing electricity from the utility to the server
The utility provides either high-voltage or medium-voltage electricity. For larger data centers taking high-voltage lines, there would be a step-down power transformer located near the site. Alternating current (AC) electricity enters the data center at medium voltage. This will pass through switchgear before being stepped down to low voltage AC by a transformer.
Electricity then goes to the uninterruptible power supply (UPS). The UPS converts the electricity to DC to charge the batteries, then converts it back to AC to send on. Traditionally, electricity was then distributed through power distribution units (PDUs). In more modern data centers, electricity goes through a higher-capacity busway. The electricity then goes to the rack, where it flows into rack power distribution units (typically along the side of the rack). Individual servers are plugged into these rack PDUs. Finally, the electricity is converted to DC by power supply units (PSUs) located within the rack.
Uninterruptible power supply (UPS)
An uninterruptible power supply (UPS) provides automated backup electrical power for a data center. UPS systems can also perform power conditioning, smoothing out voltage fluctuations, under- or over-voltage conditions, and frequency variations. We focus on the large-scale UPS market, which is most applicable to data centers.
UPS systems provide short-term backup power (typically 15-30 minutes) in the event of a power failure until the backup generators are up and running. The size of the UPS battery array is therefore proportional to the supported data center electrical load.
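Making the proportionality concrete for a hypothetical 10 MW facility:

```python
# Ride-through energy the battery array must hold, per the 15-30 minute rule.
load_mw = 10.0  # hypothetical supported IT load
for minutes in (15, 30):
    energy_mwh = load_mw * minutes / 60
    print(f"{minutes} min at {load_mw:.0f} MW -> {energy_mwh:.1f} MWh of battery")
```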
The market is estimated at $8.5-9.5bn, which includes a significant amount of replacement revenue. Data center operators typically replace UPS every 10 years, given increased risk of failure after this period. Based on the 9 GW of data center capacity added in 2024, the new install market size is closer to $7bn.
Switchgear
Switchgear is equipment that controls, protects, and isolates electrical equipment. Common components include switches, fuses, isolators, relays, and circuit breakers. Data centers typically have two kinds of switchgear: a set of medium-voltage switchgear for the incoming electrical supply from the utility before it goes to the step-down transformer, and a set of low-voltage switchgear before the electrical supply reaches the UPS backup batteries.
Based on the 9 GW of data center capacity added in 2024, the market size is estimated at $5.0-5.5bn. Switchgear is ubiquitous throughout electricity distribution and the broader market is more than $100bn. The broader market is dominated by ABB, Eaton, Legrand, Schneider, and Siemens.
Power distribution equipment
Traditionally, electrical power went from the UPS system to power distribution units (PDUs). PDU components typically include circuit breakers, power monitoring panels, power metering, and cabling to each rack. PDUs have drawbacks, including taking up floor space in the data hall and generating waste heat.
In high-density data centers, busway is an alternative approach to PDUs. Busway is typically mounted overhead, providing power to each rack through plug-in units with breakers. The busway draws power directly from low-voltage switchgear. While busway takes up less floor space, it is typically more expensive to install and less flexible to changes in rack location.
Finally, rack power distribution units (rPDUs) are mounted on the rack itself. rPDUs provide outlets to plug in servers, storage, and networking equipment. These are the last step in power distribution to IT equipment. In tier 3 and tier 4 data centers, there are two rack PDUs for each rack, providing redundancy.
Based on the 9 GW of data center capacity added in 2024, BofA estimates a $4.2-4.7bn market size. Similar to UPS, data center operators typically replace rack PDUs every 10 years, given increased risk of failure after this period, so this includes a portion of replacement revenue.
Engineering
Design & engineering services generally cost 4.5-6.5% of the infrastructure costs of a data center (i.e., excluding IT equipment). Publicly traded US firms covered by our colleague Michael Feniger include Jacobs Solutions (J), Fluor (FLR), and AECOM (ACM). These firms are highly diversified among end markets, with data centers representing a small percentage of total revenue.
These engineers plan the electrical, mechanical, cooling, fire protection, and physical security systems of the data center. Importantly, they must understand the IT infrastructure (e.g., network, routing, storage), which has implications for the physical infrastructure requirements. Based on the 9 GW of data center capacity added in 2024, the market is estimated at $4bn.
Construction
Construction firms oversee all aspects of the construction project including project management, specialty contractors, material purchasing, and equipment rental. These firms report modest operating margins, given the large amount of pass-through costs. The average operating margin is approximately 4% among publicly traded contractors with data center exposure.
Publicly traded firms include Balfour Beatty and Skanska. There are also smaller, private construction firms that specialize in data centers, such as US-based T5 Construction Services and Ireland-based Mercury Engineering.
Based on the 9 GW of data center capacity added in 2024, we estimate a $65-80bn market size. However, this would include material & equipment pass-through costs. Using an average margin of 4%, this would imply $2.6-3.2bn of operating profit.
Generators
Typical diesel backup generators cost $400-550,000/MW. Total system costs would include fuel tank, fuel pump, and installation costs, which collectively add an additional $350-500,000/MW. Generators are typically sized to fully supply the electrical consumption of the data center. For example, a 10MW data center will typically have 10MW worth of generator power on site to ensure 99.999% uptime for clients.
Based on the 9 GW of data center capacity added in 2024, we estimate a $7.2bn market size for generator equipment only (excluding ancillary products and installation costs). In 2023, Cummins gave a $6bn market size, but this has likely expanded significantly in 2024.
Servers
Servers are the largest single product category of data center capex. In 2024, data centers bought 13.5mn servers, spending approximately $280bn. On a dollar basis, AI servers comprised approximately half of this spending, but traditional servers represented the vast majority on a unit basis.
Original design manufacturers (ODMs) build servers to their customers’ designs. For example, Google’s Tensor Processing Units (TPUs) are custom semiconductors, and these are put into custom ODM-built servers at Google Cloud data centers. Similarly, Amazon Web Services has its own custom semiconductors (Graviton) and servers. Original equipment manufacturers (OEMs) design and sell servers under their own brands.
Networking equipment
Networking equipment includes several different pieces of equipment. Switches communicate within the data center or local area network; typically, each rack has a networking switch. Routers handle traffic between buildings, typically using internet protocol (IP). Some cloud service providers use “white box” networking switches (i.e., switches manufactured by third parties to their specifications).
AI workloads are bandwidth-intensive, connecting hundreds of processors with gigabits of throughput. As these AI models grow, the number of GPUs required to process them grows, meaning bigger networks are required to interconnect the GPUs.
The market size for networking equipment is estimated at $36bn. Arista and Cisco Systems are the two largest vendors.
Last but not least, to get a true sense of the scale and context of the data center revolution, here is a stunning chart we first highlighted two months ago, showing that in the next few months, construction spending on data centers will surpass that on all general offices.
Much more in the full BofA report available to pro subs.
Tyler Durden's Photo
by Tyler Durden
Wednesday, Oct 08, 2025 - 11:00 AM
One year ago, we published a primer on the beating heart of the AI bubble - the data-center - which in September 2024 represented at $215 billion investment opportunity. Not surprisingly, it was one of our most popular premium articles of 2024. Fast forward to today, when the addressable market size of data centers has more than doubled, at a cool half a trillion dollars, but the question remains: who stands to benefit the most from this unprecedented build out, which will one day come to a crushing, thunderous halt as the AI bubble implodes... but until then the music is playing. And so, here is the updated edition of the data center primer, updated for 2025.
Introduction
A global view of the $500+bn data center market
Understanding the data center end market has grown in importance given high growth rates and scale, and according to BofA estimates global data center spending will reach $506bn in 2025, comprised of $418BN of IT equipment and $88BN of infrastructure spending. This is up 25% y/y, and looking ahead BofA forecasts a remarkable 23% CAGR for the market over 2024-28, including a 19% CAGR for infrastructure spending. This report provides historical context on the size, shape, and ownership of the global data center market.
Key product lines for data center infrastructure
The report focuses on 12 product and service categories: chillers, construction firms, cooling towers, computer room air handlers, coolant distribution units, engineering firms, generators, networking equipment, power distribution equipment, servers, switchgear, and uninterruptible power supplies (UPS). The average content per megawatt (MW) and market shares for each of these categories is reported. The BofA estimate for the all-in cost of building a data center is 39MM/MW, and the bank anticipates next generation AI architectures will be significantly more capital intensive at $52MM/MW.
Implications of AI semiconductor evolution: from AC to DC
The report explains the reasons why artificial intelligence (AI) semiconductor manufacturers are switching to “rack scale” architectures, with ever increasing density of chips per rack. These industry trends have already driven rapid growth in liquid cooling in thermal equipment. However, there is an emerging shift to high voltage direct current (DC) architecture for electrical equipment. We size the potential for electrical equipment content and costs to change as the industry pivots away from low voltage alternating current (AC) designs.
From air to liquid: a closer look at CDUs
Rising rack density is driving adoption of liquid cooling solutions. The industry is coalescing on single-phase, direct-to-chip solutions. Coolant distribution units (CDUs) are the key equipment needed to power these offerings. Larger format, in-row CDUs have outgrown smaller in-rack variants. We show 30 vendors and highlight the five we think currently have the most market share.
AI data centers and electricity demand
Over the last few months, there have been several announcements of gigawatt-scale data centers. Using multiple academic forecasts, BofA projects AI electricity demand growing at a 40+% CAGR, with low- to mid-teens growth for total data center demand. The report provides case studies of how these large projects have progressed and are obtaining power. Additionally, it is projected that the electricity demand for AI inference (i.e., running previously trained models) will overtake AI training before the end of the decade. While efficiency gains are possible, increased adoption will more than offset this. The political and populist blowback to soaring electricity prices, which have now been correctly attributed to the data center explosion, is a major wildcard and a potential huge negative risk factor.
The Data Center opportunity
Data center market to hit $900+bn in ‘28E
BofA estimates data center capex was more than $400bn globally in 2024, rising to more than $500bn in 2025E. The bank also estimates that AI adoption will drive a 23% market-wide CAGR over 2024-28E. The Electrical and Thermal equipment markets are size at $18bn and $10bn, respectively, in 2024.
The percentage mix varies from the market size as the total market includes replacement/refresh spending. BofA estimates the all-in cost of building a traditional data center to be $39mn per megawatt.
As discussed later in this report, there are significant infrastructure changes ahead for the next generation of AI chips (e.g., NVIDIA’s proposed Rubin chip architecture). The cost of this future-state data center will rise by a third to $52mn per megawatt.
The largest difference is higher server costs for next generation chips. Higher rack density leads to fewer racks and less square footage per megawatt, reflected in lower building and power distribution equipment costs. This future-state data center assumes direct-to-chip liquid cooling and high voltage direct current electrical systems.
Vendor shares on broad infrastructure product categories
Across data center electrical products, Schneider is the share leader in this $18bn market. Across all thermal products, Vertiv is the share leader in this $10bn market.
Types of data centers
Enterprise: a facility owned by a single organization housing its IT infrastructure. Typically, they are owned by large corporations, financial institutions, or government agencies. Over the past ten years, the square footage growth of these data centers has been flat. However, upgrade and modernization projects have increased the capacity.
Single-tenant colocation: a facility owned and managed by a third party and leased to a single tenant. Historically, these came from sale leasebacks (i.e., an enterprise-owned data center is sold to an investor and then leased back). Over the past ten years, cloud service providers have used this method to expand in new geographies. Initial lease terms are typically long with options to extend (e.g., a 10-year initial lease with two additional 10-year options).
Multi-tenant colocation: a facility owned and managed by a third party and leased to multiple tenants. Rents are generally based on a combination of power usage and number of racks. Tenants benefit from shared services (e.g., network connectivity, physical security). Can be subdivided into retail (smaller space commitments; less flexibility) and wholesale (requires larger commitments; more flexibility in design).
Hyperscale: a very large data center engineered to provide maximum uptime (i.e., Tier 4 ranking in Uptime Institute’s classification), support distributed computing (e.g., sharing workloads across servers and sites), and scalability. The definition of “large” varies, but is typically greater than 20 megawatts. Distributed computing and hyperscale data centers are closely associated with cloud service providers, such as Amazon Web Services and Microsoft Azure. However, not every hyperscale data center is used for cloud services and not all cloud servers are in hyperscale data centers.
As measured by electrical capacity, global data centers grew at a 14% CAGR over 2014-24 and a 17% CAGR over 2019-24. While corporations continue to run and maintain a significant number of data centers, hyperscale and colocation firms have made nearly all the capacity additions since 2013. Small colocation companies (e.g., <10 data centers) collectively comprise a meaningful portion of data center space (20-25%). These firms typically own smaller sites (e.g., <20 megawatts) outside major markets.
Evolution of cloud growth
In 2005, Nicholas Carr authored an article entitled The End of Corporate Computing predicting enterprises would stop building their own data centers and use third-party services. Amazon Web Services launched the next year, driving a boom in cloud services. In 2017, cloud service providers and colocation companies surpassed enterprise-owned data centers (as measured by electrical capacity).
Cloud service providers are highly profitable: in 2024, Amazon Web Services generated $40bn of GAAP operating profit on $108bn of revenue, a 37% operating margin. For most IT workloads, colocation provides a lower total cost of ownership. However, it requires upfront capex (for servers & related IT equipment), multi-year commitments (to colocation firms), and higher levels of IT management and support. In contrast, cloud services are flexible and offer higher uptime levels.
Regional breakout: Americas home to over half of capacity
Data centers, as measured by electrical capacity, grew at a 17% CAGR over 2019-24. By region, EMEA has been a relative laggard (16% CAGR), Asia Pac a touch better (20%), and the Americas region right in line (17%).
Hyperscaler capacity is largely concentrated in the same key regions globally. Below, we list the top 20 locations for hyperscaler data center capacity. The largest single hyperscaler location globally is Northern Virginia in the US, which represents almost 15% of global hyperscaler capacity; the second largest is Beijing, with ~7%.
Colocation economics
Below is a walkthrough of typical per megawatt project economics for a new build wholesale colocation project (i.e., one leased to a small number of clients on long-term leases).
$2mn per MW for land costs, utility connections, and site works. These vary by location/site.
$11mn per MW for the powered shell. Turner & Townsend’s Data Center Cost Index uses real construction data from 300 projects. These costs include the building shell; mechanical, electrical, and thermal equipment; installation labor; and general contractor margin and contingency.
Typical annual rent is $2-3mn per megawatt (assume $2.5mn). Current occupancy rates in the US are 96-97% (assume 90% over the 20 years). The largest operating expense is electricity ($0.08 per kilowatt-hour is the average US industrial cost, implying ~$0.7mn per MW-year at 100% utilization, or ~$0.63mn at the 90% occupancy level). Staffing levels are around two full-time employees per megawatt (assume $0.25mn in wages). Property taxes are assumed at 1% of property value. Typical EBITDA margins are 40-50% (assume 45%).
For free cash flow, assume maintenance capex of 1.5% of original $11mn powered shell cost. Assume project financing of $7mn in equity (at 10% cost) and $6mn in 20-year amortizing mortgage debt (at 6% rate), yielding a weighted average cost of capital of 8.5%. This would be a 46% loan-to-value, in line with recent data center financings. A 21% corporate tax rate for cash income taxes is used.
After the 20-year holding period, the data center is expected to be sold. Recent data center transactions have been at around 5-7% capitalization rates (assume 7.0% for the exit at the end of year 20, i.e., 14.3x net operating income). These assumptions yield an 11.0% internal rate of return (IRR) and a $2.8mn net present value (relative to the original $7mn equity investment).
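To make the arithmetic concrete, below is a minimal Python sketch of this per-megawatt model. It is illustrative only: inputs the bank’s full model presumably contains (rent escalation, the depreciation schedule, taxes on the exit sale) are not disclosed, so we substitute simple assumptions and the resulting IRR will differ from the quoted 11.0%.

```python
# Illustrative per-MW wholesale colocation model using the assumptions above.
# NOT the bank's model: straight-line depreciation and untaxed exit proceeds
# are our simplifying assumptions, and rent escalation is not modeled.

def amortizing_payment(principal, rate, years):
    """Level annual payment on an amortizing loan."""
    return principal * rate / (1 - (1 + rate) ** -years)

def irr(cash_flows, lo=-0.99, hi=1.0, tol=1e-7):
    """Internal rate of return via bisection on NPV."""
    npv = lambda r: sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if npv(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

# Assumptions from the walkthrough (all figures $mn per MW)
revenue = 2.5 * 0.90                 # $2.5mn rent at 90% occupancy
ebitda = 0.45 * revenue              # 45% EBITDA margin
maint_capex = 0.015 * 11.0           # 1.5% of the $11mn powered shell
equity, debt, years = 7.0, 6.0, 20
debt_service = amortizing_payment(debt, 0.06, years)
depreciation = 11.0 / years          # straight-line (our assumption)

flows, balance = [-equity], debt
for yr in range(1, years + 1):
    interest = balance * 0.06
    balance -= debt_service - interest
    taxes = max(0.0, 0.21 * (ebitda - interest - depreciation))
    cf = ebitda - maint_capex - debt_service - taxes
    if yr == years:                  # exit at a 7% cap rate (14.3x NOI)
        cf += (ebitda - maint_capex) / 0.07
    flows.append(cf)

print(f"Levered IRR under these simplifications: {irr(flows):.1%}")
```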
Evolution of chips and rack density
1. Rising watts per chip
The power consumption per chip has increased 4x from Nvidia’s first-generation Volta architecture to the current Blackwell. Many of the ways to increase computing performance require additional power: supply voltage cannot scale down proportionally with node sizes, so putting more transistors on each chip requires more power, and power consumption rises roughly linearly with faster clock speeds.
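The intuition can be summarized with the standard dynamic-power relation for CMOS logic, P ≈ C·V²·f (switched capacitance times supply voltage squared times clock frequency). A toy example with made-up figures, not actual chip specifications:

```python
# Dynamic CMOS power scales as P ~ C * V^2 * f. All figures are hypothetical.
def dynamic_power(cap_farads, volts, freq_hz):
    return cap_farads * volts ** 2 * freq_hz

base = dynamic_power(1e-9, 1.0, 1.5e9)    # a made-up baseline chip
# Doubling transistor count (~2x switched capacitance) at an unchanged
# supply voltage, plus a 33% faster clock, nearly triples power draw:
scaled = dynamic_power(2e-9, 1.0, 2.0e9)
print(f"power multiple: {scaled / base:.2f}x")   # ~2.67x
```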
2. Massively parallel processing
GPUs perform calculations in parallel. When thousands of GPUs work together in an AI cluster, if even one GPU lacks the data it needs, all other GPUs stall. This means that network latency delays can reduce overall performance significantly. Putting more GPUs within a single rack reduces the need for networking and high-speed interconnects.
These trends drive increased rack density...
In 2021, the average rack density was less than 10 kilowatts (kW) per rack. A reference Hopper rack (H200 chips) draws 35kW, and a reference Blackwell rack (B200 chips) draws 120kW. Based on statistics released by Nvidia, we estimate a reference Rubin Ultra rack would reach 600kW.
While we highlight Nvidia’s roadmap, other chip firms are following a similar trajectory. Given the increasing importance of scale-up capabilities in AI data centers, each major accelerator vendor is developing its own protocol. On 6/12, AMD (Advanced Micro Devices) announced a reference rack infrastructure for its Instinct MI350 GPUs. These racks feature up to 128 GPUs/rack, with each GPU drawing up to 1,400 watts, suggesting a 180+ kW rack density. AMD also announced its next generation Helios rack infrastructure, planned for release in 2026, which will feature 72 MI400 GPUs. In January 2025, Intel announced it was also developing a “system-level solution at rack scale” for its Gaudi data center accelerator chips.
All these firms are optimizing AI model performance across several dimensions: chip-level performance, chip-to-chip bandwidth, and network throughput. This results in increased rack density, not as a goal, but as an outcome.
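One concrete facility-level consequence: at a fixed IT load, the rack count (and with it white-space square footage) falls in inverse proportion to rack density. A quick sketch using the densities cited above:

```python
# Racks required to house 1 MW of IT load at the rack densities cited above.
densities_kw = {
    "2021 average": 10,
    "Hopper (H200) reference": 35,
    "Blackwell (B200) reference": 120,
    "Rubin Ultra (estimated)": 600,
}
for name, kw in densities_kw.items():
    print(f"{name:>28}: {1000 / kw:6.1f} racks per MW")
# From ~100 racks per MW in 2021 to fewer than 2 for Rubin Ultra.
```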
…which existing data centers are unprepared for
According to a 2024 Uptime Institute survey, only 5% of data centers have average rack densities greater than 30 kW. In other words, only 5% of data centers are designed to house even Hopper (H200) chips.
Looking at average rack density over time, there was a clear inflection upward in 2024 as AI data centers began to go live. With rack densities 5-10x higher, even a small number of AI data centers drives the overall survey average up.
The Uptime Institute surveys data center operators annually on their facilities’ average rack density. The average response has more than doubled, from less than 6kW in 2017 to ~12kW in 2023. According to JLL, the typical rack density among hyperscale facilities is ~36kW and is expected to continue rising. Hyperscalers are contributing more to the square footage pipeline at likely above-average rack densities.
Energy efficiency curves for data centers
Power usage effectiveness (PUE) measures the total electricity used by the data center divided by the electricity used by IT equipment. 2025’s average PUE of 1.54 means that cooling, electrical, lighting, and other devices consumed an additional 54% on top of the electricity delivered to IT equipment. By definition, the lowest possible PUE is 1.00 (all electricity goes to IT equipment).
Average PUE has declined since 2007, but has remained in the 1.5-1.6 range since 2016. Among cloud providers, Google has the lowest fleet-wide PUE at 1.10 in 2024. This shows that there is considerable room for improvement.
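To put the PUE gap in dollar terms, here is a minimal sketch; the $0.08/kWh industrial electricity rate is carried over from the colocation walkthrough earlier in this report and is an assumption, not a figure from the survey.

```python
# Annual electricity cost for 1 MW of IT load at different PUE levels.
HOURS_PER_YEAR, USD_PER_KWH = 8760, 0.08   # $0.08/kWh assumed

def annual_cost(it_load_mw, pue):
    return it_load_mw * 1000 * pue * HOURS_PER_YEAR * USD_PER_KWH

for pue in (1.54, 1.10):
    print(f"PUE {pue:.2f}: ${annual_cost(1, pue):,.0f} per MW-year")
# Moving from the 1.54 survey average to Google's 1.10 fleet average
# saves roughly $308,000 per MW-year at these assumptions.
```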
EU regulations: reporting is step one
In September 2023, the European Union passed the Energy Efficiency Directive. The first (mandatory) data reporting was due September 15, 2024; it is due annually by May 15th thereafter. Required data includes floor area, installed power, network traffic, electricity consumption, temperature set points for cooling, and water usage.
In June 2025, the EU’s Commissioner for Energy and Housing announced plans for further data center energy regulation by March 2026. While details have yet to be announced, regulations will aim at increasing energy efficiency.
Rack architecture evolution
NVL72 increases infrastructure content per MW
The first liquid-cooled server was introduced by IBM (the IBM System/360 mainframe) in 1964. However, advances in semiconductor design (i.e., complementary metal-oxide semiconductors) enabled a step-function reduction in chips’ current requirements. The last widespread commercial liquid-cooled design dates to 1995. Since then, air-cooled server designs in open racks have dominated data centers.
Nvidia’s NVL72 rack design was announced in March 2024. It consists of 36 Grace CPUs and 72 B200 GPUs, all liquid cooled. To reduce network transmission lag, engineers brought more GPUs into the rack, connecting them with Nvidia’s proprietary NVLink, which offers up to 130 terabytes per second of aggregate bandwidth. Increased power density is an outcome (not a goal) of reducing latency. To compensate for the increased power density, Nvidia’s engineers opted for liquid cooling.
The NVL72 also has innovation in power delivery. Rather than use rack PDUs (power distribution unit), the NVL72 uses a 1,400-amp busbar to deliver electricity to servers. Included in each rack are eight power shelves (similar to rack PDUs, but controlling the busbar).
Comparing electrical and thermal content per megawatt in the NVL72 relative to Nvidia’s prior DGX SuperPOD air-cooled architecture, we find four differences:
CDU: Most obviously, the NVL72 requires a coolant distribution unit (CDU) to drive its liquid cooling system.
Lower air-cooling content: We estimate a one-third reduction in the number of computer room air handlers (CRAHs) as a result of the heat captured by the cold plate and liquid cooling system.
More power shelves: Nvidia’s reference architecture for the DGX SuperPOD has three power shelves per rack, while the NVL72 has eight. Even with the lower number of racks per megawatt in the NVL72 configuration, it still requires more power shelves.
UPS for CDUs: Given the importance of the CDUs, they will need separate back-up uninterruptible power supply (UPS) systems.
On balance, we estimate the NVL72 architecture increases content/MW for both the electrical (+7%) and thermal (+18%) equipment relative to the DGX SuperPOD configuration. The overall infrastructure content/MW rises to $3.1mn from $2.8mn.
The existing power distribution architecture
Typically, data centers receive three-phase alternating current (AC) electricity at 13,800 volts or 34,500 volts. Through a series of transformers, switchgear, power distribution units, busways, and uninterruptible power supplies (UPS), this electricity is stepped down and converted to 48-volt direct current (DC), which powers IT servers and other equipment.
In a double-conversion UPS, incoming AC power is converted to DC power to charge the batteries; an inverter then converts the DC power back to AC for further distribution. Double-conversion UPS are considered the best solution given the design is always “on” and offers power conditioning features (e.g., managing over/under voltage, decreasing frequency variation).
Raising the voltage to save on wiring
As GPUs for AI applications have increasingly higher computational needs and demand more electricity, existing power distribution systems will struggle to cope. Large data center operators are proposing changes to existing power distribution systems.
In October 2024, Microsoft and Meta, as part of the Open Compute Project, announced a reference rack architecture called Mt. Diablo. This uses 400-volt direct current (DC), significantly higher than the current 48-volt standard.
On 5/28, Nvidia announced that it would develop a new power infrastructure with an 800-volt direct current (DC) architecture to meet the power requirements of 1+ megawatt server racks. The company plans to deploy it by 2027.
Electrical power (measured in watts) can be broken down into voltage (measured in volts) and current (measured in amps). The carrying capacity for wiring is determined by the current (amps). Thus, higher voltage can carry more electrical power using the same diameter wire. This reduces the amount of copper needed within the rack. Compared to a 208-volt system, a 400-volt system would reduce the copper wire weight by 52% per Schneider Electric.
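The underlying physics is just P = V × I: at a fixed power draw, raising the voltage lowers the current, and conductor cross-section is sized to current. A rough illustration (treating the feed as a single conductor and ignoring three-phase details and code-mandated safety margins):

```python
# At fixed power, current falls in proportion to voltage (P = V * I),
# and conductor sizing follows current. Simplified single-conductor view.
def current_amps(power_watts, volts):
    return power_watts / volts

rack_watts = 120_000   # a Blackwell-class 120kW rack
for volts in (208, 400, 800):
    print(f"{volts:>3}V feed: {current_amps(rack_watts, volts):,.0f} A")
# 208V -> ~577 A; 400V -> 300 A; 800V -> 150 A. Roughly a quarter of the
# current (and far less copper) at 800V versus 208V.
```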
The Mt. Diablo/Open Compute Project proposal would allow equipment to be installed by electricians with low voltage certifications (e.g., less than 600 volts). Nvidia’s proposal would require installation by electricians with medium voltage certification, which would limit the potential workforce of installers.
How will UPS architecture change
There are four main parts of a UPS: (1) a rectifier to convert AC to DC, which charges the battery; (2) the battery; (3) an inverter to convert DC electricity back to AC; and (4) the static bypass switch, which allows power to continue to flow even if the UPS itself fails. In the high voltage direct current example, the UPS no longer needs an inverter to convert the DC electricity back to AC. It continues to need a rectifier, battery, and static bypass switch.
This is not the first time that DC architecture has been tried in data centers. In 2011, ABB acquired a majority stake in DC power distribution firm Validus DC, and several DC-based data centers were built globally. The absence of standards and equipment meant that the initial cost to deploy was much higher.
Direct current has no frequency (no variation) and therefore no harmonics, though DC systems can still have variation in output power and current. In theory, a DC UPS system should cost 10-20% less than an AC UPS. However, the higher voltage requires more expensive safety equipment versus lower voltage. Net-net, we do not expect high voltage DC UPS pricing to be lower than current AC UPS, particularly in the early years with limited capacity.
From an operator’s perspective, the main benefit of an 800-volt direct current (800V DC) UPS versus the 208-volt alternating current (208V AC) UPS is a slight uptick in power efficiency. According to Schneider, removing the AC-to-DC conversion and DC-to-AC reconversion and increasing the voltage can reduce the total electricity consumption of a data center by ~1%. This would be an annual savings of ~$7,000 per megawatt (assuming 100% utilization and a $0.08 per kilowatt-hour cost).
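That savings figure checks out on the back of an envelope: one megawatt running flat out consumes 8,760 MWh a year, or ~$700k at the assumed $0.08/kWh rate, and 1% of that is ~$7,000.

```python
# Sanity check on the ~$7,000/MW annual saving from ~1% lower consumption.
mw, hours, usd_per_kwh = 1, 8760, 0.08    # 100% utilization assumed
annual_spend = mw * 1000 * hours * usd_per_kwh
print(f"Annual spend: ${annual_spend:,.0f}; 1% saving: ${annual_spend * 0.01:,.0f}")
# Annual spend: $700,800; 1% saving: $7,008
```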
Moving power equipment outside the rack to save space
Power supply units (PSUs), which convert AC to DC, take up valuable space within the rack. At higher voltage, these will take up even more space. This is why Nvidia and the Open Compute Project are proposing moving electrical equipment to a “side car” next to the rack containing servers.
Details on the increase in Rubin Ultra infrastructure
Nvidia’s Rubin Ultra GPU and its NVL576 Kyber racks were initially unveiled as a mockup in March 2025. Rubin Ultra will follow the Rubin and Blackwell chips and is intended to ship in 2H27. The current Blackwell B200 server racks can use up to ~120kW per rack. The first Vera Rubin rack (the name for the combination of the Vera CPU and Rubin GPU), to launch in 2H26, will use the same infrastructure as Grace Blackwell (the Grace CPU and Blackwell GPU combination).
However, the next iteration of Rubin – Rubin Ultra – will have 2x the number of GPUs per rack. The single rack solution, dubbed Kyber, will be able to handle 600kW. Each rack will consist of four “pods” with 18 blades in each pod.
Early thoughts on Kyber rack
The Kyber rack (and 800V direct current architecture) will require several changes. First, the power shelves would come out of the rack and go into a power side car. The floor power distribution units (PDUs) would be eliminated. We assume that direct current UPS pricing would be similar to current alternating current pricing.
Nvidia’s mockup included one power sidecar, one CDU, and one networking/storage rack for each server rack. This likely increases CDU costs per MW, as a dedicated CDU is needed for each rack. However, according to press interviews with Nvidia executives, the Kyber rack is intended to be “100% liquid cooled.” This implies custom cold plates that could cover the entire server blade (not just GPUs/CPUs), reducing the amount of air-cooling content. Net-net, we expect electrical and thermal content/MW to be above the current DGX SuperPOD configuration, but similar to the NVL72.
ASIC chips following similar infrastructure evolution
Application-specific integrated circuit (ASIC) semiconductors are customized for a particular use. ASIC chips can be designed to lower electricity requirements for certain tasks relative to more general-purpose chips, such as CPUs. However, we see strong evidence that ASIC chips are following a similar infrastructure development path as GPUs.
The largest buyers of ASICs for data centers are cloud services firms, including Amazon Web Services, Microsoft Azure, and Google Cloud. All three of these firms have introduced liquid-cooling architectures (in chronological order):
Google Cloud is now on its sixth generation Tensor Processing Unit (TPU), which is an ASIC chip for AI applications. Despite the potential for lower electricity draw, Google Cloud has been using liquid cooling for TPUs since 2019, according to press reports.
In 2024, Microsoft Azure announced its AI-specific ASIC chip (Maia 100) would use a liquid-cooled system.
Amazon Web Services is introducing its third generation Trainium chips in 2026. These are ASIC chips for AI applications. Amazon Web Services VP of Infrastructure Prasad Kalyanaraman has stated that Trainium3 chips would require liquid cooling. Here again, despite the potential for lower electricity usage, these ASIC chips are moving to a liquid-cooled architecture.
Implications for incumbent equipment vendors
Given that Eaton, Vertiv, nVent, and other IT infrastructure companies manufacture equipment on/around the rack, investors have questioned whether the new architectures will disrupt market share. We argue the data center industry prioritizes uptime/reliability, which historically has benefited incumbents. Service capabilities are another barrier to entry, particularly for operators adopting new equipment.
When NVIDIA announced its 800-volt direct current architecture, it listed three of the largest incumbents as partners for the development of the power system. Vertiv (May 2025) and Eaton (July 2025) have both announced plans to release compatible products.
However, the proposed direct current architecture would result in simplified uninterruptible power supplies (UPS). The hypothetical cost of a direct current UPS is 10-20% lower than an alternating current UPS. However, we see this being largely offset by additional costs for higher-voltage switchgear and rectifiers (AC-to-DC converters). Net-net, industry participants expect content per megawatt for electrical equipment to remain relatively similar.
Overview of data center thermal systems
Key components:
Computer room air conditioners (CRACs): used in small data centers, these are full air conditioning units located inside the data hall. They are tied to a condenser located outside of the building. CRACs have small cooling capacities.
Computer room air handlers (CRAHs): used in larger data centers. These blow air over a coil with chilled facility water to lower the temperature inside the data hall. They are connected to a chiller. Often used in data centers with raised floors, where cold air is blown under the IT equipment and hot air rises to the top of the data hall.
Fan walls: A type of high-capacity CRAH. Large fans blow air over chilled water in coils. Designed to lower air temperatures by 15-25 degrees (with water temperatures rising 15-25 degrees). Often used in data centers without raised floors.
Chillers. A high-capacity refrigeration unit that removes heat from facility water for distribution inside the building. Heat is absorbed from the facility water into the refrigerant through the evaporator heat exchanger. The refrigerant is then run through a compressor, increasing its pressure and temperature. The condenser heat exchanger transfers the heat to either outside air or water. Finally, the refrigerant runs through an expansion valve, lowering the pressure and temperature before it returns to the evaporator heat exchanger. There are two varieties:
Air-cooled chillers are typically located outside the building. The condenser transfers heat to the outside air.
Water-cooled chillers are typically located inside the building (mechanical space). They use a liquid-to-liquid heat exchanger to transfer heat to a dedicated cooling loop, which is connected to a cooling tower outside the building.
Cooling towers. Heat rejection equipment that dissipates heat into the outside air. Cooling towers are giant heat exchangers used to reject heat created by IT equipment inside the data hall. They take hot condenser water and cool it using outside air. They come in two varieties.
Wet cooling towers are open-circuit towers that use water evaporation to create additional cooling capacity, which consumes a large amount of water. Wet cooling towers are efficient in hot and dry regions; however, the costs are higher for the equipment, installation, and water consumption.
Dry cooling towers are closed-circuit (i.e., no water loss) towers where there is no direct contact between the ambient air and the fluid being cooled; heat from the facility water transfers to the air through radiators. Dry towers have lower initial and maintenance costs and can work in most climate conditions; however, they have lower capacity and can’t cool below a certain temperature.
Compressors: equipment that increases refrigerant pressure by reducing the volume. Compressors are the most critical part of the chiller and largely determine its capacity, efficiency, and power usage. There are many different types of compressors, but the largest chillers tend to use centrifugal compressors. These compressors pull refrigerant using centrifugal force and compress it using an impeller. They are more energy efficient, particularly in large capacity applications.
Coolant Distribution Units (CDUs): circulate and pump coolant in a closed-loop system to row manifolds, rack manifolds, and through either cold plates or rear-door heat exchangers. The coolant (typically a water-glycol mix) returns to the CDU, where it runs through a heat exchanger. The heat is then transferred either to air (“liquid-to-air CDU”) or to facility water/a dedicated cooling loop (“liquid-to-liquid CDU”) for heat rejection. CDUs typically come with two pumps, offering redundancy in case of mechanical failure, and carry sensors to monitor and control temperature, pressure, and flow rate. CDUs come in two forms:
In-rack CDUs: small scale CDUs that fit within a single rack and pump coolant through rack manifolds. By design, they have limited capacity.
In-row CDUs: larger format CDUs that sit outside of the rack and typically serve multiple racks.
Liquid cold plate: a metal block designed to sit on top of a chip, with microchannels for coolant to flow through. The liquid cold plate facilitates heat exchange from the semiconductor into the cooling fluid.
Quick disconnectors: couplings designed for zero coolant leakage. Used to connect the liquid cold plates to the rack manifold and to connect the rack to the row manifold. Quick disconnectors in data center applications typically have a latch to secure the connection.
Chillers
Every one megawatt of power supplied to a data center requires approximately 285 tons of cooling, similar to the requirements for a 115,000 square foot commercial building. A 285-ton chiller is roughly $300-400,000. Based on the 9 GW of data center capacity added in 2024, BofA estimates a $3.1-3.5bn market size.
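The 285-ton figure is close to a pure unit conversion, since one ton of refrigeration removes 3.517 kW of heat. The sketch below reproduces the conversion and shows that the stated per-chiller cost range roughly brackets BofA’s market estimate:

```python
# Cooling requirement per MW: 1 ton of refrigeration = 3.517 kW of heat removal.
tons_per_mw = 1000 / 3.517
print(f"{tons_per_mw:.0f} tons of cooling per MW of IT load")   # ~284 tons

# Implied new-build chiller market: 9 GW added in 2024, roughly one
# 285-ton chiller per MW at $300-400k each (installation excluded).
for unit_cost in (300_000, 400_000):
    print(f"${9_000 * unit_cost / 1e9:.1f}bn")   # ~$2.7bn to ~$3.6bn
```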
Cooling Towers
Cooling towers cost ~$300,000 on average, but costs vary with size. A single cooling tower can provide heat rejection for 3-4 MW of supplied electrical power (i.e., ~1,000 tons of cooling). BofA estimates that the cooling tower market for data centers was $0.7-0.9bn in 2024.
The choice between wet and dry cooling towers is a key design consideration for data centers: wet towers are reliable in hot ambient temperatures and offer high cooling capacity at the cost of higher equipment, installation, and water consumption costs, while dry towers are cheaper to maintain and work in most climates but have lower capacity and can’t cool below a certain temperature.
Computer Room Air Handling Units and Other
Computer room air handling units (CRAHs) use chilled facility water and blow air over a radiator. CRAHs are a major part of the thermal equipment located in the “white space” of a data center (i.e., where IT equipment is located). Other equipment includes aisle containment systems, in-rack cooling fans, and related sensors & controls.
We estimate CRAHs and related equipment comprise a $5-6bn market. A portion of this total is driven by replacement demand; the market for new construction is likely $4-5bn.
In 2021, Johnson Controls acquired Silent-Aire, a manufacturer of CRAHs and other equipment with $0.7bn in revenue. Other large HVAC manufacturers, such as Carrier, Trane, and Daikin, also make CRAHs.
A deeper dive into CDUs
Breaking down the cooling market
Traditionally, racks are cooled with air, which still accounts for the vast majority (~87%) of the market. Liquid cooling includes direct-to-chip applications (~11% of the market), immersion cooling (~1%), and rear-door heat exchangers (~1%).
Importantly, liquid cooling is additive to existing air-cooling equipment. While liquid cooling is transferring heat from the chip itself, other IT equipment and power equipment still needs to be air cooled.
Direct-to-chip emerging as preferred choice for industry
Historically, liquid cooling was used largely for high-performance computing applications. Rising rack density is driving increased interest in, and adoption of, liquid cooling. Even with ideal containment and cooling design, air cooling reaches its maximum limit at 60-70kW per rack, and that is the maximum, not the average.
Direct-to-chip cooling has been the leading alternative cooling method. Liquid cooling solutions can be retrofit into the existing infrastructure with relatively little disruption.
Liquid cooling demand among colocation companies is more likely to accelerate versus decelerate, as AI chip availability broadens out. Absent a “pause” by cloud service providers on AI build outs, demand for CDUs will remain strong.
Existing racks can be retrofit for liquid cooling by adding cold plates and connecting to the plates with couplings and tubing. Heat is transferred from the chip to the cold plate and into the fluid, which is then circulated back to a coolant distribution unit (CDU). Most liquid cooling systems use two cooling loops: a primary cooling loop (also known as an external loop) and a secondary cooling loop (also known as an internal loop). A CDU is used to thermally couple the external and internal loops; heat is transferred from the internal loop to the external loop within the CDU. The CDU external loop connects to the data center infrastructure, while the internal loop connects to the piping and manifolds.
Rear-door heat exchangers
Rear-door heat exchangers help manage densities from 20kW to ~75kW. The technology does not bring liquid directly to the server, but the infrastructure is similar to direct liquid cooling. Passive or active heat exchangers replace the rear door of the IT equipment rack with a liquid heat exchanger. Passive heat exchangers rely on server fans to push heated air through a liquid-filled coil in the rear door of the rack; the coil absorbs the heat before the air returns to the data hall. Active heat exchangers add their own fans to pull air through the coils.
What’s in liquid cooling?
A liquid cooling system includes:
A CDU, which isolates a fluid loop from the rest of the cooling system. This is typically a single enclosure with all parts integrated within it. A CDU provides temperature control, flow control, pressure control, fluid treatment, and heat exchange; it takes the heat from the fluid loop and exchanges it out of the system while keeping the loop’s fluid isolated.
A redundant pump system (i.e., one more pump than needed) with filtration for the cooling fluid.
A heat exchanger, responsible for passing the heat to a secondary cooling loop.
A controller, which autonomously controls the pump system and gathers information from the different sensors.
Immersion cooling
Single-phase immersion cooling uses a pump to circulate coolant around immersed server racks. In two-phase immersion cooling, server heat turns the coolant into vapor, which then rises, condenses on coils, and returns to the bath. Both methods result in better power usage effectiveness (PUE) ratios.
Technically, there is ~2 GW of capacity using immersion cooling, but the majority relates to crypto mining rather than cloud, colocation, or enterprise data centers. One major barrier to adoption is that immersion cooling voids the chip OEM (original equipment manufacturer) warranty. Another barrier is the immersion fluid itself, whose chemistry involves per- and polyfluoroalkyl substances (PFAS). 3M (MMM) announced in December 2022 that it would exit its PFAS manufacturing business (~$1.6bn in sales), with an official phaseout by the end of 2025. While Illinois Tool Works (ITW) and others offer alternatives to 3M’s Novec fluid, we still see some reluctance from data center operators to adopt immersion.
Pros and Cons Part I: Liquid-to-air or liquid-to-liquid?
There are three main ways to reject heat from a server. Below we show the differences between existing heat rejection systems and dedicated heat rejection systems. Rejecting heat to air in the IT space (closed-loop heat rejection) utilizes a liquid-to-air CDU; the other two formats utilize liquid-to-liquid CDUs. The most energy-efficient approach is rejecting heat to an independent water system, which makes the most sense for large-scale AI server deployments. Closed-loop heat rejection is costly to run at large scale, but the upfront investment is much smaller and the time to deployment is shorter.
Pros and Cons Part II: Rack mounted or floor mounted?
Below, we show the advantages and disadvantages of a rack-mounted solution as opposed to a floor-mounted solution. An in-rack solution has the CDU within the IT rack space, typically mounted at the bottom of the rack. The CDU includes a pumping unit, filtration, and controls. Heat is transferred to the data center air via a fan-assisted rear-door heat exchanger (liquid to air) or to a facility loop via a liquid-to-liquid heat exchanger.
A floor-mounted CDU is dedicated to a row or multiple rows of racks, sharing an IT fluid loop. This can be placed at the end of the row or further away from the cluster. Similar to the in-rack unit, heat is transferred to the data center either via a fan-assisted rear-door heat exchanger (liquid to air) or to a facility loop via a liquid-to-liquid heat exchanger.
Pros and Cons Part III: Single-phase or two-phase?
Two-phase direct-to-chip liquid cooling involves the coolant going from a liquid to a gas. This phase transition can absorb more heat, but requires the use of specific coolants that boil in the required temperature ranges of safe operation of the semiconductor. The vapor then returns to the condenser for recirculation.
Two-phase direct-to-chip cooling remains a nascent approach (see more details in the COOLERCHIPS: research beyond D2C section below). Third-party research firms suggest a market size of less than $50mn in 2024. Vertiv has tested a prototype two-phase system (see Maturation of Pumped Two-Phase Liquid Cooling to Commercial Scale-Up Deployment, Nov. 2024).
It is important to note that both single-phase and two-phase approaches will need CDUs, cold plates, access to facility water supply, and heat rejection (e.g., cooling towers). The key difference is that two-phase systems use specialized coolants/refrigerants, while single-phase systems tend to use a water-glycol mix.
Practical difficulties with a two-phase approach include:
Managing the difference in density (and hence pressure) as the coolant/refrigerant enters the cold plate as a liquid versus exiting the cold plate as a gas. The variation in pressure in a single-phase approach is far less.
A water-glycol mix is cheap; the customized refrigerants needed for two-phase systems add to the overall cost. Manufacturers of cold plates and quick disconnects have optimized their products for single-phase systems; customized accessories for a two-phase system would add further cost.
Variations in the level of vaporization can result in a wider disparity of heat transfer versus single-phase systems. Vaporization (i.e., the creation of bubbles) can vary based on minor differences within the microchannels of cold plates. This can be overcome through system-wide simulation and testing, but adds complexity.
CDU competitive landscape
Given the high-growth prospects, it is not surprising that the liquid cooling market has seen many new entrants. There are 30 vendors offering more than 100 CDU variants in the market today. Given the conservative nature of data center operators, reputation and service capability will play a major factor in decision making. This bodes well for Vertiv, which has more than 440 service centers globally offering same-day service in most locations.
Given the nascent nature of the CDU market (~$1.2bn in revenue in 2024), we do not have the same level of confidence in market shares relative to larger, more established product categories. The table below groups the vendors into three tiers (with vendors listed alphabetically in each tier). Tier 1 vendors have at least $100mn in CDU-related revenue and offer multiple variants. Tier 2 vendors have strong products and existing thermal offerings, but we do not believe they have more than $100mn in CDU-related revenue.
For reference, the ten CDU manufacturers listed as Nvidia partners at the 2025 Computex conference were: Auras, Boyd, Cooler Master, CoolIT Systems, Delta, Flex/JetCool, LiteOn, Motivair/Schneider, Nidec, and Vertiv.
New entrants, such as Carrier, JetCool, Munters, Nautilus, Nortek, and Trane, may have garnered more revenue in 2025, but the focus is on 2024 given data availability.
Below is a brief view of the Tier 1 CDU vendors (in alphabetical order):
Delta Electronics
Delta Electronics (ticker: 2308 TT) is a manufacturer of power supplies and video display products. Delta offers liquid-to-liquid in-rack and in-row CDUs and liquid-to-air in-row CDUs.
nVent
nVent (ticker: NVT) is a US-based manufacturer of electrical products. nVent manufactures both in-rack and in-row solutions. nVent introduced its first standardized liquid cooling unit, RackChiller CDU800, in November 2020. The company also offers smaller, in-rack coolant distribution units. nVent has been building up CDU service capabilities as well.
Schneider Electric / Motivair
Schneider (ticker: SU FP) is a manufacturer of electrical and automation products. Within the data center business, the company offers a full range of electrical and thermal products. Within thermal, Schneider offers traditional air-cooling products, as well as in-row, in-rack, and floor-mounted liquid cooling solutions. Schneider closed on the acquisition of US-based Motivair in February 2025. We estimate combined pro forma CDU sales in excess of $100mn.
Vertiv
Vertiv (ticker: VRT) has a large portfolio of liquid cooling products. This includes coolant distribution units (CDUs), active and passive rear-door heat exchangers, and heat rejection systems. The company acquired CoolTera in 2023, which added to its CDU manufacturing capabilities and intellectual property position.
Overview of data center electrical systems
Tracing electricity from the utility to the server
The utility provides either high-voltage or medium-voltage electricity. For larger data centers taking high-voltage lines, there would be a step-down power transformer located near the site. Alternating current (AC) electricity enters the data center at medium voltage. This will pass through switchgear before being stepped down to low voltage AC by a transformer.
Electricity then goes to the uninterruptible power supply (UPS). The UPS converts the electricity to DC to charge the batteries, then converts it back to AC to send on. Traditionally, electricity was then distributed through power distribution units (PDUs). In more modern data centers, electricity goes through a higher-capacity busway. The electricity then goes to the rack, where it flows into rack power distribution units (typically along the side of the rack). Individual servers are plugged into these rack PDUs. Finally, the electricity is converted to DC by power supply units (PSUs) located within the rack.
Uninterruptible power supply (UPS)
Uninterruptible power supplies (UPS) provide automated backup electrical power for a data center. UPS systems can also perform power conditioning, smoothing out voltage fluctuations, under- or over-voltage conditions, and frequency variations. We focus on the large-scale UPS market, which is most applicable to data centers.
UPS systems provide short-term backup power (typically 15-30 minutes) in the event of a power failure until the backup generators are up and running. The size of the UPS battery array is therefore proportional to the supported data center electrical load.
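The proportionality is easy to see in energy terms, sketched below under the 15-30 minute ride-through window cited above; real installations would add margin for battery derating and end-of-life capacity fade, which are our assumptions, not figures from the report.

```python
# UPS battery energy needed to bridge a generator start-up window.
def battery_mwh(load_mw, ride_through_minutes):
    return load_mw * ride_through_minutes / 60

for minutes in (15, 30):
    print(f"{minutes} min at 10 MW: {battery_mwh(10, minutes):.1f} MWh")
# 2.5-5.0 MWh of usable storage per 10 MW of load, before derating
# (depth-of-discharge and aging margins would add to this).
```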
The market is estimated at $8.5-9.5bn, which includes a significant amount of replacement revenue. Data center operators typically replace UPS every 10 years, given increased risk of failure after this period. Based on the 9 GW of data center capacity added in 2024, the new install market size is closer to $7bn.
Switchgear
Switchgear is equipment that controls, protects, and isolates electrical equipment. Common components include switches, fuses, isolators, relays, and circuit breakers. Data centers typically have two kinds of switchgear: a set of medium-voltage switchgear for the incoming electrical supply from the utility before it reaches the step-down transformer, and a set of low-voltage switchgear before the electrical supply reaches the UPS backup batteries.
Based on the 9 GW of data center capacity added in 2024, the market size is estimated at $5.0-5.5bn. Switchgear is ubiquitous throughout electricity distribution and the broader market is more than $100bn. The broader market is dominated by ABB, Eaton, Legrand, Schneider, and Siemens.
Power distribution equipment
Traditionally, electrical power went from the UPS system to power distribution units (PDUs). PDU components typically include circuit breakers, power monitoring panels, power metering, and cabling to each rack. PDUs have drawbacks, including taking up floor space in the data hall and generating waste heat.
In high-density data centers, busway is an alternative approach to PDUs. Busway is typically mounted overhead, providing power to each rack through plug-in units with breakers. The busway draws power directly from low-voltage switchgear. While busway takes up less floor space, it is typically more expensive to install and less flexible to changes in rack location.
Finally, rack power distribution units (rPDUs) are mounted on the rack itself. rPDUs provide outlets to plug in servers, storage, and networking equipment. These are the last step in power distribution to IT equipment. In tier 3 and tier 4 data centers, there are two rack PDUs for each rack, providing redundancy.
Based on the 9 GW of data center capacity added in 2024, BofA estimates a $4.2-4.7bn market size. Similar to UPS, data center operators typically replace rack PDUs every 10 years, given increased risk of failure after this period, so this includes a portion of replacement revenue.
Engineering
Design & engineering services generally cost 4.5-6.5% of the infrastructure costs of a data center (i.e., excluding IT equipment). Publicly traded US firms covered by our colleague Michael Feniger include Jacobs Solutions (J), Fluor (FLR), and AECOM (ACM). These firms are highly diversified among end markets, with data centers representing a small percentage of total revenue.
These engineers plan the electrical, mechanical, cooling, fire protection, and physical security systems of the data center. Importantly, they must understand the IT infrastructure (e.g., network, routing, storage), which has implications for the physical infrastructure requirements. Based on the 9 GW of data center capacity added in 2024, the market is estimated at $4bn.
Construction
Construction firms oversee all aspects of the construction project including project management, specialty contractors, material purchasing, and equipment rental. These firms report modest operating margins, given the large amount of pass-through costs. The average operating margin is approximately 4% among publicly traded contractors with data center exposure.
Publicly traded firms include Balfour Beatty and Skanska. There are also smaller, private construction firms that specialize in data centers, such as US-based T5 Construction Services and Ireland-based Mercury Engineering.
Based on the 9 GW of data center capacity added in 2024, we estimate a $65-80bn market size. However, this would include material & equipment pass-through costs. Using an average margin of 4%, this would imply $2.6-3.2bn of operating profit.
Generators
Typical diesel backup generators cost $400-550,000/MW. Total system costs would include fuel tank, fuel pump, and installation costs, which collectively add an additional $350-500,000/MW. Generators are typically sized to fully supply the electrical consumption of the data center; for example, a 10MW data center will typically have 10MW worth of generator power on site to ensure 99.999% uptime for clients.
Based on the 9 GW of data center capacity added in 2024, we estimate a $7.2bn market size for generator equipment only (excluding ancillary products and installation costs). In 2023, Cummins gave a $6bn market size, but this has likely expanded significantly in 2024.
Servers
Servers are the largest single product category of data center capex. In 2024, data centers bought 13.5mn servers, spending approximately $280bn. On a dollar basis, AI servers comprised approximately half of this spending, but traditional servers represented the vast majority on a unit basis.
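Dividing those two figures gives a blended average price near $21,000 per server, which masks a wide spread between six-figure AI systems and the far cheaper traditional units that dominate volumes:

```python
# Blended average server price implied by the 2024 figures above.
spend_usd, units = 280e9, 13.5e6
print(f"${spend_usd / units:,.0f} per server")   # ~$20,741 blended average
```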
Original design manufacturers (ODMs) build servers directly for large buyers, typically to the buyer’s own design. For example, Google’s Tensor Processing Units (TPUs) are custom semiconductors, and these are put into custom-designed servers at Google Cloud data centers. Similarly, Amazon Web Services has its own custom semiconductors (Graviton) and servers. Original equipment manufacturers (OEMs), by contrast, design and sell servers under their own brands, configured to clients’ specifications.
Networking equipment
Networking equipment includes several different types of devices. Switches handle communication within the data center or local area network; typically, each rack has a networking switch. Routers handle traffic between buildings, typically using internet protocol (IP). Some cloud service providers use “white box” networking switches (i.e., manufactured by third parties to their specifications).
AI workloads are bandwidth-intensive, connecting hundreds of processors with gigabits of throughput. As these AI models grow, the number of GPUs required to process them grows, meaning bigger networks are required to interconnect the GPUs.
The market size for networking equipment is estimated at $36bn. Arista and Cisco Systems are the two largest vendors.
Last but not least, to get a true sense of the scale of the data center revolution, here is a stunning chart we first highlighted two months ago, showing that in the next few months, data center construction will surpass construction of all general offices.
Much more in the full BofA report available to pro subs.