Czech modular data center manufacturer ModulEdge has partnered with liquid cooling firm Comino to deliver AI infrastructure across Europe and MENA.
In a statement, the companies said the partnership aims to address the 12-18 month procurement cycles currently facing enterprises looking to deploy AI infrastructure, with their joint solution reducing timelines to 3-6 months.
The joint solution combines ModulEdge's modular data centers with Comino's liquid-cooled GPU systems, with the latter able to provide Nvidia RTX Pro 6000, H200, B200, B300, and GB300 hardware as part of the offering. Comino’s liquid cooling solution achieves a PUE of 1.05–1.1, the company added.
Yuri Milyutin, commercial director and partner at ModulEdge, added: "The AI infrastructure conversation has shifted. Organizations aren't asking whether they need on-premise compute – they're asking how fast they can get it deployed without compromising on security or reliability. Our partnership with Comino answers that question with a proven, deployable solution that doesn't require 18 months of construction and permitting."

Why is direct-to-chip cooling dominating?
Direct-to-chip is the dominant GPU cooling solution because liquid removes heat more efficiently than air, allowing GPUs to sustain high utilization without thermal throttling. As rack power density increases, air cooling becomes impractical due to the energy and space requirements it entails.
Besides saving space, liquid cooling requires far less energy than the fans used in air cooling, which helps improve power usage effectiveness (PUE) in AI data centers. In addition, liquid cooling maintains stable thermal conditions under sustained electrical load, without the need to overprovision or derate hardware.
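To make the PUE claim concrete, here is a minimal sketch of how cooling overhead drives the metric. The overhead figures are illustrative assumptions, not vendor data; only the PUE formula itself (total facility power divided by IT power) comes from the standard definition.

```python
# Illustrative PUE comparison. The kW overhead figures below are
# assumed for the sake of the example, not measured vendor numbers.

def pue(it_power_kw: float, cooling_kw: float, other_overhead_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT power."""
    total = it_power_kw + cooling_kw + other_overhead_kw
    return total / it_power_kw

# Air cooling: large fan and chiller load relative to IT power.
air = pue(it_power_kw=1000, cooling_kw=350, other_overhead_kw=80)     # -> 1.43

# Direct-to-chip liquid cooling: pumps and a warm-water loop draw far less.
liquid = pue(it_power_kw=1000, cooling_kw=60, other_overhead_kw=40)   # -> 1.10
```

Under these assumed loads, the liquid-cooled case lands at 1.10, within the 1.05-1.1 range Comino cites, while the air-cooled case sits well above it.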
This explains why AI data center reference designs increasingly assume liquid interfaces, predictable flow envelopes, and standardized rack distribution architectures. Physics, scalability, and operational predictability are driving the shift to direct-to-chip, not vendor preferences.
What causes direct-to-chip cooling to fail or succeed?
Direct-to-chip deployments don’t always deliver the desired performance and reliability. The issue typically isn’t component failure but system-level mismatches. For instance, if the pumping capacity is inadequate, it can create a bottleneck over time as rack power increases. Other common issues are:
Insufficient heat exchanger margin, which creates instability during peak electrical loading and transient training events. The margin provides extra surface area to handle fluctuations.
Limited telemetry, preventing operators from correlating power draw with flow rates, temperature increases (ΔT), and return temperatures.
Electrical upgrades, which alter thermal behavior and require coordinated modeling and validation across the cooling and power domains.
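The pumping-capacity and telemetry points above can be sketched with a basic heat-balance check. This is a hypothetical sanity check, not any vendor's tooling: it uses the standard relation Q = ṁ·cp·ΔT for water to estimate the coolant flow a rack needs, so an operator can compare it against installed pump capacity as rack power grows.

```python
# Hypothetical flow-rate sanity check for direct-to-chip cooling.
# Q = m_dot * cp * delta_T  ->  m_dot = Q / (cp * delta_T)
# Water properties near typical loop temperatures:
#   cp ≈ 4186 J/(kg·K), density ≈ 997 kg/m³

def required_flow_lpm(rack_power_w: float, delta_t_c: float,
                      cp: float = 4186.0, density: float = 997.0) -> float:
    """Litres/minute of water needed to absorb rack_power_w at a coolant rise of delta_t_c."""
    m_dot = rack_power_w / (cp * delta_t_c)   # mass flow in kg/s
    return m_dot / density * 1000.0 * 60.0    # convert kg/s -> L/min

# A 120 kW rack with a 10 °C supply-to-return rise needs roughly 170+ L/min.
flow = required_flow_lpm(120_000, 10)
```

Doubling rack power at the same ΔT doubles the required flow, which is exactly the kind of relationship that makes an undersized pump a slow-building bottleneck, and why telemetry correlating power draw with flow and ΔT matters.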

Microsoft will invest more than $1 billion in cloud and AI infrastructure in Thailand over the next two years, vice chair and president Brad Smith said.
As reported by the Wall Street Journal, Smith made the commitment on March 31 after a meeting with Thai Prime Minister Anutin Charnvirakul.
The investment will be used to expand the company's data center footprint, upskill local talent, and invest in cybersecurity and sovereign technology in the country.
Microsoft previously committed to investing $2.85bn in Thailand in 2023, though a timeline for this was not shared. A few months later, it officially revealed plans for a cloud region in the country.
In October 2025, the company revealed it was partnering with Charoen Pokphand Group and True Corporation on its cloud region in Thailand, with True Internet Data Center set to serve as one of the facilities supporting the region. The cloud region has yet to launch.
Competitors Amazon Web Services and Google Cloud have operating regions in the country. Google announced plans for a Thai cloud region in August 2022 and launched the region earlier this year. Amazon similarly made an announcement in October 2022, officially launching its region in January 2026. Amazon also launched a Local Zone Edge location in Bangkok in December 2022.

Amazon Web Services plans to deploy more than one million Nvidia GPUs, including the Blackwell and Rubin architectures, within the next 12 months.
Amazon said that it currently offers the broadest collection of Nvidia GPU-based instances of any cloud provider.
This February, AWS CEO Matt Garman said that the company was still running six-year-old Nvidia A100 servers, and had yet to retire any of the chips, due to there being "so much more demand than supply."
AWS made the latest Nvidia Blackwell Ultra GPUs generally available last December, and plans to roll out Rubin when it is launched later this year.
At the same time, the company said that it would continue to invest in Trainium, its own in-house AI accelerator effort.
In February, OpenAI announced that it would spend $2bn on Trainium compute (as well as GPUs on AWS), following a $50bn investment from Amazon.

Orbital data center startup Aetherflux is reportedly raising a Series B funding round at a $2 billion valuation.
According to reporting by the Wall Street Journal, citing sources familiar with the matter, Aetherflux aims to raise between $250 million and $300 million in the latest funding round, led by current investor Index Ventures.
Aetherflux plans to deliver high-performance AI compute data centers in orbit, with the first data center node for commercial use targeted for Q1 2027.
The company said it aims to help solve access to energy for scaling AI infrastructure by pioneering orbital data center satellites to leverage solar power in space.
Aetherflux aims to deploy a constellation of satellites, dubbed “Galactic Brain.”
The company raised $50 million in a Series A round last year, led by Index Ventures and technology investment company Interlagos. The Series A round saw investment from Bill Gates's Breakthrough Energy Ventures, Andreessen Horowitz, New Enterprise Associates, and actor Jared Leto. It has since raised an additional $30 million, according to the Wall Street Journal's sources.
Baiju Bhatt, co-founder of trading platform Robinhood, founded Aetherflux in 2024 after stepping down as a Robinhood executive, seeding the company with an initial $10 million.
At its GTC 2026 conference earlier this month, Nvidia announced that it had developed a space-specific module of its Vera Rubin GPU-CPU platform, and would operate this compute hardware in orbit with the participation of a number of space companies, including Aetherflux.
