BLOG

Global Liquid Cooling Information - June 21

AMD launches Instinct MI350 GPUs, unveils double-wide Helios AI rack-scale system

AMD has announced that its Instinct MI350 series GPUs, consisting of both Instinct MI350X and MI355X offerings, are in production.

The chip designer also previewed Helios, a rack-scale system based on the company’s forthcoming MI400 series of GPUs, the successor to the MI350 series, set to be released in 2026.

Built using 3nm technology and based on AMD’s CDNA 4 architecture, the MI350 series offers 288GB of HBM3E memory and 8TB/s of memory bandwidth. The MI350X provides 72 teraflops of FP64 compute and the MI355X 79 teraflops, with total board power (TBP) of up to 1,000W and 1,400W, respectively.

At rack scale, both offerings will be available in an air-cooled configuration, scalable to 64 GPUs, and a direct liquid-cooled configuration, which can scale to either 96 or 128 GPUs.
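
For a sense of the power involved, here is a rough estimate of GPU board power alone at the configuration sizes quoted above. It is illustrative arithmetic only, not an AMD figure: it assumes the air-cooled configuration uses MI350X parts and the liquid-cooled configurations use MI355X parts (the pairing is not stated), and it ignores CPUs, networking, storage, and cooling overhead.

```python
# Rough GPU-only power estimate for the MI350-series configurations quoted above.
# Illustrative arithmetic from the article's figures, not AMD specifications for
# a full deployment (CPUs, NICs, storage, and cooling overhead are ignored).

configs = {
    "Air-cooled, 64x MI350X (1,000W TBP)":    64 * 1.0,   # kW
    "Liquid-cooled, 96x MI355X (1,400W TBP)": 96 * 1.4,   # kW
    "Liquid-cooled, 128x MI355X (1,400W TBP)": 128 * 1.4, # kW
}

for name, kw in configs.items():
    print(f"{name}: ~{kw:.0f} kW of GPU board power")
# -> ~64 kW, ~134 kW, and ~179 kW respectively
```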

AMD also unveiled ROCm 7, the latest version of its open-source AI software stack, which the company said will offer more than 4x inference and 3x training performance improvements compared to ROCm 6.0.

Slated to be available in 2026, AMD’s Helios rack infrastructure is a unified architecture designed for both frontier model training and large-scale inference, delivering “leadership” across compute density, memory bandwidth, and scale-out interconnect.

The double-wide Helios AI rack is fully integrated with AMD’s Zen 6 Epyc CPUs, MI400 GPUs, and Vulcano NICs.

AMD is not currently disclosing power specifications for the forthcoming GPU. However, the company did announce a new 2030 goal to deliver a 20x increase in rack-scale energy efficiency from a 2024 base year, enabling a typical AI model that today requires more than 275 racks to be trained in fewer than one fully utilized rack by 2030, using 95 percent less electricity.
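
As a quick sanity check on how those headline numbers fit together, the arithmetic below uses only the figures quoted above; it is not AMD's published methodology.

```python
# Back-of-the-envelope check of AMD's 2030 goal, using only the figures quoted
# above (illustrative arithmetic, not AMD's published methodology).

hw_efficiency_gain = 20      # claimed rack-scale energy-efficiency gain vs. 2024
racks_today = 275            # racks needed to train the reference model today
racks_2030 = 1               # "fewer than one fully utilized rack" by 2030

# A 20x efficiency gain means the same work needs 1/20 of the energy:
energy_fraction = 1 / hw_efficiency_gain
print(f"Electricity saved: {1 - energy_fraction:.0%}")                   # -> 95%

# The implied rack consolidation is far larger than 20x, so the goal also
# assumes gains beyond hardware efficiency (e.g. software and model advances):
print(f"Implied rack consolidation: >{racks_today / racks_2030:.0f}x")   # -> >275x
```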

Nvidia claims 3,000 exaflops of Blackwell compute is coming to Europe

GPU and AI giant Nvidia has claimed that 'more than' 3,000 exaflops of compute is coming to Europe through Blackwell deployments.

The company detailed a number of already announced and new projects across the continent that it believes will add up to the figure.

Nvidia did not disclose the benchmark used, but it is almost certainly an AI-focused one such as HPL-AI, rather than the standard higher-precision HPL found in the recent Top500 report.

The projects span France, Italy, Spain, and the UK.

In France, Nvidia said that it is working with Mistral AI for a cloud platform with 18,000 Grace Blackwell systems in the first phase, with plans to expand across multiple sites in 2026.

In May, the two companies announced a 1.4GW campus outside Paris backed by French national investment bank Bpifrance and UAE investment fund MGX.

Over in Germany, Nvidia announced a new industrial AI cloud for European manufacturers. The AI factory will feature Nvidia DGX B200 systems and RTX Pro Servers, with 10,000 Blackwell GPUs.

In Italy, Domyn will deploy its Colosseum supercomputer with an undisclosed number of Grace Blackwell Superchips. The announcement itself is not new, only the name is: iGenius rebranded to Domyn this week, and the supercomputer was first announced in April.


Schneider Electric joins Nvidia in pitch for EU AI data centers

Schneider Electric is partnering with Nvidia in pitching infrastructure to the European Commission’s (EC) AI Continent Action Plan.

The EC hopes to establish a number of AI gigafactories, each housing around 100,000 next-generation AI chips, building upon the existing EuroHPC JU supercomputing effort.

The AI Continent Action Plan includes a €20 billion ($22bn) investment for up to five AI gigafactories across the Union and 13 smaller ones, with funding coming from both government and private sources.

Schneider Electric and Nvidia are together responding to the European Commission’s plan, building on a previous non-exclusive partnership. Schneider has worked with Nvidia on server and data center reference architectures since last year, as well as on digital twins through Omniverse.

Nvidia this week claimed that more than 3,000 exaflops of Blackwell AI compute was headed to the continent, although how much of that will be in data centers using Schneider Electric’s reference architectures is unclear.


NextDC announces plans for data center in Melbourne, promises 1MW rack densities

Australian data center firm NextDC is to expand its footprint with a new development in Melbourne.

The company this week announced an AU$2 billion (US$1.29bn) commitment to develop M4 Melbourne, a new campus at 127 Todd Road in Port Melbourne.

Located on the former Westgate Park Printing Complex, once home to the nation’s largest newspaper presses, the 150MW Fishermans Bend campus will span 50,000 sqm (538,195 sq ft).

The company said the site will host an AI Factory: a liquid-cooled facility engineered for sovereign AI. NextDC said the factory will be designed to support Nvidia Blackwell and Rubin Ultra GPUs and offer rack densities beyond 1,000kW.

The site will include on-site solar and microgrids, offer its waste heat for district networks, and utilize recycled wastewater cooling.

NextDC acquired the land in Melbourne back in 2023, and news that the company was in the planning phase for a new facility surfaced in January. Phase 1 is set to offer around 10MW, according to a previous end-of-year results presentation.

NextDC currently operates three data centers in Melbourne. The 3,000-rack M1, also located in Port Melbourne, went live in 2012 and offers 15MW across 6,000 sqm (64,583 sq ft); M2 went live in 2017, offering 60MW across 25,000 sqm (269,098 sq ft); and M3 launched in 2022, offering 150MW across 40,000 sqm (430,556 sq ft).


Naver plans 500MW data center campus in Morocco

Korean Internet giant Naver is planning a data center development in Africa.

Naver Cloud this week announced a partnership with Nvidia, Nexus Core Systems, and investment firm Lloyds Capital to build a 500MW AI data center in Morocco.

The project aims to provide sovereign AI computing services throughout the EMEA region, which encompasses Europe, the Middle East, and Africa.

The initial phase, available within the year, will offer 40MW and feature Nvidia’s Blackwell GB200 GPUs. The site is then expected to be gradually expanded to a maximum of 500MW. Further details weren’t shared.

Naver Cloud currently lists six cloud regions in South Korea, as well as one each in Japan, Germany, Singapore, and the US (West Coast); future regions are planned in the US (East Coast), Vietnam, Taiwan, and Thailand. It previously exited Hong Kong.

On its website, Nexus Core Systems lists plans for a 12-hectare Moroccan site, as well as plans for a 10MW site in Charlotte, North Carolina, and capacity totaling 40MW across Sweden. The company claims to be able to support up to 400kW per rack via liquid cooling.

The US site would host Nvidia GB200 GPUs and is reportedly set to launch in Q1 2026.


SK Innovation to deploy integrated energy solution at BDC data center in Malaysia

South Korean energy company SK Innovation is set to deploy an integrated energy solution at a Bridge Data Centres (BDC) facility in Malaysia.

The two companies signed a Memorandum of Understanding (MoU) outlining the deployment of an energy management system and cooling solution at one of the largest data centers being constructed by BDC in the country.

The exact data center was not disclosed. BDC currently has six data centers in operation or development across Malaysia. Most recently, BDC announced a joint venture with Mah Sing Group to develop a 200MW data center campus just outside Kuala Lumpur.

According to SK Innovation, it will implement a comprehensive suite of “next-generation” energy systems for the project. This will include an artificial intelligence data center management system (DCMS), energy storage systems, fuel cell auxiliary power systems, and an advanced immersion cooling system.

SK’s subsidiary, SK Enmove, will provide the immersion cooling system, which is reportedly the first of its kind to be developed in the Korean market. The system will submerge servers within a special cooling liquid, making it suitable for use in data centers with a high percentage of GPUs.



Polar supports Crusoe’s AI growth with new high-performance data center

Polar, a leader in high-density, sustainable data center infrastructure, has announced the latest milestone in its ambitious European expansion plans: a strategic partnership with Crusoe to deliver next-generation AI infrastructure at a new 12MW facility (DRA01) in Norway.

The state-of-the-art facility, powered entirely by hydroelectric energy, represents a new standard in performance while minimizing environmental impact. DRA01 will host Crusoe’s scalable platform for advanced AI workloads, serving customers across Europe and beyond.

Polar’s Norwegian facility is optimized for GPU workloads, offering high-density rack configurations, robust energy efficiency and cutting-edge cooling technology. The 12MW deployment will be ready for service later this year and has the option to scale up to 52MW.

Engineered specifically for next-generation AI applications, the Polar facility features advanced liquid cooling systems and high-density rack configurations supporting up to 115kW per rack. These cutting-edge capabilities will enable Crusoe to deploy and scale its Cloud platform efficiently, and in an environmentally responsible way.
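
As a rough illustration of what those figures imply, the sketch below estimates an upper bound on rack count by treating all of the quoted site capacity as IT load at the maximum per-rack density; real deployments would support fewer racks once cooling, electrical overhead, and lower-density rows are accounted for.

```python
# Rough upper bound on rack count at Polar's DRA01, from the figures quoted
# above. Treats all capacity as IT load at maximum density, so it overstates
# what a real deployment would support (no allowance for overhead).

initial_capacity_kw = 12_000   # 12MW initial deployment
max_capacity_kw = 52_000       # optional scale-up to 52MW
rack_density_kw = 115          # quoted maximum per-rack density

print(f"Initial phase: up to ~{initial_capacity_kw // rack_density_kw} racks")  # ~104
print(f"Full build-out: up to ~{max_capacity_kw // rack_density_kw} racks")     # ~452
```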

