Data center operators are installing liquid cooling infrastructure and next-generation network interconnects to support AI compute demands that exceed traditional architecture limits.
Supermicro is delivering Red Hat-certified systems with NVIDIA accelerators designed for AI factories. "Our validated solutions for the Red Hat AI Factory with NVIDIA help ensure that customers can combine our high-performance, purpose-built systems with a robust, enterprise-grade software platform," said Vik Malyala, an executive at Supermicro.
Network providers are deploying AI-Scale Ethernet and 224G retimers to eliminate bandwidth bottlenecks in training clusters. Nokia is integrating AI-RAN capabilities to distribute intelligence across network layers. "Physical AI requires an intelligent network underpinned by AI-RAN so operators can fully harness distributed intelligence across every layer of the network," said Ronnie Vasishta at Nokia.
Cooling innovation extends to marine environments. Offshore data centers use seawater for thermal management, though saltwater introduces corrosion and fouling challenges. "The marine environment is pretty brutal to engineer around because there's the increased salinity, there's debris, and various kinds of corrosion and fouling of metal piping," said Daniel King, who analyzes underwater deployments.
Edge computing platforms are adopting new security frameworks. Veea Inc. open-sourced Lobster Trap, a scanning system that operates in under one millisecond. The company also launched TerraFabric for AI deployment at network edges. "Based on large scale deployments to date, we believe this allows organizations to accelerate updates and deploy new capabilities without compromising overall system stability," Veea stated.
The infrastructure buildout reflects a reallocation of capital from traditional server farms to AI-optimized facilities. Optical scale-up architectures and purpose-built cooling represent multi-billion-dollar commitments that will reshape data center economics over the next investment cycle.