Wiwynn Demonstrates Cutting-Edge AI Server Technology at GTC 2025
Wiwynn, a leading cloud IT infrastructure provider, showcased its latest AI servers at GTC 2025 in San Jose, California. The showcase highlighted the company’s collaboration with Wistron and featured AI computing solutions powered by the NVIDIA Blackwell Ultra platform. These developments underscore Wiwynn’s commitment to advancing AI technology and data center efficiency.
Revolutionary NVIDIA GB300 NVL72 Platform
One of the key highlights of Wiwynn’s showcase was the NVIDIA GB300 NVL72 platform. The system marks a significant step forward in AI computing, with increased AI FLOPS and more than 20TB of HBM3e memory, enabling it to handle increasingly demanding AI workloads, from reasoning models to video inference.
The system’s fully liquid-cooled design maintains thermal efficiency, a crucial requirement for AI workloads that combine high performance with dense power draw. By integrating NVIDIA ConnectX®-8 SuperNICs and Quantum-X800 InfiniBand networking, Wiwynn ensures its AI servers are both powerful and efficient.
The Debut of NVIDIA HGX™ B300 NVL16
Wiwynn also debuted the HGX™ B300 NVL16, a 10U system built on NVIDIA’s Blackwell Ultra architecture. It is designed to boost memory capacity and overall computational performance for today’s advanced AI applications, and its dual power delivery options enable rapid deployment across a range of existing data center setups.
Enhanced Networking with NVIDIA Spectrum-4
Wiwynn also showcased networking advances with the NVIDIA Spectrum-4 MAC. The integration is designed to provide multi-tenant hyperscale AI clouds with high-performance Ethernet connectivity, enabling flexible deployment scenarios within data centers.
UMS100L: A Dedicated Cooling Management System
In thermal management, Wiwynn presented the UMS100L, a rack-level liquid cooling management system engineered for deployment across data center facilities and compatible with various in-row cooling devices. With features such as leakage detection and real-time monitoring, the UMS100L keeps critical equipment operating safely and efficiently, minimizing the leakage risks inherent to liquid cooling.
Commitment to AI Advancement
William Lin, President of Wiwynn, emphasized the importance of integrated systems in meeting the growing demands of AI computing. “Optimizing computing power involves a holistic approach that includes GPUs, cooling systems, and robust networking,” he stated. Wiwynn’s readiness for the NVIDIA GB300 NVL72 demonstrates its leadership in delivering high-performance AI solutions and solidifies its role in the evolving AI landscape.
Kaustubh Sanghani, NVIDIA’s vice president of GPU products, echoed these sentiments, noting how the collaboration between Wiwynn and NVIDIA is reshaping data centers for the future of computing.
About Wiwynn
Wiwynn is an innovative provider of cloud IT infrastructure, offering advanced computing and storage solutions for leading data centers worldwide. Its vision is to harness the power of digitalization while igniting innovation in sustainability. By continually investing in next-generation technologies, Wiwynn aims to optimize total cost of ownership for data centers and keep them at the forefront of technological advancement.
For more information, visit Wiwynn’s website or its pages on Facebook and LinkedIn.