MSI Unveils Comprehensive Server Portfolio at COMPUTEX 2025

At COMPUTEX 2025 in Taipei, MSI, a leading global provider of high-performance server solutions, presents its most extensive server portfolio to date at booth #J0506. The lineup includes fully integrated EIA, OCP ORv3, and NVIDIA MGX-powered racks, core compute servers based on DC-MHS technology, and the latest NVIDIA DGX Station. These solutions underscore the company's commitment to delivering workload-tailored infrastructure for hyperscale, cloud, and enterprise environments.

Danny Hsu, CEO of Enterprise Platform Solutions at MSI, stated, "The future of data infrastructure is modular, open, and optimized for specific workloads. At COMPUTEX 2025, we demonstrate how MSI is evolving into a full-stack server provider, delivering integrated platforms that help our customers scale AI, cloud, and enterprise deployments with greater efficiency and flexibility."

Rack Integration from Cloud to AI Data Centers

MSI showcases its expertise in rack integration with fully configured EIA 19-inch, OCP ORv3 21-inch, and AI racks powered by NVIDIA MGX. These racks are engineered to run modern infrastructure, from cloud-native computing to AI-optimized applications. Each rack is pre-integrated and thermally optimized, ready to support its target workloads. Together, they highlight MSI's ability to deliver complete, workload-optimized infrastructure from design to deployment.

  • EIA Rack: This high-density computing solution is ideal for private cloud and virtualization environments, integrating core infrastructure within a standard 19-inch format.
  • OCP ORv3 Rack: Featuring an open 21-inch design, it provides higher compute and storage density along with efficient 48V power delivery and OpenBMC-compatible management, making it well suited to hyperscale and software-defined data centers.
  • Enterprise AI Rack: Built on the NVIDIA Enterprise Reference Architecture, this rack provides scalable GPU infrastructure for AI and high-performance computing (HPC) workloads. Its modular units and high-throughput NVIDIA Spectrum™-X networking support scalable multi-node configurations optimized for large-scale training, inference, and hybrid workloads.

Core Compute and Open Compute Servers for Modular Cloud Infrastructure

Extending its core compute offerings, MSI introduces six DC-MHS servers with AMD EPYC 9005 Series and Intel Xeon 6 processors in 2U4N and 2U2N configurations. Designed for scalable cloud implementations, this portfolio features high-density nodes supported by liquid or air cooling, and compact systems optimized for energy and space efficiency. Supporting OCP DC-SCM, PCIe 5.0, and DDR5 DRAM, these servers enable modular, cross-platform integration and simplified management in private, hybrid, and edge cloud environments.

Further enhancing Open Compute flexibility, MSI presents the CD281-S4051-X2, a 2OU 2-node ORv3 Open Compute server based on the DC-MHS architecture and optimized for hyperscale cloud infrastructure. It supports a single AMD EPYC 9005 processor per node, offers high storage density with twelve E3.S NVMe slots per node, and integrates efficient 48V power delivery alongside OpenBMC-compatible management, making it well suited to software-defined and energy-conscious cloud environments.

High-Density Solutions for Various Workloads

  • AMD EPYC 9005 Series based Platform:
    - CD270-S4051-X4 (Liquid Cooling): A liquid-cooled 2U 4-node server supporting up to 500W TDP, with each node featuring 12 DDR5 DIMM slots and 2 U.2 NVMe drive bays. Ideal for high-density compute in thermally constrained cloud deployments.
    - CD270-S4051-X4 (Air Cooling): This air-cooled 2U 4-node system supports up to 400W TDP, providing energy-efficient compute with 12 DDR5 DIMM slots and 3 U.2 NVMe bays per node. Designed for virtualization, container hosting, and private cloud clusters.
    - CD270-S4051-X2: A 2U 2-node server optimized for space savings and compute density. Each node has 12 DDR5 DIMM slots and 6 U.2 NVMe slots, making it suitable for general virtualization and edge cloud nodes.

  • Intel Xeon 6 Processor based Platform:
    - CD270-S3061-X4: A 2U 4-node server featuring Intel Xeon 6700/6500 processors, with 16 DDR5 DIMM slots and 3 U.2 NVMe bays per node. Ideal for containerized services and mixed cloud workloads.
    - CD270-S3061-X2: This compact 2U 2-node system provides 16 DDR5 DIMM slots and 6 U.2 NVMe bays per node, delivering robust compute and storage for core infrastructure and scalable cloud services.
    - CD270-S3071-X2: A 2U 2-node system designed for I/O-intensive workloads, with 12 DDR5 DIMM slots and 6 U.2 bays per node. Suited to storage-heavy and data-intensive cloud applications.

AI Platforms Featuring NVIDIA MGX and the DGX Station

MSI also presents a wide array of AI-capable platforms, including NVIDIA MGX-based servers and the DGX Station built on NVIDIA's Grace and Blackwell architectures. The MGX series spans 4U and 2U form factors optimized for high-density AI training and inference, while the DGX Station offers data center-class performance in a desktop enclosure for on-premises model development and edge AI deployment.

  • AI Platforms with NVIDIA MGX:
    - CG480-S5063 (Intel) / CG480-S6053 (AMD): This 4U MGX GPU server is available in two CPU configurations, offering flexibility across processor ecosystems. Both systems support up to 8 FHFL dual-width PCIe 5.0 GPUs in air-cooled environments, making them ideal for deep learning training, generative AI, and high-throughput inference.
    - CG290-S3063: A compact 2U MGX server with a single Intel Xeon 6700/6500 processor, supporting 16 DDR5 DIMM slots and 4 dual-width FHFL GPU slots. It is designed for edge inference and lightweight AI training in space-constrained deployments where inference latency and energy efficiency are critical.
  • DGX Station: The CT60-S8060 is a powerful AI station based on the NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip. It delivers up to 20 PFLOPS of AI performance with 784 GB of unified memory, and features the NVIDIA ConnectX-8 SuperNIC, enabling network speeds of up to 800 Gb/s for high-speed data transfer and multi-node scaling. Engineered for on-site model training and inference with multi-user support, it can serve as either a standalone AI workstation or a shared central compute resource for teams.

MSI's showcase at COMPUTEX 2025 underscores its commitment to innovation and leadership in high-performance server solutions, meeting the evolving demands of cloud and AI infrastructure.
