Unmatched MLPerf Submission by CoreWeave, NVIDIA, and IBM
On June 4, 2025, CoreWeave, in partnership with NVIDIA and IBM, unveiled the largest-ever submission to the MLPerf® Training v5.0 benchmark. The run was executed on 2,496 NVIDIA Blackwell GPUs through CoreWeave's AI-focused cloud platform, the highest GPU count deployed in a single submission to date. The result sets a new bar for cloud providers and underscores CoreWeave's position in meeting contemporary AI demands.
A New Era for AI Workloads
CoreWeave's submission not only surpasses all previous cloud provider results but is also 34 times larger than that of its closest competitor. The result was achieved by training the Llama 3.1 405B foundation model, completing the run in just 27.3 minutes. That time is more than 2x faster than results from comparably sized clusters, underscoring the performance advantage of the GB200 NVL72 architecture.
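To put the reported figures in perspective, here is a minimal back-of-envelope sketch using only the publicly stated GPU count and wall-clock time; the calculation is an illustration on our part, not part of the MLPerf submission itself.

```python
# Back-of-envelope illustration (not from the announcement): approximate GPU-hours
# consumed by the benchmark run, derived from the stated figures of 2,496
# Blackwell GPUs and a 27.3-minute time-to-train for Llama 3.1 405B.

NUM_GPUS = 2496            # Blackwell GPUs in the GB200 NVL72-based submission
WALL_CLOCK_MINUTES = 27.3  # reported MLPerf Training v5.0 time to train

gpu_hours = NUM_GPUS * WALL_CLOCK_MINUTES / 60
print(f"Approximate GPU-hours for the run: {gpu_hours:,.0f}")
# -> roughly 1,136 GPU-hours of compute for a single end-to-end training run
```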
Peter Salanki, Chief Technology Officer and co-founder at CoreWeave, expressed pride in the results: “AI labs and enterprises choose CoreWeave because we deliver a purpose-built cloud platform with the scale, performance, and reliability that their workloads demand.” The statement reflects the company's commitment to solutions tailored to demanding AI requirements and its positioning as an industry leader.
Client Benefits and Future Implications
The implications of these MLPerf results are significant. For users of CoreWeave's platform, faster model training translates into shorter development cycles and a lower total cost of ownership. Companies on the platform can expect to roughly halve their training time relative to comparable clusters, scaling efficiently while deploying AI models cost-effectively. With this infrastructure, CoreWeave positions its clients to stay ahead of competitors through greater operational agility.
Further affirming its leadership in the sector, CoreWeave is the only cloud provider ranked in the Platinum tier of SemiAnalysis's ClusterMAX and holds leading submissions in both the MLPerf Inference and MLPerf Training v5.0 benchmarks.
About CoreWeave
CoreWeave has rapidly established itself as the AI Hyperscaler™, advancing cloud computing technology purpose-built for AI applications. Since its founding in 2017, the company has expanded its data center operations across the United States and Europe, continually pushing the boundaries of what's possible in computing. Named to the TIME100 Most Influential Companies list in 2024 and featured on the Forbes Cloud 100, CoreWeave is committed to innovation and excellence in its mission to empower modern enterprises and AI labs. For further details, visit www.coreweave.com.
These MLPerf results are more than headline numbers; they mark a shift in large-scale AI computing that could shape the landscape of artificial intelligence and cloud infrastructure for years to come.