Accelerating AI Training: HPE and NVIDIA Partnership
Hewlett Packard Enterprise (HPE) and NVIDIA are joining forces to accelerate AI training with a supercomputing solution built on the NVIDIA GH200 Grace Hopper Superchip. The collaboration aims to make training generative AI models faster and more accessible for large enterprises, research institutions, and government organizations.
Release and Availability
- The solution is scheduled for availability in December.
- Orderable through HPE in more than 30 countries.
- Comprehensive Solution:
  - Software suite for training and tuning AI models with private datasets.
  - Liquid-cooled supercomputers, accelerated compute, networking, storage, and services.
- Performance Boost:
  - Co-developed with NVIDIA.
  - Built on HPE Cray supercomputing technology powered by the NVIDIA GH200 Grace Hopper Superchip.
  - Improves system performance by 2-3x.
- Key Components:
  - AI/ML acceleration software, including the HPE Machine Learning Development Environment, NVIDIA AI Enterprise, and the HPE Cray Programming Environment suite.
  - Based on the HPE Cray EX2500 system with NVIDIA GH200 Grace Hopper Superchips.
  - Scales up to thousands of GPUs.
  - Can dedicate full node capacity to a single AI workload for faster results.
  - HPE Slingshot Interconnect provides the high-speed networking required for real-time AI.
- Turnkey Simplicity with HPE Complete Care Services:
  - Global specialists handle setup, installation, and full lifecycle support.
- Addressing Power Requirements:
  - Liquid-cooling capabilities.
  - Up to 20% more performance per kilowatt than air-cooled alternatives, while consuming 15% less power.
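The scaling bullets above describe data-parallel training: each GPU processes a shard of the batch, and gradients are averaged across workers over the interconnect before every worker applies the same update. A minimal pure-Python sketch of that pattern (the toy model, data, and learning rate are illustrative and not from the announcement; real systems perform the all-reduce with collective libraries over a fabric such as HPE Slingshot):

```python
def local_gradient(w, shard):
    """Gradient of mean squared error for the model y = w * x on one worker's shard."""
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def all_reduce_mean(grads):
    """Stand-in for a fabric all-reduce: average the per-worker gradients."""
    return sum(grads) / len(grads)

def train_step(w, shards, lr=0.01):
    # In a real cluster each worker computes its gradient in parallel.
    grads = [local_gradient(w, s) for s in shards]
    return w - lr * all_reduce_mean(grads)

# Toy data generated from y = 3x, split round-robin across four "workers".
data = [(x, 3.0 * x) for x in range(1, 9)]
shards = [data[i::4] for i in range(4)]

w = 0.0
for _ in range(200):
    w = train_step(w, shards)
print(round(w, 3))  # prints 3.0 -- all workers converge to the same model
```

Because every worker sees the averaged gradient, the result matches single-device training on the full batch; the interconnect's all-reduce latency is what limits how far this pattern scales.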
This initiative aligns with HPE's commitment to sustainability and energy efficiency.
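The efficiency figures can be put in perspective with simple arithmetic. A short illustration (the baseline throughput and power figures below are hypothetical; only the 15% and 20% percentages come from the announcement) shows that 15% lower power at equal throughput already yields roughly 17.6% more performance per kilowatt:

```python
def perf_per_kw(performance, power_kw):
    """Throughput delivered per kilowatt of power drawn."""
    return performance / power_kw

# Hypothetical air-cooled baseline: 100 units of throughput at 10 kW.
air_perf, air_power = 100.0, 10.0

# Liquid-cooled system: same throughput at 15% less power (per the announcement).
liquid_perf = air_perf
liquid_power = air_power * (1 - 0.15)

improvement = perf_per_kw(liquid_perf, liquid_power) / perf_per_kw(air_perf, air_power) - 1
print(f"perf/kW improvement: {improvement:.1%}")  # prints: perf/kW improvement: 17.6%
```

In other words, the quoted "up to 20% more performance per kilowatt" implies a modest throughput gain on top of the power savings alone.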