HPE AI Server with NVIDIA GB200 NVL72
A next-generation rack-scale AI system engineered by HPE and NVIDIA, combining 72 NVIDIA Blackwell GPUs and 36 NVIDIA Grace CPUs over ultra-fast NVLink interconnects, with liquid cooling, to deliver extreme performance for AI model training and inference.

Built for next-generation AI at unprecedented scale
The NVIDIA GB200 NVL72 by HPE is a rack-scale AI platform designed to power trillion-parameter AI models, large-scale training, and real-time inference. Built on the NVIDIA Blackwell architecture, the system integrates accelerated compute, high-bandwidth memory, and ultra-fast networking to deliver exceptional performance for the most demanding AI and HPC workloads.


Rack-scale architecture for extreme performance
The NVIDIA GB200 NVL72 by HPE delivers a fully integrated rack-scale design featuring NVIDIA Grace CPUs and NVIDIA Blackwell GPUs connected through NVIDIA NVLink™ and high-speed fabric. This tightly coupled architecture enables massive parallelism, ultra-low latency, and industry-leading throughput, making it ideal for generative AI, large language models, and advanced scientific computing.
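
For context on how applications typically exploit an NVLink-connected GPU domain like this, the hedged sketch below shows a generic multi-GPU training loop using PyTorch's NCCL backend, which carries GPU-to-GPU gradient traffic over NVLink where available. The model, tensor sizes, and the `train.py` launch command are illustrative assumptions only, not part of the HPE/NVIDIA product.

```python
# Minimal sketch: data-parallel training with PyTorch DistributedDataParallel
# over the NCCL backend (NCCL uses NVLink for intra-node GPU communication
# where available). Launch with, e.g.:
#   torchrun --nproc_per_node=<gpus_per_node> train.py
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE in the environment.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Hypothetical model and data; any torch.nn.Module works here.
    model = torch.nn.Linear(4096, 4096).to(f"cuda:{local_rank}")
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    x = torch.randn(8, 4096, device=f"cuda:{local_rank}")

    for _ in range(10):
        optimizer.zero_grad()
        loss = model(x).square().mean()
        loss.backward()  # gradients are all-reduced across GPUs via NCCL
        optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```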

Accelerate AI innovation with HPE and NVIDIA
Purpose-built for hyperscale and enterprise AI deployments, the NVIDIA GB200 NVL72 by HPE simplifies deployment and operations while maximizing performance and efficiency. Backed by HPE services and NVIDIA AI software, this platform enables organizations to accelerate time-to-insight and unlock new levels of AI innovation.
- Rack-scale system with NVIDIA Grace CPU and Blackwell GPU architecture
- High-bandwidth NVLink™ interconnect for massive GPU-to-GPU communication
- Optimized for large language models, generative AI, and HPC workloads
- Integrated HPE management, services, and enterprise-grade reliability
| Attribute | Detail |
|---|---|
| MFG Number | 1014890104 |
| Condition | Brand New |
| Price | $3,089.00 |
| Product Card Description | The HPE NVIDIA GB200 NVL72 is a high-performance rack-scale AI platform designed for generative AI, large language models, and HPC workloads. It combines 72 NVIDIA Blackwell GPUs and 36 NVIDIA Grace CPUs with a massive NVLink GPU domain and advanced liquid cooling to deliver unparalleled training and real-time inference capabilities. |
| Order Processing Guidelines | Inquiry first – please reach out to our team to discuss your requirements before placing an order. |
| Special Price From Date | Oct 27, 2024 |