NVIDIA DGX H100 AI Server


From
$350,000.00


DGXH-G640F+P2CMI36
Item Condition: Brand New
SKU
NVDGXH100-640FG-P2CMI36
$350,000.00
In stock
Free shipping
Could be yours in 1–5 days

Expand the frontiers of business innovation and optimization with NVIDIA DGX™ H100. Part of the DGX platform, DGX H100 is the AI powerhouse that’s the foundation of NVIDIA DGX SuperPOD™, accelerated by the groundbreaking performance of the NVIDIA H100 Tensor Core GPU.

Details
NVIDIA DGX SuperPOD

Enterprise Infrastructure to Power AI Factories

NVIDIA DGX SuperPOD™ provides leadership-class AI infrastructure with agile, scalable performance for the most challenging AI training and inference workloads. Available with a choice of NVIDIA Blackwell-powered compute options in the NVIDIA DGX™ platform, DGX SuperPOD isn’t just a collection of hardware, but a full-stack data center platform that includes industry-leading computing, storage, networking, software, and infrastructure management optimized to work together and provide maximum performance at scale.

The Proven Standard for Enterprise AI

Built from the ground up for enterprise AI, the NVIDIA DGX™ platform, featuring NVIDIA DGX SuperPOD™, combines the best of NVIDIA software, infrastructure, and expertise in a modern, unified AI development solution—powering next-generation AI factories with unparalleled performance, scalability, and innovation.

NVIDIA AI Platform

The Most Complete AI Platform

  • NVIDIA AI software solutions
  • Build your AI Center of Excellence on DGX H100
  • Fully integrated with NVIDIA Base Command™ and AI Enterprise software
Leadership-Class Infrastructure

Experience the power of DGX H100 in a multitude of ways that fit your business: on premises, co-located, rented from managed service providers, and more. And with DGX-Ready Lifecycle Management, organizations get a predictable financial model to keep their deployment leading-edge.

Tech Specs

NVIDIA DGX H100 Specifications

Still have questions?
Call +1 833 631 7912

General Specifications

  • GPU: 8x NVIDIA H100 GPUs
  • GPU Memory: 640GB total GPU memory
  • Performance: 32 petaFLOPS FP8
  • Power Consumption: ~10.2kW max
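
As a quick sanity check on the aggregate GPU figures above, the totals follow from the per-GPU numbers. Note the per-GPU values (80GB HBM3 and roughly 4 petaFLOPS FP8 for the H100 SXM variant) are assumptions from the H100 GPU's published specs, not stated on this page:

```python
# Back-of-the-envelope aggregation of the DGX H100 GPU figures.
# Per-GPU values assume the H100 SXM variant; treat as illustrative.
NUM_GPUS = 8
GPU_MEMORY_GB = 80           # HBM3 per H100 SXM GPU (assumed)
FP8_PFLOPS_PER_GPU = 4       # ~4 petaFLOPS FP8 per GPU with sparsity (approx.)

total_memory_gb = NUM_GPUS * GPU_MEMORY_GB        # 640 GB, matches the spec
total_fp8_pflops = NUM_GPUS * FP8_PFLOPS_PER_GPU  # ~32 petaFLOPS FP8

print(f"Total GPU memory: {total_memory_gb} GB")
print(f"Aggregate FP8 throughput: ~{total_fp8_pflops} petaFLOPS")
```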

Processor

  • CPU: 2 Intel® Xeon® Platinum 8480C Processors
  • Total Cores: 112 Cores total, 2.00 GHz (Base), 3.80 GHz (Max Boost)
  • System Memory: Up to 2TB

Networking

  • OSFP Ports: 4x OSFP ports serving 8x single-port NVIDIA ConnectX-7 VPI (Up to 400Gb/s InfiniBand/Ethernet)
  • QSFP112 Ports: 2x dual-port QSFP112 NVIDIA BlueField-3 DPU (Up to 400Gb/s InfiniBand/Ethernet)
  • Management Network: 10Gb/s onboard NIC with RJ45
  • Additional Networking: 100Gb/s dual-port Ethernet NIC
  • Host BMC: Baseboard management controller (BMC) with RJ45
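
The compute-fabric figures above imply a peak aggregate bandwidth; a minimal sketch of that arithmetic, assuming all eight ConnectX-7 adapters run at their full 400Gb/s rate simultaneously:

```python
# Peak aggregate compute-fabric bandwidth of the DGX H100.
# Assumes all 8 ConnectX-7 NICs run at the full 400 Gb/s line rate at once.
COMPUTE_NICS = 8
GBPS_PER_NIC = 400

fabric_gbps = COMPUTE_NICS * GBPS_PER_NIC  # 3200 Gb/s total

print(f"Peak compute-fabric bandwidth: {fabric_gbps / 1000} Tb/s")
```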

Storage

  • Operating System Storage: 2x 1.9TB NVMe M.2
  • Internal Storage: 8x 3.84TB NVMe U.2
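
The raw capacities above can be totaled directly. These are vendor (decimal) terabytes, and whether the OS drives are mirrored is an assumption not stated on this page:

```python
# Raw NVMe capacity of the DGX H100 storage configuration (decimal TB).
os_drives_tb = 2 * 1.9      # 2x 1.9 TB NVMe M.2 (commonly mirrored; assumed)
data_drives_tb = 8 * 3.84   # 8x 3.84 TB NVMe U.2 internal data drives

print(f"OS storage (raw): {os_drives_tb:.1f} TB")
print(f"Internal data storage (raw): {data_drives_tb:.2f} TB")
```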

Software

  • NVIDIA AI Enterprise: Optimized AI Software
  • NVIDIA Base Command™: Orchestration, Scheduling, and Cluster Management
  • Supported Operating Systems: DGX OS / Ubuntu

Rack Units (RU)

  • Rack Units: 8 RU

Dimensions & Weight

  • System Dimensions: Height: 14.0in (356mm), Width: 19.0in (482.2mm), Length: 35.3in (897.1mm)

Operating Conditions

  • Temperature Range: 5–30°C (41–86°F)

Enterprise Support

  • Support: Three-year Enterprise Business-Standard Support for hardware and software
  • Support Access: 24/7 Enterprise Support portal access; live agent support during local business hours