NVIDIA DGX B200 AI Server


From
$300,000.00


DGXB-G1440+P2EDI36
Item Condition: Brand New
SKU
DGXBG1440P2ED36
$300,000.00
In stock
  • Buy 10 for $2,500.00 each and save 99%
  • Buy 100 for $2,449.00 each and save 99%
  • Buy 1000 for $2,400.00 each and save 99%
  • Buy 5000 for $2,349.00 each and save 99%
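The tier pricing above can be sanity-checked with a few lines of arithmetic. This is a hypothetical sketch that assumes each tier's "save X%" figure is the discount of the per-unit price against the $300,000.00 list price, rounded to the nearest whole percent:

```python
# Hypothetical sketch: reproduce the tiered-savings lines shown above,
# assuming "save X%" = discount vs. the $300,000.00 list price.
LIST_PRICE = 300_000.00
TIERS = {10: 2_500.00, 100: 2_449.00, 1000: 2_400.00, 5000: 2_349.00}

for qty, unit_price in TIERS.items():
    savings = (1 - unit_price / LIST_PRICE) * 100
    print(f"Buy {qty} for ${unit_price:,.2f} each and save {savings:.0f}%")
```

Under that assumption, every tier works out to a 99% discount, matching the listing.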
Free shipping
Could be yours in 1–5 days

The NVIDIA DGX B200 AI Supercomputing System is a high-performance platform featuring eight NVIDIA Blackwell GPUs and dual Intel Xeon processors, designed to accelerate AI model training, deep learning, and high-performance computing workloads. With 3X faster training and 15X faster inference than the previous generation, the DGX B200 is the ideal solution for enterprises looking to scale their AI infrastructure and drive business innovation.

Details
NVIDIA DGX

The Proven Standard for Enterprise AI

Built from the ground up for enterprise AI, the NVIDIA DGX™ platform, featuring NVIDIA DGX SuperPOD™, combines the best of NVIDIA software, infrastructure, and expertise in a modern, unified AI development solution—powering next-generation AI factories with unparalleled performance, scalability, and innovation.

NVIDIA DGX GB200

Enterprise Infrastructure for Mission-Critical AI

NVIDIA DGX™ GB200 is purpose-built for training and inferencing trillion-parameter generative AI models. Designed as a rack-scale solution, each liquid-cooled rack features 36 NVIDIA GB200 Grace Blackwell Superchips—36 NVIDIA Grace CPUs and 72 Blackwell GPUs—connected as one with NVIDIA NVLink™. Multiple racks can be connected with NVIDIA Quantum InfiniBand to scale up to hundreds of thousands of GB200 Superchips.

NVIDIA DGX SuperPOD

Maximize the Value of the NVIDIA DGX Platform

NVIDIA Enterprise Services provide support, education, and infrastructure specialists for your NVIDIA DGX infrastructure. With NVIDIA experts available at every step of your AI journey, Enterprise Services can help you get your projects up and running quickly and successfully.

Tech Specs

NVIDIA DGX B200 Specifications

Still have Questions?
Call +1 833 631 7912

General Specifications

  • GPU: 8x NVIDIA Blackwell GPUs
  • GPU Memory: 1,440GB total, 64TB/s HBM3e bandwidth
  • Performance: 72 petaFLOPS FP8 training and 144 petaFLOPS FP4 inference
  • NVIDIA® NVSwitch™: 2x
  • NVIDIA NVLink Bandwidth: 14.4 TB/s aggregate bandwidth
  • System Power Usage: ~14.3kW max
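The headline totals above can be cross-checked against the eight-GPU configuration. The per-GPU figures derived below are inferred from the listed aggregates, not stated on this page:

```python
# Cross-check the aggregate figures from the spec list above.
# Per-GPU values are inferred from the totals (an assumption),
# not quoted from this listing.
NUM_GPUS = 8
TOTAL_HBM_GB = 1_440      # GPU memory, total
TOTAL_HBM_BW_TBS = 64     # HBM3e bandwidth, total
NVLINK_AGG_TBS = 14.4     # aggregate NVLink bandwidth

print(f"HBM3e per GPU:       {TOTAL_HBM_GB / NUM_GPUS:.0f} GB")       # 180 GB
print(f"HBM bandwidth / GPU: {TOTAL_HBM_BW_TBS / NUM_GPUS:.0f} TB/s") # 8 TB/s
print(f"NVLink per GPU:      {NVLINK_AGG_TBS / NUM_GPUS:.1f} TB/s")   # 1.8 TB/s
```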

Processor

  • CPU: 2 Intel® Xeon® Platinum 8570 Processors
  • Total Cores: 112 Cores total, 2.1 GHz (Base), 4.0 GHz (Max Boost)
  • System Memory: 2TB, configurable to 4TB

Networking

  • OSFP Ports: 4x OSFP ports serving 8x single-port NVIDIA ConnectX-7 VPI (Up to 400Gb/s NVIDIA InfiniBand/Ethernet)
  • QSFP112 Ports: 2x dual-port QSFP112 NVIDIA BlueField-3 DPU (Up to 400Gb/s InfiniBand/Ethernet)
  • Management Network: 10Gb/s onboard NIC with RJ45
  • Additional Networking: 100Gb/s dual-port Ethernet NIC
  • Host BMC: Baseboard management controller (BMC) with RJ45

Storage

  • Operating System Storage: 2x 1.9TB NVMe M.2
  • Internal Storage: 8x 3.84TB NVMe U.2
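A quick tally of the drive configuration above gives the total raw (unformatted) capacity; usable capacity after formatting and any RAID overhead will be lower:

```python
# Raw capacity tally for the storage configuration listed above.
os_drives, os_size_tb = 2, 1.9        # NVMe M.2 (operating system)
data_drives, data_size_tb = 8, 3.84   # NVMe U.2 (internal storage)

print(f"OS storage:       {os_drives * os_size_tb:.1f} TB raw")
print(f"Internal storage: {data_drives * data_size_tb:.2f} TB raw")
```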

Software

  • NVIDIA AI Enterprise: Optimized AI Software
  • NVIDIA Mission Control: AI Data Center Operations and Orchestration with NVIDIA Run:ai Technology
  • Supported Operating Systems: NVIDIA DGX OS / Ubuntu

Rack Units (RU)

  • Rack Units: 10 RU

Dimensions & Weight

  • System Dimensions: Height: 17.5in (444mm), Width: 19.0in (482.2mm), Length: 35.3in (897.1mm)

Operating Conditions

  • Temperature Range: 5–30°C (41–86°F)

Enterprise Support

  • Support: Three-year Enterprise Business-Standard Support for hardware and software
  • Support Access: 24/7 access to the Enterprise Support portal; live agent support during local business hours