NVIDIA H100 Tensor Core GPU 80GB PCIe

Special Price $18,500.00 Regular Price $29,499.00

Part Number: 900-21010-6200-030
Item Condition: Brand New
SKU
NVIDIA-H100
In stock
Free shipping
Could be yours in 1–5 days

H100 features fourth-generation Tensor Cores and a Transformer Engine with FP8 precision that provides up to 4X faster training over the prior generation for GPT-3 (175B) models. The combination of fourth-generation NVLink, which offers 900 gigabytes per second (GB/s) of GPU-to-GPU interconnect; NDR Quantum-2 InfiniBand networking, which accelerates communication across nodes for every GPU; PCIe Gen5; and NVIDIA Magnum IO™ software delivers efficient scalability from small enterprise systems to massive, unified GPU clusters.

 

Details

An Order-of-Magnitude Leap for Accelerated Computing

The NVIDIA H100 Tensor Core GPU delivers exceptional performance, scalability,
and security for every workload. H100 uses breakthrough innovations based on
the NVIDIA Hopper™ architecture to deliver industry-leading conversational AI,
speeding up large language models by 30X.

Powering Advanced Workstations and Servers for Professional Applications

Securely Accelerate Workloads From Enterprise to Exascale

H100 features fourth-generation Tensor Cores and a Transformer Engine with
FP8 precision that provides up to 4X faster training over the prior generation for
GPT-3 (175B) models. For high-performance computing (HPC) applications, H100
triples the floating-point operations per second (FLOPS) of double-precision Tensor
Cores, delivering 60 teraflops of FP64 computing for HPC while also featuring
dynamic programming (DPX) instructions to deliver up to 7X higher performance.
With second-generation Multi-Instance GPU (MIG), built-in NVIDIA Confidential
Computing, and NVIDIA NVLink Switch System, H100 securely accelerates all
workloads for every data center, from enterprise to exascale.


Supercharge Large Language Model Inference With H100 NVL

For LLMs up to 70 billion parameters (Llama 2 70B), the PCIe-based NVIDIA H100
NVL with NVLink bridge utilizes Transformer Engine, NVLink, and 188GB HBM3
memory to provide optimum performance and easy scaling across any data
center, bringing LLMs to the mainstream. Servers equipped with H100 NVL GPUs
increase Llama 2 70B model performance up to 5X over NVIDIA A100 systems while
maintaining low latency in power-constrained data center environments.
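As a rough sanity check on the memory sizing (an illustration, not an official NVIDIA calculation): a 70-billion-parameter model stored at FP16 needs about two bytes per parameter for the weights alone, which is why it fits within the H100 NVL's 188GB:

```python
# Back-of-envelope check (illustrative): FP16 weights take 2 bytes
# per parameter, so a 70B-parameter model needs ~140 GB for weights,
# leaving headroom within the H100 NVL's 188 GB for KV cache and
# activations.
params = 70e9
bytes_per_param = 2                           # FP16/BF16 storage
weights_gb = params * bytes_per_param / 1e9   # 140.0
fits = weights_gb < 188
print(weights_gb, fits)
```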

Enterprise-Ready: AI Software Streamlines Development and Deployment


NVIDIA H100 NVL comes with a five-year NVIDIA AI Enterprise subscription and
simplifies the way you build an enterprise AI-ready platform. H100 accelerates AI
development and deployment for production-ready generative AI solutions, including
computer vision, speech AI, retrieval-augmented generation (RAG), and more. NVIDIA
AI Enterprise includes NVIDIA NIM™, a set of easy-to-use microservices designed to
speed up enterprise generative AI deployment. Together, they give deployments
enterprise-grade security, manageability, stability, and support, resulting in
performance-optimized AI solutions that deliver faster business value and
actionable insights.


Powerful and flexible

NVIDIA’s professional GPUs provide the power and efficiency required for today's most challenging computational workloads, making them the premier choice for industry professionals and businesses focused on high-performance and reliability.

Tech Specs & Customization

Still have questions? Call 1-833-631-7912

Form Factor

  • H100 PCIe

Performance Specifications

  • FP64: 26 teraFLOPS
  • FP64 Tensor Core: 51 teraFLOPS
  • FP32: 51 teraFLOPS
  • TF32 Tensor Core: 756 teraFLOPS²
  • BFLOAT16 Tensor Core: 1,513 teraFLOPS²
  • FP16 Tensor Core: 1,513 teraFLOPS²
  • FP8 Tensor Core: 3,026 teraFLOPS²
  • INT8 Tensor Core: 3,026 TOPS²

  ² With sparsity.
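The ²-marked figures are peak rates with structured sparsity; assuming the standard 2X sparsity speedup, the dense rates are roughly half those values, and each halving of precision roughly doubles Tensor Core throughput. A quick sketch of those relationships:

```python
# Peak Tensor Core rates from the spec list above (with sparsity),
# in teraFLOPS (TOPS for INT8).
sparse_peak = {
    "TF32": 756,
    "BF16": 1513,
    "FP16": 1513,
    "FP8": 3026,
    "INT8": 3026,
}

# Structured 2:4 sparsity doubles throughput, so the dense peak is
# roughly half the quoted figure.
dense_peak = {k: v / 2 for k, v in sparse_peak.items()}

# Halving precision roughly doubles throughput: FP16 -> FP8 is 2x.
speedup = sparse_peak["FP8"] / sparse_peak["FP16"]
print(dense_peak["FP8"], speedup)
```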

GPU Memory

  • 80GB
  • Memory bandwidth: 2TB/s
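One practical way to read the bandwidth figure (an illustrative lower bound, not a benchmark): a memory-bound kernel that streams the full 80GB once can take no less than capacity divided by bandwidth:

```python
# Lower bound on the runtime of a kernel that reads all of HBM once:
# 80 GB at 2 TB/s is 40 ms, regardless of compute speed.
capacity_gb = 80
bandwidth_gb_per_s = 2000              # 2 TB/s
min_sweep_s = capacity_gb / bandwidth_gb_per_s
print(min_sweep_s * 1000, "ms")
```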

Decoders

  • 7 NVDEC
  • 7 JPEG decoders

Max Thermal Design Power (TDP)

  • 300-350W (configurable)

Multi-Instance GPUs

  • Up to 7 MIGs @ 10GB each
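A sketch of the arithmetic behind that figure (the slicing model here is an assumption for illustration): MIG divides the 80GB of memory into eight slices, and up to seven GPU instances can each take one 10GB slice:

```python
# MIG sizing sketch (illustrative): 80 GB divided into eight memory
# slices gives 10 GB per slice; up to seven instances each take one.
total_gb = 80
memory_slices = 8
max_instances = 7
per_instance_gb = total_gb // memory_slices
print(f"{max_instances} x {per_instance_gb} GB")
```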

Interconnect

  • NVLink: 600GB/s
  • PCIe Gen5: 128GB/s
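The PCIe figure follows from the Gen5 link rate: 32 GT/s per lane across an x16 link is 64GB/s in each direction (before encoding and protocol overhead), and the spec quotes the bidirectional sum:

```python
# PCIe Gen5 x16 bandwidth arithmetic: 32 GT/s per lane, 16 lanes,
# 8 bits per byte -> 64 GB/s per direction, 128 GB/s bidirectional
# (raw rate, before 128b/130b encoding and protocol overhead).
gigatransfers_per_s = 32
lanes = 16
per_direction_gb_s = gigatransfers_per_s * lanes / 8
bidirectional_gb_s = 2 * per_direction_gb_s
print(bidirectional_gb_s)
```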

Server Options

  • Partner and NVIDIA-Certified Systems with 1–8 GPUs

NVIDIA AI Enterprise

  • Included