NVIDIA MQM9790-NS2R Quantum-2 NDR/OSFP InfiniBand Switch
High-density Quantum-2 NDR InfiniBand switch — 64 × 400 Gb/s NDR ports via 32 OSFP connectors, 51.2 Tb/s bidirectional switching capacity, SHARP in-network compute acceleration, adaptive routing, and redundant power/cooling — ideal for large-scale AI, HPC, and data-center clusters.

NVIDIA / Mellanox Quantum-2 QM9790 (MQM9790-NS2R) Resources
The NVIDIA / Mellanox Quantum-2 QM9790 (MQM9790-NS2R) is a high-density NDR InfiniBand switch built on NVIDIA’s Quantum-2 switching ASIC. It delivers 400 Gb/s per port across 64 non-blocking InfiniBand ports, making it one of the most powerful fixed-configuration switches available for next-generation network fabrics.
High-Performance 400Gb/s NDR InfiniBand Switching
The QM9790-NS2R delivers 400 Gb/s NDR InfiniBand connectivity across 64 ports (two per OSFP connector), for an aggregate 51.2 Tb/s of switching capacity. Designed for scale-out AI, cloud, storage, and HPC infrastructures, it provides ultra-low latency and deterministic performance for next-generation workloads.

In-Network Acceleration for AI, Cloud & HPC Fabrics
Powered by the NVIDIA Quantum-2 architecture, the QM9790-NS2R supports SHARP in-network compute acceleration, adaptive routing, congestion control, and full-fabric telemetry. These capabilities dramatically increase efficiency in large AI training clusters, hyperscale cloud environments, and HPC deployments.

High Availability and Data-Center Grade Reliability
The QM9790-NS2R ensures mission-critical density and reliability with hot-swappable PSUs and fans, flexible airflow orientation, and seamless integration with NVIDIA UFM for orchestration of massive InfiniBand fabrics.
- 64 × 400 Gb/s NDR InfiniBand Ports (OSFP)
- Total Switching Capacity: 51.2 Tb/s
- Split Options: Up to 128 × 200 Gb/s NDR200
- SHARP-Based In-Network Compute Acceleration
- Adaptive Routing, Congestion Control & Full Telemetry
- Redundant Hot-Swap Power Supplies & Fans
- Front-to-Back or Back-to-Front Airflow
- Ideal for AI Supercomputing, Cloud Scale-Out, and HPC Clusters
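The headline figures above are self-consistent: 51.2 Tb/s is the bidirectional sum of 64 ports at 400 Gb/s, and the NDR200 split doubles the port count without changing aggregate bandwidth. A quick back-of-the-envelope check (plain Python, not a vendor tool; constants taken from the spec list above):

```python
# Sanity-check the QM9790's published bandwidth figures.
PORTS_NDR = 64          # 400 Gb/s NDR ports (two per OSFP connector)
PORT_SPEED_GBPS = 400   # per-port line rate, one direction

# 51.2 Tb/s is the bidirectional figure: ports x line rate x 2 directions
unidirectional_tbps = PORTS_NDR * PORT_SPEED_GBPS / 1000   # 25.6 Tb/s
bidirectional_tbps = unidirectional_tbps * 2               # 51.2 Tb/s

# Split option: each 400 Gb/s NDR port can run as 2 x 200 Gb/s NDR200,
# doubling the usable port count while aggregate bandwidth stays the same.
ndr200_ports = PORTS_NDR * 2                               # 128 ports

print(bidirectional_tbps, ndr200_ports)
```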
| MFG Number | 920-9B210-00RN-0D0 |
|---|---|
| Condition | Brand New |
| Price | $55,250.00 |
| Ports | 64 × 400 Gb/s NDR InfiniBand ports via 32 OSFP connectors |
| Form Factor | 1U |
| Switching Capacity | 51.2 Tb/s |
| Power | 940 W |
| Management | Externally managed (unmanaged switch) |
| Product Card Description | The NVIDIA/Mellanox MQM9790-NS2R Quantum-2 switch delivers enterprise-grade 400 Gb/s NDR InfiniBand connectivity across 64 OSFP ports (split-capable up to 128 × 200 Gb/s), with 51.2 Tb/s non-blocking switching capacity. Designed for hyperscale AI, cloud, and HPC infrastructure, it features SHARP in-network acceleration, adaptive routing, RDMA-optimized fabric, real-time telemetry, and redundant hot-swappable PSUs and fans. Perfect for GPU clusters, high-performance compute racks, and next-gen data-center fabrics. |
| Order Processing Guidelines | Inquiry first: please reach out to our team to discuss your requirements before placing an order. |
