NVIDIA MQM9701-NS2R Quantum‑2 NDR InfiniBand DGX Switch
High‑performance Quantum‑2 NDR InfiniBand switch: 32 OSFP ports (up to 64‑way NDR breakout), ultra‑low latency 400 Gb/s per‑port connectivity, optimized for DGX, AI, HPC, and data‑center fabrics.

NVIDIA Quantum-2 MQM9701-NS2R Overview
The NVIDIA (Mellanox) Quantum-2 MQM9701-NS2R is a next-generation InfiniBand switch system built around NVIDIA's Quantum-2 switching ASIC. It delivers 64 NDR InfiniBand ports (400 Gb/s each) through 32 OSFP cages in a compact 1U rackmount chassis, combining massive aggregate bandwidth, ultra-low latency, and advanced in-network compute acceleration. This model is tailored for top-tier HPC and AI clusters, including NVIDIA DGX SuperPOD and other high-density environments that depend on maximum throughput and minimal communication overhead.
High-Performance 400Gb/s NDR InfiniBand Switching
The NVIDIA Quantum-2 MQM9701-NS2R delivers next-generation 400 Gb/s NDR InfiniBand performance across 32 OSFP ports, with breakout support for up to 64 NDR links. Designed for AI, HPC, and hyperscale cloud environments, it provides deterministic, lossless, ultra-low-latency fabric connectivity built on the NVIDIA Quantum-2 switching architecture.
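
To see where the headline capacity figure in the spec table below comes from, here is a quick back-of-the-envelope check (a Python sketch; the port counts and rates are taken from this page, not measured):

```python
# Back-of-the-envelope check of the aggregate figures quoted for this class
# of switch. Port counts and per-port rate follow the spec table on this page.

OSFP_CAGES = 32        # physical OSFP connectors on the faceplate
NDR_PORTS = 64         # logical 400 Gb/s ports after 2x breakout per cage
PORT_RATE_GBPS = 400   # NDR signalling rate per logical port

unidirectional_tbps = NDR_PORTS * PORT_RATE_GBPS / 1000
bidirectional_tbps = 2 * unidirectional_tbps

print(f"{NDR_PORTS} ports x {PORT_RATE_GBPS} Gb/s = {unidirectional_tbps:.1f} Tb/s one way")
print(f"Full-duplex aggregate: {bidirectional_tbps:.1f} Tb/s")  # 51.2 Tb/s, matching the table
```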

Quantum-2 Acceleration for AI, Cloud & HPC Fabrics
Powered by NVIDIA’s Quantum-2 technology, the MQM9701-NS2R provides advanced in-network acceleration with SHARPv3 collective offload, adaptive routing, congestion-aware fabric control, and real-time telemetry. It enables scalable, high-performance GPU and CPU cluster deployments with minimal latency and maximum throughput.
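
To illustrate what in-network aggregation buys, the toy Python model below sums partial results inside simulated "switches" as traffic flows up a tree, which is the core idea behind SHARP. This is a conceptual sketch only, not NVIDIA's SHARP API; every name in it is invented for illustration:

```python
# Toy model of in-network reduction, the idea behind NVIDIA SHARP. This is
# NOT the real SHARP API: it only shows switches summing partial vectors as
# traffic passes through, so endpoints never exchange full data pairwise.

def switch_aggregate(child_payloads):
    """A 'switch' sums the vectors arriving from its children element-wise."""
    return [sum(vals) for vals in zip(*child_payloads)]

# Eight hosts each contribute a partial gradient vector of length 4.
hosts = [[float(i + j) for j in range(4)] for i in range(8)]

# Two leaf switches aggregate four hosts each; a spine switch merges the two.
leaf0 = switch_aggregate(hosts[:4])
leaf1 = switch_aggregate(hosts[4:])
spine = switch_aggregate([leaf0, leaf1])

# The result matches an endpoint-side reduction, with far less data movement.
assert spine == [sum(col) for col in zip(*hosts)]
print(spine)
```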

High Availability & Enterprise-Grade Fabric Reliability
The MQM9701-NS2R ensures maximum uptime with hot-swappable power supplies, redundant fan modules, and optimized thermal design. NVIDIA UFM provides centralized monitoring, orchestration, and automation for large-scale NDR InfiniBand fabrics.
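
For instance, UFM Enterprise exposes a REST interface that scripts can poll for fabric inventory and health. The Python sketch below assumes a reachable UFM server and typical endpoint and field names; verify the exact paths and keys against the API documentation for your UFM release:

```python
# Minimal sketch of polling switch inventory from NVIDIA UFM's REST interface.
# Assumes a reachable UFM Enterprise server; the host, credentials, endpoint
# path, and response fields should be checked against your UFM version's docs.

import requests

UFM_HOST = "https://ufm.example.com"                 # placeholder address
ENDPOINT = f"{UFM_HOST}/ufmRest/resources/systems"   # common UFM inventory path

resp = requests.get(
    ENDPOINT,
    auth=("admin", "password"),  # replace with real credentials
    verify=False,                # self-signed certs are common in lab setups
    timeout=10,
)
resp.raise_for_status()

for system in resp.json():
    # Field names vary by UFM release; these are typical inventory keys.
    print(system.get("system_name"), system.get("ip"), system.get("state"))
```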
- 32 OSFP NDR ports with 64x NDR breakout support
- Total switching capacity: 51.2 Tb/s (Quantum-2 architecture)
- Ultra-low latency for AI, HPC & hyperscale workloads
- SHARPv3 in-network collective acceleration
- Adaptive routing & congestion management
- Redundant hot-swap PSUs & fan modules
- InfiniBand partitioning for advanced fabric segmentation
- Ideal for HPC supercomputers, AI clusters & cloud fabrics
| Specification | Detail |
|---|---|
| MFG Number | 920-9B210-00RN-0M6 |
| Condition | Brand New |
| Price | $55,250.00 |
| Ports | 64 × 400Gb/s InfiniBand ports, 32 OSFP connectors |
| Form Factor | 1U |
| Switching Capacity | 51.2 Tb/s |
| Power | 940W |
| Software | MLNX-OS |
| Product Card Description | The NVIDIA MQM9701‑NS2R Quantum‑2 DGX‑ready NDR InfiniBand Switch delivers enterprise‑grade 400 Gb/s NDR connectivity via 32 OSFP ports with breakout support up to 64 NDR links. Designed for GPU‑dense AI servers and HPC clusters, it ensures deterministic, lossless, ultra‑low‑latency communication — ideal for deep learning, HPC workloads, and modern data‑center fabrics. Built for reliability and scalability, it supports advanced RDMA, adaptive routing, and full InfiniBand fabric orchestration. |
| Order Processing Guidelines | Inquiry first: please reach out to our team to discuss your requirements before placing an order. |
