NVIDIA MQM9790-NS2F Quantum-2 based NDR InfiniBand Switch
High-performance NVIDIA MQM9790-NS2F Quantum-2 NDR InfiniBand switch featuring 64 × 400Gb/s OSFP ports, 51.2Tb/s bandwidth, SHARPv3 in-network acceleration, adaptive routing, and redundant power/cooling. Ideal for large-scale AI, HPC, and data-center fabrics.

NVIDIA / Mellanox Quantum-2 QM9790 (MQM9790-NS2F | 920-9B210-00FN-0D0) Overview
The NVIDIA / Mellanox Quantum-2 QM9790 (MQM9790-NS2F), SKU 920-9B210-00FN-0D0, is a next-generation InfiniBand switch built around NVIDIA’s Quantum-2 switching ASIC. It delivers ultra-high-speed connectivity, extremely low latency, and in-network compute acceleration optimized for large-scale distributed systems. The “NS2F” variant uses Power-to-Cable (P2C) airflow, drawing air in at the power-supply side and exhausting it at the connector side to suit matching data-center cooling layouts, and is externally managed: it is designed to integrate with centralized tools such as NVIDIA UFM® (Unified Fabric Manager) rather than providing onboard fabric management.
High-Performance NDR InfiniBand Switching for Scale-Out Data Centers
The QM9790-NS2F delivers 64 × 400 Gb/s NDR InfiniBand ports over advanced OSFP interfaces, providing industry-leading 51.2 Tb/s of bidirectional switching capacity for large-scale HPC, AI training, cloud fabrics, and next-generation data-center expansion.
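As a quick sanity check on that headline figure (simple arithmetic from the port count and per-port rate quoted above, not additional vendor data):

$$
64 \times 400\ \text{Gb/s} = 25.6\ \text{Tb/s per direction},
\qquad
2 \times 25.6\ \text{Tb/s} = 51.2\ \text{Tb/s bidirectional}.
$$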

In-Network Compute Acceleration for Extreme AI & HPC
With NVIDIA Quantum-2 architecture, the QM9790 delivers state-of-the-art in-network acceleration including SHARPv3, adaptive routing, congestion control, and enhanced telemetry—empowering ultra-large GPU clusters and HPC workloads with unprecedented scalability and efficiency.
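To make the in-network compute idea concrete, here is a rough back-of-the-envelope sketch in Python. It compares the standard textbook cost of a host-based ring allreduce with an idealized switch-side reduction of the kind SHARP-style aggregation performs; the function names, host counts, and buffer size are illustrative assumptions, not NVIDIA benchmark figures.

```python
import math

# Rough illustration (assumed numbers, not vendor benchmarks): per-host cost of
# reducing an M-byte buffer across n hosts, with and without switch-side help.

def ring_allreduce_cost(m_bytes: float, n_hosts: int) -> tuple[float, int]:
    """Classic host-based ring allreduce: each host injects about
    2*(n-1)/n * M bytes over 2*(n-1) sequential steps
    (reduce-scatter followed by all-gather)."""
    bytes_sent = 2 * (n_hosts - 1) / n_hosts * m_bytes
    steps = 2 * (n_hosts - 1)
    return bytes_sent, steps

def in_network_reduction_cost(m_bytes: float, n_hosts: int) -> tuple[float, int]:
    """Idealized in-network reduction: each host sends its M bytes once and
    receives the reduced result once; the switch tree does the arithmetic,
    so the sequential step count grows with tree depth, not with n."""
    bytes_sent = m_bytes
    steps = 2 * max(1, math.ceil(math.log2(n_hosts)))  # up the tree and back down
    return bytes_sent, steps

if __name__ == "__main__":
    M = float(1 << 30)  # a 1 GiB gradient buffer
    for n in (8, 64, 512):
        rb, rs = ring_allreduce_cost(M, n)
        sb, ss = in_network_reduction_cost(M, n)
        print(f"{n:4d} hosts | ring: {rb / 2**30:.2f} GiB sent, {rs:4d} steps | "
              f"in-network: {sb / 2**30:.2f} GiB sent, {ss:2d} steps")
```

The point of the comparison is the scaling trend: with host-based collectives, traffic and step count grow with cluster size, while switch-side aggregation keeps each host's injected data roughly constant, which is why it matters at GPU-cluster scale.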

High Availability & Intelligent Fabric Management
The QM9790 is built for mission-critical data-center environments with redundant hot-swappable power supplies and fans, flexible airflow options, and full integration with NVIDIA UFM for fabric orchestration of massive AI and HPC clusters.
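Because the QM9790 has no onboard management, visibility usually comes from the fabric side. The minimal sketch below assumes a Linux admin host with an HCA on the same fabric and the open-source infiniband-diags utilities installed (it may require elevated privileges); it simply shells out to `ibswitches` to confirm the switch is discoverable. This is not the UFM API, just a hedged example of command-line fabric discovery.

```python
import subprocess

def list_fabric_switches() -> list[str]:
    """Run `ibswitches` (from infiniband-diags) and return its non-empty
    output lines; the tool prints one line per switch it discovers."""
    result = subprocess.run(
        ["ibswitches"], capture_output=True, text=True, check=True
    )
    return [line for line in result.stdout.splitlines() if line.strip()]

if __name__ == "__main__":
    switches = list_fabric_switches()
    print(f"{len(switches)} switch(es) visible on this fabric:")
    for line in switches:
        print("  " + line)
```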
- 64 × 400 Gb/s NDR InfiniBand Ports (OSFP)
- Switching Capacity: 51.2 Tb/s
- Supports splitting to 128 × 200 Gb/s ports (see the port-math sketch after this list)
- Advanced SHARP in-network acceleration for AI & HPC
- Adaptive routing, congestion control, and end-to-end telemetry
- Hot-swappable PSUs & fans with front-to-back / back-to-front airflow
- Optimized for hyperscale AI training, cloud compute, and HPC fabrics
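The split option above works out as follows; a minimal sketch assuming only the figures already quoted (32 OSFP cages, two 400 Gb/s ports per cage, each port splittable into two 200 Gb/s ports). The `PortConfig` helper is illustrative, not an NVIDIA tool.

```python
from dataclasses import dataclass

# Port-math sketch from the listed figures: 32 OSFP cages, 2 x 400 Gb/s NDR
# ports per cage, each port splittable into 2 x 200 Gb/s.

@dataclass
class PortConfig:
    ports: int
    gbps_per_port: int

    @property
    def one_way_tbps(self) -> float:
        return self.ports * self.gbps_per_port / 1000

OSFP_CAGES = 32
native = PortConfig(ports=OSFP_CAGES * 2, gbps_per_port=400)    # 64 x NDR 400G
split = PortConfig(ports=native.ports * 2, gbps_per_port=200)   # 128 x NDR200

for name, cfg in (("native 400G", native), ("split to 200G", split)):
    print(f"{name}: {cfg.ports} ports x {cfg.gbps_per_port} Gb/s "
          f"= {cfg.one_way_tbps:.1f} Tb/s one-way, "
          f"{cfg.one_way_tbps * 2:.1f} Tb/s bidirectional")
```

Either mode lands on the same 51.2 Tb/s aggregate; splitting simply trades per-port speed for port count, which is how a single 1U QM9790 can fan out to more 200 Gb/s endpoints.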
| Specification | Details |
|---|---|
| MFG Number | 920-9B210-00FN-0D0 |
| Condition | Brand New |
| Price | $55,250.00 |
| Ports | 64 × 400Gb/s InfiniBand ports, 32 OSFP connectors |
| Form Factor | 1U |
| Capacity | 51.2Tb/s |
| Power | 940W |
| Management | Externally managed; integrates with NVIDIA UFM |
| Product Card Description | The NVIDIA MQM9790-NS2F Quantum-2 NDR InfiniBand switch delivers exceptional fabric performance for hyperscale AI and HPC infrastructures. With 64 × 400Gb/s NDR OSFP ports (split-capable to 128 × 200Gb/s), 51.2Tb/s switching capacity, SHARPv3 acceleration, and advanced telemetry, it enables ultra-low-latency, lossless connectivity for next-gen GPU clusters and supercomputing environments. Redundant PSUs, modular cooling, and flexible airflow ensure maximum uptime in high-density deployments. |
| Order Processing Guidelines | Inquiry first: please reach out to our team to discuss your requirements before placing an order. |
