NVIDIA InfiniBand Adapters
Leveraging faster speeds and innovative In-Network Computing, NVIDIA InfiniBand smart adapters achieve extreme performance and scale. NVIDIA InfiniBand adapters lower the cost per operation, increasing ROI for high-performance computing (HPC), machine learning, advanced storage, clustered databases, low-latency embedded I/O applications, and more.
HIGH-PERFORMANCE COMPUTING, ACCELERATED
NVIDIA® ConnectX® InfiniBand smart adapters with acceleration engines deliver best-in-class network performance and efficiency, enabling low latency, high throughput, and high message rates.
- World-class cluster performance
- High-performance networking and storage access
- In-network computing
- Efficient use of compute resources
- Guaranteed bandwidth and low-latency services
ConnectX®-6 VPI HDR/200GbE Adapters
ConnectX-6 Virtual Protocol Interconnect® (VPI) adapter cards offer up to two ports of 200Gb/s throughput for InfiniBand and Ethernet connectivity, provide ultra-low latency, deliver 215 million messages per second, and feature innovative smart offloads and in-network computing accelerations that drive performance and efficiency.
ConnectX®-5 VPI EDR/100GbE Adapters
ConnectX-5 Virtual Protocol Interconnect® (VPI) adapter cards support two ports of 100Gb/s throughput for InfiniBand and Ethernet connectivity, low latency, and a high message rate, plus PCIe switch and NVMe over Fabrics (NVMe-oF) offloads, providing a high-performance and flexible solution for the most demanding applications and workloads.
Founded by Facebook in 2011, the Open Compute Project (OCP) provides specifications for building energy-efficient and scalable high-performance Web 2.0 data centers. The result of these efforts is a next-generation computing infrastructure, designed and built from the ground up, that is up to 24% less expensive and 38% more energy-efficient than traditional data centers. NVIDIA®, a leading OCP contributor, addresses the demanding needs of these newer data centers by providing an Open Composable Network for an end-to-end network fabric, with unmatched features for open architecture and proven, reliable performance.
NVIDIA® Mellanox® Multi-Host technology enables next-generation cloud, Web 2.0, and high-performance data centers to design and build new scale-out heterogeneous compute and storage racks with direct connectivity between multiple hosts and the centralized network controller. This enables direct data access with the lowest latency, significantly improving densities and maximizing data transfer rates.