World's first 200Gb/s HDR InfiniBand and Ethernet network adapter, offering world-leading performance, smart offloads and In-Network Computing for the highest return on investment in High-Performance Computing, Cloud, Web 2.0, Storage and Machine Learning applications.
Details
HDR ConnectX InfiniBand Adapter Cards IC | Single/Dual-Port
The HDR ConnectX-6 adapter IC, the newest addition to the Mellanox Smart Interconnect suite and supporting Co-Design and In-Network Compute, brings new acceleration engines for maximising High Performance, Machine Learning, Storage, Web 2.0, Cloud, Data Analytics and Telecommunications platforms.
ConnectX-6 with Virtual Protocol Interconnect supports two ports of 200Gb/s InfiniBand (HDR) and Ethernet connectivity, sub-600 nanosecond latency, and 200 million messages per second, plus block-level encryption and NVMe over Fabric offloads, providing the highest performance and most flexible solution for the most demanding applications and markets.
Similar to previous generations, ConnectX-6 supports the Mellanox Multi-Host technology. Multi-Host enables multiple hosts to be connected to a single interconnect adapter by separating the PCIe interface into multiple independent interfaces with no performance degradation. ConnectX-6 Multi-Host technology offers up to eight (8) fully independent PCIe buses, lowering total cost of ownership in the data centre: it reduces CAPEX by cutting the required cables, NICs, and switch ports from one per host to one of each in total, and reduces OPEX by cutting down on switch port management and overall power usage.
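The CAPEX arithmetic above can be sketched as a toy calculation (illustrative only; the function name and dictionary layout are our own, the figures come from the text: up to 8 independent hosts sharing one adapter, one cable and one switch port):

```python
def fabric_hardware(hosts, multi_host=False):
    """Count the NICs, cables and switch ports needed to attach `hosts` servers."""
    if multi_host:
        # With Multi-Host, one ConnectX-6 adapter, one cable and one switch
        # port serve all hosts, up to the 8-host limit stated in the text.
        assert hosts <= 8, "Multi-Host supports up to 8 independent hosts"
        return {"nics": 1, "cables": 1, "switch_ports": 1}
    # Without Multi-Host, every host needs its own NIC, cable and switch port.
    return {"nics": hosts, "cables": hosts, "switch_ports": hosts}

print(fabric_hardware(8))                   # 8 of each
print(fabric_hardware(8, multi_host=True))  # 1 of each
```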
Benefits
- Highest-performance and most intelligent fabric for High-Performance Computing clusters
- Maximises data centre ROI with Multi-Host technology
- Innovative rack design for storage and Machine Learning based on Host Chaining technology
- Advanced storage capabilities including NVMe over Fabric offloads
- Enhances data security by leveraging block-level XTS-AES mode hardware encryption
- Use of different encryption keys enables protection between users who share resources
- Enabler for FIPS compliance for all storage devices
- Smart interconnect for x86, Power, ARM, and GPU-based compute and storage platforms
- Intelligent network adapter supporting flexible pipeline programmability
- Cutting-edge performance in virtualised networks including Network Function Virtualisation (NFV)
- Enabler for efficient service chaining capabilities
Key Features
- HDR (200Gb/s), HDR100 (100Gb/s over 2 lanes) and EDR (100Gb/s) InfiniBand per port
- 200Gb/s Ethernet per port and all lower speeds
- Up to 200M messages/second
- Sub-600ns RDMA latency
- PCIe Gen4 support
- Block-level XTS-AES mode hardware encryption
- FIPS compliant adapter
- Tag Matching and Rendezvous offloads
- Adaptive Routing on Reliable Transport
- Burst Buffer offloads for Background Checkpointing
- NVMe over Fabric (NVMf) Target offloads
- Multi-Host technology - connectivity to up to 8 independent hosts
- Embedded PCIe switch
- Enhanced vSwitch/vRouter Offloads
- Flexible Pipeline
- RoCE for Overlay Networks
- Erasure Coding offload
- T10-DIF Signature Handover
- IBM CAPI v2 support
- Mellanox PeerDirect communication acceleration
- Hardware offloads for NVGRE and VXLAN encapsulated traffic
- End-to-end QoS and congestion control
- Hardware-based I/O virtualisation
- RoHS2 R6
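On Linux, the HDR/HDR100/EDR link speeds listed above are reported in each port's sysfs `rate` file (e.g. `/sys/class/infiniband/mlx5_0/ports/1/rate`; the device name is an example) as a string such as `200 Gb/sec (4X HDR)`. A minimal parsing sketch, assuming that string format:

```python
import re

def parse_ib_rate(rate_text):
    """Parse a sysfs InfiniBand rate string, e.g. '200 Gb/sec (4X HDR)',
    into (gigabits_per_second, lane_width, link_speed)."""
    m = re.match(r"\s*([\d.]+)\s*Gb/sec\s*\((\d+X)\s+(\w+)\)", rate_text)
    if not m:
        raise ValueError(f"unrecognised rate string: {rate_text!r}")
    return float(m.group(1)), m.group(2), m.group(3)

print(parse_ib_rate("200 Gb/sec (4X HDR)"))  # (200.0, '4X', 'HDR')
print(parse_ib_rate("100 Gb/sec (2X HDR)"))  # HDR100: 100Gb/s over 2 lanes
```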
Manufacturer | Mellanox
---|---
Part No. | MT28908A0-FCCF-HVM
End of Life? | No
Advanced Network Features |
Bandwidth | Max 200Gb/s
Channels | 16 million I/O channels
Host OS Support | RHEL, SLES, Ubuntu, Windows, FreeBSD, VMware, OFED, WinOF-2
I/O Virtualisation |
Supported Software | HPC software libraries: HPC-X, OpenMPI, MVAPICH, MPICH, OpenSHMEM, PGAS and various commercial packages
PCI Slot(s) | Gen 4.0, 3.0, 2.0, 1.1 compatible
Ports | Up to 200Gb/s connectivity per port
Port-Port Latency | Sub-600ns latency
IEEE Compliance |
RoHS | RoHS R6