Intelligent RDMA-enabled network adapter with advanced application offload capabilities for High-Performance Computing, Web 2.0, Cloud and Storage platforms
ConnectX-5 Single/Dual-Port Adapter ASIC Supporting 100Gb/s
The intelligent ConnectX-5 adapter IC, a member of the Mellanox Smart Interconnect suite supporting Co-Design and In-Network Compute, brings new acceleration engines for maximising the performance of High-Performance Computing, Web 2.0, Cloud, Data Analytics and Storage platforms.
ConnectX-5 with Virtual Protocol Interconnect supports two ports of 100Gb/s InfiniBand and Ethernet connectivity, sub-600-nanosecond latency and a very high message rate, plus an embedded PCIe switch and NVMe over Fabrics offloads, providing the highest-performance and most flexible solution for the most demanding applications and markets.
ConnectX-5 enables higher HPC performance with new Message Passing Interface (MPI) offloads, such as MPI Tag Matching and MPI AlltoAll operations, advanced dynamic routing, and new capabilities to execute various data algorithms in the network.
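For reference, MPI_Alltoall moves the j-th block of every rank's send buffer into the i-th block of rank j's receive buffer, i.e. a transpose of per-rank blocks across all ranks; this is the communication pattern the adapter can offload. A minimal pure-Python sketch of that data pattern (illustrative only, with a hypothetical `alltoall` helper, no MPI involved):

```python
def alltoall(send_buffers):
    """Simulate the MPI_Alltoall data pattern: rank j receives block j
    from every rank i, so the result is the transpose of the per-rank
    send buffers. (Illustrative only; in a real application the exchange
    is performed by the MPI library over the interconnect.)"""
    n = len(send_buffers)
    return [[send_buffers[i][j] for i in range(n)] for j in range(n)]

# Four ranks, each sending one labelled block per peer.
send = [[f"r{i}->r{j}" for j in range(4)] for i in range(4)]
recv = alltoall(send)
print(recv[2])  # prints ['r0->r2', 'r1->r2', 'r2->r2', 'r3->r2']
```

Offloading this dense all-pairs exchange matters because its message count grows quadratically with the number of ranks, which otherwise consumes significant CPU time at scale.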
Moreover, ConnectX-5 Accelerated Switching and Packet Processing (ASAP2) technology enhances offloading of virtual switches and virtual routers, for example Open vSwitch (OVS), which results in significantly higher data transfer performance without overloading the CPU. Together with native RDMA and RoCE support, ConnectX-5 dramatically improves Cloud and NFV platform efficiency.
Similar to previous generations, ConnectX-5 supports the Mellanox Multi-Host technology. Multi-Host enables multiple hosts to be connected to a single interconnect adapter by separating the PCIe interface into multiple, separate interfaces. Each interface can be connected to a separate host with no performance degradation. Multi-Host technology offers four fully independent PCIe buses, lowering total cost of ownership in the data centre: it reduces CAPEX from four cables, NICs and switch ports to only one of each, and reduces OPEX by cutting down on switch port management and overall power usage.
Benefits
- Industry-leading throughput, low latency, low CPU utilisation and high message rate
- Innovative rack design for storage and Machine Learning based on Host Chaining technology
- Maximises data centre ROI with Multi-Host technology
- Smart interconnect for x86, Power, ARM, and GPU-based compute and storage platforms
- Advanced storage capabilities including NVMe over Fabric offloads
- Intelligent network adapter supporting flexible pipeline programmability
- Cutting-edge performance in virtualised networks including Network Function Virtualisation (NFV)
- Enabler for efficient service chaining capabilities
Key Features
- EDR 100Gb/s InfiniBand or 100Gb/s Ethernet per port and all lower speeds
- Up to 200M messages/second
- Tag Matching and Rendezvous Offloads
- Adaptive Routing on Reliable Transport
- Burst Buffer Offloads
- NVMe over Fabric (NVMf) Target Offloads
- Multi-Host technology - connectivity to up to 4 independent hosts
- Back-End Switch Elimination by Host Chaining
- Embedded PCIe Switch
- Enhanced vSwitch/vRouter Offloads
- Flexible Pipeline
- RoCE for Overlay Networks
- PCIe Gen 4 Support
- Erasure Coding offload
- T10-DIF Signature Handover
- IBM CAPI v2 support
- Mellanox PeerDirect communication acceleration
- Hardware offloads for NVGRE and VXLAN encapsulated traffic
- End-to-end QoS and congestion control
- Hardware-based I/O virtualisation
- RoHS2 R6
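The Erasure Coding offload listed above moves parity computation from the host CPU to the adapter. The simplest such code is single XOR parity (RAID-5 style); the sketch below shows in plain Python what is being computed (illustrative only, not Mellanox's implementation, which also covers Reed-Solomon-style codes):

```python
def xor_parity(blocks):
    """Compute one XOR parity block over equal-sized data blocks.
    Any single lost block can be rebuilt by XORing the surviving
    blocks with the parity. (Simplest erasure code, shown only to
    illustrate what the adapter offload computes.)"""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

blocks = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]
parity = xor_parity(blocks)

# Rebuild a lost block (blocks[1]) from the survivors plus parity.
recovered = xor_parity([blocks[0], blocks[2], parity])
print(recovered == blocks[1])  # prints True
```

Performing this per-byte XOR in the adapter frees CPU cycles on storage targets that would otherwise spend them on parity for every write.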
| Manufacturer | Mellanox |
|---|---|
| Part No. | MT28808A0-FCCF-EVM |
| End of Life? | No |
| Advanced Network Features | |
| Bandwidth | Max 100Gb/s |
| Channels | 16 million I/O channels |
| Host OS Support | RHEL, CentOS, Windows, FreeBSD, VMware, OFED, WinOF-2 |
| I/O Virtualisation | |
| Supported Software | OpenMPI, IBM PE, OSU MPI (MVAPICH/2), Intel MPI, Platform MPI, UPC, Open SHMEM |
| PCI Slot(s) | Gen 4.0, 3.0, 2.0, 1.1 compatible |
| Ports | Up to 100Gb/s connectivity per port |
| Port-Port Latency | Sub-600ns latency |
| IEEE Compliance | |
| RoHS | RoHS R6 |