
Mellanox ConnectX-5 VPI IC

100Gb/s InfiniBand & Ethernet Adapter IC

[MT28808A0-FCCF-EVM]

Intelligent RDMA-enabled network adapter with advanced application offload capabilities for High-Performance Computing, Web2.0, Cloud and Storage platforms


Availability: In stock

For details on how to purchase the Mellanox ConnectX-5 VPI IC, please click the button below to send us your details and brief requirements. We can then quote you accordingly.

Details

ConnectX-5 Single/Dual-Port Adapter ASIC Supporting 100Gb/s

The intelligent ConnectX-5 adapter IC, a member of the Mellanox Smart Interconnect suite supporting Co-Design and In-Network Compute, brings new acceleration engines for maximising performance in High-Performance Computing, Web 2.0, Cloud, Data Analytics and Storage platforms.


ConnectX-5 with Virtual Protocol Interconnect supports two ports of 100Gb/s InfiniBand and Ethernet connectivity, sub-600 nanosecond latency, and very high message rate, plus embedded PCIe switch and NVMe over Fabric offloads, providing the highest performance and most flexible solution for the most demanding applications and markets.
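
Because each port is exposed through the standard RDMA verbs interface, an application can discover at runtime whether a given port is running InfiniBand or Ethernet. The sketch below is a minimal, generic illustration using the libibverbs (rdma-core) API rather than anything specific to this part; the choice of the first device and port number 1 is an assumption for illustration only.

/* Minimal sketch: query the first RDMA device's port 1 with libibverbs
 * (rdma-core) to see whether the link layer is InfiniBand or Ethernet
 * and what the port state is. Device index 0 and port number 1 are
 * assumptions for illustration.
 * Build: cc verbs_query.c -o verbs_query -libverbs
 */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (!ctx) {
        fprintf(stderr, "cannot open %s\n", ibv_get_device_name(devs[0]));
        return 1;
    }

    struct ibv_port_attr port;
    if (ibv_query_port(ctx, 1, &port)) {
        perror("ibv_query_port");
        return 1;
    }

    printf("device:     %s\n", ibv_get_device_name(devs[0]));
    printf("link layer: %s\n",
           port.link_layer == IBV_LINK_LAYER_ETHERNET ? "Ethernet" : "InfiniBand");
    printf("state:      %s\n", ibv_port_state_str(port.state));

    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}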


ConnectX-5 enables higher HPC performance with new Message Passing Interface (MPI) offloads, such as MPI Tag Matching and MPI AlltoAll operations, advanced dynamic routing, and new capabilities to perform various data algorithms.
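
These MPI offloads are transparent to applications: ordinary tagged point-to-point messages and collectives such as MPI_Alltoall simply benefit when the adapter performs the matching and data movement. The following is a minimal, generic MPI sketch of the two operations named above; the tag, process count and buffer sizes are arbitrary and not taken from this page.

/* Minimal MPI sketch: a tag-matched point-to-point message and an
 * MPI_Alltoall collective. The offloads are transparent; this is
 * ordinary MPI code with arbitrary illustrative values.
 * Build: mpicc mpi_sketch.c -o mpi_sketch ; run: mpirun -np 4 ./mpi_sketch
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Tag-matched point-to-point: rank 0 sends one tagged message to rank 1. */
    const int TAG = 42;                 /* arbitrary tag for illustration */
    if (rank == 0 && size > 1) {
        int payload = 123;
        MPI_Send(&payload, 1, MPI_INT, 1, TAG, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int payload;
        MPI_Recv(&payload, 1, MPI_INT, 0, TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d with tag %d\n", payload, TAG);
    }

    /* All-to-all: every rank sends one int to every other rank. */
    int *sendbuf = malloc(size * sizeof(int));
    int *recvbuf = malloc(size * sizeof(int));
    for (int i = 0; i < size; i++)
        sendbuf[i] = rank * 100 + i;    /* distinct value per destination */
    MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}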


Moreover, ConnectX-5 Accelerated Switching and Packet Processing (ASAP2) technology enhances offloading of virtual switches and virtual routers, for example, Open vSwitch (OVS), which results in significantly higher data transfer performance without overloading the CPU. Together with native RDMA and RoCE support, ConnectX-5 dramatically improves Cloud and NFV platform efficiency.


Similar to previous generations, ConnectX-5 supports the Mellanox Multi-Host technology. Multi-Host enables multiple hosts to be connected to a single interconnect adapter by separating the PCIe interface into multiple, separate interfaces. Each interface can be connected to a separate host with no performance degradation. Multi-Host technology offers four fully independent PCIe buses, lowering total cost of ownership in the data centre by reducing CAPEX requirements from four cables, NICs, and switch ports to only one of each, and by reducing OPEX by cutting down on switch port management and overall power usage.

Benefits

  • Industry-leading throughput, low latency, low CPU utilisation and high message rate
  • Innovative rack design for storage and Machine Learning based on Host Chaining technology
  • Maximises data centre ROI with Multi-Host technology
  • Smart interconnect for x86, Power, ARM, and GPU-based compute and storage platforms
  • Advanced storage capabilities including NVMe over Fabric offloads
  • Intelligent network adapter supporting flexible pipeline programmability
  • Cutting-edge performance in virtualised networks including Network Function Virtualisation (NFV)
  • Enabler for efficient service chaining capabilities

Key Features

  • EDR 100Gb/s InfiniBand or 100Gb/s Ethernet per port and all lower speeds
  • Up to 200M messages/second
  • Tag Matching and Rendezvous Offloads
  • Adaptive Routing on Reliable Transport
  • Burst Buffer Offloads
  • NVMe over Fabric (NVMf) Target Offloads
  • Multi-Host technology - connectivity to up to 4 independent hosts
  • Back-End Switch Elimination by Host Chaining
  • Embedded PCIe Switch
  • Enhanced vSwitch/vRouter Offloads
  • Flexible Pipeline
  • RoCE for Overlay Networks
  • PCIe Gen 4 Support
  • Erasure Coding offload
  • T10-DIF Signature Handover
  • IBM CAPI v2 support
  • Mellanox PeerDirect communication acceleration
  • Hardware offloads for NVGRE and VXLAN encapsulated traffic
  • End-to-end QoS and congestion control
  • Hardware-based I/O virtualisation
  • RoHS2 R6
Manufacturer Mellanox
Part No. MT28808A0-FCCF-EVM
End of Life? No
Advanced Network Features
  • Hardware-based reliable transport
  • Collective operations offloads
  • Vector collective operations offloads
  • PeerDirect RDMA (aka GPUDirect) communication acceleration
  • 64b/66b encoding
  • Advanced memory mapping support, allowing user mode registration and remapping of memory (see the registration sketch after this list)
  • Enhanced Atomic operations
  • Extended Reliable Connected transport (XRC)
  • Dynamically Connected Transport (DCT)
  • On demand paging (ODP)
  • MPI Tag Matching
  • Rendezvous protocol offload
  • Out-of-order RDMA supporting Adaptive Routing
  • Burst buffer offload
  • In-Network Memory registration-free RDMA memory access
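
As referenced in the memory mapping item above, user-mode memory registration is exposed through the standard verbs API. The sketch below is a generic libibverbs (rdma-core) illustration, not specific to this part; the buffer size and access flags are arbitrary, and IBV_ACCESS_ON_DEMAND could additionally be requested on devices that report on-demand paging (ODP) support.

/* Minimal sketch of user-mode memory registration with libibverbs
 * (rdma-core): allocate a protection domain, register a buffer for
 * local and remote access, and print the local/remote keys used for
 * RDMA operations. Values are illustrative only.
 * Build: cc reg_mr.c -o reg_mr -libverbs
 */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) { fprintf(stderr, "no RDMA devices\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (!ctx) { fprintf(stderr, "cannot open device\n"); return 1; }
    struct ibv_pd *pd = ibv_alloc_pd(ctx);
    if (!pd) { fprintf(stderr, "cannot allocate protection domain\n"); return 1; }

    size_t len = 4096;                       /* arbitrary buffer size */
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) { perror("ibv_reg_mr"); return 1; }

    printf("registered %zu bytes: lkey=0x%x rkey=0x%x\n", len, mr->lkey, mr->rkey);

    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    free(buf);
    return 0;
}
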
Bandwidth Max 100Gb/s
Channels 16 million I/O channels
Host OS Support RHEL, CentOS, Windows, FreeBSD, VMware, OFED, WinOF-2
I/O Virtualisation
  • Single Root IOV
  • Address translation and protection
  • VMware NetQueue support
  • SR-IOV: Up to 1K Virtual Functions (see the configuration sketch after this list)
  • SR-IOV: Up to 16 Physical Functions per host
  • Virtualisation hierarchies (e.g. NPAR and Multi-Host)
    • Virtualising Physical Functions on a physical port
    • SR-IOV on every Physical Function
  • Configurable and user-programmable QoS
  • Guaranteed QoS for VMs
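
As referenced in the SR-IOV item above, virtual functions are typically created at runtime through the standard Linux sysfs interface rather than through any API specific to this part. The sketch below assumes a hypothetical netdev name (ens1f0np0) and an arbitrary VF count, and must run with root privileges; the same effect can be achieved by writing to the same sysfs file from a shell.

/* Minimal sketch: enable SR-IOV virtual functions through the standard
 * Linux sysfs interface. The interface name "ens1f0np0" and the VF count
 * are assumptions for illustration; requires root privileges.
 * Build: cc sriov_enable.c -o sriov_enable
 */
#include <stdio.h>

int main(void)
{
    const char *iface = "ens1f0np0";   /* hypothetical netdev name */
    int num_vfs = 4;                   /* arbitrary VF count for illustration */
    char path[256];

    /* Read how many VFs the device supports at most. */
    snprintf(path, sizeof(path), "/sys/class/net/%s/device/sriov_totalvfs", iface);
    FILE *f = fopen(path, "r");
    int total = 0;
    if (f) { fscanf(f, "%d", &total); fclose(f); }
    printf("%s supports up to %d VFs\n", iface, total);

    /* Write the desired VF count; the driver then creates the VFs. */
    snprintf(path, sizeof(path), "/sys/class/net/%s/device/sriov_numvfs", iface);
    f = fopen(path, "w");
    if (!f) { perror("open sriov_numvfs"); return 1; }
    fprintf(f, "%d", num_vfs);
    fclose(f);

    printf("requested %d VFs on %s\n", num_vfs, iface);
    return 0;
}
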
Supported Software

OpenMPI, IBM PE, OSU MPI (MVAPICH/2), Intel MPI, Platform MPI, UPC, Open SHMEM

PCI Slot(s) Gen 4.0, 3.0, 2.0, 1.1 compatible
Ports Up to 100Gb/s connectivity per port
Port-Port Latency Sub-600ns
IEEE Compliance
  • IEEE 802.3bj, 802.3bm 100 Gigabit Ethernet
  • IEEE 802.3by, Ethernet Consortium 25, 50 Gigabit Ethernet, supporting all FEC modes
  • IEEE 802.3ba 40 Gigabit Ethernet
  • IEEE 802.3ae 10 Gigabit Ethernet
  • IEEE 802.3az Energy Efficient Ethernet
  • IEEE 802.3ap based auto-negotiation and KR startup
  • IEEE 802.3ad, 802.1AX Link Aggregation
  • IEEE 802.1Q, 802.1P VLAN tags and priority
  • IEEE 802.1Qau (QCN) - Congestion Notification
  • IEEE 802.1Qaz (ETS)
  • IEEE 802.1Qbb (PFC)
  • IEEE 802.1Qbg
  • IEEE 1588v2
RoHS RoHS R6
