Price: £830.83 Ex VAT (£997.00 Inc VAT)


Mellanox MCX556M-ECAT-S25 ConnectX-5 VPI Adapter Card

With Socket Direct supporting dual-socket servers, EDR IB (100Gb/s) and 100GbE, dual-port QSFP28, 2x PCIe 3.0 x8, 25cm harness, tall bracket, RoHS R6

by Mellanox
Part No: FEMCX556M-ECAT-S25
Manufacturer No: MCX556M-ECAT-S25
Delivery: In Stock, 2-3 weeks


    ConnectX-5 InfiniBand Adapter Card

    ConnectX-5 VPI Socket Direct EDR IB and 100GbE InfiniBand & Ethernet Adapter Card

    Intelligent RDMA-enabled network adapter card with advanced application offload capabilities supporting 100Gb/s for servers without x16 PCIe slots.


    Benefits:

    • Up to 100Gb/s connectivity per port

    • Industry-leading throughput, low latency, low CPU utilization and high message rate

    • Low latency for dual-socket servers in environments with multiple network flows

    • Innovative rack design for storage and Machine Learning based on Host Chaining technology

    • Smart interconnect for x86, Power, Arm, and GPU-based compute and storage platforms

    • Advanced storage capabilities including NVMe over Fabric offloads

    • Intelligent network adapter supporting flexible pipeline programmability

    • Cutting-edge performance in virtualized networks including Network Function Virtualization (NFV)

    • Enabler for efficient service chaining capabilities

    • Efficient I/O consolidation, lowering data center costs and complexity


    ConnectX-5 Socket Direct with Virtual Protocol Interconnect supports two ports of 100Gb/s InfiniBand and Ethernet connectivity, very low latency, a very high message rate, and OVS and NVMe over Fabrics offloads, providing a high-performance and flexible solution for the most demanding applications and markets: machine learning, data analytics, and more.


    Features:

    • Socket Direct, enabling 100Gb/s for servers without x16 PCIe slots

    • Tag matching and rendezvous offloads

    • Adaptive routing on reliable transport

    • Burst buffer offloads for background checkpointing

    • NVMe over Fabric (NVMe-oF) offloads

    • Back-end switch elimination by host chaining

    • Enhanced vSwitch/vRouter offloads

    • Flexible pipeline

    • RoCE for overlay networks

    • RoHS compliant

    • ODCC compatible


    HPC Environments

    ConnectX-5 offers enhancements to HPC infrastructures by providing MPI and SHMEM/PGAS tag matching and rendezvous offloads, hardware support for out-of-order RDMA write and read operations, and additional network atomic and PCIe atomic operations support.

    ConnectX-5 enhances RDMA network capabilities by completing the switch adaptive-routing capabilities and supporting data delivered out-of-order, while maintaining ordered completion semantics, providing multipath reliability, and efficient support for all network topologies, including DragonFly and DragonFly+.

    ConnectX-5 also supports burst buffer offload for background checkpointing without interfering with main CPU operations, and the innovative dynamic connected transport (DCT) service to ensure extreme scalability for compute and storage systems.
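
    The tag matching and rendezvous offloads are transparent to applications: a standard MPI program benefits once the MPI stack (for example HPC-X/UCX from the HPC Software Libraries list further down) enables hardware tag matching on the adapter. The minimal sketch below assumes mpi4py and an InfiniBand-backed MPI installation; the buffer size, tag value, and script name are illustrative only.

    ```python
    # Minimal sketch: a tagged nonblocking exchange. When the MPI stack (e.g. HPC-X/UCX)
    # enables hardware tag matching, the adapter can match posted receives to incoming
    # tags without CPU involvement. Assumptions: mpi4py and an InfiniBand-backed MPI.
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    assert comm.Get_size() == 2  # run with: mpirun -np 2 python tag_exchange.py

    TAG = 77                                   # arbitrary tag value
    buf = np.zeros(1 << 20, dtype=np.float64)  # 8 MiB message; large messages typically
                                               # use the rendezvous protocol

    if rank == 0:
        buf[:] = 1.0
        req = comm.Isend(buf, dest=1, tag=TAG)    # nonblocking tagged send
    else:
        req = comm.Irecv(buf, source=0, tag=TAG)  # posted receive; tag matched for us
    req.Wait()

    if rank == 1:
        print("received", buf[0])
    ```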


    Storage Environments

    NVMe storage devices are gaining popularity, offering very fast storage access. The NVMe over Fabrics (NVMe-oF) protocol leverages RDMA connectivity for remote access. ConnectX-5 offers further enhancements by providing NVMe-oF target offloads, enabling highly efficient NVMe storage access with no CPU intervention, and thus improved performance and lower latency.

    Standard block and file access protocols can leverage RDMA for high-performance storage access. A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks.
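
    On the software side, a Linux host commonly exposes NVMe namespaces over the RDMA transport through the kernel nvmet target and its configfs interface; the ConnectX-5 NVMe-oF target offload itself is a hardware capability enabled through the driver and is not configured by this generic interface. The sketch below is an assumption-laden illustration: it presumes a Linux system with the nvmet and nvmet-rdma modules loaded, root privileges, and uses a placeholder NQN, backing device, and IP address.

    ```python
    # Sketch: export a block device as an NVMe-oF target over RDMA via Linux nvmet
    # configfs. Assumptions: nvmet + nvmet-rdma modules loaded, run as root;
    # the NQN, /dev/nvme0n1, and 192.168.1.10 are placeholders.
    import os
    from pathlib import Path

    NVMET = Path("/sys/kernel/config/nvmet")
    NQN = "nqn.2024-01.com.example:subsys1"      # hypothetical subsystem name

    subsys = NVMET / "subsystems" / NQN
    ns = subsys / "namespaces" / "1"
    port = NVMET / "ports" / "1"
    for d in (subsys, ns, port):
        d.mkdir(parents=True, exist_ok=True)

    (subsys / "attr_allow_any_host").write_text("1\n")
    (ns / "device_path").write_text("/dev/nvme0n1\n")    # placeholder backing device
    (ns / "enable").write_text("1\n")

    (port / "addr_trtype").write_text("rdma\n")          # RDMA transport (IB or RoCE)
    (port / "addr_adrfam").write_text("ipv4\n")
    (port / "addr_traddr").write_text("192.168.1.10\n")  # placeholder adapter address
    (port / "addr_trsvcid").write_text("4420\n")         # conventional NVMe-oF port

    os.symlink(subsys, port / "subsystems" / NQN)        # expose subsystem on the port
    ```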


    Adapter Card Portfolio

    ConnectX-5 InfiniBand adapter cards are available in several form factors, including low-profile stand-up PCIe, Open Compute Project (OCP) Spec 2.0 Type 1, and OCP 2.0 Type 2.

    NVIDIA Multi-Host technology allows multiple hosts to be connected into a single adapter by separating the PCIe interface into multiple and independent interfaces.

    The portfolio also offers NVIDIA Socket Direct configurations that enable servers without x16 PCIe slots to split the card’s 16-lane PCIe bus into two 8-lane buses on dedicated cards connected by a harness. This provides 100Gb/s port speed even to servers without a x16 PCIe slot.

    Socket Direct also enables NVIDIA GPUDirect® RDMA for all CPU/GPU pairs by ensuring that all GPUs are linked to CPUs close to the adapter card, and enables Intel® DDIO on both sockets by creating a direct connection between the sockets and the adapter card.
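
    A practical consequence of Socket Direct is that each PCIe x8 half of the card attaches to a different CPU socket, so the two resulting RDMA devices should report different NUMA nodes. The sketch below is a rough check assuming a Linux host with the mlx5 driver loaded; device names such as mlx5_0 and mlx5_1 are placeholders.

    ```python
    # Sketch: list RDMA devices and the NUMA node of the PCIe function behind each.
    # Assumption: Linux with the mlx5 driver loaded; with Socket Direct the two
    # halves of the card should report different NUMA nodes.
    from pathlib import Path

    for dev in sorted(Path("/sys/class/infiniband").iterdir()):
        numa = (dev / "device" / "numa_node").read_text().strip()
        print(f"{dev.name}: NUMA node {numa}")
    ```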


    Ethernet
    • 100GbE / 50GbE / 40GbE / 25GbE / 10GbE / 1GbE
    • Jumbo frame support (9.6KB)

    HPC Software Libraries
    • NVIDIA HPC-X, Open MPI, IBM PE, OSU MPI (MVAPICH/2), Intel MPI
    • Platform MPI, UPC, Open SHMEM

     

    InfiniBand
    • 100Gb/s and lower speed
    • IBTA Specification 1.3 compliant
    • RDMA, send/receive semantics
    • Hardware-based congestion control
    • Atomic operations
    • 16 million I/O channels
    • 256 B to 4 KB MTU, 2 GB messages
    • 8 virtual lanes + VL15

    Remote Boot
    • Remote boot over InfiniBand
    • Remote boot over Ethernet
    • Remote boot over iSCSI
    • Unified Extensible Firmware Interface (UEFI)
    • Preboot Execution Environment (PXE)

     

    Management and Control
    • NC-SI over MCTP over SMBus and NC-SI over MCTP over PCIe—Baseboard Management Controller interface
    • PLDM for Monitor and Control DSP0248
    • PLDM for Firmware Update DSP0267
    • SDN management interface for managing the eSwitch
    • I2C interface for device control and configuration
    • General purpose I/O pins
    • SPI interface to flash
    • JTAG IEEE 1149.1 and IEEE 1149.6

    Hardware-Based I/O Virtualization
    • Single Root I/O Virtualization (SR-IOV); a configuration sketch follows this list
    • Address translation and protection
    • VMware NetQueue support
    • SR-IOV: up to 512 virtual functions
    • SR-IOV: up to 8 physical functions per host
    • Virtualization hierarchies (e.g., NPAR when enabled)
    - Virtualizing physical functions on a physical port
    - SR-IOV on every physical function
    • Configurable and user-programmable QoS
    • Guaranteed QoS for VMs
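
    As an illustration of the SR-IOV entries above, the sketch below enables a few virtual functions through the generic Linux sysfs SR-IOV interface. It is a sketch only, assuming a Linux host with the mlx5 driver, root privileges, and a placeholder netdev name; the device itself reports how many VFs it supports (up to the 512 listed above).

    ```python
    # Sketch: enable SR-IOV virtual functions via the generic sysfs interface.
    # Assumptions: Linux, mlx5 driver, run as root; "enp1s0f0" is a placeholder
    # name for the ConnectX-5 physical function's netdev.
    from pathlib import Path

    PF = Path("/sys/class/net/enp1s0f0/device")

    total = int((PF / "sriov_totalvfs").read_text())   # VFs the device supports
    wanted = min(4, total)

    (PF / "sriov_numvfs").write_text("0")              # must reset before changing
    (PF / "sriov_numvfs").write_text(str(wanted))
    print(f"enabled {wanted} of {total} possible VFs")
    ```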

     

    Storage Offloads
    • NVMe over Fabrics offloads for target machine
    • T10 DIF—Signature handover operation at wire speed for ingress and egress traffic
    • Storage protocols: SRP, iSER, NFS RDMA, SMB Direct, NVMe-oF

    Overlay Networks
    • RoCE over overlay networks
    • Stateless offloads for overlay network tunneling protocols
    • Hardware offload of encapsulation and decapsulation of VXLAN, NVGRE, and GENEVE overlay networks

     

    Enhanced Features
    • Hardware-based reliable transport
    • Collective operations offloads
    • Vector collective operations offloads
    • NVIDIA PeerDirect™ RDMA (aka GPUDirect) communication acceleration
    • 64b/66b encoding
    • Extended reliable connected transport (XRC)
    • Dynamically connected transport (DCT)
    • Enhanced atomic operations
    • Advanced memory mapping support, allowing user mode registration and remapping of memory (UMR)
    • On-demand paging (ODP)
    • MPI tag matching
    • Rendezvous protocol offload
    • Out-of-order RDMA supporting adaptive routing
    • Burst buffer offload
    • In-Network Memory registration-free RDMA memory access

     

    Full specifications and details can be found in the product datasheet PDF.
