Mellanox CONNECTX-5 EN Network Interface Card 100GBE
Dual-Port QSFP28 X16 Tall Bracket ROHS R6
by Mellanox

Part No: FEMCX516A-CCAT
Manufacturer No: MCX516A-CCAT
Delivery: In Stock: 2-3 Weeks

£830.83 Ex VAT
£997.00 Inc VAT
Earn 415 Data Points when purchasing this product.
    Mellanox CONNECTX-5 EN Network Interface Card 100GBE
    Dual-Port QSFP28 X16 Tall Bracket ROHS R6

    ConnectX®-5 EN Card

    Up to 100Gb/s Ethernet Adapter Cards

    Intelligent RDMA-enabled, single and dual-port network adapter with advanced application offload capabilities for Web 2.0, Cloud, Storage, and Telco platforms

    ConnectX-5 Ethernet network interface cards provide high performance and flexible solutions with up to two ports of 100GbE connectivity, 750ns latency, up to 200 million messages per second (Mpps), and a record-setting 197 Mpps when running the open-source Data Plane Development Kit (DPDK) over PCIe Gen 4.0. For storage workloads, ConnectX-5 delivers a range of innovative accelerations, such as Signature Handover (T10-DIF) in hardware, an embedded PCIe Switch, and NVMe over Fabric target offloads. ConnectX-5 adapter cards also bring advanced Open vSwitch offloads to telecommunications and cloud data centers to drive extremely high packet rates and throughput with reduced CPU resource consumption, thus boosting data center infrastructure efficiency.

    ConnectX-5 adapter cards are available for PCIe Gen 3.0 and Gen 4.0 servers and provide support for 1, 10, 25, 40, 50 and 100 GbE speeds in stand-up PCIe cards, OCP 2.0, and OCP 3.0 form factors. ConnectX-5 cards also offer advanced Multi-Host and Socket Direct technologies.
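
    The 197 Mpps figure quoted above refers to packet processing with the userspace Data Plane Development Kit. As a rough illustration of what such an application looks like (a minimal sketch, not Mellanox's benchmark code; it assumes DPDK and its mlx5 poll-mode driver are installed and that the build uses pkg-config's libdpdk flags), the following program initialises DPDK's Environment Abstraction Layer and reports how many Ethernet ports it can drive:

        /* Minimal DPDK start-up sketch: initialise the EAL and count usable ports.
         * Assumed build command: gcc probe.c $(pkg-config --cflags --libs libdpdk) */
        #include <stdio.h>
        #include <rte_eal.h>
        #include <rte_ethdev.h>

        int main(int argc, char **argv)
        {
            /* rte_eal_init() consumes EAL options such as -l (core list) and -a (PCI allow-list). */
            if (rte_eal_init(argc, argv) < 0) {
                fprintf(stderr, "EAL initialisation failed\n");
                return 1;
            }

            /* Count the Ethernet devices the poll-mode drivers (e.g. mlx5) discovered. */
            printf("DPDK found %u usable Ethernet port(s)\n",
                   (unsigned)rte_eth_dev_count_avail());

            rte_eal_cleanup();
            return 0;
        }

    A real forwarding application would go on to configure queues with rte_eth_dev_configure() and poll them with rte_eth_rx_burst(), which is where the kernel-bypass packet rates come from.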


    Benefits

    • Up to 100Gb/s connectivity per port

    • Industry-leading throughput, low latency, low CPU utilization and high message rate

    • Innovative rack design for storage and Machine Learning based on Host Chaining technology

    • Smart interconnect for x86, Power, Arm, and GPU-based compute & storage platforms

    • Advanced storage capabilities including NVMe over Fabric offloads

    • Intelligent network adapter supporting flexible pipeline programmability

    • Cutting-edge performance in virtualized networks including Network Function Virtualization (NFV)

    • Enabler for efficient service chaining capabilities

    • Efficient I/O consolidation, lowering data center costs and complexity




    Features

    • Tag matching and rendezvous offloads

    • Adaptive routing on reliable transport

    • Burst buffer offloads for background checkpointing

    • NVMe over Fabric offloads

    • Backend switch elimination by host chaining

    • Embedded PCIe switch

    • Enhanced vSwitch/vRouter offloads

    • Flexible pipeline

    • RoCE for overlay networks

    • PCIe Gen 4.0 support

    • RoHS compliant

    • ODCC compatible

    • Various form factors available


    Cloud and Web 2.0 Environments

    ConnectX-5 adapter cards enable data center administrators to benefit from better server utilization and reduced costs, power usage, and cable complexity, allowing for more virtual appliances, virtual machines (VMs) and tenants to co-exist on the same hardware.

    Supported vSwitch/vRouter offload functions include:

    • Overlay Networks (e.g., VXLAN, NVGRE, MPLS, GENEVE, and NSH) header encapsulation & decapsulation.

    • Stateless offloads of inner packets and packet headers’ re-write, enabling NAT functionality and more.

    • Flexible and programmable parser and match-action tables, which enable hardware offloads for future protocols.

    • SR-IOV technology, providing dedicated adapter resources, guaranteed isolation and protection for virtual machines (VMs) within the server.

    • Network Function Virtualization (NFV), enabling a VM to be used as a virtual appliance. The full datapath operation offloads, hairpin hardware capability and service chaining enable data to be handled by the virtual appliance with minimum CPU utilization.

    Cloud and Web 2.0 customers developing platforms on Software Defined Network (SDN) environments are leveraging their servers’ Operating System Virtual-Switching capabilities to achieve maximum flexibility. Open vSwitch (OvS) is an example of a virtual switch that allows Virtual Machines to communicate with each other and with the outside world. Traditionally residing in the hypervisor where switching is based on twelve-tuple matching on flows, the virtual switch, or virtual router software-based solution, is CPU-intensive. This can negatively affect system performance and prevent the full utilization of available bandwidth.

    Mellanox ASAP2 - Accelerated Switching and Packet Processing® technology enables offloading the vSwitch/vRouter by handling the data plane in the NIC hardware, without modifying the control plane. This results in significantly higher vSwitch/vRouter performance without the associated CPU load.

    Additionally, ConnectX-5’s intelligent, flexible pipeline capabilities, including its flexible parser and flexible match-action tables, are programmable, enabling hardware offloads for future protocols.


    Storage Environments

    NVMe storage devices are gaining popularity by offering very fast storage access. The evolving NVMe over Fabric (NVMe-oF) protocol leverages the RDMA connectivity for remote access. ConnectX-5 offers further enhancements by providing NVMe-oF target offloads, enabling very efficient NVMe storage access with no CPU intervention, and thus improving performance and reducing latency.
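
    On Linux, the kernel’s NVMe-oF target (nvmet) is configured through configfs, which is typically how a ConnectX-equipped host exposes a namespace over RDMA. The sketch below shows the general shape of that configuration; the NQN, IP address, port id and backing device are illustrative placeholders, and whether the adapter’s hardware target offload actually engages depends on driver and firmware support rather than on anything in this snippet:

        /* Sketch: export a block device as an NVMe-oF target over RDMA using the Linux
         * kernel target (nvmet) configfs interface. Needs root and the nvmet/nvmet-rdma
         * modules loaded. NQN, address, service id and backing device are placeholders. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/stat.h>
        #include <unistd.h>

        #define CFG "/sys/kernel/config/nvmet"
        #define NQN "nqn.2024-01.example:cx5-target"   /* placeholder subsystem name */

        static void put(const char *path, const char *val)
        {
            FILE *f = fopen(path, "w");
            if (!f) { perror(path); exit(1); }
            fprintf(f, "%s\n", val);
            fclose(f);
        }

        int main(void)
        {
            /* 1. Subsystem with one namespace backed by a local block device. */
            mkdir(CFG "/subsystems/" NQN, 0755);
            put(CFG "/subsystems/" NQN "/attr_allow_any_host", "1");
            mkdir(CFG "/subsystems/" NQN "/namespaces/1", 0755);
            put(CFG "/subsystems/" NQN "/namespaces/1/device_path", "/dev/nvme0n1"); /* placeholder */
            put(CFG "/subsystems/" NQN "/namespaces/1/enable", "1");

            /* 2. RDMA-transport port listening on the ConnectX interface address (placeholder IP). */
            mkdir(CFG "/ports/1", 0755);
            put(CFG "/ports/1/addr_trtype", "rdma");
            put(CFG "/ports/1/addr_adrfam", "ipv4");
            put(CFG "/ports/1/addr_traddr", "192.168.1.10");
            put(CFG "/ports/1/addr_trsvcid", "4420");

            /* 3. Expose the subsystem through the port. */
            symlink(CFG "/subsystems/" NQN, CFG "/ports/1/subsystems/" NQN);
            return 0;
        }

    A remote initiator can then attach to the exported namespace with the standard nvme-cli tooling (nvme connect over the rdma transport type).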

    The embedded PCIe switch enables customers to build standalone storage or Machine Learning appliances. As with earlier generations of ConnectX adapters, standard block and file access protocols leverage RoCE for high-performance storage access. A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks.

    ConnectX-5 enables an innovative storage rack design, Host Chaining, which enables different servers to interconnect without involving the Top of the Rack (ToR) switch. Leveraging Host Chaining, ConnectX-5 lowers the data center’s total cost of ownership (TCO) by reducing CAPEX (cables, NICs, and switch port expenses). OPEX is also reduced by cutting down on switch port management and overall power usage.


    Telecommunications

    Telecommunications service providers are moving towards disaggregation, server virtualization, and orchestration as key tenets to modernize their networks. Likewise, they’re also moving towards Network Function Virtualization (NFV), which enables the rapid deployment of new network services. With this move, proprietary dedicated hardware and software, which tend to be static and difficult to scale, are being replaced with virtual machines running on commercial off-the-shelf (COTS) servers.

    For telecom service providers, choosing the right networking hardware is critical to achieving a cloud-native NFV solution that is agile, reliable, fast and efficient. Telco service providers typically leverage virtualization and cloud technologies to better achieve agile service delivery and efficient scalability; these technologies require an advanced network infrastructure to support higher rates of packet processing. However, the resultant east-west traffic causes numerous interrupts as I/O traverses from kernel to user space, eats up CPU cycles and decreases packet performance. Particularly sensitive to delays are voice and video applications which often require less than 100ms of latency.

    ConnectX-5 adapter cards drive extremely high packet rates, increased throughput and higher network efficiency through the following technologies: Open vSwitch Offloads (OvS), OvS over DPDK or ASAP², Network Overlay Virtualization, SR-IOV, and RDMA. This allows for secure data delivery through higher-performance offloads, reducing CPU resource utilization and boosting data center infrastructure efficiency. The result is a much more responsive and agile network capable of rapidly deploying network services.
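
    To put these packet-rate claims in context, a single 100GbE port at the minimum 64-byte frame size tops out at roughly 148.8 Mpps once the 8-byte preamble and 12-byte inter-frame gap are counted, which suggests the 197 Mpps DPDK figure quoted earlier is an aggregate across both ports. A quick back-of-envelope check:

        /* Back-of-envelope 100GbE line-rate check at minimum-size (64-byte) frames. */
        #include <stdio.h>

        int main(void)
        {
            const double line_rate_bps = 100e9;        /* one 100GbE port */
            const double wire_bytes    = 64 + 8 + 12;  /* frame + preamble/SFD + inter-frame gap */
            double mpps = line_rate_bps / (wire_bytes * 8.0) / 1e6;
            printf("Theoretical maximum: %.1f Mpps per 100GbE port\n", mpps);  /* ~148.8 */
            return 0;
        }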


    Wide Selection of Adapter Cards

    ConnectX-5 Ethernet adapter cards are available in several form factors, including low-profile stand-up PCIe, OCP 2.0 Type 1 and Type 2, and OCP 3.0 Small Form Factor.

    Mellanox Multi-Host® technology allows multiple hosts to be connected into a single adapter by separating the PCIe interface into multiple and independent interfaces.

    The portfolio also offers Mellanox Socket-Direct® configurations that enable servers without x16 PCIe slots to split the card’s 16-lane PCIe bus into two 8-lane buses on dedicated cards connected by a harness.


    Host Management

    Host Management includes NC-SI over MCTP over SMBus, and MCTP over PCIe - Baseboard Management Controller (BMC) interface, as well as PLDM for Monitor and Control DSP0248 and PLDM for Firmware Update DSP0267.


    Storage Offloads
    • NVMe over Fabric offloads for target machine
    • T10 DIF – Signature handover operation at wire speed, for ingress and egress traffic
    • Storage protocols: SRP, iSER, NFS RDMA, SMB Direct, NVMe-oF

    Remote Boot
    • Remote boot over Ethernet
    • Remote boot over iSCSI
    • Unified Extensible Firmware Interface (UEFI)
    • Preboot Execution Environment (PXE)

     

    Enhanced Features
    • Hardware-based reliable transport
    • Collective operations offloads
    • Vector collective operations offloads
    • Mellanox PeerDirect® RDMA (aka GPUDirect®) communication acceleration
    • 64b/66b encoding
    • Extended Reliable Connected transport (XRC)
    • Dynamically Connected Transport (DCT)
    • Enhanced Atomic operations
    • Advanced memory mapping support, allowing user mode registration and remapping of memory (UMR)
    • On demand paging (ODP)
    • MPI Tag Matching
    • Rendezvous protocol offload
    • Out-of-order RDMA supporting Adaptive Routing
    • Burst buffer offload
    • In-Network Memory registration-free RDMA memory access
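
    Applications reach these transport features through the RDMA verbs API. As a minimal sketch (assuming the rdma-core userspace stack and the mlx5 driver are installed; the -libverbs build flag is an assumption about the local toolchain), the following program lists the RDMA devices on a host and their port link layer, which a ConnectX-5 EN card running RoCE reports as Ethernet:

        /* Sketch: enumerate RDMA-capable devices with libibverbs (rdma-core).
         * Assumed build command: gcc list_rdma.c -libverbs */
        #include <stdio.h>
        #include <infiniband/verbs.h>

        int main(void)
        {
            int num = 0;
            struct ibv_device **devs = ibv_get_device_list(&num);
            if (!devs) { perror("ibv_get_device_list"); return 1; }

            for (int i = 0; i < num; i++) {
                struct ibv_context *ctx = ibv_open_device(devs[i]);
                if (!ctx)
                    continue;
                struct ibv_port_attr port;
                /* Port numbering starts at 1; a RoCE port reports IBV_LINK_LAYER_ETHERNET. */
                if (ibv_query_port(ctx, 1, &port) == 0)
                    printf("%s: link layer %s\n", ibv_get_device_name(devs[i]),
                           port.link_layer == IBV_LINK_LAYER_ETHERNET ? "Ethernet (RoCE)" : "InfiniBand");
                ibv_close_device(ctx);
            }
            ibv_free_device_list(devs);
            return 0;
        }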

    CPU Offloads
    • RDMA over Converged Ethernet (RoCE)
    • TCP/UDP/IP stateless offload
    • LSO, LRO, checksum offload
    • RSS (also on encapsulated packet), TSS, HDS, VLAN and MPLS tag insertion/stripping, Receive flow steering
    • Data Plane Development Kit (DPDK) for kernel bypass applications
    • Open vSwitch (OvS) offload using ASAP²
    - Flexible match-action flow tables
    - Tunneling encapsulation/decapsulation
    • Intelligent interrupt coalescence
    • Header rewrite supporting hardware offload of NAT router

     

    Hardware-Based I/O Virtualization
    • Single Root IOV
    • Address translation and protection
    • VMware NetQueue support
    • SR-IOV: Up to 512 Virtual Functions (a configuration sketch follows this list)
    • SR-IOV: Up to 8 Physical Functions per host
    • Virtualization hierarchies (e.g., NPAR when enabled)
    - Virtualizing Physical Functions on a physical port
    - SR-IOV on every Physical Function
    • Configurable and user-programmable QoS
    • Guaranteed QoS for VMs
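
    On Linux, the SR-IOV Virtual Functions listed above are usually created through the PCI sysfs interface of the ConnectX-5 netdev. A minimal sketch, assuming the port is visible as a network interface (the interface name and VF count are placeholders, and SR-IOV must already be enabled in the adapter firmware and system BIOS):

        /* Sketch: request SR-IOV Virtual Functions for a ConnectX-5 port via sysfs.
         * Interface name and VF count are placeholders; requires root privileges. */
        #include <stdio.h>
        #include <stdlib.h>

        int main(int argc, char **argv)
        {
            const char *ifname = argc > 1 ? argv[1] : "ens1f0";  /* placeholder netdev name */
            int num_vfs = argc > 2 ? atoi(argv[2]) : 4;          /* placeholder VF count */

            char path[256];
            snprintf(path, sizeof(path), "/sys/class/net/%s/device/sriov_numvfs", ifname);

            FILE *f = fopen(path, "w");
            if (!f) { perror(path); return 1; }
            /* Writing N asks the driver to create N VFs (write 0 first to change an existing value). */
            fprintf(f, "%d\n", num_vfs);
            fclose(f);

            printf("Requested %d VFs on %s\n", num_vfs, ifname);
            return 0;
        }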

    Management and Control
    • NC-SI over MCTP over SMBus and NC-SI over MCTP over PCIe - Baseboard Management Controller interface
    • PLDM for Monitor and Control DSP0248
    • PLDM for Firmware Update DSP0267
    • SDN management interface for managing the eSwitch
    • I2C interface for device control and configuration
    • General Purpose I/O pins
    • SPI interface to Flash
    • JTAG IEEE 1149.1 and IEEE 1149.6

     

    Overlay Networks
    • RoCE over Overlay Networks
    • Stateless offloads for overlay network tunneling protocols
    • Hardware offload of encapsulation and decapsulation of VXLAN, NVGRE, and GENEVE overlay networks

     

    Full specification and details can be found in the Product Datasheet PDF file
