Mellanox MCX4121A-ACUT CONNECTX-4 LX EN NIC 25GBE Dual-Port
UEFI Enabled Tall Bracket

ConnectX-4 Lx EN Ethernet Adapter Cards

Up to 50Gb/s Ethernet Adapter Cards

1/10/25/40/50 Gigabit Ethernet adapter cards supporting RDMA, Overlay Network Encapsulation/Decapsulation and more

The ConnectX-4 Lx EN network interface card with 25Gb/s Ethernet connectivity addresses virtualized infrastructure challenges, delivering best-in-class performance to demanding markets and applications. It provides true hardware-based I/O isolation with unmatched scalability and efficiency, making it a cost-effective and flexible solution for Web 2.0, cloud, data analytics, database, and storage platforms.




Benefits:

• High performance boards for applications requiring high bandwidth, low latency and high message rate

• Industry leading throughput and latency for Web 2.0, Cloud and Big Data applications

• Smart interconnect for x86, Power, ARM, and GPU-based compute and storage platforms

• Cutting-edge performance in virtualized overlay networks

• Efficient I/O consolidation, lowering data center costs and complexity

• Virtualization acceleration

• Power efficiency


With the exponential increase in data usage and the creation of new applications, the demand for the highest throughput, lowest latency, virtualization and sophisticated data acceleration engines continues to rise. ConnectX-4 Lx EN enables data centers to leverage the world's leading interconnect adapter to increase operational efficiency, improve server utilization and maximize application productivity, while reducing total cost of ownership (TCO).

ConnectX-4 Lx EN adapter cards provide a combination of 1, 10, 25, 40 and 50 GbE bandwidth, sub-microsecond latency and a 75 million packets per second message rate. They include native hardware support for RDMA over Converged Ethernet (RoCE), Ethernet stateless offload engines, Overlay Networks, GPUDirect technology and Multi-Host technology.




Features:

• 1/10/25/40/50 Gb/s speeds

• Single and dual-port options

• Virtualization

• Low latency RDMA over Converged Ethernet (RoCE)

• Multi-Host technology connects up to 4 independent hosts

• CPU offloading of transport operations

• Application offloading

• PeerDirect communication acceleration

• Hardware offloads for NVGRE, VXLAN and GENEVE encapsulated traffic

• End-to-end QoS and congestion control

• Hardware-based I/O virtualization

• RoHS compliant

• ODCC compatible

• Various form factors available


Storage Acceleration
Storage applications will see improved performance with the higher bandwidth ConnectX-4 Lx EN delivers. Moreover, standard block and file access protocols can leverage RoCE for high-performance storage access. A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks.

Host Management
Mellanox host management and control capabilities include NC-SI over MCTP over SMBus and NC-SI over MCTP over PCIe (Baseboard Management Controller interface), as well as PLDM for Monitoring and Control (DSP0248) and PLDM for Firmware Update (DSP0267).

I/O Virtualization
ConnectX-4 Lx EN SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server. I/O virtualization with ConnectX-4 Lx EN gives data center administrators better server utilization while reducing cost, power, and cable complexity, allowing more virtual machines and more tenants on the same hardware.
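
On Linux, the SR-IOV resources described above are exposed through a standard sysfs interface on the physical function. The following is a minimal sketch, assuming a hypothetical interface name "ens1f0" and root privileges; it is an illustration of the generic kernel mechanism, not a Mellanox-specific tool.

# Hedged sketch: enable SR-IOV virtual functions through the standard Linux sysfs
# interface. The interface name "ens1f0" and the VF count are illustrative placeholders.
from pathlib import Path
def enable_vfs(iface: str, num_vfs: int) -> None:
    dev = Path(f"/sys/class/net/{iface}/device")
    total = int((dev / "sriov_totalvfs").read_text())
    if num_vfs > total:
        raise ValueError(f"{iface} supports at most {total} VFs")
    # The kernel rejects changing a non-zero VF count directly, so reset to 0 first.
    (dev / "sriov_numvfs").write_text("0")
    (dev / "sriov_numvfs").write_text(str(num_vfs))
enable_vfs("ens1f0", 4)  # requires root; each VF can then be passed through to a VM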

Mellanox PeerDirect®
Mellanox PeerDirect communication provides high efficiency RDMA access by eliminating unnecessary internal data copies between components on the PCIe bus (for example, from GPU to CPU), and therefore significantly reduces application run time. ConnectX-4 Lx EN advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.

Software Support
All Mellanox adapter cards are supported by Windows, Linux distributions, VMware, FreeBSD, and Citrix XenServer. ConnectX-4 Lx EN supports various management interfaces and has a rich set of tools for configuration and management across operating systems.
Additionally, ConnectX-4 Lx EN provides the option of a secure firmware update check using digital signatures to prevent remote attackers from uploading malicious firmware images; this ensures that only authentic images produced by Mellanox can be installed, regardless of whether the installation source is the host, the network, or a BMC.
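
As a quick sanity check before or after a firmware update, the open-source mstflint tool can report the image currently burned on the adapter. A minimal sketch, assuming mstflint is installed and using a placeholder PCI address:

# Hedged sketch: query the firmware image currently on the adapter with mstflint.
# "0000:3b:00.0" is a placeholder PCI address; adjust to the actual device.
import subprocess
result = subprocess.run(["mstflint", "-d", "0000:3b:00.0", "query"],
                        capture_output=True, text=True, check=True)
print(result.stdout)  # reports firmware version, product version and PSID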

Wide Selection of Ethernet Adapter Cards
ConnectX-4 Lx EN adapter cards offer a cost-effective Ethernet adapter solution for 1, 10, 25, 40 and 50 Gb/s Ethernet speeds, enabling seamless networking, clustering, or storage. The adapter reduces application runtime, and offers the flexibility and scalability to make infrastructure run as efficiently and productively as possible.
ConnectX-4 Lx Ethernet adapter cards are available in several form factors, including low-profile stand-up PCIe, OCP 2.0 Type 1, OCP 2.0 Type 2, and OCP 3.0 small form factor.

Overlay Networks
In order to better scale their networks, data center operators often create overlay networks that carry traffic from individual virtual machines over logical tunnels in encapsulated formats such as NVGRE and VXLAN. While this solves network scalability issues, it hides the TCP packet from the hardware offloading engines, placing higher loads on the host CPU. ConnectX-4 Lx EN effectively addresses this by providing advanced NVGRE, VXLAN and GENEVE hardware offloading engines that encapsulate and de-capsulate the overlay protocol headers, enabling the traditional offloads to be performed on the encapsulated traffic for these and other tunneling protocols (GENEVE, MPLS, QinQ, and so on). With ConnectX-4 Lx EN, data center operators can achieve native performance in the new network architecture.
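
For context, the overlay path these offload engines accelerate is the same one built with standard Linux tooling. A minimal sketch, with illustrative interface names and VNI, that creates a VXLAN endpoint and then lists the tunnel-related offload features the driver advertises:

# Hedged sketch: create a VXLAN overlay device with iproute2 and list the tunnel
# offload features the driver advertises. Interface names and the VNI are placeholders.
import subprocess
def sh(cmd: str) -> str:
    return subprocess.run(cmd.split(), check=True, capture_output=True, text=True).stdout
sh("ip link add vxlan42 type vxlan id 42 dev ens1f0 dstport 4789")
sh("ip link set vxlan42 up")
# Features such as tx-udp_tnl-segmentation indicate the NIC can segment
# encapsulated traffic in hardware rather than on the host CPU.
for line in sh("ethtool -k ens1f0").splitlines():
    if "udp_tnl" in line:
        print(line.strip())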

RDMA over Converged Ethernet (RoCE)
ConnectX-4 Lx EN supports RoCE specifications, delivering low latency and high performance over Ethernet networks. Leveraging data center bridging (DCB) capabilities as well as ConnectX-4 Lx EN advanced congestion control hardware mechanisms, RoCE provides efficient low-latency RDMA services over Layer 2 and Layer 3 networks.
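
Before running RoCE traffic it is common to confirm that the port is visible to the RDMA stack with its link layer reported as Ethernet. A brief sketch using the ibv_devinfo utility from rdma-core; the device name "mlx5_0" is a typical default and an assumption here:

# Hedged sketch: list RDMA device attributes with ibv_devinfo from rdma-core.
# The device name "mlx5_0" is an assumption; run ibv_devinfo with no arguments to enumerate.
import subprocess
out = subprocess.run(["ibv_devinfo", "-d", "mlx5_0"],
                     capture_output=True, text=True, check=True).stdout
print(out)  # for RoCE the port should report link_layer: Ethernet and state: PORT_ACTIVE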

 

Mellanox Multi-Host® Technology
Innovative Mellanox Multi-Host technology enables data centers to design and build scale-out heterogeneous compute and storage racks, with direct connectivity between compute elements and the network. Significantly improving cost savings, flexibility, and total cost of ownership, Mellanox Multi-Host technology provides better power and performance, while achieving maximum data processing and data transfer at minimum capital and operational expenses.

Mellanox Multi-Host works by allowing multiple hosts to connect into a single interconnect adapter, by separating the adapter PCIe interface into several independent PCIe interfaces. Each interface connects to a separate host CPU—with no performance degradation. Reducing data center CAPEX and OPEX, Mellanox Multi-Host slashes switch port management and power usage by reducing the number of cables, NICs and switch ports required by four independent servers, from four to one of each. Additional features & benefits of Mellanox Multi-Host technology:

• Enables IT managers to remotely control the configuration and power state of each host individually; host security and isolation are guaranteed, and the management of one host does not affect the traffic performance or the management of other hosts.

• Lowering total cost of ownership (TCO), Mellanox Multi-Host uses a single BMC, with independent NC-SI/MCTP management channels for each of the managed hosts.

• Mellanox Multi-Host also supports a heterogeneous data center architecture; the various hosts connected to the single adapter can be x86, Power, GPU, Arm or FPGA, thereby removing any limitations in passing data or communicating between compute elements.


Full specification and details can be found in the Product Datasheet PDF file

In Stock: 2-3 Weeks
£ POA
£ POA Inc Vat
Mellanox MCX4121A-ACAT CONNECTX-4 LX EN Network Interface Card 25GBE
SFP28 Tall Bracket ROHS R6


In Stock: 2-3 Weeks
£235.62
£282.74 Inc Vat
No Bracket ROHS R6

ConnectX-4 Lx EN Ethernet Adapter Cards for Open Compute Project (OCP)

1/10/25/40/50 Gigabit Ethernet adapter cards supporting RDMA, Overlay Network Encapsulation/Decapsulation and more.

ConnectX-4 Lx EN network interface cards with 1/10/25 Gb/s Ethernet connectivity address virtualized infrastructure challenges for today's demanding markets and applications. Providing true hardware-based I/O isolation with unmatched scalability and efficiency, ConnectX-4 Lx EN delivers a cost-effective and flexible Ethernet adapter solution for Web 2.0, cloud, data analytics, database, and storage platforms.





In Stock: 2-3 Weeks
£235.62
£282.74 Inc Vat
25GBE Dual-Port SFP28

ConnectX®-5 EN Card

Up to 10/25Gb/s Ethernet Adapter Cards

With UEFI Enabled (x86/Arm)

Intelligent RDMA-enabled, single and dual-port network adapter with advanced application offload capabilities for Web 2.0, Cloud, Storage, and Telco platforms.




Benefits:

• Up to 100Gb/s connectivity per port

• Industry-leading throughput, low latency, low CPU utilization and high message rate

• Innovative rack design for storage and Machine Learning based on Host Chaining technology

• Smart interconnect for x86, Power, Arm, and GPU-based compute & storage platforms

• Advanced storage capabilities including NVMe over Fabric offloads

• Intelligent network adapter supporting flexible pipeline programmability

• Cutting-edge performance in virtualized networks including Network Function Virtualization (NFV)

• Enabler for efficient service chaining capabilities

• Efficient I/O consolidation, lowering data center costs and complexity


ConnectX-5 Ethernet network interface cards provide high-performance and flexible solutions with up to two ports of 10/25GbE connectivity, 750ns latency, up to 200 million messages per second (Mpps), and a record-setting 197Mpps when running the open-source Data Plane Development Kit (DPDK) over PCIe Gen 4.0. For storage workloads, ConnectX-5 delivers a range of innovative accelerations, such as Signature Handover (T10-DIF) in hardware, an embedded PCIe switch, and NVMe over Fabric target offloads. ConnectX-5 adapter cards also bring advanced Open vSwitch offloads to telecommunications and cloud data centers to drive extremely high packet rates and throughput with reduced CPU resource consumption, thus boosting data center infrastructure efficiency.
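
Packet-rate figures of this kind are typically reproduced with DPDK's testpmd forwarding application. A hedged sketch of launching it from Python; the core list, memory channels and PCI address are placeholders, and the DPDK build must include the mlx5 poll-mode driver:

# Hedged sketch: launch DPDK's testpmd for a quick forwarding test. The EAL
# arguments and the PCI address are illustrative placeholders.
import subprocess
subprocess.run([
    "dpdk-testpmd",
    "-l", "0-3",            # CPU cores handed to the DPDK EAL
    "-n", "4",              # memory channels
    "-a", "0000:3b:00.0",   # allow-list the adapter's PCI address
    "--",                   # application-level options follow
    "--forward-mode=txonly",
    "--stats-period", "1",  # print port statistics every second
], check=True)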

ConnectX-5 adapter cards are available for PCIe Gen 3.0 and Gen 4.0 servers and provide support for 1, 10, 25, 40, 50 and 100 GbE speeds in stand-up PCIe cards, OCP 2.0, and OCP 3.0 form factors. ConnectX-5 cards also offer advanced Multi-Host and Socket Direct technologies.




Features:

• Tag matching and rendezvous offloads

• Adaptive routing on reliable transport

• Burst buffer offloads for background checkpointing

• NVMe over Fabric offloads

• Backend switch elimination by host chaining

• Embedded PCIe switch

• Enhanced vSwitch/vRouter offloads

• Flexible pipeline

• RoCE for overlay networks

• PCIe Gen 4.0 support

• RoHS compliant

• ODCC compatible

• Various form factors available




Cloud and Web 2.0 Environments
ConnectX-5 adapter cards enable data center administrators to benefit from better server utilization and reduced costs, power usage, and cable complexity, allowing for more virtual appliances, virtual machines (VMs) and tenants to co-exist on the same hardware.

Supported vSwitch/vRouter offload functions include:

• Overlay Networks (e.g., VXLAN, NVGRE, MPLS, GENEVE, and NSH) header encapsulation & decapsulation.

• Stateless offloads of inner packets and packet headers’ re-write, enabling NAT functionality and more.

• Flexible and programmable parser and match-action tables, which enable hardware offloads for future protocols.

• SR-IOV technology, providing dedicated adapter resources, guaranteed isolation and protection for virtual machines (VMs) within the server.

• Network Function Virtualization (NFV), enabling a VM to be used as a virtual appliance. The full datapath operation offloads, hairpin hardware capability and service chaining enable data to be handled by the virtual appliance with minimum CPU utilization.



Cloud and Web 2.0 customers developing platforms on Software Defined Network (SDN) environments are leveraging their servers’ Operating System Virtual-Switching capabilities to achieve maximum flexibility. Open vSwitch (OvS) is an example of a virtual switch that allows Virtual Machines to communicate with each other and with the outside world. Traditionally residing in the hypervisor, where switching is based on twelve-tuple matching on flows, the virtual switch or virtual router software-based solution is CPU-intensive. This can negatively affect system performance and prevent the full utilization of available bandwidth.

Mellanox ASAP2 - Accelerated Switching and Packet Processing® technology enables offloading the vSwitch/vRouter by handling the data plane in the NIC hardware, without modifying the control plane. This results in significantly higher vSwitch/vRouter performance without the associated CPU load.

Additionally, intelligent ConnectX-5’s flexible pipeline capabilities, including flexible parser and flexible match-action tables, are programmable, enabling hardware offloads for future protocols.
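
In practice, offloading the OvS data plane on an offload-capable NIC is usually enabled from the host in two steps: put the adapter's embedded switch into switchdev mode and turn on the OvS hw-offload option. A minimal sketch, assuming a placeholder PCI address and a distribution-dependent service name:

# Hedged sketch: enable kernel Open vSwitch hardware offload on an offload-capable NIC.
# The PCI address and the service name are placeholders and vary by distribution.
import subprocess
def sh(cmd: str) -> None:
    subprocess.run(cmd.split(), check=True)
# Expose VF representors by putting the embedded switch into switchdev mode.
sh("devlink dev eswitch set pci/0000:3b:00.0 mode switchdev")
# Ask OvS to push matched flows down to the NIC instead of the kernel datapath.
sh("ovs-vsctl set Open_vSwitch . other_config:hw-offload=true")
sh("systemctl restart openvswitch-switch")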




Storage Environments
NVMe storage devices are gaining popularity by offering very fast storage access. The evolving NVMe over Fabric (NVMe-oF) protocol leverages RDMA connectivity for remote access. ConnectX-5 offers further enhancements by providing NVMe-oF target offloads, enabling very efficient NVMe storage access with no CPU intervention, and thus improving performance and reducing latency.
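
On the initiator side, the NVMe-oF access path that these target offloads accelerate is established with nvme-cli over an RDMA transport. A short sketch; the target address, port and NQN are invented for illustration:

# Hedged sketch: connect to an NVMe over Fabrics target across an RDMA (RoCE)
# transport with nvme-cli. Address, port and NQN are invented for illustration.
import subprocess
subprocess.run([
    "nvme", "connect",
    "-t", "rdma",                                   # RDMA transport (RoCE on Ethernet)
    "-a", "192.168.10.20",                          # target IP address
    "-s", "4420",                                   # conventional NVMe-oF port
    "-n", "nqn.2014-08.org.example:nvme-target-1",  # target NQN
], check=True)
# The remote namespace then appears as a local /dev/nvmeXnY block device.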



The embedded PCIe switch enables customers to build standalone storage or Machine Learning appliances. As with earlier generations of ConnectX adapters, standard block and file access protocols leverage RoCE for high-performance storage access. A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks.

ConnectX-5 enables an innovative storage rack design, Host Chaining, which enables different servers to interconnect without involving the Top of the Rack (ToR) switch. Leveraging Host Chaining, ConnectX-5 lowers the data center’s total cost of ownership (TCO) by reducing CAPEX (cables, NICs, and switch port expenses). OPEX is also reduced by cutting down on switch port management and overall power usage.








Telecommunications
Telecommunications service providers are moving towards disaggregation, server virtualization, and orchestration as key tenets to modernize their networks. Likewise, they’re also moving towards Network Function Virtualization (NFV), which enables the rapid deployment of new network services. With this move, proprietary dedicated hardware and software, which tend to be static and difficult to scale, are being replaced with virtual machines running on commercial off-the-shelf (COTS) servers.

For telecom service providers, choosing the right networking hardware is critical to achieving a cloud-native NFV solution that is agile, reliable, fast and efficient. Telco service providers typically leverage virtualization and cloud technologies to better achieve agile service delivery and efficient scalability; these technologies require an advanced network infrastructure to support higher rates of packet processing. However, the resultant east-west traffic causes numerous interrupts as I/O traverses from kernel to user space, eats up CPU cycles and decreases packet performance. Particularly sensitive to delays are voice and video applications which often require less than 100ms of latency.

ConnectX-5 adapter cards drive extremely high packet rates, increased throughput and higher network efficiency through the following technologies: Open vSwitch Offloads (OvS), OvS over DPDK or ASAP², Network Overlay Virtualization, SR-IOV, and RDMA. This allows for secure data delivery through higher-performance offloads, reducing CPU resource utilization and boosting data center infrastructure efficiency. The result is a much more responsive and agile network capable of rapidly deploying network services.




Wide Selection of Adapter Cards
ConnectX-5 Ethernet adapter cards are available in several form factors including: low-profile stand-up PCIe, OCP 2.0 Type 1 and Type 2, and OCP 3.0 Small Form Factor.

Mellanox Multi-Host® technology allows multiple hosts to be connected into a single adapter by separating the PCIe interface into multiple and independent interfaces.

The portfolio also offers Mellanox Socket-Direct® configurations that enable servers without x16 PCIe slots to split the card’s 16-lane PCIe bus into two 8-lane buses on dedicated cards connected by a harness.




Host Management
Host management includes NC-SI over MCTP over SMBus and NC-SI over MCTP over PCIe (Baseboard Management Controller interface), as well as PLDM for Monitoring and Control (DSP0248) and PLDM for Firmware Update (DSP0267).


Full specification and details can be found in the Product Datasheet PDF file

In Stock: 2-3 Weeks
£321.74
£386.09 Inc Vat
Mellanox MCX542B-ACAN CONNECTX-5 EN Network Interface Card
With Host management 25GBE Dual-Port SFP28 PCIE3.0 X8 UEFI Enabled No Bracket HALOGEN FREE


In Stock: 2-3 Weeks
£321.74
£386.09 Inc Vat
Mellanox MCX562A-ACAB CONNECTX-5 EN Network Interface Card
With Host management 25GBE Dual-Port SFP28 PCIE3.0 X16 Thumbscrew (Pulltab) Bracket


In Stock: 2-3 Weeks
£ POA
£ POA Inc Vat
Mellanox MCX512F-ACAT ConnectX-5 EN Network Interface Card 25GbE
With Tall Bracket

ConnectX®-5 EN Card

Up to 25Gb/s Ethernet Adapter Cards

Intelligent RDMA-enabled, single and dual-port network adapter with advanced application offload capabilities for Web 2.0, Cloud, Storage, and Telco platforms.




Benefits: • Up to 100Gb/s connectivity per port

• Industry-leading throughput, low latency, low CPU utilization and high message rate

• Innovative rack design for storage and Machine Learning based on Host Chaining technology

• Smart interconnect for x86, Power, Arm, and GPU-based compute & storage platforms

• Advanced storage capabilities including NVMe over Fabric offloads

• Intelligent network adapter supporting flexible pipeline programmability

• Cutting-edge performance in virtualized networks including Network Function Virtualization (NFV)

• Enabler for efficient service chaining capabilities

• Efficient I/O consolidation, lowering data center costs and complexity


ConnectX-5 Ethernet network interface cards provide high performance and flexible solutions with up to two ports of 25GbE connectivity, 750ns latency, up to 200 million messages per second (Mpps), and a record setting 197Mpps when running an open source Data Path Development Kit (DPDK) PCIe (Gen 4.0). For storage workloads, ConnectX-5 delivers a range of innovative accelerations, such as Signature Handover (T10-DIF) in hardware, an embedded PCIe Switch, and NVMe over Fabric target offloads. ConnectX-5 adapter cards also bring advanced Open vSwitch offloads to telecommunications and cloud data centers to drive extremely high packet rates and throughput with reduced CPU resource consumption, thus boosting data center infrastructure efficiency.

ConnectX-5 adapter cards are available for PCIe Gen 3.0 and Gen 4.0 servers and provide support for 1, 10, 25, 40, 50 and 100 GbE speeds in stand-up PCIe cards, OCP 2.0, and OCP 3.0 form factors. ConnectX-5 cards also offer advanced Multi-Host and Socket Direct technologies.




Features: • Tag matching and rendezvous offloads

• Adaptive routing on reliable transport

• Burst buffer offloads for background checkpointing

• NVMe over Fabric offloads

• Backend switch elimination by host chaining

• Embedded PCIe switch

• Enhanced vSwitch/vRouter offloads

• Flexible pipeline

• RoCE for overlay networks

• PCIe Gen 4.0 support

• RoHS compliant

• ODCC compatible

• Various form factors available




Cloud and Web 2.0 Environments ConnectX-5 adapter cards enable data center administrators to benefit from better server utilization and reduced costs, power usage, and cable complexity, allowing for more virtual appliances, virtual machines (VMs) and tenants to co-exist on the same hardware.

Supported vSwitch/vRouter offload functions include:

• Overlay Networks (e.g., VXLAN, NVGRE, MPLS, GENEVE, and NSH) header encapsulation & decapsulation.

• Stateless offloads of inner packets and packet headers’ re-write, enabling NAT functionality and more.

• Flexible and programmable parser and match-action tables, which enable hardware offloads for future protocols.

• SR-IOV technology, providing dedicated adapter resources, guaranteed isolation and protection for virtual machines (VMs) within the server.

• Network Function Virtualization (NFV), enabling a VM to be used as a virtual appliance. The full datapath operation offloads, hairpin hardware capability and service chaining enables data to be handled by the virtual appliance, with minimum CPU utilization.



Cloud and Web 2.0 customers developing platforms on Software Defined Network (SDN) environments are leveraging their servers’ Operating System Virtual-Switching capabilities to achieve maximum flexibility. Open vSwitch (OvS) is an example of a virtual switch that allows Virtual Machines to communicate with each other and with the outside world. Traditionally residing in the hypervisor where switching is based on twelve-tuple matching onflows, the virtual switch, or virtual router software-based solution, is CPU-intensive. This can negatively affect system performance and prevent the full utilization of available bandwidth.

Mellanox ASAP2 - Accelerated Switching and Packet Processing® technology enables offloading the vSwitch/vRouter by handling the data plane in the NIC hardware, without modifying the control plane. This results in significantly higher vSwitch/vRouter performance without the associated CPU load.

Additionally, intelligent ConnectX-5’s flexible pipeline capabilities, including flexible parser and flexible match-action tables, are programmable, enabling hardware offloads for future protocols.




Storage Environments NVMe storage devices are gaining popularity by offering very fast storage access. The evolving NVMe over Fabric (NVMe-oF) protocol leverages the RDMA connectivity for remote access. ConnectX-5 offers further enhancements by providing NVMe-oF target offloads, enabling very efficient NVMe storage access with no CPU intervention, and thus improving performance and reducing latency.



The embedded PCIe switch enables customers to build standalone storage or Machine Learning appliances. As with earlier generations of ConnectX adapters, standard block and file access protocols leverage RoCE for high-performance storage access. A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks.

ConnectX-5 enables an innovative storage rack design, Host Chaining, which enables different servers to interconnect without involving the Top of the Rack (ToR) switch. Leveraging Host Chaining, ConnectX-5 lowers the data center’s total cost of ownership (TCO) by reducing CAPEX (cables, NICs, and switch port expenses). OPEX is also reduced by cutting down on switch port management and overall power usage.








Telecommunications
Telecommunications service providers are moving towards disaggregation, server virtualization, and orchestration as key tenets to modernize their networks. Likewise, they’re also moving towards Network Function Virtualization (NFV), which enables the rapid deployment of new network services. With this move, proprietary dedicated hardware and software, which tend to be static and difficult to scale, are being replaced with virtual machines running on commercial off-the-shelf (COTS) servers.

For telecom service providers, choosing the right networking hardware is critical to achieving a cloud-native NFV solution that is agile, reliable, fast and efficient. Telco service providers typically leverage virtualization and cloud technologies to better achieve agile service delivery and efficient scalability; these technologies require an advanced network infrastructure to support higher rates of packet processing. However, the resultant east-west traffic causes numerous interrupts as I/O traverses from kernel to user space, eats up CPU cycles and decreases packet performance. Particularly sensitive to delays are voice and video applications which often require less than 100ms of latency.

ConnectX-5 adapter cards deliver extremely high packet rates, increased throughput and higher network efficiency through the following technologies: Open vSwitch (OvS) offloads, OvS over DPDK or ASAP², Network Overlay Virtualization, SR-IOV, and RDMA. This allows for secure data delivery through higher-performance offloads, reducing CPU resource utilization, and boosting data center infrastructure efficiency. The result is a much more responsive and agile network capable of rapidly deploying network services.




Wide Selection of Adapter Cards
ConnectX-5 Ethernet adapter cards are available in several form factors including: low-profile stand-up PCIe, OCP 2.0 Type 1 and Type 2, and OCP 3.0 Small Form Factor.

Mellanox Multi-Host® technology allows multiple hosts to be connected into a single adapter by separating the PCIe interface into multiple and independent interfaces.

The portfolio also offers Mellanox Socket-Direct® configurations that enable servers without x16 PCIe slots to split the card’s 16-lane PCIe bus into two 8-lane buses on dedicated cards connected by a harness.




Host Management
Host management includes NC-SI over MCTP over SMBus and MCTP over PCIe as Baseboard Management Controller (BMC) interfaces, as well as PLDM for Monitoring and Control (DSP0248) and PLDM for Firmware Update (DSP0267).


Full specification and details can be found in the Product Datasheet PDF file

In Stock: 2-3 Weeks
£ POA
£ POA Inc Vat
With Host Management 25GBE Dual-Port SFP28 PCIE3.0 X16 No Bracket Halogen Free

ConnectX®-5 EN Card

Up to 25Gb/s Ethernet Adapter Cards

Intelligent RDMA-enabled, single and dual-port network adapter with advanced application offload capabilities for Web 2.0, Cloud, Storage, and Telco platforms.




Benefits: • Up to 100Gb/s connectivity per port

• Industry-leading throughput, low latency, low CPU utilization and high message rate

• Innovative rack design for storage and Machine Learning based on Host Chaining technology

• Smart interconnect for x86, Power, Arm, and GPU-based compute & storage platforms

• Advanced storage capabilities including NVMe over Fabric offloads

• Intelligent network adapter supporting flexible pipeline programmability

• Cutting-edge performance in virtualized networks including Network Function Virtualization (NFV)

• Enabler for efficient service chaining capabilities

• Efficient I/O consolidation, lowering data center costs and complexity


ConnectX-5 Ethernet network interface cards provide high performance and flexible solutions with up to two ports of 25GbE connectivity, 750ns latency, up to 200 million messages per second, and a record-setting 197 Mpps (million packets per second) when running the open-source Data Plane Development Kit (DPDK) over PCIe Gen 4.0. For storage workloads, ConnectX-5 delivers a range of innovative accelerations, such as Signature Handover (T10-DIF) in hardware, an embedded PCIe Switch, and NVMe over Fabric target offloads. ConnectX-5 adapter cards also bring advanced Open vSwitch offloads to telecommunications and cloud data centers to drive extremely high packet rates and throughput with reduced CPU resource consumption, thus boosting data center infrastructure efficiency.
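To put the packet-rate figures above in perspective, the short calculation below (a back-of-the-envelope estimate, not a datasheet number) works out the theoretical maximum frame rate of an Ethernet link at the minimum 64-byte frame size:

    # Each minimum-size frame occupies 64 bytes + 8 bytes preamble/SFD + 12 bytes
    # inter-frame gap = 84 bytes on the wire.
    WIRE_BYTES = 64 + 8 + 12

    def max_mpps(link_gbps):
        # Theoretical maximum packets per second, in millions, at 64-byte frames.
        return link_gbps * 1e9 / (WIRE_BYTES * 8) / 1e6

    for speed in (10, 25, 40, 50, 100):
        print(f"{speed:>3} GbE: {max_mpps(speed):6.1f} Mpps")

    # 25 GbE tops out near 37 Mpps and 100 GbE near 149 Mpps, so a 197 Mpps DPDK
    # result implies traffic spread across both ports of a dual-port card
    # (an inference, not a datasheet statement).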

ConnectX-5 adapter cards are available for PCIe Gen 3.0 and Gen 4.0 servers and provide support for 1, 10, 25, 40, 50 and 100 GbE speeds in stand-up PCIe cards, OCP 2.0, and OCP 3.0 form factors. ConnectX-5 cards also offer advanced Multi-Host and Socket Direct technologies.




Features: • Tag matching and rendezvous offloads

• Adaptive routing on reliable transport

• Burst buffer offloads for background checkpointing

• NVMe over Fabric offloads

• Backend switch elimination by host chaining

• Embedded PCIe switch

• Enhanced vSwitch/vRouter offloads

• Flexible pipeline

• RoCE for overlay networks

• PCIe Gen 4.0 support

• RoHS compliant

• ODCC compatible

• Various form factors available




Cloud and Web 2.0 Environments
ConnectX-5 adapter cards enable data center administrators to benefit from better server utilization and reduced costs, power usage, and cable complexity, allowing for more virtual appliances, virtual machines (VMs) and tenants to co-exist on the same hardware.

Supported vSwitch/vRouter offload functions include:

• Overlay Networks (e.g., VXLAN, NVGRE, MPLS, GENEVE, and NSH) header encapsulation & decapsulation.

• Stateless offloads of inner packets and packet headers’ re-write, enabling NAT functionality and more.

• Flexible and programmable parser and match-action tables, which enable hardware offloads for future protocols.

• SR-IOV technology, providing dedicated adapter resources, guaranteed isolation and protection for virtual machines (VMs) within the server.

• Network Function Virtualization (NFV), enabling a VM to be used as a virtual appliance. The full datapath operation offloads, hairpin hardware capability and service chaining enables data to be handled by the virtual appliance, with minimum CPU utilization.



Cloud and Web 2.0 customers developing platforms on Software Defined Network (SDN) environments are leveraging their servers’ Operating System Virtual-Switching capabilities to achieve maximum flexibility. Open vSwitch (OvS) is an example of a virtual switch that allows Virtual Machines to communicate with each other and with the outside world. Traditionally residing in the hypervisor, where switching is based on twelve-tuple matching on flows, the software-based virtual switch or virtual router solution is CPU-intensive. This can negatively affect system performance and prevent the full utilization of available bandwidth.

Mellanox ASAP2 - Accelerated Switching and Packet Processing® technology enables offloading the vSwitch/vRouter by handling the data plane in the NIC hardware, without modifying the control plane. This results in significantly higher vSwitch/vRouter performance without the associated CPU load.

Additionally, intelligent ConnectX-5’s flexible pipeline capabilities, including flexible parser and flexible match-action tables, are programmable, enabling hardware offloads for future protocols.




Storage Environments
NVMe storage devices are gaining popularity by offering very fast storage access. The evolving NVMe over Fabric (NVMe-oF) protocol leverages the RDMA connectivity for remote access. ConnectX-5 offers further enhancements by providing NVMe-oF target offloads, enabling very efficient NVMe storage access with no CPU intervention, and thus improving performance and reducing latency.



The embedded PCIe switch enables customers to build standalone storage or Machine Learning appliances. As with earlier generations of ConnectX adapters, standard block and file access protocols leverage RoCE for high-performance storage access. A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks.

ConnectX-5 enables an innovative storage rack design, Host Chaining, which enables different servers to interconnect without involving the Top of the Rack (ToR) switch. Leveraging Host Chaining, ConnectX-5 lowers the data center’s total cost of ownership (TCO) by reducing CAPEX (cables, NICs, and switch port expenses). OPEX is also reduced by cutting down on switch port management and overall power usage.








Telecommunications
Telecommunications service providers are moving towards disaggregation, server virtualization, and orchestration as key tenets to modernize their networks. Likewise, they’re also moving towards Network Function Virtualization (NFV), which enables the rapid deployment of new network services. With this move, proprietary dedicated hardware and software, which tend to be static and difficult to scale, are being replaced with virtual machines running on commercial off-the-shelf (COTS) servers.

For telecom service providers, choosing the right networking hardware is critical to achieving a cloud-native NFV solution that is agile, reliable, fast and efficient. Telco service providers typically leverage virtualization and cloud technologies to better achieve agile service delivery and efficient scalability; these technologies require an advanced network infrastructure to support higher rates of packet processing. However, the resultant east-west traffic causes numerous interrupts as I/O traverses from kernel to user space, eats up CPU cycles and decreases packet performance. Particularly sensitive to delays are voice and video applications which often require less than 100ms of latency.

ConnectX-5 adapter cards deliver extremely high packet rates, increased throughput and higher network efficiency through the following technologies: Open vSwitch (OvS) offloads, OvS over DPDK or ASAP², Network Overlay Virtualization, SR-IOV, and RDMA. This allows for secure data delivery through higher-performance offloads, reducing CPU resource utilization, and boosting data center infrastructure efficiency. The result is a much more responsive and agile network capable of rapidly deploying network services.




Wide Selection of Adapter Cards
ConnectX-5 Ethernet adapter cards are available in several form factors including: low-profile stand-up PCIe, OCP 2.0 Type 1 and Type 2, and OCP 3.0 Small Form Factor.

Mellanox Multi-Host® technology allows multiple hosts to be connected into a single adapter by separating the PCIe interface into multiple and independent interfaces.

The portfolio also offers Mellanox Socket-Direct® configurations that enable servers without x16 PCIe slots to split the card’s 16-lane PCIe bus into two 8-lane buses on dedicated cards connected by a harness.




Host Management
Host management includes NC-SI over MCTP over SMBus and MCTP over PCIe as Baseboard Management Controller (BMC) interfaces, as well as PLDM for Monitoring and Control (DSP0248) and PLDM for Firmware Update (DSP0267).


Full specification and details can be found in the Product Datasheet PDF file

In Stock: 2-3 Weeks
£361.34
£433.61 Inc Vat
Tall Bracket ROHS R6 14.2CM X 6.9CM (Low Profile)

ConnectX®-5 EN Card

Up to 10/25Gb/s Ethernet Adapter Cards

Intelligent RDMA-enabled, single and dual-port network adapter with advanced application offload capabilities for Web 2.0, Cloud, Storage, and Telco platforms.




Benefits: • Up to 100Gb/s connectivity per port

• Industry-leading throughput, low latency, low CPU utilization and high message rate

• Innovative rack design for storage and Machine Learning based on Host Chaining technology

• Smart interconnect for x86, Power, Arm, and GPU-based compute & storage platforms

• Advanced storage capabilities including NVMe over Fabric offloads

• Intelligent network adapter supporting flexible pipeline programmability

• Cutting-edge performance in virtualized networks including Network Function Virtualization (NFV)

• Enabler for efficient service chaining capabilities

• Efficient I/O consolidation, lowering data center costs and complexity


ConnectX-5 Ethernet network interface cards provide high performance and flexible solutions with up to two ports of 10/25GbE connectivity, 750ns latency, up to 200 million messages per second, and a record-setting 197 Mpps (million packets per second) when running the open-source Data Plane Development Kit (DPDK) over PCIe Gen 4.0. For storage workloads, ConnectX-5 delivers a range of innovative accelerations, such as Signature Handover (T10-DIF) in hardware, an embedded PCIe Switch, and NVMe over Fabric target offloads. ConnectX-5 adapter cards also bring advanced Open vSwitch offloads to telecommunications and cloud data centers to drive extremely high packet rates and throughput with reduced CPU resource consumption, thus boosting data center infrastructure efficiency.

ConnectX-5 adapter cards are available for PCIe Gen 3.0 and Gen 4.0 servers and provide support for 1, 10, 25, 40, 50 and 100 GbE speeds in stand-up PCIe cards, OCP 2.0, and OCP 3.0 form factors. ConnectX-5 cards also offer advanced Multi-Host and Socket Direct technologies.




Features: • Tag matching and rendezvous offloads

• Adaptive routing on reliable transport

• Burst buffer offloads for background checkpointing

• NVMe over Fabric offloads

• Backend switch elimination by host chaining

• Embedded PCIe switch

• Enhanced vSwitch/vRouter offloads

• Flexible pipeline

• RoCE for overlay networks

• PCIe Gen 4.0 support

• RoHS compliant

• ODCC compatible

• Various form factors available




Cloud and Web 2.0 Environments
ConnectX-5 adapter cards enable data center administrators to benefit from better server utilization and reduced costs, power usage, and cable complexity, allowing for more virtual appliances, virtual machines (VMs) and tenants to co-exist on the same hardware.

Supported vSwitch/vRouter offload functions include:

• Overlay Networks (e.g., VXLAN, NVGRE, MPLS, GENEVE, and NSH) header encapsulation & decapsulation.

• Stateless offloads of inner packets and packet headers’ re-write, enabling NAT functionality and more.

• Flexible and programmable parser and match-action tables, which enable hardware offloads for future protocols.

• SR-IOV technology, providing dedicated adapter resources, guaranteed isolation and protection for virtual machines (VMs) within the server.

• Network Function Virtualization (NFV), enabling a VM to be used as a virtual appliance. The full datapath operation offloads, hairpin hardware capability and service chaining enables data to be handled by the virtual appliance, with minimum CPU utilization.



Cloud and Web 2.0 customers developing platforms on Software Defined Network (SDN) environments are leveraging their servers’ Operating System Virtual-Switching capabilities to achieve maximum flexibility. Open vSwitch (OvS) is an example of a virtual switch that allows Virtual Machines to communicate with each other and with the outside world. Traditionally residing in the hypervisor, where switching is based on twelve-tuple matching on flows, the software-based virtual switch or virtual router solution is CPU-intensive. This can negatively affect system performance and prevent the full utilization of available bandwidth.

Mellanox ASAP2 - Accelerated Switching and Packet Processing® technology enables offloading the vSwitch/vRouter by handling the data plane in the NIC hardware, without modifying the control plane. This results in significantly higher vSwitch/vRouter performance without the associated CPU load.

Additionally, intelligent ConnectX-5’s flexible pipeline capabilities, including flexible parser and flexible match-action tables, are programmable, enabling hardware offloads for future protocols.




Storage Environments
NVMe storage devices are gaining popularity by offering very fast storage access. The evolving NVMe over Fabric (NVMe-oF) protocol leverages the RDMA connectivity for remote access. ConnectX-5 offers further enhancements by providing NVMe-oF target offloads, enabling very efficient NVMe storage access with no CPU intervention, and thus improving performance and reducing latency.



The embedded PCIe switch enables customers to build standalone storage or Machine Learning appliances. As with earlier generations of ConnectX adapters, standard block and file access protocols leverage RoCE for high-performance storage access. A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks.

ConnectX-5 enables an innovative storage rack design, Host Chaining, which enables different servers to interconnect without involving the Top of the Rack (ToR) switch. Leveraging Host Chaining, ConnectX-5 lowers the data center’s total cost of ownership (TCO) by reducing CAPEX (cables, NICs, and switch port expenses). OPEX is also reduced by cutting down on switch port management and overall power usage.






Telecommunications
Telecommunications service providers are moving towards disaggregation, server virtualization, and orchestration as key tenets to modernize their networks. Likewise, they’re also moving towards Network Function Virtualization (NFV), which enables the rapid deployment of new network services. With this move, proprietary dedicated hardware and software, which tend to be static and difficult to scale, are being replaced with virtual machines running on commercial off-the-shelf (COTS) servers.

For telecom service providers, choosing the right networking hardware is critical to achieving a cloud-native NFV solution that is agile, reliable, fast and efficient. Telco service providers typically leverage virtualization and cloud technologies to better achieve agile service delivery and efficient scalability; these technologies require an advanced network infrastructure to support higher rates of packet processing. However, the resultant east-west traffic causes numerous interrupts as I/O traverses from kernel to user space, eats up CPU cycles and decreases packet performance. Particularly sensitive to delays are voice and video applications which often require less than 100ms of latency.

ConnectX-5 adapter cards deliver extremely high packet rates, increased throughput and higher network efficiency through the following technologies: Open vSwitch (OvS) offloads, OvS over DPDK or ASAP², Network Overlay Virtualization, SR-IOV, and RDMA. This allows for secure data delivery through higher-performance offloads, reducing CPU resource utilization, and boosting data center infrastructure efficiency. The result is a much more responsive and agile network capable of rapidly deploying network services.




Wide Selection of Adapter Cards
ConnectX-5 Ethernet adapter cards are available in several form factors including: low-profile stand-up PCIe, OCP 2.0 Type 1 and Type 2, and OCP 3.0 Small Form Factor.

Mellanox Multi-Host® technology allows multiple hosts to be connected into a single adapter by separating the PCIe interface into multiple and independent interfaces.

The portfolio also offers Mellanox Socket-Direct® configurations that enable servers without x16 PCIe slots to split the card’s 16-lane PCIe bus into two 8-lane buses on dedicated cards connected by a harness.




Host Management
Host management includes NC-SI over MCTP over SMBus and MCTP over PCIe as Baseboard Management Controller (BMC) interfaces, as well as PLDM for Monitoring and Control (DSP0248) and PLDM for Firmware Update (DSP0267).


Full specification and details can be found in the Product Datasheet PDF file

In Stock: 2-3 Weeks
£321.74
£386.09 Inc Vat

ConnectX®-5 EN Card

Up to 25Gb/s Ethernet Adapter Cards

With ConnectX-5 Ex

Intelligent RDMA-enabled, single and dual-port network adapter with advanced application offload capabilities for Web 2.0, Cloud, Storage, and Telco platforms.




Benefits: • Up to 25Gb/s connectivity per port

• Industry-leading throughput, low latency, low CPU utilization and high message rate

• Innovative rack design for storage and Machine Learning based on Host Chaining technology

• Smart interconnect for x86, Power, Arm, and GPU-based compute & storage platforms

• Advanced storage capabilities including NVMe over Fabric offloads

• Intelligent network adapter supporting flexible pipeline programmability

• Cutting-edge performance in virtualized networks including Network Function Virtualization (NFV)

• Enabler for efficient service chaining capabilities

• Efficient I/O consolidation, lowering data center costs and complexity


ConnectX-5 Ethernet network interface cards provide high performance and flexible solutions with up to two ports of 25GbE connectivity, 750ns latency, up to 200 million messages per second, and a record-setting 197 Mpps (million packets per second) when running the open-source Data Plane Development Kit (DPDK) over PCIe Gen 4.0. For storage workloads, ConnectX-5 delivers a range of innovative accelerations, such as Signature Handover (T10-DIF) in hardware, an embedded PCIe Switch, and NVMe over Fabric target offloads. ConnectX-5 adapter cards also bring advanced Open vSwitch offloads to telecommunications and cloud data centers to drive extremely high packet rates and throughput with reduced CPU resource consumption, thus boosting data center infrastructure efficiency.

ConnectX-5 series adapter cards are available for PCIe Gen 3.0 and Gen 4.0 servers and provide support for 1, 10, 25, 40, 50 and 100GbE speeds in stand-up PCIe cards, OCP 2.0, and OCP 3.0 form factors. ConnectX-5 cards also offer advanced Multi-Host and Socket Direct technologies.




Features: • Tag matching and rendezvous offloads

• Adaptive routing on reliable transport

• Burst buffer offloads for background checkpointing

• NVMe over Fabric offloads

• Backend switch elimination by host chaining

• Embedded PCIe switch

• Enhanced vSwitch/vRouter offloads

• Flexible pipeline

• RoCE for overlay networks

• PCIe Gen 4.0 support

• RoHS compliant

• ODCC compatible

• Various form factors available




Cloud and Web 2.0 Environments
ConnectX-5 adapter cards enable data center administrators to benefit from better server utilization and reduced costs, power usage, and cable complexity, allowing for more virtual appliances, virtual machines (VMs) and tenants to co-exist on the same hardware.

Supported vSwitch/vRouter offload functions include:

• Overlay Networks (e.g., VXLAN, NVGRE, MPLS, GENEVE, and NSH) header encapsulation & decapsulation.

• Stateless offloads of inner packets and packet headers’ re-write, enabling NAT functionality and more.

• Flexible and programmable parser and match-action tables, which enable hardware offloads for future protocols.

• SR-IOV technology, providing dedicated adapter resources, guaranteed isolation and protection for virtual machines (VMs) within the server.

• Network Function Virtualization (NFV), enabling a VM to be used as a virtual appliance. The full datapath operation offloads, hairpin hardware capability and service chaining enables data to be handled by the virtual appliance, with minimum CPU utilization.



Cloud and Web 2.0 customers developing platforms on Software Defined Network (SDN) environments are leveraging their servers’ Operating System Virtual-Switching capabilities to achieve maximum flexibility. Open vSwitch (OvS) is an example of a virtual switch that allows Virtual Machines to communicate with each other and with the outside world. Traditionally residing in the hypervisor, where switching is based on twelve-tuple matching on flows, the software-based virtual switch or virtual router solution is CPU-intensive. This can negatively affect system performance and prevent the full utilization of available bandwidth.

Mellanox ASAP2 - Accelerated Switching and Packet Processing® technology enables offloading the vSwitch/vRouter by handling the data plane in the NIC hardware, without modifying the control plane. This results in significantly higher vSwitch/vRouter performance without the associated CPU load.

Additionally, intelligent ConnectX-5’s flexible pipeline capabilities, including flexible parser and flexible match-action tables, are programmable, enabling hardware offloads for future protocols.




Storage Environments
NVMe storage devices are gaining popularity by offering very fast storage access. The evolving NVMe over Fabric (NVMe-oF) protocol leverages the RDMA connectivity for remote access. ConnectX-5 offers further enhancements by providing NVMe-oF target offloads, enabling very efficient NVMe storage access with no CPU intervention, and thus improving performance and reducing latency.



The embedded PCIe switch enables customers to build standalone storage or Machine Learning appliances. As with earlier generations of ConnectX adapters, standard block and file access protocols leverage RoCE for high-performance storage access. A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks.

ConnectX-5 enables an innovative storage rack design, Host Chaining, which enables different servers to interconnect without involving the Top of the Rack (ToR) switch. Leveraging Host Chaining, ConnectX-5 lowers the data center’s total cost of ownership (TCO) by reducing CAPEX (cables, NICs, and switch port expenses). OPEX is also reduced by cutting down on switch port management and overall power usage.








Telecommunications
Telecommunications service providers are moving towards disaggregation, server virtualization, and orchestration as key tenets to modernize their networks. Likewise, they’re also moving towards Network Function Virtualization (NFV), which enables the rapid deployment of new network services. With this move, proprietary dedicated hardware and software, which tend to be static and difficult to scale, are being replaced with virtual machines running on commercial off-the-shelf (COTS) servers.

For telecom service providers, choosing the right networking hardware is critical to achieving a cloud-native NFV solution that is agile, reliable, fast and efficient. Telco service providers typically leverage virtualization and cloud technologies to better achieve agile service delivery and efficient scalability; these technologies require an advanced network infrastructure to support higher rates of packet processing. However, the resultant east-west traffic causes numerous interrupts as I/O traverses from kernel to user space, eats up CPU cycles and decreases packet performance. Particularly sensitive to delays are voice and video applications which often require less than 100ms of latency.

ConnectX-5 adapter cards deliver extremely high packet rates, increased throughput and higher network efficiency through the following technologies: Open vSwitch (OvS) offloads, OvS over DPDK or ASAP², Network Overlay Virtualization, SR-IOV, and RDMA. This allows for secure data delivery through higher-performance offloads, reducing CPU resource utilization, and boosting data center infrastructure efficiency. The result is a much more responsive and agile network capable of rapidly deploying network services.




Wide Selection of Adapter Cards
ConnectX-5 Ethernet adapter cards are available in several form factors including: low-profile stand-up PCIe, OCP 2.0 Type 1 and Type 2, and OCP 3.0 Small Form Factor.

Mellanox Multi-Host® technology allows multiple hosts to be connected into a single adapter by separating the PCIe interface into multiple and independent interfaces.

The portfolio also offers Mellanox Socket-Direct® configurations that enable servers without x16 PCIe slots to split the card’s 16-lane PCIe bus into two 8-lane buses on dedicated cards connected by a harness.




Host Management
Host management includes NC-SI over MCTP over SMBus and MCTP over PCIe as Baseboard Management Controller (BMC) interfaces, as well as PLDM for Monitoring and Control (DSP0248) and PLDM for Firmware Update (DSP0267).


Full specification and details can be found in the Product Datasheet PDF file

In Stock: 2-3 Weeks
£ POA
£ POA Inc Vat

ConnectX-4 Lx EN Ethernet Adapter Cards

Up to 50Gb/s Ethernet Adapter Cards

1/10/25/40/50 Gigabit Ethernet adapter cards supporting RDMA, Overlay Network Encapsulation/Decapsulation and more

ConnectX-4 Lx EN network interface card with 50Gb/s Ethernet connectivity addresses virtualized infrastructure challenges, delivering best-in-class and highest performance to various demanding markets and applications. Providing true hardware-based I/O isolation with unmatched scalability and efficiency, achieving the most cost-effective and flexible solution for Web 2.0, Cloud, data analytics, database, and storage platforms.




Benefits: • High performance boards for applications requiring high bandwidth, low latency and high message rate

• Industry leading throughput and latency for Web 2.0, Cloud and Big Data applications

• Smart interconnect for x86, Power, ARM, and GPU-based compute and storage platforms

• Cutting-edge performance in virtualized overlay networks

• Efficient I/O consolidation, lowering data center costs and complexity

• Virtualization acceleration

• Power efficiency


With the exponential increase in usage of data and the creation of new applications, the demand for the highest throughput, lowest latency, virtualization and sophisticated data acceleration engines continues to rise. ConnectX-4 Lx EN enables data centers to leverage the world's leading interconnect adapter for increasing their operational efficiency, improving server utilization, maximizing applications productivity, while reducing total cost of ownership (TCO).

ConnectX-4 Lx EN adapter cards provide a combination of 1, 10, 25, 40, and 50 GbE bandwidth, sub-microsecond latency and a 75 million packets per second message rate. They include native hardware support for RDMA over Converged Ethernet (RoCE), Ethernet stateless offload engines, Overlay Networks, GPUDirect technology and Multi-Host technology.




Features: • 1/10/25/40/50 Gb/s speeds

• Single and dual-port options

• Virtualization

• Low latency RDMA over Converged Ethernet (RoCE)

• Multi-Host technology connects up to 4 independent hosts

• CPU offloading of transport operations

• Application offloading

• PeerDirect communication acceleration

• Hardware offloads for NVGRE, VXLAN and GENEVE encapsulated traffic

• End-to-end QoS and congestion control

• Hardware-based I/O virtualization

• RoHS compliant

• ODCC compatible

• Various form factors available


Storage Acceleration
Storage applications will see improved performance with the higher bandwidth ConnectX-4 Lx EN delivers. Moreover, standard block and file access protocols can leverage RoCE for high-performance storage access. A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks.

Host Management
Mellanox host management and control capabilities include NC-SI over MCTP over SMBus, and MCTP over PCIe - Baseboard Management Controller (BMC) interface, as well as PLDM for Monitoring and Control DSP0248 and PLDM for Firmware Update DSP0267.

  I/O Virtualization
ConnectX-4 Lx EN SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server. I/O virtualization with ConnectX-4 Lx EN gives data center administrators better server utilization while reducing cost, power, and cable complexity, allowing more virtual machines and more tenants on the same hardware.
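As a rough illustration of how SR-IOV is typically exercised from a Linux host (this is the generic kernel interface, not Mellanox-specific tooling), virtual functions can be created through sysfs; the interface name and VF count below are placeholders:

    # Create SR-IOV virtual functions through the standard Linux sysfs interface.
    # Must be run as root; "ens1f0" and NUM_VFS are placeholder values.
    IFACE = "ens1f0"
    NUM_VFS = 4

    path = f"/sys/class/net/{IFACE}/device/sriov_numvfs"

    # Reset to zero first; the kernel rejects changing a non-zero VF count directly.
    with open(path, "w") as f:
        f.write("0")
    with open(path, "w") as f:
        f.write(str(NUM_VFS))

    print(f"Requested {NUM_VFS} VFs on {IFACE}; they can now be passed through to VMs.")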

Mellanox PeerDirect®
Mellanox PeerDirect communication provides high efficiency RDMA access by eliminating unnecessary internal data copies between components on the PCIe bus (for example, from GPU to CPU), and therefore significantly reduces application run time. ConnectX-4 Lx EN advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.

  Software Support
All Mellanox adapter cards are supported by Windows, Linux distributions, VMware, FreeBSD, and Citrix XenServer. ConnectX-4 Lx EN supports various management interfaces and has a rich set of tools for configuration and management across operating systems.
Additionally, ConnectX-4 Lx EN provides the option for a secure firmware update check using digital signatures to prevent remote attackers from uploading malicious firmware images; this ensures that only officially authentic images produced by Mellanox can be installed, regardless of whether the source of the installation is the host, the network, or a BMC.

Wide Selection of Ethernet Adapter Cards
ConnectX-4 Lx EN adapter cards offer a cost-effective Ethernet adapter solution for 1, 10, 25, 40 and 50 Gb/s Ethernet speeds, enabling seamless networking, clustering, or storage. The adapter reduces application runtime, and offers the flexibility and scalability to make infrastructure run as efficiently and productively as possible.
ConnectX-4 Lx Ethernet adapter cards are available in several form factors including: low-profile stand-up PCIe, OCP 2.0 Type 1, OCP 2.0 Type 2, and OCP 3.0 small form factor

  Overlay Networks
In order to better scale their networks, data center operators often create overlay networks that carry traffic from individual virtual machines over logical tunnels in encapsulated formats such as NVGRE and VXLAN. While this solves network scalability issues, it hides the TCP packet from the hardware offloading engines, placing higher loads on the host CPU. ConnectX-4 Lx EN effectively addresses this by providing advanced NVGRE, VXLAN and GENEVE hardware offloading engines that encapsulate and de-capsulate the overlay protocol headers, enabling the traditional offloads to be performed on the encapsulated traffic for these and other tunneling protocols (GENEVE, MPLS, QinQ, and so on). With ConnectX-4 Lx EN, data center operators can achieve native performance in the new network architecture.
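The encapsulation described above adds a fixed per-packet cost, which is why offloading it matters; the small calculation below (illustrative only) shows the bytes VXLAN-over-IPv4 adds and the MTU left for the virtual machine's own traffic on a standard 1500-byte underlay:

    # Headers added by VXLAN encapsulation over IPv4, in bytes.
    OUTER_IP = 20     # outer IPv4 header
    OUTER_UDP = 8     # outer UDP header
    VXLAN_HDR = 8     # VXLAN header
    INNER_ETH = 14    # encapsulated (inner) Ethernet header

    overhead = OUTER_IP + OUTER_UDP + VXLAN_HDR + INNER_ETH
    underlay_mtu = 1500

    print(f"Encapsulation overhead inside the outer frame: {overhead} bytes")        # 50
    print(f"MTU available to the VM's own IP packets: {underlay_mtu - overhead} bytes")  # 1450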

RDMA over Converged Ethernet (RoCE)
ConnectX-4 Lx EN supports RoCE specifications, delivering low latency and high performance over Ethernet networks. Leveraging data center bridging (DCB) capabilities as well as ConnectX-4 Lx EN advanced congestion control hardware mechanisms, RoCE provides efficient low-latency RDMA services over Layer 2 and Layer 3 networks.

 

Mellanox Multi-Host® Technology
Innovative Mellanox Multi-Host technology enables data centers to design and build scale-out heterogeneous compute and storage racks, with direct connectivity between compute elements and the network. Significantly improving cost savings, flexibility, and total cost of ownership, Mellanox Multi-Host technology provides better power and performance, while achieving maximum data processing and data transfer at minimum capital and operational expenses.

Mellanox Multi-Host works by allowing multiple hosts to connect into a single interconnect adapter, by separating the adapter PCIe interface into several independent PCIe interfaces. Each interface connects to a separate host CPU—with no performance degradation. Reducing data center CAPEX and OPEX, Mellanox Multi-Host slashes switch port management and power usage by reducing the number of cables, NICs and switch ports required by four independent servers, from four to one of each. Additional features & benefits of Mellanox Multi-Host technology:

• Enables IT managers to remotely control the configuration and power state of each host individually; guaranteeing host security and isolation, the management of one host does not affect host traffic performance nor the management of other hosts.

• Lowering total cost of ownership (TCO), Mellanox Multi-Host uses a single BMC, with independent NC-SI/MCTP management channels for each of the managed hosts.

• Mellanox Multi-Host also supports a heterogeneous data center architecture; the various hosts connected to the single adapter can be x86, Power, GPU, Arm or FPGA, thereby removing any limitations in passing data or communicating between compute elements.


Full specification and details can be found in the Product Datasheet PDF file

In Stock: 2-3 Weeks
£ POA
£ POA Inc Vat
40GBE Dual-Port QSFP28 PCIE3.0 X16 No Bracket

ConnectX®-5 EN network interface card

for OCP, 40GbE dual-port QSFP28

Dual QSFP28 Ethernet (copper and optical)

In Stock: 2-3 Weeks
£ POA
£ POA Inc Vat
UEFI Enabled (ARM X86) Tall Bracket

ConnectX®-5 EN Card

Up to 100Gb/s Ethernet Adapter Cards

UEFI Enabled (x86/Arm)

Intelligent RDMA-enabled, single and dual-port network adapter with advanced application offload capabilities for Web 2.0, Cloud, Storage, and Telco platforms.




Benefits: • Up to 100Gb/s connectivity per port

• Industry-leading throughput, low latency, low CPU utilization and high message rate

• Innovative rack design for storage and Machine Learning based on Host Chaining technology

• Smart interconnect for x86, Power, Arm, and GPU-based compute & storage platforms

• Advanced storage capabilities including NVMe over Fabric offloads

• Intelligent network adapter supporting flexible pipeline programmability

• Cutting-edge performance in virtualized networks including Network Function Virtualization (NFV)

• Enabler for efficient service chaining capabilities

• Efficient I/O consolidation, lowering data center costs and complexity


ConnectX-5 Ethernet network interface cards provide high performance and flexible solutions with up to two ports of 100GbE connectivity, 750ns latency, up to 200 million messages per second, and a record-setting 197 Mpps (million packets per second) when running the open-source Data Plane Development Kit (DPDK) over PCIe Gen 4.0. For storage workloads, ConnectX-5 delivers a range of innovative accelerations, such as Signature Handover (T10-DIF) in hardware, an embedded PCIe Switch, and NVMe over Fabric target offloads. ConnectX-5 adapter cards also bring advanced Open vSwitch offloads to telecommunications and cloud data centers to drive extremely high packet rates and throughput with reduced CPU resource consumption, thus boosting data center infrastructure efficiency.

ConnectX-5 adapter cards are available for PCIe Gen 3.0 and Gen 4.0 servers and provide support for 1, 10, 25, 40, 50 and 100 GbE speeds in stand-up PCIe cards, OCP 2.0, and OCP 3.0 form factors. ConnectX-5 cards also offer advanced Multi-Host and Socket Direct technologies.




Features: • Tag matching and rendezvous offloads

• Adaptive routing on reliable transport

• Burst buffer offloads for background checkpointing

• NVMe over Fabric offloads

• Backend switch elimination by host chaining

• Embedded PCIe switch

• Enhanced vSwitch/vRouter offloads

• Flexible pipeline

• RoCE for overlay networks

• PCIe Gen 4.0 support

• RoHS compliant

• ODCC compatible

• Various form factors available




Cloud and Web 2.0 Environments
ConnectX-5 adapter cards enable data center administrators to benefit from better server utilization and reduced costs, power usage, and cable complexity, allowing for more virtual appliances, virtual machines (VMs) and tenants to co-exist on the same hardware.

Supported vSwitch/vRouter offload functions include:

• Overlay Networks (e.g., VXLAN, NVGRE, MPLS, GENEVE, and NSH) header encapsulation & decapsulation.

• Stateless offloads of inner packets and packet headers’ re-write, enabling NAT functionality and more.

• Flexible and programmable parser and match-action tables, which enable hardware offloads for future protocols.

• SR-IOV technology, providing dedicated adapter resources, guaranteed isolation and protection for virtual machines (VMs) within the server.

• Network Function Virtualization (NFV), enabling a VM to be used as a virtual appliance. The full datapath operation offloads, hairpin hardware capability and service chaining enables data to be handled by the virtual appliance, with minimum CPU utilization.



Cloud and Web 2.0 customers developing platforms on Software Defined Network (SDN) environments are leveraging their servers’ Operating System Virtual-Switching capabilities to achieve maximum flexibility. Open vSwitch (OvS) is an example of a virtual switch that allows Virtual Machines to communicate with each other and with the outside world. Traditionally residing in the hypervisor, where switching is based on twelve-tuple matching on flows, the software-based virtual switch or virtual router solution is CPU-intensive. This can negatively affect system performance and prevent the full utilization of available bandwidth.

Mellanox ASAP2 - Accelerated Switching and Packet Processing® technology enables offloading the vSwitch/vRouter by handling the data plane in the NIC hardware, without modifying the control plane. This results in significantly higher vSwitch/vRouter performance without the associated CPU load.

Additionally, intelligent ConnectX-5’s flexible pipeline capabilities, including flexible parser and flexible match-action tables, are programmable, enabling hardware offloads for future protocols.




Storage Environments
NVMe storage devices are gaining popularity by offering very fast storage access. The evolving NVMe over Fabric (NVMe-oF) protocol leverages the RDMA connectivity for remote access. ConnectX-5 offers further enhancements by providing NVMe-oF target offloads, enabling very efficient NVMe storage access with no CPU intervention, and thus improving performance and reducing latency.



The embedded PCIe switch enables customers to build standalone storage or Machine Learning appliances. As with earlier generations of ConnectX adapters, standard block and file access protocols leverage RoCE for high-performance storage access. A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks.

ConnectX-5 enables an innovative storage rack design, Host Chaining, which enables different servers to interconnect without involving the Top of the Rack (ToR) switch. Leveraging Host Chaining, ConnectX-5 lowers the data center’s total cost of ownership (TCO) by reducing CAPEX (cables, NICs, and switch port expenses). OPEX is also reduced by cutting down on switch port management and overall power usage.








Telecommunications
Telecommunications service providers are moving towards disaggregation, server virtualization, and orchestration as key tenets to modernize their networks. Likewise, they’re also moving towards Network Function Virtualization (NFV), which enables the rapid deployment of new network services. With this move, proprietary dedicated hardware and software, which tend to be static and difficult to scale, are being replaced with virtual machines running on commercial off-the-shelf (COTS) servers.

For telecom service providers, choosing the right networking hardware is critical to achieving a cloud-native NFV solution that is agile, reliable, fast and efficient. Telco service providers typically leverage virtualization and cloud technologies to better achieve agile service delivery and efficient scalability; these technologies require an advanced network infrastructure to support higher rates of packet processing. However, the resultant east-west traffic causes numerous interrupts as I/O traverses from kernel to user space, eats up CPU cycles and decreases packet performance. Particularly sensitive to delays are voice and video applications which often require less than 100ms of latency.

ConnectX-5 adapter cards deliver extremely high packet rates, increased throughput and higher network efficiency through the following technologies: Open vSwitch (OvS) offloads, OvS over DPDK or ASAP², Network Overlay Virtualization, SR-IOV, and RDMA. This allows for secure data delivery through higher-performance offloads, reducing CPU resource utilization, and boosting data center infrastructure efficiency. The result is a much more responsive and agile network capable of rapidly deploying network services.




Wide Selection of Adapter Cards
ConnectX-5 Ethernet adapter cards are available in several form factors including: low-profile stand-up PCIe, OCP 2.0 Type 1 and Type 2, and OCP 3.0 Small Form Factor.

Mellanox Multi-Host® technology allows multiple hosts to be connected into a single adapter by separating the PCIe interface into multiple and independent interfaces.

The portfolio also offers Mellanox Socket-Direct® configurations that enable servers without x16 PCIe slots to split the card’s 16-lane PCIe bus into two 8-lane buses on dedicated cards connected by a harness.




Host Management
Host management includes NC-SI over MCTP over SMBus and MCTP over PCIe as Baseboard Management Controller (BMC) interfaces, as well as PLDM for Monitoring and Control (DSP0248) and PLDM for Firmware Update (DSP0267).


Full specification and details can be found in the Product Datasheet PDF file

In Stock: 2-3 Weeks
£697.94
£837.53 Inc Vat
Tall Bracket ROHS R6

ConnectX®-5 EN Card

Up to 40Gb/s Ethernet Adapter Cards

With ConnectX-5 E

Intelligent RDMA-enabled, single and dual-port network adapter with advanced application offload capabilities for Web 2.0, Cloud, Storage, and Telco platforms.




Benefits: • Up to 40Gb/s connectivity per port

• Industry-leading throughput, low latency, low CPU utilization and high message rate

• Innovative rack design for storage and Machine Learning based on Host Chaining technology

• Smart interconnect for x86, Power, Arm, and GPU-based compute & storage platforms

• Advanced storage capabilities including NVMe over Fabric offloads

• Intelligent network adapter supporting flexible pipeline programmability

• Cutting-edge performance in virtualized networks including Network Function Virtualization (NFV)

• Enabler for efficient service chaining capabilities

• Efficient I/O consolidation, lowering data center costs and complexity


ConnectX-5 Ethernet network interface cards provide high performance and flexible solutions with up to two ports of 40GbE connectivity, 750ns latency, up to 200 million messages per second, and a record-setting 197 Mpps (million packets per second) when running the open-source Data Plane Development Kit (DPDK) over PCIe Gen 4.0. For storage workloads, ConnectX-5 delivers a range of innovative accelerations, such as Signature Handover (T10-DIF) in hardware, an embedded PCIe Switch, and NVMe over Fabric target offloads. ConnectX-5 adapter cards also bring advanced Open vSwitch offloads to telecommunications and cloud data centers to drive extremely high packet rates and throughput with reduced CPU resource consumption, thus boosting data center infrastructure efficiency.

ConnectX-5 series adapter cards are available for PCIe Gen 3.0 and Gen 4.0 servers and provide support for 1, 10, 25, 40, 50 and 100GbE speeds in stand-up PCIe cards, OCP 2.0, and OCP 3.0 form factors. ConnectX-5 cards also offer advanced Multi-Host and Socket Direct technologies.




Features: • Tag matching and rendezvous offloads

• Adaptive routing on reliable transport

• Burst buffer offloads for background checkpointing

• NVMe over Fabric offloads

• Backend switch elimination by host chaining

• Embedded PCIe switch

• Enhanced vSwitch/vRouter offloads

• Flexible pipeline

• RoCE for overlay networks

• PCIe Gen 4.0 support

• RoHS compliant

• ODCC compatible

• Various form factors available




Cloud and Web 2.0 Environments
ConnectX-5 adapter cards enable data center administrators to benefit from better server utilization and reduced costs, power usage, and cable complexity, allowing for more virtual appliances, virtual machines (VMs) and tenants to co-exist on the same hardware.

Supported vSwitch/vRouter offload functions include:

• Overlay Networks (e.g., VXLAN, NVGRE, MPLS, GENEVE, and NSH) header encapsulation & decapsulation.

• Stateless offloads of inner packets and packet headers’ re-write, enabling NAT functionality and more.

• Flexible and programmable parser and match-action tables, which enable hardware offloads for future protocols.

• SR-IOV technology, providing dedicated adapter resources, guaranteed isolation and protection for virtual machines (VMs) within the server.

• Network Function Virtualization (NFV), enabling a VM to be used as a virtual appliance. The full datapath operation offloads, hairpin hardware capability and service chaining enables data to be handled by the virtual appliance, with minimum CPU utilization.



Cloud and Web 2.0 customers developing platforms on Software Defined Network (SDN) environments are leveraging their servers’ Operating System Virtual-Switching capabilities to achieve maximum flexibility. Open vSwitch (OvS) is an example of a virtual switch that allows Virtual Machines to communicate with each other and with the outside world. Traditionally residing in the hypervisor, where switching is based on twelve-tuple matching on flows, the software-based virtual switch or virtual router solution is CPU-intensive. This can negatively affect system performance and prevent the full utilization of available bandwidth.

Mellanox ASAP2 - Accelerated Switching and Packet Processing® technology enables offloading the vSwitch/vRouter by handling the data plane in the NIC hardware, without modifying the control plane. This results in significantly higher vSwitch/vRouter performance without the associated CPU load.

Additionally, intelligent ConnectX-5’s flexible pipeline capabilities, including flexible parser and flexible match-action tables, are programmable, enabling hardware offloads for future protocols.




Storage Environments
NVMe storage devices are gaining popularity by offering very fast storage access. The evolving NVMe over Fabric (NVMe-oF) protocol leverages the RDMA connectivity for remote access. ConnectX-5 offers further enhancements by providing NVMe-oF target offloads, enabling very efficient NVMe storage access with no CPU intervention, and thus improving performance and reducing latency.



The embedded PCIe switch enables customers to build standalone storage or Machine Learning appliances. As with earlier generations of ConnectX adapters, standard block and file access protocols leverage RoCE for high-performance storage access. A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks.

ConnectX-5 enables an innovative storage rack design, Host Chaining, which enables different servers to interconnect without involving the Top of the Rack (ToR) switch. Leveraging Host Chaining, ConnectX-5 lowers the data center’s total cost of ownership (TCO) by reducing CAPEX (cables, NICs, and switch port expenses). OPEX is also reduced by cutting down on switch port management and overall power usage.








Telecommunications
Telecommunications service providers are moving towards disaggregation, server virtualization, and orchestration as key tenets to modernize their networks. Likewise, they’re also moving towards Network Function Virtualization (NFV), which enables the rapid deployment of new network services. With this move, proprietary dedicated hardware and software, which tend to be static and difficult to scale, are being replaced with virtual machines running on commercial off-the-shelf (COTS) servers.

For telecom service providers, choosing the right networking hardware is critical to achieving a cloud-native NFV solution that is agile, reliable, fast and efficient. Telco service providers typically leverage virtualization and cloud technologies to better achieve agile service delivery and efficient scalability; these technologies require an advanced network infrastructure to support higher rates of packet processing. However, the resultant east-west traffic causes numerous interrupts as I/O traverses from kernel to user space, eats up CPU cycles and decreases packet performance. Particularly sensitive to delays are voice and video applications which often require less than 100ms of latency.

ConnectX-5 adapter cards deliver extremely high packet rates, increased throughput and higher network efficiency through the following technologies: Open vSwitch (OvS) offloads, OvS over DPDK or ASAP², Network Overlay Virtualization, SR-IOV, and RDMA. This allows for secure data delivery through higher-performance offloads, reducing CPU resource utilization, and boosting data center infrastructure efficiency. The result is a much more responsive and agile network capable of rapidly deploying network services.




Wide Selection of Adapter Cards
ConnectX-5 Ethernet adapter cards are available in several form factors including: low-profile stand-up PCIe, OCP 2.0 Type 1 and Type 2, and OCP 3.0 Small Form Factor.

Mellanox Multi-Host® technology allows multiple hosts to be connected into a single adapter by separating the PCIe interface into multiple and independent interfaces.

The portfolio also offers Mellanox Socket-Direct® configurations that enable servers without x16 PCIe slots to split the card’s 16-lane PCIe bus into two 8-lane buses on dedicated cards connected by a harness.




Host Management
Host management includes NC-SI over MCTP over SMBus and MCTP over PCIe as Baseboard Management Controller (BMC) interfaces, as well as PLDM for Monitoring and Control (DSP0248) and PLDM for Firmware Update (DSP0267).


Full specification and details can be found in the Product Datasheet PDF file

In Stock: 2-3 Weeks
£ POA
£ POA Inc Vat
Tall Bracket ROHS R6

ConnectX®-5 EN Card

Up to 100Gb/s Ethernet Adapter Cards

Intelligent RDMA-enabled, single and dual-port network adapter with advanced application offload capabilities for Web 2.0, Cloud, Storage, and Telco platforms.




Benefits: • Up to 100Gb/s connectivity per port

• Industry-leading throughput, low latency, low CPU utilization and high message rate

• Innovative rack design for storage and Machine Learning based on Host Chaining technology

• Smart interconnect for x86, Power, Arm, and GPU-based compute & storage platforms

• Advanced storage capabilities including NVMe over Fabric offloads

• Intelligent network adapter supporting flexible pipeline programmability

• Cutting-edge performance in virtualized networks including Network Function Virtualization (NFV)

• Enabler for efficient service chaining capabilities

• Efficient I/O consolidation, lowering data center costs and complexity


ConnectX-5 Ethernet network interface cards provide high performance and flexible solutions with up to two ports of 100GbE connectivity, 750ns latency, up to 200 million messages per second, and a record-setting 197 Mpps (million packets per second) when running the open-source Data Plane Development Kit (DPDK) over PCIe Gen 4.0. For storage workloads, ConnectX-5 delivers a range of innovative accelerations, such as Signature Handover (T10-DIF) in hardware, an embedded PCIe Switch, and NVMe over Fabric target offloads. ConnectX-5 adapter cards also bring advanced Open vSwitch offloads to telecommunications and cloud data centers to drive extremely high packet rates and throughput with reduced CPU resource consumption, thus boosting data center infrastructure efficiency.

ConnectX-5 adapter cards are available for PCIe Gen 3.0 and Gen 4.0 servers and provide support for 1, 10, 25, 40, 50 and 100 GbE speeds in stand-up PCIe cards, OCP 2.0, and OCP 3.0 form factors. ConnectX-5 cards also offer advanced Multi-Host and Socket Direct technologies.




Features: • Tag matching and rendezvous offloads

• Adaptive routing on reliable transport

• Burst buffer offloads for background checkpointing

• NVMe over Fabric offloads

• Backend switch elimination by host chaining

• Embedded PCIe switch

• Enhanced vSwitch/vRouter offloads

• Flexible pipeline

• RoCE for overlay networks

• PCIe Gen 4.0 support

• RoHS compliant

• ODCC compatible

• Various form factors available




Cloud and Web 2.0 Environments
ConnectX-5 adapter cards enable data center administrators to benefit from better server utilization and reduced costs, power usage, and cable complexity, allowing for more virtual appliances, virtual machines (VMs) and tenants to co-exist on the same hardware.

Supported vSwitch/vRouter offload functions include:

• Overlay Networks (e.g., VXLAN, NVGRE, MPLS, GENEVE, and NSH) header encapsulation & decapsulation.

• Stateless offloads of inner packets and packet headers’ re-write, enabling NAT functionality and more.

• Flexible and programmable parser and match-action tables, which enable hardware offloads for future protocols.

• SR-IOV technology, providing dedicated adapter resources, guaranteed isolation and protection for virtual machines (VMs) within the server (see the sketch following this list).

• Network Function Virtualization (NFV), enabling a VM to be used as a virtual appliance. The full data-path operation offloads, hairpin hardware capability and service chaining enable data to be handled by the virtual appliance with minimal CPU utilization.
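
To make the SR-IOV item above concrete, the following minimal Python sketch shows how virtual functions are commonly created on a Linux host through the standard sysfs interface. The interface name and VF count are placeholders, and it assumes SR-IOV has already been enabled in the adapter firmware and the server BIOS.

# Minimal sketch: create SR-IOV virtual functions via the standard Linux sysfs ABI.
# Assumptions: "enp3s0f0" is a placeholder physical-function netdev name and the
# script is run as root on a host where SR-IOV is already enabled in firmware/BIOS.
from pathlib import Path

PF_NETDEV = "enp3s0f0"   # hypothetical ConnectX-5 physical function
NUM_VFS = 4              # example number of virtual functions to create

sriov = Path(f"/sys/class/net/{PF_NETDEV}/device/sriov_numvfs")

# The VF count must be reset to 0 before it can be changed to a new non-zero value.
sriov.write_text("0")
sriov.write_text(str(NUM_VFS))

total = Path(f"/sys/class/net/{PF_NETDEV}/device/sriov_totalvfs").read_text().strip()
print(f"Requested {NUM_VFS} VFs (device supports up to {total}).")
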



Cloud and Web 2.0 customers developing platforms on Software Defined Network (SDN) environments are leveraging their servers’ Operating System Virtual-Switching capabilities to achieve maximum flexibility. Open vSwitch (OvS) is an example of a virtual switch that allows Virtual Machines to communicate with each other and with the outside world. Traditionally residing in the hypervisor, where switching is based on twelve-tuple matching on flows, the virtual switch or virtual router software-based solution is CPU-intensive. This can negatively affect system performance and prevent the full utilization of available bandwidth.

Mellanox ASAP2 - Accelerated Switching and Packet Processing® technology enables offloading the vSwitch/vRouter by handling the data plane in the NIC hardware, without modifying the control plane. This results in significantly higher vSwitch/vRouter performance without the associated CPU load.
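
A minimal sketch of the host-side steps commonly used with upstream Linux tooling to activate this kind of offload is shown below; the PCIe address is a placeholder, and exact prerequisites, package names and restart procedures vary by distribution and driver version.

# Minimal sketch: put the NIC e-switch into switchdev mode and enable OvS
# hardware offload. Assumptions: iproute2 (devlink) and Open vSwitch are
# installed, the script runs as root, and the PCIe address is a placeholder.
import subprocess

PCI_ADDR = "pci/0000:03:00.0"   # hypothetical ConnectX-5 physical function

# 1. Switch the embedded switch (e-switch) to switchdev mode so that VF
#    representor netdevs are exposed to the host.
subprocess.run(["devlink", "dev", "eswitch", "set", PCI_ADDR,
                "mode", "switchdev"], check=True)

# 2. Tell Open vSwitch to push matching flows down to the NIC hardware.
subprocess.run(["ovs-vsctl", "set", "Open_vSwitch", ".",
                "other_config:hw-offload=true"], check=True)

# 3. Read the setting back to confirm it was stored (OvS must be restarted
#    for it to take effect).
subprocess.run(["ovs-vsctl", "get", "Open_vSwitch", ".", "other_config"],
               check=True)
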

Additionally, ConnectX-5’s intelligent, flexible pipeline capabilities, including the flexible parser and flexible match-action tables, are programmable, enabling hardware offloads for future protocols.




Storage Environments NVMe storage devices are gaining popularity by offering very fast storage access. The evolving NVMe over Fabric (NVMe-oF) protocol leverages the RDMA connectivity for remote access. ConnectX-5 offers further enhancements by providing NVMe-oF target offloads, enabling very efficient NVMe storage access with no CPU intervention, and thus improving performance and reducing latency.
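
For illustration, the sketch below shows how a Linux host typically attaches to an NVMe-oF subsystem over RDMA/RoCE using the standard nvme-cli tool; the NQN, address and port are placeholders, and the target side (where the offloads described above apply) is assumed to be configured separately.

# Minimal sketch: connect a host to an NVMe-oF target over RDMA with nvme-cli.
# Assumptions: nvme-cli and the nvme-rdma kernel module are available; the NQN,
# IP address and port below are placeholders for an already-configured target.
import subprocess

TARGET_NQN = "nqn.2016-06.io.example:subsys1"   # hypothetical subsystem NQN
TARGET_ADDR = "192.0.2.10"                      # hypothetical target IP (RoCE fabric)
TARGET_PORT = "4420"                            # conventional NVMe-oF service port

subprocess.run([
    "nvme", "connect",
    "-t", "rdma",        # RDMA transport (RoCE on an Ethernet fabric)
    "-n", TARGET_NQN,
    "-a", TARGET_ADDR,
    "-s", TARGET_PORT,
], check=True)

# The remote namespaces now appear as local /dev/nvmeXnY block devices.
subprocess.run(["nvme", "list"], check=True)
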



The embedded PCIe switch enables customers to build standalone storage or Machine Learning appliances. As with earlier generations of ConnectX adapters, standard block and file access protocols leverage RoCE for high-performance storage access. A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks.

ConnectX-5 enables an innovative storage rack design, Host Chaining, which enables different servers to interconnect without involving the Top of the Rack (ToR) switch. Leveraging Host Chaining, ConnectX-5 lowers the data center’s total cost of ownership (TCO) by reducing CAPEX (cables, NICs, and switch port expenses). OPEX is also reduced by cutting down on switch port management and overall power usage.
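
Host Chaining is a firmware-level feature. As a hedged sketch only, the example below assumes it is exposed through a HOST_CHAINING_MODE parameter in Mellanox's mlxconfig tool; the parameter name and device path are assumptions that may differ by firmware release, so consult the adapter documentation before use.

# Minimal sketch: query and change firmware configuration with mlxconfig.
# Assumptions: the Mellanox Firmware Tools (MFT) package is installed, the MST
# device path is a placeholder, and HOST_CHAINING_MODE is assumed to be the
# relevant firmware parameter on this adapter/firmware combination.
import subprocess

DEVICE = "/dev/mst/mt4119_pciconf0"   # hypothetical MST device path for a ConnectX-5

# Query the current firmware configuration.
subprocess.run(["mlxconfig", "-d", DEVICE, "query"], check=True)

# Enable host chaining; a firmware reset or reboot is required to take effect.
subprocess.run(["mlxconfig", "-d", DEVICE, "set", "HOST_CHAINING_MODE=1"], check=True)
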








Telecommunications Telecommunications service providers are moving towards disaggregation, server virtualization, and orchestration as key tenets to modernize their networks. Likewise, they’re also moving towards Network Function Virtualization (NFV), which enables the rapid deployment of new network services. With this move, proprietary dedicated hardware and software, which tend to be static and difficult to scale, are being replaced with virtual machines running on commercial off-the-shelf (COTS) servers.

For telecom service providers, choosing the right networking hardware is critical to achieving a cloud-native NFV solution that is agile, reliable, fast and efficient. Telco service providers typically leverage virtualization and cloud technologies to achieve agile service delivery and efficient scalability; these technologies require an advanced network infrastructure to support higher rates of packet processing. However, the resulting east-west traffic causes numerous interrupts as I/O traverses between kernel and user space, consuming CPU cycles and decreasing packet performance. Voice and video applications, which often require less than 100ms of latency, are particularly sensitive to these delays.

ConnectX-5 adapter cards deliver extremely high packet rates, increased throughput and higher network efficiency through the following technologies: Open vSwitch (OvS) offloads, OvS over DPDK or ASAP², network overlay virtualization, SR-IOV, and RDMA. This allows for secure data delivery through higher-performance offloads, reducing CPU resource utilization and boosting data center infrastructure efficiency. The result is a much more responsive and agile network capable of rapidly deploying network services.




Wide Selection of Adapter Cards ConnectX-5 Ethernet adapter cards are available in several form factors including: low-profile stand-up PCIe, OCP 2.0 Type 1 and Type 2, and OCP 3.0 Small Form Factor.

Mellanox Multi-Host® technology allows multiple hosts to be connected to a single adapter by separating the PCIe interface into multiple independent interfaces.

The portfolio also offers Mellanox Socket-Direct® configurations that enable servers without x16 PCIe slots to split the card’s 16-lane PCIe bus into two 8-lane buses on dedicated cards connected by a harness.




Host Management Host Management includes NC-SI over MCTP over SMBus and MCTP over PCIe (Baseboard Management Controller interface), as well as PLDM for Monitor and Control (DSP0248) and PLDM for Firmware Update (DSP0267).


Full specification and details can be found in the Product Datasheet PDF file

In Stock: 2-3 Weeks
£697.94
£837.53 Inc Vat
Add To Cart
Other Ranges Available
Mellanox Fan Modules
View Range
Mellanox Adapter Cards
View Range
Mellanox Brackets and Mounting Kits
View Range
Mellanox Power Supply Units
View Range