Mellanox MCX631432AN-ADAB CONNECTX-6 LX EN Adapter Card 25GBE OCP3.0
With Host management Dual-Port SFP28 PCIE 4.0 X8 No Crypto Thumbscrew (Pulltab) Bracket

ConnectX-6 Lx Ethernet SmartNIC 25GbE Performance at the Speed of Light

25G/50G Ethernet SmartNIC (PCIe HHHL/OCP3)

ConnectX-6 Lx SmartNICs deliver scalability, high-performance, advanced security capabilities and accelerated networking with the best total cost of ownership for 25GbE deployments in cloud, telco, and enterprise data centers.

Providing up to two ports of 25GbE or a single-port of 50GbE connectivity, and PCIe Gen 3.0/4.0 x8 host connectivity, ConnectX-6 Lx is a member of NVIDIA's world-class, award-winning, ConnectX family of network adapters and provides agility and efficiency at every scale. ConnectX-6 Lx delivers cutting edge 25GbE performance and security for uncompromising data centers.




Features & Applications • Line speed message rate of 75Mpps

• Advanced RoCE

• ASAP2 - Accelerated Switching and Packet Processing

• IPsec in-line crypto acceleration

• Overlay tunneling accelerations

• Stateful rule checking for connection tracking

• Hardware Root-of-Trust and secure firmware update

• Best-in-class PTP performance

• ODCC compatible


ConnectX®-6 Lx Ethernet smart network interface cards (SmartNIC) deliver scalability, high performance, advanced security capabilities, and accelerated networking with the best total cost of ownership for 25GbE deployments in cloud and enterprise data centers. The SmartNICs support up to two ports of 25GbE, or a single-port of 50GbE connectivity, along with PCI Express Gen3 and Gen4 x8 host connectivity to deliver cutting-edge 25GbE performance and security for uncompromising data centers.




SmartNIC Portfolio • 10/25/50 Gb/s Ethernet

• Various form factors:

- PCIe low-profile

- OCP 3.0 Small Form Factor (SFF)

• Connectivity options:

- SFP28, QSFP28

• PCIe Gen 3.0/4.0 x8

• Crypto and non-crypto versions




SDN Acceleration NVIDIA ASAP2 - Accelerated Switch and Packet Processing™ technology offloads the software-defined networking (SDN) data plane to the SmartNIC, accelerating performance and offloading the CPU in virtualized or containerized cloud data centers. Customers can accelerate their data centers with an SR-IOV or VirtIO interface while continuing to enjoy their SDN solution of choice. The ConnectX-6 Lx ASAP2 rich feature set accelerates public and on-premises enterprise clouds and boosts communication service providers' (CSP) transition to network function virtualization (NFV). ASAP2 supports these communication service providers by enabling packet encapsulations, such as MPLS and GTP, alongside cloud encapsulations, such as VXLAN, GENEVE, and others.
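
As an illustrative sketch (not taken from the datasheet), the ASAP2/OVS offload path is typically driven from standard Linux tooling: Open vSwitch is told to push datapath flows down via TC, and individual tc flower rules can be marked hardware-only. The interface name and addresses below are placeholders, and the exact service name and offload coverage depend on the driver and OS.

```python
# Hypothetical sketch: enable OVS kernel hardware offload and install a tc
# flower rule that the NIC driver may program into hardware. Names/addresses
# are placeholders; requires an offload-capable NIC and a recent kernel.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

PF = "enp3s0f0"  # placeholder physical-function netdev

# Ask Open vSwitch to offload datapath flows to hardware through TC.
run(["ovs-vsctl", "set", "Open_vSwitch", ".", "other_config:hw-offload=true"])
# ovs-vswitchd must be restarted for the setting to take effect; the systemd
# unit name varies by distribution (e.g. openvswitch, openvswitch-switch).

# A standalone tc flower rule with skip_sw must be placed in hardware only;
# the driver rejects it if it cannot offload the match/action combination.
run(["tc", "qdisc", "add", "dev", PF, "ingress"])
run(["tc", "filter", "add", "dev", PF, "ingress", "protocol", "ip",
     "flower", "skip_sw", "dst_ip", "192.0.2.10", "ip_proto", "tcp",
     "action", "drop"])
```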




Industry-leading RoCE Following in the ConnectX tradition of providing industry-leading RDMA over Converged Ethernet (RoCE) capabilities, ConnectX-6 Lx enables more scalable, resilient, and easy-to-deploy RoCE solutions. With Zero Touch RoCE (ZTR), the ConnectX-6 Lx allows RoCE payloads to run seamlessly on existing networks without special configuration, either to priority flow control (PFC) or explicit congestion notification (ECN), for simplified RoCE deployments. ConnectX-6 Lx ensures RoCE resilience and efficiency at scale.
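
As a hedged example of exercising RoCE (generic tooling, not vendor-specific), the open-source perftest utilities are commonly used for a quick bandwidth check between two hosts; the RDMA device name and peer address below are placeholders.

```python
# Hypothetical RoCE smoke test using ib_write_bw from the "perftest" package.
# Run server() on one host and client() on the other. -R uses RDMA CM for
# connection setup, which is the usual choice for RoCE.
import subprocess
import sys

DEV = "mlx5_0"        # placeholder RDMA device name (see `ibv_devinfo`)
PEER = "192.0.2.20"   # placeholder IP address of the server side

def server():
    subprocess.run(["ib_write_bw", "-d", DEV, "-R", "--report_gbits"], check=True)

def client():
    subprocess.run(["ib_write_bw", "-d", DEV, "-R", "--report_gbits", PEER], check=True)

if __name__ == "__main__":
    client() if (len(sys.argv) > 1 and sys.argv[1] == "client") else server()
```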




Secure Your Infrastructure In an era where data privacy is key, ConnectX-6 Lx adapters offer advanced, built-in capabilities that bring security down to the endpoints with unprecedented performance and scalability. ConnectX-6 Lx offers IPsec inline encryption and decryption acceleration. ASAP2 connection-tracking hardware offload accelerates Layer 4 firewall performance.

ConnectX-6 Lx also delivers supply chain protection with hardware root-of-trust (RoT) for secure boot and firmware updates using RSA cryptography and cloning protection, via a device-unique key, to guarantee firmware authenticity.
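
As an illustrative, non-authoritative sketch of what "IPsec inline crypto acceleration" is used for: on Linux, an ESP security association can be installed with a crypto-offload hint so the kernel hands the AES-GCM work to the NIC where supported. Addresses, SPI, key material, and the interface name are placeholders; production setups normally let an IKE daemon install these states.

```python
# Hypothetical sketch: program one outbound ESP state with hardware crypto
# offload using iproute2's `ip xfrm ... offload dev <dev> dir <dir>` syntax.
import subprocess

DEV = "enp3s0f0"            # placeholder netdev with IPsec crypto offload
KEY = "0x" + "11" * 20      # placeholder 160-bit rfc4106 key material

subprocess.run(["ip", "xfrm", "state", "add",
                "src", "192.0.2.1", "dst", "192.0.2.2",
                "proto", "esp", "spi", "0x1000", "reqid", "0x1000",
                "mode", "transport",
                "aead", "rfc4106(gcm(aes))", KEY, "128",
                "offload", "dev", DEV, "dir", "out"],
               check=True)
```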


Network Interface
• Two SerDes lanes supporting 25Gb/s per lane, for various port configurations:
-2x 10/25 GbE
-1x 50GbE

Storage Accelerations
• NVMe over Fabrics offloads for target
• Storage protocols: iSER, NFSoRDMA, SMB Direct, NVMe-oF, and more

  Management and Control
• NC-SI, MCTP over SMBus and MCTP over PCIe - Baseboard Management Controller interface, NCSI over RBT in OCP cards
• PLDM for Monitor and Control DSP0248
• PLDM for Firmware Update DSP0267

Remote Boot
• Remote boot over Ethernet
• Remote boot over iSCSI
• Unified extensible firmware interface (UEFI) support for x86 and Arm servers
• Pre-execution environment (PXE) boot

  Host Interface
• PCIe Gen 4.0, 3.0, 2.0, 1.1
• 16.0, 8.0, 5.0, 2.5 GT/s link rate
• 8 lanes of PCIe (see the throughput estimate after this list)
• MSI/MSI-X mechanisms
• Advanced PCIe capabilities
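
A rough back-of-the-envelope throughput estimate for the listed link rates (an illustration, not a datasheet figure): PCIe Gen3 and Gen4 use 128b/130b encoding, so an x8 link provides roughly 63 Gb/s (Gen3) or 126 Gb/s (Gen4) per direction before protocol overhead, comfortably above the 2x 25GbE line rate.

```python
# Rough per-direction PCIe bandwidth for an x8 link, assuming 128b/130b
# encoding at 8.0/16.0 GT/s and ignoring TLP/DLLP protocol overhead.
LANES = 8
for name, gts in [("Gen3", 8.0), ("Gen4", 16.0)]:
    gbps = gts * LANES * 128 / 130
    print(f"{name} x{LANES}: ~{gbps:.1f} Gb/s (~{gbps / 8:.1f} GB/s) per direction")
```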

Cybersecurity
• Inline hardware IPsec encryption and decryption
• AES-XTS 256/512-bit key
• IPsec over RoCE
• Platform security
• Hardware root-of-trust
• Secure firmware update

  Virtualization/Cloud Native
• Single root IOV (SR-IOV) and VirtIO acceleration
-Up to 512 virtual functions per port
-8 physical functions
• Support for tunneling
-Encap/decap of VXLAN, NVGRE, GENEVE, and more
-Stateless offloads for overlay tunnels

Stateless Offloads
• TCP/UDP/IP stateless offload
• LSO, LRO, checksum offload (see the ethtool sketch after this list)
• Receive side scaling (RSS) also on encapsulated packets
• Transmit side scaling (TSS)
• VLAN and MPLS tag insertion/stripping
• Receive flow steering
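
As a hedged illustration (the interface name and queue count are placeholders), the stateless offloads above are typically inspected and tuned with ethtool:

```python
# Hypothetical sketch: query offload state, toggle LRO/checksum offloads, and
# size the RSS queues and indirection table on a placeholder interface.
import subprocess

DEV = "enp3s0f0"

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["ethtool", "-k", DEV])                                        # show current offloads
run(["ethtool", "-K", DEV, "lro", "on", "rx", "on", "tx", "on"])   # enable LRO + checksum offloads
run(["ethtool", "-L", DEV, "combined", "8"])                       # 8 combined queues for RSS
run(["ethtool", "-X", DEV, "equal", "8"])                          # spread RSS evenly over them
```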

  NVIDIA ASAP
• SDN acceleration for:
-Bare metal
-Virtualization
-Containers
• Full hardware offload for OVS data plane
• Flow update through RTE_Flow or TC_Flower
• OpenStack support
• Kubernetes support
• Rich classification engine (Layer 2 to Layer 4)
• Flex-parser
• Hardware offload for:
-Connection tracking (Layer 4 firewall)
-NAT
-Header rewrite
-Mirroring
-Sampling
-Flow aging
-Hierarchical QoS
-Flow-based statistics

 
Full specification and details can be found in the Product Datasheet PDF file

In Stock: 2-3 Weeks
£310.86
£373.03 Inc Vat
Mellanox MCX631102AS-ADAT CONNECTX-6 LX EN Adapter Card 25GBE Dual-Port
Secure Boot No Crypto Tall Bracket

NVIDIA ConnectX-6 Lx Ethernet SmartNIC

LENOVO 25G ETHERNET SMARTNIC

Providing up to two ports of 25GbE connectivity, and PCIe Gen 3.0/4.0 x8 host connectivity, ConnectX-6 Lx MCX631102AS-ADAT is a member of NVIDIA's world-class, award-winning, ConnectX family of network adapters. Continuing NVIDIA's consistent innovation in networking, ConnectX-6 Lx provides agility and efficiency at every scale. ConnectX-6 Lx delivers cutting edge 25GbE performance and security for uncompromising data centers.




Features & Applications • Line speed message rate of 75Mpps

• Advanced RoCE

• ASAP2

• Accelerated Switching and Packet Processing

• IPsec in-line crypto acceleration

• Overlay tunneling accelerations

• Stateful rule checking for connection tracking

• Hardware Root-of-Trust and secure firmware update

• Best-in-class PTP performance

• ODCC compatible


Mellanox® ConnectX®-6 Lx SmartNICs deliver scalability, high-performance, advanced security capabilities, and accelerated networking with the best total cost of ownership for 25 GbE deployments in cloud, telco, and enterprise data centers.

Providing up to two ports of 25 GbE connectivity, along with PCIe Gen 3.0/4.0 x8 host connectivity, ConnectX-6 Lx is a member of Mellanox’s world-class, award-winning, ConnectX family of network adapters. Continuing Mellanox’s consistent innovation in networking, ConnectX-6 Lx provides agility and efficiency at every scale.

ConnectX-6 Lx SmartNICs deliver cutting edge 25 GbE performance and security for uncompromising data centers.




SmartNIC Portfolio • 10/25 Gb/s Ethernet

• Various form factors:

- PCIe low-profile

- OCP 3.0 Small Form Factor (SFF)

• Connectivity options:

- SFP28

• PCIe Gen 3.0/4.0 x8

• Crypto and non-crypto versions




WIDE SELECTION OF SMARTNICS ConnectX-6 Lx SmartNICs are available in several form factors including low-profile PCIe and OCP 3.0 cards with SFP28 connectors for 10/25 GbE applications. Low-profile PCIe cards are available with tall and short brackets, while OCP3.0 cards are available with either a pull tab or an internal lock bracket.




BEST-IN-CLASS SDN ACCELERATION Mellanox’s ASAP2 - Accelerated Switch and Packet Processing® technology offloads the SDN data plane to the SmartNIC, accelerating performance and offloading the CPU in virtualized or containerized cloud data centers. Customers can accelerate their data centers with an SR-IOV or VirtIO interface while continuing to enjoy their SDN of choice.

The ConnectX-6 Lx ASAP2 rich feature set accelerates public and on-premises enterprise clouds, and boosts communication service providers' (CSP) transition to NFV. ASAP2 supports these communication service providers by enabling packet encapsulations, such as MPLS and GTP, alongside cloud encapsulations, such as VXLAN, Geneve, and others.




INDUSTRY-LEADING ROCE Following the Mellanox ConnectX tradition of industry-leading RoCE capabilities, ConnectX-6 Lx enables more scalable, resilient, and easy-to-deploy RoCE solutions— Zero Touch RoCE. ConnectX-6 Lx allows RoCE payloads to run seamlessly on existing networks without requiring network configuration (no PFC, no ECN) for simplified RoCE deployments. ConnectX-6 Lx ensures RoCE resiliency and efficiency at scale.




SECURE YOUR INFRASTRUCTURE In an era where privacy of information is key and zero trust is the rule, ConnectX-6 Lx adapters offer a range of advanced built-in capabilities that bring infrastructure security down to every endpoint with unprecedented performance and scalability. ConnectX-6 Lx offers IPsec inline encryption/decryption acceleration. ASAP2 connection-tracking hardware offload accelerates L4 firewall performance.

ConnectX-6 Lx also delivers supply chain protection with hardware Root-of-Trust (RoT) for Secure Boot as well as Secure Firmware Update using RSA cryptography and cloning protection, via a device-unique key, to guarantee firmware authenticity.


Platform Security
• Hardware root-of-trust
• Secure firmware update

Storage Accelerations
• NVMe over Fabrics offloads for target
• Storage protocols: iSER, NFSoRDMA, SMB Direct, NVMe-oF, and more

  Management and Control
• NC-SI, MCTP over SMBus and MCTP over PCIe - Baseboard Management Controller interface, NCSI over RBT in OCP 2.0/3.0 cards
• PLDM for Monitor and Control DSP0248
• PLDM for Firmware Update DSP0267

Remote Boot
• Remote boot over Ethernet
• Remote boot over iSCSI
• UEFI support for x86 and Arm servers
• PXE boot

  Host Interface
• PCIe Gen 4.0, 3.0, 2.0, 1.1
• 16.0, 8.0, 5.0, 2.5 GT/s link rate
• 8 lanes of PCIe
• MSI/MSI-X mechanisms
• Advanced PCIe capabilities

Virtualization/Cloud Native
• Single Root IOV (SR-IOV) and VirtIO acceleration
-Up to 512 VFs per port
-8 PFs
• Support for tunneling
-Encap/decap of VXLAN, NVGRE, Geneve, and more
-Stateless offloads for overlay tunnels

  Stateless Offloads
• TCP/UDP/IP stateless offload
• LSO, LRO, checksum offload
• Receive Side Scaling (RSS) also on encapsulated packets
• Transmit Side Scaling (TSS)
• VLAN and MPLS tag insertion/stripping
• Receive flow steering

Advanced Timing & Synchronization
• Advanced PTP
-IEEE 1588v2 (any profile)
-PTP Hardware Clock (PHC) (UTC format)
-Line rate hardware timestamp (UTC format)
• Time triggered scheduling
• PTP based packet pacing
• Time based SDN acceleration (ASAP2)

  Mellanox ASAP
• SDN acceleration for:
-Bare metal
-Virtualization
-Containers
• Full hardware offload for OVS data plane
• Flow update through RTE_Flow or TC_Flower
• OpenStack support
• Kubernetes support
• Rich classification engine (L2 to L4)
• Flex-Parser: user defined classification
• Hardware offload for:
-Connection tracking (L4 firewall)
-NAT
-Header rewrite
-Mirroring
-Sampling
-Flow aging
-Hierarchical QoS
-Flow-based statistics

RDMA over Converged Ethernet
• RoCE v1/v2
• Zero-Touch RoCE: no ECN, no PFC
• RoCE over overlay networks
• Selective repeat
• GPUDirect®
• Dynamically Connected Transport (DCT)
• Burst buffer offload

 
Full specification and details can be found in the Product Datasheet PDF file

In Stock: 2-3 Weeks
£310.86
£373.03 Inc Vat
Mellanox MCX631102AC-ADAT CONNECTX-6 LX EN Adapter Card 25GBE
Crypto and Secure Boot Tall Bracket

ConnectX-6 Lx Ethernet SmartNIC

25G/50G Ethernet SmartNIC (PCIe HHHL/OCP3)

Providing up to two ports of 25GbE connectivity, and PCIe Gen 3.0/4.0 x8 host connectivity, ConnectX-6 Lx MCX631102AS-ADAT is a member of NVIDIA's world-class, award-winning, ConnectX family of network adapters. Continuing NVIDIA's consistent innovation in networking, ConnectX-6 Lx provides agility and efficiency at every scale. ConnectX-6 Lx delivers cutting edge 25GbE performance and security for uncompromising data centers.




Features & Applications • Line speed message rate of 75Mpps

• Advanced RoCE

• ASAP2

• Accelerated Switching and Packet Processing

• IPsec in-line crypto acceleration

• Overlay tunneling accelerations

• Stateful rule checking for connection tracking

• Hardware Root-of-Trust and secure firmware update

• Best-in-class PTP performance

• ODCC compatible


ConnectX®-6 Lx Ethernet smart network interface cards (SmartNIC) deliver scalability, high performance, advanced security capabilities, and accelerated networking with the best total cost of ownership for 25GbE deployments in cloud and enterprise data centers. The SmartNICs support up to two ports of 25GbE, or a single-port of 50GbE connectivity, along with PCI Express Gen3 and Gen4 x8 host connectivity to deliver cutting-edge 25GbE performance and security for uncompromising data centers.




SmartNIC Portfolio • 10/25 Gb/s Ethernet

• Various form factors:

- PCIe low-profile

- OCP 3.0 Small Form Factor (SFF)

• Connectivity options:

- SFP28

• PCIe Gen 3.0/4.0 x8

• Crypto and non-crypto versions




SDN Acceleration NVIDIA ASAP2 - Accelerated Switch and Packet Processing™ technology offloads the software-defined networking (SDN) data plane to the SmartNIC, accelerating performance and offloading the CPU in virtualized or containerized cloud data centers. Customers can accelerate their data centers with an SR-IOV or VirtIO interface while continuing to enjoy their SDN solution of choice. The ConnectX-6 Lx ASAP2 rich feature set accelerates public and on-premises enterprise clouds and boosts communication service providers' (CSP) transition to network function virtualization (NFV). ASAP2 supports these communication service providers by enabling packet encapsulations, such as MPLS and GTP, alongside cloud encapsulations, such as VXLAN, GENEVE, and others.




Industry-leading RoCE Following in the ConnectX tradition of providing industry-leading RDMA over Converged Ethernet (RoCE) capabilities, ConnectX-6 Lx enables more scalable, resilient, and easy-to-deploy RoCE solutions. With Zero Touch RoCE (ZTR), the ConnectX-6 Lx allows RoCE payloads to run seamlessly on existing networks without special configuration, either to priority flow control (PFC) or explicit congestion notification (ECN), for simplified RoCE deployments. ConnectX-6 Lx ensures RoCE resilience and efficiency at scale.




Secure Your Infrastructure In an era where data privacy is key, ConnectX-6 Lx adapters offer advanced, built-in capabilities that bring security down to the endpoints with unprecedented performance and scalability. ConnectX-6 Lx offers IPsec inline encryption and decryption acceleration. ASAP2 connection-tracking hardware offload accelerates Layer 4 firewall performance.

ConnectX-6 Lx also delivers supply chain protection with hardware root-of-trust (RoT) for secure boot and firmware updates using RSA cryptography and cloning protection, via a device-unique key, to guarantee firmware authenticity.


Network Interface
• Two SerDes lanes supporting 25Gb/s per lane, for various port configurations:
-2x 10/25 GbE
-1x 50GbE

Storage Accelerations
• NVMe over Fabrics offloads for target
• Storage protocols: iSER, NFSoRDMA, SMB Direct, NVMe-oF, and more

  Management and Control
• NC-SI, MCTP over SMBus and MCTP over PCIe - Baseboard Management Controller interface, NCSI over RBT in OCP cards
• PLDM for Monitor and Control DSP0248
• PLDM for Firmware Update DSP0267

Remote Boot
• Remote boot over Ethernet
• Remote boot over iSCSI
• Unified extensible firmware interface (UEFI) support for x86 and Arm servers
• Pre-execution environment (PXE) boot

  Host Interface
• PCIe Gen 4.0, 3.0, 2.0, 1.1
• 16.0, 8.0, 5.0, 2.5 GT/s link rate
• 8 lanes of PCIe
• MSI/MSI-X mechanisms
• Advanced PCIe capabilities

Cybersecurity
• Inline hardware IPsec encryption and decryption
• AES-XTS 256/512-bit key
• IPsec over RoCE
• Platform security
• Hardware root-of-trust
• Secure firmware update

  Virtualization/Cloud Native
• Single root IOV (SR-IOV) and VirtIO acceleration
-Up to 512 virtual functions per port
-8 physical functions
• Support for tunneling
-Encap/decap of VXLAN, NVGRE, GENEVE, and more
-Stateless offloads for overlay tunnels

Stateless Offloads
• TCP/UDP/IP stateless offload
• LSO, LRO, checksum offload
• Receive side scaling (RSS) also on encapsulated packets
• Transmit side scaling (TSS)
• VLAN and MPLS tag insertion/stripping
• Receive flow steering

  NVIDIA ASAP
• SDN acceleration for:
-Bare metal
-Virtualization
-Containers
• Full hardware offload for OVS data plane
• Flow update through RTE_Flow or TC_Flower
• OpenStack support
• Kubernetes support
• Rich classification engine (Layer 2 to Layer 4)
• Flex-parser
• Hardware offload for:
-Connection tracking (Layer 4 firewall)
-NAT
-Header rewrite
-Mirroring
-Sampling
-Flow aging
-Hierarchical QoS
-Flow-based statistics

 
Full specification and details can be found in the Product Datasheet PDF file

In Stock: 2-3 Weeks
£388.08
£465.70 Inc Vat
Mellanox CONNECTX-5 EN Network Interface Card 50GBE
QSFP28 PCIE3.0 X16 Tall Bracket

ConnectX®-5 EN Card

Up to 50Gb/s Ethernet Adapter Cards

Intelligent RDMA-enabled, single and dual-port network adapter with advanced application offload capabilities for Web 2.0, Cloud, Storage, and Telco platforms

ConnectX-5 Ethernet network interface cards provide high performance and flexible solutions with up to two ports of 50GbE connectivity, 750ns latency, up to 200 million messages per second (Mpps), and a record-setting 197Mpps when running the open-source Data Plane Development Kit (DPDK) over PCIe Gen 4.0. For storage workloads, ConnectX-5 delivers a range of innovative accelerations, such as Signature Handover (T10-DIF) in hardware, an embedded PCIe Switch, and NVMe over Fabric target offloads. ConnectX-5 adapter cards also bring advanced Open vSwitch offloads to telecommunications and cloud data centers to drive extremely high packet rates and throughput with reduced CPU resource consumption, thus boosting data center infrastructure efficiency.

ConnectX-5 adapter cards are available for PCIe Gen 3.0 and Gen 4.0 servers and provide support for 1, 10, 25, 40, 50 and 100 GbE speeds in stand-up PCIe cards, OCP 2.0, and OCP 3.0 form factors. ConnectX-5 cards also offer advanced Multi-Host and Socket Direct technologies.




Benefits • Up to 100Gb/s connectivity per port

• Industry-leading throughput, low latency, low CPU utilization and high message rate

• Innovative rack design for storage and Machine Learning based on Host Chaining technology

• Smart interconnect for x86, Power, Arm, and GPU-based compute & storage platforms

• Advanced storage capabilities including NVMe over Fabric offloads

• Intelligent network adapter supporting flexible pipeline programmability

• Cutting-edge performance in virtualized networks including Network Function Virtualization (NFV)

• Enabler for efficient service chaining capabilities

• Efficient I/O consolidation, lowering data center costs and complexity


ConnectX-5 Ethernet adapter cards provide high performance and flexible solutions with up to two ports of 100GbE connectivity, 750ns latency, up to 200 million messages per second (Mpps), and a record-setting 197Mpps when running the open-source Data Plane Development Kit (DPDK) over PCIe Gen 4.0. For storage workloads, ConnectX-5 delivers a range of innovative accelerations, such as Signature Handover (T10-DIF) in hardware, an embedded PCIe Switch, and NVMe over Fabric target offloads. ConnectX-5 adapter cards also bring advanced Open vSwitch offloads to telecommunications and cloud data centers to drive extremely high packet rates and throughput with reduced CPU resource consumption, thus boosting data center infrastructure efficiency.

ConnectX-5 adapter cards are available for PCIe Gen 3.0 and Gen 4.0 servers and provide support for 1, 10, 25, 40, 50 and 100 GbE speeds in stand-up PCIe cards, OCP 2.0, and OCP 3.0 form factors. ConnectX-5 cards also offer advanced Mellanox Multi-Host® and Mellanox Socket Direct® technologies.




Features • Tag matching and rendezvous offloads

• Adaptive routing on reliable transport

• Burst buffer offloads for background checkpointing

• NVMe over Fabric offloads

• Backend switch elimination by host chaining

• Embedded PCIe switch

• Enhanced vSwitch/vRouter offloads

• Flexible pipeline

• RoCE for overlay networks

• PCIe Gen 4.0 support

• RoHS compliant

• ODCC compatible

• Various form factors available




Cloud and Web 2.0 Environments ConnectX-5 adapter cards enable data center administrators to benefit from better server utilization and reduced costs, power usage, and cable complexity, allowing for more virtual appliances, virtual machines (VMs) and tenants to co-exist on the same hardware.

Supported vSwitch/vRouter offload functions include:

• Overlay Networks (e.g., VXLAN, NVGRE, MPLS, GENEVE, and NSH) header encapsulation & decapsulation.

• Stateless offloads of inner packets and packet headers’ re-write, enabling NAT functionality and more.

• Flexible and programmable parser and match-action tables, which enable hardware offloads for future protocols.

• SR-IOV technology, providing dedicated adapter resources, guaranteed isolation and protection for virtual machines (VMs) within the server (see the SR-IOV sketch after this list).

• Network Function Virtualization (NFV), enabling a VM to be used as a virtual appliance. The full datapath operation offloads, hairpin hardware capability, and service chaining enable data to be handled by the virtual appliance with minimal CPU utilization.
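
As a minimal sketch of the SR-IOV point above (the PF name and VF count are placeholders, and vendor tools or devlink can achieve the same), virtual functions are created through the standard Linux sysfs interface and then passed to VMs or containers:

```python
# Hypothetical sketch: create SR-IOV virtual functions via sysfs.
PF = "enp3s0f0"     # placeholder physical-function netdev
NUM_VFS = 4

path = f"/sys/class/net/{PF}/device/sriov_numvfs"

# The kernel requires the count to be reset to 0 before a new non-zero value.
with open(path, "w") as f:
    f.write("0")
with open(path, "w") as f:
    f.write(str(NUM_VFS))

with open(f"/sys/class/net/{PF}/device/sriov_totalvfs") as f:
    print(f"Enabled {NUM_VFS} of {f.read().strip()} supported VFs on {PF}")
```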



Cloud and Web 2.0 customers developing platforms on Software Defined Network (SDN) environments are leveraging their servers’ Operating System Virtual-Switching capabilities to achieve maximum flexibility. Open vSwitch (OvS) is an example of a virtual switch that allows Virtual Machines to communicate with each other and with the outside world. Traditionally residing in the hypervisor where switching is based on twelve-tuple matching on flows, the virtual switch, or virtual router software-based solution, is CPU-intensive. This can negatively affect system performance and prevent the full utilization of available bandwidth.

Mellanox ASAP2 - Accelerated Switching and Packet Processing® technology enables offloading the vSwitch/vRouter by handling the data plane in the NIC hardware, without modifying the control plane. This results in significantly higher vSwitch/vRouter performance without the associated CPU load.

Additionally, intelligent ConnectX-5’s flexible pipeline capabilities, including flexible parser and flexible match-action tables, are programmable, enabling hardware offloads for future protocols.




Storage Environments NVMe storage devices are gaining popularity by offering very fast storage access. The evolving NVMe over Fabric (NVMe-oF) protocol leverages the RDMA connectivity for remote access. ConnectX-5 offers further enhancements by providing NVMe-oF target offloads, enabling very efficient NVMe storage access with no CPU intervention, and thus improving performance and reducing latency.
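
As an illustrative sketch of the NVMe-oF target side (not the offload configuration itself, which is driver and firmware specific), a namespace can be exported over RDMA with the standard Linux nvmet configfs layout; the NQN, backing device, and address below are placeholders:

```python
# Hypothetical sketch: export one block device over NVMe-oF/RDMA via configfs.
# Prerequisite (not shown): modprobe nvmet nvmet-rdma; run as root.
import os

CFG = "/sys/kernel/config/nvmet"
NQN = "nqn.2024-01.io.example:subsys1"    # placeholder subsystem NQN
BDEV = "/dev/nvme0n1"                     # placeholder backing block device
ADDR = "192.0.2.1"                        # placeholder RDMA-capable IP address

def write(path, value):
    with open(path, "w") as f:
        f.write(value)

subsys = f"{CFG}/subsystems/{NQN}"
os.makedirs(f"{subsys}/namespaces/1")
write(f"{subsys}/attr_allow_any_host", "1")
write(f"{subsys}/namespaces/1/device_path", BDEV)
write(f"{subsys}/namespaces/1/enable", "1")

port = f"{CFG}/ports/1"
os.makedirs(port)
write(f"{port}/addr_trtype", "rdma")
write(f"{port}/addr_adrfam", "ipv4")
write(f"{port}/addr_traddr", ADDR)
write(f"{port}/addr_trsvcid", "4420")
os.symlink(subsys, f"{port}/subsystems/{NQN}")   # expose the subsystem on the port
```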



The embedded PCIe switch enables customers to build standalone storage or Machine Learning appliances. As with earlier generations of ConnectX adapters, standard block and file access protocols leverage RoCE for high-performance storage access. A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks.

ConnectX-5 enables an innovative storage rack design, Host Chaining, which enables different servers to interconnect without involving the Top of the Rack (ToR) switch. Leveraging Host Chaining, ConnectX-5 lowers the data center’s total cost of ownership (TCO) by reducing CAPEX (cables, NICs, and switch port expenses). OPEX is also reduced by cutting down on switch port management and overall power usage.






Telecommunications Telecommunications service providers are moving towards disaggregation, server virtualization, and orchestration as key tenets to modernize their networks. Likewise, they’re also moving towards Network Function Virtualization (NFV), which enables the rapid deployment of new network services. With this move, proprietary dedicated hardware and software, which tend to be static and difficult to scale, are being replaced with virtual machines running on commercial off-the-shelf (COTS) servers.

For telecom service providers, choosing the right networking hardware is critical to achieving a cloud-native NFV solution that is agile, reliable, fast and efficient. Telco service providers typically leverage virtualization and cloud technologies to better achieve agile service delivery and efficient scalability; these technologies require an advanced network infrastructure to support higher rates of packet processing. However, the resultant east-west traffic causes numerous interrupts as I/O traverses from kernel to user space, eats up CPU cycles and decreases packet performance. Particularly sensitive to delays are voice and video applications which often require less than 100ms of latency.

ConnectX-5 adapter cards drive extremely high packet rates, increased throughput, and higher network efficiency through the following technologies: Open vSwitch (OvS) offloads, OvS over DPDK or ASAP², Network Overlay Virtualization, SR-IOV, and RDMA. This allows for secure data delivery through higher-performance offloads, reducing CPU resource utilization, and boosting data center infrastructure efficiency. The result is a much more responsive and agile network capable of rapidly deploying network services.
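
As a hedged example of the DPDK path mentioned above (the PCI address, core list, and hugepage setup are placeholders or prerequisites, and the binary name differs between DPDK releases), a simple kernel-bypass forwarding run looks like this:

```python
# Hypothetical sketch: run DPDK's testpmd in MAC-swap forwarding mode on one
# port to exercise poll-mode, kernel-bypass packet processing.
import subprocess

PCI_ADDR = "0000:03:00.0"   # placeholder PCI address of the NIC port

subprocess.run(["dpdk-testpmd",
                "-l", "0-3",               # EAL: CPU cores to use
                "-n", "4",                 # EAL: memory channels
                "-a", PCI_ADDR,            # EAL: allow-list only this device
                "--",                      # testpmd application options follow
                "--rxq=4", "--txq=4",      # RX/TX queues per port
                "--forward-mode=macswap",  # swap MACs and transmit back out
                "--stats-period=1"],       # print port statistics every second
               check=True)
```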




Wide Selection of Adapter Cards ConnectX-5 Ethernet adapter cards are available in several form factors including: low-profile stand-up PCIe, OCP 2.0 Type 1 and Type 2, and OCP 3.0 Small Form Factor.

Mellanox Multi-Host® technology allows multiple hosts to be connected into a single adapter by separating the PCIe interface into multiple and independent interfaces.

The portfolio also offers Mellanox Socket-Direct® configurations that enable servers without x16 PCIe slots to split the card’s 16-lane PCIe bus into two 8-lane buses on dedicated cards connected by a harness.




Host Management Host Management includes NC-SI over MCTP over SMBus, and MCTP over PCIe - Baseboard Management Controller (BMC) interface, as well as PLDM for Monitor and Control DSP0248 and PLDM for Firmware Update DSP0267.


Storage Offloads
• NVMe over Fabric offloads for target machine
• T10 DIF – Signature handover operation at wire speed, for ingress and egress traffic
• Storage protocols: SRP, iSER, NFS RDMA, SMB Direct, NVMe-oF

Remote Boot
• Remote boot over Ethernet
• Remote boot over iSCSI
• Unified Extensible Firmware Interface (UEFI)
• Pre-execution Environment (PXE)

  Enhanced Features
• Hardware-based reliable transport
• Collective operations offloads
• Vector collective operations offloads
• Mellanox PeerDirect® RDMA (aka GPUDirect®) communication acceleration
• 64/66 encoding
• Extended Reliable Connected transport (XRC)
• Dynamically Connected Transport (DCT)
• Enhanced Atomic operations
• Advanced memory mapping support, allowing user mode registration and remapping of memory (UMR)
• On demand paging (ODP)
• MPI Tag Matching
• Rendezvous protocol offload
• Out-of-order RDMA supporting Adaptive Routing
• Burst buffer offload
• In-Network Memory registration-free RDMA memory access

CPU Offloads
• RDMA over Converged Ethernet (RoCE)
• TCP/UDP/IP stateless offload
• LSO, LRO, checksum offload
• RSS (also on encapsulated packet), TSS, HDS, VLAN and MPLS tag insertion/stripping, Receive flow steering
• Data Plane Development Kit (DPDK) for kernel bypass applications
• Open VSwitch (OVS) offload using ASAP2
-Flexible match-action flow tables
-Tunneling encapsulation/de-capsulation
• Intelligent interrupt coalescence
• Header rewrite supporting hardware offload of NAT router

  Hardware-Based I/O Virtualization
• Single Root IOV
• Address translation and protection
• VMware NetQueue support
-SR-IOV: Up to 512 Virtual Functions
-SR-IOV: Up to 8 Physical Functions per host
• Virtualization hierarchies (e.g., NPAR when enabled)
-Virtualizing Physical Functions on a physical port
-SR-IOV on every Physical Function
• Configurable and user-programmable QoS
• Guaranteed QoS for VMs

Management and Control
• NC-SI over MCTP over SMBus and NC-SI over MCTP over PCIe - Baseboard Management Controller interface
• PLDM for Monitor and Control DSP0248
• PLDM for Firmware Update DSP0267
• SDN management interface for managing the eSwitch
• I2C interface for device control and configuration
• General Purpose I/O pins
• SPI interface to Flash
• JTAG IEEE 1149.1 and IEEE 1149.6

  Overlay Networks
• RoCE over Overlay Networks
• Stateless offloads for overlay network tunneling protocols
• Hardware offload of encapsulation and decapsulation of VXLAN, NVGRE, and GENEVE overlay networks

 
Full specification and details can be found in the Product Datasheet PDF file

In Stock: 2-3 Weeks
£ POA
£ POA Inc Vat
Mellanox MCX631102AN-ADAT CONNECTX-6 LX EN Adapter Card 25GBE Dual-Port
No Crypto Tall Bracket

NVIDIA ConnectX-6 Lx Ethernet SmartNIC

25G/50G Ethernet SmartNIC (PCIe HHHL/OCP3)

Providing up to two ports of 25GbE connectivity, and PCIe Gen 3.0/4.0 x8 host connectivity, ConnectX-6 Lx MCX631102AS-ADAT is a member of NVIDIA's world-class, award-winning, ConnectX family of network adapters. Continuing NVIDIA's consistent innovation in networking, ConnectX-6 Lx provides agility and efficiency at every scale. ConnectX-6 Lx delivers cutting edge 25GbE performance and security for uncompromising data centers.




Features & Applications • Line speed message rate of 75Mpps

• Advanced RoCE

• ASAP2

• Accelerated Switching and Packet Processing

• IPsec in-line crypto acceleration

• Overlay tunneling accelerations

• Stateful rule checking for connection tracking

• Hardware Root-of-Trust and secure firmware update

• Best-in-class PTP performance

• ODCC compatible


ConnectX®-6 Lx Ethernet smart network interface cards (SmartNIC) deliver scalability, high performance, advanced security capabilities, and accelerated networking with the best total cost of ownership for 25GbE deployments in cloud and enterprise data centers. The SmartNICs support up to two ports of 25GbE, or a single-port of 50GbE connectivity, along with PCI Express Gen3 and Gen4 x8 host connectivity to deliver cutting-edge 25GbE performance and security for uncompromising data centers.




SmartNIC Portfolio • 10/25 Gb/s Ethernet

• Various form factors:

- PCIe low-profile

- OCP 3.0 Small Form Factor (SFF)

• Connectivity options:

- SFP28

• PCIe Gen 3.0/4.0 x8

• Crypto and non-crypto versions




SDN Acceleration NVIDIA ASAP2 - Accelerated Switch and Packet Processing™ technology offloads the software-defined networking (SDN) data plane to the SmartNIC, accelerating performance and offloading the CPU in virtualized or containerized cloud data centers. Customers can accelerate their data centers with an SR-IOV or VirtIO interface while continuing to enjoy their SDN solution of choice. The ConnectX-6 Lx ASAP2 rich feature set accelerates public and on-premises enterprise clouds and boosts communication service providers' (CSP) transition to network function virtualization (NFV). ASAP2 supports these communication service providers by enabling packet encapsulations, such as MPLS and GTP, alongside cloud encapsulations, such as VXLAN, GENEVE, and others.




Industry-leading RoCE Following in the ConnectX tradition of providing industry-leading RDMA over Converged Ethernet (RoCE) capabilities, ConnectX-6 Lx enables more scalable, resilient, and easy-to-deploy RoCE solutions. With Zero Touch RoCE (ZTR), the ConnectX-6 Lx allows RoCE payloads to run seamlessly on existing networks without special configuration, either to priority flow control (PFC) or explicit congestion notification (ECN), for simplified RoCE deployments. ConnectX-6 Lx ensures RoCE resilience and efficiency at scale.




Secure Your Infrastructure In an era where data privacy is key, ConnectX-6 Lx adapters offer advanced, built-in capabilities that bring security down to the endpoints with unprecedented performance and scalability. ConnectX-6 Lx offers IPsec inline encryption and decryption acceleration. ASAP2 connection-tracking hardware offload accelerates Layer 4 firewall performance.

ConnectX-6 Lx also delivers supply chain protection with hardware root-of-trust (RoT) for secure boot and firmware updates using RSA cryptography and cloning protection, via a device-unique key, to guarantee firmware authenticity.


Network Interface
• Two SerDes lanes supporting 25Gb/s per lane, for various port configurations:
-2x 10/25 GbE
-1x 50GbE

Storage Accelerations
• NVMe over Fabrics offloads for target
• Storage protocols: iSER, NFSoRDMA, SMB Direct, NVMe-oF, and more

  Management and Control
• NC-SI, MCTP over SMBus and MCTP over PCIe - Baseboard Management Controller interface, NCSI over RBT in OCP cards
• PLDM for Monitor and Control DSP0248
• PLDM for Firmware Update DSP0267

Remote Boot
• Remote boot over Ethernet
• Remote boot over iSCSI
• Unified extensible firmware interface (UEFI) support for x86 and Arm servers
• Pre-execution environment (PXE) boot

  Host Interface
• PCIe Gen 4.0, 3.0, 2.0, 1.1
• 16.0, 8.0, 5.0, 2.5 GT/s link rate
• 8 lanes of PCIe
• MSI/MSI-X mechanisms
• Advanced PCIe capabilities

Cybersecurity
• Inline hardware IPsec encryption and decryption
• AES-XTS 256/512-bit key
• IPsec over RoCE
• Platform security
• Hardware root-of-trust
• Secure firmware update

  Virtualization/Cloud Native
• Single root IOV (SR-IOV) and VirtIO acceleration
-Up to 512 virtual functions per port
-8 physical functions
• Support for tunneling
-Encap/decap of VXLAN, NVGRE, GENEVE, and more
-Stateless offloads for overlay tunnels

Stateless Offloads
• TCP/UDP/IP stateless offload
• LSO, LRO, checksum offload
• Receive side scaling (RSS) also on encapsulated packets
• Transmit side scaling (TSS)
• VLAN and MPLS tag insertion/stripping
• Receive flow steering

  NVIDIA ASAP
• SDN acceleration for:
-Bare metal
-Virtualization
-Containers
• Full hardware offload for OVS data plane
• Flow update through RTE_Flow or TC_Flower
• OpenStack support
• Kubernetes support
• Rich classification engine (Layer 2 to Layer 4)
• Flex-parser
• Hardware offload for:
-Connection tracking (Layer 4 firewall)
-NAT
-Header rewrite
-Mirroring
-Sampling
-Flow aging
-Hierarchical QoS
-Flow-based statistics

 
Full specification and details can be found in the Product Datasheet PDF file

In Stock: 2-3 Weeks
£310.86
£373.03 Inc Vat
Mellanox MCX623102AC-ADAT CONNECTX-6 DX EN Adapter Card 25GBE Dual-Port
Crypto and Secure Boot Tall Bracket

NVIDIA ConnectX-6 DX

ETHERNET SMARTNIC

NVIDIA® ConnectX®-6 Dx is a highly secure and advanced smart network interface card (SmartNIC) to accelerate mission-critical data center applications, such as security, virtualization, SDN/NFV, big data, machine learning, and storage. It provides up to two ports of 100Gb/s or a single-port of 200Gb/s Ethernet connectivity and the highest ROI of any SmartNIC.

ConnectX-6 Dx is powered by leading 50Gb/s (PAM4) and 25/10 Gb/s (NRZ) SerDes technology and novel capabilities that accelerate cloud and data center payloads.




Key Features • Up to 200 Gb/s bandwidth

• Message rate of up to 215Mpps

• Sub 0.8usec latency

• Programmable pipeline for new network flows

• NVIDIA Multi-Host with advanced QoS

• ASAP2 for vSwitches and vRouters

• Overlay tunneling technologies

• IPsec and TLS in-line crypto acceleration

• Block crypto acceleration for data-at-rest

• Hardware root-of-trust and secure firmware update

• Connection tracking offload

• Advanced RoCE capabilities

• Best in class PTP for time-sensitive networking (TSN) applications

• NVIDIA GPUDirect® for GPU-to-GPU communication

• Host chaining technology for economical rack design

• Platform agnostic: x86, Power, Arm

• Open Data Center Committee (ODCC) compatible




Zero-Trust Security In an era where data privacy is key, ConnectX-6 Dx adapters offer advanced, built-in capabilities that bring security down to the endpoints with unprecedented performance and scalability:

• Crypto—IPsec and TLS data-in-motion inline encryption and decryption offload, and AES-XTS block-level, data-at-rest encryption and decryption offloads

• Probes and denial-of-service (DoS) attack protection—ConnectX-6 Dx enables a hardware-based L4 firewall by offloading stateful connection tracking through NVIDIA ASAP2 - Accelerated Switch and Packet Processing® offload technology

• NIC security—Hardware root-of-trust (RoT) secure boot and secure firmware update using RSA cryptography, and cloning protection, via a device-unique secret key




Advanced Virtualization ConnectX-6 Dx enables building highly efficient virtualized cloud data centers:

• Virtualization—ASAP2 delivers virtual switch (vSwitch) and virtual router (vRouter) hardware offloads at orders-of-magnitude higher performance than software-based solutions. ConnectX-6 Dx ASAP2 offers both SR-IOV and VirtIO in-hardware offload capabilities and supports up to 8 million rules.

• Advanced quality of service (QoS)—ConnectX-6 Dx includes traffic shaping and classification-based data policing




Industry-Leading RoCE With industry-leading capabilities, ConnectX-6 Dx delivers more scalable, resilient, and easy-to-deploy remote direct-memory access over converged Ethernet (RoCE) solutions.

• Zero Touch RoCE (ZTR)—Simplifying RoCE deployments, ConnectX-6 Dx with ZTR allows RoCE payloads to run seamlessly on existing networks without special configuration, either to priority flow control (PFC) or explicit congestion notification (ECN). ConnectX-6 Dx ensures the resilience, efficiency, and scalability of deployments.

• Programmable congestion control—ConnectX-6 Dx includes an API for building user-defined congestion control algorithms for various environments running RoCE and background TCP/IP traffic concurrently.




Best-In-Class PTP For Time-Sensitive Applications NVIDIA offers a full IEEE 1588v2 Precision Time Protocol (PTP) software solution as well as time-sensitive-related features called 5T for 5G. NVIDIA PTP and 5T for 5G software solutions are designed to meet the most demanding PTP profiles. ConnectX-6 Dx incorporates an integrated PTP hardware clock (PHC) that allows the device to achieve sub-20 nanosecond (nsec) accuracy while offering timing-related functions, including time-triggered scheduling or time-based, software-defined networking (SDN) accelerations (time-based ASAP²). 5T for 5G technology also enables software applications to transmit front-haul radio access network (RAN)-compatible data in high bandwidth. The PTP solution supports slave clock, master clock, and boundary clock operations.

ConnectX-6 Dx also supports SyncE, allowing selected ConnectX-6 Dx SmartNICs to provide PPS-Out or PPS-In signals from designated SMA connectors.
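
As a hedged illustration of how the PTP features above are typically consumed on Linux (the interface name is a placeholder and profile options vary by deployment), the linuxptp tools discipline the NIC's PTP hardware clock and then the system clock:

```python
# Hypothetical sketch: ptp4l synchronizes the PHC of the NIC from the network,
# phc2sys then steers CLOCK_REALTIME from that PHC.
import subprocess

DEV = "enp3s0f0"   # placeholder PTP-capable netdev

# IEEE 1588 over UDP/IPv4 (-4), messages printed to stdout (-m).
ptp4l = subprocess.Popen(["ptp4l", "-i", DEV, "-4", "-m"])

# -s selects the source clock (the netdev's PHC); -w waits until ptp4l has
# synchronized before adjusting the system clock; -m prints offset statistics.
phc2sys = subprocess.Popen(["phc2sys", "-s", DEV, "-w", "-m"])

ptp4l.wait()
phc2sys.wait()
```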




Efficient Storage Solutions With its NVMe-oF target and initiator offloads, ConnectX-6 Dx brings further optimization, enhancing CPU utilization and scalability. Additionally, ConnectX-6 Dx supports hardware offload for ingress and egress of T10-DIF/PI/CRC32/CRC64 signatures and AES-XTS encryption and decryption offloads, enabling user-based key management and a one-time Federal Information Processing Standards (FIPS) certification approach.




NIC Portfolio ConnectX-6 Dx SmartNICs are available in several form factors including low-profile PCIe, OCP2.0 and OCP3.0 cards, with various network connector types (SFP28/56, QSFP28/56, or DSFP). The ConnectX-6 Dx portfolio also provides options for NVIDIA Multi-Host® and NVIDIA Socket Direct® configurations.

ConnectX-6 Dx adds significant improvements to NVIDIA Multi-Host applications by offering advanced QoS features that ensure complete isolation among the multiple hosts connected to the NIC, and by achieving superior fairness among the hosts.


Host Interface
• PCIe Gen 4.0, 3.0, 2.0, 1.1
• 16.0, 8.0, 5.0, 2.5 GT/s link rate
• 16 lanes of PCIe
• MSI/MSI-X mechanisms
• Advanced PCIe capabilities

Remote Boot
• Remote boot over Ethernet
• Remote boot over iSCSI
• UEFI support for x86 and Arm servers
• PXE boot

  Management and Control
• NC-SI, MCTP over SMBus and MCTP over PCIe - Baseboard Management Controller interface, NCSI over RBT in OCP 2.0/3.0 cards
• PLDM for Monitor and Control DSP0248
• PLDM for Firmware Update DSP0267
• I2C interface for device control and configuration

Storage Offloads
• Block-level encryption: XTS-AES 256/512-bit key
• NVMe over Fabric offloads for target machine
• T10 DIF - signature handover operation at wire speed, for ingress and egress traffic
• Storage protocols: SRP, iSER, NFS RDMA, SMB Direct, NVMe-oF

  Virtualization/Cloud Native
• Single Root IOV (SR-IOV) and VirtIO acceleration
-Up to 1K virtual functions per port
-8 physical functions
• Support for tunneling
-Encap/decap of VXLAN, NVGRE, Geneve, and more
-Stateless offloads for overlay tunnels

Stateless Offloads
• TCP/UDP/IP stateless offload
• LSO, LRO, checksum offload
• Receive side scaling (RSS) also on encapsulated packets
• Transmit side scaling (TSS)
• VLAN and MPLS tag insertion/stripping
• Receive flow steering

  Advanced Timing & Synchronization
• Advanced PTP
-IEEE 1588v2 (any profile)
-PTP hardware clock (PHC) (UTC format)
-20nsec accuracy
-Line rate hardware timestamp (UTC format)
-PPS In and configurable PPS Out
• Time triggered scheduling
• PTP based packet pacing
• Time based SDN acceleration (ASAP2)
• Time sensitive networking (TSN)

RDMA over Converged Ethernet (RoCE)
• RoCE v1/v2
• Zero Touch RoCE: no ECN, no PFC
• RoCE over overlay networks
• IPsec over RoCE
• Selective repeat
• Programmable congestion control interface
• GPUDirect
• Dynamically connected transport (DCT)
• Burst buffer offload

  NVIDIA ASAP2
• SDN acceleration for:
-Bare metal
-Virtualization
-Containers
• Full hardware offload for OVS data plane
• Flow update through RTE_Flow or TC_Flower
• OpenStack support
• Kubernetes support
• Rich classification engine (L2 to L4)
• Flex-parser: user defined classification
• Hardware offload for:
-Connection tracking (L4 firewall)
-NAT
-Header rewrite
-Mirroring
-Sampling
-Flow aging
-Hierarchical QoS
-Flow-based statistics

Cyber Security
• Inline hardware IPsec encryption and decryption
-AES-GCM 128/256-bit key
-IPsec over RoCE
• Inline hardware TLS encryption and decryption
-AES-GCM 128/256-bit key
• Data-at-rest AES-XTS encryption and decryption
-AES-XTS 256/512-bit key
• Platform security
-Hardware root-of-trust
-Secure firmware update

 
Full specification and details can be found in the Product Datasheet PDF file

In Stock: 2-3 Weeks
£510.92
£613.10 Inc Vat
Mellanox MCX623432AC-ADAB CONNECTX-6 DX EN Adapter Card 25GBE
With Host management Dual-Port SFP28 PCIE 4.0 X16 Crypto and Secure Boot Thumbscrew (Pulltab) Bracket

NVIDIA ConnectX-6 DX

ETHERNET SMARTNIC

Mellanox® ConnectX®-6 Dx SmartNIC is the industry’s most secure and advanced cloud network interface card to accelerate mission-critical data-center applications, such as security, virtualization, SDN/NFV, big data, machine learning, and storage. The SmartNIC provides up to two ports of 100 Gb/s or a single-port of 200 Gb/s Ethernet connectivity and delivers the highest return on investment (ROI) of any smart network interface card.

ConnectX-6 Dx is a member of NVIDIA Mellanox’s world-class, award-winning ConnectX series of network adapters powered by leading 50 Gb/s (PAM4) and 25/10 Gb/s (NRZ) SerDes technology and novel capabilities that accelerate cloud and data-center payloads.




Key Features • Up to 200 Gb/s bandwidth

• Message rate of up to 215 Mpps

• Sub 0.8 usec latency

• Flexible programmable pipeline for new network flows

• Mellanox Multi-Host with advanced QoS

• ASAP2 - Accelerated Switching and Packet Processing for virtual switches/routers

• Overlay tunneling technologies

• IPsec and TLS in-line crypto acceleration

• Block crypto acceleration for data-at-rest

• Hardware Root-of-Trust and secure firmware update

• Connection Tracking offload

• Advanced RoCE capabilities

• Best in class PTP for TSN applications

• GPUDirect® for GPU-to-GPU communication

• Host chaining technology for economical rack design

• Platform agnostic: x86, Power, Arm

• ODCC compatible




Zero-Trust Security In an era where privacy of information is key and zero trust is the rule, ConnectX-6 Dx adapters offer a range of advanced built-in capabilities that bring security down to the endpoints with unprecedented performance and scalability, including:

• Crypto – IPsec and TLS data-in-motion inline encryption and decryption offload, and AES-XTS block-level data-at-rest encryption and decryption offload.

• Probes & DoS Attack Protection – ConnectX-6 Dx enables a hardware-based L4 firewall by offloading stateful connection tracking through Mellanox ASAP2 - Accelerated Switch and Packet Processing®.

• NIC Security – Hardware Root-of-Trust (RoT) Secure Boot and secure firmware update using RSA cryptography, and cloning-protection, via a device-unique secret key.




Advanced Virtualization ConnectX-6 Dx delivers another level of innovation to enable building highly efficient virtualized cloud data centers:

• Virtualization – Mellanox ASAP2 technology for vSwitch/vRouter hardware offload delivers orders of magnitude higher performance vs. software-based solutions. ConnectX-6 Dx ASAP2 offers both SR-IOV and VirtIO in-hardware offload capabilities, and supports up to 8 million rules.

• Advanced Quality of Service – Includes traffic shaping and classification-based data policing.




Industry-Leading RoCE Following the Mellanox ConnectX tradition of industry-leading RoCE capabilities, ConnectX-6 Dx adds another layer of innovation to enable more scalable, resilient and easy-to-deploy RoCE solutions.

• Zero Touch RoCE – Simplifying RoCE deployments, ConnectX-6 Dx allows RoCE payloads to run seamlessly on existing networks without requiring special configuration on the network (no PFC, no ECN). New features in ConnectX-6 Dx ensure resiliency and efficiency at scale of such deployments.

• Configurable Congestion Control – API to build user-defined congestion control algorithms, best serving various environments and RoCE and TCP/IP traffic patterns.




Best-In-Class PTP For Time-Sensitive Applications Mellanox offers a full IEEE 1588v2 PTP software solution as well as time-sensitive-related features called 5T for 5G. Mellanox PTP and 5T for 5G software solutions are designed to meet the most demanding PTP profiles. ConnectX-6 Dx incorporates an integrated PTP Hardware Clock (PHC) that allows the device to achieve sub-20 nsec accuracy while offering various timing-related functions, including time-triggered scheduling or time-based SDN accelerations (time-based ASAP²). Furthermore, 5T for 5G technology enables software applications to transmit front-haul (O-RAN)-compatible data in high bandwidth. The PTP solution supports slave clock, master clock, and boundary clock.

Selected ConnectX-6 Dx SmartNICs provide PPS-Out or PPS-In signals from designated SMA connectors.




Efficient Storage Solutions With its NVMe-oF target and initiator offloads, ConnectX-6 Dx brings further optimization to NVMe-oF, enhancing CPU utilization and scalability. Additionally, ConnectX-6 Dx supports hardware offload for ingress/egress of T10-DIF/PI/CRC32/CRC64 signatures, as well as AES-XTS encryption/decryption offload enabling user-based key management and a one-time-FIPS-certification approach.




Wide Selection of NICs ConnectX-6 Dx SmartNICs are available in several form factors including low-profile PCIe, OCP2.0 and OCP3.0 cards, with various network connector types (SFP28/56, QSFP28/56, or DSFP). The ConnectX-6 Dx portfolio also provides options for Mellanox Multi-Host® and Mellanox Socket Direct® configurations.

Mellanox Multi-Host® connects multiple compute or storage hosts to a single interconnect adapter and enables designing and building new scale-out compute and storage racks. This enables better power and performance management, while reducing capital and operational expenses.

Mellanox Socket Direct® technology brings improved performance to multi-socket servers, by enabling each CPU in a multi-socket server to directly connect to the network through its dedicated PCIe interface. This enables data to bypass the QPI (UPI) and the other CPU, improving latency, performance and CPU utilization.


Storage Accelerations
• NVMe over Fabric offloads for target
• Storage protocols: iSER, NFSoRDMA, SMB Direct, NVMe-oF, and more
• T-10 Dif/Signature Handover

Remote Boot
• Remote boot over Ethernet
• Remote boot over iSCSI
• UEFI support for x86 and Arm servers
• PXE boot

  Management and Control
• NC-SI, MCTP over SMBus and MCTP over PCIe - Baseboard Management Controller interface, NCSI over RBT in OCP 2.0/3.0 cards
• PLDM for Monitor and Control DSP0248
• PLDM for Firmware Update DSP0267
• I2C interface for device control and configuration

Host Interface
• PCIe Gen 4.0, 3.0, 2.0, 1.1
• 16.0, 8.0, 5.0, 2.5 GT/s link rate
• 16 lanes of PCIe
• MSI/MSI-X mechanisms
• Advanced PCIe capabilities

  Virtualization/Cloud Native
• Single Root IOV (SR-IOV) and VirtIO acceleration
-Up to 1K virtual functions per port
-8 PFs
• Support for tunneling
-Encap/decap of VXLAN, NVGRE, Geneve, and more
-Stateless offloads for overlay tunnels

Stateless Offloads
• TCP/UDP/IP stateless offload
• LSO, LRO, checksum offload
• Receive side scaling (RSS) also on encapsulated packets
• Transmit side scaling (TSS)
• VLAN and MPLS tag insertion/stripping
• Receive flow steering

  Advanced Timing & Synchronization
• Advanced PTP
-IEEE 1588v2 (any profile)
-PTP hardware clock (PHC) (UTC format)
-16 nsec accuracy
-Line rate hardware timestamp (UTC format)
-PPS In and configurable PPS Out
• Time triggered scheduling
• PTP based packet pacing
• Time based SDN acceleration (ASAP2)
• Time sensitive networking (TSN)

RDMA over Converged Ethernet (RoCE)
• RoCE v1/v2
• Zero Touch RoCE: no ECN, no PFC
• RoCE over overlay networks
• IPsec over RoCE
• Selective repeat
• Programmable congestion control interface
• GPUDirect
• Dynamically connected transport (DCT)
• Burst buffer offload

  Mellanox ASAP2
• SDN acceleration for:
-Bare metal
-Virtualization
-Containers
• Full hardware offload for OVS data plane
• Flow update through RTE_Flow or TC_Flower
• OpenStack support
• Kubernetes support
• Rich classification engine (L2 to L4)
• Flex-parser: user defined classification
• Hardware offload for:
-Connection tracking (L4 firewall)
-NAT
-Header rewrite
-Mirroring
-Sampling
-Flow aging
-Hierarchical QoS
-Flow-based statistics

Cyber Security
• Inline hardware IPsec encryption and decryption
-AES-GCM 128/256-bit key
-IPsec over RoCE
• Inline hardware TLS encryption and decryption
-AES-GCM 128/256-bit key
• Data-at-rest AES-XTS encryption and decryption
-AES-XTS 256/512-bit key
• Platform security
-Hardware root-of-trust
-Secure firmware update

 
Full specification and details can be found in the Product Datasheet PDF file

In Stock: 2-3 Weeks
£ POA
£ POA Inc Vat
S Single-Port
With Tall Bracket

MELLANOX CONNECTX-6 DX

Featuring In-Network Computing for Enhanced Efficiency and Scalability

Complex workloads demand ultra-fast processing of high-resolution simulations, extreme-size datasets, and highly parallelized algorithms. As these computing requirements continue to grow, NVIDIA Quantum InfiniBand—the world’s only fully offloadable, In-Network Computing acceleration technology—provides the dramatic leap in performance needed to achieve unmatched results in high performance computing (HPC), AI, and hyperscale cloud infrastructures—with less cost and complexity.




Key Applications • Industry-leading throughput, low CPU utilization, and high message rate

• High performance and intelligent fabric for compute and storage infrastructures

• Cutting-edge performance in virtualized networks, including network function virtualization (NFV)

• Host chaining technology for economical rack design

• Smart interconnect for x86, Power, Arm, GPU, and FPGA-based compute and storage platforms

• Flexible programmable pipeline for new network flows

• Efficient service chaining enablement

• Increased I/O consolidation, reducing data center costs and complexity


NVIDIA® ConnectX®-6 InfiniBand smart adapter cards are a key element in the NVIDIA Quantum InfiniBand platform. ConnectX-6 provides up to two ports of 200Gb/s InfiniBand and Ethernet connectivity with extremely low latency, high message rate, smart offloads, and NVIDIA In-Network Computing acceleration that improve performance and scalability.




Key Features • Up to 200Gb/s connectivity per port

• Max bandwidth of 200Gb/s

• Up to 215 million messages/sec

• Extremely low latency

• Block-level XTS-AES mode hardware encryption

• Federal Information Processing Standards (FIPS) compliant

• Supports both 50G SerDes (PAM4)- and 25G SerDes (NRZ)-based ports

• Best-in-class packet pacing with sub-nanosecond accuracy

• PCIe Gen 3.0 and Gen 4.0 support

• In-Network Compute acceleration engines

• RoHS compliant

• Open Data Center Committee (ODCC) compatible




High Performance Computing Environments With its NVIDIA In-Network Computing and In-Network Memory capabilities, ConnectX-6 offloads computation even further to the network, saving CPU cycles and increasing network efficiency. ConnectX-6 utilizes remote direct memory access (RDMA) technology as defined in the InfiniBand Trade Association (IBTA) specification, delivering low latency, and high performance. ConnectX-6 enhances RDMA network capabilities even further by delivering end-to-end packet-level flow control.




Machine Learning and Big Data Environments Data analytics has become an essential function within many enterprise data centers, clouds, and hyperscale platforms. Machine learning (ML) relies on especially high throughput and low latency to train deep neural networks and improve recognition and classification accuracy. With its 200Gb/s throughput, ConnectX-6 is an excellent solution to provide ML applications with the levels of performance and scalability that they require.




Security Including Block-Level Encryption ConnectX-6 block-level encryption offers a critical innovation to network security. As data in transit is stored or retrieved, it undergoes encryption and decryption. ConnectX-6 hardware offloads the IEEE AES-XTS encryption/decryption from the CPU, saving latency and CPU utilization. It also guarantees protection for users sharing the same resources through the use of dedicated encryption keys.

By performing block storage encryption in the adapter, ConnectX-6 eliminates the need for self-encrypted disks. This gives customers the freedom to choose their preferred storage device, including byte addressable and NVDIMM devices that traditionally do not provide encryption. Moreover, ConnectX-6 can offer Federal Information Processing Standards (FIPS) compliance.
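
As a point of reference, the XTS-AES operation that ConnectX-6 performs in hardware can be reproduced in software. The short Python sketch below (it assumes the 'cryptography' package; the key, tweak, and block values are illustrative, not part of this product description) encrypts and decrypts one 4 KiB block with a 512-bit XTS key, the larger of the two key sizes listed above.

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(64)                  # 512-bit XTS key (two 256-bit AES keys)
tweak = (42).to_bytes(16, "little")   # tweak normally derived from the block/LBA number
block = os.urandom(4096)              # one 4 KiB storage block

encryptor = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
ciphertext = encryptor.update(block) + encryptor.finalize()

decryptor = Cipher(algorithms.AES(key), modes.XTS(tweak)).decryptor()
assert decryptor.update(ciphertext) + decryptor.finalize() == block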




Bring NVMe-oF to Storage Environments NVMe storage devices are gaining momentum, offering very fast access to storage media. The evolving NVMe over Fabrics (NVMe-oF) protocol leverages RDMA connectivity to remotely access NVMe storage devices efficiently, while keeping the end-to-end NVMe model at lowest latency. With its NVMe-oF target and initiator offloads, ConnectX-6 brings further optimization to NVMe-oF, enhancing CPU utilization and scalability.
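
For orientation, the sketch below shows how a Linux host would typically connect to an NVMe-oF target over an RDMA transport using nvme-cli; the adapter's NVMe-oF offloads then accelerate the resulting data path. The address, port, and NQN are placeholders, not values associated with this product.

import subprocess

subprocess.run(
    ["nvme", "connect",
     "--transport=rdma",                       # RoCE / InfiniBand transport
     "--traddr=192.0.2.10",                    # target address (placeholder)
     "--trsvcid=4420",                         # conventional NVMe-oF RDMA port
     "--nqn=nqn.2014-08.org.example:target1"], # target NQN (placeholder)
    check=True,
)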




Portfolio of Smart Adapters ConnectX-6 is available in two form factors: low-profile stand-up PCIe and Open Compute Project (OCP) Spec 3.0 cards with QSFP connectors. Single-port, HDR, stand-up PCIe adapters are available based on either ConnectX-6 or ConnectX-6 DE (ConnectX-6 Dx enhanced for HPC applications).

In addition, specific PCIe stand-up cards are available with a cold plate for insertion into liquid-cooled Intel Server System D50TNP platforms.




Socket Direct ConnectX-6 also provides options for NVIDIA Socket Direct™ configurations, which improves the performance of multi-socket servers by enabling each of their CPUs to access the network through its dedicated PCIe interface. This enables data to bypass the QPI (UPI) and the other CPU, improving latency, performance, and CPU utilization.

Socket Direct also enables NVIDIA GPUDirect® RDMA for all CPU/GPU pairs by ensuring that GPUs are linked to the CPUs closest to the adapter card. Socket Direct enables Intel® DDIO optimization on both sockets by creating a direct connection between the sockets and the adapter card.

Socket Direct technology is enabled by a main card housing the ConnectX-6 and an auxiliary PCIe card bringing in the remaining PCIe lanes. The ConnectX-6 Socket Direct card is installed into two PCIe x16 slots and connected using a harness. The two PCIe x16 slots may also be connected to the same CPU. In this case, the main advantage of the technology lies in delivering 200Gb/s to servers with PCIe Gen3-only support.
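
A quick way to see which CPU socket owns the PCIe path to each adapter, the locality that Socket Direct is designed to optimize, is to read the NUMA node exposed by Linux sysfs. A small sketch, assuming a Linux host with the standard sysfs layout:

from pathlib import Path

# Print the NUMA node behind every network interface that sits on PCIe.
for dev in sorted(Path("/sys/class/net").iterdir()):
    numa = dev / "device" / "numa_node"
    if numa.exists():
        print(f"{dev.name}: NUMA node {numa.read_text().strip()}")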




Host Management Host Management includes NC-SI over MCTP over SMBus, and MCTP over PCIe— Baseboard Management Controller (BMC) interface, as well as PLDM for Monitor and Control DSP0248 and PLDM for Firmware Update DSP0267.




Broad Software Support All ConnectX adapters are supported by a full suite of drivers for major Linux distributions, as well as Microsoft® Windows® Server and VMware vSphere®.

HPC software libraries supported include HPC-X, OpenMPI, MVAPICH, MPICH, OpenSHMEM, PGAS, and varied commercial packages.
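
As a minimal illustration of the HPC software stack listed above, the sketch below exchanges a message between two MPI ranks using mpi4py (mpi4py itself is an assumption here; it simply drives whichever MPI library, such as HPC-X or OpenMPI, is installed). Run with, for example, mpirun -np 2 python ping.py.

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    comm.send("ping", dest=1, tag=0)   # message travels over the installed MPI transport
    print("rank 0 sent ping")
elif rank == 1:
    msg = comm.recv(source=0, tag=0)
    print(f"rank 1 received {msg}")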


Overlay Networks
• RoCE over overlay networks
• Stateless offloads for overlay network tunneling protocols
• Hardware offload of encapsulation and decapsulation of VXLAN, NVGRE, and Geneve overlay networks
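
The encapsulation that this offload adds and strips sits on every packet; for illustration, the sketch below packs the 8-byte VXLAN header defined by RFC 7348 in software (the VNI value is an arbitrary example).

import struct

def vxlan_header(vni: int) -> bytes:
    """8-byte VXLAN header: flags (I bit set), 24-bit reserved, 24-bit VNI, 8-bit reserved."""
    flags = 0x08                                   # "I" flag: VNI field is valid
    return struct.pack("!B3xI", flags, vni << 8)   # VNI carried in the upper 24 bits

header = vxlan_header(5001)                        # example VNI
assert len(header) == 8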

Storage Offloads
• Block-level encryption: XTS-AES 256/512-bit key
• NVMe over Fabrics offloads for target machine
• T10-DIF signature handover operation at wire speed, for ingress and egress traffic
• Storage protocols: SRP, iSER, NFS RDMA, SMB Direct, and NVMe-oF

  InfiniBand
• 200Gb/s and lower rates
• IBTA Specification 1.3 compliant
• RDMA, send/receive semantics
• Hardware-based congestion control
• Atomic operations
• 16 million I/O channels
• 256 to 4Kbyte MTU, 2Gbyte messages
• 8 virtual lanes + VL15

Remote Boot
• Remote boot over InfiniBand
• Remote boot over Ethernet
• Remote boot over iSCSI
• Unified Extensible Firmware Interface (UEFI)
• Pre-execution Environment (PXE)

  Hardware-Based I/O Virtualization
• Single Root IOV (SR-IOV)
• Address translation and protection
• VMware NetQueue support
-SR-IOV: Up to 1K virtual functions
-SR-IOV: Up to 8 physical functions per host
• Virtualization hierarchies (e.g., NPAR)
-Virtualizing physical functions on a physical port
-SR-IOV on every physical function
• Configurable and user-programmable QoS
• Guaranteed QoS for VMs

Management and Control
• NC-SI, MCTP over SMBus and MCTP over PCIe—Baseboard Management Controller interface
• PLDM for Monitor and Control DSP0248
• PLDM for Firmware Update DSP0267
• SDN management interface for managing the eSwitch
• I2C interface for device control and configuration
• General Purpose I/O pins
• SPI interface to flash
• JTAG IEEE 1149.1 and IEEE 1149.6

  Enhanced Features
• Hardware-based reliable transport
• Collective operations offloads
• Vector collective operations offloads
• NVIDIA PeerDirect® RDMA (a.k.a. NVIDIA GPUDirect) communication acceleration
• 64/66 encoding
• Enhanced atomic operations
• Advanced memory mapping support, allowing user mode registration and remapping of memory (UMR)
• Extended Reliable Connected transport (XRC)
• Dynamically Connected Transport (DCT)
• On demand paging (ODP)
• MPI tag matching
• Rendezvous protocol offload
• Out-of-order RDMA supporting Adaptive Routing
• Burst buffer offload
• In-Network Memory registration-free RDMA memory access

CPU Offloads
• RDMA over Converged Ethernet (RoCE)
• TCP/UDP/IP stateless offload
• LSO, LRO, checksum offload
• RSS (also on encapsulated packet), TSS, HDS, VLAN and MPLS tag insertion/stripping, receive flow steering
• Data plane development kit (DPDK) for kernel bypass applications
• Open vSwitch (OVS) offload using ASAP2
• Flexible match-action flow tables
• Tunneling encapsulation/decapsulation
• Intelligent interrupt coalescence
• Header rewrite supporting hardware offload of NAT router

 
Full specification and details can be found in the Product Datasheet PDF file

In Stock: 2-3 Weeks
£ POA
£ POA Inc Vat
Mellanox MCX623102AN-GDAT CONNECTX-6 DX EN Adapter Card 50GBE Dual-Port
No Crypto Tall Bracket

NVIDIA ConnectX-6 DX Ethernet SmartNIC

Advanced Networking and Security for the Most Demanding Cloud and Data Center Workloads

ConnectX-6 Dx SmartNIC is the industry's most secure and advanced cloud network interface card to accelerate mission-critical data-center applications, such as security, virtualization, SDN/NFV, big data, machine learning, and storage. The SmartNIC delivers the highest return on investment (ROI) of any smart network interface card.

ConnectX-6 Dx is a member of NVIDIA's world-class, award-winning ConnectX series of network adapters powered by leading 50 Gb/s (PAM4) and 25/10 Gb/s (NRZ) SerDes technology and novel capabilities that accelerate cloud and data-center payloads.




Key Features: • Up to 50 Gb/s bandwidth

• Message rate of up to 215 Mpps

• Sub 0.8 usec latency

• Flexible programmable pipeline for new network flows

• Multi-Host with advanced QoS

• ASAP2 - Accelerated Switching and Packet Processing for virtual switches/routers

• Overlay tunneling technologies

• IPsec and TLS in-line crypto acceleration

• Block crypto acceleration for data-at-rest

• Hardware Root-of-Trust and secure firmware update

• Connection Tracking offload

• Advanced RoCE capabilities

• Best in class PTP for TSN applications

• GPUDirect for GPU-to-GPU communication

• Host chaining technology for economical rack design

• Platform agnostic: x86, Power, Arm

• ODCC compatible


ConnectX®-6 Dx is a highly secure and advanced smart network interface card (SmartNIC) that accelerates mission-critical cloud and data center applications, including security, virtualization, SDN/NFV, big data, machine learning, and storage. ConnectX-6 Dx provides up to two ports of 100Gb/s or a single port of 200Gb/s Ethernet connectivity and is powered by 50Gb/s (PAM4) or 25/10 Gb/s (NRZ) SerDes technology.

ConnectX-6 Dx features virtual switch (vSwitch) and virtual router (vRouter) hardware accelerations delivering orders-of-magnitude higher performance than software-based solutions. ConnectX-6 Dx supports a choice of single-root I/O virtualization (SR-IOV) and VirtIO in hardware, enabling customers to best address their application needs. By offloading cloud networking workloads, ConnectX-6 Dx frees up CPU cores for business applications while reducing total cost-of-ownership.




SmartNIC Portfolio • 1/10/25/40/50 Gb/s Ethernet, PAM4/NRZ

• Various form factors:

-PCIe low-profile

-OCP 3.0 Small Form Factor (SFF)

-OCP 2.0

• Connectivity options:

-SFP28 and SFP56

• PCIe Gen 4.0 x16 host interface

• Multi-host and single-host flavors

• Crypto and non-crypto versions


Host Interface
• 16 lanes of PCIe Gen4, compatible with PCIe Gen2/Gen3
• Integrated PCI switch
• NVIDIA Multi-Host and NVIDIA Socket Direct

Remote Boot
• Remote boot over Ethernet
• Remote boot over iSCSI
• UEFI and PXE support for x86 and Arm servers

  Virtualization/Cloud Native
• SR-IOV and VirtIO acceleration
-Up to 1K virtual functions per port
-8 physical functions
• Support for tunneling
-Encap/decap of VXLAN, NVGRE, Geneve, and more
-Stateless offloads for overlay tunnels

Storage Offloads
• Block-level encryption: XTS-AES 256/512-bit key
• NVMe over Fabrics offloads for target machine
• T10 DIF signature handover operation at wire speed, for ingress and egress traffic
• Storage protocols: SRP, iSER, NFS RDMA, SMB Direct, NVMe-oF

  Stateless Offloads
• TCP/UDP/IP stateless offload
• LSO, LRO, checksum offload
• Receive side scaling (RSS) also on encapsulated packet
• Transmit side scaling (TSS)
• VLAN and MPLS tag insertion/stripping
• Receive flow steering

RDMA over Converged Ethernet (RoCE)
• RoCE v1/v2
• Zero-touch RoCE: no ECN, no PFC
• RoCE over overlay networks
• Selective repeat
• Programmable congestion control interface
• GPUDirect®

  Cybersecurity
• Inline hardware IPsec encryption and decryption (AES-GCM round trip sketched after this list)
-AES-GCM 128/256-bit key
-RoCE over IPsec
• Inline hardware TLS encryption and decryption
-AES-GCM 128/256-bit key
• Data-at-rest AES-XTS encryption and decryption
-AES-XTS 256/512-bit key
• Platform security
-Hardware root-of-trust
-Secure firmware update
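
The IPsec and TLS entries above rely on AES-GCM performed in-line on packets. As a software point of reference (using the Python 'cryptography' package, with illustrative key, nonce, and header values), the same authenticated encryption looks like this:

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 128- or 256-bit keys, per the list above
nonce = os.urandom(12)                      # 96-bit nonce, as used by IPsec/TLS records
headers = b"associated data kept in the clear"

ciphertext = AESGCM(key).encrypt(nonce, b"packet payload", headers)
assert AESGCM(key).decrypt(nonce, ciphertext, headers) == b"packet payload"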

Management and Control
• NC-SI, MCTP over SMBus and MCTP over PCIe—Baseboard Management Controller interface, NCSI over RBT in Open Compute Project (OCP) 2.0/3.0 cards
• PLDM for Monitor and Control DSP0248
• PLDM for Firmware Update DSP0267
• I2C interface for device control and configuration

  Accelerated Switching & Packet Processing
• SDN acceleration for:
-Bare metal
-Virtualization
-Containers
• Full hardware offload for OVS data plane
• Flow update through RTE_Flow or TC_Flower
• Flex-parser: user-defined classification
• Hardware offload for:
-Connection tracking (Layer 4 firewall)
-NAT
-Header rewrite
-Mirroring
-Sampling
-Flow aging
-Hierarchical QoS
-Flow-based statistics

Advanced Timing and Synchronization
• Advanced PTP (offset/delay arithmetic sketched after this list)
-IEEE 1588v2 (any profile)
-PTP hardware clock (PHC) (UTC format)
-Nanosecond-level accuracy
-Line rate hardware timestamp (UTC format)
-PPS in and configurable PPS out
• Time-triggered scheduling
• PTP-based packet pacing
• Time-based SDN acceleration (ASAP2)
• Time-sensitive networking (TSN)
• Dedicated precision timing card option
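
Behind the IEEE 1588v2 support is the standard two-way exchange of four timestamps (t1: master send, t2: slave receive, t3: slave send, t4: master receive); hardware timestamping is what makes t2 and t3 accurate to nanoseconds. A small sketch of that arithmetic, with an illustrative example:

def ptp_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    """IEEE 1588 offset and mean path delay from the four PTP timestamps."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, mean_path_delay

# Slave clock 1.5 us ahead of the master, 4 us one-way path delay.
print(ptp_offset_and_delay(0.0, 5.5e-6, 20.0e-6, 22.5e-6))   # roughly (1.5e-06, 4e-06)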

 
Full specification and details can be found in the Product Datasheet PDF file

In Stock: 2-3 Weeks
£779.13
£934.96 Inc Vat
Mellanox MCX555A-ECAT CONNECTX-5 VPI Adapter Card EDR IB & 100GBE
Single-Port with Tall Bracket ROHS R6

ConnectX-5 InfiniBand Adapter Card

100Gb/s InfiniBand & Ethernet (VPI) Adapter Card

ConnectX-5 network adapter cards with Virtual Protocol Interconnect (VPI), supporting EDR InfiniBand and 100GbE connectivity, provide the highest performance and most flexible solution for high-performance, Web 2.0, Cloud, data analytics, database, and storage platforms.




Benefits: • Up to 100Gb/s connectivity per port

• Industry-leading throughput, low latency, low CPU utilization and high message rate

• Innovative rack design for storage and Machine Learning based on Host Chaining technology

• Smart interconnect for x86, Power, Arm, and GPU-based compute & storage platforms

• Advanced storage capabilities including NVMe over Fabric offloads

• Intelligent network adapter supporting flexible pipeline programmability

• Cutting-edge performance in virtualized networks including Network Function Virtualization (NFV)

• Enabler for efficient service chaining capabilities

• Efficient I/O consolidation, lowering data center costs and complexity


With the exponential growth of data being shared and stored by applications and social networks, the need for high-speed and high performance compute and storage data centers is skyrocketing.

ConnectX-5 provides exceptional high performance for the most demanding data centers, public and private clouds, Web2.0 and Big Data applications, as well as High-Performance Computing (HPC) and Storage systems, enabling today's corporations to meet the demands of the data explosion.




Features: • Tag matching and rendezvous offloads

• Adaptive routing on reliable transport

• Burst buffer offloads for background checkpointing

• NVMe over Fabric (NVMe-oF) offloads

• Back-end switch elimination by host chaining

• Embedded PCIe switch

• Enhanced vSwitch/vRouter offloads

• Flexible pipeline

• RoCE for overlay networks

• PCIe Gen 4.0 support

• RoHS compliant

• ODCC compatible




HPC Environments ConnectX-5 offers enhancements to HPC infrastructures by providing MPI, SHMEM/PGAS, and rendezvous tag matching offloads, hardware support for out-of-order RDMA write and read operations, as well as additional network atomic and PCIe atomic operations support.

ConnectX-5 enhances RDMA network capabilities by completing the switch adaptive-routing capabilities and supporting data delivered out-of-order, while maintaining ordered completion semantics, providing multipath reliability, and efficient support for all network topologies, including DragonFly and DragonFly+.

ConnectX-5 also supports burst buffer offload for background checkpointing without interfering in the main CPU operations, and the innovative dynamic connected transport (DCT) service to ensure extreme scalability for compute and storage systems.




Storage Environments NVMe storage devices are gaining popularity, offering very fast storage access. The NVMe over Fabrics (NVMe-oF) protocol leverages RDMA connectivity for remote access. ConnectX-5 offers further enhancements by providing NVMe-oF target offloads, enabling highly efficient NVMe storage access with no CPU intervention, and thus improved performance and lower latency.

Standard block and file access protocols can leverage RDMA for high-performance storage access. A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks.




Adapter Card Portfolio ConnectX-5 InfiniBand adapter cards are available in several form factors, including low-profile stand-up PCIe, Open Compute Project (OCP) Spec 2.0 Type 1, and OCP 2.0 Type 2.

NVIDIA Multi-Host technology allows multiple hosts to be connected into a single adapter by separating the PCIe interface into multiple and independent interfaces.

The portfolio also offers NVIDIA Socket Direct configurations that enable servers without x16 PCIe slots to split the card’s 16-lane PCIe bus into two 8-lane buses on dedicated cards connected by a harness. This provides 100Gb/s port speed even to servers without a x16 PCIe slot.

Socket Direct also enables NVIDIA GPUDirect® RDMA for all CPU/GPU pairs by ensuring that all GPUs are linked to CPUs close to the adapter card, and enables Intel® DDIO on both sockets by creating a direct connection between the sockets and the adapter card.


Ethernet
• 100GbE / 50GbE / 40GbE / 25GbE / 10GbE / 1GbE
• Jumbo frame support (9.6KB)

HPC Software Libraries
• NVIDIA HPC-X, OpenMPI, IBM PE, OSU MPI (MVAPICH/2), Intel MPI
• Platform MPI, UPC, Open SHMEM

  InfiniBand
• 100Gb/s and lower speed
• IBTA Specification 1.3 compliant
• RDMA, send/receive semantics
• Hardware-based congestion control
• Atomic operations
• 16 million I/O channels
• 256 to 4Kbyte MTU, 2Gbyte messages
• 8 virtual lanes + VL15

Remote Boot
• Remote boot over InfiniBand
• Remote boot over Ethernet
• Remote boot over iSCSI
• Unified extensible firmware Interface (UEFI)
• Pre-execution environment (PXE)

  Management and Control
• NC-SI over MCTP over SMBus and NC-SI over MCTP over PCIe—Baseboard Management Controller interface
• PLDM for Monitor and Control DSP0248
• PLDM for Firmware Update DSP0267
• SDN management interface for managing the eSwitch
• I2C interface for device control and configuration
• General purpose I/O pins
• SPI interface to flash
• JTAG IEEE 1149.1 and IEEE 1149.6

Hardware-Based I/O Virtualization
• Single root IO virtualization (SR-IOV)
• Address translation and protection
• VMware NetQueue support
- SR-IOV: up to 512 virtual functions
- SR-IOV: up to 8 physical functions per host
• Virtualization hierarchies (e.g., NPAR when enabled)
- Virtualizing physical functions on a physical port
- SR-IOV on every physical function
• Configurable and user-programmable QoS
• Guaranteed QoS for VMs

  Storage Offloads
• NVMe over Fabrics offloads for target machine
• T10 DIF—Signature handover operation at wire speed for ingress and egress traffic
• Storage protocols: SRP, iSER, NFS RDMA, SMB Direct, NVMe-oF
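
The guard tag in the T10 DIF tuple is a CRC-16 over the data block with polynomial 0x8BB7, which the adapter generates and checks at wire speed. A plain (and deliberately slow) Python reference of that CRC, for illustration only:

def t10dif_crc16(data: bytes) -> int:
    """CRC-16 with polynomial 0x8BB7 (T10 DIF guard tag), bit-by-bit for clarity."""
    crc = 0x0000
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x8BB7) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

print(hex(t10dif_crc16(b"123456789")))   # standard CRC check string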

Overlay Networks
• RoCE over overlay networks
• Stateless offloads for overlay network tunneling protocols
• Hardware offload of encapsulation and decapsulation of VXLAN, NVGRE, and GENEVE overlay networks

  Enhanced Features
• Hardware-based reliable transport
• Collective operations offloads
• Vector collective operations offloads
• NVIDIA PeerDirect™ RDMA (aka GPUDirect) communication acceleration
• 64/66 encoding
• Extended reliable connected transport (XRC)
• Dynamically connected transport (DCT)
• Enhanced atomic operations
• Advanced memory mapping support, allowing user mode registration and remapping of memory (UMR)
• On-demand paging (ODP)
• MPI tag matching
• Rendezvous protocol offload
• Out-of-order RDMA supporting adaptive routing
• Burst buffer offload
• In-Network Memory registration-free RDMA memory access

 
Full specification and details can be found in the Product Datasheet PDF file

In Stock: 2-3 Weeks
£ POA
£ POA Inc Vat
Mellanox CONNECTX-5 EN Network Interface Card 100GBE
Dual-Port QSFP28 X16 Tall Bracket ROHS R6

ConnectX®-5 EN Card

Up to 100Gb/s Ethernet Adapter Cards

Intelligent RDMA-enabled, single and dual-port network adapter with advanced application offload capabilities for Web 2.0, Cloud, Storage, and Telco platforms

ConnectX-5 Ethernet network interface cards provide high performance and flexible solutions with up to two ports of 100GbE connectivity, 750ns latency, up to 200 million messages per second (Mpps), and a record-setting 197Mpps when running an open source Data Plane Development Kit (DPDK) over PCIe Gen 4.0. For storage workloads, ConnectX-5 delivers a range of innovative accelerations, such as Signature Handover (T10-DIF) in hardware, an embedded PCIe Switch, and NVMe over Fabric target offloads. ConnectX-5 adapter cards also bring advanced Open vSwitch offloads to telecommunications and cloud data centers to drive extremely high packet rates and throughput with reduced CPU resource consumption, thus boosting data center infrastructure efficiency.

ConnectX-5 adapter cards are available for PCIe Gen 3.0 and Gen 4.0 servers and provide support for 1, 10, 25, 40, 50 and 100 GbE speeds in stand-up PCIe cards, OCP 2.0, and OCP 3.0 form factors. ConnectX-5 cards also offer advanced Multi-Host and Socket Direct technologies.




Benefits • Up to 100Gb/s connectivity per port

• Industry-leading throughput, low latency, low CPU utilization and high message rate

• Innovative rack design for storage and Machine Learning based on Host Chaining technology

• Smart interconnect for x86, Power, Arm, and GPU-based compute & storage platforms

• Advanced storage capabilities including NVMe over Fabric offloads

• Intelligent network adapter supporting flexible pipeline programmability

• Cutting-edge performance in virtualized networks including Network Function Virtualization (NFV)

• Enabler for efficient service chaining capabilities

• Efficient I/O consolidation, lowering data center costs and complexity


ConnectX-5 Ethernet adapter cards provide high performance and flexible solutions with up to two ports of 100GbE connectivity, 750ns latency, up to 200 million messages per second (Mpps), and a record-setting 197Mpps when running an open source Data Plane Development Kit (DPDK) over PCIe Gen 4.0. For storage workloads, ConnectX-5 delivers a range of innovative accelerations, such as Signature Handover (T10-DIF) in hardware, an embedded PCIe Switch, and NVMe over Fabric target offloads. ConnectX-5 adapter cards also bring advanced Open vSwitch offloads to telecommunications and cloud data centers to drive extremely high packet rates and throughput with reduced CPU resource consumption, thus boosting data center infrastructure efficiency.

ConnectX-5 adapter cards are available for PCIe Gen 3.0 and Gen 4.0 servers and provide support for 1, 10, 25, 40, 50 and 100 GbE speeds in stand-up PCIe cards, OCP 2.0, and OCP 3.0 form factors. ConnectX-5 cards also offer advanced Mellanox Multi-Host® and Mellanox Socket Direct® technologies.




Features • Tag matching and rendezvous offloads

• Adaptive routing on reliable transport

• Burst buffer offloads for background checkpointing

• NVMe over Fabric offloads

• Backend switch elimination by host chaining

• Embedded PCIe switch

• Enhanced vSwitch/vRouter offloads

• Flexible pipeline

• RoCE for overlay networks

• PCIe Gen 4.0 support

• RoHS compliant

• ODCC compatible

• Various form factors available




Cloud and Web 2.0 Environments ConnectX-5 adapter cards enable data center administrators to benefit from better server utilization and reduced costs, power usage, and cable complexity, allowing for more virtual appliances, virtual machines (VMs) and tenants to co-exist on the same hardware.

Supported vSwitch/vRouter offload functions include:

• Overlay Networks (e.g., VXLAN, NVGRE, MPLS, GENEVE, and NSH) header encapsulation & decapsulation.

• Stateless offloads of inner packets and packet headers’ re-write, enabling NAT functionality and more.

• Flexible and programmable parser and match-action tables, which enable hardware offloads for future protocols.

• SR-IOV technology, providing dedicated adapter resources, guaranteed isolation and protection for virtual machines (VMs) within the server (a minimal sysfs sketch for creating virtual functions follows this list).

• Network Function Virtualization (NFV), enabling a VM to be used as a virtual appliance. The full datapath operation offloads, hairpin hardware capability and service chaining enables data to be handled by the virtual appliance, with minimum CPU utilization.
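
On a Linux host, the virtual functions behind SR-IOV are usually created through the kernel's standard sysfs interface. A minimal sketch (the interface name is a placeholder and the writes require root privileges):

from pathlib import Path

IFACE = "enp1s0f0"   # placeholder: substitute the ConnectX port's interface name
NUM_VFS = 4

numvfs = Path(f"/sys/class/net/{IFACE}/device/sriov_numvfs")
numvfs.write_text("0\n")           # reset any existing VFs before changing the count
numvfs.write_text(f"{NUM_VFS}\n")  # create the virtual functions

total = Path(f"/sys/class/net/{IFACE}/device/sriov_totalvfs").read_text().strip()
print(f"{NUM_VFS} VFs enabled on {IFACE} (device supports up to {total})")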



Cloud and Web 2.0 customers developing platforms on Software Defined Network (SDN) environments are leveraging their servers’ Operating System Virtual-Switching capabilities to achieve maximum flexibility. Open vSwitch (OvS) is an example of a virtual switch that allows Virtual Machines to communicate with each other and with the outside world. Traditionally residing in the hypervisor, where switching is based on twelve-tuple matching on flows, the virtual switch or virtual router software-based solution is CPU-intensive. This can negatively affect system performance and prevent the full utilization of available bandwidth.

Mellanox ASAP2 - Accelerated Switching and Packet Processing® technology enables offloading the vSwitch/vRouter by handling the data plane in the NIC hardware, without modifying the control plane. This results in significantly higher vSwitch/vRouter performance without the associated CPU load.
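
In practice this is commonly switched on through Open vSwitch's own hardware-offload option, after which flows are programmed into the NIC's embedded switch via TC flower. The sketch below shows the usual steps; the service name and verification command can vary by distribution and OVS version.

import subprocess

def sh(*cmd):
    subprocess.run(cmd, check=True)

# Ask OVS to offload its data plane to the NIC, then restart so the
# setting takes effect; flows are then pushed to hardware via TC flower.
sh("ovs-vsctl", "set", "Open_vSwitch", ".", "other_config:hw-offload=true")
sh("systemctl", "restart", "openvswitch-switch")        # service name varies by distro

# Inspect which datapath flows actually landed in hardware.
sh("ovs-appctl", "dpctl/dump-flows", "type=offloaded")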

Additionally, ConnectX-5’s intelligent, flexible pipeline capabilities, including its flexible parser and flexible match-action tables, are programmable, enabling hardware offloads for future protocols.




Storage Environments NVMe storage devices are gaining popularity by offering very fast storage access. The evolving NVMe over Fabric (NVMe-oF) protocol leverages the RDMA connectivity for remote access. ConnectX-5 offers further enhancements by providing NVMe-oF target offloads, enabling very efficient NVMe storage access with no CPU intervention, and thus improving performance and reducing latency.



The embedded PCIe switch enables customers to build standalone storage or Machine Learning appliances. As with earlier generations of ConnectX adapters, standard block and file access protocols leverage RoCE for high-performance storage access. A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks.

ConnectX-5 enables an innovative storage rack design, Host Chaining, which enables different servers to interconnect without involving the Top of the Rack (ToR) switch. Leveraging Host Chaining, ConnectX-5 lowers the data center’s total cost of ownership (TCO) by reducing CAPEX (cables, NICs, and switch port expenses). OPEX is also reduced by cutting down on switch port management and overall power usage.






Telecommunications Telecommunications service providers are moving towards disaggregation, server virtualization, and orchestration as key tenets to modernize their networks. Likewise, they’re also moving towards Network Function Virtualization (NFV), which enables the rapid deployment of new network services. With this move, proprietary dedicated hardware and software, which tend to be static and difficult to scale, are being replaced with virtual machines running on commercial off-the-shelf (COTS) servers.

For telecom service providers, choosing the right networking hardware is critical to achieving a cloud-native NFV solution that is agile, reliable, fast and efficient. Telco service providers typically leverage virtualization and cloud technologies to better achieve agile service delivery and efficient scalability; these technologies require an advanced network infrastructure to support higher rates of packet processing. However, the resultant east-west traffic causes numerous interrupts as I/O traverses from kernel to user space, eats up CPU cycles and decreases packet performance. Particularly sensitive to delays are voice and video applications which often require less than 100ms of latency.

ConnectX-5 adapter cards drive extremely high packet rates, increased throughput, and higher network efficiency through the following technologies: Open vSwitch Offloads (OvS), OvS over DPDK or ASAP², Network Overlay Virtualization, SR-IOV, and RDMA. This allows for secure data delivery through higher-performance offloads, reducing CPU resource utilization and boosting data center infrastructure efficiency. The result is a much more responsive and agile network capable of rapidly deploying network services.




Wide Selection of Adapter Cards ConnectX-5 Ethernet adapter cards are available in several form factors including: low-profile stand-up PCIe, OCP 2.0 Type 1 and Type 2, and OCP 3.0 Small Form Factor.

Mellanox Multi-Host® technology allows multiple hosts to be connected into a single adapter by separating the PCIe interface into multiple and independent interfaces.

The portfolio also offers Mellanox Socket-Direct® configurations that enable servers without x16 PCIe slots to split the card’s 16-lane PCIe bus into two 8-lane buses on dedicated cards connected by a harness.




Host Management Host Management includes NC-SI over MCTP over SMBus, and MCTP over PCIe - Baseboard Management Controller (BMC) interface, as well as PLDM for Monitor and Control DSP0248 and PLDM for Firmware Update DSP0267.


Storage Offloads
• NVMe over Fabric offloads for target machine
• T10 DIF – Signature handover operation at wire speed, for ingress and egress traffic
• Storage protocols: SRP, iSER, NFS RDMA, SMB Direct, NVMe-oF

Remote Boot
• Remote boot over Ethernet
• Remote boot over iSCSI
• Unified Extensible Firmware Interface (UEFI)
• Pre-execution Environment (PXE)

  Enhanced Features
• Hardware-based reliable transport
• Collective operations offloads
• Vector collective operations offloads
• Mellanox PeerDirect® RDMA (aka GPUDirect®) communication acceleration
• 64/66 encoding
• Extended Reliable Connected transport (XRC)
• Dynamically Connected Transport (DCT)
• Enhanced Atomic operations
• Advanced memory mapping support, allowing user mode registration and remapping of memory (UMR)
• On demand paging (ODP)
• MPI Tag Matching
• Rendezvous protocol offload
• Out-of-order RDMA supporting Adaptive Routing
• Burst buffer offload
• In-Network Memory registration-free RDMA memory access

CPU Offloads
• RDMA over Converged Ethernet (RoCE)
• TCP/UDP/IP stateless offload
• LSO, LRO, checksum offload
• RSS (also on encapsulated packet), TSS, HDS, VLAN and MPLS tag insertion/stripping, Receive flow steering
• Data Plane Development Kit (DPDK) for kernel bypass applications
• Open vSwitch (OVS) offload using ASAP2
-Flexible match-action flow tables
-Tunneling encapsulation/de-capsulation
• Intelligent interrupt coalescence
• Header rewrite supporting hardware offload of NAT router

  Hardware-Based I/O Virtualization
• Single Root IOV
• Address translation and protection
• VMware NetQueue support
-SR-IOV: Up to 512 Virtual Functions
-SR-IOV: Up to 8 Physical Functions per host
• Virtualization hierarchies (e.g., NPAR when enabled)
-Virtualizing Physical Functions on a physical port
-SR-IOV on every Physical Function
• Configurable and user-programmable QoS
• Guaranteed QoS for VMs

Management and Control
• NC-SI over MCTP over SMBus and NC-SI over MCTP over PCIe - Baseboard Management Controller interface
• PLDM for Monitor and Control DSP0248
• PLDM for Firmware Update DSP0267
• SDN management interface for managing the eSwitch
• I2C interface for device control and configuration
• General Purpose I/O pins
• SPI interface to Flash
• JTAG IEEE 1149.1 and IEEE 1149.6

  Overlay Networks
• RoCE over Overlay Networks
• Stateless offloads for overlay network tunneling protocols
• Hardware offload of encapsulation and decapsulation of VXLAN, NVGRE, and GENEVE overlay networks

 
Full specification and details can be found in the Product Datasheet PDF file

In Stock: 2-3 Weeks
£837.54
£1005.05 Inc Vat
Mellanox MCX653105A-ECAT-SP CONNECTX-6 VPI Adapter Card
Single-Port QSFP56 Tall Bracket

ConnectX-6 VPI Card HDR100 EDR InfiniBand and 100GbE Ethernet Adapter Card

Featuring In-Network Computing for Enhanced Efficiency and Scalability

ConnectX-6 Virtual Protocol Interconnect (VPI) cards are a groundbreaking addition to the ConnectX series of industry-leading network adapter cards. Providing two ports of HDR100 EDR InfiniBand and 100GbE Ethernet connectivity, sub-600ns latency and 215 million messages per second, ConnectX-6 VPI cards enable the highest performance and most flexible solution aimed at meeting the continually growing demands of data center applications. In addition to all the existing innovative features of past versions, ConnectX-6 cards offer a number of enhancements to further improve performance and scalability.

ConnectX-6 VPI series supports HDR, HDR100, EDR, FDR, QDR, DDR and SDR InfiniBand speeds as well as 200, 100, 50, 40, 25, and 10Gb/s Ethernet speeds.




Benefits: • Industry-leading throughput, low CPU utilization and high message rate

• Highest performance and most intelligent fabric for compute and storage infrastructures

• Cutting-edge performance in virtualized networks including Network Function Virtualization (NFV)

• Host Chaining technology for economical rack design

• Smart interconnect for x86, Power, Arm, GPU and FPGA-based compute and storage platforms

• Flexible programmable pipeline for new network flows

• Efficient service chaining enablement

• Increased I/O consolidation efficiencies, reducing data center costs & complexity


Complex workloads demand ultra-fast processing of high-resolution simulations, extreme-size datasets, and highly-parallelized algorithms. As these computing requirements continue to grow, NVIDIA Quantum InfiniBand—the world’s only fully offloadable, In-Network Computing acceleration technology—provides the dramatic leap in performance needed to achieve unmatched results in high performance computing (HPC), AI, and hyperscale cloud infrastructures—with less cost and complexity.

ConnectX®-6 InfiniBand smart adapter cards are a key element in the NVIDIA Quantum InfiniBand platform. ConnectX-6 provides up to two ports of 200Gb/s InfiniBand and Ethernet(1) connectivity with extremely low latency, high message rate, smart offloads, and NVIDIA In-Network Computing acceleration that improve performance and scalability.




Features: • Up to HDR100 EDR InfiniBand and 100GbE Ethernet connectivity per port

• Max bandwidth of 200Gb/s

• Up to 215 million messages/sec

• Sub 0.6usec latency

• OCP 2.0

• FIPS capable

• Advanced storage capabilities including block-level encryption and checksum offloads

• Supports both 50G SerDes (PAM4) and 25G SerDes (NRZ) based ports

• Best-in-class packet pacing with sub-nanosecond accuracy

• PCIe Gen 3.0 and Gen 4.0 support

• RoHS compliant

• ODCC compatible




High Performance Computing Environments With its NVIDIA In-Network Computing and In-Network Memory capabilities, ConnectX-6 offloads computation even further to the network, saving CPU cycles and increasing network efficiency. ConnectX-6 utilizes remote direct memory access (RDMA) technology as defined in the InfiniBand Trade Association (IBTA) specification, delivering low latency, and high performance. ConnectX-6 enhances RDMA network capabilities even further by delivering end-to-end packet-level flow control.




Machine Learning and Big Data Environments Data analytics has become an essential function within many enterprise data centers, clouds, and hyperscale platforms. Machine learning (ML) relies on especially high throughput and low latency to train deep neural networks and improve recognition and classification accuracy. With its 200Gb/s throughput, ConnectX-6 is an excellent solution to provide ML applications with the levels of performance and scalability that they require.




Security Including Block-Level Encryption ConnectX-6 block-level encryption offers a critical innovation to network security. As data in transit is stored or retrieved, it undergoes encryption and decryption. ConnectX-6 hardware offloads the IEEE AES-XTS encryption/decryption from the CPU, saving latency and CPU utilization. It also guarantees protection for users sharing the same resources through the use of dedicated encryption keys.

By performing block storage encryption in the adapter, ConnectX-6 eliminates the need for self-encrypted disks. This gives customers the freedom to choose their preferred storage device, including byte addressable and NVDIMM devices that traditionally do not provide encryption. Moreover, ConnectX-6 can offer Federal Information Processing Standards (FIPS) compliance.




Bring NVMe-oF to Storage Environments NVMe storage devices are gaining momentum, offering very fast access to storage media. The evolving NVMe over Fabrics (NVMe-oF) protocol leverages RDMA connectivity to remotely access NVMe storage devices efficiently, while keeping the end-to-end NVMe model at lowest latency. With its NVMe-oF target and initiator offloads, ConnectX-6 brings further optimization to NVMe-oF, enhancing CPU utilization and scalability.




Portfolio of Smart Adapters ConnectX-6 is available in two form factors: low-profile stand-up PCIe and Open Compute Project (OCP) Spec 3.0 cards with QSFP connectors. Single-port, HDR, stand-up PCIe adapters are available based on either ConnectX-6 or ConnectX-6 DE (ConnectX-6 Dx enhanced for HPC applications).

In addition, specific PCIe stand-up cards are available with a cold plate for insertion into liquid-cooled Intel Server System D50TNP platforms.




Socket Direct ConnectX-6 also provides options for NVIDIA Socket Direct™ configurations, which improves the performance of multi-socket servers by enabling each of their CPUs to access the network through its dedicated PCIe interface. This enables data to bypass the QPI (UPI) and the other CPU, improving latency, performance, and CPU utilization.

Socket Direct also enables NVIDIA GPUDirect® RDMA for all CPU/GPU pairs by ensuring that GPUs are linked to the CPUs closest to the adapter card. Socket Direct enables Intel® DDIO optimization on both sockets by creating a direct connection between the sockets and the adapter card.

Socket Direct technology is enabled by a main card housing the ConnectX-6 and an auxiliary PCIe card bringing in the remaining PCIe lanes. The ConnectX-6 Socket Direct card is installed into two PCIe x16 slots and connected using a harness. The two PCIe x16 slots may also be connected to the same CPU. In this case, the main advantage of the technology lies in delivering 200Gb/s to servers with PCIe Gen3-only support.




Host Management Host Management includes NC-SI over MCTP over SMBus, and MCTP over PCIe— Baseboard Management Controller (BMC) interface, as well as PLDM for Monitor and Control DSP0248 and PLDM for Firmware Update DSP0267.




Broad Software Support All ConnectX adapters are supported by a full suite of drivers for major Linux distributions, as well as Microsoft® Windows® Server and VMware vSphere®.

HPC software libraries supported include HPC-X, OpenMPI, MVAPICH, MPICH, OpenSHMEM, PGAS, and varied commercial packages.

Overlay Networks
• RoCE over overlay networks
• Stateless offloads for overlay network tunneling protocols
• Hardware offload of encapsulation and decapsulation of VXLAN, NVGRE, and Geneve overlay networks

Storage Offloads
• Block-level encryption: XTS-AES 256/512-bit key
• NVMe over Fabrics offloads for target machine
• T10-DIF signature handover operation at wire speed, for ingress and egress traffic
• Storage protocols: SRP, iSER, NFS RDMA, SMB Direct, and NVMe-oF

  InfiniBand
• 200Gb/s and lower rates
• IBTA Specification 1.3 compliant
• RDMA, send/receive semantics
• Hardware-based congestion control
• Atomic operations
• 16 million I/O channels
• 256 to 4Kbyte MTU, 2Gbyte messages
• 8 virtual lanes + VL15

Remote Boot
• Remote boot over InfiniBand
• Remote boot over Ethernet
• Remote boot over iSCSI
• Unified Extensible Firmware Interface (UEFI)
• Pre-execution Environment (PXE)

  Hardware-Based I/O Virtualization
• Single Root IOV (SR-IOV)
• Address translation and protection
• VMware NetQueue support
-SR-IOV: Up to 1K virtual functions
-SR-IOV: Up to 8 physical functions per host
• Virtualization hierarchies (e.g., NPAR)
-Virtualizing physical functions on a physical port
-SR-IOV on every physical function
• Configurable and user-programmable QoS
• Guaranteed QoS for VMs

Management and Control
• NC-SI, MCTP over SMBus and MCTP over PCIe—Baseboard Management Controller interface
• PLDM for Monitor and Control DSP0248
• PLDM for Firmware Update DSP0267
• SDN management interface for managing the eSwitch
• I2C interface for device control and configuration
• General Purpose I/O pins
• SPI interface to flash
• JTAG IEEE 1149.1 and IEEE 1149.6

  Enhanced Features
• Hardware-based reliable transport
• Collective operations offloads
• Vector collective operations offloads
• NVIDIA PeerDirect® RDMA (a.k.a. NVIDIA GPUDirect) communication acceleration
• 64/66 encoding
• Enhanced atomic operations
• Advanced memory mapping support, allowing user mode registration and remapping of memory (UMR)
• Extended Reliable Connected transport (XRC)
• Dynamically Connected Transport (DCT)
• On demand paging (ODP)
• MPI tag matching
• Rendezvous protocol offload
• Out-of-order RDMA supporting Adaptive Routing
• Burst buffer offload
• In-Network Memory registration-free RDMA memory access

CPU Offloads
• RDMA over Converged Ethernet (RoCE)
• TCP/UDP/IP stateless offload
• LSO, LRO, checksum offload
• RSS (also on encapsulated packet), TSS, HDS, VLAN and MPLS tag insertion/stripping, receive flow steering
• Data plane development kit (DPDK) for kernel bypass applications
• Open vSwitch (OVS) offload using ASAP2
• Flexible match-action flow tables
• Tunneling encapsulation/decapsulation
• Intelligent interrupt coalescence
• Header rewrite supporting hardware offload of NAT router

 
Full specification and details can be found in the Product Datasheet PDF file

End of Life Product
£ POA
£ POA Inc Vat
With Host management Dual-Port QSFP56 PCIe 4.0 x16 No Crypto Thumbscrew (Pulltab) Bracket

NVIDIA ConnectX-6 DX Ethernet SmartNIC

Ethernet SmartNIC

ConnectX-6 Dx SmartNIC is the industry's most secure and advanced cloud network interface card to accelerate mission-critical data-center applications, such as security, virtualization, SDN/NFV, big data, machine learning, and storage. The SmartNIC delivers the highest return on investment (ROI) of any smart network interface card.

ConnectX-6 Dx is a member of NVIDIA's world-class, award-winning ConnectX series of network adapters powered by leading 50 Gb/s (PAM4) and 25/10 Gb/s (NRZ) SerDes technology and novel capabilities that accelerate cloud and data-center payloads.




Key Features • Up to 100 Gb/s bandwidth

• Message rate of up to 215 Mpps

• Sub 0.8 usec latency

• Flexible programmable pipeline for new network flows

• Multi-Host with advanced QoS

• ASAP2 - Accelerated Switching and Packet Processing for virtual switches/routers

• Overlay tunneling technologies

• IPsec and TLS in-line crypto acceleration

• Block crypto acceleration for data-at-rest

• Hardware Root-of-Trust and secure firmware update

• Connection Tracking offload

• Advanced RoCE capabilities

• Best in class PTP for TSN applications

• GPUDirect for GPU-to-GPU communication

• Host chaining technology for economical rack design

• Platform agnostic: x86, Power, Arm

• ODCC compatible


Host Interface
• 16 lanes of PCIe Gen4, compatible with PCIe Gen2/Gen3
• Integrated PCI switch
• NVIDIA Multi-Host and NVIDIA Socket Direct™

Remote Boot
• Remote boot over Ethernet
• Remote boot over iSCSI
• UEFI and PXE support for x86 and Arm servers

  Stateless Offloads
• TCP/UDP/IP stateless offload
• LSO, LRO, checksum offload
• Receive side scaling (RSS) also on encapsulated packet
• Transmit side scaling (TSS)
• VLAN and MPLS tag insertion/stripping
• Receive flow steering

Storage Offloads
• Block-level encryption: XTS-AES 256/512-bit key
• NVMe over Fabrics offloads for target machine
• T10 DIF signature handover operation at wire speed, for ingress and egress traffic
• Storage protocols: SRP, iSER, NFS RDMA, SMB Direct, NVMe-oF

  RDMA over Converged Ethernet (RoCE)
• RoCE v1/v2
• Zero-touch RoCE: no ECN, no PFC
• RoCE over overlay networks
• Selective repeat
• Programmable congestion control interface
• GPUDirect®

Management and Control
• NC-SI, MCTP over SMBus and MCTP over PCIe—Baseboard Management Controller interface, NCSI over RBT in Open Compute Project (OCP) 2.0/3.0 cards
• PLDM for Monitor and Control DSP0248
• PLDM for Firmware Update DSP0267
• I2C interface for device control and configuration

  Cybersecurity
• Inline hardware IPsec encryption and decryption
-AES-GCM 128/256-bit key
-RoCE over IPsec
• Inline hardware TLS encryption and decryption
-AES-GCM 128/256-bit key
• Data-at-rest AES-XTS encryption and decryption
-AES-XTS 256/512-bit key
• Platform security
-Hardware root-of-trust
-Secure firmware update

Virtualization/Cloud Native
• SR-IOV and VirtIO acceleration
-Up to 1K virtual functions per port
-8 physical functions
• Support for tunneling
-Encap/decap of VXLAN, NVGRE, Geneve, and more
-Stateless offloads for overlay tunnels

  ASAP2 Accelerated Switching & Packet Processing
• SDN acceleration for:
-Bare metal
-Virtualization
-Containers
• Full hardware offload for OVS data plane
• Flow update through RTE_Flow or TC_Flower
• Flex-parser: user-defined classification
• Hardware offload for:
-Connection tracking (Layer 4 firewall)
-NAT
-Header rewrite
-Mirroring
-Sampling
-Flow aging
-Hierarchical QoS
-Flow-based statistics

Advanced Timing and Synchronization
• Advanced PTP
-IEEE 1588v2 (any profile)
-PTP hardware clock (PHC) (UTC format)
-Nanosecond-level accuracy
-Line rate hardware timestamp (UTC format)
-PPS in and configurable PPS out
• Time-triggered scheduling
• PTP-based packet pacing
• Time-based SDN acceleration (ASAP2)
• Time-sensitive networking (TSN)
• Dedicated precision timing card option

 
Full specification and details can be found in the Product Datasheet PDF file

In Stock: 2-3 Weeks
£1027.03
£1232.44 Inc Vat
Mellanox MCX654106A-ECAT CONNECTX-6 VPI Adapter Card Kit
100GB/S Dual-Port QSFP56 Socket Direct 2X PCIE3.0 X16 Tall Brackets

ConnectX-6 VPI Card HDR100 EDR InfiniBand and 100GbE Ethernet Adapter Card

Featuring In-Network Computing for Enhanced Efficiency and Scalability

ConnectX-6 Virtual Protocol Interconnect (VPI) cards are a groundbreaking addition to the ConnectX series of industry-leading network adapter cards. Providing two ports of HDR100 EDR InfiniBand and 100GbE Ethernet connectivity, sub-600ns latency and 215 million messages per second, ConnectX-6 VPI cards enable the highest performance and most flexible solution aimed at meeting the continually growing demands of data center applications. In addition to all the existing innovative features of past versions, ConnectX-6 cards offer a number of enhancements to further improve performance and scalability.

ConnectX-6 VPI series supports HDR, HDR100, EDR, FDR, QDR, DDR and SDR InfiniBand speeds as well as 200, 100, 50, 40, 25, and 10Gb/s Ethernet speeds.




Benefits • Industry-leading throughput, low CPU utilization and high message rate

• Highest performance and most intelligent fabric for compute and storage infrastructures

• Cutting-edge performance in virtualized networks including Network Function Virtualization (NFV)

• Host Chaining technology for economical rack design

• Smart interconnect for x86, Power, Arm, GPU and FPGA-based compute and storage platforms

• Flexible programmable pipeline for new network flows

• Efficient service chaining enablement

• Increased I/O consolidation efficiencies, reducing data center costs & complexity


Complex workloads demand ultra-fast processing of high-resolution simulations, extreme-size datasets, and highly-parallelized algorithms. As these computing requirements continue to grow, NVIDIA Quantum InfiniBand—the world’s only fully offloadable, In-Network Computing acceleration technology—provides the dramatic leap in performance needed to achieve unmatched results in high performance computing (HPC), AI, and hyperscale cloud infrastructures—with less cost and complexity.

ConnectX®-6 InfiniBand smart adapter cards are a key element in the NVIDIA Quantum InfiniBand platform. ConnectX-6 provides up to two ports of 200Gb/s InfiniBand and Ethernet(1) connectivity with extremely low latency, high message rate, smart offloads, and NVIDIA In-Network Computing acceleration that improve performance and scalability.




Features • Up to HDR100 EDR InfiniBand and 100GbE Ethernet connectivity per port

• Max bandwidth of 200Gb/s

• Up to 215 million messages/sec

• Sub 0.6usec latency

• Block-level XTS-AES mode hardware encryption

• FIPS capable

• Advanced storage capabilities including block-level encryption and checksum offloads

• Supports both 50G SerDes (PAM4) and 25G SerDes (NRZ) based ports

• Best-in-class packet pacing with sub-nanosecond accuracy

• PCIe Gen 3.0 and Gen 4.0 support

• RoHS compliant

• ODCC compatible




High Performance Computing Environments With its NVIDIA In-Network Computing and In-Network Memory capabilities, ConnectX-6 offloads computation even further to the network, saving CPU cycles and increasing network efficiency. ConnectX-6 utilizes remote direct memory access (RDMA) technology as defined in the InfiniBand Trade Association (IBTA) specification, delivering low latency, and high performance. ConnectX-6 enhances RDMA network capabilities even further by delivering end-to-end packet-level flow control.




Machine Learning and Big Data Environments Data analytics has become an essential function within many enterprise data centers, clouds, and hyperscale platforms. Machine learning (ML) relies on especially high throughput and low latency to train deep neural networks and improve recognition and classification accuracy. With its 200Gb/s throughput, ConnectX-6 is an excellent solution to provide ML applications with the levels of performance and scalability that they require.




Security Including Block-Level Encryption ConnectX-6 block-level encryption offers a critical innovation to network security. As data in transit is stored or retrieved, it undergoes encryption and decryption. ConnectX-6 hardware offloads the IEEE AES-XTS encryption/decryption from the CPU, saving latency and CPU utilization. It also guarantees protection for users sharing the same resources through the use of dedicated encryption keys.

By performing block storage encryption in the adapter, ConnectX-6 eliminates the need for self-encrypted disks. This gives customers the freedom to choose their preferred storage device, including byte addressable and NVDIMM devices that traditionally do not provide encryption. Moreover, ConnectX-6 can offer Federal Information Processing Standards (FIPS) compliance.




Bring NVMe-oF to Storage Environments NVMe storage devices are gaining momentum, offering very fast access to storage media. The evolving NVMe over Fabrics (NVMe-oF) protocol leverages RDMA connectivity to remotely access NVMe storage devices efficiently, while keeping the end-to-end NVMe model at lowest latency. With its NVMe-oF target and initiator offloads, ConnectX-6 brings further optimization to NVMe-oF, enhancing CPU utilization and scalability.




Portfolio of Smart Adapters ConnectX-6 is available in two form factors: low-profile stand-up PCIe and Open Compute Project (OCP) Spec 3.0 cards with QSFP connectors. Single-port, HDR, stand-up PCIe adapters are available based on either ConnectX-6 or ConnectX-6 DE (ConnectX-6 Dx enhanced for HPC applications).

In addition, specific PCIe stand-up cards are available with a cold plate for insertion into liquid-cooled Intel Server System D50TNP platforms.




Socket Direct ConnectX-6 also provides options for NVIDIA Socket Direct™ configurations, which improves the performance of multi-socket servers by enabling each of their CPUs to access the network through its dedicated PCIe interface. This enables data to bypass the QPI (UPI) and the other CPU, improving latency, performance, and CPU utilization.

Socket Direct also enables NVIDIA GPUDirect® RDMA for all CPU/GPU pairs by ensuring that GPUs are linked to the CPUs closest to the adapter card. Socket Direct enables Intel® DDIO optimization on both sockets by creating a direct connection between the sockets and the adapter card.

Socket Direct technology is enabled by a main card housing the ConnectX-6 and an auxiliary PCIe card bringing in the remaining PCIe lanes. The ConnectX-6 Socket Direct card is installed into two PCIe x16 slots and connected using a harness. The two PCIe x16 slots may also be connected to the same CPU. In this case, the main advantage of the technology lies in delivering 200Gb/s to servers with PCIe Gen3-only support.




Host Management Host Management includes NC-SI over MCTP over SMBus, and MCTP over PCIe— Baseboard Management Controller (BMC) interface, as well as PLDM for Monitor and Control DSP0248 and PLDM for Firmware Update DSP0267.




Broad Software Support All ConnectX adapters are supported by a full suite of drivers for major Linux distributions, as well as Microsoft® Windows® Server and VMware vSphere®.

HPC software libraries supported include HPC-X, OpenMPI, MVAPICH, MPICH, OpenSHMEM, PGAS, and varied commercial packages.


Storage Offloads
• Block-level encryption: XTS-AES 256/512-bit key
• NVMe over Fabrics offloads for target machine
• T10-DIF signature handover operation at wire speed, for ingress and egress traffic
• Storage protocols: SRP, iSER, NFS RDMA, SMB Direct, and NVMe-oF

Overlay Networks
• RoCE over overlay networks
• Stateless offloads for overlay network tunneling protocols
• Hardware offload of encapsulation and decapsulation of VXLAN, NVGRE, and Geneve overlay networks

  InfiniBand
• 200Gb/s and lower rates
• IBTA Specification 1.3 compliant
• RDMA, send/receive semantics
• Hardware-based congestion control
• Atomic operations
• 16 million I/O channels
• 256 to 4Kbyte MTU, 2Gbyte messages
• 8 virtual lanes + VL15

Remote Boot
• Remote boot over InfiniBand
• Remote boot over Ethernet
• Remote boot over iSCSI
• Unified Extensible Firmware Interface (UEFI)
• Pre-execution Environment (PXE)

  Management and Control
• NC-SI, MCTP over SMBus and MCTP over PCIe—Baseboard Management Controller interface
• PLDM for Monitor and Control DSP0248
• PLDM for Firmware Update DSP0267
• SDN management interface for managing the eSwitch
• I2C interface for device control and configuration
• General Purpose I/O pins
• SPI interface to flash
• JTAG IEEE 1149.1 and IEEE 1149.6

Hardware-Based I/O Virtualization
• Single Root IOV (SR-IOV)
• Address translation and protection
• VMware NetQueue support
-SR-IOV: Up to 1K virtual functions
-SR-IOV: Up to 8 physical functions per host
• Virtualization hierarchies (e.g., NPAR)
-Virtualizing physical functions on a physical port
-SR-IOV on every physical function
• Configurable and user-programmable QoS
• Guaranteed QoS for VMs

  Enhanced Features
• Hardware-based reliable transport
• Collective operations offloads
• Vector collective operations offloads
• NVIDIA PeerDirect® RDMA (a.k.a. NVIDIA GPUDirect) communication acceleration
• 64/66 encoding
• Enhanced atomic operations
• Advanced memory mapping support, allowing user mode registration and remapping of memory (UMR)
• Extended Reliable Connected transport (XRC)
• Dynamically Connected Transport (DCT)
• On demand paging (ODP)
• MPI tag matching
• Rendezvous protocol offload
• Out-of-order RDMA supporting Adaptive Routing
• Burst buffer offload
• In-Network Memory registration-free RDMA memory access

CPU Offloads
• RDMA over Converged Ethernet (RoCE)
• TCP/UDP/IP stateless offload
• LSO, LRO, checksum offload
• RSS (also on encapsulated packet), TSS, HDS, VLAN and MPLS tag insertion/stripping, receive flow steering
• Data plane development kit (DPDK) for kernel bypass applications
• Open vSwitch (OVS) offload using ASAP2
• Flexible match-action flow tables
• Tunneling encapsulation/decapsulation
• Intelligent interrupt coalescence
• Header rewrite supporting hardware offload of NAT router

 
Full specification and details can be found in the Product Datasheet PDF file

End of Life Product
£ POA
£ POA Inc Vat
Mellanox MCX623105AN-CDAT CONNECTX6 DX EN Adapter Card 100GBE
No Crypto Tall Bracket

ConnectX-6 DX Ethernet SmartNIC

Advanced Networking and Security for the Most Demanding Cloud and Data Center Workloads

ConnectX-6 Dx SmartNIC is the industry's most secure and advanced cloud network interface card to accelerate mission-critical data-center applications, such as security, virtualization, SDN/NFV, big data, machine learning, and storage. The SmartNIC provides up to two ports of 100 Gb/s or a single-port of 200 Gb/s Ethernet connectivity and delivers the highest return on investment (ROI) of any smart network interface card.

ConnectX-6 Dx is a member of NVIDIA's world-class, award-winning ConnectX series of network adapters powered by leading 50 Gb/s (PAM4) and 25/10 Gb/s (NRZ) SerDes technology and novel capabilities that accelerate cloud and data-center payloads.




Key Features: • Up to 200 Gb/s bandwidth

• Message rate of up to 215 Mpps

• Sub 0.8 usec latency

• Flexible programmable pipeline for new network flows

• Multi-Host with advanced QoS

• ASAP2 - Accelerated Switching and Packet Processing for virtual switches/routers

• Overlay tunneling technologies

• IPsec and TLS in-line crypto acceleration

• Block crypto acceleration for data-at-rest

• Hardware Root-of-Trust and secure firmware update

• Connection Tracking offload

• Advanced RoCE capabilities

• Best in class PTP for TSN applications

• GPUDirect for GPU-to-GPU communication

• Host chaining technology for economical rack design

• Platform agnostic: x86, Power, Arm

• ODCC compatible


ConnectX®-6 Dx is a highly secure and advanced smart network interface card (SmartNIC) that accelerates mission-critical cloud and data center applications, including security, virtualization, SDN/NFV, big data, machine learning, and storage. ConnectX-6 Dx provides up to two ports of 100Gb/s or a single port of 200Gb/s Ethernet connectivity and is powered by 50Gb/s (PAM4) or 25/10 Gb/s (NRZ) SerDes technology.

ConnectX-6 Dx features virtual switch (vSwitch) and virtual router (vRouter) hardware accelerations delivering orders-of-magnitude higher performance than software-based solutions. ConnectX-6 Dx supports a choice of single-root I/O virtualization (SR-IOV) and VirtIO in hardware, enabling customers to best address their application needs. By offloading cloud networking workloads, ConnectX-6 Dx frees up CPU cores for business applications while reducing total cost-of-ownership.
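
For illustration, the sketch below enables SR-IOV virtual functions through the generic Linux sysfs interface (sriov_totalvfs/sriov_numvfs). This is the standard kernel mechanism rather than an adapter-specific API; the interface name eth0 and the VF count of 8 are placeholder assumptions.

```c
/*
 * Minimal sketch: enabling SR-IOV virtual functions through the generic
 * Linux sysfs interface. "eth0" is a placeholder interface name; the
 * adapter, driver, and firmware must already support SR-IOV and expose
 * the sriov_* attributes for this to succeed.
 */
#include <stdio.h>
#include <stdlib.h>

static int read_int(const char *path)
{
    FILE *f = fopen(path, "r");
    int val = -1;

    if (!f)
        return -1;
    if (fscanf(f, "%d", &val) != 1)
        val = -1;
    fclose(f);
    return val;
}

int main(void)
{
    const char *total = "/sys/class/net/eth0/device/sriov_totalvfs";
    const char *numvfs = "/sys/class/net/eth0/device/sriov_numvfs";
    int max_vfs = read_int(total);
    FILE *f;

    if (max_vfs <= 0) {
        fprintf(stderr, "SR-IOV not available on this device\n");
        return 1;
    }

    /* Request 8 VFs, or fewer if the device supports less. */
    int want = max_vfs < 8 ? max_vfs : 8;

    f = fopen(numvfs, "w");
    if (!f) {
        perror("open sriov_numvfs (run as root?)");
        return 1;
    }
    fprintf(f, "%d\n", want);
    fclose(f);

    printf("requested %d virtual functions\n", want);
    return 0;
}
```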




SmartNIC Portfolio • 1/10/25/40/50/100/200 Gb/s Ethernet, PAM4/NRZ

• Various form factors:

-PCIe low-profile

-OCP 3.0 Small Form Factor (SFF)

-OCP 2.0

• Connectivity options:

-SFP28, SFP56, QSFP28, QSFP56, DSFP

• PCIe Gen 3.0/4.0 x16 host interface

• Multi-host and single-host flavors

• Crypto and non-crypto versions


Host Interface
• 16 lanes of PCIe Gen4, compatible with PCIe Gen2/Gen3
• Integrated PCI switch
• NVIDIA Multi-Host and NVIDIA Socket Direct

Remote Boot
• Remote boot over Ethernet
• Remote boot over iSCSI
• UEFI and PXE support for x86 and Arm servers

  Virtualization/Cloud Native
• SR-IOV and VirtIO acceleration
-Up to 1K virtual functions per port
-8 physical functions
• Support for tunneling
-Encap/decap of VXLAN, NVGRE, Geneve, and more
-Stateless offloads for overlay tunnels

Storage Offloads
• Block-level encryption: XTS-AES 256/512-bit key
• NVMe over Fabrics offloads for target machine
• T10 DIF signature handover operation at wire speed, for ingress and egress traffic
• Storage protocols: SRP, iSER, NFS RDMA, SMB Direct, NVMe-oF

  Stateless Offloads
• TCP/UDP/IP stateless offload
• LSO, LRO, checksum offload
• Receive side scaling (RSS) also on encapsulated packet
• Transmit side scaling (TSS)
• VLAN and MPLS tag insertion/stripping
• Receive flow steering

RDMA over Converged Ethernet (RoCE)
• RoCE v1/v2
• Zero-touch RoCE: no ECN, no PFC
• RoCE over overlay networks
• Selective repeat
• Programmable congestion control interface
• GPUDirect®

  Cybersecurity
• Inline hardware IPsec encryption and decryption
-AES-GCM 128/256-bit key
-RoCE over IPsec
• Inline hardware TLS encryption and decryption
-AES-GCM 128/256-bit key
• Data-at-rest AES-XTS encryption and decryption
-AES-XTS 256/512-bit key
• Platform security
-Hardware root-of-trust
-Secure firmware update

Management and Control
• NC-SI, MCTP over SMBus and MCTP over PCIe—Baseboard Management Controller interface, NC-SI over RBT in Open Compute Project (OCP) 2.0/3.0 cards
• PLDM for Monitor and Control DSP0248
• PLDM for Firmware Update DSP0267
• I2C interface for device control and configuration

  Accelerated Switching & Packet Processing
• SDN acceleration for:
-Bare metal
-Virtualization
-Containers
• Full hardware offload for OVS data plane
• Flow update through RTE_Flow or TC_Flower (see the rte_flow sketch after this list)
• Flex-parser: user-defined classification
• Hardware offload for:
-Connection tracking (Layer 4 firewall)
-NAT
-Header rewrite
-Mirroring
-Sampling
-Flow aging
-Hierarchical QoS
-Flow-based statistics
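
As referenced above, here is a hypothetical sketch of installing a match-action rule with the DPDK rte_flow API, the kind of entry the ASAP2/OVS offload path places into ConnectX hardware flow tables. It assumes an already-configured and started port 0; the matched destination address 192.0.2.10 and the queue index are placeholders.

```c
/*
 * Hypothetical sketch: steering a flow with the DPDK rte_flow API.
 * Assumes a configured, started port 0; error handling is abbreviated.
 */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_flow.h>
#include <rte_ip.h>
#include <rte_byteorder.h>

int main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0) {
        fprintf(stderr, "EAL init failed\n");
        return 1;
    }
    if (rte_eth_dev_count_avail() == 0) {
        fprintf(stderr, "no DPDK ports available\n");
        return 1;
    }

    uint16_t port_id = 0;

    /* Match ingress IPv4 packets destined to 192.0.2.10 (example address). */
    struct rte_flow_attr attr = { .ingress = 1 };
    struct rte_flow_item_ipv4 ip_spec = {
        .hdr.dst_addr = rte_cpu_to_be_32(RTE_IPV4(192, 0, 2, 10)),
    };
    struct rte_flow_item_ipv4 ip_mask = {
        .hdr.dst_addr = RTE_BE32(0xffffffff),
    };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &ip_spec, .mask = &ip_mask },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };

    /* Action: steer matching packets to RX queue 1. */
    struct rte_flow_action_queue queue = { .index = 1 };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    struct rte_flow_error err;
    struct rte_flow *flow = rte_flow_create(port_id, &attr, pattern, actions, &err);
    if (!flow)
        fprintf(stderr, "flow create failed: %s\n",
                err.message ? err.message : "unknown");
    else
        printf("flow rule installed on port %u\n", port_id);

    return flow ? 0 : 1;
}
```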

Advanced Timing and Synchronization
• Advanced PTP
-IEEE 1588v2 (any profile)
-PTP hardware clock (PHC) (UTC format)
-Nanosecond-level accuracy
-Line rate hardware timestamp (UTC format)
-PPS in and configurable PPS out
• Time-triggered scheduling
• PTP-based packet pacing
• Time-based SDN acceleration (ASAP2)
• Time-sensitive networking (TSN)
• Dedicated precision timing card option

 
Full specification and details can be found in the Product Datasheet PDF file

In Stock: 2-3 Weeks
£779.13
£934.96 Inc Vat
Mellanox MCX653106A-ECAT-SP CONNECTX-6 VPI Adapter Card
Dual-Port with Tall Bracket

ConnectX-6 VPI Card HDR100 EDR InfiniBand and 100GbE Ethernet Adapter Card

Featuring In-Network Computing for Enhanced Efficiency and Scalability

ConnectX-6 Virtual Protocol Interconnect (VPI) cards are a groundbreaking addition to the ConnectX series of industry-leading network adapter cards. Providing two ports of HDR100 EDR InfiniBand and 100GbE Ethernet connectivity, sub-600ns latency and 215 million messages per second, ConnectX-6 VPI cards enable the highest performance and most flexible solution aimed at meeting the continually growing demands of data center applications. In addition to all the existing innovative features of past versions, ConnectX-6 cards offer a number of enhancements to further improve performance and scalability.

ConnectX-6 VPI series supports HDR, HDR100, EDR, FDR, QDR, DDR and SDR InfiniBand speeds as well as 200, 100, 50, 40, 25, and 10Gb/s Ethernet speeds.




Benefits: • Industry-leading throughput, low CPU utilization and high message rate

• Highest performance and most intelligent fabric for compute and storage infrastructures

• Cutting-edge performance in virtualized networks including Network Function Virtualization (NFV)

• Host Chaining technology for economical rack design

• Smart interconnect for x86, Power, Arm, GPU and FPGA-based compute and storage platforms

• Flexible programmable pipeline for new network flows

• Efficient service chaining enablement

• Increased I/O consolidation efficiencies, reducing data center costs & complexity


Complex workloads demand ultra-fast processing of high-resolution simulations, extreme-size datasets, and highly-parallelized algorithms. As these computing requirements continue to grow, NVIDIA Quantum InfiniBand—the world’s only fully offloadable, In-Network Computing acceleration technology—provides the dramatic leap in performance needed to achieve unmatched results in high performance computing (HPC), AI, and hyperscale cloud infrastructures—with less cost and complexity.

NVIDIA® ConnectX®-6 InfiniBand smart adapter cards are a key element in the NVIDIA Quantum InfiniBand platform. ConnectX-6 provides up to two ports of 200Gb/s InfiniBand and Ethernet(1) connectivity with extremely low latency, high message rate, smart offloads, and NVIDIA In-Network Computing acceleration that improve performance and scalability.




Features: • Up to HDR100 EDR InfiniBand and 100GbE Ethernet connectivity per port

• Max bandwidth of 200Gb/s

• Up to 215 million messages/sec

• Sub 0.6 usec latency

• Block-level XTS-AES mode hardware encryption

• FIPS capable

• Advanced storage capabilities including block-level encryption and checksum offloads

• Supports both 50G SerDes (PAM4) and 25G SerDes (NRZ) based ports

• Best-in-class packet pacing with sub-nanosecond accuracy

• PCIe Gen 3.0 and Gen 4.0 support

• RoHS compliant

• ODCC compatible




High Performance Computing Environments With its NVIDIA In-Network Computing and In-Network Memory capabilities, ConnectX-6 offloads computation even further to the network, saving CPU cycles and increasing network efficiency. ConnectX-6 utilizes remote direct memory access (RDMA) technology as defined in the InfiniBand Trade Association (IBTA) specification, delivering low latency and high performance. ConnectX-6 enhances RDMA network capabilities even further by delivering end-to-end packet-level flow control.
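
As a concrete illustration of the RDMA model described above, the minimal libibverbs sketch below opens an RDMA device, allocates a protection domain, and registers a buffer for remote access. It is a generic verbs example rather than anything ConnectX-specific; queue-pair setup, connection management, and work-request posting are omitted for brevity.

```c
/* Minimal libibverbs resource setup underlying RDMA. */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (!ctx) {
        fprintf(stderr, "failed to open %s\n", ibv_get_device_name(devs[0]));
        return 1;
    }

    struct ibv_pd *pd = ibv_alloc_pd(ctx);
    if (!pd) {
        fprintf(stderr, "failed to allocate protection domain\n");
        return 1;
    }

    /* Register 1 MiB that remote peers may write into via RDMA. */
    size_t len = 1 << 20;
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) {
        fprintf(stderr, "memory registration failed\n");
        return 1;
    }

    printf("registered %zu bytes on %s: lkey=0x%x rkey=0x%x\n",
           len, ibv_get_device_name(devs[0]), mr->lkey, mr->rkey);

    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    free(buf);
    return 0;
}
```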




Machine Learning and Big Data Environments Data analytics has become an essential function within many enterprise data centers, clouds, and hyperscale platforms. Machine learning (ML) relies on especially high throughput and low latency to train deep neural networks and improve recognition and classification accuracy. With its 200Gb/s throughput, ConnectX-6 is an excellent solution to provide ML applications with the levels of performance and scalability that they require.




Security Including Block-Level Encryption ConnectX-6 block-level encryption offers a critical innovation to network security. As data in transit is stored or retrieved, it undergoes encryption and decryption. ConnectX-6 hardware offloads the IEEE AES-XTS encryption/decryption from the CPU, saving latency and CPU utilization. It also guarantees protection for users sharing the same resources through the use of dedicated encryption keys.

By performing block storage encryption in the adapter, ConnectX-6 eliminates the need for self-encrypted disks. This gives customers the freedom to choose their preferred storage device, including byte addressable and NVDIMM devices that traditionally do not provide encryption. Moreover, ConnectX-6 can offer Federal Information Processing Standards (FIPS) compliance.
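
For reference, the transform being offloaded is standard XTS-AES. The sketch below shows the equivalent software operation on a single 512-byte sector using OpenSSL's EVP API; it illustrates the cipher only, not the adapter's programming interface, and the key and tweak values are placeholders.

```c
/*
 * Illustration only: software AES-256-XTS over one 512-byte "sector"
 * using OpenSSL's EVP API. ConnectX-6 performs the equivalent XTS-AES
 * transform in hardware; this is not the adapter's programming interface.
 */
#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>
#include <openssl/rand.h>

int main(void)
{
    unsigned char key[64];           /* AES-256-XTS uses two 256-bit keys. */
    unsigned char tweak[16] = {0};   /* Typically derived from the sector number. */
    unsigned char sector[512], out[512];
    int outlen = 0, tmplen = 0;

    RAND_bytes(key, sizeof(key));
    memset(sector, 0xab, sizeof(sector));
    tweak[0] = 42;                   /* Example: sector number 42. */

    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    if (!ctx ||
        EVP_EncryptInit_ex(ctx, EVP_aes_256_xts(), NULL, key, tweak) != 1 ||
        EVP_EncryptUpdate(ctx, out, &outlen, sector, sizeof(sector)) != 1 ||
        EVP_EncryptFinal_ex(ctx, out + outlen, &tmplen) != 1) {
        fprintf(stderr, "XTS encryption failed\n");
        return 1;
    }

    printf("encrypted %d bytes\n", outlen + tmplen);
    EVP_CIPHER_CTX_free(ctx);
    return 0;
}
```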




Bring NVMe-oF to Storage Environments NVMe storage devices are gaining momentum, offering very fast access to storage media. The evolving NVMe over Fabrics (NVMe-oF) protocol leverages RDMA connectivity to remotely access NVMe storage devices efficiently, while keeping the end-to-end NVMe model at lowest latency. With its NVMe-oF target and initiator offloads, ConnectX-6 brings further optimization to NVMe-oF, enhancing CPU utilization and scalability.
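
On the host side, attaching to an NVMe-oF target over RDMA is typically done with the standard nvme-cli utility; the sketch below simply wraps that command from C for illustration. The target address 192.0.2.20 and the subsystem NQN are placeholder assumptions, and the RDMA transport rides on the adapter's RoCE or InfiniBand engine.

```c
/* Host-side sketch: connect to an NVMe-oF target over RDMA via nvme-cli. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *cmd =
        "nvme connect --transport=rdma --traddr=192.0.2.20 "
        "--trsvcid=4420 --nqn=nqn.2014-08.org.example:subsys1";

    int rc = system(cmd);
    if (rc != 0) {
        fprintf(stderr, "nvme connect failed (rc=%d)\n", rc);
        return 1;
    }
    printf("connected; new namespaces appear as /dev/nvmeXnY\n");
    return 0;
}
```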




Portfolio of Smart Adapters ConnectX-6 is available in two form factors: low-profile stand-up PCIe and Open Compute Project (OCP) Spec 3.0 cards with QSFP connectors. Single-port, HDR, stand-up PCIe adapters are available based on either ConnectX-6 or ConnectX-6 DE (ConnectX-6 Dx enhanced for HPC applications).

In addition, specific PCIe stand-up cards are available with a cold plate for insertion into liquid-cooled Intel Server System D50TNP platforms.




Socket Direct ConnectX-6 also provides options for NVIDIA Socket Direct™ configurations, which improves the performance of multi-socket servers by enabling each of their CPUs to access the network through its dedicated PCIe interface. This enables data to bypass the QPI (UPI) and the other CPU, improving latency, performance, and CPU utilization.

Socket Direct also enables NVIDIA GPUDirect® RDMA for all CPU/GPU pairs by ensuring that GPUs are linked to the CPUs closest to the adapter card. Socket Direct enables Intel® DDIO optimization on both sockets by creating a direct connection between the sockets and the adapter card.

Socket Direct technology is enabled by a main card housing the ConnectX-6 and an auxiliary PCIe card bringing in the remaining PCIe lanes. The ConnectX-6 Socket Direct card is installed into two PCIe x16 slots and connected using a harness. The two PCIe x16 slots may also be connected to the same CPU. In this case, the main advantage of the technology lies in delivering 200Gb/s to servers with PCIe Gen3-only support.




Host Management Host Management includes NC-SI over MCTP over SMBus, and MCTP over PCIe—Baseboard Management Controller (BMC) interface, as well as PLDM for Monitor and Control DSP0248 and PLDM for Firmware Update DSP0267.




Broad Software Support All ConnectX adapters are supported by a full suite of drivers for major Linux distributions, as well as Microsoft® Windows® Server and VMware vSphere®.

HPC software libraries supported include HPC-X, OpenMPI, MVAPICH, MPICH, OpenSHMEM, PGAS, and varied commercial packages.
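
As an example of the traffic these libraries generate, the minimal MPI program below performs a tagged point-to-point send/receive, the message-matching pattern that ConnectX-6 can offload in hardware (MPI tag matching). It is a generic MPI sketch and runs unchanged over any of the MPI implementations listed above.

```c
/*
 * Minimal MPI point-to-point example. Build with an MPI compiler wrapper
 * and run with, e.g., "mpirun -np 2 ./a.out".
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, value = 0;
    const int tag = 7;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, tag, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, tag, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
```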

Overlay Networks
• RoCE over overlay networks
• Stateless offloads for overlay network tunneling protocols
• Hardware offload of encapsulation and decapsulation of VXLAN, NVGRE, and Geneve overlay networks

Storage Offloads
• Block-level encryption: XTS-AES 256/512-bit key
• NVMe over Fabrics offloads for target machine
• T10-DIF signature handover operation at wire speed, for ingress and egress traffic
• Storage protocols: SRP, iSER, NFS RDMA, SMB Direct, and NVMe-oF

  InfiniBand
• 200Gb/s and lower rates
• IBTA Specification 1.3 compliant
• RDMA, send/receive semantics
• Hardware-based congestion control
• Atomic operations
• 16 million I/O channels
• 256 to 4Kbyte MTU, 2Gbyte messages
• 8 virtual lanes + VL15

Remote Boot
• Remote boot over InfiniBand
• Remote boot over Ethernet
• Remote boot over iSCSI
• Unified Extensible Firmware Interface (UEFI)
• Pre-execution Environment (PXE)

  Hardware-Based I/O Virtualization
• Single Root IOV (SR-IOV)
• Address translation and protection
• VMware NetQueue support
-SR-IOV: Up to 1K virtual functions
-SR-IOV: Up to 8 physical functions per host
• Virtualization hierarchies (e.g., NPAR)
-Virtualizing physical functions on a physical port
-SR-IOV on every physical function
• Configurable and user-programmable QoS
• Guaranteed QoS for VMs

Management and Control
• NC-SI, MCTP over SMBus and MCTP over PCIe—Baseboard Management Controller interface
• PLDM for Monitor and Control DSP0248
• PLDM for Firmware Update DSP0267
• SDN management interface for managing the eSwitch
• I2C interface for device control and configuration
• General Purpose I/O pins
• SPI interface to flash
• JTAG IEEE 1149.1 and IEEE 1149.6

  Enhanced Features
• Hardware-based reliable transport
• Collective operations offloads
• Vector collective operations offloads
• NVIDIA PeerDirect® RDMA (a.k.a. NVIDIA GPUDirect) communication acceleration
• 64/66 encoding
• Enhanced atomic operations
• Advanced memory mapping support, allowing user mode registration and remapping of memory (UMR)
• Extended Reliable Connected transport (XRC)
• Dynamically Connected Transport (DCT)
• On demand paging (ODP)
• MPI tag matching
• Rendezvous protocol offload
• Out-of-order RDMA supporting Adaptive Routing
• Burst buffer offload
• In-Network Memory registration-free RDMA memory access

CPU Offloads
• RDMA over Converged Ethernet (RoCE)
• TCP/UDP/IP stateless offload
• LSO, LRO, checksum offload
• RSS (also on encapsulated packet), TSS, HDS, VLAN and MPLS tag insertion/stripping, receive flow steering
• Data plane development kit (DPDK) for kernel bypass applications
• Open vSwitch (OVS) offload using ASAP2
• Flexible match-action flow tables
• Tunneling encapsulation/decapsulation
• Intelligent interrupt coalescence
• Header rewrite supporting hardware offload of NAT router

 
Full specification and details can be found in the Product Datasheet PDF file

End of Life Product
£ POA
£ POA Inc Vat
Mellanox MCX556A-ECAT CONNECTX-5 VPI Adapter Card
Dual-Port with Tall Bracket ROHS R6

Connectx-5 Infiniband Adapter Card

100Gb/s InfiniBand & Ethernet (VPI) Adapter Card

ConnectX-5 network adapter cards with Virtual Protocol Interconnect (VPI), supporting EDR 100Gb/s InfiniBand and 100Gb/s Ethernet connectivity, provide the highest performance and most flexible solution for high-performance, Web 2.0, Cloud, data analytics, database, and storage platforms.




Benefits: • Up to 100Gb/s connectivity per port

• Industry-leading throughput, low latency, low CPU utilization and high message rate

• Innovative rack design for storage and Machine Learning based on Host Chaining technology

• Smart interconnect for x86, Power, Arm, and GPU-based compute & storage platforms

• Advanced storage capabilities including NVMe over Fabric offloads

• Intelligent network adapter supporting flexible pipeline programmability

• Cutting-edge performance in virtualized networks including Network Function Virtualization (NFV)

• Enabler for efficient service chaining capabilities

• Efficient I/O consolidation, lowering data center costs and complexity


With the exponential growth of data being shared and stored by applications and social networks, the need for high-speed and high performance compute and storage data centers is skyrocketing.

ConnectX-5 provides exceptional high performance for the most demanding data centers, public and private clouds, Web2.0 and Big Data applications, as well as High-Performance Computing (HPC) and Storage systems, enabling today's corporations to meet the demands of the data explosion.




Features: • Tag matching and rendezvous offloads

• Adaptive routing on reliable transport

• Burst buffer offloads for background checkpointing

• NVMe over Fabric (NVMe-oF) offloads

• Back-end switch elimination by host chaining

• Embedded PCIe switch

• Enhanced vSwitch/vRouter offloads

• Flexible pipeline

• RoCE for overlay networks

• PCIe Gen 4.0 support

• RoHS compliant

• ODCC compatible




HPC Environments ConnectX-5 offers enhancements to HPC infrastructures by providing MPI and SHMEM/PGAS and rendezvous tag matching offload, hardware support for out-of-order RDMA write and read operations, as well as additional network atomic and PCIe atomic operations support.

ConnectX-5 enhances RDMA network capabilities by completing the switch adaptive-routing capabilities and supporting data delivered out-of-order, while maintaining ordered completion semantics, providing multipath reliability, and efficient support for all network topologies, including DragonFly and DragonFly+.

ConnectX-5 also supports burst buffer offload for background checkpointing without interfering in the main CPU operations, and the innovative dynamic connected transport (DCT) service to ensure extreme scalability for compute and storage systems.




Storage Environments NVMe storage devices are gaining popularity, offering very fast storage access. The NVMe over Fabrics (NVMe-oF) protocol leverages RDMA connectivity for remote access. ConnectX-5 offers further enhancements by providing NVMe-oF target offloads, enabling highly efficient NVMe storage access with no CPU intervention, and thus improved performance and lower latency.

Standard block and file access protocols can leverage RDMA for high-performance storage access. A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks.




Adapter Card Portfolio ConnectX-5 InfiniBand adapter cards are available in several form factors, including low-profile stand-up PCIe, Open Compute Project (OCP) Spec 2.0 Type 1, and OCP 2.0 Type 2.

NVIDIA Multi-Host technology allows multiple hosts to be connected into a single adapter by separating the PCIe interface into multiple and independent interfaces.

The portfolio also offers NVIDIA Socket Direct configurations that enable servers without x16 PCIe slots to split the card’s 16-lane PCIe bus into two 8-lane buses on dedicated cards connected by a harness. This provides 100Gb/s port speed even to servers without a x16 PCIe slot.

Socket Direct also enables NVIDIA GPUDirect® RDMA for all CPU/GPU pairs by ensuring that all GPUs are linked to CPUs close to the adapter card, and enables Intel® DDIO on both sockets by creating a direct connection between the sockets and the adapter card.


Ethernet
• 100GbE / 50GbE / 40GbE / 25GbE / 10GbE / 1GbE
• Jumbo frame support (9.6KB)

HPC Software Libraries
• NVIDIA HPC-X, OpenMPI, IBM PE, OSU MPI (MVAPICH/2), Intel MPI
• Platform MPI, UPC, Open SHMEM

  InfiniBand
• 100Gb/s and lower speed
• IBTA Specification 1.3 compliant
• RDMA, send/receive semantics
• Hardware-based congestion control
• Atomic operations
• 16 million I/O channels
• 256 to 4Kbyte MTU, 2Gbyte messages
• 8 virtual lanes + VL15

Remote Boot
• Remote boot over InfiniBand
• Remote boot over Ethernet
• Remote boot over iSCSI
• Unified extensible firmware Interface (UEFI)
• Pre-execution environment (PXE)

  Management and Control
• NC-SI over MCTP over SMBus and NC-SI over MCTP over PCIe—Baseboard Management Controller interface
• PLDM for Monitor and Control DSP0248
• PLDM for Firmware Update DSP0267
• SDN management interface for managing the eSwitch
• I2C interface for device control and configuration
• General purpose I/O pins
• SPI interface to flash
• JTAG IEEE 1149.1 and IEEE 1149.6

Hardware-Based I/O Virtualization
• Single root IO virtualization (SR-IOV)
• Address translation and protection
• VMware NetQueue support
- SR-IOV: up to 512 virtual functions
- SR-IOV: up to 8 physical functions per host
• Virtualization hierarchies (e.g., NPAR when enabled)
- Virtualizing physical functions on a physical port
- SR-IOV on every physical function
• Configurable and user-programmable QoS
• Guaranteed QoS for VMs

  Storage Offloads
• NVMe over Fabrics offloads for target machine
• T10 DIF—Signature handover operation at wire speed for ingress and egress traffic
• Storage protocols: SRP, iSER, NFS RDMA, SMB Direct, NVMe-oF

Overlay Networks
• RoCE over overlay networks
• Stateless offloads for overlay network tunneling protocols
• Hardware offload of encapsulation and decapsulation of VXLAN, NVGRE, and GENEVE overlay networks

  Enhanced Features
• Hardware-based reliable transport
• Collective operations offloads
• Vector collective operations offloads
• NVIDIA PeerDirect™ RDMA (aka GPUDirect) communication acceleration
• 64/66 encoding
• Extended reliable connected transport (XRC)
• Dynamically connected transport (DCT)
• Enhanced atomic operations
• Advanced memory mapping support, allowing user mode registration and remapping of memory (UMR)
• On-demand paging (ODP)
• MPI tag matching
• Rendezvous protocol offload
• Out-of-order RDMA supporting adaptive routing
• Burst buffer offload
• In-Network Memory registration-free RDMA memory access

 
Full specification and details can be found in the Product Datasheet PDF file

In Stock: 2-3 Weeks
£ POA
£ POA Inc Vat
Mellanox MCX556M-ECAT-S25 CONNECTX-5 VPI Adapter Card
With Socket Direct Supporting Dual-Socket Server EDR IB (100GB/S) and 100GBE Dual-Port QSFP28 2X PCIE3.0 X8 25CM Harness Tall Bracket ROHS R6

Connectx-5 Infiniband Adapter Card

ConnectX-5 VPI Socket Direct EDR IB and 100GbE InfiniBand & Ethernet Adapter Card

Intelligent RDMA-enabled network adapter card with advanced application offload capabilities supporting 100Gb/s for servers without x16 PCIe slots.




Benefits: • Up to 100Gb/s connectivity per port

• Industry-leading throughput, low latency, low CPU utilization and high message rate

• Low latency for dual-socket servers in environments with multiple network flows

• Innovative rack design for storage and Machine Learning based on Host Chaining technology

• Smart interconnect for x86, Power, Arm, and GPU-based compute and storage platforms

• Advanced storage capabilities including NVMe over Fabric offloads

• Intelligent network adapter supporting flexible pipeline programmability

• Cutting-edge performance in virtualized networks including Network Function Virtualization (NFV)

• Enabler for efficient service chaining capabilities

• Efficient I/O consolidation, lowering data center costs and complexity


ConnectX-5 Socket Direct with Virtual Protocol Interconnect supports two ports of 100Gb/s InfiniBand and Ethernet connectivity with very low latency, a very high message rate, and OVS and NVMe over Fabrics offloads, providing the highest performance and most flexible solution for the most demanding applications and markets: Machine Learning, Data Analytics, and more.




Features: • Socket Direct, enabling 100Gb/s for servers without x16 PCIe slots

• Tag matching and rendezvous offloads

• Adaptive routing on reliable transport

• Burst buffer offloads for background checkpointing

• NVMe over Fabric (NVMe-oF) offloads

• Back-end switch elimination by host chaining

• Enhanced vSwitch/vRouter offloads

• Flexible pipeline

• RoCE for overlay networks

• RoHS compliant

• ODCC compatible




HPC Environments ConnectX-5 offers enhancements to HPC infrastructures by providing MPI and SHMEM/PGAS and rendezvous tag matching offload, hardware support for out-of-order RDMA write and read operations, as well as additional network atomic and PCIe atomic operations support.

ConnectX-5 enhances RDMA network capabilities by completing the switch adaptive-routing capabilities and supporting data delivered out-of-order, while maintaining ordered completion semantics, providing multipath reliability, and efficient support for all network topologies, including DragonFly and DragonFly+.

ConnectX-5 also supports burst buffer offload for background checkpointing without interfering in the main CPU operations, and the innovative dynamic connected transport (DCT) service to ensure extreme scalability for compute and storage systems.




Storage Environments NVMe storage devices are gaining popularity, offering very fast storage access. The NVMe over Fabrics (NVMe-oF) protocol leverages RDMA connectivity for remote access. ConnectX-5 offers further enhancements by providing NVMe-oF target offloads, enabling highly efficient NVMe storage access with no CPU intervention, and thus improved performance and lower latency.

Standard block and file access protocols can leverage RDMA for high-performance storage access. A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks.




Adapter Card Portfolio ConnectX-5 InfiniBand adapter cards are available in several form factors, including low-profile stand-up PCIe, Open Compute Project (OCP) Spec 2.0 Type 1, and OCP 2.0 Type 2.

NVIDIA Multi-Host technology allows multiple hosts to be connected into a single adapter by separating the PCIe interface into multiple and independent interfaces.

The portfolio also offers NVIDIA Socket Direct configurations that enable servers without x16 PCIe slots to split the card’s 16-lane PCIe bus into two 8-lane buses on dedicated cards connected by a harness. This provides 100Gb/s port speed even to servers without a x16 PCIe slot.

Socket Direct also enables NVIDIA GPUDirect® RDMA for all CPU/GPU pairs by ensuring that all GPUs are linked to CPUs close to the adapter card, and enables Intel® DDIO on both sockets by creating a direct connection between the sockets and the adapter card.


Ethernet
• 100GbE / 50GbE / 40GbE / 25GbE / 10GbE / 1GbE
• Jumbo frame support (9.6KB)

HPC Software Libraries
• NVIDIA HPC-X, OpenMPI, IBM PE, OSU MPI (MVAPICH/2), Intel MPI
• Platform MPI, UPC, Open SHMEM

  InfiniBand
• 100Gb/s and lower speed
• IBTA Specification 1.3 compliant
• RDMA, send/receive semantics
• Hardware-based congestion control
• Atomic operations
• 16 million I/O channels
• 256 to 4Kbyte MTU, 2Gbyte messages
• 8 virtual lanes + VL15

Remote Boot
• Remote boot over InfiniBand
• Remote boot over Ethernet
• Remote boot over iSCSI
• Unified extensible firmware Interface (UEFI)
• Pre-execution environment (PXE)

  Management and Control
• NC-SI over MCTP over SMBus and NC-SI over MCTP over PCIe—Baseboard Management Controller interface
• PLDM for Monitor and Control DSP0248
• PLDM for Firmware Update DSP0267
• SDN management interface for managing the eSwitch
• I2C interface for device control and configuration
• General purpose I/O pins
• SPI interface to flash
• JTAG IEEE 1149.1 and IEEE 1149.6

Hardware-Based I/O Virtualization
• Single root IO virtualization (SR-IOV)
• Address translation and protection
• VMware NetQueue support
- SR-IOV: up to 512 virtual functions
- SR-IOV: up to 8 physical functions per host
• Virtualization hierarchies (e.g., NPAR when enabled)
- Virtualizing physical functions on a physical port
- SR-IOV on every physical function
• Configurable and user-programmable QoS
• Guaranteed QoS for VMs

  Storage Offloads
• NVMe over Fabrics offloads for target machine
• T10 DIF—Signature handover operation at wire speed for ingress and egress traffic
• Storage protocols: SRP, iSER, NFS RDMA, SMB Direct, NVMe-oF

Overlay Networks
• RoCE over overlay networks
• Stateless offloads for overlay network tunneling protocols
• Hardware offload of encapsulation and decapsulation of VXLAN, NVGRE, and GENEVE overlay networks

  Enhanced Features
• Hardware-based reliable transport
• Collective operations offloads
• Vector collective operations offloads
• NVIDIA PeerDirect™ RDMA (aka GPUDirect) communication acceleration
• 64/66 encoding
• Extended reliable connected transport (XRC)
• Dynamically connected transport (DCT)
• Enhanced atomic operations
• Advanced memory mapping support, allowing user mode registration and remapping of memory (UMR)
• On-demand paging (ODP)
• MPI tag matching
• Rendezvous protocol offload
• Out-of-order RDMA supporting adaptive routing
• Burst buffer offload
• In-Network Memory registration-free RDMA memory access

 
Full specification and details can be found in the Product Datasheet PDF file

In Stock: 2-3 Weeks
£837.54
£1005.05 Inc Vat
Mellanox MCX556A-EDAT CONNECTX-5 EX VPI Adapter Card
Tall Bracket ROHS R6

Connectx-5 Infiniband Adapter Card

100Gb/s InfiniBand & Ethernet (VPI) Adapter Card – ConnectX-5 Ex

ConnectX-5 network adapter cards with Virtual Protocol Interconnect (VPI), supporting EDR 100Gb/s InfiniBand and 100Gb/s Ethernet connectivity, provide the highest performance and most flexible solution for high-performance, Web 2.0, Cloud, data analytics, database, and storage platforms.




Benefits: • Up to 100Gb/s connectivity per port

• Industry-leading throughput, low latency, low CPU utilization and high message rate

• Innovative rack design for storage and Machine Learning based on Host Chaining technology

• Smart interconnect for x86, Power, Arm, and GPU-based compute & storage platforms

• Advanced storage capabilities including NVMe over Fabric offloads

• Intelligent network adapter supporting flexible pipeline programmability

• Cutting-edge performance in virtualized networks including Network Function Virtualization (NFV)

• Enabler for efficient service chaining capabilities

• Efficient I/O consolidation, lowering data center costs and complexity


With the exponential growth of data being shared and stored by applications and social networks, the need for high-speed and high performance compute and storage data centers is skyrocketing.

ConnectX-5 provides exceptional high performance for the most demanding data centers, public and private clouds, Web2.0 and Big Data applications, as well as High-Performance Computing (HPC) and Storage systems, enabling today's corporations to meet the demands of the data explosion.




Features: • Tag matching and rendezvous offloads

• Adaptive routing on reliable transport

• Burst buffer offloads for background checkpointing

• NVMe over Fabric (NVMe-oF) offloads

• Back-end switch elimination by host chaining

• Embedded PCIe switch

• Enhanced vSwitch/vRouter offloads

• Flexible pipeline

• RoCE for overlay networks

• PCIe Gen 4.0 support

• RoHS compliant

• ODCC compatible




HPC Environments ConnectX-5 offers enhancements to HPC infrastructures by providing MPI and SHMEM/PGAS and rendezvous tag matching offload, hardware support for out-of-order RDMA write and read operations, as well as additional network atomic and PCIe atomic operations support.

ConnectX-5 enhances RDMA network capabilities by completing the switch adaptive-routing capabilities and supporting data delivered out-of-order, while maintaining ordered completion semantics, providing multipath reliability, and efficient support for all network topologies, including DragonFly and DragonFly+.

ConnectX-5 also supports burst buffer offload for background checkpointing without interfering in the main CPU operations, and the innovative dynamic connected transport (DCT) service to ensure extreme scalability for compute and storage systems.




Storage Environments NVMe storage devices are gaining popularity, offering very fast storage access. The NVMe over Fabrics (NVMe-oF) protocol leverages RDMA connectivity for remote access. ConnectX-5 offers further enhancements by providing NVMe-oF target offloads, enabling highly efficient NVMe storage access with no CPU intervention, and thus improved performance and lower latency.

Standard block and file access protocols can leverage RDMA for high-performance storage access. A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks.




Adapter Card Portfolio ConnectX-5 InfiniBand adapter cards are available in several form factors, including low-profile stand-up PCIe, Open Compute Project (OCP) Spec 2.0 Type 1, and OCP 2.0 Type 2.

NVIDIA Multi-Host technology allows multiple hosts to be connected into a single adapter by separating the PCIe interface into multiple and independent interfaces.

The portfolio also offers NVIDIA Socket Direct configurations that enable servers without x16 PCIe slots to split the card’s 16-lane PCIe bus into two 8-lane buses on dedicated cards connected by a harness. This provides 100Gb/s port speed even to servers without a x16 PCIe slot.

Socket Direct also enables NVIDIA GPUDirect® RDMA for all CPU/GPU pairs by ensuring that all GPUs are linked to CPUs close to the adapter card, and enables Intel® DDIO on both sockets by creating a direct connection between the sockets and the adapter card.


Ethernet
• 100GbE / 50GbE / 40GbE / 25GbE / 10GbE / 1GbE
• Jumbo frame support (9.6KB)

HPC Software Libraries
• NVIDIA HPC-X, OpenMPI, IBM PE, OSU MPI (MVAPICH/2), Intel MPI
• Platform MPI, UPC, Open SHMEM

  InfiniBand
• 100Gb/s and lower speed
• IBTA Specification 1.3 compliant
• RDMA, send/receive semantics
• Hardware-based congestion control
• Atomic operations
• 16 million I/O channels
• 256 to 4Kbyte MTU, 2Gbyte messages
• 8 virtual lanes + VL15

Remote Boot
• Remote boot over InfiniBand
• Remote boot over Ethernet
• Remote boot over iSCSI
• Unified extensible firmware Interface (UEFI)
• Pre-execution environment (PXE)

  Management and Control
• NC-SI over MCTP over SMBus and NC-SI over MCTP over PCIe—Baseboard Management Controller interface
• PLDM for Monitor and Control DSP0248
• PLDM for Firmware Update DSP0267
• SDN management interface for managing the eSwitch
• I2C interface for device control and configuration
• General purpose I/O pins
• SPI interface to flash
• JTAG IEEE 1149.1 and IEEE 1149.6

Hardware-Based I/O Virtualization
• Single root IO virtualization (SR-IOV)
• Address translation and protection
• VMware NetQueue support
- SR-IOV: up to 512 virtual functions
- SR-IOV: up to 8 physical functions per host
• Virtualization hierarchies (e.g., NPAR when enabled)
- Virtualizing physical functions on a physical port
- SR-IOV on every physical function
• Configurable and user-programmable QoS
• Guaranteed QoS for VMs

  Storage Offloads
• NVMe over Fabrics offloads for target machine
• T10 DIF—Signature handover operation at wire speed for ingress and egress traffic
• Storage protocols: SRP, iSER, NFS RDMA, SMB Direct, NVMe-oF

Overlay Networks
• RoCE over overlay networks
• Stateless offloads for overlay network tunneling protocols
• Hardware offload of encapsulation and decapsulation of VXLAN, NVGRE, and GENEVE overlay networks

  Enhanced Features
• Hardware-based reliable transport
• Collective operations offloads
• Vector collective operations offloads
• NVIDIA PeerDirect™ RDMA (aka GPUDirect) communication acceleration
• 64/66 encoding
• Extended reliable connected transport (XRC)
• Dynamically connected transport (DCT)
• Enhanced atomic operations
• Advanced memory mapping support, allowing user mode registration and remapping of memory (UMR)
• On-demand paging (ODP)
• MPI tag matching
• Rendezvous protocol offload
• Out-of-order RDMA supporting adaptive routing
• Burst buffer offload
• In-Network Memory registration-free RDMA memory access

 
Full specification and details can be found in the Product Datasheet PDF file

In Stock: 2-3 Weeks
£ POA
£ POA Inc Vat
Mellanox MCX654105A-HCAT CONNECTX-6 VPI Adapter Card Kit
HDR IB (200GB/S) and 200GBE Single-Port QSFP56 Socket Direct 2X PCIE3.0 X16 Tall Brackets

ConnectX-6 VPI Card 200Gb/s InfiniBand & Ethernet Adapter Card

Featuring In-Network Computing for Enhanced Efficiency and Scalability

Socket Direct 2x PCIe 3.0 x16

ConnectX-6 Virtual Protocol Interconnect (VPI) cards are a groundbreaking addition to the ConnectX series of industry-leading network adapter cards. Providing two ports of 200Gb/s for InfiniBand and Ethernet connectivity, sub-600ns latency and 215 million messages per second, ConnectX-6 VPI cards enable the highest performance and most flexible solution aimed at meeting the continually growing demands of data center applications. In addition to all the existing innovative features of past versions, ConnectX-6 cards offer a number of enhancements to further improve performance and scalability.




Benefits: • Industry-leading throughput, low CPU utilization and high message rate

• Highest performance and most intelligent fabric for compute and storage infrastructures

• Cutting-edge performance in virtualized networks including Network Function Virtualization (NFV)

• Host Chaining technology for economical rack design

• Smart interconnect for x86, Power, Arm, GPU and FPGA-based compute and storage platforms

• Flexible programmable pipeline for new network flows

• Enabler for efficient service chaining capabilities

• Efficient I/O consolidation, lowering data center costs and complexity


ConnectX-6 VPI supports HDR, HDR100, EDR, FDR, QDR, DDR and SDR InfiniBand speeds as well as 200, 100, 50, 40, 25, and 10Gb/s Ethernet speeds.

ConnectX®-6 InfiniBand smart adapter cards are a key element in the NVIDIA Quantum InfiniBand platform. ConnectX-6 provides up to two ports of 200Gb/s InfiniBand and Ethernet(1) connectivity with extremely low latency, high message rate, smart offloads, and NVIDIA In-Network Computing acceleration that improve performance and scalability.




Features: • Up to 200Gb/s connectivity per port

• Max bandwidth of 200Gb/s

• Up to 215 million messages/sec

• Sub 0.6 usec latency

• Block-level XTS-AES mode hardware encryption

• FIPS capable

• Advanced storage capabilities including block-level encryption and checksum offloads

• Supports both 50G SerDes (PAM4) and 25G SerDes (NRZ) based ports

• Best-in-class packet pacing with sub-nanosecond accuracy

• PCIe Gen 3.0 and Gen 4.0 support

• RoHS compliant

• ODCC compatible




High Performance Computing Environments With its NVIDIA In-Network Computing and In-Network Memory capabilities, ConnectX-6 offloads computation even further to the network, saving CPU cycles and increasing network efficiency. ConnectX-6 utilizes remote direct memory access (RDMA) technology as defined in the InfiniBand Trade Association (IBTA) specification, delivering low latency and high performance. ConnectX-6 enhances RDMA network capabilities even further by delivering end-to-end packet-level flow control.




Machine Learning and Big Data Environments Data analytics has become an essential function within many enterprise data centers, clouds, and hyperscale platforms. Machine learning (ML) relies on especially high throughput and low latency to train deep neural networks and improve recognition and classification accuracy. With its 200Gb/s throughput, ConnectX-6 is an excellent solution to provide ML applications with the levels of performance and scalability that they require.




Security Including Block-Level Encryption ConnectX-6 block-level encryption offers a critical innovation to network security. As data in transit is stored or retrieved, it undergoes encryption and decryption. ConnectX-6 hardware offloads the IEEE AES-XTS encryption/decryption from the CPU, saving latency and CPU utilization. It also guarantees protection for users sharing the same resources through the use of dedicated encryption keys.

By performing block storage encryption in the adapter, ConnectX-6 eliminates the need for self-encrypted disks. This gives customers the freedom to choose their preferred storage device, including byte addressable and NVDIMM devices that traditionally do not provide encryption. Moreover, ConnectX-6 can offer Federal Information Processing Standards (FIPS) compliance.




Bring NVMe-oF to Storage Environments NVMe storage devices are gaining momentum, offering very fast access to storage media. The evolving NVMe over Fabrics (NVMe-oF) protocol leverages RDMA connectivity to remotely access NVMe storage devices efficiently, while keeping the end-to-end NVMe model at lowest latency. With its NVMe-oF target and initiator offloads, ConnectX-6 brings further optimization to NVMe-oF, enhancing CPU utilization and scalability.




Portfolio of Smart Adapters ConnectX-6 is available in two form factors: low-profile stand-up PCIe and Open Compute Project (OCP) Spec 3.0 cards with QSFP connectors. Single-port, HDR, stand-up PCIe adapters are available based on either ConnectX-6 or ConnectX-6 DE (ConnectX-6 Dx enhanced for HPC applications).

In addition, specific PCIe stand-up cards are available with a cold plate for insertion into liquid-cooled Intel Server System D50TNP platforms.




Socket Direct ConnectX-6 also provides options for NVIDIA Socket Direct™ configurations, which improves the performance of multi-socket servers by enabling each of their CPUs to access the network through its dedicated PCIe interface. This enables data to bypass the QPI (UPI) and the other CPU, improving latency, performance, and CPU utilization.

Socket Direct also enables NVIDIA GPUDirect® RDMA for all CPU/GPU pairs by ensuring that GPUs are linked to the CPUs closest to the adapter card. Socket Direct enables Intel® DDIO optimization on both sockets by creating a direct connection between the sockets and the adapter card.

Socket Direct technology is enabled by a main card housing the ConnectX-6 and an auxiliary PCIe card bringing in the remaining PCIe lanes. The ConnectX-6 Socket Direct card is installed into two PCIe x16 slots and connected using a harness. The two PCIe x16 slots may also be connected to the same CPU. In this case, the main advantage of the technology lies in delivering 200Gb/s to servers with PCIe Gen3-only support.




Host Management Host Management includes NC-SI over MCTP over SMBus, and MCTP over PCIe—Baseboard Management Controller (BMC) interface, as well as PLDM for Monitor and Control DSP0248 and PLDM for Firmware Update DSP0267.




Broad Software Support All ConnectX adapters are supported by a full suite of drivers for major Linux distributions, as well as Microsoft® Windows® Server and VMware vSphere®.

HPC software libraries supported include HPC-X, OpenMPI, MVAPICH, MPICH, OpenSHMEM, PGAS, and varied commercial packages.


InfiniBand
• 200Gb/s and lower rates
• IBTA Specification 1.3 compliant
• RDMA, send/receive semantics
• Hardware-based congestion control
• Atomic operations
• 16 million I/O channels
• 256 to 4Kbyte MTU, 2Gbyte messages
• 8 virtual lanes + VL15

Storage Offloads
• Block-level encryption: XTS-AES 256/512-bit key
• NVMe over Fabrics offloads for target machine
• T10 DIF—Signature handover operation at wire speed for ingress and egress traffic
• Storage protocols: SRP, iSER, NFS RDMA, SMB Direct, NVMe-oF

  Management and Control
• NC-SI, MCTP over SMBus and MCTP over PCIe—Baseboard Management Controller interface
• PLDM for Monitor and Control DSP0248
• PLDM for Firmware Update DSP0267
• SDN management interface for managing the eSwitch
• I2C interface for device control and configuration
• General Purpose I/O pins
• SPI interface to flash
• JTAG IEEE 1149.1 and IEEE 1149.6

Hardware-Based I/O Virtualization
• Single Root IOV (SR-IOV)
• Address translation and protection
• VMware NetQueue support
-SR-IOV: Up to 1K virtual functions
- SR-IOV: Up to 8 physical functions per host
• Virtualization hierarchies (e.g., NPAR)
- Virtualizing physical functions on a physical port
- SR-IOV on every physical function
• Configurable and user-programmable QoS
• Guaranteed QoS for VMs

  Remote Boot
• Remote boot over InfiniBand
• Remote boot over Ethernet
• Remote boot over iSCSI
• Unified Extensible Firmware Interface (UEFI)
• Pre-execution Environment (PXE)

Overlay Networks
• RoCE over overlay networks
• Stateless offloads for overlay network tunneling protocols
• Hardware offload of encapsulation and decapsulation of VXLAN, NVGRE, and GENEVE overlay networks

  Enhanced Features
• Hardware-based reliable transport
• Collective operations offloads
• Vector collective operations offloads
• NVIDIA PeerDirect™ RDMA (aka GPUDirect) communication acceleration
• 64/66 encoding
• Enhanced atomic operations
• Advanced memory mapping support, allowing user mode registration and remapping of memory (UMR)
• Extended reliable connected transport (XRC)
• Dynamically connected transport (DCT)
• On demand paging (ODP)
• MPI tag matching
• Rendezvous protocol offload
• Out-of-order RDMA supporting adaptive routing
• Burst buffer offload
• In-Network Memory registration-free RDMA memory access

CPU Offloads
• RDMA over Converged Ethernet (RoCE)
• TCP/UDP/IP stateless offload
• LSO, LRO, checksum offload
• RSS (also on encapsulated packet), TSS, HDS, VLAN and MPLS tag insertion/stripping, receive flow steering
• Data plane development kit (DPDK) for kernel bypass applications
• Open vSwitch (OVS) offload using ASAP2
• Flexible match-action flow tables
• Tunneling encapsulation/decapsulation
• Intelligent interrupt coalescence
• Header rewrite supporting hardware offload of NAT router

 
Full specification and details can be found in the Product Datasheet PDF file

In Stock: 2-3 Weeks
£ POA
£ POA Inc Vat
Mellanox MCX623106AS-CDAT CONNECTX-6 DX EN Adapter Card
Secure Boot No Crypto Tall Bracket

ConnectX-6 DX Ethernet SmartNIC

Advanced Networking and Security for the Most Demanding Cloud and Data Center Workloads

ConnectX-6 Dx SmartNIC is the industry's most secure and advanced cloud network interface card to accelerate mission-critical data-center applications, such as security, virtualization, SDN/NFV, big data, machine learning, and storage. The SmartNIC delivers the highest return on investment (ROI) of any smart network interface card.

ConnectX-6 Dx is a member of NVIDIA's world-class, award-winning ConnectX series of network adapters powered by leading 50 Gb/s (PAM4) and 25/10 Gb/s (NRZ) SerDes technology and novel capabilities that accelerate cloud and data-center payloads.




Key Features: • Up to 100 Gb/s bandwidth

• Message rate of up to 215 Mpps

• Sub 0.8 usec latency

• Flexible programmable pipeline for new network flows

• Multi-Host with advanced QoS

• ASAP2 - Accelerated Switching and Packet Processing for virtual switches/routers

• Overlay tunneling technologies

• IPsec and TLS in-line crypto acceleration

• Block crypto acceleration for data-at-rest

• Hardware Root-of-Trust and secure firmware update

• Connection Tracking offload

• Advanced RoCE capabilities

• Best in class PTP for TSN applications

• GPUDirect for GPU-to-GPU communication

• Host chaining technology for economical rack design

• Platform agnostic: x86, Power, Arm

• ODCC compatible


ConnectX®-6 Dx is a highly secure and advanced smart network interface card (SmartNIC) that accelerates mission-critical cloud and data center applications, including security, virtualization, SDN/NFV, big data, machine learning, and storage. ConnectX-6 Dx provides up to two ports of 100Gb/s or a single port of 200Gb/s Ethernet connectivity and is powered by 50Gb/s (PAM4) or 25/10 Gb/s (NRZ) SerDes technology.

ConnectX-6 Dx features virtual switch (vSwitch) and virtual router (vRouter) hardware accelerations delivering orders-of-magnitude higher performance than software-based solutions. ConnectX-6 Dx supports a choice of single-root I/O virtualization (SR-IOV) and VirtIO in hardware, enabling customers to best address their application needs. By offloading cloud networking workloads, ConnectX-6 Dx frees up CPU cores for business applications while reducing total cost-of-ownership.




SmartNIC Portfolio • 1/10/25/40/50/100 Gb/s Ethernet, PAM4/NRZ

• Various form factors:

-PCIe low-profile

-OCP 3.0 Small Form Factor (SFF)

-OCP 2.0

• Connectivity options:

-SFP28, SFP56, QSFP28, QSFP56

• PCIe Gen 3.0/4.0 x16 host interface

• Multi-host and single-host flavors

• Crypto and non-crypto versions


Host Interface
• 16 lanes of PCIe Gen4, compatible with PCIe Gen2/Gen3
• Integrated PCI switch
• NVIDIA Multi-Host and NVIDIA Socket Direct

Remote Boot
• Remote boot over Ethernet
• Remote boot over iSCSI
• UEFI and PXE support for x86 and Arm servers

  Virtualization/Cloud Native
• SR-IOV and VirtIO acceleration
-Up to 1K virtual functions per port
-8 physical functions
• Support for tunneling
-Encap/decap of VXLAN, NVGRE, Geneve, and more
-Stateless offloads for overlay tunnels

Storage Offloads
• Block-level encryption: XTS-AES 256/512-bit key
• NVMe over Fabrics offloads for target machine
• T10 DIF signature handover operation at wire speed, for ingress and egress traffic
• Storage protocols: SRP, iSER, NFS RDMA, SMB Direct, NVMe-oF

  Stateless Offloads
• TCP/UDP/IP stateless offload
• LSO, LRO, checksum offload
• Receive side scaling (RSS) also on encapsulated packet
• Transmit side scaling (TSS)
• VLAN and MPLS tag insertion/stripping
• Receive flow steering

RDMA over Converged Ethernet (RoCE)
• RoCE v1/v2
• Zero-touch RoCE: no ECN, no PFC
• RoCE over overlay networks
• Selective repeat
• Programmable congestion control interface
• GPUDirect®

  Cybersecurity
• Inline hardware IPsec encryption and decryption
-AES-GCM 128/256-bit key
-RoCE over IPsec
• Inline hardware TLS encryption and decryption
-AES-GCM 128/256-bit key
• Data-at-rest AES-XTS encryption and decryption
-AES-XTS 256/512-bit key
• Platform security
-Hardware root-of-trust
-Secure firmware update

Management and Control
• NC-SI, MCTP over SMBus and MCTP over PCIe—Baseboard Management Controller interface, NC-SI over RBT in Open Compute Project (OCP) 2.0/3.0 cards
• PLDM for Monitor and Control DSP0248
• PLDM for Firmware Update DSP0267
• I2C interface for device control and configuration

  Accelerated Switching & Packet Processing
• SDN acceleration for:
-Bare metal
-Virtualization
-Containers
• Full hardware offload for OVS data plane
• Flow update through RTE_Flow or TC_Flower
• Flex-parser: user-defined classification
• Hardware offload for:
-Connection tracking (Layer 4 firewall)
-NAT
-Header rewrite
-Mirroring
-Sampling
-Flow aging
-Hierarchical QoS
-Flow-based statistics

Advanced Timing and Synchronization
• Advanced PTP
-IEEE 1588v2 (any profile)
-PTP hardware clock (PHC) (UTC format)
-Nanosecond-level accuracy
-Line rate hardware timestamp (UTC format)
-PPS in and configurable PPS out
• Time-triggered scheduling
• PTP-based packet pacing
• Time-based SDN acceleration (ASAP2)
• Time-sensitive networking (TSN)
• Dedicated precision timing card option

 
Full specification and details can be found in the Product Datasheet PDF file

In Stock: 2-3 Weeks
£1047.42
£1256.90 Inc Vat
Mellanox MCX623106AN-CDAT CONNECTX-6 DX EN Adapter Card
No Crypto Tall Bracket

ConnectX-6 DX Ethernet SmartNIC

Advanced Networking and Security for the Most Demanding Cloud and Data Center Workloads.

ConnectX-6 Dx SmartNIC is the industry's most secure and advanced cloud network interface card to accelerate mission-critical data-center applications, such as security, virtualization, SDN/NFV, big data, machine learning, and storage. The SmartNIC delivers the highest return on investment (ROI) of any smart network interface card.

ConnectX-6 Dx is a member of NVIDIA's world-class, award-winning ConnectX series of network adapters powered by leading 50 Gb/s (PAM4) and 25/10 Gb/s (NRZ) SerDes technology and novel capabilities that accelerate cloud and data-center payloads.




Key Features: • Up to 100 Gb/s bandwidth

• Message rate of up to 215 Mpps

• Sub 0.8 usec latency

• Flexible programmable pipeline for new network flows

• Multi-Host with advanced QoS

• ASAP2 - Accelerated Switching and Packet Processing for virtual switches/routers

• Overlay tunneling technologies

• IPsec and TLS in-line crypto acceleration

• Block crypto acceleration for data-at-rest

• Hardware Root-of-Trust and secure firmware update

• Connection Tracking offload

• Advanced RoCE capabilities

• Best in class PTP for TSN applications

• GPUDirect for GPU-to-GPU communication

• Host chaining technology for economical rack design

• Platform agnostic: x86, Power, Arm

• ODCC compatible


ConnectX®-6 Dx is a highly secure and advanced smart network interface card (SmartNIC) that accelerates mission-critical cloud and data center applications, including security, virtualization, SDN/NFV, big data, machine learning, and storage. ConnectX-6 Dx provides up to two ports of 100Gb/s or a single port of 200Gb/s Ethernet connectivity and is powered by 50Gb/s (PAM4) or 25/10 Gb/s (NRZ) SerDes technology.

ConnectX-6 Dx features virtual switch (vSwitch) and virtual router (vRouter) hardware accelerations delivering orders-of-magnitude higher performance than software-based solutions. ConnectX-6 Dx supports a choice of single-root I/O virtualization (SR-IOV) and VirtIO in hardware, enabling customers to best address their application needs. By offloading cloud networking workloads, ConnectX-6 Dx frees up CPU cores for business applications while reducing total cost-of-ownership.




SmartNIC Portfolio • 1/10/25/40/50/100/200 Gb/s Ethernet, PAM4/NRZ

• Various form factors:

-PCIe low-profile

-OCP 3.0 Small Form Factor (SFF)

-OCP 2.0

• Connectivity options:

-SFP28, SFP56, QSFP28, QSFP56, DSFP

• PCIe Gen 3.0/4.0 x16 host interface

• Multi-host and single-host flavors

• Crypto and non-crypto versions


Host Interface
• 16 lanes of PCIe Gen4, compatible with PCIe Gen2/Gen3
• Integrated PCI switch
• NVIDIA Multi-Host and NVIDIA Socket Direct

Remote Boot
• Remote boot over Ethernet
• Remote boot over iSCSI
• UEFI and PXE support for x86 and Arm servers

  Virtualization/Cloud Native
• SR-IOV and VirtIO acceleration
-Up to 1K virtual functions per port
-8 physical functions
• Support for tunneling
-Encap/decap of VXLAN, NVGRE, Geneve, and more
-Stateless offloads for overlay tunnels

Storage Offloads
• Block-level encryption: XTS-AES 256/512-bit key
• NVMe over Fabrics offloads for target machine
• T10 DIF signature handover operation at wire speed, for ingress and egress traffic
• Storage protocols: SRP, iSER, NFS RDMA, SMB Direct, NVMe-oF

  Stateless Offloads
• TCP/UDP/IP stateless offload
• LSO, LRO, checksum offload
• Receive side scaling (RSS) also on encapsulated packet
• Transmit side scaling (TSS)
• VLAN and MPLS tag insertion/stripping
• Receive flow steering

RDMA over Converged Ethernet (RoCE)
• RoCE v1/v2
• Zero-touch RoCE: no ECN, no PFC
• RoCE over overlay networks
• Selective repeat
• Programmable congestion control interface
• GPUDirect®

  Cybersecurity
• Inline hardware IPsec encryption and decryption
-AES-GCM 128/256-bit key
-RoCE over IPsec
• Inline hardware TLS encryption and decryption
-AES-GCM 128/256-bit key
• Data-at-rest AES-XTS encryption and decryption
-AES-XTS 256/512-bit key
• Platform security
-Hardware root-of-trust
-Secure firmware update

Management and Control
• NC-SI, MCTP over SMBus and MCTP over PCIe—Baseboard Management Controller interface, NC-SI over RBT in Open Compute Project (OCP) 2.0/3.0 cards
• PLDM for Monitor and Control DSP0248
• PLDM for Firmware Update DSP0267
• I2C interface for device control and configuration

  Accelerated Switching & Packet Processing
• SDN acceleration for:
-Bare metal
-Virtualization
-Containers
• Full hardware offload for OVS data plane
• Flow update through RTE_Flow or TC_Flower
• Flex-parser: user-defined classification
• Hardware offload for:
-Connection tracking (Layer 4 firewall)
-NAT
-Header rewrite
-Mirroring
-Sampling
-Flow aging
-Hierarchical QoS
-Flow-based statistics

Advanced Timing and Synchronization
• Advanced PTP
-IEEE 1588v2 (any profile)
-PTP hardware clock (PHC) (UTC format)
-Nanosecond-level accuracy
-Line rate hardware timestamp (UTC format)
-PPS in and configurable PPS out
• Time-triggered scheduling
• PTP-based packet pacing
• Time-based SDN acceleration (ASAP2)
• Time-sensitive networking (TSN)
• Dedicated precision timing card option

 
Full specification and details can be found in the Product Datasheet PDF file

In Stock: 2-3 Weeks
£1047.42
£1256.90 Inc Vat
Mellanox MCX653105A-HDAT-SP CONNECTX-6 VPI Adapter Card
HDR IB (200GB/S) and 200GBE Single-Port QSFP56 with Tall Bracket

ConnectX-6 VPI Card 200Gb/s InfiniBand & Ethernet Adapter Card

Featuring In-Network Computing for Enhanced Efficiency and Scalability

ConnectX-6 Virtual Protocol Interconnect (VPI) cards are a groundbreaking addition to the ConnectX series of industry-leading network adapter cards. Providing two ports of 200Gb/s for InfiniBand and Ethernet connectivity, sub-600ns latency and 215 million messages per second, ConnectX-6 VPI cards enable the highest performance and most flexible solution aimed at meeting the continually growing demands of data center applications. In addition to all the existing innovative features of past versions, ConnectX-6 cards offer a number of enhancements to further improve performance and scalability.

ConnectX-6 VPI supports HDR, HDR100, EDR, FDR, QDR, DDR and SDR InfiniBand speeds as well as 200, 100, 50, 40, 25, and 10Gb/s Ethernet speeds.




Benefits: • Industry-leading throughput, low CPU utilization and high message rate

• Highest performance and most intelligent fabric for compute and storage infrastructures

• Cutting-edge performance in virtualized networks including Network Function Virtualization (NFV)

• Host Chaining technology for economical rack design

• Smart interconnect for x86, Power, Arm, GPU and FPGA-based compute and storage platforms

• Flexible programmable pipeline for new network flows

• Efficient service chaining enablement

• Increased I/O consolidation efficiencies, reducing data center costs & complexity


ConnectX®-6 Dx is a highly secure and advanced smart network interface card (SmartNIC) that accelerates mission-critical cloud and data center applications, including security, virtualization, SDN/NFV, big data, machine learning, and storage. ConnectX-6 Dx provides up to two ports of 100Gb/s or a single port of 200Gb/s Ethernet connectivity and is powered by 50Gb/s (PAM4) or 25/10 Gb/s (NRZ) SerDes technology.

ConnectX-6 Dx features virtual switch (vSwitch) and virtual router (vRouter) hardware accelerations delivering orders-of-magnitude higher performance than software-based solutions. ConnectX-6 Dx supports a choice of single-root I/O virtualization (SR-IOV) and VirtIO in hardware, enabling customers to best address their application needs. By offloading cloud networking workloads, ConnectX-6 Dx frees up CPU cores for business applications while reducing total cost-of-ownership.




Features: • Up to 200Gb/s connectivity per port

• Max bandwidth of 200Gb/s

• Up to 215 million messages/sec

• Sub 0.6usec latency

• OCP 2.0

• FIPS capable

• Advanced storage capabilities including block-level encryption and checksum offloads

• Supports both 50G SerDes (PAM4) and 25G SerDes (NRZ) based ports

• Best-in-class packet pacing with sub-nanosecond accuracy

• PCIe Gen 3.0 and Gen 4.0 support

• RoHS compliant

• ODCC compatible




High Performance Computing Environments With its NVIDIA In-Network Computing and In-Network Memory capabilities, ConnectX-6 offloads computation even further to the network, saving CPU cycles and increasing network efficiency. ConnectX-6 utilizes remote direct memory access (RDMA) technology as defined in the InfiniBand Trade Association (IBTA) specification, delivering low latency, and high performance. ConnectX-6 enhances RDMA network capabilities even further by delivering end-to-end packet-level flow control.




Machine Learning and Big Data Environments Data analytics has become an essential function within many enterprise data centers, clouds, and hyperscale platforms. Machine learning (ML) relies on especially high throughput and low latency to train deep neural networks and improve recognition and classification accuracy. With its 200Gb/s throughput, ConnectX-6 is an excellent solution to provide ML applications with the levels of performance and scalability that they require.




Security Including Block-Level Encryption ConnectX-6 block-level encryption offers a critical innovation to network security. As data in transit is stored or retrieved, it undergoes encryption and decryption. ConnectX-6 hardware offloads the IEEE AES-XTS encryption/decryption from the CPU, saving latency and CPU utilization. It also guarantees protection for users sharing the same resources through the use of dedicated encryption keys.

By performing block storage encryption in the adapter, ConnectX-6 eliminates the need for self-encrypted disks. This gives customers the freedom to choose their preferred storage device, including byte addressable and NVDIMM devices that traditionally do not provide encryption. Moreover, ConnectX-6 can offer Federal Information Processing Standards (FIPS) compliance.
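
The XTS-AES block cipher that ConnectX-6 offloads is the same length-preserving, per-block construction used by software disk encryption. Purely as an illustration of what the hardware is accelerating (a minimal software sketch using the third-party Python cryptography package; the 512-bit key and 4 KiB block size are example values, not the adapter's API):

# Illustration only: software AES-256-XTS, the block-level cipher ConnectX-6 offloads in hardware.
# Requires the third-party 'cryptography' package (pip install cryptography).
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(64)                  # 512-bit XTS key (two 256-bit halves)
tweak = (1).to_bytes(16, "little")    # 16-byte tweak, e.g. the logical block number

def xts_encrypt(block: bytes) -> bytes:
    """Encrypt one storage block; the ciphertext is the same length as the plaintext."""
    enc = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
    return enc.update(block) + enc.finalize()

def xts_decrypt(block: bytes) -> bytes:
    dec = Cipher(algorithms.AES(key), modes.XTS(tweak)).decryptor()
    return dec.update(block) + dec.finalize()

data = os.urandom(4096)               # one 4 KiB logical block
assert xts_decrypt(xts_encrypt(data)) == data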




Bring NVMe-oF to Storage Environments NVMe storage devices are gaining momentum, offering very fast access to storage media. The evolving NVMe over Fabrics (NVMe-oF) protocol leverages RDMA connectivity to remotely access NVMe storage devices efficiently, while keeping the end-to-end NVMe model at lowest latency. With its NVMe-oF target and initiator offloads, ConnectX-6 brings further optimization to NVMe-oF, enhancing CPU utilization and scalability.
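
For context, attaching a host to an NVMe-oF target over RDMA is typically done with the standard nvme-cli tool; the sketch below simply wraps it from Python. The target address, service port, and subsystem NQN are hypothetical placeholders, and the adapter's target/initiator offloads are transparent to this flow:

# Sketch: connect to an NVMe-oF target over RDMA using nvme-cli, then list namespaces.
# The address and NQN are hypothetical placeholders for a real target.
import subprocess

TARGET_ADDR = "192.168.10.20"                     # hypothetical RDMA-reachable target
TARGET_NQN = "nqn.2016-06.io.example:storage01"   # hypothetical subsystem NQN

def nvmeof_connect() -> None:
    subprocess.run(
        ["nvme", "connect",
         "-t", "rdma",         # transport: RoCE or InfiniBand
         "-a", TARGET_ADDR,    # target address
         "-s", "4420",         # conventional NVMe-oF service port
         "-n", TARGET_NQN],    # subsystem NQN to attach
        check=True,
    )

def nvmeof_list() -> str:
    return subprocess.run(["nvme", "list"], check=True,
                          capture_output=True, text=True).stdout

if __name__ == "__main__":
    nvmeof_connect()
    print(nvmeof_list())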




Portfolio of Smart Adapters ConnectX-6 is available in two form factors: low-profile stand-up PCIe and Open Compute Project (OCP) Spec 3.0 cards with QSFP connectors. Single-port, HDR, stand-up PCIe adapters are available based on either ConnectX-6 or ConnectX-6 DE (ConnectX-6 Dx enhanced for HPC applications).

In addition, specific PCIe stand-up cards are available with a cold plate for insertion into liquid-cooled Intel Server System D50TNP platforms.




Socket Direct ConnectX-6 also provides options for NVIDIA Socket Direct™ configurations, which improves the performance of multi-socket servers by enabling each of their CPUs to access the network through its dedicated PCIe interface. This enables data to bypass the QPI (UPI) and the other CPU, improving latency, performance, and CPU utilization.

Socket Direct also enables NVIDIA GPUDirect® RDMA for all CPU/GPU pairs by ensuring that GPUs are linked to the CPUs closest to the adapter card. Socket Direct enables Intel® DDIO optimization on both sockets by creating a direct connection between the sockets and the adapter card.

Socket Direct technology is enabled by a main card housing the ConnectX-6 and an auxiliary PCIe card bringing in the remaining PCIe lanes. The ConnectX-6 Socket Direct card is installed into two PCIe x16 slots and connected using a harness. The two PCIe x16 slots may also be connected to the same CPU. In this case, the main advantage of the technology lies in delivering 200Gb/s to servers with PCIe Gen3-only support.
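
A quick way to confirm how the two halves of a Socket Direct installation actually trained is to read the negotiated PCIe link speed and width from sysfs on Linux. A minimal sketch, assuming hypothetical PCI bus addresses for the main and auxiliary cards (list the real ones with lspci -d 15b3:):

# Sketch: report the negotiated PCIe link speed/width for each PCIe function of the adapter.
# The bus addresses below are hypothetical examples.
from pathlib import Path

PCI_FUNCTIONS = ["0000:3b:00.0", "0000:86:00.0"]   # e.g. main card and auxiliary card

def link_status(bdf: str) -> tuple[str, str]:
    dev = Path("/sys/bus/pci/devices") / bdf
    speed = (dev / "current_link_speed").read_text().strip()   # e.g. "8.0 GT/s PCIe"
    width = (dev / "current_link_width").read_text().strip()   # e.g. "16"
    return speed, width

for bdf in PCI_FUNCTIONS:
    speed, width = link_status(bdf)
    print(f"{bdf}: x{width} @ {speed}")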




Host Management Host Management includes NC-SI over MCTP over SMBus, and MCTP over PCIe— Baseboard Management Controller (BMC) interface, as well as PLDM for Monitor and Control DSP0248 and PLDM for Firmware Update DSP0267.




Broad Software Support All ConnectX adapters are supported by a full suite of drivers for major Linux distributions, as well as Microsoft® Windows® Server and VMware vSphere®.

HPC software libraries supported include HPC-X, OpenMPI, MVAPICH, MPICH, OpenSHMEM, PGAS, and varied commercial packages.

Overlay Networks
• RoCE over overlay networks
• Stateless offloads for overlay network tunneling protocols
• Hardware offload of encapsulation and decapsulation of VXLAN, NVGRE, and Geneve overlay networks

Storage Offloads
• Block-level encryption: XTS-AES 256/512-bit key
• NVMe over Fabrics offloads for target machine
• T10-DIF signature handover operation at wire speed, for ingress and egress traffic
• Storage protocols: SRP, iSER, NFS RDMA, SMB Direct, and NVMe-oF

  InfiniBand
• 200Gb/s and lower rates
• IBTA Specification 1.3 compliant
• RDMA, send/receive semantics
• Hardware-based congestion control
• Atomic operations
• 16 million I/O channels
• 256 to 4Kbyte MTU, 2Gbyte messages
• 8 virtual lanes + VL15

Remote Boot
• Remote boot over InfiniBand
• Remote boot over Ethernet
• Remote boot over iSCSI
• Unified Extensible Firmware Interface (UEFI)
• Pre-execution Environment (PXE)

  Hardware-Based I/O Virtualization
• Single Root IOV (SR-IOV)
• Address translation and protection
• VMware NetQueue support
-SR-IOV: Up to 1K virtual functions
-SR-IOV: Up to 8 physical functions per host
• Virtualization hierarchies (e.g., NPAR)
-Virtualizing physical functions on a physical port
-SR-IOV on every physical function
• Configurable and user-programmable QoS
• Guaranteed QoS for VMs

Management and Control
• NC-SI, MCTP over SMBus and MCTP over PCIe—Baseboard Management Controller interface
• PLDM for Monitor and Control DSP0248
• PLDM for Firmware Update DSP0267
• SDN management interface for managing the eSwitch
• I2C interface for device control and configuration
• General Purpose I/O pins
• SPI interface to flash
• JTAG IEEE 1149.1 and IEEE 1149.6

  Enhanced Features
• Hardware-based reliable transport
• Collective operations offloads
• Vector collective operations offloads
• NVIDIA PeerDirect® RDMA (a.k.a. NVIDIA GPUDirect) communication acceleration
• 64/66 encoding
• Enhanced atomic operations
• Advanced memory mapping support, allowing user mode registration and remapping of memory (UMR)
• Extended Reliable Connected transport (XRC)
• Dynamically Connected Transport (DCT)
• On demand paging (ODP)
• MPI tag matching
• Rendezvous protocol offload
• Out-of-order RDMA supporting Adaptive Routing
• Burst buffer offload
• In-Network Memory registration-free RDMA memory access

CPU Offloads
• RDMA over Converged Ethernet (RoCE)
• TCP/UDP/IP stateless offload
• LSO, LRO, checksum offload
• RSS (also on encapsulated packet), TSS, HDS, VLAN and MPLS tag insertion/stripping, receive flow steering
• Data plane development kit (DPDK) for kernel bypass applications
• Open vSwitch (OVS) offload using ASAP2
• Flexible match-action flow tables
• Tunneling encapsulation/decapsulation
• Intelligent interrupt coalescence
• Header rewrite supporting hardware offload of NAT router

 
Full specification and details can be found in the Product Datasheet PDF file

In Stock: 2-3 Weeks
£1047.42
£1256.90 Inc Vat
Add To Cart
Mellanox MCX623105AN-VDAT CONNECTX6 DX EN Adapter Card
No Crypto Tall Bracket

ConnectX-6 DX Ethernet SmartNIC

Advanced Networking and Security for the Most Demanding Cloud and Data Center Workloads

ConnectX-6 Dx SmartNIC is the industry's most secure and advanced cloud network interface card to accelerate mission-critical data-center applications, such as security, virtualization, SDN/NFV, big data, machine learning, and storage. The SmartNIC provides up to two ports of 100 Gb/s or a single-port of 200 Gb/s Ethernet connectivity and delivers the highest return on investment (ROI) of any smart network interface card.

ConnectX-6 Dx is a member of NVIDIA's world-class, award-winning ConnectX series of network adapters powered by leading 50 Gb/s (PAM4) and 25/10 Gb/s (NRZ) SerDes technology and novel capabilities that accelerate cloud and data-center payloads.




Key Features: • Up to 200 Gb/s bandwidth

• Message rate of up to 215 Mpps

• Sub 0.8 usec latency

• Flexible programmable pipeline for new network flows

• Multi-Host with advanced QoS

• ASAP2 - Accelerated Switching and Packet Processing for virtual switches/routers

• Overlay tunneling technologies

• IPsec and TLS in-line crypto acceleration

• Block crypto acceleration for data-at-rest

• Hardware Root-of-Trust and secure firmware update

• Connection Tracking offload

• Advanced RoCE capabilities

• Best in class PTP for TSN applications

• GPUDirect for GPU-to-GPU communication

• Host chaining technology for economical rack design

• Platform agnostic: x86, Power, Arm

• ODCC compatible


ConnectX®-6 Dx is a highly secure and advanced smart network interface card (SmartNIC) that accelerates mission-critical cloud and data center applications, including security, virtualization, SDN/NFV, big data, machine learning, and storage. ConnectX-6 Dx provides up to two ports of 100Gb/s or a single port of 200Gb/s Ethernet connectivity and is powered by 50Gb/s (PAM4) or 25/10 Gb/s (NRZ) SerDes technology.

ConnectX-6 Dx features virtual switch (vSwitch) and virtual router (vRouter) hardware accelerations delivering orders-of-magnitude higher performance than software-based solutions. ConnectX-6 Dx supports a choice of single-root I/O virtualization (SR-IOV) and VirtIO in hardware, enabling customers to best address their application needs. By offloading cloud networking workloads, ConnectX-6 Dx frees up CPU cores for business applications while reducing total cost-of-ownership.




SmartNIC Portfolio • 1/10/25/40/50/100/200 Gb/s Ethernet, PAM4/NRZ

• Various form factors:

-PCIe low-profile

-OCP 3.0 Small Form Factor (SFF)

-OCP 2.0

• Connectivity options:

-SFP28, SFP56, QSFP28, QSFP56, DSFP

• PCIe Gen 3.0/4.0 x16 host interface

• Multi-host and single-host flavors

• Crypto and non-crypto versions


Host Interface
• 16 lanes of PCIe Gen4, compatible with PCIe Gen2/Gen3
• Integrated PCI switch
• NVIDIA Multi-Host and NVIDIA Socket Direct

Remote Boot
• Remote boot over Ethernet
• Remote boot over iSCSI
• UEFI and PXE support for x86 and Arm servers

  Virtualization/Cloud Native
• SR-IOV and VirtIO acceleration
-Up to 1K virtual functions per port
-8 physical functions
• Support for tunneling
-Encap/decap of VXLAN, NVGRE, Geneve, and more
-Stateless offloads for overlay tunnels

Storage Offloads
• Block-level encryption: XTS-AES 256/512-bit key
• NVMe over Fabrics offloads for target machine
• T10 DIF signature handover operation at wire speed, for ingress and egress traffic
• Storage protocols: SRP, iSER, NFS RDMA, SMB Direct, NVMe-oF

  Stateless Offloads
• TCP/UDP/IP stateless offload
• LSO, LRO, checksum offload
• Receive side scaling (RSS) also on encapsulated packet
• Transmit side scaling (TSS)
• VLAN and MPLS tag insertion/stripping
• Receive flow steering

RDMA over Converged Ethernet (RoCE)
• RoCE v1/v2
• Zero-touch RoCE: no ECN, no PFC
• RoCE over overlay networks
• Selective repeat
• Programmable congestion control interface
• GPUDirect®

  Cybersecurity
• Inline hardware IPsec encryption and decryption
-AES-GCM 128/256-bit key
-RoCE over IPsec
• Inline hardware TLS encryption and decryption
-AES-GCM 128/256-bit key
• Data-at-rest AES-XTS encryption and decryption
-AES-XTS 256/512-bit key
• Platform security
-Hardware root-of-trust
-Secure firmware update

Management and Control
• NC-SI, MCTP over SMBus and MCTP over PCIe—Baseboard Management Controller interface, NCSI over RBT in Open Compute Project (OCP) 2.0/3.0 cards
• PLDM for Monitor and Control DSP0248
• PLDM for Firmware Update DSP0267
• I2C interface for device control and configuration

  Accelerated Switching & Packet Processing
• SDN acceleration for:
-Bare metal
-Virtualization
-Containers
• Full hardware offload for OVS data plane
• Flow update through RTE_Flow or TC_Flower
• Flex-parser: user-defined classification
• Hardware offload for:
-Connection tracking (Layer 4 firewall)
-NAT
-Header rewrite
-Mirroring
-Sampling
-Flow aging
-Hierarchical QoS
-Flow-based statistics

Advanced Timing and Synchronization
• Advanced PTP
-IEEE 1588v2 (any profile)
-PTP hardware clock (PHC) (UTC format)
-Nanosecond-level accuracy
-Line rate hardware timestamp (UTC format)
-PPS in and configurable PPS out
• Time-triggered scheduling
• PTP-based packet pacing
• Time-based SDN acceleration (ASAP2)
• Time-sensitive networking (TSN)
• Dedicated precision timing card option

 
Full specification and details can be found in the Product Datasheet PDF file

In Stock: 2-3 Weeks
£1109.79
£1331.75 Inc Vat
Add To Cart
Mellanox MCX653436A-HDAI CONNECTX-6 VPI Adapter Card
With Host Management Dual-Port QSFP56 PCIE4.0 X16 Internal Lock

ConnectX-6 VPI Card 200Gb/s InfiniBand & Ethernet Adapter Card

Featuring In-Network Computing for Enhanced Efficiency and Scalability

ConnectX-6 Virtual Protocol Interconnect (VPI) cards are a groundbreaking addition to the ConnectX series of industry-leading network adapter cards. Providing two ports of 200Gb/s for InfiniBand and Ethernet connectivity, sub-600ns latency and 215 million messages per second, ConnectX-6 VPI cards enable the highest performance and most flexible solution aimed at meeting the continually growing demands of data center applications. In addition to all the existing innovative features of past versions, ConnectX-6 cards offer a number of enhancements to further improve performance and scalability.

ConnectX-6 VPI supports HDR, HDR100, EDR, FDR, QDR, DDR and SDR InfiniBand speeds as well as 200, 100, 50, 40, 25, and 10Gb/s Ethernet speeds.




Benefits • Industry-leading throughput, low CPU utilization and high message rate

• Highest performance and most intelligent fabric for compute and storage infrastructures

• Cutting-edge performance in virtualized networks including Network Function Virtualization (NFV)

• Host Chaining technology for economical rack design

• Smart interconnect for x86, Power, Arm, GPU and FPGA-based compute and storage platforms

• Flexible programmable pipeline for new network flows

• Efficient service chaining enablement

• Increased I/O consolidation efficiencies, reducing data center costs & complexity


Complex workloads demand ultra-fast processing of high-resolution simulations, extreme-size datasets, and highly-parallelized algorithms. As these computing requirements continue to grow, NVIDIA Quantum InfiniBand—the world’s only fully offloadable, In-Network Computing acceleration technology—provides the dramatic leap in performance needed to achieve unmatched results in high performance computing (HPC), AI, and hyperscale cloud infrastructures—with less cost and complexity.

ConnectX®-6 InfiniBand smart adapter cards are a key element in the NVIDIA Quantum InfiniBand platform. ConnectX-6 provides up to two ports of 200Gb/s InfiniBand and Ethernet(1) connectivity with extremely low latency, high message rate, smart offloads, and NVIDIA In-Network Computing acceleration that improve performance and scalability.




Features • Up to 200Gb/s connectivity per port

• Max bandwidth of 200Gb/s

• Up to 215 million messages/sec

• Sub 0.6usec latency

• Block-level XTS-AES mode hardware encryption

• FIPS capable

• Advanced storage capabilities including block-level encryption and checksum offloads

• Supports both 50G SerDes (PAM4) and 25G SerDes (NRZ) based ports

• Best-in-class packet pacing with sub-nanosecond accuracy

• PCIe Gen 3.0 and Gen 4.0 support

• RoHS compliant

• ODCC compatible




High Performance Computing Environments With its NVIDIA In-Network Computing and In-Network Memory capabilities, ConnectX-6 offloads computation even further to the network, saving CPU cycles and increasing network efficiency. ConnectX-6 utilizes remote direct memory access (RDMA) technology as defined in the InfiniBand Trade Association (IBTA) specification, delivering low latency, and high performance. ConnectX-6 enhances RDMA network capabilities even further by delivering end-to-end packet-level flow control.




Machine Learning and Big Data Environment Data analytics has become an essential function within many enterprise data centers, clouds, and hyperscale platforms. Machine learning (ML) relies on especially high throughput and low latency to train deep neural networks and improve recognition and classification accuracy. With its 200Gb/s throughput, ConnectX-6 is an excellent solution to provide ML applications with the levels of performance and scalability that they require.




Security Including Block-Level Encryption ConnectX-6 block-level encryption offers a critical innovation to network security. As data in transit is stored or retrieved, it undergoes encryption and decryption. ConnectX-6 hardware offloads the IEEE AES-XTS encryption/decryption from the CPU, saving latency and CPU utilization. It also guarantees protection for users sharing the same resources through the use of dedicated encryption keys.

By performing block storage encryption in the adapter, ConnectX-6 eliminates the need for self-encrypted disks. This gives customers the freedom to choose their preferred storage device, including byte addressable and NVDIMM devices that traditionally do not provide encryption. Moreover, ConnectX-6 can offer Federal Information Processing Standards (FIPS) compliance.




Bring NVMe-oF to Storage Environments NVMe storage devices are gaining momentum, offering very fast access to storage media. The evolving NVMe over Fabrics (NVMe-oF) protocol leverages RDMA connectivity to remotely access NVMe storage devices efficiently, while keeping the end-to-end NVMe model at lowest latency. With its NVMe-oF target and initiator offloads, ConnectX-6 brings further optimization to NVMe-oF, enhancing CPU utilization and scalability




Portfolio of Smart Adapters ConnectX-6 is available in two form factors: low-profile stand-up PCIe and Open Compute Project (OCP) Spec 3.0 cards with QSFP connectors. Single-port, HDR, stand-up PCIe adapters are available based on either ConnectX-6 or ConnectX-6 DE (ConnectX-6 Dx enhanced for HPC applications).

In addition, specific PCIe stand-up cards are available with a cold plate for insertion into liquid-cooled Intel Server System D50TNP platforms.




Socket Direct ConnectX-6 also provides options for NVIDIA Socket Direct™ configurations, which improves the performance of multi-socket servers by enabling each of their CPUs to access the network through its dedicated PCIe interface. This enables data to bypass the QPI (UPI) and the other CPU, improving latency, performance, and CPU utilization.

Socket Direct also enables NVIDIA GPUDirect® RDMA for all CPU/GPU pairs by ensuring that GPUs are linked to the CPUs closest to the adapter card. Socket Direct enables Intel® DDIO optimization on both sockets by creating a direct connection between the sockets and the adapter card.

Socket Direct technology is enabled by a main card housing the ConnectX-6 and an auxiliary PCIe card bringing in the remaining PCIe lanes. The ConnectX-6 Socket Direct card is installed into two PCIe x16 slots and connected using a harness. The two PCIe x16 slots may also be connected to the same CPU. In this case, the main advantage of the technology lies in delivering 200Gb/s to servers with PCIe Gen3-only support.




Host Management Host Management includes NC-SI over MCTP over SMBus, and MCTP over PCIe— Baseboard Management Controller (BMC) interface, as well as PLDM for Monitor and Control DSP0248 and PLDM for Firmware Update DSP0267.




Broad Software Support All ConnectX adapters are supported by a full suite of drivers for major Linux distributions, as well as Microsoft® Windows® Server and VMware vSphere®.

HPC software libraries supported include HPC-X, OpenMPI, MVAPICH, MPICH, OpenSHMEM, PGAS, and varied commercial packages.


Storage Offloads
• Block-level encryption: XTS-AES 256/512-bit key
• NVMe over Fabrics offloads for target machine
• T10-DIF signature handover operation at wire speed, for ingress and egress traffic
• Storage protocols: SRP, iSER, NFS RDMA, SMB Direct, and NVMe-oF

Overlay Networks
• RoCE over overlay networks
• Stateless offloads for overlay network tunneling protocols
• Hardware offload of encapsulation and decapsulation of VXLAN, NVGRE, and Geneve overlay networks

  InfiniBand
• 200Gb/s and lower rates
• IBTA Specification 1.3 compliant
• RDMA, send/receive semantics
• Hardware-based congestion control
• Atomic operations
• 16 million I/O channels
• 256 to 4Kbyte MTU, 2Gbyte messages
• 8 virtual lanes + VL15

Remote Boot
• Remote boot over InfiniBand
• Remote boot over Ethernet
• Remote boot over iSCSI
• Unified Extensible Firmware Interface (UEFI)
• Pre-execution Environment (PXE)

  Management and Control
• NC-SI, MCTP over SMBus and MCTP over PCIe—Baseboard Management Controller interface
• PLDM for Monitor and Control DSP0248
• PLDM for Firmware Update DSP0267
• SDN management interface for managing the eSwitch
• I2C interface for device control and configuration
• General Purpose I/O pins
• SPI interface to flash
• JTAG IEEE 1149.1 and IEEE 1149.6

Hardware-Based I/O Virtualization
• Single Root IOV (SR-IOV)
• Address translation and protection
• VMware NetQueue support
-SR-IOV: Up to 1K virtual functions
-SR-IOV: Up to 8 physical functions per host
• Virtualization hierarchies (e.g., NPAR)
-Virtualizing physical functions on a physical port
-SR-IOV on every physical function
• Configurable and user-programmable QoS
• Guaranteed QoS for VMs

  Enhanced Features
• Hardware-based reliable transport
• Collective operations offloads
• Vector collective operations offloads
• NVIDIA PeerDirect® RDMA (a.k.a. NVIDIA GPUDirect) communication acceleration
• 64/66 encoding
• Enhanced atomic operations
• Advanced memory mapping support, allowing user mode registration and remapping of memory (UMR)
• Extended Reliable Connected transport (XRC)
• Dynamically Connected Transport (DCT)
• On demand paging (ODP)
• MPI tag matching
• Rendezvous protocol offload
• Out-of-order RDMA supporting Adaptive Routing
• Burst buffer offload
• In-Network Memory registration-free RDMA memory access

CPU Offloads
• RDMA over Converged Ethernet (RoCE)
• TCP/UDP/IP stateless offload
• LSO, LRO, checksum offload
• RSS (also on encapsulated packet), TSS, HDS, VLAN and MPLS tag insertion/stripping, receive flow steering
• Data plane development kit (DPDK) for kernel bypass applications
• Open vSwitch (OVS) offload using ASAP2
• Flexible match-action flow tables
• Tunneling encapsulation/decapsulation
• Intelligent interrupt coalescence
• Header rewrite supporting hardware offload of NAT router

 
Full specification and details can be found in the Product Datasheet PDF file

In Stock: 2-3 Weeks
£ POA
£ POA Inc Vat
Mellanox MCX623106AC-CDAT CONNECTX-6 DX EN Adapter Card
Crypto and Secure Boot Tall Bracket

ConnectX-6 VPI Card 200Gb/s InfiniBand & Ethernet Adapter Card

Featuring In-Network Computing for Enhanced Efficiency and Scalability

ConnectX-6 Virtual Protocol Interconnect (VPI) cards are a groundbreaking addition to the ConnectX series of industry-leading network adapter cards. Providing two ports of 200Gb/s for InfiniBand and Ethernet connectivity, sub-600ns latency and 215 million messages per second, ConnectX-6 VPI cards enable the highest performance and most flexible solution aimed at meeting the continually growing demands of data center applications. In addition to all the existing innovative features of past versions, ConnectX-6 cards offer a number of enhancements to further improve performance and scalability.

ConnectX-6 VPI supports HDR, HDR100, EDR, FDR, QDR, DDR and SDR InfiniBand speeds as well as 200, 100, 50, 40, 25, and 10Gb/s Ethernet speeds.




Benefits: • Industry-leading throughput, low CPU utilization and high message rate

• Highest performance and most intelligent fabric for compute and storage infrastructures

• Cutting-edge performance in virtualized networks including Network Function Virtualization (NFV)

• Host Chaining technology for economical rack design

• Smart interconnect for x86, Power, Arm, GPU and FPGA-based compute and storage platforms

• Flexible programmable pipeline for new network flows

• Efficient service chaining enablement

• Increased I/O consolidation efficiencies, reducing data center costs & complexity


ConnectX®-6 Dx is a highly secure and advanced smart network interface card (SmartNIC) that accelerates mission-critical cloud and data center applications, including security, virtualization, SDN/NFV, big data, machine learning, and storage. ConnectX-6 Dx provides up to two ports of 100Gb/s or a single port of 200Gb/s Ethernet connectivity and is powered by 50Gb/s (PAM4) or 25/10 Gb/s (NRZ) SerDes technology.

ConnectX-6 Dx features virtual switch (vSwitch) and virtual router (vRouter) hardware accelerations delivering orders-of-magnitude higher performance than software-based solutions. ConnectX-6 Dx supports a choice of single-root I/O virtualization (SR-IOV) and VirtIO in hardware, enabling customers to best address their application needs. By offloading cloud networking workloads, ConnectX-6 Dx frees up CPU cores for business applications while reducing total cost-of-ownership.




Features: • Up to 200Gb/s connectivity per port

• Max bandwidth of 200Gb/s

• Up to 215 million messages/sec

• Sub 0.6usec latency

• OCP 2.0

• FIPS capable

• Advanced storage capabilities including block-level encryption and checksum offloads

• Supports both 50G SerDes (PAM4) and 25G SerDes (NRZ) based ports

• Best-in-class packet pacing with sub-nanosecond accuracy

• PCIe Gen 3.0 and Gen 4.0 support

• RoHS compliant

• ODCC compatible




High Performance Computing Environments With its NVIDIA In-Network Computing and In-Network Memory capabilities, ConnectX-6 offloads computation even further to the network, saving CPU cycles and increasing network efficiency. ConnectX-6 utilizes remote direct memory access (RDMA) technology as defined in the InfiniBand Trade Association (IBTA) specification, delivering low latency, and high performance. ConnectX-6 enhances RDMA network capabilities even further by delivering end-to-end packet-level flow control.




Machine Learning and Big Data Environments Data analytics has become an essential function within many enterprise data centers, clouds, and hyperscale platforms. Machine learning (ML) relies on especially high throughput and low latency to train deep neural networks and improve recognition and classification accuracy. With its 200Gb/s throughput, ConnectX-6 is an excellent solution to provide ML applications with the levels of performance and scalability that they require.




Security Including Block-Level Encryption ConnectX-6 block-level encryption offers a critical innovation to network security. As data in transit is stored or retrieved, it undergoes encryption and decryption. ConnectX-6 hardware offloads the IEEE AES-XTS encryption/decryption from the CPU, saving latency and CPU utilization. It also guarantees protection for users sharing the same resources through the use of dedicated encryption keys.

By performing block storage encryption in the adapter, ConnectX-6 eliminates the need for self-encrypted disks. This gives customers the freedom to choose their preferred storage device, including byte addressable and NVDIMM devices that traditionally do not provide encryption. Moreover, ConnectX-6 can offer Federal Information Processing Standards (FIPS) compliance.




Bring NVMe-oF to Storage Environments NVMe storage devices are gaining momentum, offering very fast access to storage media. The evolving NVMe over Fabrics (NVMe-oF) protocol leverages RDMA connectivity to remotely access NVMe storage devices efficiently, while keeping the end-to-end NVMe model at lowest latency. With its NVMe-oF target and initiator offloads, ConnectX-6 brings further optimization to NVMe-oF, enhancing CPU utilization and scalability.




Portfolio of Smart Adapters ConnectX-6 is available in two form factors: low-profile stand-up PCIe and Open Compute Project (OCP) Spec 3.0 cards with QSFP connectors. Single-port, HDR, stand-up PCIe adapters are available based on either ConnectX-6 or ConnectX-6 DE (ConnectX-6 Dx enhanced for HPC applications).

In addition, specific PCIe stand-up cards are available with a cold plate for insertion into liquid-cooled Intel Server System D50TNP platforms.




Socket Direct ConnectX-6 also provides options for NVIDIA Socket Direct™ configurations, which improves the performance of multi-socket servers by enabling each of their CPUs to access the network through its dedicated PCIe interface. This enables data to bypass the QPI (UPI) and the other CPU, improving latency, performance, and CPU utilization.

Socket Direct also enables NVIDIA GPUDirect® RDMA for all CPU/GPU pairs by ensuring that GPUs are linked to the CPUs closest to the adapter card. Socket Direct enables Intel® DDIO optimization on both sockets by creating a direct connection between the sockets and the adapter card.

Socket Direct technology is enabled by a main card housing the ConnectX-6 and an auxiliary PCIe card bringing in the remaining PCIe lanes. The ConnectX-6 Socket Direct card is installed into two PCIe x16 slots and connected using a harness. The two PCIe x16 slots may also be connected to the same CPU. In this case, the main advantage of the technology lies in delivering 200Gb/s to servers with PCIe Gen3-only support.




Host Management Host Management includes NC-SI over MCTP over SMBus, and MCTP over PCIe— Baseboard Management Controller (BMC) interface, as well as PLDM for Monitor and Control DSP0248 and PLDM for Firmware Update DSP0267.




Broad Software Support All ConnectX adapters are supported by a full suite of drivers for major Linux distributions, as well as Microsoft® Windows® Server and VMware vSphere®.

HPC software libraries supported include HPC-X, OpenMPI, MVAPICH, MPICH, OpenSHMEM, PGAS, and varied commercial packages.

Overlay Networks
• RoCE over overlay networks
• Stateless offloads for overlay network tunneling protocols
• Hardware offload of encapsulation and decapsulation of VXLAN, NVGRE, and Geneve overlay networks

Storage Offloads
• Block-level encryption: XTS-AES 256/512-bit key
• NVMe over Fabrics offloads for target machine
• T10-DIF signature handover operation at wire speed, for ingress and egress traffic
• Storage protocols: SRP, iSER, NFS RDMA, SMB Direct, and NVMe-oF

  InfiniBand
• 200Gb/s and lower rates
• IBTA Specification 1.3 compliant
• RDMA, send/receive semantics
• Hardware-based congestion control
• Atomic operations
• 16 million I/O channels
• 256 to 4Kbyte MTU, 2Gbyte messages
• 8 virtual lanes + VL15

Remote Boot
• Remote boot over InfiniBand
• Remote boot over Ethernet
• Remote boot over iSCSI
• Unified Extensible Firmware Interface (UEFI)
• Pre-execution Environment (PXE)

  Hardware-Based I/O Virtualization
• Single Root IOV (SR-IOV)
• Address translation and protection
• VMware NetQueue support
-SR-IOV: Up to 1K virtual functions
-SR-IOV: Up to 8 physical functions per host
• Virtualization hierarchies (e.g., NPAR)
-Virtualizing physical functions on a physical port
-SR-IOV on every physical function
• Configurable and user-programmable QoS
• Guaranteed QoS for VMs

Management and Control
• NC-SI, MCTP over SMBus and MCTP over PCIe—Baseboard Management Controller interface
• PLDM for Monitor and Control DSP0248
• PLDM for Firmware Update DSP0267
• SDN management interface for managing the eSwitch
• I2C interface for device control and configuration
• General Purpose I/O pins
• SPI interface to flash
• JTAG IEEE 1149.1 and IEEE 1149.6

  Enhanced Features
• Hardware-based reliable transport
• Collective operations offloads
• Vector collective operations offloads
• NVIDIA PeerDirect® RDMA (a.k.a. NVIDIA GPUDirect) communication acceleration
• 64/66 encoding
• Enhanced atomic operations
• Advanced memory mapping support, allowing user mode registration and remapping of memory (UMR)
• Extended Reliable Connected transport (XRC)
• Dynamically Connected Transport (DCT)
• On demand paging (ODP)
• MPI tag matching
• Rendezvous protocol offload
• Out-of-order RDMA supporting Adaptive Routing
• Burst buffer offload
• In-Network Memory registration-free RDMA memory access

CPU Offloads
• RDMA over Converged Ethernet (RoCE)
• TCP/UDP/IP stateless offload
• LSO, LRO, checksum offload
• RSS (also on encapsulated packet), TSS, HDS, VLAN and MPLS tag insertion/stripping, receive flow steering
• Data plane development kit (DPDK) for kernel bypass applications
• Open vSwitch (OVS) offload using ASAP2
• Flexible match-action flow tables
• Tunneling encapsulation/decapsulation
• Intelligent interrupt coalescence
• Header rewrite supporting hardware offload of NAT router

 
Full specification and details can be found in the Product Datasheet PDF file

In Stock: 2-3 Weeks
£1257.30
£1508.76 Inc Vat
Add To Cart
Mellanox MCX613106A-VDAT CONNECTX-6 EN Adapter Card

ConnectX-6 EN Adapter Card

200GbE Dual-Port QSFP56 PCIe 4.0 x16 Tall Bracket

World’s first 200GbE Ethernet network interface card, enabling industry-leading performance, smart offloads and in-network computing for Cloud, Web 2.0, Big Data, Storage and Machine Learning applications.

ConnectX-6 EN provides up to two ports of 200GbE connectivity, sub 0.8usec latency and 215 million messages per second, enabling the highest performance and most flexible solution for the most demanding data center applications.




Benefits: • Most intelligent, highest performance fabric for compute and storage infrastructures

• Cutting-edge performance in virtualized HPC networks including Network Function Virtualization (NFV)

• Advanced storage capabilities including block-level encryption and checksum offloads

• Host Chaining technology for economical rack design

• Smart interconnect for x86, Power, Arm, GPU and FPGA-based platforms

• Flexible programmable pipeline for new network flows

• Enabler for efficient service chaining

• Efficient I/O consolidation, lowering data center costs and complexity


ConnectX-6 is a groundbreaking addition to the Mellanox ConnectX series of industry-leading adapter cards. In addition to all the existing innovative features of past versions, ConnectX-6 offers a number of enhancements to further improve performance and scalability, such as support for 200/100/50/40/25/10/1 GbE Ethernet speeds and PCIe Gen 4.0. Moreover, ConnectX-6 Ethernet cards can connect up to 32-lanes of PCIe to achieve 200Gb/s of bandwidth, even on Gen 3.0 PCIe systems.
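
As a rough sanity check of the 32-lane claim (assuming standard PCIe Gen3 signalling of 8 GT/s per lane with 128b/130b encoding, and ignoring packet overheads): 8 GT/s × 128/130 ≈ 7.88 Gb/s per lane, so a Gen3 x16 slot tops out around 126 Gb/s, while two x16 slots (x32 in total) provide roughly 252 Gb/s, which is why the dual-slot configuration can sustain a 200Gb/s port even on Gen3 systems.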




Features: • Up to 200GbE connectivity per port

• Maximum bandwidth of 200Gb/s

• Up to 215 million messages/sec

• Sub 0.8usec latency

• Block-level XTS-AES mode hardware encryption

• Optional FIPS-compliant adapter card

• Supports both 50G SerDes (PAM4) and 25G SerDes (NRZ) based ports

• Best-in-class packet pacing with sub-nanosecond accuracy

• PCIe Gen4/Gen3 with up to x32 lanes

• RoHS compliant

• ODCC compatible




Cloud and Web 2.0 Environments Telco, Cloud and Web 2.0 customers developing their platforms on Software Defined Network (SDN) environments are leveraging the Virtual Switching capabilities of the Operating Systems on their servers to enable maximum flexibility in the management and routing protocols of their networks.

Open vSwitch (OVS) is an example of a virtual switch that allows Virtual Machines to communicate among themselves and with the outside world. Software-based virtual switches, traditionally residing in the hypervisor, are CPU intensive, affecting system performance and preventing full utilization of available CPU for compute functions.

To address this, ConnectX-6 offers ASAP2 - Mellanox Accelerated Switch and Packet Processing® technology to offload the vSwitch/vRouter by handling the data plane in the NIC hardware while maintaining the control plane unmodified. As a result, significantly higher vSwitch/vRouter performance is achieved without the associated CPU load.

The vSwitch/vRouter offload functions supported by ConnectX-5 and ConnectX-6 include encapsulation and de-capsulation of overlay network headers, as well as stateless offloads of inner packets, packet headers re-write (enabling NAT functionality), hairpin, and more.

In addition, ConnectX-6 offers intelligent flexible pipeline capabilities, including programmable flexible parser and flexible match-action tables, which enable hardware offloads for future protocols.
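
In practice, handing the OVS data plane to the NIC is controlled by a single Open vSwitch setting. A minimal sketch using standard OVS commands driven from Python (the bridge and representor port names are hypothetical placeholders, not a vendor-specific API):

# Sketch: enable OVS hardware offload (which lets ASAP2 handle the datapath in the NIC)
# and dump the flows currently offloaded. Bridge/port names are placeholders.
import subprocess

def sh(*args: str) -> str:
    return subprocess.run(args, check=True, capture_output=True, text=True).stdout

# Enable hardware offload globally; a restart of the openvswitch service is
# typically needed before the setting takes effect.
sh("ovs-vsctl", "set", "Open_vSwitch", ".", "other_config:hw-offload=true")

# Attach a VF representor port to a bridge (hypothetical names).
sh("ovs-vsctl", "--may-exist", "add-br", "br-offload")
sh("ovs-vsctl", "--may-exist", "add-port", "br-offload", "enp59s0f0_0")

# Once traffic is flowing, list the datapath flows handled in hardware.
print(sh("ovs-appctl", "dpctl/dump-flows", "type=offloaded"))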




Storage Environments NVMe storage devices are gaining momentum, offering very fast access to storage media. The evolving NVMe over Fabric (NVMe-oF) protocol leverages RDMA connectivity to remotely access NVMe storage devices efficiently, while keeping the end-to-end NVMe model at lowest latency. With its NVMe-oF target and initiator offloads, ConnectX-6 brings further optimization to NVMe-oF, enhancing CPU utilization and scalability.




Security ConnectX-6 block-level encryption offers a critical innovation to network security. As data in transit is stored or retrieved, it undergoes encryption and decryption. The ConnectX-6 hardware offloads the IEEE AES-XTS encryption/decryption from the CPU, saving latency and CPU utilization. It also guarantees protection for users sharing the same resources through the use of dedicated encryption keys.

By performing block-storage encryption in the adapter, ConnectX-6 eliminates the need for self-encrypted disks. This gives customers the freedom to choose their preferred storage device, including byte-addressable and NVDIMM devices that traditionally do not provide encryption. Moreover, ConnectX-6 can support Federal Information Processing Standards (FIPS) compliance.




Machine Learning and Big Data Environments Data analytics has become an essential function within many enterprise data centers, clouds and hyperscale platforms. Machine learning relies on especially high throughput and low latency to train deep neural networks and to improve recognition and classification accuracy. As the first adapter card to deliver 200GbE throughput, ConnectX-6 is the perfect solution to provide machine learning applications with the levels of performance and scalability that they require. ConnectX-6 utilizes the RDMA technology to deliver low-latency and high performance. ConnectX-6 enhances RDMA network capabilities even further by delivering end-to-end packet level flow control.




Mellanox Socket Direct Mellanox Socket Direct technology improves the performance of dual-socket servers by enabling each of their CPUs to access the network through a dedicated PCIe interface. As the connection from each CPU to the network bypasses the QPI (UPI) and the second CPU, Socket Direct reduces latency and CPU utilization. Moreover, each CPU handles only its own traffic (and not that of the second CPU), thus optimizing CPU utilization even further.

Mellanox Socket Direct also enables GPUDirect® RDMA for all CPU/GPU pairs by ensuring that GPUs are linked to the CPUs closest to the adapter card. Mellanox Socket Direct enables Intel® DDIO optimization on both sockets by creating a direct connection between the sockets and the adapter card.

Mellanox Socket Direct technology is enabled by a main card that houses the ConnectX-6 adapter card and an auxiliary PCIe card bringing in the remaining PCIe lanes. The ConnectX-6 Socket Direct card is installed into two PCIe x16 slots and connected using a 350mm long harness. The two PCIe x16 slots may also be connected to the same CPU. In this case the main advantage of the technology lies in delivering 200GbE to servers with PCIe Gen3-only support.

Please note that when using Mellanox Socket Direct in virtualization or dual-port use cases, some restrictions may apply. For further details, contact Mellanox Customer Support.




Host Management Mellanox host management and control capabilities include NC-SI over MCTP over SMBus, and MCTP over PCIe - Baseboard Management Controller (BMC) interface, as well as PLDM for Monitor and Control DSP0248 and PLDM for Firmware Update DSP0267.

Overlay Networks
• RoCE over overlay networks
• Stateless offloads for overlay network tunneling protocols
• Hardware offload of encapsulation and decapsulation of VXLAN, NVGRE, and Geneve overlay networks

Remote Boot
• Remote boot over Ethernet
• Remote boot over iSCSI
• Unified Extensible Firmware Interface (UEFI)
• Pre-execution Environment (PXE)

  Storage Offloads
• Block-level encryption: XTS-AES 256/512 bit key
• NVMe over Fabric offloads for target machine
• T10 DIF - signature handover operation at wire speed, for ingress and egress traffic
• Storage Protocols: SRP, iSER, NFS RDMA, SMB Direct, NVMe-oF

Management and Control
• NC-SI, MCTP over SMBus and MCTP over PCIe - Baseboard Management Controller interface
• PLDM for Monitor and Control DSP0248
• PLDM for Firmware Update DSP0267
• SDN management interface for managing the eSwitch
• I2C interface for device control and configuration
• General Purpose I/O pins
• SPI interface to Flash
• JTAG IEEE 1149.1 and IEEE 1149.6

  CPU Offloads
• RDMA over Converged Ethernet (RoCE)
• TCP/UDP/IP stateless offload
• LSO, LRO, checksum offload
• RSS (also on encapsulated packet), TSS, HDS, VLAN and MPLS tag insertion/stripping, Receive flow steering
• Data Plane Development Kit (DPDK) for kernel bypass applications
• Open vSwitch (OVS) offload using ASAP2
• Flexible match-action flow tables
• Tunneling encapsulation / decapsulation
• Intelligent interrupt coalescence
• Header rewrite supporting hardware offload of NAT router

Hardware-Based I/O Virtualization
• Single Root IOV
• Address translation and protection
• VMware NetQueue support
-SR-IOV: Up to 1K Virtual Functions
-SR-IOV: Up to 8 Physical Functions per host
• Virtualization hierarchies (e.g., NPAR)
• Virtualizing Physical Functions on a physical port
• SR-IOV on every Physical Function
• Configurable and user-programmable QoS
• Guaranteed QoS for VMs

  Ethernet
• 200GbE / 100GbE / 50GbE / 40GbE / 25GbE / 10GbE / 1GbE
• IEEE 802.3bj, 802.3bm 100 Gigabit Ethernet
• IEEE 802.3by, Ethernet Consortium 25, 50 Gigabit Ethernet, supporting all FEC modes
• IEEE 802.3ba 40 Gigabit Ethernet
• IEEE 802.3ae 10 Gigabit Ethernet
• IEEE 802.3az Energy Efficient Ethernet
• IEEE 802.3ap based auto-negotiation and KR startup
• IEEE 802.3ad, 802.1AX Link Aggregation
• IEEE 802.1Q, 802.1P VLAN tags and priority
• IEEE 802.1Qau (QCN) – Congestion Notification
• IEEE 802.1Qaz (ETS)
• IEEE 802.1Qbb (PFC)
• IEEE 802.1Qbg
• IEEE 1588v2
• Jumbo frame support (9.6KB)

Enhanced Features
• Hardware-based reliable transport
• Collective operations offloads
• Vector collective operations offloads
• Mellanox PeerDirect® RDMA (aka GPUDirect®) communication acceleration
• 64/66 encoding
• Enhanced Atomic operations
• Advanced memory mapping support, allowing user mode registration and remapping of memory (UMR)
• Extended Reliable Connected transport (XRC)
• Dynamically Connected transport (DCT)
• On demand paging (ODP)
• MPI Tag Matching
• Rendezvous protocol offload
• Out-of-order RDMA supporting Adaptive Routing
• Burst buffer offload
• In-Network Memory registration-free RDMA memory access

 
Full specification and details can be found in the Product Datasheet PDF file

In Stock: 2-3 Weeks
£1366.20
£1639.44 Inc Vat
Add To Cart
Mellanox MCX654106A-HCAT CONNECTX-6 VPI Adapter Card Kit
HDR IB (200GB/S) and 200GBE Dual-Port QSFP56 Socket Direct 2X PCIE3.0 X16 Tall Brackets

ConnectX-6 VPI Card 200Gb/s InfiniBand & Ethernet Adapter Card

Featuring In-Network Computing for Enhanced Efficiency and Scalability

Socket Direct 2x PCIe 3.0 x16

ConnectX-6 Virtual Protocol Interconnect (VPI) cards are a groundbreaking addition to the ConnectX series of industry-leading network adapter cards. Providing two ports of 200Gb/s for InfiniBand and Ethernet connectivity, sub-600ns latency and 215 million messages per second, ConnectX-6 VPI cards enable the highest performance and most flexible solution aimed at meeting the continually growing demands of data center applications. In addition to all the existing innovative features of past versions, ConnectX-6 cards offer a number of enhancements to further improve performance and scalability.




Benefits: • Industry-leading throughput, low CPU utilization and high message rate

• Highest performance and most intelligent fabric for compute and storage infrastructures

• Cutting-edge performance in virtualized networks including Network Function Virtualization (NFV)

• Host Chaining technology for economical rack design

• Smart interconnect for x86, Power, Arm, GPU and FPGA-based compute and storage platforms

• Flexible programmable pipeline for new network flows

• Enabler for efficient service chaining capabilities

• Efficient I/O consolidation, lowering data center costs and complexity


ConnectX-6 VPI supports HDR, HDR100, EDR, FDR, QDR, DDR and SDR InfiniBand speeds as well as 200, 100, 50, 40, 25, and 10Gb/s Ethernet speeds.

ConnectX®-6 InfiniBand smart adapter cards are a key element in the NVIDIA Quantum InfiniBand platform. ConnectX-6 provides up to two ports of 200Gb/s InfiniBand and Ethernet(1) connectivity with extremely low latency, high message rate, smart offloads, and NVIDIA In-Network Computing acceleration that improve performance and scalability.




Features: • Up to 200Gb/s connectivity per port

• Max bandwidth of 200Gb/s

• Up to 215 million messages/sec

• Sub 0.6usec latency

• Block-level XTS-AES mode hardware encryption

• FIPS capable

• Advanced storage capabilities including block-level encryption and checksum offloads

• Supports both 50G SerDes (PAM4) and 25G SerDes (NRZ) based ports

• Best-in-class packet pacing with sub-nanosecond accuracy

• PCIe Gen 3.0 and Gen 4.0 support

• RoHS compliant

• ODCC compatible




High Performance Computing Environment With its NVIDIA In-Network Computing and In-Network Memory capabilities, ConnectX-6 offloads computation even further to the network, saving CPU cycles and increasing network efficiency. ConnectX-6 utilizes remote direct memory access (RDMA) technology as defined in the InfiniBand Trade Association (IBTA) specification, delivering low latency, and high performance. ConnectX-6 enhances RDMA network capabilities even further by delivering end-to-end packet-level flow control.




Machine Learning and Big Data Environments Data analytics has become an essential function within many enterprise data centers, clouds, and hyperscale platforms. Machine learning (ML) relies on especially high throughput and low latency to train deep neural networks and improve recognition and classification accuracy. With its 200Gb/s throughput, ConnectX-6 is an excellent solution to provide ML applications with the levels of performance and scalability that they require.




Security Including Block-Level Encryption ConnectX-6 block-level encryption offers a critical innovation to network security. As data in transit is stored or retrieved, it undergoes encryption and decryption. ConnectX-6 hardware offloads the IEEE AES-XTS encryption/decryption from the CPU, saving latency and CPU utilization. It also guarantees protection for users sharing the same resources through the use of dedicated encryption keys.

By performing block storage encryption in the adapter, ConnectX-6 eliminates the need for self-encrypted disks. This gives customers the freedom to choose their preferred storage device, including byte addressable and NVDIMM devices that traditionally do not provide encryption. Moreover, ConnectX-6 can offer Federal Information Processing Standards (FIPS) compliance.




Bring NVMe-oF to Storage Environments NVMe storage devices are gaining momentum, offering very fast access to storage media. The evolving NVMe over Fabrics (NVMe-oF) protocol leverages RDMA connectivity to remotely access NVMe storage devices efficiently, while keeping the end-to-end NVMe model at lowest latency. With its NVMe-oF target and initiator offloads, ConnectX-6 brings further optimization to NVMe-oF, enhancing CPU utilization and scalability.




Portfolio of Smart Adapters ConnectX-6 is available in two form factors: low-profile stand-up PCIe and Open Compute Project (OCP) Spec 3.0 cards with QSFP connectors. Single-port, HDR, stand-up PCIe adapters are available based on either ConnectX-6 or ConnectX-6 DE (ConnectX-6 Dx enhanced for HPC applications).

In addition, specific PCIe stand-up cards are available with a cold plate for insertion into liquid-cooled Intel Server System D50TNP platforms.




Socket Direct ConnectX-6 also provides options for NVIDIA Socket Direct™ configurations, which improves the performance of multi-socket servers by enabling each of their CPUs to access the network through its dedicated PCIe interface. This enables data to bypass the QPI (UPI) and the other CPU, improving latency, performance, and CPU utilization.

Socket Direct also enables NVIDIA GPUDirect® RDMA for all CPU/GPU pairs by ensuring that GPUs are linked to the CPUs closest to the adapter card. Socket Direct enables Intel® DDIO optimization on both sockets by creating a direct connection between the sockets and the adapter card.

Socket Direct technology is enabled by a main card housing the ConnectX-6 and an auxiliary PCIe card bringing in the remaining PCIe lanes. The ConnectX-6 Socket Direct card is installed into two PCIe x16 slots and connected using a harness. The two PCIe x16 slots may also be connected to the same CPU. In this case, the main advantage of the technology lies in delivering 200Gb/s to servers with PCIe Gen3-only support.



Host Management Host Management includes NC-SI over MCTP over SMBus, and MCTP over PCIe—Baseboard Management Controller (BMC) interface, as well as PLDM for Monitor and Control DSP0248 and PLDM for Firmware Update DSP0267.




Broad Software Support All ConnectX adapters are supported by a full suite of drivers for major Linux distributions, as well as Microsoft® Windows® Server and VMware vSphere®.

HPC software libraries supported include HPC-X, OpenMPI, MVAPICH, MPICH, OpenSHMEM, PGAS, and varied commercial packages.


InfiniBand
• 200Gb/s and lower rates
• IBTA Specification 1.3 compliant
• RDMA, send/receive semantics
• Hardware-based congestion control
• Atomic operations
• 16 million I/O channels
• 256 to 4Kbyte MTU, 2Gbyte messages
• 8 virtual lanes + VL15

Storage Offloads
• Block-level encryption: XTS-AES 256/512-bit key
• NVMe over Fabrics offloads for target machine
• T10 DIF—Signature handover operation at wire speed for ingress and egress traffic
• Storage protocols: SRP, iSER, NFS RDMA, SMB Direct, NVMe-oF

  Management and Control
• NC-SI, MCTP over SMBus and MCTP over PCIe—Baseboard Management Controller interface
• PLDM for Monitor and Control DSP0248
• PLDM for Firmware Update DSP0267
• SDN management interface for managing the eSwitch
• I2C interface for device control and configuration
• General Purpose I/O pins
• SPI interface to flash
• JTAG IEEE 1149.1 and IEEE 1149.6

Hardware-Based I/O Virtualization
• Single Root IOV (SR-IOV)
• Address translation and protection
• VMware NetQueue support
-SR-IOV: Up to 1K virtual functions
- SR-IOV: Up to 8 physical functions per host
• Virtualization hierarchies (e.g., NPAR)
- Virtualizing physical functions on a physical port
- SR-IOV on every physical function
• Configurable and user-programmable QoS
• Guaranteed QoS for VMs

  Remote Boot
• Remote boot over InfiniBand
• Remote boot over Ethernet
• Remote boot over iSCSI
• Unified Extensible Firmware Interface (UEFI)
• Pre-execution Environment (PXE)

Overlay Networks
• RoCE over overlay networks
• Stateless offloads for overlay network tunneling protocols
• Hardware offload of encapsulation and decapsulation of VXLAN, NVGRE, and GENEVE overlay networks

  Enhanced Features
• Hardware-based reliable transport
• Collective operations offloads
• Vector collective operations offloads
• NVIDIA PeerDirect™ RDMA (aka GPUDirect) communication acceleration
• 64/66 encoding
• Enhanced atomic operations
• Advanced memory mapping support, allowing user mode registration and remapping of memory (UMR)
• Extended reliable connected transport (XRC)
• Dynamically connected transport (DCT)
• On demand paging (ODP)
• MPI tag matching
• Rendezvous protocol offload
• Out-of-order RDMA supporting adaptive routing
• Burst buffer offload
• In-Network Memory registration-free RDMA memory access

CPU Offloads
• RDMA over Converged Ethernet (RoCE)
• TCP/UDP/IP stateless offload
• LSO, LRO, checksum offload
• RSS (also on encapsulated packet), TSS, HDS, VLAN and MPLS tag insertion/stripping, receive flow steering
• Data plane development kit (DPDK) for kernel bypass applications
• Open vSwitch (OVS) offload using ASAP2
• Flexible match-action flow tables
• Tunneling encapsulation/decapsulation
• Intelligent interrupt coalescence
• Header rewrite supporting hardware offload of NAT router

 
Full specification and details can be found in the Product Datasheet PDF file

In Stock: 2-3 Weeks
£ POA
£ POA Inc Vat
Mellanox MCX653106A-HDAT-SP CONNECTX-6 VPI Adapter Card
HDR IB (200Gb/s) and 200GBE Dual-Port QSFP56 Tall Bracket

ConnectX-6 VPI Card 200Gb/s InfiniBand & Ethernet Adapter Card

Featuring In-Network Computing for Enhanced Efficiency and Scalability

ConnectX-6 Virtual Protocol Interconnect (VPI) cards are a groundbreaking addition to the ConnectX series of industry-leading network adapter cards. Providing two ports of 200Gb/s for InfiniBand and Ethernet connectivity, sub-600ns latency and 215 million messages per second, ConnectX-6 VPI cards enable the highest performance and most flexible solution aimed at meeting the continually growing demands of data center applications. In addition to all the existing innovative features of past versions, ConnectX-6 cards offer a number of enhancements to further improve performance and scalability.

ConnectX-6 VPI supports HDR, HDR100, EDR, FDR, QDR, DDR and SDR InfiniBand speeds as well as 200, 100, 50, 40, 25, and 10Gb/s Ethernet speeds.




Benefits: • Industry-leading throughput, low CPU utilization and high message rate

• Highest performance and most intelligent fabric for compute and storage infrastructures

• Cutting-edge performance in virtualized networks including Network Function Virtualization (NFV)

• Host Chaining technology for economical rack design

• Smart interconnect for x86, Power, Arm, GPU and FPGA-based compute and storage platforms

• Flexible programmable pipeline for new network flows

• Efficient service chaining enablement

• Increased I/O consolidation efficiencies, reducing data center costs & complexity


ConnectX®-6 Dx is a highly secure and advanced smart network interface card (SmartNIC) that accelerates mission-critical cloud and data center applications, including security, virtualization, SDN/NFV, big data, machine learning, and storage. ConnectX-6 Dx provides up to two ports of 100Gb/s or a single port of 200Gb/s Ethernet connectivity and is powered by 50Gb/s (PAM4) or 25/10 Gb/s (NRZ) SerDes technology.

ConnectX-6 Dx features virtual switch (vSwitch) and virtual router (vRouter) hardware accelerations delivering orders-of-magnitude higher performance than software-based solutions. ConnectX-6 Dx supports a choice of single-root I/O virtualization (SR-IOV) and VirtIO in hardware, enabling customers to best address their application needs. By offloading cloud networking workloads, ConnectX-6 Dx frees up CPU cores for business applications while reducing total cost of ownership.
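
The choice between SR-IOV and VirtIO is made on the host rather than through a vendor-specific API. As a minimal illustration of the SR-IOV path, the Python sketch below enables virtual functions through the standard Linux sysfs interface; the interface name and VF count are hypothetical examples, and the exact procedure depends on the driver and firmware configuration.

```python
# Minimal sketch: create SR-IOV virtual functions for a NIC through the
# standard Linux sysfs interface. The interface name is a hypothetical
# example; run as root on a host where the adapter driver is loaded.
from pathlib import Path

IFACE = "enp3s0f0"          # hypothetical interface name for the adapter
NUM_VFS = 4                 # number of virtual functions to expose to guests

dev = Path(f"/sys/class/net/{IFACE}/device")

# How many VFs does the device support?
total = int((dev / "sriov_totalvfs").read_text())
print(f"{IFACE} supports up to {total} virtual functions")

# Writing 0 first is a common precaution, since the count can only be
# changed while no VFs are currently allocated.
(dev / "sriov_numvfs").write_text("0")
(dev / "sriov_numvfs").write_text(str(min(NUM_VFS, total)))

print(f"Enabled {min(NUM_VFS, total)} VFs; they can now be passed through "
      "to virtual machines or bound to vfio-pci.")
```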




Features: • Up to 200Gb/s connectivity per port

• Max bandwidth of 200Gb/s

• Up to 215 million messages/sec

• Sub-0.6 µsec latency

• OCP 2.0

• FIPS capable

• Advanced storage capabilities including block-level encryption and checksum offloads

• Supports both 50G SerDes (PAM4) and 25G SerDes (NRZ) based ports

• Best-in-class packet pacing with sub-nanosecond accuracy

• PCIe Gen 3.0 and Gen 4.0 support

• RoHS compliant

• ODCC compatible




High Performance Computing Environments With its NVIDIA In-Network Computing and In-Network Memory capabilities, ConnectX-6 offloads computation even further to the network, saving CPU cycles and increasing network efficiency. ConnectX-6 utilizes remote direct memory access (RDMA) technology as defined in the InfiniBand Trade Association (IBTA) specification, delivering low latency and high performance. ConnectX-6 enhances RDMA network capabilities even further by delivering end-to-end packet-level flow control.




Machine Learning and Big Data Environments Data analytics has become an essential function within many enterprise data centers, clouds, and hyperscale platforms. Machine learning (ML) relies on especially high throughput and low latency to train deep neural networks and improve recognition and classification accuracy. With its 200Gb/s throughput, ConnectX-6 is an excellent solution to provide ML applications with the levels of performance and scalability that they require.




Security Including Block-Level Encryption ConnectX-6 block-level encryption offers a critical innovation to network security. As data in transit is stored or retrieved, it undergoes encryption and decryption. ConnectX-6 hardware offloads IEEE AES-XTS encryption and decryption from the CPU, reducing latency and CPU utilization. It also guarantees protection for users sharing the same resources through the use of dedicated encryption keys.

By performing block storage encryption in the adapter, ConnectX-6 eliminates the need for self-encrypted disks. This gives customers the freedom to choose their preferred storage device, including byte addressable and NVDIMM devices that traditionally do not provide encryption. Moreover, ConnectX-6 can offer Federal Information Processing Standards (FIPS) compliance.
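
As an illustration of what the adapter offloads, the sketch below performs the same XTS-AES transform in software with the third-party Python cryptography package, using a 512-bit (AES-256-XTS) key and a tweak derived from the logical block number. It is only a software analogue for clarity, not the card's programming interface.

```python
# Illustrative software equivalent of the block-level XTS-AES encryption the
# adapter offloads in hardware (AES-256-XTS, i.e. a 512-bit combined key).
# Requires the third-party "cryptography" package.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(64)              # 512-bit key = two 256-bit AES keys (XTS)
block_number = 42                 # logical block address used as the tweak
tweak = block_number.to_bytes(16, "little")

plaintext = os.urandom(4096)      # one 4 KiB storage block

# Encrypt the block.
enc = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
ciphertext = enc.update(plaintext) + enc.finalize()

# Decrypt and verify the round trip.
dec = Cipher(algorithms.AES(key), modes.XTS(tweak)).decryptor()
assert dec.update(ciphertext) + dec.finalize() == plaintext
print("AES-256-XTS round-trip OK for block", block_number)
```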




Bring NVMe-oF to Storage Environments NVMe storage devices are gaining momentum, offering very fast access to storage media. The evolving NVMe over Fabrics (NVMe-oF) protocol leverages RDMA connectivity to remotely access NVMe storage devices efficiently, while keeping the end-to-end NVMe model at lowest latency. With its NVMe-oF target and initiator offloads, ConnectX-6 brings further optimization to NVMe-oF, enhancing CPU utilization and scalability.
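
On the host side, an NVMe-oF RDMA connection is typically established with the standard nvme-cli tool, and the adapter's offloads then apply to the resulting traffic transparently. The sketch below is a minimal example with hypothetical target address and NQN, assuming nvme-cli and the nvme-rdma kernel module are available.

```python
# Minimal sketch: connect a host to an NVMe-oF target over RDMA using the
# standard nvme-cli tool. Target address, port and NQN are hypothetical
# placeholders; run as root.
import subprocess

TARGET_ADDR = "192.168.10.20"                   # hypothetical target IP
TARGET_NQN = "nqn.2016-06.io.example:subsys1"   # hypothetical subsystem NQN

subprocess.run([
    "nvme", "connect",
    "-t", "rdma",            # RDMA transport (RoCE on this adapter)
    "-a", TARGET_ADDR,       # target address
    "-s", "4420",            # conventional NVMe-oF service port
    "-n", TARGET_NQN,        # subsystem NQN to attach
], check=True)

# List the NVMe namespaces now visible to the host.
subprocess.run(["nvme", "list"], check=True)
```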




Portfolio of Smart Adapters ConnectX-6 is available in two form factors: low-profile stand-up PCIe and Open Compute Project (OCP) Spec 3.0 cards with QSFP connectors. Single-port, HDR, stand-up PCIe adapters are available based on either ConnectX-6 or ConnectX-6 DE (ConnectX-6 Dx enhanced for HPC applications).

In addition, specific PCIe stand-up cards are available with a cold plate for insertion into liquid-cooled Intel Server System D50TNP platforms.




Socket Direct ConnectX-6 also provides options for NVIDIA Socket Direct™ configurations, which improve the performance of multi-socket servers by enabling each of their CPUs to access the network through its own dedicated PCIe interface. This enables data to bypass the QPI (UPI) and the other CPU, improving latency, performance, and CPU utilization.

Socket Direct also enables NVIDIA GPUDirect® RDMA for all CPU/GPU pairs by ensuring that GPUs are linked to the CPUs closest to the adapter card. Socket Direct enables Intel® DDIO optimization on both sockets by creating a direct connection between the sockets and the adapter card.

Socket Direct technology is enabled by a main card housing the ConnectX-6 and an auxiliary PCIe card bringing in the remaining PCIe lanes. The ConnectX-6 Socket Direct card is installed into two PCIe x16 slots and connected using a harness. The two PCIe x16 slots may also be connected to the same CPU. In this case, the main advantage of the technology lies in delivering 200Gb/s to servers with PCIe Gen3-only support.
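
Whether or not Socket Direct is used, the underlying locality question can be inspected from Linux: the sketch below reads the standard sysfs attributes that report which NUMA node and CPUs sit closest to the adapter. The interface name is a hypothetical example.

```python
# Minimal sketch: report which NUMA node and CPUs are local to a network
# adapter, the locality problem that Socket Direct addresses in hardware.
from pathlib import Path

IFACE = "enp3s0f0"  # hypothetical interface name for the adapter
dev = Path(f"/sys/class/net/{IFACE}/device")

numa_node = (dev / "numa_node").read_text().strip()
local_cpus = (dev / "local_cpulist").read_text().strip()

print(f"{IFACE} is attached to NUMA node {numa_node}")
print(f"CPUs with direct PCIe access: {local_cpus}")
print("Pinning network-heavy processes to these CPUs (e.g. with taskset) "
      "avoids cross-socket UPI traffic on hosts without Socket Direct.")
```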




Host Management Host management includes NC-SI over MCTP over SMBus and MCTP over PCIe to the Baseboard Management Controller (BMC) interface, as well as PLDM for Monitor and Control DSP0248 and PLDM for Firmware Update DSP0267.




Broad Software Support All ConnectX adapters are supported by a full suite of drivers for major Linux distributions, as well as Microsoft® Windows® Server and VMware vSphere®.

HPC software libraries supported include HPC-X, OpenMPI, MVAPICH, MPICH, OpenSHMEM, PGAS, and varied commercial packages.
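
As a small application-level example of that HPC stack, the mpi4py program below performs an allreduce collective; when launched with mpirun from an Open MPI build that uses the adapter's RDMA transport, the collective traffic runs over the fabric. The script itself is generic and not adapter-specific.

```python
# Minimal mpi4py sketch: an allreduce collective, the kind of operation that
# benefits from the adapter's RDMA transport and collective offloads.
# Launch with e.g.:  mpirun -np 4 python3 allreduce_demo.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank contributes its own vector; the result is summed across ranks.
local = np.full(8, rank, dtype=np.float64)
total = np.empty_like(local)
comm.Allreduce(local, total, op=MPI.SUM)

if rank == 0:
    print("Allreduce result on rank 0:", total)
```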

Overlay Networks
• RoCE over overlay networks
• Stateless offloads for overlay network tunneling protocols
• Hardware offload of encapsulation and decapsulation of VXLAN, NVGRE, and Geneve overlay networks

Storage Offloads
• Block-level encryption: XTS-AES 256/512-bit key
• NVMe over Fabrics offloads for target machine
• T10-DIF signature handover operation at wire speed, for ingress and egress traffic
• Storage protocols: SRP, iSER, NFS RDMA, SMB Direct, and NVMe-oF

InfiniBand
• 200Gb/s and lower rates
• IBTA Specification 1.3 compliant
• RDMA, send/receive semantics
• Hardware-based congestion control
• Atomic operations
• 16 million I/O channels
• 256 to 4Kbyte MTU, 2Gbyte messages
• 8 virtual lanes + VL15

Remote Boot
• Remote boot over InfiniBand
• Remote boot over Ethernet
• Remote boot over iSCSI
• Unified Extensible Firmware Interface (UEFI)
• Pre-execution Environment (PXE)

Hardware-Based I/O Virtualization
• Single Root IOV (SR-IOV)
• Address translation and protection
• VMware NetQueue support
- SR-IOV: Up to 1K virtual functions
- SR-IOV: Up to 8 physical functions per host
• Virtualization hierarchies (e.g., NPAR)
- Virtualizing physical functions on a physical port
- SR-IOV on every physical function
• Configurable and user-programmable QoS
• Guaranteed QoS for VMs

Management and Control
• NC-SI, MCTP over SMBus and MCTP over PCIe—Baseboard Management Controller interface
• PLDM for Monitor and Control DSP0248
• PLDM for Firmware Update DSP0267
• SDN management interface for managing the eSwitch
• I2C interface for device control and configuration
• General Purpose I/O pins
• SPI interface to flash
• JTAG IEEE 1149.1 and IEEE 1149.6

Enhanced Features
• Hardware-based reliable transport
• Collective operations offloads
• Vector collective operations offloads
• NVIDIA PeerDirect® RDMA (a.k.a. NVIDIA GPUDirect) communication acceleration
• 64/66 encoding
• Enhanced atomic operations
• Advanced memory mapping support, allowing user mode registration and remapping of memory (UMR)
• Extended Reliable Connected transport (XRC)
• Dynamically Connected Transport (DCT)
• On demand paging (ODP)
• MPI tag matching
• Rendezvous protocol offload
• Out-of-order RDMA supporting Adaptive Routing
• Burst buffer offload
• In-Network Memory registration-free RDMA memory access

CPU Offloads
• RDMA over Converged Ethernet (RoCE)
• TCP/UDP/IP stateless offload
• LSO, LRO, checksum offload
• RSS (also on encapsulated packet), TSS, HDS, VLAN and MPLS tag insertion/stripping, receive flow steering
• Data plane development kit (DPDK) for kernel bypass applications
• Open vSwitch (OVS) offload using ASAP2
• Flexible match-action flow tables
• Tunneling encapsulation/decapsulation
• Intelligent interrupt coalescence
• Header rewrite supporting hardware offload of NAT router

 
Full specification and details can be found in the Product Datasheet PDF file

In Stock: 2-3 Weeks
£1571.13
£1885.36 Inc Vat
Add To Cart
Other Ranges Available
Mellanox Fan Modules
View Range
Mellanox Network Interface Cards
View Range
Mellanox Brackets and Mounting Kits
View Range
Mellanox Power Supply Units
View Range