Price: £1247.24 Ex VAT (£1496.69 Inc VAT)
Earn 623 Data Points when purchasing this product.


Mellanox MCX623106AC-CDAT ConnectX-6 Dx EN Adapter Card, 100GbE Dual-Port QSFP56, PCIe 4.0 x16

Crypto and Secure Boot, Tall Bracket

by Mellanox
Part No: FEMCX623106AC-CDAT
Manufacturer No: MCX623106AC-CDAT
Delivery: In Stock: 2-3 Weeks


    ConnectX-6 VPI Card 200Gb/s InfiniBand & Ethernet Adapter Card

    Featuring In-Network Computing for Enhanced Efficiency and Scalability

    ConnectX-6 Virtual Protocol Interconnect (VPI) cards are a groundbreaking addition to the ConnectX series of industry-leading network adapter cards. Providing two ports of 200Gb/s InfiniBand and Ethernet connectivity, sub-600ns latency and 215 million messages per second, ConnectX-6 VPI cards deliver the highest-performance and most flexible solution for the continually growing demands of data center applications. In addition to all the existing innovative features of past versions, ConnectX-6 cards offer a number of enhancements that further improve performance and scalability.

    ConnectX-6 VPI supports HDR, HDR100, EDR, FDR, QDR, DDR and SDR InfiniBand speeds as well as 200, 100, 50, 40, 25, and 10Gb/s Ethernet speeds.


    Benefits:

    • Industry-leading throughput, low CPU utilization and high message rate

    • Highest performance and most intelligent fabric for compute and storage infrastructures

    • Cutting-edge performance in virtualized networks including Network Function Virtualization (NFV)

    • Host Chaining technology for economical rack design

    • Smart interconnect for x86, Power, Arm, GPU and FPGA-based compute and storage platforms

    • Flexible programmable pipeline for new network flows

    • Efficient service chaining enablement

    • Increased I/O consolidation efficiencies, reducing data center costs & complexity


    ConnectX®-6 Dx is a highly secure and advanced smart network interface card (SmartNIC) that accelerates mission-critical cloud and data center applications, including security, virtualization, SDN/NFV, big data, machine learning, and storage. ConnectX-6 Dx provides up to two ports of 100Gb/s or a single port of 200Gb/s Ethernet connectivity and is powered by 50Gb/s (PAM4) or 25/10 Gb/s (NRZ) SerDes technology.

    ConnectX-6 Dx features virtual switch (vSwitch) and virtual router (vRouter) hardware accelerations delivering orders-of-magnitude higher performance than software-based solutions. ConnectX-6 Dx supports a choice of single-root I/O virtualization (SR-IOV) and VirtIO in hardware, enabling customers to best address their application needs. By offloading cloud networking workloads, ConnectX-6 Dx frees up CPU cores for business applications while reducing total cost of ownership.
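
    On Linux, SR-IOV virtual functions on an adapter like this are normally enabled through the kernel's standard sriov_numvfs sysfs attribute rather than a vendor-specific API. The following is a minimal, hedged C sketch of that step only; the PCI address and VF count are placeholders, and the maximum VF count depends on the adapter and its firmware configuration.

        /* Minimal sketch: request SR-IOV virtual functions through the standard
         * Linux sysfs interface exposed by the PCI subsystem.
         * The PCI address and VF count below are placeholders. */
        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            const char *path = "/sys/bus/pci/devices/0000:3b:00.0/sriov_numvfs";
            FILE *f = fopen(path, "w");

            if (f == NULL) {
                perror("fopen sriov_numvfs");
                return EXIT_FAILURE;
            }
            /* Request 8 VFs; if VFs are already enabled, the count must be
             * reset to 0 before a different value can be written. */
            if (fprintf(f, "8\n") < 0 || fclose(f) != 0) {
                perror("write sriov_numvfs");
                return EXIT_FAILURE;
            }
            printf("Requested 8 virtual functions via %s\n", path);
            return EXIT_SUCCESS;
        }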


    Features:

    • Up to 200Gb/s connectivity per port

    • Max bandwidth of 200Gb/s

    • Up to 215 million messages/sec

    • Sub-0.6 µs latency

    • OCP 2.0

    • FIPS capable

    • Advanced storage capabilities including block-level encryption and checksum offloads

    • Supports both 50G SerDes (PAM4) and 25G SerDes (NRZ) based ports

    • Best-in-class packet pacing with sub-nanosecond accuracy

    • PCIe Gen 3.0 and Gen 4.0 support

    • RoHS compliant

    • ODCC compatible


    High Performance Computing Environments

    With its NVIDIA In-Network Computing and In-Network Memory capabilities, ConnectX-6 offloads computation even further to the network, saving CPU cycles and increasing network efficiency. ConnectX-6 uses remote direct memory access (RDMA) technology as defined in the InfiniBand Trade Association (IBTA) specification, delivering low latency and high performance. ConnectX-6 enhances RDMA network capabilities even further by delivering end-to-end packet-level flow control.
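
    As a concrete reference point, RDMA adapters in this family are typically programmed on Linux through the libibverbs API from the rdma-core packages. The hedged sketch below only enumerates the RDMA devices the library can see, which is the usual first step before opening a device, registering memory, and posting RDMA work requests.

        /* Minimal sketch: enumerate RDMA devices via libibverbs (rdma-core).
         * Build with: cc list_devs.c -libverbs */
        #include <stdio.h>
        #include <infiniband/verbs.h>

        int main(void)
        {
            int num = 0;
            struct ibv_device **devs = ibv_get_device_list(&num);

            if (devs == NULL) {
                perror("ibv_get_device_list");
                return 1;
            }
            for (int i = 0; i < num; i++)
                printf("RDMA device %d: %s\n", i, ibv_get_device_name(devs[i]));
            ibv_free_device_list(devs);
            return 0;
        }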


    Machine Learning and Big Data Environments

    Data analytics has become an essential function within many enterprise data centers, clouds, and hyperscale platforms. Machine learning (ML) relies on especially high throughput and low latency to train deep neural networks and improve recognition and classification accuracy. With its 200Gb/s throughput, ConnectX-6 is an excellent solution to provide ML applications with the levels of performance and scalability that they require.


    Security Including Block-Level Encryption

    ConnectX-6 block-level encryption offers a critical innovation to network security. As data in transit is stored or retrieved, it undergoes encryption and decryption. ConnectX-6 hardware offloads the IEEE AES-XTS encryption/decryption from the CPU, saving latency and CPU utilization. It also guarantees protection for users sharing the same resources through the use of dedicated encryption keys.

    By performing block storage encryption in the adapter, ConnectX-6 eliminates the need for self-encrypted disks. This gives customers the freedom to choose their preferred storage device, including byte addressable and NVDIMM devices that traditionally do not provide encryption. Moreover, ConnectX-6 can offer Federal Information Processing Standards (FIPS) compliance.
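
    The cipher mode being offloaded here is standard AES-XTS, the same mode used for software block encryption. Purely as an illustration of that mode (this is not the card's offload interface), a minimal sketch using OpenSSL's EVP API encrypts one 512-byte logical block with a per-block tweak:

        /* Illustrative sketch of AES-256-XTS in software with OpenSSL's EVP API;
         * the adapter performs the equivalent cipher operation in hardware.
         * XTS-AES-256 uses a 64-byte key (two distinct 256-bit keys) and a
         * 16-byte tweak, normally derived from the logical block number.
         * Build with: cc xts_demo.c -lcrypto */
        #include <stdio.h>
        #include <openssl/evp.h>

        int main(void)
        {
            unsigned char key[64], tweak[16] = {0};
            unsigned char plain[512] = {0}, cipher[512];
            int outlen = 0, ok = 0;

            for (int i = 0; i < 64; i++)          /* placeholder key material; the  */
                key[i] = (unsigned char)(i + 1);  /* two 256-bit halves must differ */

            EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
            if (ctx != NULL)
                ok = EVP_EncryptInit_ex(ctx, EVP_aes_256_xts(), NULL, key, tweak) == 1
                  && EVP_EncryptUpdate(ctx, cipher, &outlen, plain, sizeof plain) == 1;
            if (ok)
                printf("Encrypted %d bytes with AES-256-XTS\n", outlen);
            else
                fprintf(stderr, "AES-XTS encryption failed\n");
            EVP_CIPHER_CTX_free(ctx);   /* safe to call with NULL */
            return ok ? 0 : 1;
        }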


    Bring NVMe-oF to Storage Environments

    NVMe storage devices are gaining momentum, offering very fast access to storage media. The evolving NVMe over Fabrics (NVMe-oF) protocol leverages RDMA connectivity to remotely access NVMe storage devices efficiently, while keeping the end-to-end NVMe model at lowest latency. With its NVMe-oF target and initiator offloads, ConnectX-6 brings further optimization to NVMe-oF, enhancing CPU utilization and scalability.
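
    For orientation, an NVMe-oF initiator on Linux is normally attached with the nvme-cli tool (nvme connect -t rdma ...), which writes an option string to the kernel's nvme-fabrics control device. The hedged sketch below assumes that interface; the address and subsystem NQN are placeholders, and the nvme-rdma module must be loaded.

        /* Hedged sketch: ask the Linux nvme-fabrics layer to connect to an
         * NVMe-oF target over RDMA. Placeholder address and NQN; in practice
         * this is usually done with "nvme connect" from nvme-cli. */
        #include <stdio.h>

        int main(void)
        {
            const char *opts =
                "transport=rdma,traddr=192.0.2.10,trsvcid=4420,"
                "nqn=nqn.2014-08.org.example:subsystem1";
            FILE *f = fopen("/dev/nvme-fabrics", "w");

            if (f == NULL) {
                perror("open /dev/nvme-fabrics (is nvme-rdma loaded?)");
                return 1;
            }
            if (fprintf(f, "%s", opts) < 0)
                perror("write connect options");
            fclose(f);
            return 0;
        }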


    Portfolio of Smart Adapters

    ConnectX-6 is available in two form factors: low-profile stand-up PCIe and Open Compute Project (OCP) Spec 3.0 cards with QSFP connectors. Single-port, HDR, stand-up PCIe adapters are available based on either ConnectX-6 or ConnectX-6 DE (ConnectX-6 Dx enhanced for HPC applications).

    In addition, specific PCIe stand-up cards are available with a cold plate for insertion into liquid-cooled Intel Server System D50TNP platforms.


    Socket Direct

    ConnectX-6 also provides options for NVIDIA Socket Direct™ configurations, which improve the performance of multi-socket servers by enabling each CPU to access the network through its own dedicated PCIe interface. This lets data bypass the QPI (UPI) link and the other CPU, improving latency, performance, and CPU utilization.

    Socket Direct also enables NVIDIA GPUDirect® RDMA for all CPU/GPU pairs by ensuring that GPUs are linked to the CPUs closest to the adapter card. Socket Direct enables Intel® DDIO optimization on both sockets by creating a direct connection between the sockets and the adapter card.

    Socket Direct technology is enabled by a main card housing the ConnectX-6 and an auxiliary PCIe card bringing in the remaining PCIe lanes. The ConnectX-6 Socket Direct card is installed into two PCIe x16 slots and connected using a harness. The two PCIe x16 slots may also be connected to the same CPU. In this case, the main advantage of the technology lies in delivering 200Gb/s to servers with PCIe Gen3-only support.


    Host Management

    Host Management includes NC-SI over MCTP over SMBus and MCTP over PCIe to the Baseboard Management Controller (BMC) interface, as well as PLDM for Monitor and Control (DSP0248) and PLDM for Firmware Update (DSP0267).


    Broad Software Support

    All ConnectX adapters are supported by a full suite of drivers for major Linux distributions, as well as Microsoft® Windows® Server and VMware vSphere®.

    HPC software libraries supported include HPC-X, OpenMPI, MVAPICH, MPICH, OpenSHMEM, PGAS, and varied commercial packages.
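
    Those libraries are what applications actually link against; traffic between ranks on different hosts then rides the adapter's InfiniBand or RoCE fabric. A minimal MPI sketch, buildable with any of the implementations listed above:

        /* Minimal MPI sketch: each rank reports itself; point-to-point and
         * collective traffic between ranks on different hosts would traverse
         * the adapter's fabric.
         * Build and run with: mpicc hello.c -o hello && mpirun -np 2 ./hello */
        #include <stdio.h>
        #include <mpi.h>

        int main(int argc, char **argv)
        {
            int rank = 0, size = 0;

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);
            printf("rank %d of %d\n", rank, size);
            MPI_Finalize();
            return 0;
        }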

    Overlay Networks
    • RoCE over overlay networks
    • Stateless offloads for overlay network tunneling protocols
    • Hardware offload of encapsulation and decapsulation of VXLAN, NVGRE, and Geneve overlay networks

    Storage Offloads
    • Block-level encryption: XTS-AES 256/512-bit key
    • NVMe over Fabrics offloads for target machine
    • T10-DIF signature handover operation at wire speed, for ingress and egress traffic
    • Storage protocols: SRP, iSER, NFS RDMA, SMB Direct, and NVMe-oF

     

    InfiniBand
    • 200Gb/s and lower rates
    • IBTA Specification 1.3 compliant
    • RDMA, send/receive semantics
    • Hardware-based congestion control
    • Atomic operations
    • 16 million I/O channels
    • 256 to 4Kbyte MTU, 2Gbyte messages
    • 8 virtual lanes + VL15

    Remote Boot
    • Remote boot over InfiniBand
    • Remote boot over Ethernet
    • Remote boot over iSCSI
    • Unified Extensible Firmware Interface (UEFI)
    • Preboot Execution Environment (PXE)

     

    Hardware-Based I/O Virtualization
    • Single Root IOV (SR-IOV)
    • Address translation and protection
    • VMware NetQueue support
    • SR-IOV: Up to 1K virtual functions
    • SR-IOV: Up to 8 physical functions per host
    • Virtualization hierarchies (e.g., NPAR)
      - Virtualizing physical functions on a physical port
      - SR-IOV on every physical function
    • Configurable and user-programmable QoS
    • Guaranteed QoS for VMs

    Management and Control
    • NC-SI, MCTP over SMBus and MCTP over PCIe—Baseboard Management Controller interface
    • PLDM for Monitor and Control DSP0248
    • PLDM for Firmware Update DSP0267
    • SDN management interface for managing the eSwitch
    • I2C interface for device control and configuration
    • General Purpose I/O pins
    • SPI interface to flash
    • JTAG IEEE 1149.1 and IEEE 1149.6

     

    Enhanced Features
    • Hardware-based reliable transport
    • Collective operations offloads
    • Vector collective operations offloads
    • NVIDIA PeerDirect® RDMA (a.k.a. NVIDIA GPUDirect) communication acceleration
    • 64b/66b encoding
    • Enhanced atomic operations
    • Advanced memory mapping support, allowing user mode registration and remapping of memory (UMR)
    • Extended Reliable Connected transport (XRC)
    • Dynamically Connected Transport (DCT)
    • On demand paging (ODP)
    • MPI tag matching
    • Rendezvous protocol offload
    • Out-of-order RDMA supporting Adaptive Routing
    • Burst buffer offload
    • In-Network Memory registration-free RDMA memory access

    CPU Offloads
    • RDMA over Converged Ethernet (RoCE)
    • TCP/UDP/IP stateless offload
    • LSO, LRO, checksum offload
    • RSS (also on encapsulated packet), TSS, HDS, VLAN and MPLS tag insertion/stripping, receive flow steering
    • Data plane development kit (DPDK) for kernel bypass applications (see the sketch after this list)
    • Open vSwitch (OVS) offload using ASAP2
    • Flexible match-action flow tables
    • Tunneling encapsulation/decapsulation
    • Intelligent interrupt coalescence
    • Header rewrite supporting hardware offload of NAT router
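
    As referenced in the DPDK item above, kernel-bypass applications initialise DPDK's Environment Abstraction Layer and then drive the NIC's queues directly from user space (ConnectX adapters are handled by DPDK's mlx5 poll-mode driver). A minimal, hedged initialisation sketch, with port configuration and the polling loop omitted:

        /* Minimal DPDK sketch: initialise the EAL and count the Ethernet
         * devices it has bound. Assumes a working DPDK installation and the
         * usual runtime prerequisites (e.g. hugepages).
         * Build with flags from: pkg-config --cflags --libs libdpdk */
        #include <stdio.h>
        #include <rte_eal.h>
        #include <rte_ethdev.h>

        int main(int argc, char **argv)
        {
            if (rte_eal_init(argc, argv) < 0) {
                fprintf(stderr, "rte_eal_init failed\n");
                return 1;
            }
            printf("DPDK sees %u Ethernet port(s)\n",
                   (unsigned)rte_eth_dev_count_avail());
            rte_eal_cleanup();
            return 0;
        }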

     

    Full specification and details can be found in the Product Datasheet PDF file
