
[MCX413A-BCAT] Mellanox ConnectX-4 EN Network Interface Card, 40/56GbE Single-port QSFP28, PCIe3.0 x8, tall bracket

MANUFACTURER: Mellanox
P/N (Product Code): MCX413A-BCAT

Warranty: 5-year product warranty

Special pricing available for contractors, project work, and system integrators (SI)


Genuine Mellanox MCX413A-BCAT ConnectX-4 EN Network Interface Card, 40/56GbE Single-port QSFP28, PCIe3.0 x8, tall bracket

 

Mellanox ConnectX®-4 EN network controller cards with 100Gb/s Ethernet connectivity provide a high-performance, flexible solution for Web 2.0, cloud, data analytics, database, and storage platforms. With the exponential growth of data being shared and stored by applications and social networks, the need for high-speed, high-performance compute and storage data centers is skyrocketing. ConnectX-4 EN delivers the performance that demanding data centers, public and private clouds, Web 2.0 and Big Data applications, and storage systems require, enabling today's corporations to meet the demands of the data explosion.

 

ConnectX-4 EN provides an unmatched combination of 100Gb/s bandwidth in a single port, low latency, and specific hardware offloads, addressing both today’s and the next generation’s compute and storage data center demands.

 

I/O Virtualization

ConnectX-4 EN SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server. I/O virtualization with ConnectX-4 EN gives data center administrators better server utilization while reducing cost, power, and cable complexity, allowing more Virtual Machines and more tenants on the same hardware.
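On Linux, SR-IOV virtual functions are conventionally enabled through the kernel's standard sysfs interface. The sketch below illustrates that contract only; the device directory and VF count are placeholder assumptions, and a real ConnectX-4 deployment would follow Mellanox's driver documentation.

```python
# Minimal sketch, assuming the standard Linux sysfs SR-IOV interface.
# The device directory (e.g. /sys/class/net/ens1f0/device) is a placeholder.
from pathlib import Path

def enable_sriov(device_dir: str, num_vfs: int) -> int:
    """Request num_vfs virtual functions, capped at the device's maximum.

    Returns the VF count actually written to sriov_numvfs.
    """
    dev = Path(device_dir)
    total = int((dev / "sriov_totalvfs").read_text())
    requested = min(num_vfs, total)
    # The kernel rejects changing a non-zero VF count directly,
    # so reset to 0 before writing the new value.
    (dev / "sriov_numvfs").write_text("0")
    (dev / "sriov_numvfs").write_text(str(requested))
    return requested
```

On real hardware this would run as root against the physical function's device directory; here it only demonstrates the sysfs read/cap/write sequence.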

 

Overlay Networks

In order to better scale their networks, data center operators often create overlay networks that carry traffic from individual virtual machines over logical tunnels in encapsulated formats such as NVGRE and VXLAN. While this solves network scalability issues, it hides the TCP packet from the hardware offloading engines, placing higher loads on the host CPU. ConnectX-4 effectively addresses this by providing advanced NVGRE, VXLAN, and GENEVE hardware offloading engines that encapsulate and decapsulate the overlay protocol headers, enabling the traditional offloads to be performed on the encapsulated traffic. With ConnectX-4, data center operators can achieve native performance in the new network architecture.
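Whether these tunnel offloads are active on a given Linux host can be checked by reading the feature list that `ethtool -k <iface>` prints. The helper below is a small illustrative parser for that output format; the sample text in the test is a stand-in, not captured from a real adapter.

```python
# Minimal sketch: parse the "feature: on/off" lines that `ethtool -k`
# emits, so tunnel offloads such as tx-udp_tnl-segmentation can be
# checked programmatically. Sample input stands in for live output.
def parse_offloads(ethtool_output: str) -> dict:
    """Map each ethtool feature name to True ('on') or False ('off')."""
    features = {}
    for line in ethtool_output.splitlines():
        if ":" not in line:
            continue
        name, _, state = line.partition(":")
        tokens = state.split()
        # Lines like "rx-checksumming: on [fixed]" carry a trailing flag.
        first = tokens[0] if tokens else ""
        if first in ("on", "off"):
            features[name.strip()] = (first == "on")
    return features
```

Typical use would be `parse_offloads(output).get("tx-udp_tnl-segmentation")` to confirm UDP tunnel segmentation offload is enabled.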

 

RDMA over Converged Ethernet (RoCE)

ConnectX-4 EN supports the RoCE specifications, delivering low latency and high performance over Ethernet networks. Leveraging data center bridging (DCB) capabilities as well as ConnectX-4 EN advanced congestion control hardware mechanisms, RoCE provides efficient low-latency RDMA services over Layer 2 and Layer 3 networks.

 

Mellanox PeerDirect

Mellanox PeerDirect® communication provides high efficiency RDMA access by eliminating unnecessary internal data copies between components on the PCIe bus (for example, from GPU to CPU), and therefore significantly reduces application run time. ConnectX-4 advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.

 

Storage Acceleration

Storage applications will see improved performance with the high bandwidth that ConnectX-4 EN delivers. Moreover, standard block and file access protocols can leverage RoCE for high-performance storage access. A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks.

 

Signature Handover

ConnectX-4 EN supports hardware checking of T10 Data Integrity Field/Protection Information (T10-DIF/PI), reducing the CPU overhead and accelerating delivery of data to the application. Signature handover is handled by the adapter on ingress and/or egress packets, reducing the load on the CPU at the initiator and/or target machines.
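The guard field of a T10-DIF tuple is a CRC-16 over the data block, computed with the T10-DIF polynomial 0x8BB7 (zero initial value, no bit reflection). The adapter computes and verifies this in hardware; the pure-Python sketch below only illustrates the checksum itself.

```python
# Minimal sketch of the T10-DIF guard tag: CRC-16 with polynomial
# 0x8BB7, init 0, no reflection. Hardware does this at line rate;
# this bit-by-bit version is for illustration only.
def crc16_t10dif(data: bytes, crc: int = 0) -> int:
    """Return the CRC-16/T10-DIF guard value for `data`."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x8BB7) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc
```

For a 512-byte sector, `crc16_t10dif(sector)` would yield the 16-bit guard tag that the initiator, adapter, and target can each verify.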

 

Host Management

Mellanox host management and control capabilities include NC-SI over MCTP over SMBus and MCTP over PCIe to the Baseboard Management Controller (BMC) interface, as well as PLDM for Monitoring and Control (DSP0248) and PLDM for Firmware Update (DSP0267).

 

Software Support

All Mellanox adapter cards are supported by Windows, Linux distributions, VMware, FreeBSD, and Citrix XenServer. ConnectX-4 EN adapters support OpenFabrics-based RDMA protocols and software and are compatible with configuration and management tools from OEMs and operating system vendors.

 

NEW FEATURES

100Gb/s Ethernet per port
1/10/25/40/50/56/100 Gb/s speeds
Single and dual-port options available
T10-DIF Signature Handover
CPU offloading of transport operations
Application offloading
Mellanox PeerDirect communication acceleration
Hardware offloads for NVGRE, VXLAN and GENEVE encapsulated traffic
End-to-end QoS and congestion control
Hardware-based I/O virtualization
RoHS compliant
ODCC compatible

 

BENEFITS

High performance silicon for applications requiring high bandwidth, low latency and high message rate
World-class cluster, network, and storage performance
Smart interconnect for x86, Power, Arm, and GPU-based compute and storage platforms
Cutting-edge performance in virtualized overlay networks (NVGRE and GENEVE)
Efficient I/O consolidation, lowering data center costs and complexity
Virtualization acceleration
Power efficiency
Scalability to tens-of-thousands of nodes
