The 8U SuperBlade® networking options include four different Ethernet modules. For simple Layer 2 switching at 1Gbps, the MBM-GEM-004 and MBM-GEM-001 switches offer a cost-effective connectivity option for 20-blade systems.
Access to 10-Gigabit Ethernet networks is provided by the Layer 2/3 1/10-Gbps Ethernet switches, the MBM-XEM-002 and MBM-XEM-001 (up to 20 blades).
For even faster connections, Supermicro offers two different connectivity options. A new InfiniBand switch based on EDR technology, the SBM-IBS-E3616, connects the blades to EDR (100G) InfiniBand networks. The other solution, the SBM-OPA-C4020, is based on Intel's Omni-Path Architecture and provides 100G connectivity to the servers.
All SuperBlade® networking options are hot-pluggable.
The 10-Gigabit Ethernet switch modules (part IDs MBM-XEM-001 and MBM-XEM-002) connect to the LAN interfaces onboard the blade servers. These Layer 2/Layer 3 Ethernet switching modules also have two internal Ethernet paths to the SuperBlade Chassis Management Module(s) (CMMs) to allow configuration and management of the switch. Offering advanced features such as link aggregation, VLANs, Spanning Tree, Access Control Lists, and Jumbo Frame support, the switches provide a connection between the Ethernet controllers integrated on the mainboard and external Ethernet systems.
The 1-Gigabit Ethernet switch modules (part IDs MBM-GEM-001 and MBM-GEM-004) likewise connect to the LAN interfaces onboard the blade servers. These Layer 2 Ethernet switching modules also have two internal Ethernet paths to the SuperBlade Chassis Management Module(s) (CMMs) to allow configuration and management of the switch.
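As a hedged illustration of the link-aggregation and Jumbo Frame features mentioned above: a blade server running Linux with systemd-networkd could bond its onboard interfaces toward the switch module roughly as sketched below. The interface names (eno1, eno2) and the MTU value are placeholder assumptions, not values from this document, and the matching LACP and jumbo-frame settings would also have to be enabled on the switch module itself.

```ini
# --- /etc/systemd/network/10-bond0.netdev ---
# Hypothetical LACP (802.3ad) bond across the blade's onboard NICs.
[NetDev]
Name=bond0
Kind=bond

[Bond]
Mode=802.3ad
TransmitHashPolicy=layer3+4

# --- /etc/systemd/network/20-bond0-slaves.network ---
# Enslave both onboard interfaces (names are placeholders).
[Match]
Name=eno1 eno2

[Network]
Bond=bond0

# --- /etc/systemd/network/30-bond0.network ---
# Address the bond; MTUBytes=9000 exercises the switch's Jumbo Frame support.
[Match]
Name=bond0

[Network]
DHCP=yes

[Link]
MTUBytes=9000
```

This is only a host-side sketch; the switch-side configuration (port-channel membership, VLAN assignment) is done through the switch's own management interface via the CMM.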
InfiniBand Switch Module
The InfiniBand switch is based on point-to-point, bi-directional serial links. It provides high-speed interconnectivity among the blade modules and to external InfiniBand peripherals, and is especially useful in clustered High-Performance Computing (HPC). The SBM-IBS-E3616 InfiniBand switch supports up to twenty internal and sixteen external connections (100G).
Omni-Path Architecture Switch Module
The Supermicro SBM-OPA-C4020 supports the 100Gbps Intel® Omni-Path Architecture (OPA), providing an HPC cluster solution with excellent bandwidth, latency, and message rate that is highly scalable and easily serviceable.
Supporting Omni-Path Architecture, the SBM-OPA-C4020 leverages the Intel Scalable System Framework (SSF) to address evolving demands across high-performance data analytics, machine learning, visualization, and traditional modeling and simulation workloads. Designed specifically for HPC, the SBM-OPA-C4020 offers 9.6Tb/s total fabric bandwidth and high scalability, supporting up to 27,648 nodes in a 2-tier configuration. The SBM-OPA-C4020 is designed to overcome the scaling challenges of large clusters. The enhancements include:
High Message Rate Throughput: the SBM-OPA-C4020 is designed to support high message rate traffic from each node through the fabric. This means the fabric can sustain the high bandwidth as well as the high message rate throughput associated with the ever-increasing processing power and core counts of Intel® processors.
100Gb EDR InfiniBand Switch (SBM-IBS-E3616):
- Internal ports: twenty EDR ports at 100Gbps
- External uplink ports: sixteen EDR ports with QSFP28 connectors
- Total switch bandwidth: 7.62Tbps (36-port)
100Gb Omni-Path Switch (SBM-OPA-C4020):
- Intel Omni-Path chipset
- Internal ports: twenty ports at 100Gbps
- External uplink ports: twenty-four 100Gbps ports with QSFP28 connectors
- Total switch bandwidth: 9.6Tbps (44-port)
For any blade to access the InfiniBand or Omni-Path module, it must have the corresponding mezzanine card installed on its mainboard: the AOC-IBH-X4ES supports the EDR InfiniBand switch and the AOC-OPA-WFR supports the Omni-Path switch.