The Cisco UCS X-Series Modular System starts with the X9508 chassis, designed for adaptability and hybrid cloud integration. Its midplane-free architecture lets front-loading compute nodes mate directly with rear I/O modules, carrying PCIe Gen 4 today with headroom for future protocols.
The 7RU chassis houses up to 8 flexible slots for compute nodes, GPUs, disk storage, and NVMe resources. Two intelligent fabric modules connect to Cisco UCS 6400/6536 Fabric Interconnects, while X-Fabric Technology facilitates modular updates for evolving technologies. Six 2800W PSUs provide efficient power delivery with multiple redundancy options. Advanced thermal management supports future liquid cooling for high-power processors.
The Cisco UCS X-Series Direct adds self-contained integration with internal Cisco UCS Fabric Interconnects 9108, enabling unified fabric connectivity and management through Cisco Intersight or UCS Manager.
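To make the building blocks above easier to picture, here is a minimal Python sketch (not a Cisco tool; the class and field names are invented, and the count of two X-Fabric bays is an assumption not spelled out in this overview) that models the chassis-level resources described here.

```python
from dataclasses import dataclass, field

# Illustrative model of the X9508 building blocks described above.
# Class and field names are invented for this sketch; this is not a Cisco API.

@dataclass
class X9508Chassis:
    compute_slots: int = 8          # front-loading slots for compute/PCIe nodes
    fabric_module_bays: int = 2     # top rear: IFMs or integrated FI 9108
    x_fabric_bays: int = 2          # bottom rear: X-Fabric modules (assumed count)
    psu_bays: int = 6               # 2800 W power supplies, N/N+1/N+2/N+N capable
    rack_units: int = 7
    nodes: list = field(default_factory=list)

    def add_node(self, name: str, slots: int = 1) -> None:
        used = sum(s for _, s in self.nodes)
        if used + slots > self.compute_slots:
            raise ValueError("X9508 has only 8 node slots")
        self.nodes.append((name, slots))

chassis = X9508Chassis()
for _ in range(4):
    chassis.add_node("X210c M7", slots=1)   # half the chassis: four 1-slot nodes
print(chassis)
```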
1U
(2) Intel 4th or 5th Gen Processors
(32) DDR5 DIMMs
Up to (6) SAS/SATA/NVMe Drives
Up to (2) GPUs
Up to (8) Per x9508 Enclosure
Cisco x210c M7 Drive Options
UCSX-X210C-PT4F-D - Up to 6 NVMe drives
UCSX-X210C-RAIDF-D - (6) SAS/SATA/NVMe
UCSX-X210C-GPUFM-D - (2) NVMe + (2) SW GPUs
A maximum of 4 U.2 NVMe drives or 6 U.3 NVMe drives can be ordered with the RAID controller.
Cisco UCS x210c M7 Spec Sheet
Cisco UCS x210c M7 Data Sheet
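To make the drive rules above concrete, here is an illustrative Python check (not a Cisco tool; the option strings mirror the part numbers listed, and the function name is invented) that encodes the front-mezzanine limits, including the U.2/U.3 NVMe cap with the RAID controller.

```python
# Illustrative check of the X210c M7 front-mezzanine drive limits listed above.
# Not a Cisco tool; the rules are transcribed from the option list for this sketch.

LIMITS = {
    "UCSX-X210C-PT4F-D": {"nvme": 6, "sas_sata": 0, "gpus": 0},   # pass-through: up to 6 NVMe
    "UCSX-X210C-RAIDF-D": {"nvme": 6, "sas_sata": 6, "gpus": 0},  # RAID: 6 SAS/SATA/NVMe
    "UCSX-X210C-GPUFM-D": {"nvme": 2, "sas_sata": 0, "gpus": 2},  # GPU front mezz: 2 NVMe + 2 GPUs
}

def validate(option, nvme=0, sas_sata=0, gpus=0, nvme_form_factor="U.3"):
    lim = LIMITS[option]
    errors = []
    if nvme > lim["nvme"] or sas_sata > lim["sas_sata"] or gpus > lim["gpus"]:
        errors.append(f"exceeds limits for {option}: {lim}")
    # With the RAID controller, at most 4 U.2 NVMe or 6 U.3 NVMe drives.
    if option == "UCSX-X210C-RAIDF-D" and nvme_form_factor == "U.2" and nvme > 4:
        errors.append("RAID controller supports at most 4 U.2 NVMe drives")
    if sas_sata + nvme > 6:
        errors.append("X210c M7 holds at most 6 front drives")
    return errors or ["OK"]

print(validate("UCSX-X210C-RAIDF-D", nvme=6, nvme_form_factor="U.2"))  # flags the U.2 cap
print(validate("UCSX-X210C-PT4F-D", nvme=6))                            # OK
```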
2U
(4) Intel 4th or 5th Gen Processors
(64) DDR5 DIMMs
Up to (6) SAS/SATA/NVMe Drives
Up to (2) GPUs
Up to (4) Per x9508 Enclosure
Cisco x410c M7 Drive Options
UCSX-X10C-PT4F-D - Up to 6 NVMe drives
UCSX-X10C-RAIDF-D - Up to 6 SAS/SATA or 4 NVMe drives
Cisco UCS x410c M7 Spec Sheet
Cisco UCS x410c M7 Data Sheet
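As a quick worked example of chassis-level density from the figures above (the 64 GB DIMM size is an illustrative assumption, not part of the listing):

```python
# Rough chassis-level math for the X410c M7 figures above.
# The 64 GB DIMM size is an illustrative assumption, not from the listing.

DIMMS_PER_NODE = 64
SOCKETS_PER_NODE = 4
NODES_PER_CHASSIS = 4          # up to (4) X410c M7 per X9508 enclosure
ASSUMED_DIMM_GB = 64

print("Sockets per chassis:", SOCKETS_PER_NODE * NODES_PER_CHASSIS)      # 16
print("DIMM slots per chassis:", DIMMS_PER_NODE * NODES_PER_CHASSIS)     # 256
print("Memory per node (GB):", DIMMS_PER_NODE * ASSUMED_DIMM_GB)         # 4096
print("Memory per chassis (TB):",
      DIMMS_PER_NODE * NODES_PER_CHASSIS * ASSUMED_DIMM_GB / 1024)       # 16.0
```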
1U
Up to (4) GPUs
Up to (4) Per x9508
Requires:
Paired with an x210c M7 Compute Node
(1) X-Fabric module installed per x440p M7
Mezzanine risers must match in paired nodes.
Cisco UCS x440p M7 Spec Sheet
Cisco UCS x440p M7 Data Sheet
Cisco UCS x440p M7 Service Guide
The UCSX-V4-PCIME or UCSX-V4-Q25GME is required when an x210c compute node is paired with an x440p PCIe node.
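The pairing requirements above can be summarized as a small illustrative rule check (function and argument names are invented; this is not a Cisco validator):

```python
# Illustrative encoding of the X440p M7 pairing rules listed above.
# Function and field names are invented for this sketch.

def check_x440p_config(paired_compute_node, x_fabric_modules_installed,
                       mezz_riser_a, mezz_riser_b, mezzanine_card):
    problems = []
    if paired_compute_node != "X210c M7":
        problems.append("X440p M7 must be paired with an X210c M7 compute node")
    if x_fabric_modules_installed < 1:
        problems.append("at least one X-Fabric module is required per X440p M7")
    if mezz_riser_a != mezz_riser_b:
        problems.append("mezzanine risers must match in paired nodes")
    if mezzanine_card not in ("UCSX-V4-PCIME", "UCSX-V4-Q25GME"):
        problems.append("UCSX-V4-PCIME or UCSX-V4-Q25GME is required on the compute node")
    return problems or ["configuration looks consistent"]

print(check_x440p_config("X210c M7", 1, "Riser-A", "Riser-A", "UCSX-V4-PCIME"))
print(check_x440p_config("X410c M7", 0, "Riser-A", "Riser-B", None))
```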
Choose two fabric modules of the same type.
You cannot mix IFMs and integrated Fabric Interconnects in the same chassis.
8x 25-Gbps SFP28 ports
Up to 50 Gbps of unified fabric connectivity per compute node with two IFMs.
8x 100-Gbps QSFP28 ports
Up to 200 Gbps of unified fabric connectivity per compute node with two IFMs.
8 ports, 1/10/25/40/100 Gbps
Ethernet, FCoE, and Fibre Channel
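The fabric options above boil down to a small lookup. The sketch below (illustrative only, not a Cisco sizing tool; the dictionary keys and figures are transcribed from the list) also encodes the same-type rule:

```python
# Illustrative lookup of the fabric module options listed above.
# Figures are transcribed from the list; this is not a Cisco sizing tool.

FABRIC_OPTIONS = {
    "UCS 9108-25G IFM":  {"ports": "8x 25-Gbps SFP28",
                          "per_node_gbps_with_two_modules": 50},
    "UCS 9108-100G IFM": {"ports": "8x 100-Gbps QSFP28",
                          "per_node_gbps_with_two_modules": 200},
    "UCS FI 9108 100G":  {"ports": "8 ports, 1/10/25/40/100 Gbps "
                                   "(Ethernet, FCoE, and Fibre Channel)"},
}

def choose_fabric(module_a: str, module_b: str) -> dict:
    # Both rear fabric bays must hold the same type; IFMs and integrated FIs cannot mix.
    if module_a != module_b:
        raise ValueError("Choose two fabric modules of the same type")
    return FABRIC_OPTIONS[module_a]

print(choose_fabric("UCS 9108-100G IFM", "UCS 9108-100G IFM"))
```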
Up to two Intelligent Fabric Modules (IFMs) serve as line cards, handling data multiplexing, chassis resource management, and compute node communication. Installed as a pair, they provide redundancy and failover.
Each IFM features 8x SFP28 (25 Gbps) or 8x QSFP28 (100 Gbps) connectors, linking compute nodes to the fabric interconnects. Compute nodes connect to the IFMs through their mLOM VICs via orthogonal connectors. Supported options are the UCS 9108-25G IFM, the UCS 9108-100G IFM, or the integrated Fabric Interconnect 9108 100G.
The Cisco UCS Fabric Interconnect 9108 100G is a high-performance switch offering up to 1.6 Tbps throughput with 6x 40/100-Gbps Ethernet ports and 2 unified ports supporting Ethernet or 8 Fibre Channel ports (8/16/32-Gbps). All ports support FCoE, and breakout options enable 10/25-Gbps Ethernet or 1-Gbps Ethernet.
It provides eight 100G or thirty-two 25G backplane connections to X-Series compute nodes, depending on the VIC used. Additional features include a network management port, console port, and USB port for configuration.
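On the backplane side, the connection count scales with the VIC speed. The sketch below (illustrative; the function name is invented) simply restates that arithmetic:

```python
# Backplane connections of the integrated FI 9108 100G, as described above:
# eight 100G or thirty-two 25G connections to compute nodes, depending on the VIC.
# Illustrative sketch only.

def backplane_links(vic_speed_gbps: int) -> tuple[int, int]:
    """Return (link count, link speed in Gbps) toward the compute nodes."""
    if vic_speed_gbps == 100:
        return 8, 100    # eight 100G backplane connections
    if vic_speed_gbps == 25:
        return 32, 25    # thirty-two 25G backplane connections
    raise ValueError("VIC backplane speed must be 25 or 100 Gbps")

for speed in (100, 25):
    links, gbps = backplane_links(speed)
    print(f"{links} x {gbps}G backplane links = {links * gbps} Gbps aggregate")
```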
The UCSX-V4-PCIME or UCSX-V4-Q25GME is required when a compute node is paired with a PCIe node
1. The VIC 15422 works only with the VIC 15420 mLOM
When the X210c Compute Node is installed, the mLOM card connects directly to the Fabric Module (IFM or FI) at the top rear of the chassis, enabling networking and management traffic. For the GPU version of the X210c, the PCIe Mezzanine card must also be installed to provide the additional PCIe lanes required to support the GPUs and NVMe drives. This configuration ensures sufficient PCIe bandwidth for the compute-intensive and storage needs of the GPU version while maintaining connectivity to the Fabric Module for external networking.
If the X440p PCIe Node is installed, the X210c Compute Node requires a mezzanine card, which can be a PCI Mezzanine Card, VIC 14825, or VIC 15422. When using a VIC (14825 or 15422), a bridge card is required to connect the mLOM VIC to the Mezzanine VIC, enabling proper PCIe connectivity.
In this configuration, 2 PCIe lanes are allocated to the X-Fabric modules at the bottom of the chassis to connect to the X440p, while the remaining PCIe lanes are routed to the Fabric Module (IFM or FI) for external networking. When the PCI Mezzanine Card is installed instead, no bridge card is needed, but the PCIe lane allocation remains the same: 2 lanes to the X-Fabric and the rest to the IFM or FI. This arrangement provides connectivity for both internal PCIe resources and external networking.
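The bridge-card and lane-allocation rules in the last two paragraphs can be summarized as follows (illustrative sketch; the function name is invented and the lane split is taken from the description above):

```python
# Illustrative summary of the mezzanine/bridge-card rules described above.
# Component names mirror the text; this is not a Cisco configuration API.

def mezzanine_requirements(mezz_card: str, x440p_installed: bool) -> dict:
    vic_mezz = mezz_card in ("VIC 14825", "VIC 15422")
    reqs = {
        # A bridge card links the mLOM VIC to a mezzanine VIC; the plain
        # PCIe mezzanine card needs no bridge.
        "bridge_card_required": vic_mezz,
        # Per the description: 2 PCIe lanes go to the X-Fabric modules when an
        # X440p is attached; the rest are routed to the IFM or FI.
        "lanes_to_x_fabric": 2 if x440p_installed else 0,
    }
    if mezz_card == "VIC 15422":
        # Per the note above: the VIC 15422 works only with the VIC 15420 mLOM.
        reqs["required_mlom"] = "VIC 15420"
    return reqs

print(mezzanine_requirements("VIC 15422", x440p_installed=True))
print(mezzanine_requirements("PCI Mezzanine Card", x440p_installed=True))
```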
The X9508 chassis accommodates up to six power supplies. The six dual feed power supplies provide an overall chassis power capability of greater than 9000 W, and can be configured as N, N+1, N+2, or N+N redundant.
Choose from 2 to 6 power supplies.
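As a final worked example, the redundancy options can be expressed as simple arithmetic on the 2800 W supplies (illustrative sketch; here n is the number of supplies needed to carry the load, and the function name is invented):

```python
# Illustrative power-redundancy math for the X9508's 2800 W supplies.
# "n" is the number of supplies required to carry the chassis load.

PSU_WATTS = 2800

def usable_capacity(installed: int, scheme: str, n: int) -> int:
    """Watts available to the chassis under a given redundancy scheme."""
    if not 2 <= installed <= 6:
        raise ValueError("the X9508 takes 2 to 6 power supplies")
    reserved = {"N": 0, "N+1": 1, "N+2": 2, "N+N": n}[scheme]
    if n + reserved > installed:
        raise ValueError("not enough supplies installed for this scheme")
    return n * PSU_WATTS

print(usable_capacity(installed=6, scheme="N+N", n=3))   # 8400 W usable, 3 held in reserve
print(usable_capacity(installed=4, scheme="N+1", n=3))   # 8400 W usable, 1 held in reserve
```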