The Cisco UCS X-Series Modular System starts with the X9508 chassis, designed for adaptability and hybrid cloud integration. Its midplane-free design connects front-loaded compute nodes directly to rear I/O modules, supporting PCIe Gen 4 today and future protocols as they emerge.
The 7RU chassis houses up to 8 flexible slots for compute nodes, GPUs, disk storage, and NVMe resources. Two intelligent fabric modules connect to Cisco UCS 6400/6536 Fabric Interconnects, while X-Fabric Technology facilitates modular updates for evolving technologies. Six 2800W PSUs provide efficient power delivery with multiple redundancy options. Advanced thermal management supports future liquid cooling for high-power processors.
The Cisco UCS X-Series Direct adds self-contained integration with internal Cisco UCS Fabric Interconnects 9108, enabling unified fabric connectivity and management through Cisco Intersight or UCS Manager.
Need help with the configuration? Contact us today!
1U
(2) 3rd Gen Intel Xeon Scalable Processors
(32) DDR4 DIMMs
Up to (6) SAS/SATA/NVMe Drives
Up to (2) GPUs
Up to (8) Per x9508 Enclosure
Cisco x210c Node Options
UCSX-X10C-PT4F - Up to 6 NVMe drives
UCSX-X10C-RAIDF - Up to 6 SAS/SATA or 4 NVMe drives
UCSX-X10C-GPUFM - Up to 2 NVIDIA T4 GPUs & 2 NVMe drives
Cisco UCS x210c Spec Sheet
Cisco UCS x210c Service Guide
1U
Up to (2) NVMe Drives
Up to (4) GPUs
Up to (4) Per x9508
Requires
Paired with (1) x210c Compute Node
(1) X-Fabric module must be installed per x440p
Mezzanine risers must match in paired nodes.
Cisco UCS x440p Spec Sheet
Cisco UCS x440p Data Sheet
Cisco UCS x440p Service Guide
The UCSX-V4-PCIME or UCSX-V4-Q25GME is required when an x210c compute node is paired with an x440p PCIe node.
1. The VIC 14825 mezzanine card works only with the mLOM VIC 14425.
2. The VIC 15422 mezzanine card works only with the mLOM VIC 15420.
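The two pairing rules above can be expressed as a simple lookup. This is an illustrative sketch, not a Cisco tool; the function and table names are ours:

```python
# Each mezzanine VIC is valid only with its matching mLOM VIC
# (per the pairing rules above).
VIC_PAIRINGS = {
    "VIC 14825": "mLOM 14425",
    "VIC 15422": "mLOM 15420",
}

def valid_pairing(mezz_vic: str, mlom: str) -> bool:
    """Return True if the mezzanine VIC matches its required mLOM."""
    return VIC_PAIRINGS.get(mezz_vic) == mlom

print(valid_pairing("VIC 14825", "mLOM 14425"))  # True
print(valid_pairing("VIC 15422", "mLOM 14425"))  # False
```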
When the X210c Compute Node is installed, the mLOM card connects directly to the Fabric Module (IFM or FI) at the top rear of the chassis, enabling networking and management traffic. For the GPU version of the X210c, the PCIe Mezzanine card must also be installed to provide the additional PCIe lanes required to support the GPUs and NVMe drives. This configuration ensures sufficient PCIe bandwidth for the compute-intensive and storage needs of the GPU version while maintaining connectivity to the Fabric Module for external networking.
If the X440p PCIe Node is installed, the X210c Compute Node requires a mezzanine card, which can be a PCI Mezzanine Card, VIC 14825, or VIC 15422. When using a VIC (14825 or 15422), a bridge card is required to connect the mLOM VIC to the Mezzanine VIC, enabling proper PCIe connectivity.
In this configuration, 2 PCIe lanes are allocated to the X-Fabric modules at the bottom of the chassis to connect to the X440p, while the remaining PCIe lanes are routed to the Fabric Module (IFM or FI) for external networking. Conversely, when the PCI Mezzanine Card is installed, no bridge card is needed, but the PCIe lane allocation remains the same: 2 lanes to the X-Fabric and the rest to the IFM or FI. This setup ensures efficient connectivity for both internal PCIe resources and external networking capabilities.
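The rules in the two paragraphs above can be summarized in a small helper. A hypothetical sketch (not a Cisco API; names are illustrative): a VIC mezzanine needs a bridge card to the mLOM VIC, the PCIe mezzanine card does not, and either way 2 lanes go to X-Fabric with the rest routed to the IFM/FI:

```python
def mezz_requirements(mezz_card: str) -> dict:
    """Connectivity requirements when an x440p PCIe node is attached."""
    is_vic = mezz_card in ("VIC 14825", "VIC 15422")
    return {
        # Bridge card links the mLOM VIC to the mezzanine VIC.
        "bridge_card_required": is_vic,
        # Lanes down to the X-Fabric modules for the x440p.
        "lanes_to_x_fabric": 2,
        # Remaining lanes go to the Fabric Module (IFM or FI).
        "lanes_to_fabric_module": "remaining",
    }

print(mezz_requirements("VIC 15422")["bridge_card_required"])   # True
print(mezz_requirements("PCIe Mezz")["bridge_card_required"])   # False
```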
The X9508 chassis accommodates up to six power supplies. The six dual feed power supplies provide an overall chassis power capability of greater than 9000 W, and can be configured as N, N+1, N+2, or N+N redundant.
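As a back-of-the-envelope illustration of those redundancy policies (assuming the 2800 W rating per PSU listed earlier; this is our simplification, not a Cisco sizing tool):

```python
PSU_WATTS = 2800  # per-PSU rating from the chassis description above

def usable_power(installed: int, policy: str) -> int:
    """Usable watts after reserving PSUs for the redundancy policy."""
    reserved = {"N": 0, "N+1": 1, "N+2": 2, "N+N": installed // 2}[policy]
    return (installed - reserved) * PSU_WATTS

print(usable_power(6, "N+1"))  # 5 PSUs active: 14000 W
print(usable_power(6, "N+N"))  # 3 PSUs active: 8400 W
```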
Choose from 2 to 6 power supplies.