ESC N8-E11

7U NVIDIA HGX H100 eight-GPU server with dual 4th Gen Intel Xeon Scalable processors, designed for large-scale AI and HPC, with 10+1 or 12+1 PCIe slots, 32 DIMMs, 10 NVMe, dual 10G LAN and an optional OCP 3.0 socket.
NICs and storage are placed close to the GPUs, with a GPU-to-NIC ratio of up to 1:1 and an NVIDIA GPUDirect Storage design that reduces read/write latency. The ESC N8-E11 is optimal for cultivating AI advancements for enterprise applications.

KEY FEATURES

⦿ 7U NVIDIA HGX H100 eight-GPU server with dual 4th Gen Intel Xeon Scalable processors, designed for large-scale AI and HPC and with up to 12+1 PCIe slots, 32 DIMM, 10 NVMe, dual 10Gb LAN and OCP 3.0 (optional)

⦿ Designed for generative AI with an optimized server design, plus an LLM-consulting service and AI-infrastructure development integrated by ASUS

⦿ NVIDIA® HGX H100 with eight Tensor Core GPUs for AI/HPC workloads

⦿ Fueled by dual 4th Gen Intel® Xeon® Scalable processors, supporting 350W TDP

⦿ Direct GPU-to-GPU interconnect via NVLink delivers 900GB/s bandwidth for efficient scaling (a quick link-status check is sketched after this list)

⦿ Energy-efficient design with independent CPU- and GPU-airflow tunnels for power savings

⦿ Network interface cards (NICs) and storage placed close to the GPUs with 1:1 GPU-to-NIC ratio and NVIDIA GPUDirect Storage design to reduce read/write latency

⦿ 4+2 or 3+3 3000W 80 PLUS® Titanium power supplies reduce operating costs

⦿ Onboard ASUS ASMB11-iKVM with ASPEED AST2600 controller for out-of-band management
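A minimal sketch of how the NVLink GPU-to-GPU interconnect described above could be verified from the host OS. It assumes NVIDIA drivers and the nvidia-ml-py (pynvml) Python package are installed on the server (an assumption, not part of the original spec) and simply counts the active NVLink links reported for each of the eight GPUs.

# Minimal NVLink status check; assumes the nvidia-ml-py (pynvml) package is installed.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        active = 0
        for link in range(pynvml.NVML_NVLINK_MAX_LINKS):
            try:
                if pynvml.nvmlDeviceGetNvLinkState(handle, link):
                    active += 1
            except pynvml.NVMLError:
                break  # no further NVLink links reported for this device
        print(f"GPU {i} ({name}): {active} active NVLink links")
finally:
    pynvml.nvmlShutdown()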