Next Generation NVIDIA Hopper Architecture

NVIDIA H100 Tensor Core GPU

GPU Servers Supporting H100 PCIe


Quad GPU 2U Server with 1x 4th Gen EPYC 9004

TS2-145302459

Starting at

$9,273.00

Highlights
CPU: 1x AMD EPYC 9004 CPU
GPU: Up to 4x NVIDIA H100 PCIe
MEM: Up to 3TB DDR5 ECC Memory
STO: 2x 2.5" NVMe, 2x 3.5" NVMe, 2x 3.5" SATA Hotswap
NET: 2x 1000BASE-T Ethernet

Quad GPU Dual Intel Xeon Scalable Workstation

TWS-154715021

Starting at

$16,222.80

Highlights
CPU: 2x 4th/5th Gen Intel Xeon Scalable CPU
GPU: Up to 4x NVIDIA H100 PCIe
MEM: Up to 4TB DDR5 ECC Memory
STO: 8x 3.5" NVMe Hotswap
NET: 2x 10GBASE-T Ethernet

8x GPU 4U Server with Dual 4th Gen EPYC 9004

TS4-169350989

Starting at

$22,577.50

Highlights
CPU: 2x AMD EPYC 9004 CPU
GPU: Up to 8x NVIDIA H100 PCIe
MEM: Up to 6TB DDR5 ECC Memory
STO: 8x 3.5" NVMe/SATA Hotswap
NET: 2x 10GBASE-T or 1000BASE-T Ethernet

NVIDIA HGX H100 SXM Solutions


NVIDIA HGX 4x H100 Dual Intel Xeon Scalable 4U

TS4-101818584

Starting at

$137,870.70

Highlights
CPU: 2x 4th/5th Gen Intel Xeon Scalable CPU
GPU: NVIDIA H100 HGX - 4x NVIDIA H100 SXM5 80GB + NVLink Switch System
MEM: Up to 8TB DDR5 ECC Memory
STO: 6x 2.5" NVMe & SATA Hotswap
NET: 2x 10GBASE-T or 25GbE SFP28

NVIDIA HGX 8x H100 Dual AMD EPYC 9004 8U

TS4-193475697

Starting at

$256,810.40

Highlights
CPU: 2x AMD EPYC 9004 CPU
GPU: NVIDIA H100 HGX - 8x NVIDIA H100 SXM5 80GB + NVLink Switch System
MEM: Up to 6TB DDR5 ECC Memory
STO: 24x 2.5" Hotswap (16x NVMe, 8x SATA)
NET: 8x PCIe 5 LP slots connected to PLX

NVIDIA HGX 8x H100 Dual Intel Xeon Scalable 8U

TS4-117847628

Starting at

$265,034.00

Highlights
CPU: 2x 4th/5th Gen Intel Xeon Scalable CPU
GPU: NVIDIA H100 HGX - 8x NVIDIA H100 SXM5 80GB + NVLink Switch System
MEM: Up to 8TB DDR5 ECC Memory
STO: 24x 2.5" Hotswap (16x NVMe, 8x SATA)
NET: 8x PCIe 5 LP slots connected to PLX

NVIDIA Chip Overview

9 Times the Performance per Chip

The NVIDIA H100 is the world's most advanced chip, delivering the largest generational performance leap to date. The H100 adds FP8 precision for AI training, mixed-precision processing, a Transformer Engine, and DPX instructions that accelerate dynamic programming algorithms. Together, these deliver up to 9x faster training and up to 30x faster inference.
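
For teams planning training workloads on these systems, here is a minimal sketch of FP8 training on an H100, assuming NVIDIA's Transformer Engine library is installed; the layer sizes and recipe settings below are illustrative placeholders, not part of any listed configuration.

    import torch
    import transformer_engine.pytorch as te
    from transformer_engine.common import recipe

    # Toy single-layer "model" built from a Transformer Engine module (placeholder sizes).
    model = te.Linear(1024, 1024, bias=True).cuda()
    inp = torch.randn(32, 1024, device="cuda")

    # Hybrid FP8 recipe: E4M3 for the forward pass, E5M2 for gradients.
    fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.HYBRID)

    # FP8 math is applied only inside this context, and only on GPUs that support it (Hopper and newer).
    with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
        out = model(inp)

    out.float().sum().backward()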

The NVIDIA H100 is the building block of this performance. Scale AI factories with many H100 SXM5 GPUs (found in DGX and HGX systems) for the highest tier of computing, or supercharge mainstream data centers with the H100 PCIe for easy implementation.

NVIDIA DGX H100 Pod

Scalable AI Factory with NVIDIA DGX

The NVIDIA DGX H100 is a scalable server designed to accelerate machine learning for HPC and AI workloads. Eight NVIDIA H100 SXM5 GPUs are connected with NVLink to operate as one giant GPU capable of 32 PFLOPS of AI performance, reducing an AI model's total time to market from weeks to days.
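
As a rough back-of-the-envelope check (an estimate on our part, not a figure from the DGX spec sheet), the 32 PFLOPS number follows from roughly 4 PFLOPS of FP8 Tensor Core throughput per H100 SXM5 with sparsity, and the "one giant GPU" behavior can be sanity-checked from PyTorch by confirming peer access between every GPU pair:

    import torch

    # On an NVLink-connected HGX/DGX H100 node, every GPU pair should report peer access.
    n = torch.cuda.device_count()
    for i in range(n):
        peers = [j for j in range(n) if j != i and torch.cuda.can_device_access_peer(i, j)]
        print(f"GPU {i}: peer access to GPUs {peers}")

    # Approximate aggregate FP8 throughput: ~3.96 PFLOPS per GPU (with sparsity) x 8 GPUs.
    print(f"Aggregate FP8: {8 * 3.96:.1f} PFLOPS")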

The NVIDIA DGX H100 is just the building block; the DGX SuperPOD™ combines 32 DGX H100 systems (256 H100 GPUs) to power the most complex AI systems. Contact our team for more information on the NVIDIA DGX H100.

Build your ideal system

Need a bit of help? Contact our sales engineers directly.