Next Generation NVIDIA Hopper Architecture

NVIDIA H100 Tensor Core GPU

Larger & More Performant Memory

NVIDIA DGX H200

The NVIDIA DGX H200 features everything we love about the DGX H100, now with over 1.1TB of upgraded HBM3e memory running at 4.8TB/s of bandwidth per GPU. The DGX H200 delivers out-of-this-world performance and exceptional scalability to every data center. Speed up AI and HPC workloads, from large-scale analytics to training and inference on huge LLMs.
Inquire about EDU discounts.

NVIDIA HGX H200 Platforms

NVIDIA HGX H200 (4 GPU) Dual Xeon Scalable | 3U

TS4-147416516

Starting at

$134,449.70

Highlights
CPU: 2x 4th/5th Gen Intel Xeon Scalable CPU
GPU: NVIDIA HGX H200 - 4x NVIDIA H200 SXM5 141GB HBM3e (564GB) + NVLink
MEM: 16x DDR5 ECC DIMMs (up to 4TB)
STO: 8x 2.5" NVMe & SATA Hot-swap
NET: 2x 10GBASE-T Ethernet & 6x PCIe 5.0 LP for NICs

NVIDIA HGX H200 (8 GPU) Dual Xeon Scalable | 6U

TS4-127744315

Starting at

$266,181.67

Highlights
CPU: 2x 4th/5th Gen Intel Xeon Scalable CPU
GPU: NVIDIA HGX H200 - 8x NVIDIA H200 SXM5 141GB HBM3e (1.1TB) + NVLink
MEM: 32x DDR5 ECC DIMMs (up to 8TB)
STO: 12x 2.5" Hot-swap Drives (8x NVMe, 4x SATA)
NET: 2x 1GBASE-T Ethernet & 8x PCIe 5.0 LP for NICs

NVIDIA HGX H200 (8 GPU) Dual EPYC 9004 | 8U

TS4-118380266

Starting at

$259,227.10

Highlights
CPU: 2x AMD EPYC 9004 CPU
GPU: NVIDIA HGX H200 - 8x H200 SXM5 141GB HBM3e (1.1TB) + NVLink
MEM: 24x DDR5 ECC DIMMs (up to 6TB)
STO: 24x 2.5" Hot-swap (16x NVMe, 8x SATA)
NET: 8x PCIe 5.0 LP for NICs & optional 2x 10GBASE-T

NVIDIA HGX H100 SXM Solutions

NVIDIA HGX 4x H100 Dual Intel Xeon Scalable | 4U

TS4-101818584

Starting at

$140,554.21

Highlights
CPU: 2x 4th/5th Gen Intel Xeon Scalable CPU
GPU: NVIDIA H100 HGX - 4x NVIDIA H100 SXM5 80GB + NVLink Switch System
MEM: Up to 8TB DDR5 ECC Memory
STO: 6x 2.5" NVMe & SATA Hot-swap
NET: 2x 10GBASE-T or 25GbE SFP28

NVIDIA HGX 8x H100 Dual AMD EPYC 9004 | 8U

TS4-193475697

Starting at

$259,155.60

Highlights
CPU: 2x AMD EPYC 9004 CPU
GPU: NVIDIA H100 HGX - 8x NVIDIA H100 SXM5 80GB + NVLink Switch System
MEM: Up to 6TB DDR5 ECC Memory
STO: 24x 2.5" Hot-swap (16x NVMe, 8x SATA)
NET: 8x PCIe 5.0 LP slots connected to PLX

NVIDIA HGX 8x H100 Dual Intel Xeon Scalable | 8U

TS4-117847628

Starting at

$268,215.20

Highlights
CPU: 2x 4th/5th Gen Intel Xeon Scalable CPU
GPU: NVIDIA H100 HGX - 8x NVIDIA H100 SXM5 80GB + NVLink Switch System
MEM: Up to 8TB DDR5 ECC Memory
STO: 24x 2.5" Hot-swap (16x NVMe, 8x SATA)
NET: 8x PCIe 5.0 LP slots connected to PLX

GPU Servers Supporting H100 PCIe

Quad GPU 2U Server with 1x 4th Gen EPYC 9004

TS2-145302459

Starting at

$9,673.40

Highlights
CPU: 1x AMD EPYC 9004 CPU
GPU: Up to 4x NVIDIA H100 PCIe
MEM: Up to 3TB DDR5 ECC Memory
STO: 2x 2.5" NVMe, 2x 3.5" NVMe, 2x 3.5" SATA Hot-swap
NET: 2x 1000BASE-T Ethernet

Quad GPU Dual Intel Xeon Scalable | Workstation

TWS-154715021

Starting at

$12,155.00

Highlights
CPU: 2x 4th/5th Gen Intel Xeon Scalable CPU
GPU: Up to 4x NVIDIA H100 PCIe
MEM: Up to 4TB DDR5 ECC Memory
STO: 8x 3.5" NVMe Hot-swap
NET: 2x 10GBASE-T Ethernet

8x GPU 4U Server with Dual 4th Gen EPYC 9004

TS4-169350989

Starting at

$23,251.80

Highlights
CPU: 2x AMD EPYC 9004 CPU
GPU: Up to 8x NVIDIA H100 PCIe
MEM: Up to 6TB DDR5 ECC Memory
STO: 8x 3.5" NVMe/SATA Hot-swap
NET: 2x 10GBASE-T Ethernet or 1000BASE-T LAN

NVIDIA Chip Overview

9 Times the Performance per Chip

The NVIDIA H100 is the world's most advanced chip, delivering the largest generational leap in performance to date. The H100 adds FP8 processing for AI training, mixed-precision computation driven by a new Transformer Engine, and DPX instructions that accelerate dynamic programming algorithms. The result is up to 9x faster training and 30x faster inference than the prior generation.
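
As an illustration of the FP8 and Transformer Engine capabilities described above, here is a minimal training-step sketch using NVIDIA's Transformer Engine library for PyTorch; the layer size, batch size, loss, and FP8 recipe settings are illustrative assumptions, not part of any configuration listed on this page.

```python
# Minimal sketch: FP8 mixed-precision training on an H100 with NVIDIA
# Transformer Engine (assumes the transformer_engine package is installed).
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# Delayed-scaling FP8 recipe; HYBRID uses E4M3 for forward, E5M2 for backward.
fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.HYBRID)

layer = te.Linear(4096, 4096, bias=True).cuda()   # TE layer with FP8 support
optimizer = torch.optim.AdamW(layer.parameters(), lr=1e-4)

x = torch.randn(8, 4096, device="cuda")
target = torch.randn(8, 4096, device="cuda")

# Matmuls inside this context run on the H100's FP8 Tensor Cores.
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    out = layer(x)
    loss = torch.nn.functional.mse_loss(out, target)

loss.backward()
optimizer.step()
```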

The NVIDIA H100 is the building block of this performance. Scale out AI factories with H100 SXM5 GPUs (found in DGX and HGX systems) for the highest levels of compute, or supercharge mainstream data centers with the H100 PCIe for easier deployment.

NVIDIA DGX H100 Pod

Scalable AI Factory with NVIDIA DGX

The NVIDIA DGX H100 is a scalable server designed to accelerate machine learning for HPC and AI workloads. Eight NVIDIA H100 SXM5 GPUs are linked with NVLink to operate as one giant GPU capable of 32 petaFLOPS of AI performance, reducing your AI model's total time to market from weeks to days.
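
The 32 petaFLOPS figure follows from roughly 4 petaFLOPS of FP8 throughput (with sparsity) per H100 SXM5, multiplied by eight GPUs. As a hedged sketch of how those eight GPUs are driven as a single pool from PyTorch, the snippet below runs an NCCL all-reduce, which rides NVLink/NVSwitch when available; the script name, tensor size, and launch command are illustrative assumptions.

```python
# Minimal sketch: verify that 8 GPUs in a DGX-class system can cooperate via
# an NCCL all-reduce. Launch with: torchrun --nproc_per_node=8 allreduce_check.py
import os
import torch
import torch.distributed as dist

def main():
    dist.init_process_group(backend="nccl")        # NCCL uses NVLink/NVSwitch
    local_rank = int(os.environ["LOCAL_RANK"])     # set by torchrun
    torch.cuda.set_device(local_rank)

    # Each GPU contributes a tensor filled with (rank + 1); the all-reduce sums
    # contributions from all 8 ranks, so every element should equal 1+2+...+8 = 36.
    x = torch.full((1024, 1024), float(dist.get_rank() + 1), device="cuda")
    dist.all_reduce(x, op=dist.ReduceOp.SUM)

    if dist.get_rank() == 0:
        print("all-reduce OK:", torch.allclose(x, torch.full_like(x, 36.0)))

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```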

The NVIDIA DGX H100 is just the building block: the DGX SuperPOD™ combines 32 DGX H100 systems (256 H100 GPUs) to power the most complex AI systems. Contact our team for more information on the NVIDIA DGX H100.

Build your ideal system

Need a bit of help? Contact our sales engineers directly.