Exxact AI Reference Architecture

EMLI AI POD Powered By NVIDIA A100

Turnkey Datacenter Infrastructure

EMLI AI POD is an optimized datacenter building block containing multiple NVIDIA powered servers, parallel storage, and networking for AI model training and inference using NVIDIA NGC software.

Colocation, Managed Services & Leasing

A variety of colocation, leasing, and managed services options are available for customers looking to quickly deploy AI infrastructure solutions while alleviating the headaches of purchasing and building out infrastructure.

Train Models in Record Time

Each EMLI POD Cluster is powered by multiple TensorEX servers featuring NVIDIA A100 Tensor Core GPUs and configured with Mellanox SHARP v2 for faster AI model training.

Flexible Storage Options Depending on Development and Deployment Environments

NVIDIA A100 Servers + NVMe RDMA Storage


  • Highly customizable, flexible, scalable, and cost-effective flash-tier storage
  • Optimized for analysis and AI training for multiple NVIDIA A100 powered server configurations
NVIDIA A100 Servers + Panasas ActiveStor Ultra

Compute + Linearly Scaling Storage Capacity

  • High-performance POSIX parallel filesystem w/ RDMA
  • Storage capacity and performance scale linearly
  • Fully automated online failure recovery delivers exceptional reliability
  • Simple GUI-based management
NVIDIA A100 Servers + BeeGFS Storage


  • High-performance POSIX parallel filesystem w/ RDMA
  • Complete storage architecture customizability
  • Optimized for redundancy, performance, and storage pool composition (NVMe vs. spinning disk)
Why EMLI AI POD?

  • AI Infrastructure "Building Blocks"

    EMLI AI POD offers expandable architectures per use case. Add additional compute servers or storage servers as needed.

  • EMLI AI POD Infrastructure is Agile

    Flexible AI infrastructure that adapts to the pace of the enterprise, using Multi-Instance GPU (MIG) to allocate GPU resources to individual workloads.

  • Mellanox SHARP v2 Enabled

    Mellanox In-Network Computing and network acceleration engines such as RDMA, GPUDirect®, and Scalable Hierarchical Aggregation and Reduction Protocol (SHARP)™ enable the highest performance and scalability.

  • Set-it-and-Forget-It Storage with Panasas PanFS Parallel File System

    Simplicity, speed, and reliability, with 1.2 PB to 10 PB+ of usable capacity and 16 GB/s to 128 GB/s+ sustained throughput.

  • Full Cluster Management & NVIDIA Maintained Docker Containers

    Includes NGC containers for AI and HPC application development, optimized for performance on NVIDIA GPUs.
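
As an illustrative sketch of the MIG workload allocation mentioned above (these are standard `nvidia-smi` MIG commands, not EMLI AI POD-specific tooling; the GPU index and the `3g.20gb` profile are assumptions), an A100 can be partitioned roughly as follows:

```shell
# Enable MIG mode on GPU 0 (assumed index; takes effect after a GPU reset)
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this A100 supports
sudo nvidia-smi mig -lgip

# Create two 3g.20gb GPU instances (assumed profile) along with
# their default compute instances in one step
sudo nvidia-smi mig -i 0 -cgi 3g.20gb,3g.20gb -C

# Verify: each MIG device now appears as its own schedulable GPU
nvidia-smi -L
```

Each resulting MIG instance can then be assigned to a separate training or inference job, which is how GPU resources are carved up per workload.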

Scale High-Density Deployments with Colocation

We've partnered with Colovore, an industry leader in professional colocation services, to make it easy to deploy, optimize, and scale out high-density deployments.

NVIDIA has certified Colovore's facilities as DGX-ready, with 1000 DGX systems running in a liquid-cooled data center engineered specifically for HPC and AI deployments.

With 35 kW available in every rack, you can fully pack and scale your HPC deployment in contiguous cabinets with no power, cooling, or distance limitations.

Exxact helps design, procure, and install your infrastructure, and manages firmware for your NVIDIA DGX systems so you don't have to.