Exxact TensorEX 2U Deep Learning & AI Server - 2x AMD EPYC processor - TS2-171138844-DPN
The TensorEX TS2-171138844-DPN is a 2U rack-mountable HGX A100 deep learning NVIDIA GPU server supporting two AMD EPYC 7002/7003-series processors, up to 8 TB of DDR4 memory, and four NVIDIA A100 Ampere GPUs (SXM4) with up to 600 GB/s NVLink GPU-to-GPU interconnect.
GPUs deliver groundbreaking performance for deep learning research, with thousands of compute cores and up to 100x the application throughput of CPUs alone. Exxact's Deep Learning server pairs NVIDIA GPUs with state-of-the-art NVLink GPU-to-GPU interconnect technology and a full pre-installed suite of leading deep learning software, giving developers a jump-start on deep learning research with the best tools available.
Features:
- NVIDIA DIGITS software for designing, training, and visualizing deep neural networks for image classification
- Pre-installed standard Ubuntu 18.04/20.04 with the Exxact Machine Learning Image (EMLI)
- Google TensorFlow software library
- Automatic software update tool included
- A turn-key server with NVLink GPU-to-GPU interconnect topology
An EMLI Environment for Every Developer
Conda EMLI
For developers who want deep learning frameworks and their dependencies pre-installed natively on the system in separate Python environments.
Container EMLI
For developers who want pre-installed frameworks built on the latest NGC containers, GPU drivers, and libraries in ready-to-deploy DL environments with the flexibility of containerization.
DIY EMLI
For experienced developers who want a minimalist install to set up their own private deep learning repositories or custom builds of deep learning frameworks.
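Whichever EMLI flavor is chosen, a quick way to confirm that the framework stack can see all four A100 GPUs is a short Python check. The snippet below is a minimal sketch, assuming TensorFlow is already installed in the active environment (for example via the Conda or Container EMLI); it is an illustration, not part of the shipped image.

```python
# Minimal sanity check: confirm TensorFlow can see the A100 GPUs.
# Assumes an environment with TensorFlow installed (e.g. a Conda EMLI
# environment or an NGC container); illustrative only.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print(f"TensorFlow {tf.__version__} detected {len(gpus)} GPU(s)")
for gpu in gpus:
    details = tf.config.experimental.get_device_details(gpu)
    print(f"  {gpu.name}: {details.get('device_name', 'unknown')}")
```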
NVIDIA A100 Tensor Core GPU
Blistering Double Precision Accelerator for AI & HPC
NVIDIA A100 introduces double-precision Tensor Cores, the most significant milestone for GPU double-precision computing since its introduction for HPC. This enables researchers to reduce a 10-hour double-precision simulation running on NVIDIA V100 Tensor Core GPUs to just four hours on A100.
- Accelerates and enables the most serious HPC and data center workloads
- With 80 GB of high-bandwidth memory (HBM2e), A100 never skips a beat
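As a rough illustration of the double-precision claim above, the sketch below times a single float64 matrix multiply on the first GPU and reports an approximate TFLOPS figure. It is a hypothetical micro-benchmark (timing includes copying the result back to the host), not a vendor benchmark, and assumes TensorFlow is installed in the active environment.

```python
# Hypothetical FP64 micro-benchmark (not a vendor figure): time one
# double-precision matrix multiply on the first GPU.
# Assumes TensorFlow is installed in the active environment.
import time
import tensorflow as tf

N = 8192
with tf.device("/GPU:0"):
    a = tf.random.normal((N, N), dtype=tf.float64)
    b = tf.random.normal((N, N), dtype=tf.float64)
    _ = tf.matmul(a, b).numpy()  # warm-up; .numpy() waits for the GPU

    start = time.perf_counter()
    c = tf.matmul(a, b)
    _ = c.numpy()                # force the result back before stopping the clock
    elapsed = time.perf_counter() - start

flops = 2 * N ** 3  # multiply-adds in an N x N matmul
print(f"FP64 matmul: {elapsed:.3f} s, ~{flops / elapsed / 1e12:.1f} TFLOPS")
```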
A100 SXM4 GPU Options
| Model | Standard Memory | Memory Bandwidth (GB/s) | CUDA Cores | Tensor Cores | Single Precision (TFLOPS) | Double Precision (TFLOPS) | Power (W) |
|---|---|---|---|---|---|---|---|
| A100 80 GB SXM4 | 80 GB HBM2e | 2039 | 6912 | 432 | 19.5 | 9.7 | 400 |
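The device name and memory capacity listed in the table can be confirmed on a running system with NVIDIA's management library. The snippet below is a sketch that uses the nvidia-ml-py bindings (imported as pynvml), which are assumed to be installed separately; they are not listed as part of the EMLI.

```python
# Sketch: query installed GPUs via NVML to confirm model name and memory size.
# Assumes the nvidia-ml-py package (imported as pynvml) is installed.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):  # older bindings return bytes
            name = name.decode()
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"GPU {i}: {name}, {mem.total / 1024**3:.0f} GiB total memory")
finally:
    pynvml.nvmlShutdown()
```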
More Cores, More Cache, More Performance
AMD EPYC Processors Ignite EPYC Performance
Data centers that require the best performance, security, and scalability gravitate to AMD EPYC™. AMD EPYC™ processors are built to handle large scientific and engineering datasets - ideal for compute-intensive modeling and advanced analysis techniques. AMD EPYC™ enables fast time-to-results for HPC.
- Exceptional performance per watt and per-core performance
- 3D V-Cache™ delivers breakthrough on-die memory with up to 768 MB of L3 cache (available only on 7003X-series EPYC)