


Built For Machines Driven by Intelligent Applications

The new Radeon Instinct accelerators offer organizations powerful GPU-based solutions for deep learning inference and training. Radeon Instinct accelerators feature passive cooling, AMD MultiUser GPU (MxGPU) hardware virtualization technology conforming to the SR-IOV (Single Root I/O Virtualization) industry standard, and 64-bit PCIe addressing with large Base Address Register (BAR) support for multi-GPU peer-to-peer transfers.
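
The sketch below is a minimal, hypothetical illustration (not AMD sample code) of how an application on a ROCm system might probe and enable that multi-GPU peer-to-peer path using the HIP runtime API; whether peer access is actually reported for a given pair of boards depends on the platform and its large-BAR configuration.

    #include <hip/hip_runtime.h>
    #include <cstdio>

    #define HIP_CHECK(expr)                                                       \
        do {                                                                      \
            hipError_t err_ = (expr);                                             \
            if (err_ != hipSuccess) {                                             \
                std::fprintf(stderr, "HIP error: %s\n", hipGetErrorString(err_)); \
                return 1;                                                         \
            }                                                                     \
        } while (0)

    int main() {
        int deviceCount = 0;
        HIP_CHECK(hipGetDeviceCount(&deviceCount));

        // Walk every ordered pair of GPUs, ask the runtime whether direct
        // peer-to-peer access is possible, and enable it where it is.
        for (int src = 0; src < deviceCount; ++src) {
            for (int dst = 0; dst < deviceCount; ++dst) {
                if (src == dst) continue;
                int canAccess = 0;
                HIP_CHECK(hipDeviceCanAccessPeer(&canAccess, src, dst));
                std::printf("GPU %d -> GPU %d peer access: %s\n",
                            src, dst, canAccess ? "yes" : "no");
                if (canAccess) {
                    HIP_CHECK(hipSetDevice(src));
                    HIP_CHECK(hipDeviceEnablePeerAccess(dst, 0));
                }
            }
        }
        return 0;
    }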

The Radeon Instinct Initiative

Along with the new hardware offerings, AMD will deliver MIOpen, a free, open-source library for GPU accelerators intended to enable high-performance machine intelligence implementations, as well as deep learning frameworks optimized for AMD's ROCm software, laying the foundation for the next evolution of machine intelligence workloads:

  • MIOpen GPU-accelerated library: To accelerate high-performance machine intelligence implementations, the free, open-source MIOpen GPU-accelerated library is planned to be available in Q1 2017, providing GPU-tuned implementations of standard routines such as convolution, pooling, activation functions, normalization, and tensor format (see the convolution setup sketch after this list)
  • ROCm deep learning frameworks: The ROCm platform is now also optimized to accelerate popular deep learning frameworks, including Caffe, Torch 7, and TensorFlow, allowing programmers to focus on training neural networks rather than low-level performance tuning through ROCm's rich integrations. With domain-specific compilers for linear algebra and tensors and an open compiler and language runtime, ROCm is intended to serve as the foundation for the next evolution of machine intelligence problem sets
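
As a rough illustration of what "GPU-tuned implementations for standard routines" looks like from the application side, the sketch below sets up a 2-D convolution through MIOpen's C API and asks the library for the resulting output shape. It is a minimal example under the assumption that the released library keeps the handle- and descriptor-style entry points (miopenCreate, miopenSet4dTensorDescriptor, miopenInitConvolutionDescriptor, and so on) published in AMD's ROCm repositories; the tensor sizes are arbitrary illustration values, and workspace allocation, algorithm search, and the actual forward call are omitted.

    #include <miopen/miopen.h>
    #include <cstdio>

    #define MIOPEN_CHECK(expr)                                                  \
        do {                                                                    \
            miopenStatus_t status_ = (expr);                                    \
            if (status_ != miopenStatusSuccess) {                               \
                std::fprintf(stderr, "MIOpen error %d at line %d\n",            \
                             static_cast<int>(status_), __LINE__);              \
                return 1;                                                       \
            }                                                                   \
        } while (0)

    int main() {
        miopenHandle_t handle;
        MIOPEN_CHECK(miopenCreate(&handle));

        // Input: batch of 32 images, 3 channels, 224x224, single precision.
        miopenTensorDescriptor_t input;
        MIOPEN_CHECK(miopenCreateTensorDescriptor(&input));
        MIOPEN_CHECK(miopenSet4dTensorDescriptor(input, miopenFloat, 32, 3, 224, 224));

        // Filters: 64 output channels, 3 input channels, 7x7 kernels.
        miopenTensorDescriptor_t filter;
        MIOPEN_CHECK(miopenCreateTensorDescriptor(&filter));
        MIOPEN_CHECK(miopenSet4dTensorDescriptor(filter, miopenFloat, 64, 3, 7, 7));

        // Convolution: padding 3, stride 2, dilation 1.
        miopenConvolutionDescriptor_t conv;
        MIOPEN_CHECK(miopenCreateConvolutionDescriptor(&conv));
        MIOPEN_CHECK(miopenInitConvolutionDescriptor(conv, miopenConvolution,
                                                     3, 3, 2, 2, 1, 1));

        // Ask the library what output tensor this configuration produces.
        int n, c, h, w;
        MIOPEN_CHECK(miopenGetConvolutionForwardOutputDim(conv, input, filter,
                                                          &n, &c, &h, &w));
        std::printf("Forward convolution output: %d x %d x %d x %d\n", n, c, h, w);

        MIOPEN_CHECK(miopenDestroyConvolutionDescriptor(conv));
        MIOPEN_CHECK(miopenDestroyTensorDescriptor(filter));
        MIOPEN_CHECK(miopenDestroyTensorDescriptor(input));
        MIOPEN_CHECK(miopenDestroy(handle));
        return 0;
    }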

The Radeon Instinct Accelerators

Radeon Instinct accelerators are designed to address a wide range of machine intelligence applications:
  • The Radeon Instinct MI6 accelerator, based on the acclaimed Polaris GPU architecture, will be a passively cooled inference accelerator optimized for jobs/second/joule, with 5.7 TFLOPS of peak FP16 performance at 150W board power and 16GB of GPU memory
  • The Radeon Instinct MI8 accelerator, harnessing the high-performance, energy-efficient "Fiji" Nano GPU, will be a small form factor HPC and inference accelerator with 8.2 TFLOPS of peak FP16 performance at less than 175W board power and 4GB of High-Bandwidth Memory (HBM)
  • The Radeon Instinct MI25 accelerator will use AMD's next-generation, high-performance Vega GPU architecture and is designed for deep learning training, optimized for time-to-solution
 