Unleashing Radeon Instinct™ to Drive the Next Era of True Heterogeneous Compute

[Originally posted on 06/20/17 by Ogi Brkic]

Back in December 2016, we first announced our Radeon Instinct initiative, combining our strength in compute with our dedication to open software. We later announced our Radeon Vega Frontier Edition, an enabler of Radeon Instinct.

Today, we’re excited to tell you about the next chapter in our vision for instinctive computing. AMD’s Radeon Instinct™ accelerators will soon ship to our partners (including Boxx, Colfax, Exxact Corporation, Gigabyte, Inventec and Supermicro, among others) and power their deep learning and HPC solutions starting in Q3 2017.

Artificial intelligence and machine learning are changing the world in ways we never could have imagined only a few years ago, enabling life-changing breakthroughs that can solve previously unsolvable problems. Radeon Instinct™ MI25, MI8, and MI6, together with AMD’s open ROCm 1.6 software platform, can dramatically increase performance, efficiency, and ease of implementation, speeding through deep learning inference and training workloads. We’re not just looking to accelerate the drive to machine intelligence, but to power the next era of true heterogeneous compute.

New Radeon Instinct Accelerators

Through our Radeon Instinct server accelerator products and open ecosystem approach, we’re able to offer customers cost-effective solutions for machine and deep learning training, edge training, and inference, where workloads can take full advantage of the GPU’s highly parallel computing capabilities.

We’ve also designed the three initial Radeon Instinct accelerators to address a wide range of machine intelligence applications, including data-centric HPC-class systems in academia, government labs, energy, life sciences, financial services, automotive and other industries:

The Radeon Instinct™ MI25 accelerator, based on the new “Vega” GPU architecture with a 14nm FinFET process, will be the world’s ultimate training accelerator for large-scale machine intelligence and deep learning datacenter applications. The MI25 will deliver superior FP16 and FP32 performance in a passively cooled single-GPU server card with 24.6 TFLOPS of FP16 or 12.3 TFLOPS of FP32 peak performance through its 64 compute units (4,096 stream processors). With 16GB of ultra-high bandwidth HBM2 ECC GPU memory and up to 484 GB/s of memory bandwidth, the Radeon Instinct MI25’s design is optimized for massively parallel applications with large datasets for machine intelligence and HPC-class systems.

The Radeon Instinct™ MI8 accelerator, harnessing the high performance and energy efficiency of the “Fiji” GPU architecture, is a small form factor HPC and inference accelerator with 8.2 TFLOPS of peak FP16|FP32 performance at less than 175W board power and 4GB of High Bandwidth Memory (HBM) delivering up to 512 GB/s of memory bandwidth. The MI8 is well suited for machine learning inference and HPC applications.

The Radeon Instinct™ MI6 accelerator, based on the acclaimed “Polaris” GPU architecture, is a passively cooled inference accelerator with 5.7 TFLOPS of peak FP16|FP32 performance at 150W board power and 16GB of ultra-fast GDDR5 GPU memory on a 256-bit memory interface. The MI6 is a versatile accelerator ideal for HPC and machine learning inference and edge-training deployments.
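The peak figures above follow from a simple rule of thumb: each stream processor retires one fused multiply-add (two FLOPs) per clock, and the “Vega” architecture doubles that rate for FP16. The short C++ sketch below works through that arithmetic; the clock speeds and the MI8/MI6 stream-processor counts are assumptions back-derived from the quoted peaks rather than figures stated in this post.

    // Back-of-the-envelope check of the peak-TFLOPS figures quoted above:
    // peak FLOPS = stream processors x 2 FLOPs per clock (one fused multiply-add) x clock.
    // Only the MI25's 64 CUs (4,096 stream processors) appear in this post; the other
    // counts and all clock speeds are assumptions derived from the quoted peaks.
    #include <cstdio>

    struct Card {
        const char* name;
        int    stream_processors;
        double clock_ghz;   // assumed peak engine clock
        int    fp16_rate;   // FP16 ops per FP32 op: 2 on "Vega", 1 on "Fiji"/"Polaris"
    };

    int main() {
        const Card cards[] = {
            {"MI25", 4096, 1.500, 2},   // 64 CUs x 64 stream processors, per the post
            {"MI8",  4096, 1.000, 1},   // assumed full "Fiji" configuration
            {"MI6",  2304, 1.237, 1},   // assumed "Polaris"-class configuration
        };
        for (const Card& c : cards) {
            double fp32 = c.stream_processors * 2.0 * c.clock_ghz / 1000.0;  // TFLOPS
            double fp16 = fp32 * c.fp16_rate;
            printf("%-4s  ~%4.1f TFLOPS FP32   ~%4.1f TFLOPS FP16\n", c.name, fp32, fp16);
        }
        return 0;
    }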

Radeon Instinct hardware is fueled by our open-source software platform, including:

  • The ROCm 1.6 software platform, planned for rollout on June 29th, adds performance improvements and support for MIOpen 1.0. Scalable and fully open source, it provides a flexible, powerful heterogeneous compute solution for a new class of hybrid Hyperscale and HPC-class systems. Built around an open-source Linux® driver optimized for scalable multi-GPU computing, the ROCm software platform offers multiple programming models, the HIP CUDA conversion tool, and support for GPU acceleration using the Heterogeneous Computing Compiler (HCC); see the HIP sketch after this list.

  • The open-source MIOpen GPU-accelerated library, available June 29th with the ROCm platform, supports machine intelligence frameworks, with support planned for Caffe®, TensorFlow® and Torch®.
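To make the programming model concrete, here is a minimal HIP vector-addition sketch, an illustrative example rather than an official AMD sample. The kernel syntax deliberately mirrors CUDA, which is what the HIP conversion tool exploits when porting existing CUDA code. It assumes a ROCm installation with the HIP runtime headers and compiles with hipcc; error checking is omitted for brevity.

    // Minimal HIP sketch (illustrative only): vector addition on a Radeon Instinct GPU.
    #include <hip/hip_runtime.h>
    #include <cstdio>
    #include <vector>

    __global__ void vadd(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // one element per thread
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);
        std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n, 0.0f);

        // Allocate device buffers and copy the inputs over.
        float *da = nullptr, *db = nullptr, *dc = nullptr;
        hipMalloc(reinterpret_cast<void**>(&da), bytes);
        hipMalloc(reinterpret_cast<void**>(&db), bytes);
        hipMalloc(reinterpret_cast<void**>(&dc), bytes);
        hipMemcpy(da, ha.data(), bytes, hipMemcpyHostToDevice);
        hipMemcpy(db, hb.data(), bytes, hipMemcpyHostToDevice);

        // Launch one thread per element.
        const int block = 256;
        const int grid = (n + block - 1) / block;
        hipLaunchKernelGGL(vadd, dim3(grid), dim3(block), 0, 0, da, db, dc, n);

        hipMemcpy(hc.data(), dc, bytes, hipMemcpyDeviceToHost);
        printf("hc[0] = %.1f (expected 3.0)\n", hc[0]);

        hipFree(da); hipFree(db); hipFree(dc);
        return 0;
    }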

Revolutionizing the Datacenter with “Zen”-based Epyc™ Servers and Radeon Instinct Accelerators

The Radeon Instinct MI25, combined with our new “Zen”-based Epyc servers and the revolutionary ROCm open software platform, will provide a progressive approach to open heterogeneous compute and machine learning from the metal forward.

We plan to ship Radeon Instinct products to our technology partners in Q3 for design in their deep learning and HPC solutions, giving customers a real choice of vendors for open, scale-out machine learning solutions.

For more details and specifications on these cards, please check out the product pages below.

Radeon Instinct MI25

Radeon Instinct MI8

Radeon Instinct MI6