Last fall we announced that the AMD Instinct™ MI200 series accelerators were bringing the Oak Ridge National Laboratory’s Frontier system into the exascale era. Ever since, the world has been waiting to use this technology to accelerate mainstream HPC and AI/ML workloads in enterprise data centers.
The wait is over: today, the AMD Instinct MI210 accelerator is launching to bring the same technologies that power many of the world’s fastest supercomputers to the full spectrum of data center computing, in a PCIe® form-factor package that delivers industry performance leadership in double-precision (FP64) accelerated compute to mainstream innovators in HPC and AI.1
The AMD CDNA™ 2 architecture is our purpose-built, optimized architecture designed to do one thing very well: accelerate compute-intensive HPC and AI/ML workloads. AMD CDNA 2 includes 2nd-generation AMD Matrix Cores bringing new FP64 Matrix capabilities, optimized instructions, and more memory capacity and bandwidth than previous-gen AMD Instinct GPU compute products to feed data-hungry workloads.2
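To give a feel for how those FP64 Matrix capabilities are typically exercised, here is a minimal sketch of a double-precision matrix multiply routed through rocBLAS, which can dispatch to the Matrix Cores on CDNA 2 hardware. The matrix size and the bare-bones error handling are illustrative assumptions, not MI210-specific requirements.

```cpp
// Minimal sketch: double-precision GEMM via rocBLAS on an AMD Instinct GPU.
// The library selects the kernel; on CDNA 2 parts it can use the FP64
// Matrix Core instructions. Dimensions below are illustrative only.
#include <hip/hip_runtime.h>
#include <rocblas/rocblas.h>
#include <vector>

int main() {
    const int n = 4096;                         // assumed square-matrix size
    const double alpha = 1.0, beta = 0.0;

    std::vector<double> hA(n * n, 1.0), hB(n * n, 2.0), hC(n * n, 0.0);

    double *dA, *dB, *dC;
    hipMalloc(&dA, n * n * sizeof(double));
    hipMalloc(&dB, n * n * sizeof(double));
    hipMalloc(&dC, n * n * sizeof(double));
    hipMemcpy(dA, hA.data(), n * n * sizeof(double), hipMemcpyHostToDevice);
    hipMemcpy(dB, hB.data(), n * n * sizeof(double), hipMemcpyHostToDevice);

    rocblas_handle handle;
    rocblas_create_handle(&handle);

    // C = alpha * A * B + beta * C, all in FP64
    rocblas_dgemm(handle, rocblas_operation_none, rocblas_operation_none,
                  n, n, n, &alpha, dA, n, dB, n, &beta, dC, n);

    hipMemcpy(hC.data(), dC, n * n * sizeof(double), hipMemcpyDeviceToHost);

    rocblas_destroy_handle(handle);
    hipFree(dA); hipFree(dB); hipFree(dC);
    return 0;
}
```

Because rocBLAS follows the familiar BLAS calling conventions, existing double-precision codes can typically pick up the new hardware path without source changes beyond relinking against ROCm libraries.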
Our 3rd Gen AMD Infinity Fabric™ technology brings advanced platform connectivity and scalability, enabling fully connected dual and quad P2P GPU hives through three Infinity Fabric links that deliver up to 300 GB/s (dual) and 600 GB/s (quad) of total aggregate peak theoretical P2P I/O bandwidth for lightning-fast communication and data sharing.3 Finally, the AMD ROCm™ 5 open software platform enables your HPC and AI/ML codes to tap the full capabilities of the MI210 and the rest of your GPU accelerators with a single code base, and the AMD Infinity Hub gives you ready-to-run containers preconfigured with many popular HPC and AI/ML applications. Putting the MI210 to work in your data center couldn’t be easier.
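As a small illustration of what those P2P GPU hives look like from software, the HIP runtime exposes standard peer-access calls. The sketch below assumes a node with at least two Instinct GPUs connected over Infinity Fabric; device indices 0 and 1 are illustrative.

```cpp
// Minimal sketch: query and enable direct GPU-to-GPU (peer) access with HIP,
// assuming at least two Instinct GPUs linked over Infinity Fabric.
#include <hip/hip_runtime.h>
#include <cstdio>

int main() {
    int deviceCount = 0;
    hipGetDeviceCount(&deviceCount);
    if (deviceCount < 2) { printf("Need at least two GPUs\n"); return 1; }

    int canAccess = 0;
    hipDeviceCanAccessPeer(&canAccess, 0, 1);   // can GPU 0 reach GPU 1 directly?
    printf("GPU 0 -> GPU 1 peer access: %s\n", canAccess ? "yes" : "no");

    if (canAccess) {
        hipSetDevice(0);
        hipDeviceEnablePeerAccess(1, 0);        // GPU 0 may now read/write GPU 1 memory
        // Subsequent hipMemcpyPeer calls or direct loads/stores move data
        // GPU-to-GPU over the fabric instead of bouncing through host memory.
    }
    return 0;
}
```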
World’s Fastest PCIe® Accelerator for HPC
How does it perform, you ask? The AMD Instinct MI210 is the world’s fastest double-precision (FP64) data center PCIe accelerator, with up to 22.6 teraflops FP64 and 45.3 teraflops FP64 Matrix peak theoretical performance for HPC, delivering a 2.3x FP64 performance boost over NVIDIA Ampere A100 GPUs.1 The MI210 also brings up to 181 teraflops of peak theoretical FP16 and BF16 performance for machine learning training. And it hosts 64 GB of HBM2e memory with 1.6 TB/s of memory bandwidth, 33% more than previous-gen AMD Instinct GPU compute products, to handle the most demanding workloads.2
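If you like to see where peak numbers come from: assuming the publicly listed MI210 specifications of 104 compute units (6,656 stream processors) and a 1,700 MHz peak engine clock, the vector FP64 figure works out to 6,656 stream processors × 2 FLOPs per clock (fused multiply-add) × 1.7 GHz ≈ 22.6 teraflops, and the FP64 Matrix path doubles that to roughly 45.3 teraflops.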
So, how does this translate to real-world applications? Visit the AMD Instinct benchmark page to see how AMD Instinct accelerators stack up against the competition. You may be surprised.
Now you want to know: how do you get access to it? We work with a broad range of technology partners to help make sure you get the best out of your investment.
First, visit our extensive partner server solutions HPC and AI catalog pages to choose a qualified platform from your favorite server vendor. Next, check out the ROCm 5 open software platform that will help bring your HPC codes alive. Or make it even easier by visiting the AMD Infinity Hub and downloading optimized HPC codes encapsulated in containers, ready to run. If you want to test-drive the latest AMD hardware and software before you buy, visit the AMD Accelerator Cloud (AAC) to remotely access and gain hands-on experience with our next-gen high-performance technologies.
Additional Resources:
Learn more about the AMD Instinct™ MI200 series accelerators
Download the full AMD Instinct™ Accelerators Server Solutions Catalog
Test drive the latest AMD hardware and software on the AMD Accelerator Cloud
To see the full list of available application and framework containers, visit the AMD Infinity Hub.
Learn more about the AMD ROCm™ open software platform
Access the latest ROCm drivers, support docs, and ROCm education materials on the new AMD ROCm™ Information Portal.
Guy Ludden is Sr. Product Marketing Mgr. for AMD. His postings are his own opinions and may not represent AMD’s positions, strategies or opinions. Links to third party sites are provided for convenience and unless explicitly stated, AMD is not responsible for the contents of such linked sites and no endorsement is implied.
Endnotes: