Instinct Accelerators



From high-performance computing, deep learning, and rendering systems to cloud computing, training complex neural networks, and AMD's ROCm open ecosystem, these blogs offer insights and updates on our products and solutions.


A deep technical overview of the new MoE Align & Sort algorithm. By fully enabling concurrent execution of multiple blocks with arbitrary expert counts, and by making aggressive use of shared memory and registers, MoE Align & Sort delivers significant performance gains on AMD hardware, providing up to a 10x acceleration on AMD Instinct™ MI100 GPUs and 7x on AMD Instinct™ MI300X/MI300A GPUs.
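To make the idea concrete, here is a minimal NumPy sketch of the general align-and-sort step used in MoE token dispatch: token slots are stably sorted by their routed expert, and each expert's group is padded to a multiple of the block size so that every GPU thread block can work on a block belonging to a single expert. The function name, signature, and padding convention below are illustrative assumptions, not the actual kernel described in the post.

```python
import numpy as np

def moe_align_and_sort(topk_expert_ids: np.ndarray, num_experts: int, block_size: int):
    """Group token slots by expert and pad each group to a multiple of block_size.

    topk_expert_ids: flat array of the expert chosen for each (token, top-k slot) pair.
    Returns the aligned slot indices (-1 marks padding) and the expert id that
    owns each block of block_size aligned slots.
    """
    # Stable sort of slot indices by expert id, so slots routed to the same
    # expert become contiguous in memory.
    sorted_slots = np.argsort(topk_expert_ids, kind="stable")
    sorted_experts = topk_expert_ids[sorted_slots]

    # Count slots per expert and round each count up to a block multiple.
    counts = np.bincount(topk_expert_ids, minlength=num_experts)
    padded_counts = ((counts + block_size - 1) // block_size) * block_size

    # Scatter the sorted slot indices into their padded positions.
    padded_total = int(padded_counts.sum())
    aligned = np.full(padded_total, -1, dtype=np.int64)
    offsets = np.concatenate(([0], np.cumsum(padded_counts)[:-1]))
    cursor = offsets.copy()
    for slot, expert in zip(sorted_slots, sorted_experts):
        aligned[cursor[expert]] = slot
        cursor[expert] += 1

    # Expert id owning each padded block of block_size slots.
    block_expert = np.repeat(np.arange(num_experts), padded_counts // block_size)
    return aligned, block_expert
```

On the GPU, the counting, prefix-sum, and scatter steps sketched here are what the post describes as being spread across multiple concurrently executing blocks with heavy use of shared memory and registers.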

 



Customers evaluating AI infrastructure today rely on a combination of industry-standard benchmarks and real-world model performance metrics—such as those from Llama 3.1 405B, DeepSeek-R1, and other leading open-source models—to guide their GPU purchase decisions.

At AMD, we believe that delivering value across both dimensions is essential to driving broader AI adoption and real-world deployment at scale. That’s why we take a holistic approach—optimizing performance for rigorous industry benchmarks like MLPerf while also enabling Day 0 support and rapid tuning for the models most widely used in production by our customers. This strategy helps ensure AMD Instinct™ GPUs deliver not only strong, standardized performance, but also high-throughput, scalable AI inferencing across the latest generative and language models used by customers.

In this blog, we explore how AMD’s continued investment in benchmarking, open model enablement, software and ecosystem tools helps unlock greater value for customers—from MLPerf Inference 5.0 results to Llama 3.1 405B and DeepSeek-R1 performance, ROCm software advances, and beyond.


 


Today, we continue to celebrate the dedication of the El Capitan supercomputer with Lawrence Livermore National Laboratory (LLNL), in collaboration with the National Nuclear Security Administration (NNSA) and Hewlett Packard Enterprise (HPE).



 

The AI era is here, and it's hungry—hungry for performance, scalability, and efficiency. Whether you're building next-gen data centers, fine-tuning your AI/ML workloads, or crafting cutting-edge HPC solutions, one thing is clear: the right ingredients matter. This blog will guide you through the AMD recommended ingredients, the secret sauce, and the cooking techniques needed to create an AI/HPC infrastructure that’s as efficient as it is powerful. Let’s get cooking.


The power of AMD Instinct GPUs is now available from leading cloud providers, so enterprises and AI innovators can easily tap into these accelerators through their preferred cloud solution providers.

 


Partner Spotlight: Nscale Cloud Service Provider
Strategically located near the Arctic Circle in Norway, Nscale's Glomfjord data center leverages the environment’s natural cooling to optimize energy efficiency. Powered by 100% renewable energy, the data center features advanced adiabatic cooling, ensuring streamlined operations, scalable solutions and an eco-friendly footprint.

Nscale has launched its latest GPU cluster, powered by AMD Instinct™ MI250X accelerators, in its Glomfjord data center in Norway. Currently powering some of the world’s top supercomputers, AMD Instinct™ MI250X GPUs are designed to supercharge HPC workloads and meet the growing demands of AI training, fine-tuning, and inference.

 
