
Whether you're deploying inference across multi-node clusters, training multi-billion-parameter models, or managing large GPU clusters, ROCm 6.4 software offers a seamless path to high performance with AMD Instinct GPUs.
This blog spotlights five key innovations in ROCm 6.4 that directly address common challenges faced by AI researchers, model developers, and infrastructure teams—making AI development fast, simple, and scalable.
To effectively train and deploy generative AI, large language models, or agentic AI, it's crucial to build parallel computing infrastructure that not only delivers the performance AI/ML workloads demand but also offers the kind of flexibility the future of AI requires. A key consideration is the ability to scale out the GPU-to-GPU communication network in the data center.
At AMD, we believe in preserving customer choice by providing easily scalable solutions that work across an open ecosystem, reducing total cost of ownership without sacrificing performance. Remaining true to that ethos, last October we announced the upcoming release of the new AMD Pensando™ Pollara 400 AI NIC. Today we're excited to share that the industry's first fully programmable AI NIC, designed with developing Ultra Ethernet Consortium (UEC) standards and features, is available for purchase now.
AMD has commissioned IDC to conduct a global survey to better understand the AI PC market and to make those insights more widely available to you.
Artificial intelligence (AI) workloads have deservedly placed GPUs in the spotlight, but there’s a clutch player often overlooked: the host CPU.
KEY TAKEAWAYS
- Your GPU needs a fast CPU—it’s the air traffic controller keeping AI inference running smoothly.
- High-frequency CPUs like AMD EPYC 9575F improve GPU efficiency by reducing latency in key AI tasks.
- Benchmarks show up to 10% faster inference times with AMD EPYC 9575F over Intel Xeon.
- Maximize AI ROI by choosing the right CPU for your GPUs!

Ask a typical IT professional today whether they're leveraging AI, and there's a good chance they'll say yes; after all, they have reputations to protect! Kidding aside, many will report that their teams use web-based tools like ChatGPT, or even have internal chatbots serving their employee base on the intranet, but beyond that, not much AI is really being implemented at the infrastructure level.
AI PCs have evolved rapidly in the two years since AMD introduced the first x86 AI PC CPUs. In a new report, Forrester argues that 2025 will be an important year for AI PC adoption as system capabilities expand and software support improves.