Server Processors

Elevate your business productivity with AMD EPYC™ server processors. Read the latest blogs on how AMD EPYC™ is delivering exceptional performance and energy efficiency across various deployments and industries.


As artificial intelligence (AI) continues to transform the business landscape, the success of AI deployments depends on the underlying infrastructure. However, there is no one-size-fits-all approach to building AI-ready infrastructure. In this blog post, we break down key takeaways from IDC's latest white paper, which outlines a clear, balanced approach to AI infrastructure for CIOs, enterprise architects, and infrastructure and operations (I&O) teams. Discover how a balanced infrastructure strategy can help you embrace AI-driven innovation with confidence while supporting sustainable, secure, and cost-effective operations.

Jim_Greene

Supermicro is one of the world's premier producers of servers and storage systems, as well as a top provider of fast and efficient AI infrastructure. Linda Yang, a Supermicro senior solution manager, recently joined me on the AMD TechTalk podcast to discuss how the rapid evolution of AI models and workloads is driving change in AI infrastructure. I'm pleased to share highlights from the conversation in this blog post. You can listen to the full podcast here.

Jim_Greene

BeeKeeperAI operates a privacy-preserving, confidential computing platform that protects data and enables secure collaborations. At a time when security threats seem ubiquitous, BeeKeeperAI, with the help of security technologies built into AMD EPYC processors, helps safeguard data. I had a great time speaking with Alan Czeszynski, BeeKeeperAI's marketing and product development leader. The full podcast can be accessed here.

Jim_Greene

I recently had the pleasure of speaking about the future of AI, large language models (LLMs), and retrieval-augmented generation (RAG) with my colleague Meena Arunachalam, AMD AI Systems Engineering Director and Fellow. We packed a lot into our discussion, so I have highlighted just a handful of Meena's invaluable insights and recommendations in this blog post. For more detail, you can listen to the podcast here.

Sarina_Sit

The Tencent Accelerated Computing Optimizer (TACO) Kit integrates the ZenDNN AI inference library to improve inference performance on AMD EPYC CPU backends, enabling Tencent's end customers to evaluate and deploy optimal solutions for their diverse workloads, particularly as hardware heterogeneity continues to grow.

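For readers who want to experiment with ZenDNN acceleration on an EPYC CPU directly, here is a minimal, illustrative sketch using the zentorch PyTorch plug-in, one of the ways ZenDNN is delivered to frameworks. The ResNet-50 model and input tensor are placeholder assumptions, and the TACO Kit wiring on Tencent Cloud is not shown; check the ZenDNN release notes for the framework versions each plug-in release supports.

    # Illustrative sketch only: ZenDNN-accelerated CPU inference via the
    # zentorch plug-in (pip install zentorch). The model and input below are
    # placeholders; the Tencent TACO Kit integration itself is not shown.
    import torch
    import torchvision.models as models
    import zentorch  # importing registers the "zentorch" torch.compile backend

    model = models.resnet50(weights=None).eval()   # any CPU inference model
    example = torch.randn(1, 3, 224, 224)          # placeholder input batch

    # Compile with the ZenDNN-optimized backend for AMD EPYC CPUs.
    compiled = torch.compile(model, backend="zentorch")

    with torch.no_grad():
        output = compiled(example)

    print(output.shape)  # expected: torch.Size([1, 1000])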