
In the enterprise and cloud, AMD EPYC™-based servers are widely supported for Red Hat® OpenShift®

Steve_Bassett

When AMD re-entered the server market with the EPYC processor in 2017, it changed the game completely. With record-setting performance, strong ecosystem support and platforms optimized for modern workloads, EPYC quickly began gaining market share. After earning a modest 2% share in its first year, analysts estimate that AMD EPYC now holds more than 30% of the market. EPYC processors are available on a broad range of platforms from all major OEMs, including Dell, HPE, Cisco, Lenovo and Supermicro.

 

Now, with a major footprint in the enterprise server market and in the public cloud, and with myriad world records for performance and efficiency, it's clear that AMD EPYC is more than capable of supporting the Red Hat OpenShift container orchestration platform. As the foundation of modern enterprise architecture and cutting-edge cloud capabilities, EPYC is the best choice for supporting application modernization. Red Hat Summit was a compelling opportunity to make our case and show why AMD EPYC should be considered for an OpenShift deployment.

 

Gaining market share with world-class performance

 

EPYC's performance, meanwhile, has continued to raise the bar over four generations. The 4th Gen AMD EPYC is the world's fastest data center processor. Compared with the 64-core Intel Xeon Platinum 8592+, the 128-core AMD EPYC 9754 delivers 73% more performance at 1.53x the performance per estimated system watt in general-purpose workloads (SP5-175A).
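The headline ratios can be reproduced directly from the published figures in end note SP5-175A. The sketch below uses the SPECrate 2017_int_base scores and estimated system watts quoted there:

```python
# Published SPECrate(R)2017_int_base scores and estimated system watts,
# taken from end note SP5-175A.
epyc_score, epyc_watts = 1950, 1047   # 2P AMD EPYC 9754 (256 cores total)
xeon_score, xeon_watts = 1130, 930    # 2P Intel Xeon Platinum 8592+ (128 cores total)

# Raw performance ratio and performance-per-estimated-system-watt ratio.
perf_ratio = epyc_score / xeon_score
perf_per_watt_ratio = (epyc_score / epyc_watts) / (xeon_score / xeon_watts)

print(f"Performance: {perf_ratio:.2f}x")                       # 1.73x -> "73% more"
print(f"Perf per est. system watt: {perf_per_watt_ratio:.2f}x")  # 1.53x
```

Note that "73% more performance" and "1.73x the performance" are the same claim stated two ways.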

 

EPYC also delivers leadership inference performance to handle growing AI adoption. For instance, on the industry-standard end-to-end AI benchmark TPCx-AI SF30, an AMD EPYC 9654 powered server delivers ~1.65x the aggregate throughput of an Intel Xeon Platinum 8592+ server (SP5-051A).
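As with the SPECrate claim, the ~1.65x figure follows from the aggregate use-case rates reported in end note SP5-051A:

```python
# Aggregate end-to-end AI throughput from end note SP5-051A
# (TPCx-AI SF30 derivative workload, AMD internal testing).
epyc_use_cases_per_min = 2663   # 2P AMD EPYC 9654, 6 instances x 64 vCPUs
xeon_use_cases_per_min = 1607   # 2P Intel Xeon Platinum 8592+, 4 instances x 64 vCPUs

throughput_ratio = epyc_use_cases_per_min / xeon_use_cases_per_min
print(f"Aggregate throughput: {throughput_ratio:.3f}x")  # ~1.657x, quoted as ~1.65x
```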

 

A full data center portfolio and presence in the cloud

 

 

As you aim to optimize your application performance, you can rest assured that the infrastructure you're already using is either running on AMD today or is ready to run on AMD.

 

The best-selling and best-suited servers for the OpenShift market -- from all major vendors -- are certified for Red Hat OpenShift. If you're curious, take a moment to peruse the Red Hat partner catalog to appreciate just how many AMD-powered options support OpenShift.

 

On the cloud side, you can find a number of AMD-powered instances on AWS and Microsoft Azure that are OpenShift certified. On AWS, for example, EC2 instances that are powered by EPYC include T3a, C5a, C5ad, C6a, M5a, M5ad, M6a, M7a, R5a and R6a.
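A handy detail behind that instance list: AWS's naming convention marks AMD-powered families with an "a" in the family name (e.g. M6a vs. the Intel-based M6i). The helper below is a hypothetical illustration that checks an instance type string against the EPYC-powered families named above; it is not an AWS API.

```python
# EPYC-powered EC2 instance families listed in this post.
EPYC_FAMILIES = {"t3a", "c5a", "c5ad", "c6a", "m5a", "m5ad", "m6a", "m7a", "r5a", "r6a"}

def is_epyc_instance(instance_type: str) -> bool:
    """Return True if an EC2 instance type (e.g. 'm6a.xlarge') belongs to
    one of the EPYC-powered families listed in this post."""
    family = instance_type.lower().split(".")[0]  # "m6a.xlarge" -> "m6a"
    return family in EPYC_FAMILIES

print(is_epyc_instance("m6a.xlarge"))   # True
print(is_epyc_instance("m6i.xlarge"))   # False (Intel-based family)
```

For an authoritative answer, the AWS CLI or console can report the processor details of any instance type.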

 

Powering the workloads of the future

 

For enterprises, the benefit of AMD's growing prominence in the server market is knowing that, whether you're running workloads on premises or in the cloud, you can count on getting optimal performance out of your EPYC infrastructure. This becomes an even more compelling point as a growing number of enterprises look to burst to the cloud at business-critical moments, such as Black Friday sales in the retail sector, when performance is key.

 

Along with native scaling flexibility, modern applications also increasingly incorporate or generate AI features for richer user experiences. This is another advantage of running on AMD EPYC CPUs: they're proven to deliver responsive large language model inference, and LLM inference latency is one of the most critical parameters of any AI deployment. We took the opportunity at Red Hat Summit to show just that.

 

To demonstrate the 4th Gen AMD EPYC's performance, AMD ran Llama2-7B-Chat-HF at bf16 precision over Red Hat OpenShift on Red Hat Enterprise Linux CoreOS. AMD demonstrated EPYC's capabilities on a handful of use cases, including a customer support chatbot. In this case, the time to first token was 219 milliseconds, well within the patience of a human user, who typically expects a response in less than 1 second. Token throughput was 8 tokens per second, while most English readers top out at about 6.5 tokens per second, equivalent to around 5 English words per second. The latency per token was 127 milliseconds, meaning the model can deliver words faster than even a fast reader can keep up with.
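The throughput figures above can be related to human reading speed with a simple conversion. This is a sketch: the words-per-token ratio is an assumption for English text, chosen to match the "6.5 tokens per second, equivalent to around 5 English words per second" equivalence quoted above.

```python
# Assumed English words per LLM token (5 words/s at 6.5 tok/s -> ~0.77).
WORDS_PER_TOKEN = 0.77

def words_per_second(tokens_per_second: float) -> float:
    """Convert token throughput to an approximate English reading rate."""
    return tokens_per_second * WORDS_PER_TOKEN

def latency_ms_per_token(tokens_per_second: float) -> float:
    """Mean inter-token latency implied by a steady throughput."""
    return 1000.0 / tokens_per_second

demo_tps = 8.0  # measured token throughput from the chatbot demo
print(f"{words_per_second(demo_tps):.1f} words/s")       # 6.2 words/s, above a ~5 words/s reader
print(f"{latency_ms_per_token(demo_tps):.0f} ms/token")  # 125 ms implied; demo measured 127 ms
```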

 

It is always great to meet developers, partners and customers at events like Red Hat Summit and hear the unfiltered voice of the customer. AMD has put in the work to prove it has more than competitive infrastructure for modern application development and deployment. OpenShift developers should feel confident that they can rely on EPYC processors, EPYC-based commercial servers and the Red Hat Enterprise Linux and OpenShift ecosystem that surrounds them.

 

It was great to get into the community at the Summit, and it is always good to show how AMD collaborates with leaders such as Red Hat. We'll be back with an update at KubeCon this fall.

 

END NOTES

SP5-175A

SPECrate®2017_int_base comparison based on published scores from www.spec.org as of 01/03/2024. Comparison of published 2P AMD EPYC 9754 (1950 SPECrate®2017_int_base, 720 Total TDP W, 256 Total Cores, $30823 Est system $, 1047 est system W, https://www.spec.org/cpu2017/results/res2023q2/cpu2017-20230522-36617.html ) is 1.73x the performance of published 2P Intel Xeon Platinum 8592+ (1130 SPECrate®2017_int_base, 700 Total TDP W, 128 Total Cores, $27207 Est system $, 930 est system W, https://www.spec.org/cpu2017/results/res2023q4/cpu2017-20231127-40064.html )  [at 1.53x the performance/system W] [at 1.52x the performance/system $]. AMD 1Ku pricing and Intel ARK.intel.com specifications and pricing as of 01/03/2024.  SPEC®, SPEC CPU®, and SPECrate® are registered trademarks of the Standard Performance Evaluation Corporation. See www.spec.org for more information. The system pricing and watt estimates are based on Bare Metal GHG TCO v9.60. Actual costs and system watts will vary.

 

SP5-051A:

TPCx-AI SF30 derivative workload comparison based on AMD internal testing running multiple VM instances as of 4/13/2024. The aggregate end-to-end AI throughput test is derived from the TPCx-AI benchmark and as such is not comparable to published TPCx-AI results, as the end-to-end AI throughput test results do not comply with the TPCx-AI Specification. AMD system configuration: Processors: 2 x AMD EPYC 9654; Frequencies: 2.4 GHz | 3.7 GHz; Cores: 96 cores per socket (1 NUMA domain per socket); L3 Cache: 384MB/socket (768MB total); Memory: 1.5TB (24) Dual-Rank DDR5-5600 64GB DIMMs, 1DPC (Platform supports up to 4800MHz); NIC: 2 x 100 GbE Mellanox CX-5 (MT28800); Storage: 3.2 TB Samsung MO003200KYDNC U.3 NVMe; BIOS: 1.56; BIOS Settings: SMT=ON, Determinism=Power, NPS=1, PPL=400W, Turbo Boost=Enabled; OS: Ubuntu® 22.04.3 LTS; Test config: 6 instances, 64 vCPUs/instance, 2663 aggregate AI use cases/min vs. Intel system configuration: Processors: 2 x Intel® Xeon® Platinum 8592+; Frequencies: 1.9 GHz | 3.9 GHz; Cores: 64 cores per socket (1 NUMA domain per socket); L3 Cache: 320MB/socket (640MB total); Memory: 1TB (16) Dual-Rank DDR5-5600 64GB DIMMs, 1DPC; NIC: 4 x 1GbE Broadcom NetXtreme BCM5719 Gigabit Ethernet PCIe; Storage: 3.84TB KIOXIA KCMYXRUG3T84 NVMe; BIOS: ESE124B-3.11; BIOS Settings: Hyperthreading=Enabled, Turbo boost=Enabled, SNC=Disabled; OS: Ubuntu® 22.04.3 LTS; Test config: 4 instances, 64 vCPUs/instance, 1607 aggregate AI use cases/min. Results may vary due to factors including system configurations, software versions and BIOS settings.  TPC, TPC Benchmark and TPC-C are trademarks of the Transaction Processing Performance Council.

 

SPEC®, SPEC CPU®, SPECrate®, SPECint®, SPECstorage® and SPECpower_ssj® are registered trademarks of the Standard Performance Evaluation Corporation. See www.spec.org for more information. TPC Benchmark is a trademark of the Transaction Processing Performance Council. Xeon® is a trademark of Intel Corporation or its subsidiaries. Other product names used in this publication are for identification purposes only and may be trademarks of their respective companies.