
On August 7 this year, AMD changed the data center market with the launch of the 2nd Gen AMD EPYC processor, the world’s first 7nm and highest performance x86 data center CPU[i]. We hosted an amazing launch event in San Francisco, joined by leading industry partners including Google, Twitter, HPE, Lenovo and others, where we showcased the world record performance[ii], breakthrough architecture and broad ecosystem support for the 2nd Gen AMD EPYC family.


Since launch, we have seen significant traction with customers and partners. They recognize the overall breakthrough performance and the superior single-socket performance of the 2nd Gen EPYC vs. the competition. They also know our higher core counts and support for compelling features like PCIe 4.0 make AMD EPYC the right choice for the future of the data center.


Today, we are proud to have new platforms from Dell and new customers pledging to use the 2nd Gen AMD EPYC for cloud, HPC and even 5G. And with the original codename of “Rome,” what better place to reach this next round of milestones than Rome, Italy.


Earlier today I was joined by our CTO Mark Papermaster, as well as our incredible European team and customers, to share the latest progress with our 2nd Gen AMD EPYC processors and introduce our newest customers. Here are the highlights:


  • Yesterday, Dell EMC announced five new PowerEdge platforms using the 2nd Gen AMD EPYC processor. These platforms were designed from the ground up and optimized to support the features of the new AMD EPYC processor, including PCIe 4.0. You can read more about the new PowerEdge systems here, including purchasing details for the new systems that are available now.
  • Satinder Sethi, GM of IBM Cloud infrastructure, joined me to discuss how IBM Cloud views performance and works to deliver it to its customers. Enterprises moving to cloud want higher levels of performance to support compute-intensive workloads for AI and big data, without jeopardizing security. Security is a critical component of IBM’s hybrid cloud strategy, and technologies like 2nd Gen EPYC with SEV-ES help drive new levels of security in the hybrid cloud era. IBM Cloud customers are also asking for better memory bandwidth for big data and analytics workloads. With 45% greater memory bandwidth in its class,[iii] 2nd Gen EPYC provides fantastic memory bandwidth scaling for big data and analytics workloads. Finally, the core scaling and breakthrough performance of 2nd Gen EPYC provides a superior quality of service and a higher level of performance for container workloads. IBM plans to have more to share in 2020 about its new performance offerings for clients.
  • Nokia joined AMD CTO, Mark Papermaster, on stage and talked about the potential performance implications of the 2nd Gen AMD EPYC processor for 4G and 5G networks. Nokia has tested 2nd Gen AMD EPYC processors in its Cloud Packet Core system, which helps service providers deliver converged broadband, IoT, and machine-type communication services while evolving to a 5G core. In these tests, the 2nd Gen AMD EPYC processors are providing an 80% increase in packet throughput performance compared to previous solutions. This means that with AMD EPYC, Nokia is providing its customers better capacity, performance and scale for their networks.
  • European pure-play cloud provider OVHcloud showcased an upcoming high-end hosting instance based on the 2nd Gen AMD EPYC processor, specifically the EPYC 7402P. The EPYC processor is used in an all-flash server, and the instances will be available at the end of 2019.
  • TSMC joined us on stage to highlight its capacity and capabilities for 7nm fabrication, and it also announced its adoption of AMD EPYC processors to help power its next-gen research and leading process technology.
  • Finally, ATOS and its customer Genci, which fosters the use of supercomputing for the benefit of French scientific communities, joined me to highlight Genci’s use of the ATOS BullSequana X system using the 2nd Gen AMD EPYC processor. Genci specifically chose the 2nd Gen EPYC due to its TCO and fantastic sustained performance efficiency per watt. Additionally, ATOS and AMD showcased a new 2nd Gen AMD EPYC SKU specifically designed for HPC customers that need the highest performance and can support liquid cooling. The AMD EPYC 7H12 is a 64-core/128-thread, 280W part[iv] with a 2.6 GHz base frequency and 3.3 GHz max boost frequency that performs ~11% better on LINPACK compared to the AMD EPYC 7742[v] in testing by ATOS on their BullSequana XH2000 platform. The AMD EPYC 7H12 is being used by Genci, CSC Finland and Uninett in Norway.


Today we continued to take EPYC to new heights. We are thrilled to have the ecosystem supporting us across hardware, software and cloud providers as we face the challenges of the modern data center head-on with 2nd Gen AMD EPYC. You can find numerous OEMs and channel partners that are selling platforms with the new EPYC processors here.


Expect to hear more from us and our partners this year as we continue to expand our reach with the 2nd Gen AMD EPYC processor.


Forrest Norrod is the SVP and GM of the Datacenter and Embedded Solutions Group at AMD. His postings are his own opinions and may not represent AMD’s positions, strategies or opinions. Links to third party sites are provided for convenience and unless explicitly stated, AMD is not responsible for the contents of such linked sites and no endorsement is implied.  GD-5


Cautionary Statement

This blog contains forward-looking statements concerning Advanced Micro Devices, Inc. (AMD) including, but not limited to the features, functionality, performance, availability, timing, expectations and expected benefits of the 2nd Gen AMD EPYC™ processors and the expected timing and benefits of new partner offerings, which are made pursuant to the Safe Harbor provisions of the Private Securities Litigation Reform Act of 1995. Forward-looking statements are commonly identified by words such as "would," "may," "expects," "believes," "plans," "intends," "projects" and other terms with similar meaning. Investors are cautioned that the forward-looking statements in this blog are based on current beliefs, assumptions and expectations, speak only as of the date of this blog and involve risks and uncertainties that could cause actual results to differ materially from current expectations. Such statements are subject to certain known and unknown risks and uncertainties, many of which are difficult to predict and generally beyond AMD's control, that could cause actual results and other future events to differ materially from those expressed in, or implied or projected by, the forward-looking information and statements. Investors are urged to review in detail the risks and uncertainties in AMD's Securities and Exchange Commission filings, including but not limited to AMD's Quarterly Report on Form 10-Q for the quarter ended June 29, 2019.



[i] A 2P EPYC 7742 processor powered server has SPECrate2017_int_peak score of 749, and a int_base score of 682 as of August 7, 2019. The next highest peak score is a 2P Intel Platinum 9282 server at 676, base 643: as of July 28, 2019. SPEC, SPECrate and SPEC CPU are registered trademarks of the Standard Performance Evaluation Corporation. See for more information. ROM-91.

[ii] See for details.

[iii] The EPYC 7002 series has 8 memory channels supporting 3200 MHz DIMMs, yielding 204.8 GB/s of bandwidth, vs. the same class of Intel Scalable Gen 2 processors with only 6 memory channels supporting 2933 MHz DIMMs, yielding 140.8 GB/s of bandwidth. 204.8 / 140.8 = 1.45, or 45% more: AMD EPYC has 45% more bandwidth. Class based on industry-standard pin-based (LGA) x86 processors. ROM-11

[iv] EPYC 7H12 processor boost frequencies may be achieved only with a cooling solution that meets group ‘Z’ requirements.  Achievable boost frequencies may vary depending on the effectiveness of the actual cooling solution. ROM-282

[v] Based on Atos testing of HPL v2.1 benchmark, as of September 13, 2019, using a 2P AMD EPYC 7H12 powered production server versus AMD internal testing of HPL v2.1 benchmark, as of July 17, 2019, using a 2P AMD EPYC 7742 powered AMD reference server. AMD has not independently verified the 7H12 scores. Results may vary. ROM-287

Today, I had the pleasure to address attendees at the 2019 Rice Oil & Gas HPC conference and discuss AMD’s vision for the HPC community and how the required compute power can continue to grow.

With 5.6 million barrels of oil expected to be pumped every day this year, Texas ranks only behind Russia and Saudi Arabia in production. One driver for all that output is technology, including high-performance computing to model oil resources and guide drilling. HPC system architecture has evolved dramatically over the past two decades, from monolithic supercomputers to clusters of industry-standard servers to heterogeneous nodes incorporating CPUs and accelerators such as GPUs. These new architectures have provided an incredible increase in performance and enabled new application areas beyond traditional HPC, most notably Big Data Analytics, Machine Learning, and Artificial Intelligence (AI).

The problem is that the traditional levers used to increase performance are becoming less effective. A more scalable, powerful, and secure approach is required to meet the ever-growing demands. Pushing the envelope of computing is the bread and butter of AMD, and there are a few key areas where we see innovation making a significant near-term contribution to HPC.


Chiplet design is an example of an area where the industry is moving to continue delivering performance gains even as the pace of Moore’s Law slows. Chiplets enable more silicon to be used cost-effectively, allowing companies like AMD to efficiently match processor IP to the best manufacturing process. AMD introduced the chiplet approach in 2017 with AMD EPYC server processors featuring the “Zen” architecture. We are taking it to the next level mid-year with our next-generation 7nm, 64-core EPYC processor (codenamed “Rome”) featuring our “Zen 2” core. We demonstrated Rome in a single-socket configuration running a popular NAMD benchmark, outperforming a 2P Xeon 8180 powered server by an average of up to 15 percent1. (See video of demo here)

Next Generation I/O and Fabrics

The AMD “Zen 2” core is an amazing piece of technology that evolves the already legendary “Zen” design, driving the performance of AMD processors to new heights. But for HPC workloads, you must “feed the beast” through connections to peripherals, networks, storage and memory. Rome is the first x86 server CPU to support PCIe® Gen 4.0, which doubles the performance of each I/O connection and thus boosts performance. We were also early supporters of new, open standards for coherent fabrics, including CCIX and Gen-Z, that have tremendous potential.
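As a rough sanity check on that doubling, theoretical link bandwidth can be computed from the per-lane transfer rate and encoding overhead. The function below is an illustrative sketch of mine (not an AMD tool), using theoretical raw rates rather than measured throughput:

```python
# Rough per-link bandwidth for PCIe Gen 3 vs Gen 4.
# Figures are theoretical peaks with 128b/130b encoding, not measured throughput.

def pcie_bandwidth_gbps(gt_per_s: float, lanes: int = 16) -> float:
    """Theoretical link bandwidth in GB/s for PCIe 3.0 and later.

    gt_per_s: transfer rate per lane in GT/s (8 for Gen 3, 16 for Gen 4).
    PCIe 3.0+ uses 128b/130b encoding, so 128/130 of raw transfers carry
    payload bits; divide by 8 to convert bits to bytes.
    """
    return gt_per_s * (128 / 130) / 8 * lanes

gen3_x16 = pcie_bandwidth_gbps(8)    # ~15.8 GB/s for an x16 Gen 3 link
gen4_x16 = pcie_bandwidth_gbps(16)   # ~31.5 GB/s for an x16 Gen 4 link
print(f"Gen 3 x16: {gen3_x16:.1f} GB/s, Gen 4 x16: {gen4_x16:.1f} GB/s, "
      f"ratio: {gen4_x16 / gen3_x16:.1f}x")
```

Because the encoding scheme is unchanged between the two generations, doubling the transfer rate doubles the usable bandwidth exactly.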

Heterogeneous Processing

The oil and gas industries were some of the first to see the potential of using different processing architectures for different workloads to maximize performance. Combining serial-processing CPUs, like AMD EPYC, with high-performance, parallel GPUs, including AMD Radeon Instinct™, is the new normal for the highest performance HPC systems. Other accelerators, like FPGAs, are another exciting option for specialized workloads. The key to unlocking this potential is software, and open ecosystems like the one AMD established with ROCm are critical. Expect to hear a lot this year about the continued evolution of heterogeneous computing as the industry rallies around open solutions rather than closed, single-vendor options.

I look forward to sharing more perspectives in the year ahead around how AMD views the future of HPC and the datacenter.

  1. Based on AMD internal testing of the NAMD Apo1 v2.12 benchmark. AMD tests conducted on AMD reference platform configured with 1 x preproduction EPYC 7nm 64 core SoC, 8 x 32GB DDR4 2666MHz DIMMs, and Ubuntu 18.04, 4.17 kernel and using the AOCC 1.3 beta compiler with OpenMPI 4.0, FFTW 3.3.8 and Charm++ 6.7.1, achieved an average of 9.83 ns/day; versus Supermicro SYS-1029U-TRTP configured with 2 x Intel Xeon Platinum 8180 CPUs, 12 x 32GB DDR4 2666MHz DIMMs and Ubuntu 18.04, kernel 4.15 using the ICC 18.0.2 compiler with FFTW 3.3.8 and Charm++ 6.8.2, achieved an average of 8.4 ns/day. ROM-01

High Performance Computing (HPC) is one of the most important and fastest growing markets in the datacenter. Though perhaps an overused term, HPC, in the sense of applying massive computing resources to solve complex problems, has become critical well beyond its start in scientific research. Workloads from finance, retail, oil and gas, weather, engineering, and education leverage HPC today. Common to many of these applications is the importance of memory and I/O bandwidth.


A large percentage of HPC workloads are dependent on memory bandwidth as the problems being addressed often don’t fit into caches like other applications can. Insufficient memory bandwidth or insufficient memory capacity can result in CPU compute engines waiting idle. You can have the most CPU cores in the world, but if they aren’t fed the right data in an efficient manner, they can’t do useful work. The situation is analogous to race cars - you can have the biggest engine ever made under the hood, but if you have a tiny fuel line that can’t provide enough fuel to the engine, the car won’t go very fast.
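The size of that "fuel line" is simple to estimate: peak memory bandwidth is just channels times transfer rate times bus width (8 bytes per DDR4 channel). A minimal sketch, using the channel counts and DDR4-2667 speed from the endnotes below; these are theoretical peaks, and the function name is my own:

```python
# Back-of-the-envelope peak memory bandwidth: channels x transfer rate x bus width.
# DDR4 has a 64-bit (8-byte) data bus per channel; these are theoretical peaks,
# not measured throughput.

def peak_mem_bandwidth_gbs(channels: int, mt_per_s: int) -> float:
    """Theoretical peak bandwidth in GB/s (8 bytes per transfer, 1 GB = 10^9 B)."""
    return channels * mt_per_s * 8 / 1000

epyc = peak_mem_bandwidth_gbs(8, 2667)  # 8 channels of DDR4-2667 -> ~170.7 GB/s
xeon = peak_mem_bandwidth_gbs(6, 2667)  # 6 channels of DDR4-2667 -> ~128.0 GB/s
print(f"8 channels: {epyc:.1f} GB/s, 6 channels: {xeon:.1f} GB/s, "
      f"ratio: {epyc / xeon:.2f}x")
```

At equal DIMM speed, eight channels versus six works out to the 33% bandwidth advantage cited in the endnotes.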


Beyond memory bandwidth, you also need enough Input/Output (I/O) bandwidth to ensure that data can get in and out of the CPU and memory. Critical I/O interfaces to storage and the network – be it Ethernet or InfiniBand – are usually connected via PCIe. Bandwidth and latency on those interfaces can quickly become the bottleneck in systems with overloaded PCIe links. When balanced optimally, jobs load and run faster, you can do deeper analysis to get better results, and the number of systems needed for that analysis is reduced.


In recent years, PCIe connections have also been increasingly used to extend the compute capability of the system by connecting to GPUs or FPGA-based accelerators. Many applications scale well with the vector math capabilities of GPUs or by dedicating logic in FPGAs to the inner loops of critical algorithms. Perhaps most important are emerging applications in machine learning, where “heterogeneous” systems with high-performance CPUs and accelerators are the right answer.


All of this thinking went into the design of the AMD EPYC™ processor, and it shows. EPYC is an architecture built for the workloads and applications of current and future datacenters.


  • AMD EPYC has up to 33% more memory bandwidth per core than the competition to keep data flowing to the processors1;
  • A 2P AMD EPYC 7601 processor offers up to 2.6x the memory capacity of a 2P Intel Xeon Platinum 81802;
  • All AMD EPYC processors have the ability to support up to 128 PCIe lanes so that I/O does not become a bottleneck3;
  • EPYC has outstanding floating point capabilities with world record performance in multiple floating-point benchmarks and real HPC applications4;
  • Single- and dual-socket EPYC-based server solutions allow up to six GPUs or FPGAs to be attached to the CPU with enough lanes left over for high-speed storage devices and high-speed Ethernet or InfiniBand connections.
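To make the lane math in that last point concrete, here is a sketch of how 128 PCIe lanes might be budgeted. The device mix and lane widths are hypothetical illustrative choices of mine, not a specific shipping configuration:

```python
# Hypothetical PCIe lane budget for an EPYC system with 128 lanes shared
# among accelerators, NVMe storage, and a network adapter. Device counts
# and lane widths are illustrative, not a specific shipping configuration.

TOTAL_LANES = 128

# name: (device count, lanes per device)
devices = {
    "GPU/FPGA accelerator x16": (6, 16),  # six full-width accelerators
    "NVMe SSD x4": (4, 4),                # four high-speed drives
    "high-speed NIC x8": (1, 8),          # one Ethernet/InfiniBand adapter
}

used = sum(count * lanes for count, lanes in devices.values())
assert used <= TOTAL_LANES, "lane budget exceeded"
print(f"lanes used: {used}, lanes free: {TOTAL_LANES - used}")
```

Six x16 accelerators consume 96 lanes, leaving 32 lanes free, which is enough for the storage and network devices in this sketch with room to spare.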


Many AMD EPYC platforms on the market today deliver outstanding performance on memory-bound workloads. For virtualized and memory-centric solutions, both HPE and Dell offer 2U rack-based systems: the HPE ProLiant DL385 Gen10 and the Dell PowerEdge R7425. For ultra-dense compute solutions, Supermicro, Cray and Cisco offer 4-nodes-in-2U (4N/2U) solutions: the Supermicro AS-2123BT-HNC0R, Cray CS500 and Cisco UCS C4200/C125.


AMD EPYC has been met with great excitement by the market, and its balanced architecture delivers world record performance. And looking ahead, we have a strong roadmap that is primed to deliver premium performance and innovation for years to come.





1 AMD EPYC™ 7601 processor supports up to 8 channels of DDR4-2667, versus the Xeon Platinum 8180 processor at 6 channels of DDR4-2667. NAP-42


2 A single AMD EPYC™ 7601 processor offers up to 2TB/processor (x 2 = 4TB), versus a single Xeon Platinum 8180 processor at 768GB/processor (x 2 = 1.5TB). NAP-44


3 AMD EPYC™ processor supports up to 128 PCIe® Gen 3 I/O lanes (in both 1- and 2-socket configurations), versus the Intel® Xeon® SP Series processor supporting a maximum of 48 lanes of PCIe® Gen 3 per CPU, plus 20 lanes in the chipset (max of 68 lanes on 1 socket and 116 lanes on 2 sockets). NAP-56





Cautionary Statement

This blog contains forward-looking statements concerning Advanced Micro Devices, Inc. (AMD) including, but not limited to, the strength, expectations and benefits regarding AMD’s technology roadmap, which are made pursuant to the Safe Harbor provisions of the Private Securities Litigation Reform Act of 1995. Forward-looking statements are commonly identified by words such as "would," "may," "expects," "believes," "plans," "intends," "projects" and other terms with similar meaning. Investors are cautioned that the forward-looking statements in this blog are based on current beliefs, assumptions and expectations, speak only as of the date of this blog and involve risks and uncertainties that could cause actual results to differ materially from current expectations. Such statements are subject to certain known and unknown risks and uncertainties, many of which are difficult to predict and generally beyond AMD's control, that could cause actual results and other future events to differ materially from those expressed in, or implied or projected by, the forward-looking information and statements. Investors are urged to review in detail the risks and uncertainties in AMD's Securities and Exchange Commission filings, including but not limited to AMD's Quarterly Report on Form 10-Q for the quarter ended March 31, 2018.


One year ago, we launched the revolutionary AMD EPYC™ processor family into the market. It’s been an amazing year as AMD upended the status quo in the server industry, reintroducing innovation and choice to the datacenter.


EPYC brings a distinct advantage in both dual- and single-socket configurations, with up to 32 high-performance “Zen” cores, 8 memory channels and 128 lanes of PCIe available on all EPYC processors. EPYC delivers a leadership two-socket system that can compete with the best the competition has to offer. Our no-compromise single-socket server processor allows customers to buy the right size and the right system for their workload without compromising on performance, reliability or features, ducking the constraints that force users of Intel-based systems into a two-socket server when a single-socket system would be the better choice.


I am very proud of what we have accomplished the past 365 days, from setting world records in performance to winning a continuous stream of customer deployments.


Some of my favorite moments of this year include:

  • Participating in the launch event in 2017. Seeing the support from the ecosystem and all the companies that stood on stage with us, committing to AMD EPYC – HPE, Microsoft Azure, Dell Technologies, VMware and Baidu – was truly awesome. Since then, we have gained 14 system partners in the market and delivered more than 50 different platforms, a number that will continue to grow.
  • Announcing the OEM deployments of Hewlett Packard Enterprise (HPE), Dell EMC, Supermicro, Cisco, Sugon and Lenovo. These server solutions demonstrate the flexibility of the single-socket and/or dual-socket designs offered by AMD EPYC, giving customers exceptional choice based on their performance needs and future scalability.
  • Seeing our single socket servers compete in the market and win over our customers previously forced into two-socket systems.
  • Joining forces with global supercomputer leader, Cray. EPYC processors are powering the new Cray CS500 cluster high performance computing (HPC) systems.
  • Seeing Microsoft Azure, Baidu and Tencent Cloud deploy AMD EPYC for public cloud instances for a variety of workloads like virtualization, AI and e-commerce.


We’re already taking great strides with the next-generation, 7nm AMD EPYC processor – codenamed “Rome” – which is up and running in AMD labs ahead of its launch in 2019. We are committed to this market for the long term, our product roadmap is on track, and we are engaged across the ecosystem to change the datacenter with EPYC. We deeply appreciate the ecosystem support and look forward to working with them to drive innovation in datacenter for years to come. The time is right for AMD and for our customers and partners.

One of the leading cloud computing companies in China, Tencent Cloud, has announced the immediate availability of the AMD EPYC based SA1 Cloud offering. Tencent Cloud is a secure, reliable, high-performance cloud compute service provider that integrates Tencent’s infrastructure capabilities with the advantages of its massive-user platform and ecosystem – best known for supporting hundreds of millions of people on Tencent’s QQ and WeChat applications. The AMD EPYC 7551 two-socket platform delivers the compute power needed for workloads ranging from social media platforms to gaming and e-commerce, while providing a 30% lower cost structure per virtual machine for Tencent Cloud customers.


The SA1 Cloud offering is online now and is proving to be extremely popular with companies who are leveraging the cloud for their workloads. Tencent’s testing has also shown AMD EPYC to offer an exceptional level of performance at a lower total cost of ownership (TCO) compared to other solutions. Tencent Cloud customers can access high-performance “Zen” core counts of 1, 4, 8 or 16 cores for virtual machines, with access to up to 8 GB of memory per core. The SA1 instance can be purchased via the Tencent Cloud Services portal.


With the addition of Tencent Cloud Services, AMD continues the momentum with cloud service providers that want to offer a compelling level of performance and features to their customers.


For more on the performance of the EPYC processors, visit

We are less than a month away from celebrating AMD EPYC’s first birthday.  What a year it’s been! We’ve built a revolutionary product line eagerly adopted by world class customers, ecosystem partners, and their customers. We now have 14 server systems partners and over 50 server platforms introduced and ramping.  But we’re just getting started.  To that end, I’m immensely proud to publicly welcome a proven datacenter innovator, Cisco and its Unified Computing System (UCS) Portfolio to the AMD EPYC™ family.  The UCS C4200 solution featuring AMD EPYC holds enormous promise for UCS to continue its decade long drumbeat of bringing unparalleled innovation to their customers.


Cisco and AMD have a legacy of rethinking the status quo, shunning incrementalism, and introducing disruptive innovation with compelling customer value. Though Cisco is now a server and datacenter leader, many forget that it was almost ten years ago when Cisco turned the server industry upside down with its launch of “Project California,” now known as UCS. UCS transformed the industry by unifying servers, virtualization, and storage access to help customers move towards a programmable infrastructure.


The new UCS platform powered by AMD EPYC extends the Cisco UCS value of programmable, application-centric infrastructure to use cases where core, memory, and storage density are key requirements. With AMD EPYC, Cisco is able to deliver 128% more cores, 50% more servers, and 20% more storage per rack than other Cisco UCS servers. Coupling Cisco UCS Intersight and ACI with AMD’s Secure Memory Encryption and Secure Encrypted Virtualization technologies holds enormous promise to help service providers and hybrid cloud administrators isolate multiple tenants and applications more securely. It really is an incredible match.


We are committed to continuing to work with industry leaders like Cisco to help organizations meet the increased demand for data management and cost optimization. AMD EPYC provides the right balance of compute, memory, I/O and security for high density environments, so organizations can continue to keep pace and even stay ahead of emerging workload requirements. Stay tuned for more EPYC momentum in the coming weeks.

Competition in the datacenter processor market for the past few years has been limited, but the arrival of EPYC™ has changed that. We’ve had momentum with major customer wins, the ecosystem is rallying behind us, and killer product reviews showcase the performance of EPYC going head-to-head with Intel at the top of the processor stack and absolutely dominating in SPEC performance in the middle of the stack.

One of our key innovations has been the introduction of a true enterprise-class single-socket processor. For too long, Intel has artificially limited the single-socket market to keep eyes and dollars focused on its two-socket offerings. However, over half of the systems using Intel’s two-socket parts could have had their performance and feature needs better met if an unconstrained, datacenter-capable single-socket processor were available. Coming back into the market, we have the freedom to disrupt, and AMD’s no-compromise single-socket capability is a true game changer for many workloads including virtualized storage, VM farms, and Web hosting.

Dell debuted its EPYC-based PowerEdge R7415 single-socket server for storage and analytics applications, which offers up to 20 percent lower total cost of ownership than the alternatives. ITPro recently put the R7415 through its paces, running single-socket EPYC solutions up against two-socket Intel solutions, concluding that it is a “serious alternative to more costly 2P Xeon SP servers” and “a great choice for datacenters that want a single socket rack server with support for up to 32 CPU cores, a high memory capacity and a sharp focus on storage-centric workloads.” When evaluating server options for the datacenter, the EPYC one-socket offering should be a serious contender because there are many use cases and workloads where it makes the most sense. An optimized 1P EPYC offering can pay huge dividends in storage and compute applications, digging in with more I/O and memory and saving holistically on total cost of ownership.

This week*, Dell Technologies will bring together all its brands and thousands of business and tech professionals for the massive Dell Technologies World in Las Vegas. AMD will be there, featured in several of Dell’s new platforms including their new single-socket servers (booth #705). We couldn’t be more excited to participate and showcase our highly scalable single- and dual-socket servers. The Dell platforms leverage the high-performance EPYC 7000 series processors to deliver exceptional performance in key workloads like virtualized storage, cloud, and big data.

*Originally posted on LinkedIn Pulse on 4/30/18.

As we approach the end of the year we have tremendous momentum and excitement building for AMD EPYC™ processors. Two weeks ago we were proud to have numerous partners with us at SC17 introducing EPYC to the HPC community. This week at HPE Discover, our teams will be showing off the latest EPYC-based system, the ProLiant DL385 Gen10 server.



With the new DL385, we’re once again partnering with HPE to bring AMD to the best-selling server in the industry, and this latest version is breaking records. An AMD EPYC model 7601-based HPE DL385 Gen10 system scored 257 on SPECrate®2017_fp_base and a 1980 on SPECfp®_rate2006, both of which are higher than any other two socket system score published by SPEC®. We’re extremely proud of this performance, which is a testament to our floating point implementation in Zen and HPE’s platform leadership. This week at HPE Discover, we are also showing how we are changing the economics of virtualization with EPYC in the heart of the server market, delivering up to 50% lower cost per VM.



EPYC is substantially lowering the total cost of ownership for the datacenter and giving customers a real choice for the first time in nearly a decade. Clearly this has many in the industry very excited. Even our competitor is taking notice.


A lot of our competitor’s angst around EPYC vs Xeon has been at the top of the performance envelope, which of course is very relevant for certain types of high-performance applications, but not critical for the vast majority of the server market today. Our teams have taken the great Zen core and built a highly differentiated part with unique memory, I/O, and security capabilities. Put it together and we have performance leadership for the workloads we’ve targeted. But beyond those workloads, in the middle of the performance stack, where most of the server business sits, EPYC has incredible performance advantages over Xeon Skylake scalable processors for the majority of applications.


Serve The Home recently showed some very compelling data focused at the heart of the server market. When testing our 16 core EPYC 7301 family, the data shows that in the highest volume portions of the market EPYC delivers more cores, more memory bandwidth and a significant performance advantage over the Xeon Silver product line across multiple benchmarks.


Our partners appreciate the choice and differentiation that EPYC provides to their solutions. In addition to the HPE, Sugon, Supermicro and other solutions we’ve already announced, we’re working with Dell and many other OEM/ODMs to bring their first EPYC-based platforms to market. Three of the Super 7 mega datacenter providers have publicly announced plans to deploy EPYC-based products, including Baidu, Microsoft Azure and Tencent, and we have strong engagements with other major cloud providers.


I’m extremely proud of the AMD team and the EPYC products they have created. With EPYC we are restoring choice and innovation to the server market. Stay tuned, because this is just the beginning of an exciting time for EPYC and new era of choice for the datacenter market. We’ll have more big news to share soon.


Forrest Norrod is Senior Vice President and General Manager of the Enterprise, Embedded and Semi-Custom Business Group at AMD. His postings are his own opinions and may not represent AMD’s positions, strategies or opinions. Links to third party sites are provided for convenience and unless explicitly stated, AMD is not responsible for the contents of such linked sites and no endorsement is implied.

Every year it’s inspiring to see an eclectic group of students, scientists, researchers and technological innovators from the high performance computing (HPC) community come together at SuperComputing 2017 (SC17). It’s the one place where some of the brightest minds in computing can share ideas and map out the future of cutting-edge technologies.


At the heart of all the work and collaboration accomplished this week is a foundation set in mathematics and the ability of computers to handle a mind-boggling number of computations and an incredible amount of data. Modeling, simulation, and data analysis now underpin everything from protein folding and gene research to high-energy particle physics and cosmology.


When you think about math in the context of the datacenter and with high performance CPUs, more often than not you’re really talking about how you leverage the utility of floating point.


Uncovering the Utility of Floating Point


Earlier this year, we released our new family of high-performance server and datacenter processors, the AMD EPYC™ 7000 series. The EPYC processor is a highly scalable CPU that includes a very powerful and capable floating point unit (FPU). The philosophy of the EPYC FPU is to deliver the most easily usable performance. Its unique co-processor architecture enables a high floating point instruction issue rate and great memory bandwidth, with the energy efficiency to maintain a high frequency even when loaded. We support all the floating point instructions used by today’s systems, up through AVX2. Running existing codes or even the latest floating point benchmarks, the numbers speak for themselves: EPYC delivers incredible FP performance.


We define utility as usefulness over cost. It’s an idea that underpins computing technology, businesses and overall strategies. We’re working to provide and apply the technologies that let the HPC community do more with much less. When you do the math, we’re delivering an EPYC processor with up to 3X* the performance per dollar of the competition. Whether you’re thinking about the immediate or long-term implications, it’s the utility that makes all the difference. And when you couple the EPYC CPU with the utility of the Radeon™ GPU, you can extend that advantage even further.
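The math behind that claim is easy to reproduce. The sketch below uses the SPECfp®_rate2006 peak scores and per-processor prices cited in the endnote at the bottom of this post; it is an illustration of the utility calculation, not an official benchmark run.

```python
# Rough perf-per-dollar comparison using the publicly cited
# SPECfp(R)_rate2006 peak scores and per-processor prices from the endnote.
epyc_score, epyc_price = 1850, 2 * 4200    # 2P EPYC 7601, $4,200 per CPU (1ku)
xeon_score, xeon_price = 1830, 2 * 13011   # 2P Xeon Platinum 8180M, $13,011 per CPU

epyc_ppd = epyc_score / epyc_price  # score per dollar of CPU cost
xeon_ppd = xeon_score / xeon_price

print(f"EPYC perf/$: {epyc_ppd:.4f}")
print(f"Xeon perf/$: {xeon_ppd:.4f}")
print(f"Advantage:   {epyc_ppd / xeon_ppd:.2f}x")  # ~3.13x
```

Note this considers CPU list price only; full-system cost would narrow or widen the ratio depending on configuration.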


Unlocking the Power of AMD at SC17: EPYC + RADEON INSTINCT


Although floating point is key to how we approach the next generation of CPUs, for massively vectorized workloads there is an even better choice: the power of today’s GPUs, which far exceed any CPU on highly parallel applications. AMD offers leading compute GPUs in our Radeon Instinct™ line. You can now pair the best of AMD in high-performance CPU and GPU with EPYC and Radeon Instinct to create a heterogeneous supercomputing solution that tackles real-world applications where floating point thrives – from fluid dynamics and weather mapping to oil and gas exploration and more.


At SC17, we’re bringing the power of a combined EPYC and Radeon Instinct platform to the show floor with Project 47 (P47). Inventec’s P47 platform provides direct access to four Radeon Instinct GPUs through a single EPYC processor without the need for PCIe switches, which removes design barriers and streamlines performance. The AMD-based platform then flexes its scalability by scaling to 20 1P EPYC processor-based Inventec servers to produce a petaFLOPS of single-precision computing. By supporting both heterogeneous supercomputing systems and memory-bound CPU platforms, EPYC addresses several real-world applications to support safer, more productive operations.
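A quick back-of-the-envelope check shows how the rack reaches a petaFLOPS. The sketch below assumes the GPUs are Radeon Instinct MI25 cards at roughly 12.29 TFLOPS peak FP32; the post does not name the exact GPU model, so that figure is an assumption for illustration.

```python
# Back-of-the-envelope total for the P47 rack's single-precision compute.
# Assumes Radeon Instinct MI25 cards at ~12.29 TFLOPS peak FP32 each
# (an assumption; the GPU model is not named in the post).
servers = 20          # 1P EPYC-based Inventec servers in the rack
gpus_per_server = 4   # Radeon Instinct GPUs attached to each EPYC CPU
tflops_per_gpu = 12.29

total_tflops = servers * gpus_per_server * tflops_per_gpu
print(f"{total_tflops:.0f} TFLOPS, about {total_tflops / 1000:.2f} petaFLOPS")
```

Under those assumptions the rack lands at roughly 983 TFLOPS of peak single-precision throughput, i.e. about one petaFLOPS.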


Focusing on Math for the Masses


I found my inspiration in science and electronics early on as a student, and we’re thrilled to help a group of university students competing in the Student Cluster Competition at SC17 find their own individual sparks.


AMD, along with Supermicro and Mellanox, is supporting a student team from Northeastern University. Using a system developed around EPYC and Radeon Instinct, the team will square off against international competitors to run a mix of known and unknown HPC codes around the clock over a couple of days to test their high-performance skills.


The lessons learned from that competition will be invaluable. With any luck, the next-generation of visionaries will find their moment of math inspiration and use the high-performance technologies of today to define a more promising tomorrow.


*Based on SPECfp®_rate2006 scores published on www.spec.org as of October 25, 2017. 2 x EPYC 7601 CPU ($4,200 per processor at AMD 1ku pricing) in Sugon A620-G30, Ubuntu 17.04, x86 Open64 v4.5.2.1 Compiler Suite, 512 GB PC4-2666V-R memory running at 2400 MT/s, 1 x 1TB SATA 7200RPM drive, has a peak score of 1850 (base score 1670); versus 2P Xeon Platinum 8180M ($13,011 per processor) in a Cisco UCS C240 M5 system with SUSE Linux Enterprise Server 12 SP2, ICC, 384GB PC4-2666V-R memory, 1x240GB SATA SSD, with a score of 1830 (base score 1800). SPEC and SPECfp are registered trademarks of the Standard Performance Evaluation Corporation. See www.spec.org for more information. NAP-49

I’ve spent the majority of my career focused on making systems that weave the fabric of data into our lives. It’s critical that we do not simply create technology for technology’s sake, but that we create solutions to make a difference.


This is why I joined AMD almost three years ago. To be part of a team with the creativity and willingness to innovate and differentiate for our customers. To be a part of a team that would harness AMD’s heritage of innovation and disruption, anticipating customer demands to create more than a choice, but a better choice in high-performance CPUs.


AMD has a great history focused on bringing innovative products to market, and this month marks a strong continuation of that heritage. This is a huge moment for our entire industry as we launch a new family of high-performance server and datacenter processors, the AMD EPYC™ 7000-series, which will deliver greater performance than current solutions at every competitive price point.


The inventive design of EPYC achieves record-setting performance, with up to 32 high-performance cores supported by a rich set of capabilities. All EPYC processors include:

  • Industry-leading memory bandwidth, including 8 channels of DDR4 memory
  • Unprecedented support for integrated, high-speed I/O with 128 lanes of PCIe® 3.0
  • A dedicated security subsystem that takes data encryption to the next level
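The memory-bandwidth headline can be sanity-checked with simple arithmetic. The sketch below assumes DDR4-2667 DIMMs populating all eight 64-bit channels (8 bytes move per channel per transfer); that speed grade is an assumption for illustration.

```python
# Peak theoretical DDR4 bandwidth for one EPYC socket, assuming
# DDR4-2667 DIMMs on all eight channels (64-bit = 8 bytes per transfer).
channels = 8
transfers_per_sec = 2667e6   # DDR4-2667 runs at 2667 MT/s
bytes_per_transfer = 8       # one 64-bit channel

per_channel_gbs = transfers_per_sec * bytes_per_transfer / 1e9
total_gbs = channels * per_channel_gbs
print(f"{per_channel_gbs:.1f} GB/s per channel, {total_gbs:.1f} GB/s per socket")
```

That works out to about 21.3 GB/s per channel and roughly 170.7 GB/s per socket at peak, the theoretical ceiling rather than achievable application bandwidth.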


EPYC represents a new, comprehensive approach to processor and system design, tailored to solve the unique challenges of the datacenter and today’s workloads. We’re incredibly proud of EPYC’s power and performance, and this platform will help companies optimize and right-size in new ways. Single-socket options with 32, 24 and 16 cores give customers new choices to maximize utilization and efficiency, driving up to 20 percent lower cap-ex and letting companies big and small do more with less.


One of the principal goals for EPYC was to create a processor for the cloud era, tuned not only for the performance demands of modern scale-out software, but also able to meet the challenge of cloud security.  EPYC includes a Hardware Validated Boot capability that helps assure the user that the firmware in the system is free of malware.


AMD added unique encryption technology to further enhance security. The Secure Memory Encryption (SME) feature allows an admin to encrypt all memory transactions leaving the processor. This enables a whole new level of protection against physical attacks, uniquely protecting even non-volatile memory DIMMs that could otherwise be removed from the system and compromised. A step beyond SME is Secure Encrypted Virtualization (SEV), which enables VMs or containers to encrypt themselves and their memory with unique keys, protecting VMs from each other or even from a rogue systems administrator. SEV is truly security for the cloud era, as it allows users to be more secure even in a multi-tenant cloud environment. Both SME and SEV are enabled via a cryptographic engine in the on-chip memory controller, which performs the encryption tasks with minimal impact on application performance.
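For readers who want to see whether a running system advertises these capabilities, here is a minimal sketch (not an official AMD tool) of how a Linux admin might look for the kernel-reported "sme" and "sev" CPU feature flags, which the Linux kernel exposes in /proc/cpuinfo on capable platforms:

```python
# Minimal sketch: check which AMD memory-encryption features the kernel
# reports via CPU feature flags in /proc/cpuinfo ("sme", "sev").
def memory_encryption_flags(path="/proc/cpuinfo"):
    """Return the subset of {'sme', 'sev'} advertised by the CPU."""
    try:
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    flags = set(line.split(":", 1)[1].split())
                    return {"sme", "sev"} & flags
    except FileNotFoundError:
        pass  # not Linux, or /proc unavailable
    return set()

print(memory_encryption_flags())
```

An empty set simply means the kernel does not report the flags; actually using SME/SEV additionally requires firmware and hypervisor support.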


We know that product innovation is not enough. Any server must be supported with a global ecosystem of partners and customers and we are proud to be engaged with industry leaders including HPE, Dell, Supermicro, Lenovo, Microsoft Azure, Baidu, Dropbox, Sugon, Tyan, Asus, Gigabyte, Inventec, and Wistron. In addition, primary hypervisor and server operating system providers Microsoft, VMware, and Red Hat, are showcasing optimized support for EPYC, while key server hardware ecosystem partners like Samsung, Mellanox, and Xilinx are also featured in EPYC-optimized platforms.


EPYC is the processor that can tackle the toughest demands of the datacenter, whether you are talking about high-performance computing, the cloud, machine learning or big data and analytics. Today marks the start of a new era, not just for AMD, but for our customers, partners and the entire industry.



It is clear that the datacenter has long been the lifeblood of large businesses, but it’s increasingly at the core of our culture and plays a central role in everyday life. Every interaction with the Internet and virtually every application you open on your phone relies on the compute and storage capabilities of a remote datacenter – a constant barrage of correlating, analyzing and delivering huge streams of data to people and devices all over the world.


At the core of every datacenter are the racks of servers running the code. The server is typically a nondescript-looking box that has supported an incredible amount of innovation in software, with transformational streaming services and online transportation networks disrupting traditional business models and delighting users. But most surprisingly, all these fantastic new experiences run on server designs that are basically the same as they were 10 years ago – before that smart phone was in your pocket.


Last decade, AMD drove innovations that became the fundamental underpinnings of today’s server architecture. AMD firsts include support for x86 64-bit code; multiple processor cores on a single chip; high-performance, scalable interconnects that allow the system to scale up or down as needed; and integrated memory controllers that feed the cores. Virtualization technology is also fundamental today, and AMD drove the first virtualization hardware support, allowing the server to be sliced up into many different virtualized services that are easily deployed at scale.


It’s this heritage that brought me to AMD a little over two years ago, and this same ingenuity is fueling AMD to make its much-needed return to the datacenter market. AMD understands what it takes to build the modern server and soon we will deliver on that promise.


This week AMD is disclosing for the first time information on “Naples,” a server CPU based on the highly regarded, high-performance “Zen” core.  This 32-core, 64-thread CPU signals AMD's re-entry into the high-performance server market and our intention to once again be a significant player in the datacenter. The new AMD server processor exceeds today’s top competitive offering on critical parameters, with 45% more cores1, 60% more input/output (I/O) capacity2, and 122% more memory bandwidth3.
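Those percentages follow directly from the configurations in the endnotes below. The sketch verifies the arithmetic; the Xeon memory-bandwidth figure is an assumption (4 channels of DDR4-2400, 4 × 19.2 GB/s = 76.8 GB/s), since the endnote text is truncated.

```python
# Verifying the headline deltas from the endnote configurations.
# Xeon E5-2699A v4 memory bandwidth of 76.8 GB/s is an assumption
# (4 channels x DDR4-2400 x 8 bytes); the other figures are from the endnotes.
def pct_more(ours, theirs):
    """Percentage by which `ours` exceeds `theirs`, rounded to whole percent."""
    return round(100 * (ours / theirs - 1))

print(pct_more(32, 22))        # cores per socket        -> 45
print(pct_more(64, 40))        # PCIe lanes per socket   -> 60
print(pct_more(170.7, 76.8))   # memory bandwidth, GB/s  -> 122
```

Each headline number is simply the ratio of the two configurations minus one, expressed as a percentage.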


With up to 64 cores, 4 TB of memory, and 128 lanes of PCIe® connectivity, two-socket servers built with the AMD “Naples” processor will have the flexibility, performance and security to support workloads that once required 4-socket or larger server configurations. With this much capacity, organizations can support even more virtual machines per server in virtualized and cloud computing environments. In addition, they can process even more data in parallel, and execute even more high-performance computing workloads that require massive parallelism.


Take Control of Your Technology Future

As we approach the opening of the 2017 Open Compute Project Summit, we’re energized to once again join the global community of technology leaders who also see the value in rethinking datacenter hardware to create more efficient, flexible and scalable solutions.


The server market supports rapidly innovating industries such as machine learning, software-defined storage, web services and data analytics. Being able to effectively support growing demands is a must. “Naples” is a result of AMD focusing on maximizing datacenter advancements and reducing complexity in technology components to deliver greater choice, customization and cost savings. The response from customers and partners has been tremendous. We are incredibly excited to bring a truly balanced system to the market that reflects the name, heritage and vision of “Zen.” We’re looking forward to sharing more at the Open Compute Summit this week and continuing conversations with other technology leaders on what the future holds for next-generation datacenters and how “Naples” works in an OCP world.




  1. AMD "Naples" processor includes up to 32 CPU cores versus the Xeon E5-2699A v4 processor with 22 CPU cores.  NAP-02
  2. AMD "Naples" processor offers up to 64 PCI Express high speed I/O lanes per socket, versus the Xeon E5-2699A v4 processor at 40 lanes per socket.  Note that the "Naples" pre-production processor used for this comparison is not yet certified as PCI Express-compliant. NAP-05
  3. AMD "Naples" processor supports up to 21.3 GB/s per channel with DDR4-2667 x 8 channels (total 170.7 GB/s), versus the Xeon E5-2699A v4 processor

I’m a hands-on guy and hobbyist when it comes to technology. Twenty years ago, I was perhaps an outlier ‘nerd,’ but today consumers of all levels are innovating technology and customizing applications to meet their unique needs. The “maker” community is vibrant. Students are now excited to be in STEM programs and it’s ‘cool’ to race robots. Open source software and hardware projects are democratizing and accelerating innovation. Grade school kids are writing cell-phone apps; 3D printers and incredibly powerful microcontrollers are becoming common. The uptake of technology innovation at the consumer level is unprecedented.

Likewise, the datacenter is undergoing radical change, driven by demand from consumers and cloud services, and enabled by open source development. The traditional tower server tucked in a corner, humming away and hosting everyone’s e-mail and files, is quickly being supplemented or replaced by cloud hosting.

These macro trends changing server dynamics are well known by insiders, but bear repeating:

  • Virtualization is decoupling users, operating systems and applications from the hardware underneath; containers are taking this a step further by enabling a massive number of microservices through dynamic resource allocation.
  • Delivery of IT as a service has made the mega datacenter and the Cloud driving forces in technology innovation.

The growth of off-premise IT infrastructure means companies with tens, hundreds, thousands and tens of thousands of employees may not own a single server. They lease their infrastructure, their applications, and their IT services, often from facilities thousands of miles away. Billion-dollar businesses serve millions of customers simultaneously via server farms the size of several football fields. And protecting consumer data is a number-one concern touching every part of the ecosystem.

These trends and others are important to chip providers like AMD that must account for these changes in order to secure the market. At the processor level, datacenter innovation is leading toward some simple tenets:

  1. Data Security is Priority One. Securing data while work is being done is the emerging frontier of data security. Utilizing hardware for encrypting memory and virtual machines is the cutting edge of locking out unauthorized access.
  2. Processor Cores Matter. In a world of cloud computing, being able to deliver more useful work across more cores and their supporting resources equals more efficient provisioning of services to more users and lower TCO. Simple as that.
  3. Single Socket CPU Platforms Rising. Thanks to the move to more advanced chip manufacturing processes and the availability of more transistors, a single SoC (1P) server can now fill the need for many of today’s 2P server platforms. This is great news for both on-premises and off-premises customers of IT hardware.
  4. Heterogeneous Systems go Mainstream. GPUs and other accelerators supporting the CPU will become fundamental building blocks of computing. A host of new applications incorporating deep neural networks and machine learning, artificial intelligence, virtual and augmented reality will be supported in the datacenter by combinations of GPUs, CPUs and FPGAs.

In August, AMD demonstrated its upcoming “Naples” processor for the first time. This 32-core, 64-thread CPU signals AMD’s re-entry into the high-performance server market and our intention to once again be a significant player in the datacenter.  “Naples” is built around the new, ground-up “Zen” x86 core that was four years in the making, with exceptional memory and I/O capability and an industry-leading security solution.  With 40 percent more instructions per clock expected, and simultaneous multithreading for the first time in an AMD server processor, we are very excited about the prospects for “Naples.”

As we look forward to launching “Naples” in the first half of this year, my team and I will be sharing more about AMD’s vision for the datacenter. “Naples” is designed with these transformative changes in mind. We look forward to starting a dialogue in the industry about choice and competition, the role of our products in that equation, and the partners who will help us change the dynamics of an industry. I hope you will join us!
