2018 Flash Memory Summit Recap: The Data-Centric Era has Arrived

Cindy_Lee

This article was originally published on August 31, 2018.

 

The 2017 Flash Memory Summit was all about a new industry standard: NVMe over Fabrics (NVMe-oF). This year at the Summit, data-centric computing dominated the show. While data-centric acceleration is not a new concept, it has moved from research topic to deployment: Xilinx, its partners, and other industry players have launched production-grade solutions that perform compute at the storage node, with accelerators offloading the overwhelmed CPU.

 

Keynotes

Manish Muthal, Vice President of Data Center Marketing at Xilinx, resonated with the audience as he articulated three trends: the demise of Moore’s Law, the dawn of AI, and the deluge of data, all creating the imperative to move compute closer to storage. He then explained that innovative FPGA-based technologies are available now and are enabling this movement today. FMS keynotes from industry players such as ScaleFlux, Solarflare, CNEX Labs, and NGD Systems also bolstered this theme.

 

Open-Channel Solid State Drive Model is Thriving

Interest in Open-Channel SSDs and the NVMe I/O Determinism and Streams concepts surged this year. The need to offload the storage stack from the host server and provide deterministic latency for real-world applications is driving the change. The standards are now being re-architected to move the storage stack, NAND data placement, and timing functions to an SoC/FPGA.

Open-Channel SSD with Xilinx MPSoC/FPGA
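
To make the data-placement idea concrete, here is a minimal C sketch (illustrative only; it is not the Open-Channel or NVMe Streams interface, and all names and the NAND geometry are invented for the example). Writes tagged with different stream IDs are appended to different open NAND blocks, so data with different lifetimes stays physically separated, one ingredient of deterministic I/O latency.

```c
/*
 * Toy sketch of stream-aware data placement (illustrative only; not the
 * Open-Channel or NVMe Streams interface).
 */
#include <stdio.h>

#define NUM_STREAMS      4
#define PAGES_PER_BLOCK  256

/* Per-stream write point: which block is open and the next free page in it. */
struct write_point {
    unsigned block;
    unsigned next_page;
};

static struct write_point wp[NUM_STREAMS];
static unsigned next_free_block = NUM_STREAMS;   /* blocks 0..3 start open */

/* Append one page of data for the given stream; report where it landed. */
static void place_write(unsigned stream, unsigned *block, unsigned *page)
{
    struct write_point *w = &wp[stream];

    if (w->next_page == PAGES_PER_BLOCK) {       /* block full: open a new one */
        w->block = next_free_block++;
        w->next_page = 0;
    }
    *block = w->block;
    *page  = w->next_page++;
}

int main(void)
{
    /* Give each stream its own open block. */
    for (unsigned s = 0; s < NUM_STREAMS; s++)
        wp[s] = (struct write_point){ .block = s, .next_page = 0 };

    /* Interleaved host writes from two streams land in separate blocks. */
    unsigned blk, pg;
    place_write(0, &blk, &pg); printf("stream 0 -> block %u, page %u\n", blk, pg);
    place_write(3, &blk, &pg); printf("stream 3 -> block %u, page %u\n", blk, pg);
    place_write(0, &blk, &pg); printf("stream 0 -> block %u, page %u\n", blk, pg);
    return 0;
}
```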

 

In the revised Open-Channel SSD model, the host server is dedicated to compute, and the SoC/FPGA serves as its co-processor and accelerator. The processor cores within the SoC/FPGA manage the storage stack and the flash translation layer (FTL). Purpose-built hardware manages I/O streams, NAND data placement and timing, garbage collection, and wear leveling to provide deterministic I/O latency to the application. Hardware accelerators also offload near-storage application functions such as database analytics, video streaming, and key-value operations. Finally, hardware accelerators perform key storage optimizations such as compression, encryption, deduplication, and RAID/erasure coding.
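
The FTL responsibilities listed above can be pictured with a small, hypothetical sketch: a page-level logical-to-physical table updated by out-of-place writes, per-block stale-page counts for garbage-collection victim selection, and per-block erase counts for wear leveling. All names and sizes are invented for illustration; a real controller pipeline in the SoC/FPGA is far more involved.

```c
/* Minimal, hypothetical flash translation layer (FTL) sketch. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_BLOCKS       16
#define PAGES_PER_BLOCK  64
#define NUM_PAGES        (NUM_BLOCKS * PAGES_PER_BLOCK)
#define UNMAPPED         UINT32_MAX

static uint32_t l2p[NUM_PAGES];          /* logical page -> physical page     */
static uint32_t stale_pages[NUM_BLOCKS]; /* garbage per block (GC statistic)  */
static uint32_t erase_count[NUM_BLOCKS]; /* wear per block (updated on erase, */
                                         /* not shown in this sketch)         */
static uint32_t write_ptr;               /* next free physical page           */

/* Out-of-place write: remap the logical page to a fresh physical page and
 * count the old copy as garbage in its block. */
static void ftl_write(uint32_t lpn)
{
    if (l2p[lpn] != UNMAPPED)
        stale_pages[l2p[lpn] / PAGES_PER_BLOCK]++;
    l2p[lpn] = write_ptr++;              /* sketch: no wrap or free-space check */
}

/* Garbage-collection victim: the block with the most stale pages, breaking
 * ties toward the least-worn block so erases stay evenly distributed. */
static uint32_t pick_gc_victim(void)
{
    uint32_t victim = 0;
    for (uint32_t b = 1; b < NUM_BLOCKS; b++)
        if (stale_pages[b] > stale_pages[victim] ||
            (stale_pages[b] == stale_pages[victim] &&
             erase_count[b] < erase_count[victim]))
            victim = b;
    return victim;
}

int main(void)
{
    for (uint32_t i = 0; i < NUM_PAGES; i++)
        l2p[i] = UNMAPPED;

    ftl_write(7);                        /* first write of logical page 7     */
    ftl_write(7);                        /* overwrite: old copy becomes stale */
    ftl_write(42);

    printf("logical 7  -> physical %" PRIu32 "\n", l2p[7]);
    printf("logical 42 -> physical %" PRIu32 "\n", l2p[42]);
    printf("GC victim candidate: block %" PRIu32 "\n", pick_gc_victim());
    return 0;
}
```

Because flash cannot be overwritten in place, every host overwrite leaves a stale copy behind, which is why garbage collection and wear leveling are inseparable from the mapping table in any FTL design.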

 

Demos at the Xilinx Booth

Check out key demos powered by Xilinx at Flash Memory Summit:

  • Storage compression offload using Eideticom NoLoad and NVM Express peer-to-peer processing
    • Eideticom demonstrates peer-to-peer compression using its NoLoad NVM Express U.2 acceleration platform running on an AMD EPYC-based Hewlett Packard Enterprise server (a toy model of the data path follows this list).
  • Burlywood TrueFlash leverages the power of Xilinx FPGAs to deliver affordable software-defined flash storage
    • Burlywood’s TrueFlash software-defined flash, the heart of the industry's first modular flash controller architecture, takes full advantage of the power, performance, and cost improvements delivered by Xilinx UltraScale FPGAs.
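
As a rough illustration of why the peer-to-peer approach in the Eideticom demo matters, the toy model below (not Eideticom's NoLoad API; every structure and number here is invented) simply counts the bytes that cross the host memory bus. In the conventional path the payload is staged in host DRAM and compressed by the CPU; in the peer-to-peer path the SSD DMAs directly into the accelerator's controller memory buffer (CMB) and the host only handles commands and completions.

```c
/*
 * Toy model of the data movement behind NVMe peer-to-peer compression
 * offload (conceptual only; not a real driver or the NoLoad API).
 */
#include <stdint.h>
#include <stdio.h>

struct path_stats {
    uint64_t host_dram_bytes;  /* payload staged in host DRAM        */
    uint64_t cpu_bytes;        /* payload touched by CPU compression */
};

/* Conventional path: SSD -> host DRAM -> CPU compresses -> host DRAM. */
static struct path_stats conventional(uint64_t bytes)
{
    return (struct path_stats){ .host_dram_bytes = 2 * bytes, /* in + out */
                                .cpu_bytes       = bytes };
}

/* Peer-to-peer path: SSD -> accelerator CMB -> accelerator compresses.
 * Host DRAM and CPU see only commands and completions, not the payload. */
static struct path_stats peer_to_peer(uint64_t bytes)
{
    (void)bytes;
    return (struct path_stats){ .host_dram_bytes = 0, .cpu_bytes = 0 };
}

int main(void)
{
    uint64_t transfer = 64ULL * 1024 * 1024;   /* 64 MiB of data to compress */
    struct path_stats a = conventional(transfer);
    struct path_stats b = peer_to_peer(transfer);

    printf("conventional: %llu MiB through host DRAM, %llu MiB through CPU\n",
           (unsigned long long)(a.host_dram_bytes >> 20),
           (unsigned long long)(a.cpu_bytes >> 20));
    printf("peer-to-peer: %llu MiB through host DRAM, %llu MiB through CPU\n",
           (unsigned long long)(b.host_dram_bytes >> 20),
           (unsigned long long)(b.cpu_bytes >> 20));
    return 0;
}
```
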
About the Author
Cindy Lee is the Sr. Product Marketing Manager in the Adaptive and Embedded Computing Group (AECG) at AMD. In this role, she leads content creation, positioning, and messaging of all AECG products, such as FPGAs, adaptive SoCs, and design tools. Cindy has over 20 years of technology industry experience across several engineering and marketing organizations.