This article was originally published on July 2, 2019.
Editor’s Note: This content is contributed by Curt Wortman, Sr. Product Marketing Manager in the Data Center Group.
Xilinx’s new streaming QDMA (Queue Direct Memory Access) shell platform, available on Alveo™ accelerator cards, provides developers with a low-latency, direct streaming connection between host and kernels. The QDMA shell includes a high-performance DMA that uses multiple queues, optimized for both high-bandwidth and high-packet-count data transfers.
The QDMA shell provides:
- Streaming directly to continuously running kernels
- High bandwidth and low latency transfers
- Kernel support for both AXI4-Stream and AXI4 Memory Mapped interfaces (see the kernel-side sketch after this list)
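For the AXI4-Stream case, the kernel side can be written in HLS C++ using the qdma_axis packet type from ap_axi_sdata.h. The following is a minimal sketch, assuming the Vitis/SDAccel HLS flow; the kernel name incr, the 64-bit beat width, and the increment logic are purely illustrative.

```cpp
#include "ap_axi_sdata.h"
#include "hls_stream.h"

// 64-bit data beats with keep/last sideband signals; the width is an
// assumption chosen for illustration.
typedef qdma_axis<64, 0, 0, 0> pkt;

extern "C" void incr(hls::stream<pkt>& in, hls::stream<pkt>& out) {
#pragma HLS interface axis port = in
#pragma HLS interface axis port = out
#pragma HLS interface ap_ctrl_none port = return // no start/done handshake: the kernel runs continuously

    pkt v;
    do {
#pragma HLS pipeline II = 1
        v = in.read();                // blocking read from the host-to-kernel queue
        pkt r;
        r.set_data(v.get_data() + 1); // trivial "compute": increment each beat
        r.set_keep(v.get_keep());
        r.set_last(v.get_last());     // propagate the end-of-transfer marker
        out.write(r);
    } while (!v.get_last());
}
```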
Streaming directly to continuously running kernels allows data to be ingested the moment it arrives, and results to be returned to the host as soon as they are computed. This makes the QDMA solution ideal for applications that require small-packet performance at low latency.
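On the host side, the SDAccel/Vitis QDMA flow exposed streaming extensions to OpenCL (clCreateStream, clWriteStream, and clReadStream, declared in CL/cl_ext_xilinx.h). The sketch below shows a minimal blocking round trip under those assumptions, with error handling trimmed; it pairs with the illustrative incr kernel above. Note that the direction flags are from the device’s point of view: the device reads the host-to-kernel stream.

```cpp
#include <CL/cl_ext_xilinx.h>

// Minimal sketch: write one packet to the kernel's input stream and read the
// result back. Assumes `kernel` takes the input stream as argument 0 and the
// output stream as argument 1, matching the kernel sketch above.
void stream_roundtrip(cl_device_id device, cl_kernel kernel,
                      const void* src, void* dst, size_t nbytes) {
    cl_int err;

    // Bind each stream to a kernel argument via the Xilinx extension pointer.
    cl_mem_ext_ptr_t ext;
    ext.param = kernel;
    ext.obj   = nullptr;

    ext.flags = 0; // kernel argument index 0: input stream (device reads it)
    cl_stream h2k = clCreateStream(device, XCL_STREAM_READ_ONLY, CL_STREAM, &ext, &err);

    ext.flags = 1; // kernel argument index 1: output stream (device writes it)
    cl_stream k2h = clCreateStream(device, XCL_STREAM_WRITE_ONLY, CL_STREAM, &ext, &err);

    // CL_STREAM_EOT marks the end of the packet, so the kernel sees TLAST
    // on its AXI4-Stream interface.
    cl_stream_xfer_req req = {};
    req.flags = CL_STREAM_EOT;
    clWriteStream(h2k, const_cast<void*>(src), nbytes, &req, &err);
    clReadStream(k2h, dst, nbytes, &req, &err);

    clReleaseStream(h2k);
    clReleaseStream(k2h);
}
```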
[Figure: Typical XDMA Shell vs. QDMA Shell]
What’s the Difference Between QDMA and Other DMAs?
The main difference between QDMA and other DMA offerings is the concept of queues, derived from the “queue set” concept of Remote Direct Memory Access (RDMA) in high-performance computing (HPC) interconnects. These queues can be individually configured by interface type (streaming or memory mapped). Because DMA descriptors are loaded per queue, each queue offers a very low-overhead way to continuously post new transfers.
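As a purely conceptual illustration of why this is cheap (this is not the Xilinx driver API, and every name below is hypothetical), each queue is essentially a descriptor ring: posting more work is one descriptor write plus a producer-index (PIDX) doorbell update, with no per-transfer reconfiguration.

```cpp
#include <cstdint>

// Hypothetical, simplified model of one DMA queue's descriptor ring.
struct Descriptor {
    uint64_t addr;  // DMA address of the host buffer
    uint32_t len;   // transfer length in bytes
    uint32_t flags; // e.g., end-of-packet marker
};

struct Queue {
    static constexpr uint32_t kDepth = 1024;
    Descriptor ring[kDepth]; // one descriptor ring per queue
    uint32_t pidx = 0;       // producer index, mirrored to a doorbell register

    // "Continuous update": publishing more work is one write plus an index bump.
    void enqueue(uint64_t addr, uint32_t len, uint32_t flags = 0) {
        ring[pidx % kDepth] = {addr, len, flags};
        ++pidx; // writing the new pidx to the doorbell tells the DMA engine
                // that more descriptors are ready to fetch
    }
};
```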
What’s Your Best Option?
Let’s look at a detailed comparison between the XDMA and QDMA shells to guide you in selecting the DMA best suited to your application.
[Table: Xilinx Shell Comparison]
For more information on the QDMA shell, visit https://www.xilinx.com/member/qdma-shell.html.