This article was originally published on Jan 6, 2020.
I’d like to kick off the new year with a summary of a white paper published by our AI partner Mipsology. The paper was written in conjunction with Dell and was recently posted on Dell’s website.
The Zebra Acceleration Stack from Mipsology provides CNN inference acceleration with remarkable ease. Its tools offer an easy path for migrating CNN models from GPUs to FPGAs, allowing non-FPGA experts to benefit from higher throughput and lower latency. When combined with Alveo data center accelerator cards and Dell EMC PowerEdge servers, Mipsology Zebra provides a complete solution for AI inference acceleration. The image below compares the Zebra Acceleration Stack with a GPU stack.
Zebra Acceleration Stack Compared with a GPU Stack
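To illustrate what the migration path described above looks like from the user's side, here is a minimal sketch of an ordinary PyTorch inference script, using a stock torchvision ResNet-50 as a stand-in model and placeholder input shapes. It is not Mipsology's or Dell's code; the point is that this framework-level script is the kind of code that stays unchanged when the CNN is retargeted from a GPU to an Alveo card, with the Zebra integration itself assumed to be transparent and not shown here.

```python
# Illustrative only: a plain PyTorch inference loop. Model choice and batch
# size are placeholder assumptions, not the white paper's workload.
import torch
from torchvision import models

model = models.resnet50(pretrained=True).eval()  # stand-in CNN

with torch.no_grad():
    batch = torch.rand(8, 3, 224, 224)   # placeholder input batch
    logits = model(batch)                # same call regardless of backend
    predictions = logits.argmax(dim=1)

print(predictions)
```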
An example application of Zebra running on a Dell EMC PowerEdge R740 server with an Alveo accelerator card is shown below. This example generates high-quality, high-resolution images from low-resolution images by mapping a very-deep super-resolution (VDSR) algorithm onto the Zebra stack.
Very Deep Super-Resolution (VDSR) Algorithm
Super-resolution algorithms have highly demanding processing requirements, yet this powerful hardware and software combination can deliver real-time VDSR enhancement to video streams.
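For readers unfamiliar with VDSR, the sketch below shows a minimal PyTorch version of the general architecture: a deep stack of 3x3 convolutions with ReLU activations that predicts a high-frequency residual and adds it back to a bicubic-upscaled input. This is only a sketch of the published VDSR design, not the model or deployment code from the white paper; the depth, channel count, and input shape are assumptions.

```python
# Minimal VDSR-style network sketch (assumed: 20 layers, 64 channels,
# single-channel luminance input, residual learning).
import torch
import torch.nn as nn

class VDSR(nn.Module):
    def __init__(self, depth: int = 20, channels: int = 64):
        super().__init__()
        layers = [nn.Conv2d(1, channels, kernel_size=3, padding=1),
                  nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                       nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(channels, 1, kernel_size=3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Predict the high-frequency residual and add it back to the
        # bicubic-upscaled input (residual learning).
        return x + self.body(x)

# Inference on a single bicubic-upscaled luminance frame (placeholder size).
model = VDSR().eval()
with torch.no_grad():
    upscaled_frame = torch.rand(1, 1, 256, 256)
    enhanced = model(upscaled_frame)
```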
Conclusion
Zebra requires no changes to the neural network and no retraining. It supports industry-standard frameworks and provides a seamless interface for FPGA-based AI inference acceleration, without requiring the user to become an expert in the underlying hardware.
For more information on Zebra, please visit the Mipsology website.
For more information on Alveo, please visit the Alveo page.
For more information on Dell EMC PowerEdge Servers, please visit Dell's website.