Adaptive Computing



Your source for Adaptive Computing announcements, customer success stories, industry trends, and more.


Xilinx netted a double win at the 2021 Conference on Computer Vision and Pattern Recognition (CVPR) and the IEEE International Conference on Computer Vision (ICCV), two of the top three computer vision academic conferences worldwide. Both the CVPR and ICCV organizations honored the Xilinx AI products team, a strong recognition of the team's technical strength and innovation in global competitions.


This week at the SC20 virtual conference, Xilinx is presenting a technology demonstration showcasing the integration of Xilinx Alveo accelerator cards with the AMD ROCm™ open software platform.


Super Resolution refers to the process of reconstructing a higher-resolution image or sequence from observed lower-resolution images. An image may have a "lower resolution" because of a smaller spatial resolution (i.e., size) or as a result of degradation (such as blurring). It has a wide range of applications, including but not limited to satellite imaging, medical imaging, video surveillance, and video streaming, which is the primary focus of this article.
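For readers new to the topic, the sketch below (not from the article) illustrates the usual setup: a high-resolution frame is degraded by blurring and downsampling, and a naive bicubic upscale serves as the baseline that learned super-resolution models try to beat. It assumes OpenCV and a local file named frame.png, both of which are placeholders.

```python
# Minimal sketch of the super-resolution setup, assuming OpenCV and a local
# "frame.png" (both are placeholders, not from the article).
import cv2

hr = cv2.imread("frame.png")                 # original high-resolution frame
h, w = hr.shape[:2]

# Degradation model: blur, then 2x spatial downsampling.
lr = cv2.GaussianBlur(hr, (5, 5), 1.0)
lr = cv2.resize(lr, (w // 2, h // 2), interpolation=cv2.INTER_AREA)

# Naive reconstruction: bicubic upscaling back to the original size.
# A learned super-resolution network would replace this step and is judged
# by how much it improves on this baseline (e.g., in PSNR/SSIM).
sr_baseline = cv2.resize(lr, (w, h), interpolation=cv2.INTER_CUBIC)

print(f"Bicubic baseline PSNR: {cv2.PSNR(hr, sr_baseline):.2f} dB")
```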


Regardless of the final target technology (e.g., FPGA or CPU), available resources are typically restricted. An optimized architecture and network design is therefore essential when integrating neural network-based approaches into embedded projects.

This article covers the Solectrix AI workflow, including both proper network handling and the transfer to the chosen target technology. An example is given for an object detection task running on Xilinx MPSoC technology using Vitis AI.
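As a rough illustration of what deployment on an MPSoC target can look like, the sketch below runs a compiled model through the Vitis AI runtime (VART) Python API, loosely following the public VART examples. The model name detector.xmodel, the tensor dtypes, and the pre/post-processing are placeholders, and API details vary between Vitis AI releases.

```python
# Rough sketch of DPU inference via the Vitis AI runtime (VART) Python API,
# loosely based on the public VART examples; names and dtypes are placeholders.
import numpy as np
import vart
import xir

graph = xir.Graph.deserialize("detector.xmodel")   # compiled model (placeholder name)
dpu_subgraph = [s for s in graph.get_root_subgraph().toposort_child_subgraph()
                if s.has_attr("device") and s.get_attr("device").upper() == "DPU"][0]
runner = vart.Runner.create_runner(dpu_subgraph, "run")

in_dims = tuple(runner.get_input_tensors()[0].dims)
out_dims = tuple(runner.get_output_tensors()[0].dims)

# A preprocessed, quantized input frame would go here (dtype depends on the model).
in_data = np.zeros(in_dims, dtype=np.int8)
out_data = np.zeros(out_dims, dtype=np.int8)

job_id = runner.execute_async([in_data], [out_data])
runner.wait(job_id)
# out_data now holds the raw detector output, ready for decoding and NMS.
```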


The use of artificial intelligence (AI), including machine learning (ML) and deep learning (DL) techniques, is poised to become a transformational force in medical imaging. Patients, healthcare service providers, hospitals, professionals, and other stakeholders in the ecosystem all stand to benefit from ML-driven tools. From anatomical geometric measurements to cancer detection, the possibilities are endless. In these scenarios, ML can lead to increased operational efficiency and generate positive outcomes.

There’s a broad spectrum of ways that ML can be used in medical imaging. For example, radiology, dermatology, vascular diagnostics, digital pathology, and ophthalmology all make use of standard image-processing techniques.


You may not be intimately familiar with Baidu's DeepSpeech2 Automatic Speech Recognition model (Amodei et al., 2015), but I am willing to bet that if you are reading this, speech recognition is now part of your daily life.

The roots of ASR technology date back to the late 1940s and early 1950s. In 1952, Bell Labs (Davis, Biddulph, and Balashek) designed "AUDREY", an "Automatic Digit REcognition" device that could recognize the digits 0–9. The system could be trained (tuned, actually) per user and achieved accuracies beyond 90% for speaker-dependent recognition and roughly 50–60% for speaker-independent recognition.


I’d like to kick off the new year with a summary of a white paper published by our AI partner Mipsology. The paper was written in conjunction with Dell and was recently posted on Dell’s website.

The Zebra Acceleration Stack from Mipsology provides CNN inference acceleration with remarkable ease. Their tools provide an easy path to migrate CNN models from GPUs to FPGAs, allowing non-FPGA experts to benefit from superior throughput and latency. When combined with Alveo data center acceleration cards and Dell EMC PowerEdge Servers, Mipsology Zebra provides a complete solution for AI Inference acceleration. The image below summarizes the Zebra Acceleration Stack, compared with a GPU stack.


In previous posts in this series, we discussed the breakdown of Dennard Scaling and Moore’s Law and the need for specialized and adaptable accelerators.  We then delved into the problem of power consumption and discussed the high-level advantages of network compression.

In this third post, we will explore both the benefits and challenges of purpose-built “computationally efficient” neural network models.
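As one illustration of what "computationally efficient" can mean in practice, the quick calculation below compares the multiply-accumulate (MAC) count of a standard 3x3 convolution against a depthwise-separable convolution, the building block popularized by MobileNet. This is a generic example with placeholder shapes, not necessarily the model family the post itself covers.

```python
# Back-of-the-envelope MAC counts: standard 3x3 convolution vs. a
# depthwise-separable convolution (one common "efficient" building block).
# Shapes are illustrative placeholders, not taken from the post.

def standard_conv_macs(h, w, c_in, c_out, k=3):
    # Every output pixel in every output channel sees a k x k x c_in window.
    return h * w * c_out * c_in * k * k

def depthwise_separable_macs(h, w, c_in, c_out, k=3):
    depthwise = h * w * c_in * k * k      # one k x k filter per input channel
    pointwise = h * w * c_in * c_out      # 1x1 convolution mixes the channels
    return depthwise + pointwise

h, w, c_in, c_out = 56, 56, 128, 128      # example mid-network feature map
std = standard_conv_macs(h, w, c_in, c_out)
sep = depthwise_separable_macs(h, w, c_in, c_out)
print(f"standard:  {std / 1e6:.1f} MMACs")
print(f"separable: {sep / 1e6:.1f} MMACs  ({std / sep:.1f}x fewer)")
```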


In our previous post, we briefly presented the higher-level problems that have created the need for optimized accelerators. As a pointed reminder of the problem, let's now consider the computational cost and power consumption associated with a very simple image classification algorithm.

Leveraging the data points provided by Mark Horowitz, we can consider the relative power consumption of our image classifier with differing spatial restrictions. While Mark's energy consumption estimates were for the 45nm node, industry experts have suggested that these data points continue to scale to current semiconductor process geometries. That is, the energy cost of an INT8 operation remains an order of magnitude less than the energy cost of an FP32 operation, regardless of whether the process geometry is 45nm or 16nm.
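To make that claim concrete, the sketch below multiplies a placeholder MAC count for a small classifier by approximate 45nm per-operation energies commonly cited from the Horowitz paper. The MAC count and the omission of memory traffic are simplifying assumptions for illustration, not figures from this series.

```python
# Rough compute-energy comparison per inference, using approximate 45 nm
# per-operation energies commonly cited from Horowitz (ISSCC 2014).
# The MAC count is a placeholder for "a very simple image classifier".

PJ = 1e-12                               # picojoules -> joules

FP32_MULT_PJ, FP32_ADD_PJ = 3.7, 0.9     # approx. energy per FP32 multiply/add
INT8_MULT_PJ, INT8_ADD_PJ = 0.2, 0.03    # approx. energy per INT8 multiply/add

macs = 500e6                             # assumed ~0.5 GMACs per inference (placeholder)

fp32_j = macs * (FP32_MULT_PJ + FP32_ADD_PJ) * PJ
int8_j = macs * (INT8_MULT_PJ + INT8_ADD_PJ) * PJ

print(f"FP32 compute energy: {fp32_j * 1e3:.2f} mJ per inference")
print(f"INT8 compute energy: {int8_j * 1e3:.2f} mJ per inference")
print(f"FP32 / INT8 ratio:   {fp32_j / int8_j:.0f}x")
# Note: this ignores memory traffic, which often dominates; off-chip DRAM
# accesses cost far more energy than the arithmetic itself.
```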


In 2014, Stanford Professor Mark Horowitz published a paper entitled “Computing’s Energy Problem (and what we can do about it)”. This seminal paper discussed the challenges that the semiconductor industry faces related to the breakdown of Dennard Scaling and Moore’s Law. 

If I can be so bold, I would like to borrow and adapt the title of Mark's paper so that I might offer some perspective on why you should consider specialized hardware for machine learning inference applications.
