A post by Mayank Daga, Director, Deep Learning Software, AMD
Deep learning has emerged as one of the most important technological breakthroughs of the 21st century, and GPUs have played a critical role in its advancement. The massively parallel computational power of GPUs has been instrumental in reducing the training time of complex deep learning models, thereby accelerating the time to discovery. The availability of open source frameworks like TensorFlow is another cornerstone of the fast-paced innovation in deep learning. At AMD, we strongly believe in the open source philosophy and have purposefully developed the ROCm™ open software platform for accelerated computing on Linux.
I am excited to announce that all the ROCm™ specific modifications for TensorFlow have now been upstreamed to the TensorFlow master repository, embracing the same open source ethos as Google and the entire deep learning community. Our efforts have culminated in the availability of Community Supported Builds for ROCm™ for both nightly and stable releases. TensorFlow on ROCm™ enables the rich feature set that TensorFlow provides, including half-precision support and multi-GPU execution, and supports a wide variety of applications such as image and speech recognition, recommendation systems, and machine translation. We have published installation instructions, as well as a pre-built Docker image.
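As a minimal sketch of the two installation paths mentioned above: the community-supported pip package is assumed here to be named `tensorflow-rocm` and the Docker image `rocm/tensorflow`, and the device flags reflect the typical ROCm container setup; consult the published installation instructions for the authoritative steps.

```shell
# Option 1: install the community-supported build via pip
# (package name "tensorflow-rocm" assumed)
pip3 install --user tensorflow-rocm

# Option 2: pull and run the pre-built Docker image
# (image name and device flags assumed; ROCm containers typically
# need access to /dev/kfd and /dev/dri and the "video" group)
docker pull rocm/tensorflow
docker run -it --network=host \
    --device=/dev/kfd --device=/dev/dri \
    --group-add video \
    rocm/tensorflow
```

Both paths require a ROCm-capable AMD GPU and a Linux host with the ROCm platform installed.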
We believe the future of deep learning optimization, portability, and scalability will have roots in domain-specific compilers. We are motivated by the results of MLIR and XLA, and we are working towards enabling and optimizing these technologies for AMD GPUs.
We would like to sincerely acknowledge the support of Christian Sigg, Gunhan Gulsoy, Justin Lebar, Tatiana Shpeisman, and Thiru Palanisamy in helping AMD achieve this milestone, as well as others at Google who have tirelessly reviewed our pull requests and provided feedback.
For more information on the AMD work in this area, see https://www.amd.com/en/graphics/servers-solutions-rocm-ml