Devgurus Discussion

seanc4s
Adept I

AMD and AI

I think you should have a general AI forum where people can post general ideas and algorithms about AI, like the one Numenta has, not one as closed-off and exclusionary as Intel's forums.

I would suggest that AMD look at the Walsh Hadamard transform algorithm. You can use it for fast random projections, associative memory, extreme learning machines and reservoir computing. That covers a range of applications, and it is one way AMD could differentiate itself from the crowd.
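
To make that concrete, here is a minimal sketch of the transform in Python (illustrative code, not from any AMD library): the whole thing reduces to O(n log n) add/subtract butterflies.

```python
import numpy as np

def fwht(x):
    """In-place fast Walsh-Hadamard transform; len(x) must be a power of 2.
    Each stage is a pass of paired add/subtract butterflies, O(n log n) total."""
    n, h = len(x), 1
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x

x = np.random.randn(8)
y = fwht(x.copy())
# The unnormalized transform is self-inverse up to a factor of n:
assert np.allclose(fwht(y.copy()) / len(x), x)
```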

In the end AMD could even build specialized hardware for the algorithm. In integer form it requires only patterns of addition, subtraction and barrel shifts, with no multiplications. That would be very economical in transistor count and power consumption.
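
A hedged sketch of the integer form, to show why the hardware would be cheap. The per-stage right shift here is one illustrative way to keep the word size bounded (it makes the result a scaled, slightly truncating transform); everything is just adds, subtracts and shifts.

```python
def fwht_int(x):
    """In-place integer FWHT with per-stage >>1 scaling (overall factor 1/n).
    Uses only +, - and shifts; len(x) must be a power of 2."""
    n, h = len(x), 1
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                # Butterfly plus arithmetic right shift keeps word growth at bay;
                # the shift truncates, so this is an approximate scaled transform.
                x[j], x[j + h] = (a + b) >> 1, (a - b) >> 1
        h *= 2
    return x

print(fwht_int([4, 2, 2, 0]))  # prints [2, 1, 1, 0], the transform scaled by 1/4
```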

At a minimum, you could provide optimized library code for the algorithm and ancillary operations such as a re-computable random sign flip, and then mention the algorithms in some of your documentation.
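
By "re-computable" I mean the sign pattern is derived from a seed on the fly, so no random matrix ever has to be stored; sign flip followed by the transform then behaves like a fast dense random projection. A minimal sketch, reusing the fwht butterfly from above (all names are illustrative):

```python
import numpy as np

def fwht(x):
    # Same add/subtract butterfly as the sketch above.
    n, h = len(x), 1
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                x[j], x[j + h] = x[j] + x[j + h], x[j] - x[j + h]
        h *= 2
    return x

def random_signs(n, seed):
    """Deterministic +/-1 pattern, regenerated from the seed whenever needed."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, 2, size=n) * 2 - 1

def random_projection(x, seed=0):
    """Sign-flip then WHT, normalized so the overall map is orthonormal."""
    n = len(x)
    y = x * random_signs(n, seed)
    return fwht(y.astype(float)) / np.sqrt(n)

print(random_projection(np.ones(8)))
```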

https://github.com/S6Regen

Random Projection AI

https://discourse.numenta.org/

dipak
Big Boss

Thank you for your suggestions.

I'm not an expert on this topic. However, I think AMD is already working on a few projects to provide more support for AI.

For example, AMD has started developing a deep learning acceleration library named MIOpen. I just wanted to share the links below in case you are interested in learning more about it. If you have any comments or suggestions related to MIOpen, please post them there.

https://gpuopen.com/compute-product/miopen/

Welcome to MIOpen — MIOpen: AMD's deep learning library

Sorry for the late reply.

seanc4s
Adept I

Okay, thanks for the links.

seanc4s
Adept I

Okay AMD, you don't need to develop your own Walsh Hadamard transform code for your numeric libraries (presuming you don't have one already). I found a very good implementation on GitHub:

https://github.com/FALCONN-LIB/FFHT

The numbers are great. On one of your 32-core CPU chips you should see GPU-level performance.
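
A rough usage sketch: FFHT's C header exposes fht_float/fht_double, and the repo also ships Python bindings. The exact ffht.fht call below follows the repo's README, so treat it as an assumption and check the current docs.

```python
import numpy as np
import ffht  # build/install from https://github.com/FALCONN-LIB/FFHT

a = np.random.randn(2 ** 20)  # length must be a power of 2
ffht.fht(a)                   # in-place fast Hadamard transform of a
```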
