Hi, I have a few questions. I hope you can help me.
I am trying to learn neural nets/ML on my older, FX-based hardware.
I very much prefer the OpenCL development model.
As discussed elsewhere, people like me with older FX-based hardware still can't use the ROCm ecosystem, because our motherboards/CPUs don't support PCIe 3.0 atomics.
I do have a working legacy OpenCL installation, and my GPU is an RX 580.
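(For reference, this is roughly how I sanity-check that the OpenCL stack is visible; a minimal sketch that assumes pyopencl, the usual Python binding for OpenCL, is installed:)

```python
def check_opencl():
    """List visible OpenCL platforms/devices, or report why none were found."""
    try:
        import pyopencl as cl  # pip install pyopencl (assumed available)
    except ImportError:
        return "pyopencl not installed"
    platforms = cl.get_platforms()
    if not platforms:
        return "no OpenCL platforms found"
    lines = []
    for p in platforms:
        for d in p.get_devices():
            # e.g. "AMD Accelerated Parallel Processing: Ellesmere (OpenCL 1.2 ...)"
            lines.append(f"{p.name}: {d.name} ({d.version})")
    return "\n".join(lines)

print(check_opencl())
```

On my setup this shows the RX 580 under the legacy AMD platform.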
What OpenCL-accelerated ML apps do I have a reasonable chance of using now, with my current OpenCL install and hardware?
How long will it be before ROCm's CUDA-to-HIP translation capability percolates down to my hardware, if it ever does? I ask because a lot of the apps I would like to try are still CUDA-only, and I'd like to be able to compile CUDA code.
Is there much chance that I would succeed if I tried running it now on this hardware?
If so, what are the determinants of success (besides persistence)?
Could somebody make an unofficial post explaining what would give somebody like me, on FX hardware, a better chance of success, and what would be most likely to work with current AMD GPUs? Some of the README files in AMD's GitHub repos (for example, tensorflow) omit needed information, and finding out what needs to be changed is very time-consuming. For a beginner working from incomplete info, the chance of success is slim.
I really cannot afford to replace my motherboard and RAM right now, and I can live with reduced performance for the time being.
Where are the best places to learn ML using OpenCL?