Hello!
Finally, after too many years with a GTX 1050, I'm ready to return to my beloved world of PC gaming.
To be honest, I was waiting for the right moment to pick up an RTX 3060, but in the end I decided to go with a Radeon RX 6600, which should arrive within the month. In the past I always went with Intel processors, but this time (last summer) I built my new desktop around a Ryzen. And now, after years of Nvidia, I'm back to "ATI".
But apart from gaming, I also want to use the new GPU for experimenting with machine learning. I couldn't wait to get a latest-generation graphics card to try GauGAN, but this changes my plans. AMD doesn't have tensor cores, and I guess that's part of the reason it's easier on my wallet, but I should still get decent acceleration from the 6600, even if I couldn't find any clear numbers on the performance difference in percentage terms.
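In case it helps anyone, my plan for checking that the acceleration is really there is a quick PyTorch test like the one below. This is only a sketch based on my understanding that the ROCm build of PyTorch reuses the CUDA-named API for Radeon cards; I haven't run it on an RX 6600 yet, and from what I've read that card may need the HSA_OVERRIDE_GFX_VERSION=10.3.0 environment workaround since it isn't on the official ROCm support list.

import torch

print("HIP runtime:", torch.version.hip)            # None on a CUDA-only or CPU-only build
print("GPU visible:", torch.cuda.is_available())    # ROCm reuses the torch.cuda API names

if torch.cuda.is_available():
    x = torch.randn(4096, 4096, device="cuda")
    y = x @ x                                        # a big matmul just to exercise the card
    torch.cuda.synchronize()
    print("Matmul ran on:", torch.cuda.get_device_name(0))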
And I'm not giving up on the GauGAN idea: I know you are already porting a lot of libraries to make AMD GPUs a good tool for machine learning, but I'd like to suggest implementing an image generator like that directly in your own assets as well.
I think these could be a good starting point (quick usage sketch below):
https://github.com/lucidrains/deep-daze
https://arxiv.org/abs/2203.13856
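Just to show how low the entry barrier already is, here is roughly what running deep-daze looks like, adapted from its README (the prompt is the README's own example; the exact parameter names may have changed since I last looked, and on a Radeon card it would of course need the ROCm build of PyTorch instead of CUDA):

from deep_daze import Imagine

imagine = Imagine(
    text = 'cosmic love and attention',   # example prompt from the deep-daze README
    num_layers = 24,                      # deeper SIREN network: more detail, more VRAM
    image_width = 256,                    # keep the resolution modest on an 8 GB card
)

imagine()   # iteratively optimizes the image to match the text via CLIP

Something like that, shipped and tuned for RDNA 2 cards, would already be a nice showcase.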
I can't deny the limitations of not having tensor cores, but I like the idea of squeezing everything possible out of the hardware and optimizing it to get the best from it! I hope that's not a complete pipe dream.