If you've ever seen the math behind a basic C++ lighting and shadow vertex shader, it's basically one table of values, something like 9x9 or 18x18 rows, with another similar table beneath it for the angle and distance calculations from the light source's origin, which determine the gradients for lighting and shadows, or something like that, right?
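Just to picture it, here's a rough CPU-side sketch of that kind of table (all the names are mine, and to be fair, real shaders usually compute the N·L and attenuation terms per pixel rather than reading a precomputed table):

```cpp
#include <cmath>
#include <cstdio>

// Hypothetical sketch: an 18x18 lookup table for diffuse light falloff,
// indexed by angle to the light (rows) and distance from it (columns).
constexpr int kSize = 18;
float gLightTable[kSize][kSize];

void buildLightTable(float maxDistance) {
    for (int a = 0; a < kSize; ++a) {
        // Angle between surface normal and light direction, 0..90 degrees.
        float angle = (a / float(kSize - 1)) * (3.14159265f / 2.0f);
        float diffuse = std::cos(angle);               // Lambert term
        for (int d = 0; d < kSize; ++d) {
            float dist = (d / float(kSize - 1)) * maxDistance;
            float atten = 1.0f / (1.0f + dist * dist); // simple falloff
            gLightTable[a][d] = diffuse * atten;       // the "gradient" value
        }
    }
}

int main() {
    buildLightTable(10.0f);
    // Look up the brightness at a 45-degree angle, half the max distance.
    std::printf("%f\n", gLightTable[kSize / 2][kSize / 2]);
}
```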
I'm guessing there's some maths going on there. Anyway, why not make a larger 32x32 table, calculate only a bare minimum number of values exactly, and then play a game of sudoku with the results to populate the rest of the screen? It could be called sudoku shader maths or something, and it'd be like RTX but better, and it would work with every game.
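Here's a toy CPU version of the sudoku idea (my own sketch, nothing official): compute 1 sample in 16 exactly on a 32x32 grid, then "solve" every remaining cell by interpolating between the known neighbours. Funnily enough, this is roughly what checkerboard rendering and other reconstruction tricks already do.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

constexpr int N = 32;   // table size, as in the 32x32 idea above
constexpr int STEP = 4; // only compute every 4th sample exactly

// Stand-in for whatever expensive per-sample lighting math you'd run.
float expensiveLighting(int x, int y) {
    return std::sin(x * 0.3f) * std::cos(y * 0.3f);
}

float lerp(float a, float b, float t) { return a + (b - a) * t; }

int main() {
    static float table[N][N];
    const int last = ((N - 1) / STEP) * STEP; // last exact column/row (28)

    // Pass 1: exact values on a sparse lattice, 1 sample in 16.
    for (int y = 0; y <= last; y += STEP)
        for (int x = 0; x <= last; x += STEP)
            table[y][x] = expensiveLighting(x, y);

    // Pass 2: "solve the puzzle" - fill every other cell by bilinear
    // interpolation between the four surrounding exact samples.
    for (int y = 0; y < N; ++y) {
        int y0 = std::min((y / STEP) * STEP, last);
        int y1 = std::min(y0 + STEP, last);
        float ty = (y1 > y0) ? float(y - y0) / float(y1 - y0) : 0.0f;
        for (int x = 0; x < N; ++x) {
            if (x % STEP == 0 && y % STEP == 0 && x <= last && y <= last)
                continue; // already computed exactly
            int x0 = std::min((x / STEP) * STEP, last);
            int x1 = std::min(x0 + STEP, last);
            float tx = (x1 > x0) ? float(x - x0) / float(x1 - x0) : 0.0f;
            float top = lerp(table[y0][x0], table[y0][x1], tx);
            float bot = lerp(table[y1][x0], table[y1][x1], tx);
            table[y][x] = lerp(top, bot, ty);
        }
    }
    std::printf("exact %.3f vs interpolated %.3f\n",
                expensiveLighting(5, 5), table[5][5]);
}
```

The catch, of course, is that interpolation only works where the lighting varies smoothly; hard shadow edges are exactly where the guessed cells come out wrong.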
For the "ray traced mirror reflections" the same way you make one grass model/texture instance in memory and show on screen and then just have all the grass on the land as far as the eye can see be the same one or two grass types being instanced in memory multiple times so theres a sea of grass but its all really just the one grass.. Couldnt you do an inverted instance of nearby area to the camera as an instance in the same way? use some sort of modern texture compression format like say from the year 2000 microsoft direct draw surfaces that can stay compressed while in memory?
Also, Nvidia's DLSS 2.0 looks disturbingly like waifu2x to me.
Maybe AMD should look into waifu2x and similar projects and come out with their own "upscaling" engine, maybe even starting by talking to the waifu2x guys; they might be willing to help. Or maybe you could find whatever free software Nvidia built their DLSS engine on and improve on it. I'd love it if my video game waifus became 2x bigger. From what I hear, Nvidia just got a bunch of university students to develop the software for DLSS or whatever.