I wonder why it isn't even considered.
A lot of TVs have motion interpolation, and people use it on console games to boost the fps for a much smoother gameplay experience.
So what could be negative about it?
At first there seem to be downsides: higher response times and blurring. But the blurring can be processed out, and the delay from motion interpolation depends on the base fps itself, so with a lower base fps the response gets worse. But imagine you only had 30 fps gaming.
With motion interpolation it could easily be 60 fps, just by adding one extra frame in between the rendered frames.
60 fps with something like 17 ms of input delay plays much better than 30 fps.
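As a rough sketch of those numbers (assuming a steady 30 fps source and exactly one interpolated frame inserted between each pair of rendered frames; this ignores the buffering delay discussed further down the thread):

```python
# Rough frame-time arithmetic for doubling 30 fps via interpolation.
# These figures ignore the buffering delay discussed later in the thread.
base_fps = 30
source_frame_ms = 1000 / base_fps      # ~33.3 ms between rendered frames

output_fps = base_fps * 2              # one interpolated frame per rendered frame
display_frame_ms = 1000 / output_fps   # ~16.7 ms between displayed frames

print(f"rendered frame every {source_frame_ms:.1f} ms")
print(f"displayed frame every {display_frame_ms:.1f} ms (roughly the '17 ms' above)")
```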
And this works with whatever technology is used: it works with ray tracing, HDR and just about everything displayed on screen.
It could simply be an option in the driver settings, so people can use it to their liking.
It would be far superior to Nvidia's DLSS, because you would just double the fps instead of lowering the resolution and AI-upscaling it afterwards to gain fps.
Motion interpolation is a feature within your display (typically a TV, but some monitors also feature a "Game Mode" setting that enables it) … and the point of said feature is to add interpolated frames up to the refresh rate of your display.
It was developed specifically to allow for smooth motion from low (but consistent) frame rates, such as 24/25 Hz content on a 60 Hz display.
This is entirely different from NVIDIA's DLSS (Deep Learning Super Sampling) technology.
AMD do have their own form of this in Virtual Super Resolution, Temporal Anti-Aliasing and Radeon Image Sharpening; each of these three can be individually enabled or disabled depending on the performance output you prefer … although on Navi they have a negligible performance impact, so they can simply always be left enabled (and are recommended to be by default).
In regards to motion interpolation for games, you're really talking more about technologies like motion blur (which, amusingly, most gamers switch off) and TXAA.
Now TXAA will always introduce some "natural" motion blur simply as an artefact of how it works... personally I prefer Intel's Frame Prediction CMAA, as it has much better performance, better image quality and tuneable frame interpolation … negating the need for the post-process sharpening pass that TXAA requires.
Intel might not be doing so well on the hardware front, but there's no denying they have some outstanding and innovative software engineers doing excellent work.
The thing is, developers are typically going to side with NVIDIA (TXAA), which ironically is a software solution to AMD's hardware Temporal MSAA.
Now the other thing to keep in mind about DLSS is that it doesn't "just work" with any game.
There's a reason it's not a universal feature like AMD's options, and that's because you NEED to create a deep-learned cache for the game first; there's no way to do it "just in time" … all that happens at runtime is DLSS using the Tensor cores to search that pre-calculated pattern-match library for a close-enough solution.
Performance-wise, sure, it provides better performance than VSR + RIS + TMAA … but not by much, and especially not for the cost or restrictions that DLSS actually has.
I'm sure in time it will begin to produce better results and perform a bit better, especially as the Tensor cores improve in 2nd and 3rd gen RTX cards... but for now AMD's solution is just as capable, more widely supported, more customisable and game-agnostic, which frankly makes it the better solution.
No, I'm not talking about motion blur, I'm really talking about motion interpolation done by a GPU to increase fps in games.
It could be an extra chip on the GPU that scans for motion between two frames to add an interpolated frame (or even more than one).
And it would work with just about everything.
I mean, if a TV can boost the fps of console games, why shouldn't the same be possible with a PC GPU?
It can't be that complicated?
I'm not asking for DLSS on AMD, I'm asking for something better. I think. ^^;
I'm not trying to be mean here, but you clearly have no idea what you're asking for.
Interpolation is a simple case of knowing state A and state B, then taking a fractional blend between the two; typically based on how far along it should be from one to the next.
This is more typically known as LERP (Linear Interpolation) ...
In effect that is: Interpolated Output = (1 - α) * A + α * B
There are of course other interpolation approaches, but this is generally going to be the best one ... there is also a cheaper rearrangement for FMAD hardware, but let's not confuse the issue right now.
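A minimal sketch of that in code (Python purely as illustration; the second form is the standard algebraic rearrangement that maps onto a fused multiply-add, not anything vendor-specific):

```python
# Minimal LERP sketch: blend between two known states A and B.
def lerp(a, b, alpha):
    """(1 - alpha) * A + alpha * B; alpha = 0 returns A, alpha = 1 returns B."""
    return (1.0 - alpha) * a + alpha * b

def lerp_fma(a, b, alpha):
    """Algebraically identical form, A + alpha * (B - A), which a compiler can
    turn into a single fused multiply-add on hardware that supports it."""
    return a + alpha * (b - a)

print(lerp(10.0, 20.0, 0.5))      # 15.0 -- halfway between the two known states
print(lerp_fma(10.0, 20.0, 0.5))  # 15.0
```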
As I said... this REQUIRES knowing State A and B.
DLSS assumes that we know state A but not state B, merely that B will likely be something we can best-guess from a series of patterns... in that case we can just take the closest matching pattern and use it as a substitute for the result we're looking for.
Topaz Studio does this, but doesn't use dedicated hardware (i.e. Tensor cores) for rapid pattern recognition.
Could this be used to literally construct non-existent frames? Sure, but then you're getting into the territory of predictive rendering. That is to say, because it's trying to reconstruct state B without knowing what states C, D, E, etc. are, the result is akin to other areas where we try to use motion prediction in a chaotic system (i.e. lag compensation), which typically results in a need to reset the player once the ACTUAL data arrives ... this is typically known as latency jumping, and anyone who has played against players with poor latency will attest that it is far from a "smooth" or "ideal" scenario; it's better just to enforce a minimum connection standard and allow for minor compensation.
And honestly I'd argue that DLSS (machine learning upscaling) can be good as a starting point, but in real time it's little more than a desperate attempt by NVIDIA to ACTUALLY add some measure of value to including the Tensor cores ... as well as to make their GPUs appear more powerful / capable than they actually are.
Said hardware is frankly better suited for things like AI, physics, blockchain etc... anything with a high level of simplified recursion.
...
Now, as I originally stated, motion interpolation and motion blur are functionally identical up to a point.
Where they diverge is that motion interpolation is a simplified version intended to create an additional frame between two known frames, in order to smooth playback via the illusion of more interim frames.
Motion blur on the other hand (today) uses a form of motion prediction to calculate the length and direction of the LERP (over time), which is then superimposed over the existing frame to provide a more accurate version of motion interpolation.
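As a rough illustration of that kind of motion blur pass (the per-pixel motion vector field and the sample count here are stand-ins; a real engine would take its vectors from the renderer rather than compute them after the fact):

```python
# Sketch: smear each pixel along its motion vector by averaging several samples
# taken along that vector. 'flow' is a per-pixel (dx, dy) motion field.
import cv2
import numpy as np

def motion_blur(frame, flow, samples=8):
    h, w = frame.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))
    acc = np.zeros_like(frame, dtype=np.float32)
    for i in range(samples):
        t = i / (samples - 1)                 # 0 .. 1 along the motion vector
        map_x = grid_x - t * flow[..., 0]
        map_y = grid_y - t * flow[..., 1]
        acc += cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)
    return (acc / samples).astype(frame.dtype)
```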
Could AMD support motion interpolation... of course, and actually they already do for VIDEO playback.
Most playback codecs do, actually; AMD (and NVIDIA) just support it via hardware acceleration, should developers choose to utilise their media SDKs.
For games however, well, there's nothing stopping developers implementing their own motion interpolation post-effect (should they choose to) ... it's just that motion blur, temporal frame blur, etc. aren't that much more expensive (or can often even be by-products of other effects like TXAA), so there's just no point in using it.
Remember that motion interpolation ONLY works with reasonably consistent frame rates... and you also MUST sacrifice latency for it. Hence why it's essentially exclusive to non-interactive playback (i.e. TV, sports, movies); for gaming you want latency as low as possible.
This comment aged like fine wine bro.
AMD needs their own competing interpolation tech, and hopefully open source.
I don't want to be mean either. But maybe I don't understand it, or I just can't explain it well enough, because English is not my mother tongue.
I'm talking about "motion interpolation", not just simple interpolation. And why do you bring up DLSS every time? It has nothing to do with it.
examples:
Frame Doubling Interpolation (SmoothVideo Project) - YouTube
AI Learns Video Frame Interpolation | Two Minute Papers #197 - YouTube (an AI example; it doesn't have to be done with AI)
Now, if TVs can create extra frames based on motion analysis in real time, why shouldn't a GPU be capable of it?
It works with games, so it can't rely on video (MPEG) files, which already carry motion data for better compression.
I mostly see benefits, as the negative effects can be processed out to some degree.
As the guy in the video from the first post mentions, he can't see the negative effects while gaming.
And I see no problem with knowing state A and state B. Or call it frame A and frame B.
There are already techniques that use a frame buffer, and that's where the delay comes from. Less delay with a higher base fps, as far as I understand it.
DLSS is just upscaling a frame via AI.
It would just be:
1) render frame A
2) render frame B and hold frame A in a buffer
3) analyse the motion between frame A and frame B
4) put out frame A, compute frame AB
5) put out frame AB, make frame B the new frame A
continue with 2)
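For what it's worth, here is a rough sketch of that loop, purely to illustrate the steps; the OpenCV dense optical flow and the half-step warp are stand-ins for whatever motion analysis a real implementation would use:

```python
# Illustrative sketch of the 5-step loop above: estimate the motion between two
# rendered frames and warp half a step forward to synthesise frame "ab".
import cv2
import numpy as np

def make_midframe(frame_a, frame_b):
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)

    # step 3: analyse motion between frame A and frame B (dense optical flow)
    flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    # step 4: warp frame A halfway along its motion vectors to get frame AB
    h, w = gray_a.shape
    grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))
    map_x = grid_x - 0.5 * flow[..., 0]
    map_y = grid_y - 0.5 * flow[..., 1]
    return cv2.remap(frame_a, map_x, map_y, cv2.INTER_LINEAR)

def doubled(frames):
    """Steps 1/2/5: hold the previous frame, emit it, then emit the midframe."""
    prev = None
    for frame in frames:
        if prev is not None:
            yield prev                        # put out frame A
            yield make_midframe(prev, frame)  # put out frame AB
        prev = frame                          # frame B becomes the new frame A
```

Note that frame A can only be shown once frame B has already been rendered, which is exactly where the extra input latency discussed below comes from.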
Because YOU mentioned DLSS in each of your posts.
Yet what you're asking for is a default feature of video playback on AMD cards, and has been since UVD was introduced.
What you're asking is for this to be adopted for gaming, except it already has been...
As noted, TXAA does this as a by-product of its temporal approach (that's what the T stands for), but it's also how most modern implementations of motion blur work; and you can do a motion vector analysis on a frame / array of frames with very little cost or overhead today.
Could this be done as a "universal" feature... eh, sure, but not well.
Remember, classic motion blur approaches were purely post-process, rather than communicating with the game logic / physics … so the results were quite crude. And as I noted in my previous replies, they somewhat HAVE to be when you don't have a consistent frame rate, because for interpolation to work without looking like a "frame skip" or "frame jitter", you need dynamic interpolation.
The only way to really do that is to have frames rendered ahead of time (i.e. what double / triple / quad buffering does) … that way, if a frame is missed, you have time to correct for it in the interpolated frame output.
Still, this is just the illusion of better FPS at the cost of input latency, and again the key point in game engines is to REDUCE latency (ideally you want to operate just-in-time without any buffering; it's why things like VRR are being pushed and popularised).
It's fine for video playback (and again, it has been a feature of Radeon UVD playback for a decade now... hence why it's painfully obvious when a video playback service :coughiTunescough: doesn't support hardware accelerated playback), but it's not a good idea for gaming outside of situations where you have access to data that isn't present within the final composited frame. There is far less overhead in simply stepping the physics engine one tick forward and interpolating that, instead of trying to predict what the output of the next frame (which has yet to be processed) will be, or alternatively rendering said frame ahead and doubling your input latency for each frame ahead.
This, as a note, is why double buffering at 60 Hz feels like 30 Hz, and why triple/quad buffering has a habit of dropping frames just to maintain that "smooth" 60 Hz while introducing more latency.
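A quick back-of-the-envelope version of that latency point (the 60 Hz figure and the "one extra displayed frame of delay per buffered frame" simplification are assumptions for illustration only):

```python
# Rough arithmetic for the buffering argument above.
refresh_hz = 60
frame_ms = 1000 / refresh_hz          # ~16.7 ms per displayed frame

for name, frames_ahead in [("just-in-time", 0), ("double buffer", 1),
                           ("triple buffer", 2), ("quad buffer", 3)]:
    added = frames_ahead * frame_ms
    print(f"{name}: ~{added:.1f} ms added input latency "
          f"(~{frame_ms + added:.1f} ms from input to display)")
```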
I think that's only true for 3D multiplayer shooters.
And an illusory 60 fps with a few ms of delay can feel much better than stuttery 30 fps.
I still wish it were an option, like RIS, so people could use it or not as they like.
But if I'm the only one who wishes for this feature, it can't be helped.
I might even buy a monitor with motion interpolation, but there just aren't any, and UHD TVs are too big for my liking for PC gaming.
But thank you for the additional explanations.
Well, technically the concept is great.
Both TAA and predictive frame extrapolation by machine learning are based on temporal image analysis rather than purely spatial processing. TAA tries to refine the current frame, though, not the upcoming ones.
As an example, DLSS 1 was more of a specialised algorithm, I believe. However, this is just an example; DLSS and, soon, DirectML-based algorithms use convolution-based super resolution and other learning techniques to decrease the load and increase the FPS. In the VR industry there is great interest in, and study of, methods to increase the frame rate while keeping lag extremely low. One method is to sample inputs from the user's controller at a higher rate than the frame rate, and then use that data to steer the interpolation prediction. Another is eye tracking, to generate the image only where we are actually looking.
There are two issues right now with predictive interpolation, assuming analytical methods are too compute intensive and the method has to be based on learning techniques. First, the overhead on GPU processing needs to be negligible, to prevent adding more delay. Second, for fast-paced games, artifacts might be introduced. But I see this method as very possible going forward; we will see the idea adopted first in the video industry.
Another method I was thinking about was to render only half of the pixels in a frame, then render the complementary half in the following frame, and use denoising algorithms to fill in the gaps. Somewhat like checkerboard rendering or other alternate rendering methods, but using AI denoising for more scalability and higher quality.
This is very doable in the near future, but I have not seen anyone working on it yet. Well, technically, methods like VRS can generate output maps that maybe in the future could be used for even more radical pixel-level filtering and manipulation as well. The advantage of this denoising technique could be that it does not need any prediction.
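A tiny sketch of that alternating-half idea (the reconstruction here just copies the previous frame's complementary pixels; the suggested AI denoiser would replace that naive step):

```python
# Sketch: render only half the pixels per frame (checkerboard style) and fill
# the holes from the previous frame's complementary half.
import numpy as np

def checkerboard_mask(h, w, phase):
    """True where this frame actually renders pixels; phase flips each frame."""
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    return ((xx + yy) % 2) == phase

def reconstruct(current_half, previous_full, mask):
    """Keep this frame's rendered pixels, take the rest from the previous frame.
    A learned denoiser would refine this naive copy, as described above."""
    out = current_half.copy()
    out[~mask] = previous_full[~mask]
    return out
```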
Good luck
They use it with Oculus VR headsets, but they call it "Asynchronous Spacewarp". They probably don't use it for PC because it would hinder hardware sales; you could hold onto your old laptop for twice as long.