The use case I've been thinking of is taking a game like Starfield, which was designed to run at 30fps, and using frame interpolation to make it feel like 60fps.
As I said, I use the auto motion setting on my Samsung TV and it can definitely make 30fps feel like 60, but there are some drawbacks. There is increased input latency, and at higher settings it can make the screen go blurry when you pan the camera horizontally. I find it's a bit distracting in first-person games, but in third-person ones you don't really notice it that much. I played through Starfield, Jedi Survivor, and FF16 this way.
It works surprisingly well, and it got me thinking: with these new ML techniques, if you could do the frame generation on the GPU and feed it data embedded in each frame by the engine (motion vectors and the like, which is how DLSS works), you could probably minimise a lot of these issues and create a much smoother experience.
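To make the idea concrete, here's a rough sketch of motion-vector-aided interpolation in Python/NumPy. This isn't how DLSS Frame Generation actually works internally (that pairs a learned network with hardware optical flow); the frame shapes, the `motion` array layout, and the nearest-neighbour sampling are all simplifying assumptions, just to show how engine motion vectors give the interpolator real information instead of forcing it to guess, the way a TV has to:

```python
import numpy as np

def interpolate_frame(prev_frame, next_frame, motion, t=0.5):
    """Generate an in-between frame at time t in [0, 1].

    Hypothetical inputs: prev_frame/next_frame are (H, W, 3) arrays,
    and motion is an (H, W, 2) array of per-pixel (dx, dy) vectors
    from the engine, giving where each pixel in prev_frame moves
    by next_frame.
    """
    h, w = prev_frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]

    # Backward-warp prev_frame: the pixel landing at (x, y) in the
    # intermediate frame came from roughly (x, y) - t * motion.
    # Nearest-neighbour sampling keeps the sketch short.
    sx = np.clip(np.rint(xs - t * motion[..., 0]), 0, w - 1).astype(int)
    sy = np.clip(np.rint(ys - t * motion[..., 1]), 0, h - 1).astype(int)
    warped_prev = prev_frame[sy, sx].astype(np.float32)

    # Backward-warp next_frame the other way along the same vectors.
    sx = np.clip(np.rint(xs + (1 - t) * motion[..., 0]), 0, w - 1).astype(int)
    sy = np.clip(np.rint(ys + (1 - t) * motion[..., 1]), 0, h - 1).astype(int)
    warped_next = next_frame[sy, sx].astype(np.float32)

    # Blend the two warped frames weighted by temporal position.
    blended = (1 - t) * warped_prev + t * warped_next
    return blended.astype(prev_frame.dtype)
```

The key difference from the TV doing it is that `motion` comes straight from the renderer, so a horizontal camera pan is a known, per-pixel shift rather than something the interpolator has to estimate from the image, which is exactly the case where my TV goes blurry.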
I can imagine a world where you don't really need to care what framerate a game runs at: you could get 30fps games up to close to 60 regardless of what the developers targeted. If it works as well as DLSS does for resolution, it could be a really big thing for gaming in my view.