CD Projekt Red recently added AMD's FidelityFX Super Resolution 3 with Frame Generation to Cyberpunk 2077 and its Phantom Liberty expansion. Although it results in a nearly 100% bump in frame rates, there are visual trade-offs that fans aren't exactly pleased with. https://www.notebookcheck.net/Cyberpunk-2077-AMD-FSR-3-disappoints-with-severe-image-quality-degradation-shimmering-and-blurry-textures.888344.0.html
Frame gen is not meant to be used at such a low base framerate
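A minimal sketch of the rule of thumb this commenter is referring to: frame generation only interpolates between rendered frames, so input latency and artifact visibility scale with the base frame time. The function name and the 60 FPS threshold below are illustrative assumptions, not anything from CDPR, AMD, or NVIDIA.

```cpp
#include <iostream>

// Hypothetical gate: frame gen only smooths output; it does not reduce the
// time between real frames, so a low base framerate still feels sluggish
// and interpolation artifacts become more visible.
bool shouldEnableFrameGen(double baseFps, double minBaseFps = 60.0) {
    return baseFps >= minBaseFps;
}

int main() {
    for (double fps : {30.0, 45.0, 72.0}) {
        std::cout << fps << " FPS base -> frame gen "
                  << (shouldEnableFrameGen(fps) ? "reasonable" : "not recommended")
                  << '\n';
    }
}
```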
Cyberpunk is an nVidia title. Unofficial mods implementing FSR 3.1 have been available for some time, yet Cyberpunk obviously had to ship the mediocre FSR 3.0 (FSR 2.1 is better than FSR 3.0; great job, Cyberpunk, implementing the worst version of FSR! 🤷‍♂️)
Just buy Nvidia, problem solved
Quote from: 123 on September 15, 2024, 14:09:05
Just buy Nvidia, problem solved
That was Cyberpunk's (nVidia's) message all along. It seems their message worked on you!
This is a poor shill article that doesn't mention CDPR chose to implement an older version of FSR yet again. FSR 3.1 is much better and should have been implemented. FSR 3.0 is just FSR 2.2 with frame generation and doesn't utilize the improved upscaler in 3.1.
Quote from: YodaMann on September 15, 2024, 18:13:54
This is a poor shill article that doesn't mention CDPR chose to implement an older version of FSR yet again. FSR 3.1 is much better and should have been implemented. FSR 3.0 is just FSR 2.2 with frame generation and doesn't utilize the improved upscaler in 3.1.
I mean, for starters, it is mentioned in the article that there is a newer version available.
Quote
It's unclear if and when CD Projekt Red will update FSR 3 to FSR 3.1, but FSR 3.1 has promised further visual and performance improvements, compared to the previous implementations.
Secondly, this is what CDPR has given gamers. Why does it matter that there's a newer version available if CDPR didn't implement it? We're not talking about the overall quality of AMD FSR, only about how CP2077 has used it, which is clearly suboptimal.
Yet another Nvidia shill article