Why VRR is not a magic bullet for fixing poor performance

August 14, 2024

It began in 2013 with the arrival of Nvidia G-Sync – the first form of variable refresh rate (VRR) display technology. Rather than attempting to synchronise (or not synchronise) GPU output with your screen, the host hardware took control – kicking off a new display refresh when the GPU was ready with a new frame. V-sync judder was gone and screen-tearing would (by and large!) disappear. FreeSync and HDMI VRR would follow, but essentially they all did the same thing – smoothing off variable performance levels and delivering a superior gameplay experience. But let's be clear: VRR is not a cure-all. It's not a saviour for poor game performance. It has its limits, and it's important to understand them – and in the process, we'll gain a better understanding of performance more generally, and why frame-rate isn't that important compared to other, more granular metrics.
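To make the difference concrete, here's a minimal, purely illustrative sketch – not any real driver or display API – comparing when a finished frame actually reaches the screen on a v-synced fixed-refresh display versus a VRR one. The frame completion times are hypothetical, and the VRR case assumes the frame lands inside the panel's supported refresh window.

```cpp
// Illustrative only: when does a finished frame appear on screen?
// Fixed-refresh v-sync waits for the next refresh boundary; VRR refreshes on demand.
#include <cmath>
#include <cstdio>

int main() {
    const double refresh_ms = 1000.0 / 60.0;            // fixed 60Hz refresh interval
    const double frame_ready_ms[] = {16.7, 35.0, 51.0}; // hypothetical GPU completion times

    for (double ready : frame_ready_ms) {
        // V-sync on a fixed 60Hz display: wait for the next refresh boundary.
        double vsync_present = std::ceil(ready / refresh_ms) * refresh_ms;
        // VRR: the display kicks off a refresh as soon as the frame is ready
        // (assuming it falls within the panel's VRR range).
        double vrr_present = ready;
        std::printf("frame ready at %.1fms -> v-sync shows it at %.1fms, VRR at %.1fms\n",
                    ready, vsync_present, vrr_present);
    }
}
```

Note how a frame that misses a refresh boundary by a hair ends up waiting a full extra refresh under v-sync, while VRR presents it immediately.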

Let’s talk about VRR basics. Displays have a native refresh rate – whether it’s 60Hz, 120Hz, 165Hz or whatever. Without VRR you have limited options for smooth, consistent play. First of all, there’s the idea of matching game frame-rate to the screen’s refresh rate. Every display refresh gets a new frame. The most popular example of this is the ‘locked 60 frames per second’ concept, where a new frame is generated every 16.7ms to match the refresh rate of a 60Hz screen – a truly tricky thing to deliver on consoles while maxing out their capabilities.
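As a quick illustration of that budget, the snippet below simply computes the per-frame time implied by matching a few common refresh rates – the figures are the point here, not any particular engine's code.

```cpp
// Frame-time budget implied by matching a display's refresh rate:
// 60Hz leaves 16.7ms per frame; higher refresh rates shrink the budget further.
#include <cstdio>

int main() {
    const int refresh_rates_hz[] = {60, 120, 165};
    for (int hz : refresh_rates_hz) {
        double budget_ms = 1000.0 / hz; // time available to render each frame
        std::printf("%dHz display -> %.1fms per frame\n", hz, budget_ms);
    }
}
```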

Secondly, you can ask your hardware to deliver a clean divisor of the refresh rate – the classic example being a 30fps game running on a 60Hz screen. In this case, every other refresh receives a new frame from the source hardware. There are issues with this – ghosting, for example – but it's the classic compromise for maintaining consistency when the refresh rate can't be matched, as the sketch below shows.
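Here's a similarly hedged sketch of that divisor approach: at 30fps on a 60Hz panel, each frame is held for exactly two consecutive refreshes, so presentation stays perfectly regular even though the frame-rate is halved. The numbers are illustrative rather than taken from any real presentation API.

```cpp
// Sketch of the 'clean divisor' idea: 30fps on a 60Hz screen means each frame
// covers exactly two refreshes, so which frame each refresh shows is deterministic.
#include <cstdio>

int main() {
    const int display_hz = 60;
    const int game_fps   = 30;
    const int repeats    = display_hz / game_fps; // 2: each frame is held for two refreshes

    for (int refresh = 0; refresh < 8; ++refresh) {
        int frame = refresh / repeats;            // frame shown on this refresh
        std::printf("refresh %d (at %.1fms): shows frame %d\n",
                    refresh, refresh * 1000.0 / display_hz, frame);
    }
}
```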
