Intel’s XeSS tested in depth vs DLSS – the Digital Foundry technology review

September 14, 2022

Intel’s debut Arc graphics cards are arriving soon and ahead of the launch, Digital Foundry was granted exclusive access to XeSS – the firm’s promising upscaling technology, based on machine learning. This test – and indeed our recent interview with Intel – came about in the wake of our AMD FSR 2.0 vs DLSS vs native rendering analysis, where we devised a gauntlet of image quality scenarios to really put these new technologies through their paces. We suggested to Intel that we’d love to put XeSS to the test in a similar way and the company answered the challenge, providing us with pre-release builds and a top-of-the-line Arc A770 GPU to test them on.

XeSS is exciting stuff. It’s what I consider to be a second-generation upscaler. First-gen efforts, such as checkerboarding, DLSS 1.0 and various temporal super-samplers, attempted to make half-resolution images look like full-resolution images, which they achieved with varying degrees of success. Second-generation upscalers such as DLSS 2.x, FSR 2.x and Epic’s Temporal Super Resolution aim to reconstruct from quarter resolution. So in the case of 4K, the aim is to produce a native-like image from just a 1080p base pixel count. XeSS takes its place alongside these technologies.
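
To put a number on that: 4K is 3840x2160, around 8.3 million pixels, while a 1080p base image is 1920x1080, around 2.07 million pixels – exactly a quarter of the output pixel count. The minimal sketch below just works through that arithmetic; the 2.0x per-axis scale factor and the function names are purely illustrative, not any vendor’s API.

```cpp
#include <cstdio>

// Illustrative only: per-axis scale factor -> internal render resolution.
// A 2.0x per-axis factor means 1/4 of the output pixels are rendered each frame.
struct Resolution { int width, height; };

Resolution internalResolution(Resolution output, float perAxisScale) {
    return { static_cast<int>(output.width / perAxisScale),
             static_cast<int>(output.height / perAxisScale) };
}

int main() {
    Resolution out4k{3840, 2160};
    Resolution base = internalResolution(out4k, 2.0f);          // 1920x1080
    double ratio = double(base.width) * base.height /
                   (double(out4k.width) * out4k.height);        // 0.25
    std::printf("%dx%d -> %dx%d (%.0f%% of output pixels)\n",
                base.width, base.height, out4k.width, out4k.height, ratio * 100.0);
}
```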

To do this, XeSS uses information from current and previous frames, jittered over time. This information is combined or discarded based on an advanced machine learning model running on the GPU – and in the case of Arc graphics, it runs directly on the XMX (matrix multiplication) units within the Xe cores. The Arc A770 is the largest GPU in the Arc stack, with 32 Xe cores in total. Arc’s XMX units process the model using the INT8 format in a massively parallel fashion, making it quick. For non-Arc GPUs, XeSS works differently: they use a “standard” (less advanced) machine learning model, with Intel’s integrated GPUs using a DP4a kernel and non-Intel GPUs using a kernel built on technologies enabled by DX12’s Shader Model 6.4. That means there’s nothing to stop you running XeSS on, say, an Nvidia RTX card – but as it’s not tapping into Nvidia’s own ML hardware, you shouldn’t expect it to run as fast as DLSS. Similarly, as it is using the “standard” model, there may be image quality concessions in comparison to the more advanced XMX model exclusive to Arc GPUs. The performance and quality of XeSS on non-Intel cards is something we’ll be looking at in the future.
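
For a sense of how that fits together from a game engine’s point of view, here’s a hedged sketch – the structures, function names and backend-selection logic below are hypothetical stand-ins rather than the actual XeSS SDK, but they mirror the data flow described above: jittered colour and motion vectors feeding an accumulated history, with the full XMX INT8 model on Arc and the “standard” DP4a / Shader Model 6.4 path everywhere else.

```cpp
// Hypothetical sketch of a temporal ML upscaler's per-frame data flow.
// NOT the real XeSS SDK API – names and structures are illustrative stand-ins.
#include <cstdint>
#include <cstdio>

enum class Backend {
    XmxInt8,      // Arc GPUs: full model on XMX matrix units, INT8 precision
    Dp4a,         // Intel integrated GPUs: "standard" model via DP4a instructions
    Sm64Fallback  // Other DX12 GPUs: "standard" model via Shader Model 6.4 features
};

// Per-frame inputs the upscaler consumes, as described in the article.
struct UpscaleInputs {
    const void* jitteredColor;  // current frame, rendered at reduced resolution
    const void* motionVectors;  // reproject previous-frame data into the current frame
    const void* depth;          // helps reject stale or disoccluded history
    float jitterX, jitterY;     // sub-pixel camera offset applied this frame
};

struct UpscaleHistory {
    void* accumulatedColor;     // reconstruction accumulated over previous frames
};

// Stub: the ML model decides, per pixel, how much history to keep or discard.
void upscaleFrame(Backend, const UpscaleInputs&, UpscaleHistory&, void* /*output*/) {
    // Model inference and history blending would run on the GPU here.
}

// Illustrative backend selection mirroring the split described above.
Backend pickBackend(bool hasXmx, bool isIntelIntegrated) {
    if (hasXmx)            return Backend::XmxInt8;
    if (isIntelIntegrated) return Backend::Dp4a;
    return Backend::Sm64Fallback;   // e.g. an Nvidia RTX card running XeSS
}

int main() {
    Backend b = pickBackend(/*hasXmx=*/true, /*isIntelIntegrated=*/false);
    std::printf("Selected backend: %d\n", static_cast<int>(b));
}
```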
