Following the discussion in our forums about how good the encoding quality of GPUs really is, perhaps nVidia itself can contribute some information. Background: compressing videos with modern codecs requires a lot of computing power. To exploit all the compression tricks of a codec, the entire stream has to be analyzed in a first pass, and in a second pass the encoder can then make better-informed decisions about which methods to use (2-pass encoding).
These decisions include, among other things, which image areas should be stored in particularly high detail and how the scarce data rate is best distributed across the individual frames.
Encoders on the graphics card (GPU), on the other hand, are usually designed to encode in real time, so that a game, for example, can be streamed directly to the internet. All encoders known to us on current GPUs are therefore 1-pass encoders, and they are inevitably less effective than 2-pass encoders: without analyzing the whole stream first, they simply lack important information for compression. At the same data rate, 2-pass CPU encoders such as x264 are therefore in practice always superior to GPU encoders in image quality. Conversely, a 2-pass encoder at a lower data rate can match the picture quality that a 1-pass encoder only reaches at a higher data rate.
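The lookahead advantage can be illustrated with a toy model (our own simplification, not the actual x264 or NVENC logic): assume each frame has a measurable complexity, model its distortion as complexity divided by allocated bits, and fix the total bit budget. A 1-pass encoder without lookahead spreads the budget uniformly; a 2-pass encoder knows all complexities from the first pass and can allocate more bits to harder frames:

```python
# Toy model of 1-pass vs. 2-pass rate allocation (a simplification for
# illustration, not how real encoders work). Distortion of a frame is
# modeled as complexity / bits; the total bit budget is fixed.
from math import sqrt

def one_pass_distortion(complexities, budget):
    """No lookahead: spread the budget uniformly over all frames."""
    per_frame = budget / len(complexities)
    return sum(c / per_frame for c in complexities)

def two_pass_distortion(complexities, budget):
    """The first pass measured every frame's complexity; allocating
    bits proportional to sqrt(c) minimizes sum(c / bits) for a fixed
    budget (standard Lagrange-multiplier result for this model)."""
    norm = sum(sqrt(c) for c in complexities)
    bits = [budget * sqrt(c) / norm for c in complexities]
    return sum(c / b for c, b in zip(complexities, bits))

# A clip with a quiet scene followed by a complex action scene.
complexities = [1, 1, 1, 1, 16, 16, 25, 25]
budget = 100.0  # total bits, arbitrary units

print(f"1-pass distortion: {one_pass_distortion(complexities, budget):.2f}")
print(f"2-pass distortion: {two_pass_distortion(complexities, budget):.2f}")
```

At the same budget, the informed allocation never does worse, and the gap grows the more uneven the clip's complexity is; that is precisely the analysis information the second pass buys.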
So if you use GPU encoders to export your videos when editing, you may save some time, but you won't get the best possible quality for your data rate. And this brings us full circle to the current Golem article, in which nVidia states that the new Turing generation's encoder is on par with the x264 medium preset in terms of quality. Just to put that into perspective: above Medium there are not one but three higher quality levels among the presets: Slow, Slower and Very Slow. When it comes to final distribution, we would therefore still rely on the CPU for exporting a clip.
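For a final export from the command line, the usual 2-pass x264 workflow with ffmpeg looks roughly like this (a sketch only: `input.mp4`, the 5 Mbit/s video target and the audio settings are placeholders, and an ffmpeg build with libx264 is assumed):

```shell
# Skip gracefully if ffmpeg is not installed (this is a recipe, not a script).
command -v ffmpeg >/dev/null || { echo "ffmpeg not installed"; exit 0; }

# Pass 1: analysis only. The statistics go to ffmpeg2pass-*.log files,
# the encoded video is discarded (-f null) and audio is skipped (-an).
ffmpeg -y -i input.mp4 -c:v libx264 -preset slow -b:v 5M -pass 1 -an -f null /dev/null

# Pass 2: the actual encode, reusing the first pass's statistics.
ffmpeg -i input.mp4 -c:v libx264 -preset slow -b:v 5M -pass 2 -c:a aac -b:a 192k output.mp4
```

The `-preset slow` flag trades encoding time for compression efficiency; the GPU equivalent (e.g. `-c:v h264_nvenc`) would finish far faster, but as a 1-pass real-time encoder it cannot use the second pass's statistics.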