[15:36 Thu, 28 February 2019 by Rudi Schmidts]
Encoders on the graphics card (GPU), on the other hand, are usually designed to encode in real time, so that a game can be streamed to the Internet live. All GPU encoders known to us on current hardware are therefore 1-pass encoders, which inevitably makes them less effective than 2-pass encoders: without a first analysis pass over the whole clip, they lack important information about how to distribute the bitrate. At the same data rate, 2-pass CPU encoders like x264 are therefore practically always superior to GPU encoders in image quality; conversely, a 2-pass encoder at a lower data rate can match the picture quality of a 1-pass encoder at a higher data rate. So if you use a GPU encoder to export your videos from the editing application, you may save some time, but you won't get the best possible quality for your data rate.

And here we come full circle to the current Golem article, in which nVidia states that the new Turing generation encodes on par with the x264 medium preset in terms of quality. Just to put that into perspective: the x264 presets offer not one but three higher quality levels above medium: Slow, Slower and Very Slow. For final distribution, we would therefore always rely on the CPU when exporting a clip.
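To make the difference concrete, here is a minimal sketch (not from the article) that compares the two approaches with ffmpeg: a classic 2-pass x264 export versus a 1-pass NVENC export at the same target bitrate. It assumes ffmpeg with libx264 and h264_nvenc support is installed; the input file name and the 10 Mbit/s target are purely illustrative.

```python
# Sketch: 2-pass x264 (CPU) vs. 1-pass NVENC (GPU) at the same bitrate.
# Assumes ffmpeg is on PATH with libx264 and h264_nvenc; "input.mov" and
# the 10 Mbit/s target are placeholder values.
import subprocess

INPUT = "input.mov"
BITRATE = "10M"

# 2-pass x264: the first pass analyzes the whole clip, the second pass uses
# that analysis to spend the bitrate where the image needs it most.
subprocess.run(["ffmpeg", "-y", "-i", INPUT, "-c:v", "libx264",
                "-preset", "slow", "-b:v", BITRATE,
                "-pass", "1", "-an", "-f", "null", "/dev/null"], check=True)
subprocess.run(["ffmpeg", "-y", "-i", INPUT, "-c:v", "libx264",
                "-preset", "slow", "-b:v", BITRATE,
                "-pass", "2", "-c:a", "aac", "x264_2pass.mp4"], check=True)

# 1-pass NVENC: much faster, but the encoder sees each frame only once and
# cannot redistribute bits based on a prior analysis of the whole clip.
subprocess.run(["ffmpeg", "-y", "-i", INPUT, "-c:v", "h264_nvenc",
                "-b:v", BITRATE,
                "-c:a", "aac", "nvenc_1pass.mp4"], check=True)
```

Comparing the two outputs at identical data rates (for example visually, or with ffmpeg's SSIM filter) illustrates the point made above: the 2-pass CPU export typically holds more detail per bit than the real-time GPU export.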
|