Thus one cannot really speak of stops when describing the tonal range.
But we can still cheat and say that the camera maps lighting conditions of 10 f-stops nonlinearly without crushing the blacks or blowing out the highlights ... which is nothing special; my EX1 can do the same with its knee function.
One might add that the RED ONE records in 12 bit, which yields an effective dynamic range of 72 dB and a theoretical range of 12 stops.
Would it not have been better to let that crush to black and at least represent the remaining range of values linearly?
The H.264 data stream in the encoded MOV files has only 8 bits, which gives a contrast ratio of 255:1, or 20 · log10(255) ≈ 48 dB, corresponding to 48 / 6 = 8 f-stops - provided the signal contains NO noise at all. In practice that is impossible.
One can increase the dynamic range by dithering.
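The arithmetic above (255:1 → ≈48 dB → 8 stops, and the RED ONE's 12 bit → 72 dB → 12 stops) can be checked with a small Python sketch; it is purely illustrative, using 20·log10(2) ≈ 6.02 dB per stop:

```python
import math

def dynamic_range_db(levels):
    """Contrast ratio `levels`:1 expressed in dB (20 * log10)."""
    return 20 * math.log10(levels)

def stops_from_db(db):
    """One stop is a doubling of signal, i.e. 20 * log10(2) ~ 6.02 dB."""
    return db / (20 * math.log10(2))

for bits in (8, 12, 16):
    levels = 2 ** bits - 1                 # 255:1 for 8 bit, 4095:1 for 12 bit
    db = dynamic_range_db(levels)
    print(f"{bits:2d} bit: {levels:5d}:1 = {db:5.1f} dB = "
          f"{stops_from_db(db):4.1f} stops")
```

This reproduces the figures in the thread: 8 bit gives about 48 dB or 8 stops, 12 bit about 72 dB or 12 stops, and 16 bit the 96 dB of an audio CD mentioned below.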
This has been common in audio for many years, so it is no problem. A simple calculation gives a 16-bit audio CD a dynamic range of 96 dB, which dithering can increase by 30 dB in the audible range.
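As a rough illustration of how dither pushes information below the least significant bit into the noise floor (a toy sketch with TPDF dither, not any specific commercial algorithm):

```python
import random
import statistics

random.seed(0)

SIGNAL = 10.3     # a detail sitting 0.3 LSB above the quantization grid
N = 100_000

# Plain truncation: the sub-LSB detail is lost completely.
undithered = [int(SIGNAL)] * N

# TPDF dither (sum of two uniform variables, peaks at +/-1 LSB) added
# before rounding: the detail survives as the average of the noisy samples.
def tpdf():
    return random.random() + random.random() - 1.0

dithered = [round(SIGNAL + tpdf()) for _ in range(N)]

print(statistics.mean(undithered))   # the 0.3 LSB detail is gone
print(statistics.mean(dithered))     # close to 10.3: preserved in the noise
```

The dithered stream still only contains integer code values, but averaged over many samples it carries a level between two codes, which is exactly the sense in which dither extends the usable dynamic range.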
What exactly Canon is doing here, I do not know.
The downside is that consequently there is no low-order-bit dithering within H.264, so slight fluctuations appear in large expanses of uniform color, especially in HD material.
... and to counter crushed black areas. These procedures are in fact used in the same way for MPEG-2 video and for H.264.
If "drown black areas to avoid," usually the Dynamics has been enlarged. That is a core feature of high-Dynamics.
@ Deti
Reply from carstenkurz:

What they presumably measured there is the sensor + A/D range in video mode. It is actually plausible that this does not differ substantially from the range in still-image mode.
The difference with the RED is: in RAW, the full recorded range remains available afterwards for white balance, exposure and grading.
The Canon, however, even with a flat tone curve, delivers only 8 bits in the source material, and moreover only 4:2:0.
In other words, analogous to HDR techniques, it manages to squeeze a greater effective range into 8 bits. In that respect the video mode does not actually differ that much from the still-image JPEG mode (if we leave the color coding aside for the moment).
The difference then lies precisely in post production.
For those who find it easier to picture: the Canon is like a slide/reversal film recording, the RED (or other RAW cameras) like a negative. With the Canon, the image characteristic in video mode is fixed fairly finally at recording time; the codec then allows grading only within a very narrow range. With RAW, much more is possible than the negative analogy even suggests.
A slide, when properly exposed, looks crisp right away, but there is not much you can still adjust. If it is badly exposed on top of that, there is hardly anything you can do. A negative, 'raw', looks like nothing at all at first, but can be processed further over a much broader range than even the optimally exposed slide.
The Canon certainly gets the best out of 'its' technology, but the comparison with the REDs or other RAW cameras is complete nonsense.
- Carsten
Reply from WoWu:

Things are getting a bit mixed up here. But basically you are right: mathematically, of course, only eight linear stops can be accommodated.
So if I assume that RAW is recorded linearly, it cannot be more than 8 stops.
But RAW in that sense is recorded neither on film nor in the electronics. Even film has a gamma curve and is far away from linear RAW; the curve varies depending on the stock. In electronics we have agreed on a gamma of 2.2, which the monitor manufacturers essentially work with as well. The curve is such that we can accommodate 10 stops.
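A toy calculation of how a gamma 2.2 curve distributes the 255 codes of an 8-bit signal across successive stops (illustrative only; real camera curves differ from a pure power law):

```python
import math

GAMMA = 2.2
MAX_CODE = 255   # 8-bit video

def code(linear):
    """8-bit code value for a linear light level in (0, 1], gamma-encoded."""
    return MAX_CODE * linear ** (1 / GAMMA)

# How many of the 255 codes fall into each successive stop below clipping?
for stop in range(11):
    hi = 2.0 ** -stop       # top of the stop, as relative exposure
    lo = hi / 2
    print(f"stop {stop:2d}: ~{code(hi) - code(lo):5.1f} codes")
```

With gamma 2.2, even ten stops down a handful of code values remain, whereas linear 8-bit coding would leave the eighth stop down a single code - which is the sense in which the curve lets 8 bits carry about 10 stops.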
As for the second issue, the resolution (number of steps) in particular ranges: that is more a question of how the signal is quantized, and today that is anything but linear quantization with n equally spaced levels. With nonlinear quantization, manufacturers today place the samples according to the hardware capabilities and/or the individual requirements. So if you want better resolution in dark areas, those are sampled at higher resolution than bright areas, which the camera's output path cannot transfer in full anyway. Or a manufacturer performs the quantization along the lines of Huffman/Fano, choosing the quanta so that they correspond to equiprobable ranges of signal amplitude, that is: rare brightness values = coarse quanta, frequent brightness values = fine quanta.
This means that the quantization (in newer cameras) can even change dynamically with the image content.
The quantization can also be chosen finer where the signal density is high.
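The "equiprobable quanta" idea can be sketched as a quantile-based quantizer. This is a toy model: the synthetic brightness distribution and the choice of 8 levels are assumptions for illustration, not any manufacturer's actual method.

```python
import random

random.seed(1)

# Synthetic brightness samples: dark values are frequent, bright ones rare.
samples = sorted(min(1.0, abs(random.gauss(0.2, 0.15))) for _ in range(10_000))

LEVELS = 8  # toy quantizer with 8 output quanta

# Equiprobable boundaries: each quantum covers the same number of samples.
boundaries = [samples[i * len(samples) // LEVELS] for i in range(1, LEVELS)]

# Quantum widths: frequent brightnesses get fine quanta, rare ones coarse.
edges = [samples[0]] + boundaries + [samples[-1]]
widths = [b - a for a, b in zip(edges, edges[1:])]
print([round(w, 3) for w in widths])
```

The densely populated dark range ends up with narrow quanta and the sparse bright range with wide ones - exactly the "frequent brightness = fine quanta, rare brightness = coarse quanta" rule described above.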
And that brings us to the signal-to-noise ratio, the third component, which cannot simply be included linearly here, as the (basic) theory would have it.
Statistically, this results in a lower effective value of the quantization noise, i.e. a larger signal-to-noise ratio. Depending on the type of weighting, the SNR figures change, to the benefit or detriment of the additional possibilities such signal processing provides.
So it can happen that 12- and 14-bit quantized signals have identical signal-to-noise ratios, even though theory says that the possible signal-to-noise ratio should be correspondingly greater in a 14-bit environment.
The advantage of such a system over the 12-bit system then lies in the extended possibilities of 16,384 levels versus 4,096 levels, or even only 256 levels in an 8-bit system. In information theory one speaks of a measure of removed uncertainty.
The more samples are generally received from a source, the more information is obtained, and at the same time the uncertainty about what could have been sent is reduced.
Not least, the fill factor of the sensor plays a role in this context, because it makes a difference whether, for example, I get a readout noise of 9 electrons at 18,000 electrons or 10 electrons at 40,000. Without knowing the fill factor, the subsequent quantization cannot be determined at all; everything else is therefore pure speculation.
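The two hypothetical sensors mentioned above can be put into numbers. This is illustrative arithmetic only, treating the full-well/read-noise ratio as a best-case SNR:

```python
import math

def snr_db(full_well_e, read_noise_e):
    """Best-case SNR from full-well capacity vs. read noise, in dB."""
    return 20 * math.log10(full_well_e / read_noise_e)

def snr_stops(full_well_e, read_noise_e):
    """The same ratio expressed in stops (doublings)."""
    return math.log2(full_well_e / read_noise_e)

# The two hypothetical sensors from the text:
for fw, rn in ((18_000, 9), (40_000, 10)):
    print(f"{fw} e- full well, {rn} e- read noise: "
          f"{snr_db(fw, rn):5.1f} dB = {snr_stops(fw, rn):4.1f} stops")
```

18,000/9 gives about 66 dB (≈11 stops), 40,000/10 about 72 dB (≈12 stops) - which illustrates why the sensible quantization depth depends on the sensor and cannot be chosen in the abstract.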
Higher quantization therefore does not necessarily translate into a better signal-to-noise ratio, but into better image information. By such means, the manufacturer can even choose the expected signal-to-noise ratio purposefully. For this they preferably use the extended possibilities of dynamic quantization.
So:
The whole discussion is really moot, because manufacturers give only very limited information about what they actually do.
That is why the picture from a (not even that) cheap camcorder can look significantly better than that of a (supposedly high-quality) RAW recording.
Taste plays a large part in this. One cannot claim that such images are always better, because the quality of the post production is very much involved here. With a top-class camera, however, one can fix editing mistakes. It is thus cheaper and easier for the manufacturer of a camera simply to pass on what it gets, almost untouched, as RAW, than to create high-quality algorithms for the in-camera processing. On the other hand, one can also shine with such processing ... and that must not be forgotten either.
Reply from carstenkurz:

"In H.264, appropriate dithering algorithms are intended to reduce the image data even further."
Is that so ;-)
"The downside is that consequently there is no low-order-bit dithering within H.264, so slight fluctuations appear in large expanses of uniform color, especially in HD material."
What you probably mean is exactly the opposite - dither is usually applied in post processing to suppress or mask quantization artifacts. It does not increase the contrast range, though; it only veils banding & co.
Who applies dither during encoding: belt and braces, and so on ...
- Carsten
Reply from Harald_123:

"Who applies dither during encoding: belt and braces, and so on ..."
I have known this from audio (deliberately, also in professional use) for a good 10 years now. UV22, UV22 HR ... It has to be added when the data are created, i.e. during a data reduction step. Applied after the fact, in my opinion, it no longer gains anything for audio.
For video, I honestly do not know. As WoWu writes: the manufacturers do not disclose it.
About Microsoft's "Windows Media Encoder Studio Edition" one can ...
Reply from WoWu:

Harald,
the "dithering" has an opposite effect in H.264. In other compression methods it has been so used to not durchgezeichnete space before the "termination process" and thus to the representation of MPEG-artifacts (macro blocks to preserve).
In H.264 (depending), only one motion vector can be specified for individual areas. leading to data reduction. If one were to animate the artificial surfaces "that the number of vectors would again increase dramatically ... and thus the required bit rate.
Since last October the BBC no longer accepts material originating from 16mm film unless the grain has been removed.
This is precisely the reason.
"VC-1 Advanced tuned 10-bit to 8-bit dithering"
And what is being done there is merely that the missing gradations of a 10-bit signal represented in 8 bits are replaced by a pixel structure. A common procedure, which in fact yields a small data reduction. Above all, it means that not 10 bits but only 8 bits are transmitted.
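A minimal sketch of such 10-bit-to-8-bit dithering (the 2-bit uniform dither is an assumption for illustration, not VC-1's actual algorithm):

```python
import random
import statistics

random.seed(2)

def to8_truncate(v10):
    return v10 >> 2                            # drop the two low bits: bands of 4

def to8_dither(v10):
    return (v10 + random.randint(0, 3)) >> 2   # noise stands in for the lost bits

ramp10 = list(range(512, 528))                 # a subtle 10-bit gradient
print([to8_truncate(v) for v in ramp10])       # coarse steps: visible banding
print([to8_dither(v) for v in ramp10])         # steps broken up by pixel noise

# Averaged over an area, the dithered 8-bit values still carry the 10-bit level:
print(statistics.mean(to8_dither(513) for _ in range(20_000)))  # close to 513/4
```

The missing gradations reappear as a fine pixel structure whose area average matches the original 10-bit level, while only 8 bits per sample are actually transmitted.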
Or do you mean something completely different here: the self-dithering A/D converters?