A DV(L)-FAQ [e]

DVL-Digest 520 - Postings:
Index


Clarifying GL1 "Movie Mode"
scanline, resolution, interpolation of pixels - (2)


Clarifying GL1 "Movie Mode" - Adam Wilt


> > GL1 (at least) has a "Movie Mode" button, but
> > the setting is either for "frame" or "normal" mode.
>
> And the difference between frame and normal mode is?
Normal mode is 60 fields, interlaced: normal TV.
Frame/movie mode is 30 frames (both fields captured at the same time), half
the temporal resolution of normal video. It's still recorded as 60 interlaced
fields, so display is always:
F1 F2 F1 F2 F1 F2
Interlaced (normal) capture has the temporal sequence (F2 exposed a field
period after F1):
F1    F1    F1
   F2    F2    F2
whereas frame movie mode is captured as (F1 and F2 exposed together):
F1    F1    F1
F2    F2    F2
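The timing difference above can be sketched in code. This is a toy illustration of my own (the function and constant names are made up, not from any Canon spec): in normal mode the two fields of a frame are exposed 1/60 s apart, while in frame/movie mode they share one capture instant.

```python
# Sketch: field capture times (in seconds) for 60-field/s video.
# Normal (interlaced) mode: F1 and F2 are exposed one field period apart.
# Frame/movie mode: both fields are captured at the same instant, then
# still recorded and displayed as alternating fields -- half the temporal
# sampling rate of normal interlaced video.

FIELD_PERIOD = 1 / 60  # seconds between displayed fields

def capture_times(num_frames, frame_mode):
    """Return (field_name, capture_time) pairs for each recorded field."""
    times = []
    for frame in range(num_frames):
        t_frame = frame * 2 * FIELD_PERIOD  # one frame = two fields
        t_f1 = t_frame
        t_f2 = t_frame if frame_mode else t_frame + FIELD_PERIOD
        times.append(("F1", t_f1))
        times.append(("F2", t_f2))
    return times

normal = capture_times(2, frame_mode=False)
movie = capture_times(2, frame_mode=True)
# In movie mode, F1 and F2 of each frame share one capture time.
```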
> Does the Movie Mode on the GL1 look anything like a
> movie? Does the XL1 look like a movie in movie mode?
The XL1 and GL1 look exactly the same in frame movie mode. It looks like a
"movie" when shot with regard to the limitations of low temporal rate imaging;
in other words, if a shooter with film experience shoots it, it'll look like a
movie. If a video-only shooter shoots it, it's likely to look like blurry,
stuttery, unwatchable muck. The difference is that film is captured at 24
fps, similar to FMM's 30 fps (or PAL FMM's 25 fps), so it's even *more*
subject to temporal problems; film people have adapted their shooting styles
to account for this in ways that video people have never had to deal with.
Cheers,
Adam Wilt



scanline, resolution, interpolation of pixels - Perry Mitchell


Steve Mullen posted:
> 2) When I use the word "interpolation" I mean something sophisticated
> enough to create synthetic pixels based upon surrounding pixels. I would
> think a very simple set of "rules" would handle a static res chart and
> so increase "resolution." Now I admit that it couldn't do that except
> for static images -- which is why I guess it isn't done.
When Adam said "dumb", he meant REALLY stupid! Let's consider a diagonal
edge of a solid area, and pretend that we are doing a 1:2 interpolation. At
a line level, we have two successive transitions from solid to background,
one further along the line than the other. If we as humans have to
interpolate this situation, we would place an intermediate transition
halfway between the other two and in so doing we would reduce the 'jaggies'.
The camera 'chip' can only do a simple mix of the two lines and therefore
not significantly improve the jaggies.
Note that neither interpolation adds any information, since they both make
assumptions about the picture content; it's just that the human version uses
rather more intelligence and experience than the chip could possibly muster.
If the apparent straight edge was really jagged like a saw blade, then a
full frame image sensor might detect this detail and give us an image that
actually looked jaggy on purpose. This would indeed be REAL extra
information, and therefore by definition something that it is not possible
to interpolate.
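Perry's diagonal-edge example can be made concrete with a toy sketch (my own construction, not from his post): the "dumb" chip version simply averages the two neighbouring scanlines, leaving a blurry half-level step, while the "human" version would place a clean transition halfway between the two edges.

```python
# Toy 1:2 line interpolation across a diagonal edge (illustrative only).
# line_a has its solid->background transition at column 2,
# line_b has it two columns further along, at column 4.
line_a = [255, 255, 0, 0, 0, 0]
line_b = [255, 255, 255, 255, 0, 0]

# "Dumb" chip interpolation: a plain 50/50 mix of the two lines.
# The result is a half-intensity smear; the staircase is not removed.
dumb = [(a + b) // 2 for a, b in zip(line_a, line_b)]

# "Human" interpolation: place a sharp transition halfway between the
# two edge positions, which is what actually reduces the 'jaggies'.
def edge_column(line):
    return line.index(0)  # first background pixel

halfway = (edge_column(line_a) + edge_column(line_b)) // 2
human = [255 if c < halfway else 0 for c in range(len(line_a))]
```

Neither result adds real information about the scene; both just embody different assumptions about what lies between the two captured lines.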


scanline, resolution, interpolation of pixels - Adam Wilt


> 1) I assume the camera must store each field and build the new 480-line
> field from the central 360-lines in the store. Correct?
I don't know exactly how it's done, but in general, that's correct. It
probably is done on a field-by-field basis, hence 180 lines -> 240 lines.
> 2) When I use the word "interpolation" I mean something sophisticated
> enough to create synthetic pixels based upon surrounding pixels. I would
> think a very simple set of "rules" would handle a static res chart and
> so increase "resolution." Now I admit that it couldn't do that except
> for static images -- which is why I guess it isn't done.
Correct. Rarely does the real world present us with static test charts -- and
in such cases unsophisticated object detection features do more harm than
good.
> 3) when you say "dumb filter chip" what do you mean? what does it do?
Assume for the sake of the example a three-tap filter (it may be three, it may
be two, it may be four).
For any resultant scanline N in a field of information, at column C,
pixel(N,C) = a*p(M1,C) + b*p(M2,C) + c*p(M3,C)
where a,b,c are coefficients typically in the range of -0.25 to +1.0 or
thereabouts, and p(Mx,C) are the pixel values for scanline Mx of the source
field image and the same column value. The choice of coefficients will vary
based on the number of taps used and the averaging function (simple linear,
Gaussian, cubic), and on which synthesized line of the 3->4 upsampling cycle
you're working on. The coefficients are calculated once by the designer and
burned into the chips; no fancy math occurs in real time, only a fixed set of
multiply/add instructions.
In short, it's a fixed function of the pixels in the same column and the
spatially adjacent scanlines of the same field in the source image.
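A minimal sketch of such a fixed multiply/add line filter, assuming a 3-tap kernel and a 3->4 upsampling cycle (180 field lines -> 240). The function name and the particular coefficients are my own illustrative choices -- these happen to implement plain linear interpolation, whereas a real designer might burn in Gaussian- or cubic-derived values:

```python
# Sketch of a fixed 3-tap, 3->4 vertical upsampler, done the way a
# "dumb filter chip" would: coefficients are computed once by the
# designer and looked up per phase; only multiplies and adds happen
# per pixel at run time.

# Entry k gives (src_center, (a, b, c)) for output line (4*cycle + k):
# the output is a*src[m-1] + b*src[m] + c*src[m+1], where
# m = 3*cycle + src_center. These weights are plain linear
# interpolation; real designs may use Gaussian or cubic weights.
PHASE_TABLE = [
    (0, (0.0, 1.0, 0.0)),    # output 0 sits exactly on source line 0
    (1, (0.25, 0.75, 0.0)),  # output 1 at source position 0.75
    (2, (0.5, 0.5, 0.0)),    # output 2 at source position 1.5
    (2, (0.0, 0.75, 0.25)),  # output 3 at source position 2.25
]

def upsample_lines(field):
    """Upsample a list of scanlines (lists of pixel values) by 4/3."""
    out = []
    n_src = len(field)
    n_out = n_src * 4 // 3
    for n in range(n_out):
        cycle, phase = divmod(n, 4)
        center, (a, b, c) = PHASE_TABLE[phase]
        m = 3 * cycle + center
        # Clamp at field edges so all three taps stay inside the image.
        lo = field[max(m - 1, 0)]
        mid = field[m]
        hi = field[min(m + 1, n_src - 1)]
        out.append([a * p0 + b * p1 + c * p2
                    for p0, p1, p2 in zip(lo, mid, hi)])
    return out

field = [[float(i)] for i in range(180)]  # one pixel per line, a ramp
assert len(upsample_lines(field)) == 240  # 180 source lines -> 240
```

Note that everything here is a fixed function of the same column in spatially adjacent source lines, exactly as described above; no content analysis happens per frame.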
Does that help?
AJW




(these posts come from the DV-L mailing list - THX to Adam Wilt and Perry Mitchell :-)





last update: 21 February 2024 - 18:02 - slashCAM is a project by channelunit GmbH - mail: slashcam@--antispam:7465--slashcam.de - German version