
DVL-Digest 516 - Postings:
Index


anamorphic -- one last squeeze - (2)
Anamorphic revisited again
Multiple posts
New camera thoughts in light o


anamorphic -- one last squeeze - "Perry"

Steve Mullen posted:
>1) The optical anti-aliasing filter should eliminate any aliasing from
optical anamorphic.<
{PM} This filter is pretty inefficient and can only work really well when
used on an oversampling CCD sensor. Practical examples will only reduce
aliasing, never eliminate 'any'.
>2) Nevertheless, FINE detail may be optically compressed to SUPER FINE
detail which will be filtered out by the anti-aliasing filter. Hence,
unlike 35mm film with its abundance of resolution, prosumer
"non-oversampled" CCD cameras may well lose detail toward the edges that
won't be restored when the image is stretched horizontally. In short, in
a prosumer video camera, optical anamorphic compression is lossy.<
{PM} I had always assumed that the anamorphic adapters were stretching the
verticals; are you telling me that they compress the horizontals? Do they,
in other words, give me a wider shot with the same height?
Whatever, the effects Steve mentions are irrelevant. Anamorphic video BY
DEFINITION must have lower 'resolution' on the expanded lines than a
dedicated widescreen format with higher bandwidth (or perhaps an alternative,
more efficient compression codec such as MPEG) would offer.
In 525 you have a considerably oversampled sample rate on the digital formats,
so the 'reduced' resolution still exceeds the NTSC pass band. If you
broadcast the anamorphic images on NTSC then you lose that margin, of course.
I still struggle to understand why Steve talks about 'edges': the anamorphic
'losses' are across the whole screen, and only optical deficiencies would
give any differences at the sides.
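A rough check of that pass-band margin, assuming the 13.5 MHz luma sampling of the 525 digital formats and a nominal 4.2 MHz NTSC broadcast luma bandwidth (illustrative arithmetic only):
luma_sampling_mhz = 13.5              # ITU-R 601 luma sample rate used by DV
nyquist_mhz = luma_sampling_mhz / 2   # ~6.75 MHz horizontal limit before any squeeze
ntsc_passband_mhz = 4.2               # nominal NTSC broadcast luma bandwidth

# Stretching the anamorphic picture out to 16:9 costs 25% of that limit.
anamorphic_limit_mhz = 0.75 * nyquist_mhz
print(anamorphic_limit_mhz, anamorphic_limit_mhz > ntsc_passband_mhz)   # 5.0625 True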
>3) The optical anamorphic lens may itself contribute to overall image
softness.<
{PM} True, especially since they don't offer full zoom-through, and it is
notoriously difficult to focus accurately even on professional viewfinders.
I suspect you need to use film techniques with measured focus, but these
video camera lenses don't offer marked manual focus positions.
>4) Unless the camera has a CRT viewfinder that can be adjusted for
vertical height, composition is going to be problematic with optical
anamorphic.<
{PM} True
>5) Electronic 16:9 that uses LINE REPLICATION (every 4th line is
duplicated) will cause a pre-compression loss of 25% vertical
resolution.<
{PM} Surely nothing uses a system this crude?
>6) Contrary to Perry, I believe that line INTERPOLATION (using the line
above and below the "missing" line) can create lines with information
that is very similar to what the "missing" lines would have been if
captured. (That's why jaggies are reduced.) While INTERPOLATION can't
double vertical resolution, it could be expected to create a 25%
increase in pre-compression resolution. This could make 16:9 mode no
worse than letterboxing a 4:3 image.<
{PM} NO! NO! Interpolation will smooth the existing information but cannot
provide information that no longer exists! A simple study of the limit case
should make this more obvious. Suppose we have a grating with 240 printed
lines (strictly speaking, line pairs), corresponding to the video resolution
of 480 lines. This is obviously the theoretical limit of resolution for a
full-height image. If we only use the centre 75% then we can see only 180
printed lines, and if we then interpolate this to full height we will still
have 180 lines! There is no computer in the world that can look at the 180
lines and know that there 'should' be 240 lines without making assumptions
about what is in the image. This is possible for test charts but not for
real images.
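If you want to see that limit on a computer rather than a chart, here is a minimal numpy sketch; the 170-cycle grating and the plain linear interpolation are illustrative choices, not what any particular camera does:
import numpy as np

# 'Fake' 16:9: only 360 of the 480 scanlines carry real picture information,
# which are then stretched back up to 480 lines electronically.
lines_fake, lines_full = 360, 480
x_fake = np.linspace(0.0, 1.0, lines_fake, endpoint=False)
x_full = np.linspace(0.0, 1.0, lines_full, endpoint=False)

# A vertical grating near the 180 line-pair limit that 360 lines can carry.
grating = np.sin(2 * np.pi * 170 * x_fake)

# Stretch back to 480 lines by linear interpolation.
stretched = np.interp(x_full, x_fake, grating)

# The dominant spatial frequency is still ~170 cycles: interpolation cannot
# push it towards the 240 line pairs a true 480-line capture could resolve.
spectrum = np.abs(np.fft.rfft(stretched))
print("dominant frequency after stretching:", int(spectrum[1:].argmax()) + 1)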
What cameras can and do achieve is edge enhancement. This is making an
assumption that if there is an edge in the picture, then it was really
sharper than the way the sensor is representing it. We all know that this
form of 'aperture correction' is a bit hit and miss.
Note that the above limit scenario is completely academic. Real cameras
have lower vertical resolution to improve sensitivity and to avoid interlace
flicker.
>7) Prior to compression, the 360 lines are not simply stretched over
480-lines as is claimed. Whether INTERPOLATION or LINE REPLICATION is
used to expand the image, there are 480-lines of information to be
compressed. Thus I can't really see any difference between compressing
these 480-lines using 60 blocks or 360-lines with 45 blocks. The
compression load is almost the same.<
{PM} The point was that since DV is a constant-data-rate system, each frame
has a fixed, limited amount of data. If 25% of the picture is blanked, the
remaining lines could in principle enjoy some extra data and suffer less
compression loss; theoretically you could therefore get better quality by
expanding the image in post. In practice the DV codec does not appear to run
up against this data limit, so there is no such improvement.
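The theoretical half of that argument is just arithmetic; here is a tiny sketch, with the nominal 25 Mbit/s DV video rate and the NTSC frame rate as the only inputs:
dv_video_rate_bps = 25_000_000      # nominal DV25 video data rate
frame_rate = 29.97                  # NTSC frame rate
bits_per_frame = dv_video_rate_bps / frame_rate

active_lines_full = 480             # normal 4:3 frame
active_lines_letterboxed = 360      # 16:9 picture with 25% blanked

print("bits per active line, full frame:  ", bits_per_frame / active_lines_full)
print("bits per active line, letterboxed: ", bits_per_frame / active_lines_letterboxed)
# In theory each letterboxed line could get ~33% more data; in practice, as
# noted above, the DV codec does not exploit this, so the gain never appears.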
>8) No one is sure exactly how vertical resolution is compressed and
restored.
{PM} Not sure what you mean here?
>9) In any case, we agree that electronic anamorphic couldn't BY ITSELF
cause a huge INCREASE in vertical resolution as shown in the screen
shots. (I would add, that if true line interpolation was used, I could
see how a STATIC res. chart could show some increase!)<
{PM} I would expect a lossless optical anamorphic adaptor to offer the same
performance as a dedicated widescreen CCD chip, and both to offer better
vertical resolution than the interpolated 16:9 system used on all current
consumer cameras. As we have seen, the optical adapters are not loss-free
or easy to use, and the full vertical resolution is not used anyway, so the
differences are a lot closer than the theory predicts. To repeat a post on
another list yesterday: the great thing about cameras is that you can look
at the results on suitable images and make up your own mind.
Perry Mitchell
Video Facilities
http://www.perrybits.co.uk/



anamorphic -- one last squeeze - Adam Wilt


> 1) The optical anti-aliasing filter should eliminate any aliasing from
> optical anamorphic.
Only to the extent it already does so in isomorphic 4:3 production. Some
cameras are better than others at this.
> 2) Nevertheless, FINE detail may be optically compressed to SUPER FINE
> detail which will be filtered out by the anti-aliasing filter.
No more so than zooming wider to the same horizontal angle of view.
> "non-oversampled" CCD cameras may well lose detail toward the edges that
> won't be restored when the image is stretched horizontally. In short, in
> a prosumer video camera, optical anamorphic compression is lossy.
It won't be towards the edges unless the lens is poorly designed; it'll be a
25% reduction in horizontal resolution over the entire image.
At the same horizontal angle of view, it'll be exactly as lossy as "fake" 16:9,
since the same number of pixels horizontally are being used in either case
(this assumes no MTF reduction from the anamorphic adapter itself). In both
cases resolution is reduced 25% (remember resolution is normalized to active
picture height).
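A quick worked example of that normalization, assuming the 720 luma samples per line of the 525 digital formats and the ideal Nyquist limit of one TV line per sample:
samples_per_line = 720                      # DV luma samples across the active line
max_tv_lines_per_width = samples_per_line   # ideal limit: 720 TV lines per picture WIDTH

# TVL/ph scales the horizontal figure by the frame's height/width ratio.
tvl_ph_4x3  = max_tv_lines_per_width * 3 / 4     # 540 TVL/ph
tvl_ph_16x9 = max_tv_lines_per_width * 9 / 16    # 405 TVL/ph

print(tvl_ph_4x3, tvl_ph_16x9, tvl_ph_16x9 / tvl_ph_4x3)   # 540.0 405.0 0.75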
> 3) The optical anamorphic lens may itself contribute to overall image
> softness.
Yes. This is the biggest problem with this approach.
> 4) Unless the camera has a CRT viewfinder that can be adjusted for
> vertical height, composition is going to be problematic with optical
> anamorphic.
Oddly enough, one adjusts to it. But yes.
> THEREFORE, optical anamorphic is not a great solution. And it costs
> money.
Er, ah, it's got tradeoffs, just like anything else. It's not great, but then,
neither is shoot-and-protect 16:9 in the 4:3 frame nor "fake" 16:9.
And even shooting true 16:9 (DSR-500WS, AJ-D610WA, etc.) is annoying: it's
still SDTV. Now, what you really need is an HDW-700A -- or an F900 for
proscan. Hey, it's only money! :-)
> 5) Electronic 16:9 that uses LINE REPLICATION (every 4th line is
> duplicated) will cause a pre-compression loss of 25% vertical
> resolution.
I can't think of a single system currently available (some cheap PC scan
converters aside) that operates through line replication or decimation. Every
"fake" 16:9 camcorder I've seen does multitap upsampling interpolation.
> 6) Contrary to Perry, I believe that line INTERPOLATION (using the line
> above and below the "missing" line) can create lines with information
> that is very similar to what the "missing" lines would have been if
> captured. (That's why jaggies are reduced.) While INTERPOLATION can't
> double vertical resolution, it could be expected to create a 25%
> increase in pre-compression resolution. This could make 16:9 mode no
> worse than letterboxing a 4:3 image.
The jaggies are reduced because high-frequency information has been lost. You
cannot make missing detail out of thin air, but you can upsample smoothly so
as to prevent jaggies and render a pleasing image. Apparent increases in
detail can be attributed to lower losses going through the DV codec, since the
upsampled image stresses the codec less.
> 7) Prior to compression, the 360 lines are not simply stretched over
> 480-lines as is claimed. Whether INTERPOLATION or LINE REPLICATION is
> used to expand the image, there are 480-lines of information to be
> compressed.
Well, yes, but the interpolation IS stretching the 360 lines to 480 lines.
> Thus I can't really see any difference between compressing
> these 480-lines using 60 blocks or 360-lines with 45 blocks. The
> compression load is almost the same.
The total useful data, as it were, is the same, but on the block level, the
stretched (fake) image reduces the detail in each 8x8 pixel block by 25%,
reducing compression difficulty.
Better than shoot-and-protect 4:3 if all you want is the 16:9 center panel,
but a no-go if you also need to release in 4:3.
But shooting anamorphic uses *all* 480 scanlines to capture the image and is
neither better nor worse horizontally for the same final angle of view (all
else being equal, which as discussed is not always the case).
> 8) No one is sure exactly how vertical resolution is compressed and
> restored.
The images I see are consistent with at least a simple linear interpolation
filter. These are trivial to implement either in software or hardware since
there's no need for a general purpose solution, only four sets of fixed
coefficients (one set per line in the repeating 3->4 line upsampling cycle).
But no, I don't know how they're doing it.
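For the curious, here is a sketch of what such a fixed-coefficient 3->4 line upsampler might look like; the weights below are plain linear interpolation, not the (unpublished) coefficients any actual camera uses:
import numpy as np

# The four fixed weight pairs of a linear 3 -> 4 line upsampler (hypothetical).
PHASE_WEIGHTS = [
    (1.00, 0.00),   # output line 0 sits exactly on input line 0
    (0.25, 0.75),   # output line 1 sits 3/4 of the way from input 0 to input 1
    (0.50, 0.50),   # output line 2 sits midway between input 1 and input 2
    (0.75, 0.25),   # output line 3 sits 1/4 of the way from input 2 to input 3
]

def upsample_3_to_4(column: np.ndarray) -> np.ndarray:
    """Stretch a 360-sample column of scanlines to 480 samples."""
    out = []
    for group in range(len(column) // 3):
        base = group * 3
        for phase, (w_upper, w_lower) in enumerate(PHASE_WEIGHTS):
            upper = base + (phase * 3) // 4            # source line above
            lower = min(upper + 1, len(column) - 1)    # source line below (clamped)
            out.append(w_upper * column[upper] + w_lower * column[lower])
    return np.array(out)

field = np.random.rand(360, 720)                       # a fake 360-line picture
stretched = np.apply_along_axis(upsample_3_to_4, 0, field)
print(stretched.shape)                                 # (480, 720)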
> 9) In any case, we agree that electronic anamorphic couldn't BY ITSELF
> cause a huge INCREASE in vertical resolution as shown in the screen
> shots.
Yes.
>(I would add, that if true line interpolation was used, I could
> see how a STATIC res. chart could show some increase!)
Only when accounting for improved codec efficiency due to lower complexity in
each 8x8 block.
> 10) If row-summation were turned off when in 16:9 mode, resolution could
> increase. But there would be noticeable side-effects!
Grab a DSR-300 or DSR-500 and turn on "enhanced vertical sharpness." Or pick
up a TRV900, VX2000, etc. and switch it to proscan (lock it down on a static
shot to avoid temporal complications). You'll see it! Sensitivity drops one
stop, and the static image is both sharper vertically and more prone to
jaggies. And on an interlaced monitor, especially with sharpness cranked, it
can be VERY painful to watch: flicker, flicker!
> The question is how much difference is there between a
> 500WS in 4:3 mode and a 300 shooting in its natural (4:3) mode. If there
> is little difference, then buying a switchable camera is a good deal. So
> what's the resolution of the 300?
DSR-300: 800 TVL/ph
DSR-500: 750 TVL/ph
In practice, the pix are very close, but with sharpness up I can detect just a
trace of horizontal aliasing on the 500 on certain details whereas I can't on
the 300 on the same scenes. But the difference is very subtle and difficult to
see in 99% of subject material.
> Here is an alternative 16:9 solution. Mark ONE line on your 4:3
> viewfinder TOWARD THE BOTTOM -- 75% from the TOP. Now shoot in regular
> 4:3 mode composing with the left/right/top edges -- leaving "fluff"
> below this line.
Or crop some off the top and bottom. I've seen shoot-and-protect done well
both ways -- and screwed up both ways!
> I've done this in real-time with the DigiSuite. I don't know if the
> C-cube based products can do this -- meaning I know they don't
Well, the DTV uses the C-Cubes, grin... but I get your drift. For the
non-Matrox C-Cube set (DV500 etc.), simply build a cropping matte graphic and
put it in the other channel as a super. Works fine.
I've also done it in real time with a double-edged wipe in the WJ-MX50 (any
other vision mixer will do as well) -- which also, with its compression
option, lets me make letterboxed 4:3 versions of vertically stretched material
-- great for quick-n-dirty demo tapes or 4:3 workprints from widescreen
DSR-500 footage or just for getting a feel for widescreen material on 4:3
monitors.
> I make no claims that this solves the vertical resolution issues, BUT it
> does mean you can get 16:9, 14:9, and 4:3 from the same source material.
> And it will work with ANY viewfinder. Moreover, it solves the "common
> sides" issue when editing for different aspect ratio releases.
On this, finally, we agree completely! :-)
Cheers,
Adam Wilt



Anamorphic revisited again - Adam Wilt


> However after numerous posts on this lists, the consensus seems to be
> the "fake" 16:9 is not that bad after all, and certainly far superior
> to cropping in post.
Well, it gives better quality than cropping in post, but you lose the option
of a full-frame 4:3 release. TANSTAAFL.
Cheers,
Adam Wilt



Multiple posts - "Perry"

On behalf of my ISP, British Telecom, I would like to apologise for the
multiple posts! They've been having major problems for a week now, and if
they didn't give me free access to the Internet (they also give me my
telephone service) then I'd be long gone! Any insider information would be
appreciated (off list of course).
Perry Mitchell
Video Facilities
http://www.perrybits.co.uk/



New camera thoughts in light o - "Perry"

> You're right, the VX1000 has more pixels than the
> VX2000. More pixels is a good thing.
[Crittenden, Jan]
Actually Hank, more pixels can be a bad thing as well. It really
depends on what the manufacturer does with the additional fixed pattern
noise that comes from the additional pixels. The denser the chip, the more
noticeable the fixed pattern noise. It is all a compromise, resolution vs.
noise vs. detail. Can't say how it looks in comparing the two cameras in
question, just wanted to point out that more is not always better.<
Panasonic certainly champion lower pixel counts, both in their own consumer
cameras and in the chips that Canon use in theirs. Higher sensitivity, due
to the bigger effective active sensor area, is the other touted advantage.
The big advantage of using a high pixel count is the ability to use an
effective optical anti-alias filter. These are not very steep, and have to
work around the CCD 'sample rate'. If the 'sample rate' (pixel number) is
higher than otherwise needed for resolution, then the losses due to the
filter will not impact the video pass band.
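A toy calculation of why the denser chip helps (the numbers and the "one octave of roll-off" figure are entirely hypothetical; the point is only the geometry of the argument):
video_passband_tvl = 400      # finest wanted vertical detail, in TV lines
rolloff_octaves = 1.0         # assume the gentle optical filter needs ~1 octave

for pixel_rows in (480, 960):                 # "just enough" vs. oversampled chip
    sensor_limit_tvl = pixel_rows             # ideal limit: 1 TV line per pixel row
    rolloff_start = sensor_limit_tvl / (2 ** rolloff_octaves)
    verdict = ("pass band untouched" if rolloff_start >= video_passband_tvl
               else "filter eats into the pass band")
    print(f"{pixel_rows} rows: roll-off must start by {rolloff_start:.0f} TVL -> {verdict}")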
The great thing about a camera is that it is relatively easy to point it at
a suitable subject, see what happens to the picture and make up your own
mind which option you prefer.
If you want to get an idea of how the anti-aliasing is working, make
yourself a 'zone chart'. Use Illustrator or similar to print a page with
500 or so concentric circles, so you have equal width black and white lines.
Now mount it in front of the camera on a tripod and slowly zoom in and out
and watch the pretty patterns!
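If you don't have Illustrator handy, a few lines of Python will generate the same sort of chart (the resolution, ring count and file name below are arbitrary choices):
import numpy as np
import matplotlib.pyplot as plt

size = 4000                    # pixels across the printed page
n_rings = 500                  # roughly 500 alternating black/white rings

y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
radius = np.hypot(x, y)                        # 0 at the centre, ~1.4 in the corners

# Flip between black and white every 1/n_rings of the radius: equal-width rings.
chart = (np.floor(radius * n_rings) % 2).astype(float)

plt.imsave("zone_chart.png", chart, cmap="gray")
print("wrote zone_chart.png -- print it, mount it, and zoom in and out on it")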
Perry Mitchell
Video Facilities
http://www.perrybits.co.uk/




(these posts are taken from the DV-L mailing list - THX to Adam Wilt and Perry Mitchell :-)

