Author Topic: Downsizing and aliasing  (Read 9550 times)
ErikKaffehr
« on: January 07, 2010, 01:49:54 PM »

Hi,

I have seen a good discussion of downsampling/aliasing in another topic and got very interested in some information on this site:
http://www.xs4all.nl/~bvdwolf/main/foto/do...down_sample.htm


I'd suggest that you try the experiment below:

The idea of this experiment is to demonstrate the effect of downsizing on aliasing, and the effect of an optical low-pass filter.

1) Download this image: http://www.xs4all.nl/~bvdwolf/main/foto/do...iles/Rings1.gif
2) Open in Photoshop (or your favorite tool) at "Actual Pixels"
3) Convert to RGB
4) Duplicate the image
5) Downsize one of the images to 50% using your favorite method (let's call this image A)
6) Now take the other (unscaled) image
7) Apply a Gaussian blur with radius 0.2
8) Downscale this image to 50% as well (let's call this image B)
9) Sharpen image B using Unsharp Mask (radius 0.2, amount 1)
10) Compare image A and image B

Now, try resizing both image A and image B to 200%.

You can try some variations of the parameters...
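If you'd rather script the comparison, the effect can be sketched numerically. The snippet below is my own minimal 1-D stand-in for the rings target (a chirp whose frequency rises toward Nyquist), not the actual Photoshop workflow; a simple 2-tap average stands in for the Gaussian pre-blur of step 7:

```python
import numpy as np

# Synthetic "rings" target: a 1-D chirp whose instantaneous frequency
# grows linearly with x, reaching Nyquist at the right edge.
n = 1024
x = np.arange(n)
signal = np.cos(2 * np.pi * x**2 / (4.0 * n))

# Image A: naive 2x downsize by taking every other sample (no pre-filter).
a = signal[::2]

# Image B: low-pass first (2-tap average as a crude stand-in for the
# Gaussian pre-blur), then take every other sample.
blurred = np.convolve(signal, [0.5, 0.5], mode="same")
b = blurred[::2]

# In the highest-frequency region, the naive version keeps full-amplitude
# aliased energy, while the pre-filtered one attenuates it.
hi = slice(3 * len(a) // 4, len(a))
print(np.abs(a[hi]).mean(), np.abs(b[hi]).mean())
```

The printed mean magnitudes show the naive version keeping full-strength (aliased) energy that the pre-filtered version suppresses, which is the 1-D analogue of the false rings you should see in image A but not in image B.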

Best regards
Erik
« Last Edit: January 07, 2010, 02:14:39 PM by ErikKaffehr »

BartvanderWolf
« Reply #1 on: January 10, 2010, 07:12:51 AM »

Quote from: ErikKaffehr
The idea of this experiment is to demonstrate the effect of downsizing on aliasing, and the effect of an optical low-pass filter.


Hi Erik,

I also thought it was something that needed to be considered in this age of large-megapixel cameras and web publishing; that's why I put that page together. The amazing thing is that even after several years, there are still only a few programs that put in the effort to do things right from the start. Even Photoshop chooses speed over quality. What's worse, it doesn't even offer an option to do it as it should be done, even if that would be a bit slower. Bicubic Sharper is not a good method to use; (Lanczos-windowed) Sinc-based approaches are usually much better.

With respect to the Gaussian pre-blur, whether 0.2, 0.25, or 0.3 per downsampling factor, it is a suboptimal kludge, but it does help. Because it is so suboptimal (a Gaussian blur has a very large support radius, with long tails), it should be followed by a (smart) sharpening at the final output size, which may reintroduce some aliasing/jaggies on some subjects.
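To put rough numbers on that trade-off, here is a quick sketch (mine, not Bart's recipe; the sigma values are illustrative, and any mapping from a tool's "radius" to sigma is an assumption) of how a truncated Gaussian trades stop-band suppression against pass-band softening for a 2x downsize:

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalized 1-D Gaussian, truncated at +/- 4 sigma."""
    r = max(1, int(np.ceil(4 * sigma)))
    t = np.arange(-r, r + 1)
    k = np.exp(-0.5 * (t / sigma) ** 2)
    return k / k.sum()

def gain(kernel, f):
    """|H(f)| of a symmetric FIR kernel at frequency f (cycles/sample)."""
    t = np.arange(len(kernel)) - len(kernel) // 2
    return abs(np.sum(kernel * np.cos(2 * np.pi * f * t)))

# For a 2x downsize, content above f = 0.25 cycles/sample will alias.
# A wider Gaussian suppresses that band better, but also dulls the
# pass band, which is why a post-downsize sharpening step is needed.
for sigma in (0.4, 0.8, 1.2):
    k = gaussian_kernel(sigma)
    print(f"sigma={sigma}: pass-band gain {gain(k, 0.1):.2f}, "
          f"alias-band gain {gain(k, 0.35):.2f}")
```

The output illustrates the kludge: the sigma large enough to kill the alias band also noticeably attenuates detail that should have survived the downsize.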

Photoshop does allow you to specify your own blur filter shape in the Filter/Other/Custom... dialog. That can give potentially better performance for moderate (say 50%, i.e. 1:2) downsampling ratios. Unfortunately Photoshop's Custom dialog only allows a support of a 5x5 filter kernel. Fortunately there is a slight improvement possible with a free Custom filter plugin from Reindeer Graphics. It extends the support to 7x7 kernel sizes with floating point precision, which adds accuracy and potentially larger downsampling steps. It remains a clumsy way of doing it, but it does deliver better results, with fewer surprises, and it can be automated with an action to a certain extent.

Cheers,
Bart
Guillermo Luijk
« Reply #2 on: January 10, 2010, 11:29:52 AM »

Quote from: BartvanderWolf
Unfortunately Photoshop's Custom dialog only allows a support of a 5x5 filter kernel. Fortunately there is a slight improvement possible with a free Custom filter plugin from Reindeer Graphics. It extends the support to 7x7 kernel sizes with floating point precision, which adds accuracy and potentially larger downsampling steps.
Bart, how do you think downsizing in two (or more) steps using an appropriate 5x5 kernel for each step compares to a single-step downsampling using a larger filter?

Regards

ErikKaffehr
« Reply #3 on: January 10, 2010, 03:24:55 PM »

Hi,

I'm not an expert in this area, just interested. In my Linux days I used to use an ImageMagick-based script with "Lanczos" for downscaling, but now I'm using Lightroom on a Mac. Still, I think I'm going to revisit ImageMagick.

Best regards
Erik


Quote from: BartvanderWolf
Hi Erik,

I also thought it was something that needed to be considered in this age of large-megapixel cameras and web publishing; that's why I put that page together. The amazing thing is that even after several years, there are still only a few programs that put in the effort to do things right from the start. Even Photoshop chooses speed over quality. What's worse, it doesn't even offer an option to do it as it should be done, even if that would be a bit slower. Bicubic Sharper is not a good method to use; (Lanczos-windowed) Sinc-based approaches are usually much better.

With respect to the Gaussian pre-blur, whether 0.2, 0.25, or 0.3 per downsampling factor, it is a suboptimal kludge, but it does help. Because it is so suboptimal (a Gaussian blur has a very large support radius, with long tails), it should be followed by a (smart) sharpening at the final output size, which may reintroduce some aliasing/jaggies on some subjects.

Photoshop does allow you to specify your own blur filter shape in the Filter/Other/Custom... dialog. That can give potentially better performance for moderate (say 50%, i.e. 1:2) downsampling ratios. Unfortunately Photoshop's Custom dialog only allows a support of a 5x5 filter kernel. Fortunately there is a slight improvement possible with a free Custom filter plugin from Reindeer Graphics. It extends the support to 7x7 kernel sizes with floating point precision, which adds accuracy and potentially larger downsampling steps. It remains a clumsy way of doing it, but it does deliver better results, with fewer surprises, and it can be automated with an action to a certain extent.

Cheers,
Bart

BartvanderWolf
« Reply #4 on: January 10, 2010, 04:35:52 PM »

Quote from: Guillermo Luijk
Bart, how do you think downsizing in two (or more) steps using an appropriate 5x5 kernel for each step compares to a single-step downsampling using a larger filter?

Hi Guillermo,

Due to accumulating round-off errors and imperfect filters, the results are probably not as good as with a single step, but depending on the image (and the filter kernel) it might be good enough.

To illustrate, a regular bicubic stair-step downsampling sequence with e.g. 10% steps creates minor aliasing artifacts, but it also loses more resolution than necessary. Photoshop's (regular) bicubic downsampling in steps of 50% with a Gaussian pre-blur of 0.25 radius delivers results close to Lanczos-windowed Sinc filtering, with only slightly more aliasing, but much better than without the pre-blur.

Cheers,
Bart
bjanes
« Reply #5 on: January 10, 2010, 05:05:42 PM »

Quote from: BartvanderWolf
I also thought it was something that needed to be considered in this age of large-megapixel cameras and web publishing; that's why I put that page together. The amazing thing is that even after several years, there are still only a few programs that put in the effort to do things right from the start. Even Photoshop chooses speed over quality. What's worse, it doesn't even offer an option to do it as it should be done, even if that would be a bit slower. Bicubic Sharper is not a good method to use; (Lanczos-windowed) Sinc-based approaches are usually much better.
Bart,

That demonstration with the concentric rings and interference patterns was very interesting. Aliasing in digital cameras, especially those lacking a blur filter, is also quite evident with resolution charts but its effect on most naturally occurring images is hotly debated. How does downsampling aliasing affect most images? I have not personally observed it in my own images, but I was not aware of the possibility and had not looked for it.

BTW, a Siemens star chart is quite helpful when one is focusing with live view for lens testing. Interference patterns appear when one is in focus.
BartvanderWolf
« Reply #6 on: January 10, 2010, 08:48:26 PM »

Quote from: bjanes
Bart,

That demonstration with the concentric rings and interference patterns was very interesting. Aliasing in digital cameras, especially those lacking a blur filter, is also quite evident with resolution charts but its effect on most naturally occurring images is hotly debated.

Hotly debated, mostly by those who haven't needed to deliver commercial assignments (=time pressure) which were struck by aliasing artifacts ...

Quote
How does downsampling aliasing affect most images? I have not personally observed it in my own images, but I was not aware of the possibility and had not looked for it.

The problem is that one sometimes doesn't know when it will strike, other than, per Murphy's law, when it's least helpful. Well, with a little experience (and sharp lenses) one can predict the issues reasonably well, although sometimes it even surprises me, because it can affect small areas that I overlooked. In general, repetitive features (parallel lines, bricks (street and wall), especially at an angle, roof tiles, fabric, but also diagonal edges/lines/hair) are prone to developing aliasing artifacts. It does require good focus(!) and technique (tripod, mirror lock-up), a sharp lens that is not limited by diffraction or spherical aberration, and a mild (or non-existent) AA filter to reliably get into visible-risk territory.

Aliasing also affects the (high-ISO) noise in images that will be downsampled, similar to what had already been noticed in undersampled film scans as grain aliasing.

Quote
BTW, a Siemens star chart is quite helpful when one is focusing with live view for lens testing. Interference patterns appear when one is in focus.

Yes, the hyperbolic aliasing artifacts jump out. AFAIK I'm the one who introduced the sinusoidal version of the original Jewel/Siemens star, because it's better suited to digital imaging (discrete sampling) than to analog/film capture. The sinusoidal version has by now also been adopted by ISO for some of their standards.

I'm also the one who introduced the first Autofocus Micro-Adjustment (AFMA) target that utilises moiré for optimal focus confirmation. In that thread I also mentioned the potential use of a 'zone plate' target to verify optimal focus.

Cheers,
Bart
Guillermo Luijk
« Reply #7 on: January 11, 2010, 03:36:01 PM »

I have not come across real-life aliasing situations. I guess trying to predict whether moiré will appear after RAW development is a very difficult task, though.

Let's assume we know that a particular area of a scene will produce aliasing problems (for example when aliasing already took place and we are able to repeat the shot). I was wondering what would be the best approach:

1. Try post-processing the aliased RAW file (what techniques?)

Mask the aliased area in PS from an additional shot, using...
2. A lower-resolution lens with the same focal length (which could produce registration issues)
3. The original lens, slightly defocused over the aliased area
4. Diffraction as an anti-aliasing filter, by stopping the lens down a lot (f/22, f/25)
5. External anti-aliasing filters (if they exist)?
« Last Edit: January 11, 2010, 03:39:39 PM by Guillermo Luijk »

ErikKaffehr
« Reply #8 on: January 11, 2010, 03:57:46 PM »

Hi,

My guess is that stopping down is the best cure; f/16 may be enough.

Best regards
Erik


Quote from: Guillermo Luijk
I have not come across real-life aliasing situations. I guess trying to predict whether moiré will appear after RAW development is a very difficult task, though.

Let's assume we know that a particular area of a scene will produce aliasing problems (for example when aliasing already took place and we are able to repeat the shot). I was wondering what would be the best approach:

1. Try post-processing the aliased RAW file (what techniques?)

Mask the aliased area in PS from an additional shot, using...
2. A lower-resolution lens with the same focal length (which could produce registration issues)
3. The original lens, slightly defocused over the aliased area
4. Diffraction as an anti-aliasing filter, by stopping the lens down a lot (f/22, f/25)
5. External anti-aliasing filters (if they exist)?

Daniel Browning
« Reply #9 on: January 11, 2010, 06:06:03 PM »

Quote from: ErikKaffehr
I'm not an expert in this area, just interested. In my Linux days I used to use an ImageMagick-based script with "Lanczos" for downscaling, but now I'm using Lightroom on a Mac. Still, I think I'm going to revisit ImageMagick.

The brain-damaged downsampling filters in Lightroom and Photoshop frustrate me to no end. How many hours have I wasted exporting full-res files just to avoid Adobe's craptastic software? And who is to blame for it? Is it just good old-fashioned incompetence?

If you believe Hanlon's Razor, then it's Adobe's fault: they must have blown all the money hiring brilliant engineers like Eric Chan to do most of the software, so when it came to the downsampling, they only had enough left to hire one monkey and a typewriter. Personally, I don't subscribe to that theory. I think in this case it really is malice: Adobe knows that many users like aliasing artifacts, so they purposefully designed all of their algorithms to do a terrible job of downsampling.

--Daniel
joofa
« Reply #10 on: January 11, 2010, 08:54:57 PM »

Quote from: Guillermo Luijk
how do you think downsizing in two (or more) steps using an appropriate 5x5 kernel for each step compares to a single-step downsampling using a larger filter?

Using the same downsizing filter in two steps would be equivalent to this: take the downsizing filter and insert N-1 zeros between each element, where N is the decimation factor, to obtain a new array. Now convolve this array with the original downsizing filter and you get a new filter. This new filter can be considered what you refer to as the "larger" filter. Apply it directly to the input signal and then decimate by N^2.
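This construction is the "noble identity" of multirate DSP, and it is easy to check numerically. Below is a 1-D sketch with a hypothetical 5-tap filter and random input (my own illustration, not Joofa's program):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 3                                    # decimation factor per stage
h = rng.random(5)
h /= h.sum()                             # a 1-D stand-in for a 5x5 kernel
x = rng.random(200)                      # input signal

# Two-step pipeline: filter, keep every Nth sample, then repeat.
stage1 = np.convolve(x, h)[::N]
cascade = np.convolve(stage1, h)[::N]

# One-step pipeline: interleave N-1 zeros into h, convolve with the
# original h to build the "larger" filter, apply once, decimate by N^2.
h_up = np.zeros(N * (len(h) - 1) + 1)
h_up[::N] = h
big = np.convolve(h, h_up)
single = np.convolve(x, big)[::N * N]

m = min(len(cascade), len(single))
err = np.abs(cascade[:m] - single[:m]).max()
print(err)  # effectively zero: the two pipelines are equivalent
```

The two pipelines agree to machine precision; in image practice the differences come from the round-off and boundary handling of each intermediate step, as Bart noted.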
« Last Edit: January 11, 2010, 10:10:22 PM by joofa »

Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins
Jonathan Wienke
« Reply #11 on: January 11, 2010, 09:15:29 PM »

Using a "larger filter" means combining more input data values into a single output value, not padding the inputs with a bunch of extra zeroes.

joofa
« Reply #12 on: January 11, 2010, 09:22:10 PM »

Quote from: Jonathan Wienke
Using a "larger filter" means combining more input data values into a single output value, not padding the inputs with a bunch of extra zeroes.

No, I'm not talking about input padding. I said interleave (not pad) an appropriate number of zeros into the downsizing filter and convolve with the original (uninterleaved) filter. This operation gives the larger filter. And this larger filter will, as you say, "combine more input data values into a single output value" when convolved with the original (untouched) input.
« Last Edit: January 12, 2010, 01:11:37 AM by joofa »

Jonathan Wienke
« Reply #13 on: January 12, 2010, 06:43:38 AM »

Quote from: joofa
No, I'm not talking about input padding. I said interleave (not pad) an appropriate number of zeros into the downsizing filter and convolve with the original (uninterleaved) filter. This operation gives the larger filter. And this larger filter will, as you say, "combine more input data values into a single output value" when convolved with the original (untouched) input.

Not if all the additional input values are zeroes...doesn't matter whether you call it interleaving or padding or whatever. How does the filter accept more data inputs if you're simply feeding it more zeroes?
« Last Edit: January 12, 2010, 06:48:57 AM by Jonathan Wienke »

joofa
« Reply #14 on: January 12, 2010, 12:08:10 PM »

Quote from: Jonathan Wienke
Not if all the additional input values are zeroes...doesn't matter whether you call it interleaving or padding or whatever. How does the filter accept more data inputs if you're simply feeding it more zeroes?

Jonathan, linear convolution increases the size of the output filter based upon the lengths of the convolved input arrays. It is basic DSP. That is how it accepts "more inputs" when convolved with incoming data.

I quickly wrote a small program so you can verify what I am saying. I shall double-check for bugs, as I was in a hurry, but the output produced is quite satisfactory. I can provide the program to you or anybody else on request.

After providing a random 128x128 array as the input image and a normalized 16x16 filter whose coefficients were again random, the following results were obtained for the corresponding decimation factors (D):

Here error is the difference between cascaded convolution and "larger" convolution.

error =  1.7823e-12 (D = 2)
error =  6.0269e-13 (D = 4)
error =  6.9421e-13 (D = 5)
error =  8.4018e-14 (D = 8)

The above errors are almost zero. The following is a detailed output for a decimation factor of D = 11.

Cascaded-two convolutions:

    0.14434   -3.23658    2.43976
    3.07258    3.39879    8.39013
   -2.39454  -30.46508   -1.78851

Single "larger" convolution:

    0.14434   -3.23658    2.43976
    3.07258    3.39879    8.39013
   -2.39454  -30.46508   -1.78851

error =  3.8223e-14 (D = 11)
« Last Edit: January 12, 2010, 02:05:43 PM by joofa »

madmanchan
« Reply #15 on: January 12, 2010, 12:13:19 PM »

Daniel, I was under the impression that the downsampling methods used in Lightroom 2.2 and later (and Camera Raw 5.2 and later) resolved the ringing artifacts that were present in earlier versions of CR/LR. When combined with judicious capture (and optionally creative) sharpening, and appropriate output sharpening, the downsampled output should appear sharp, with few artifacts. If you have a specific image that you're concerned about, I'd be happy to take a look.

Erik, Lanczos is an approximation to the sinc filter, which is theoretically ideal from an aliasing perspective, but problematic in other ways. There are other types of image reconstruction artifacts that can occur even with perfect treatment of aliasing. For example, any simple linear filter (e.g., the ones listed on the ImageMagick page -- a nice compilation of methods) with more than one negative lobe can lead to objectionable ringing artifacts on either side of a strong edge.
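The ringing from negative lobes is easy to reproduce. Here is a sketch of my own (Lanczos-3 taps sampled at half-pixel spacing, as one might use for a 2x downsize) applied to a hard step edge:

```python
import numpy as np

# Lanczos-3 taps sampled at half-pixel spacing (a 2x downsizing filter),
# normalized to unit DC gain. np.sinc is the normalized sinc.
a = 3
t = np.arange(-2 * a + 1, 2 * a) / 2.0      # -2.5 ... 2.5 in 0.5 steps
taps = np.sinc(t) * np.sinc(t / a)
taps /= taps.sum()

# A hard black-to-white transition.
edge = np.concatenate([np.zeros(20), np.ones(20)])
out = np.convolve(edge, taps, mode="same")

# The negative lobes ring: the output undershoots 0 on the dark side
# and overshoots 1 on the bright side of the edge.
print(out.min(), out.max())
```

With these taps the over/undershoot is about 5% of the edge height, which is exactly the halo one sees around strong edges after an aggressive Sinc-family resize.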
« Last Edit: January 12, 2010, 12:17:51 PM by madmanchan »

ErikKaffehr
« Reply #16 on: January 12, 2010, 12:58:55 PM »

Hi,

I think Jeff Schewe mentioned that LR was using Lanczos when scaling images for print, but that it was dropped in favor of a scheme developed by yourself. Nice to hear an explanation.

I have looked at a lot of resizing algorithms and programs, but could not see enough benefit over Lanczos or the bicubic variants to shell out money or make my workflow more complex.

Best regards
Erik

Quote from: madmanchan
Daniel, I was under the impression that the downsampling methods used in Lightroom 2.2 and later (and Camera Raw 5.2 and later) resolved the ringing artifacts that were present in earlier versions of CR/LR. When combined with judicious capture (and optionally creative) sharpening, and appropriate output sharpening, the downsampled output should appear sharp, with few artifacts. If you have a specific image that you're concerned about, I'd be happy to take a look.

Erik, Lanczos is an approximation to the sinc filter, which is theoretically ideal from an aliasing perspective, but problematic in other ways. There are other types of image reconstruction artifacts that can occur even with perfect treatment of aliasing. For example, any simple linear filter (e.g., the ones listed on the ImageMagick page -- a nice compilation of methods) with more than one negative lobe can lead to objectionable ringing artifacts on either side of a strong edge.

ejmartin
« Reply #17 on: January 12, 2010, 02:09:18 PM »

Quote from: Jonathan Wienke
Not if all the additional input values are zeroes...doesn't matter whether you call it interleaving or padding or whatever. How does the filter accept more data inputs if you're simply feeding it more zeroes?

The issue (I think) is how the filtering is combined with decimation. Consider the following example -- you want to downsample by binning pixels. If you want to bin and downsample by a factor of four, you can do that in one step with the filter (1,1,1,1)/4 and then subsampling the array every fourth pixel. Or you can do it in two steps with the filter (1,1)/2, downsampling by a factor of two at each step. Or you can do two filterings at the original resolution -- the first with the filter kernel (1,1)/2 and the second with the filter kernel (1,0,1,0)/2 -- and then subsample by a factor of four, since the convolution of these two kernels is (1,1,1,1)/4. I believe that's what Joofa had in mind by zero padding.
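This equivalence is easy to confirm numerically; a small sketch with random 1-D data standing in for a pixel row:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.random(64)                      # a pixel row

# One step: box filter (1,1,1,1)/4, keep every 4th sample (full bins).
one = np.convolve(x, np.ones(4) / 4)[3::4]

# Two steps: filter (1,1)/2 and downsample by two, applied twice.
half = np.convolve(x, np.ones(2) / 2)[1::2]
two = np.convolve(half, np.ones(2) / 2)[1::2]

print(np.abs(one - two).max())  # agreement to machine precision
```

Both routes produce the same 4-pixel bin averages; only floating-point round-off in the intermediate step distinguishes them.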

emil
Schewe
« Reply #18 on: January 12, 2010, 02:20:17 PM »

Quote from: ErikKaffehr
I think Jeff Schewe mentioned that LR was using Lanczos when scaling images for print, but that it was dropped in favor of a scheme developed by yourself. Nice to hear an explanation.


Camera Raw (and as a result Lightroom) used to use a Lanczos variant. The problem was that it could produce pretty serious ringing artifacts. Currently ACR/LR uses Bicubic Sharper for downsampling and a Bicubic/Bicubic Smoother adaptive hybrid for upsampling.

I was involved in the original resampling testing back when Photoshop incorporated multiple flavors of Bicubic in Photoshop CS. The engineer working on the algorithms, Chris Cox, went through literally _ALL_ of the various resampling schemes out there. In real-world testing with photographic images (both scans and digital captures), we all settled on Bicubic Sharper for downsampling, Bicubic Smoother for upsampling, and regular Bicubic for general use or as a substitute if Bicubic Sharper produced artifacts, on an image-by-image basis.

It's clear you can cherry-pick all sorts of exotic interpolation algorithms that can do certain things better than others, based on the patterns, textures or objects in an image. But as Erik notes, in the vast majority of cases the differences are generally hard to predict and often difficult or impossible to see.
madmanchan
« Reply #19 on: January 12, 2010, 02:27:18 PM »

To clarify one point that Jeff made: CR/LR does use a method similar to bicubic sharper when downsampling, but it is not the same as the method used by the named "Bicubic Sharper" method of Photoshop.

