Author Topic: Is the problem of diffraction over-rated?  (Read 17728 times)
David Sutton
« on: March 16, 2013, 02:59:01 AM »

Hello folks.
Short answer: probably.
I ran some tests and found with deconvolution sharpening most of my lenses are as good at f/22 as they are at f/8. I am somewhat nonplussed.
It's another layer of processing, but so what.
I've just written it up on my blog. I'd appreciate any feedback. Sometimes what is clear in one's own mind becomes impenetrably wordy when written down.
Cheers,
David

Edit: I've just noticed that Erik has done a similar test on his site
scooby70
« Reply #1 on: March 16, 2013, 07:24:08 AM »

For me... probably.

Add some sharpening and fiddle with the contrast and whatever else needs to be fiddled with, and for most people it'll probably be fine.
RFPhotography
« Reply #2 on: March 16, 2013, 08:27:59 AM »

Is the problem of diffraction overrated?  Probably, yes. 

You produced your examples with no sharpening of the raw files.  My understanding is that the Detail panel in LR/ACR also uses a form of deconvolution.  Would you have been able to achieve even better results with some judicious sharpening at the raw conversion stage and would the f/32 image have been recoverable then?  Without raw sharpening, you're mixing the effects of diffraction, the AA filter and demosaicing.  Working to reverse the second two at the raw stage might help, mightn't it?
BartvanderWolf
« Reply #3 on: March 16, 2013, 08:57:06 AM »

Hello folks.
Short answer: probably.
I ran some tests and found with deconvolution sharpening most of my lenses are as good at f/22 as they are at f/8. I am somewhat nonplussed.

Hi David,

I've been advocating the use of deconvolution sharpening for a long time, and there is an interesting thread with examples covering that subject on LuLa as well. However, there is something that many people overlook. When looking at the MTF curve of diffraction, it becomes clear that the spatial frequencies near the Nyquist frequency (the highest level of detail that can be reliably reconstructed from discrete samples) will suffer the most loss of contrast. That means that detail which already has a low contrast (as all detail at high spatial frequencies does) will be reduced to zero much earlier than higher-contrast detail at the same spatial frequency. So low contrast microdetail will become unrecoverable, while some of the higher contrast microdetail may still be recoverable by deconvolution (having a low noise image helps to increase the chance of pulling it off).

So yes, you will be able to recover a lot of what seemed to be lost by using high quality deconvolution, but the low contrast micro detail may be lost forever (also in the focus plane, not only at the edges of the DOF zone) when stopping down too far. An aperture of f/22 is guaranteed to make your average-luminance microcontrast detail totally unrecoverable above 90% of your 40D's maximum resolution! When you limit the narrowest aperture to f/20, some lower contrast micro detail can theoretically be recovered all the way up to the maximum resolution of your camera (although in practice the residual lens aberrations will still throw a spanner in the works, but at least you're not throwing away every possibility of recovery).
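Bart's f/20 versus f/22 threshold can be checked with two standard formulas: an ideal circular-aperture diffraction MTF reaches zero at 1/(λN) cycles/mm, while the sensor's Nyquist limit is 1/(2 × pitch). A quick sketch, assuming green light and a roughly 5.7 µm sensel pitch for the 40D (my assumption for illustration):

```python
# Diffraction extinction frequency vs. sensor Nyquist limit.
WAVELENGTH_MM = 0.00055   # ~550 nm green light
PITCH_MM = 0.0057         # Canon 40D sensel pitch, ~5.7 um (assumed)

NYQUIST = 1.0 / (2.0 * PITCH_MM)   # highest frequency the sensor can sample

def diffraction_cutoff(f_number):
    """Frequency (cycles/mm) where the ideal diffraction MTF reaches zero."""
    return 1.0 / (WAVELENGTH_MM * f_number)

for n in (8, 16, 20, 22, 32):
    cutoff = diffraction_cutoff(n)
    print(f"f/{n:<2}: cutoff {cutoff:6.1f} cy/mm = "
          f"{cutoff / NYQUIST:5.0%} of Nyquist ({NYQUIST:.0f} cy/mm)")
```

At f/22 the cutoff (≈83 cy/mm) falls below the ≈88 cy/mm Nyquist limit, so contrast in the top few percent of the sensor's resolution is destroyed before capture; at f/20 the cutoff (≈91 cy/mm) still clears Nyquist, which is the distinction Bart draws.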

The challenges with deconvolution are:
  a. Finding the optimal model of the Point spread function (PSF) with which to deconvolve,
  b. Keeping the noise amplification down as much as possible.

I've made available a free tool to determine point a. for a given camera/lens/aperture combination. That will allow you to determine the correct settings for Topaz InFocus (which you apparently used), or for other deconvolution tools (e.g. RawTherapee offers Richardson-Lucy deconvolution, even on existing files, not only Raws), much more accurately than we generally can achieve by eye with trial and error. It can also reveal some interesting facts, such as when using tele-extenders or some zoom lenses. My tool will also allow you, under its section 4 (which operates independently from the 3 prior steps), to figure out what the diffraction limits for certain aperture/sensel-pitch combinations are.

Good deconvolution also benefits from good tools (preferably with tweakable regularization of noise amplification, and custom PSF input), and high computational accuracy (to avoid round-off errors creeping into the results).
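For readers wondering what "Richardson-Lucy deconvolution with a Gaussian PSF" actually does, here is a minimal textbook sketch (not Bart's tool, RawTherapee's, or Topaz's implementation; the sigma and test image are invented for illustration):

```python
import numpy as np

def gaussian_psf(sigma, size=15):
    """Normalized 2-D Gaussian PSF - the blur model whose sigma Bart's tool estimates."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def conv2_same(img, kernel):
    """'Same'-size 2-D convolution with edge padding. The Gaussian kernel is
    symmetric, so correlation and convolution coincide here."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    out = np.zeros_like(img)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def richardson_lucy(blurred, psf, iterations=25):
    """Plain Richardson-Lucy iteration (no noise regularization)."""
    estimate = blurred.copy()
    for _ in range(iterations):
        reblurred = conv2_same(estimate, psf)
        ratio = blurred / np.maximum(reblurred, 1e-12)
        # For a symmetric PSF, convolving with the flipped PSF equals this:
        estimate = estimate * conv2_same(ratio, psf)
    return estimate

# Demo: blur a hard edge with a sigma = 1.5 Gaussian, then recover it.
scene = np.full((64, 64), 0.2)
scene[:, 32:] = 0.8
psf = gaussian_psf(1.5)
blurred = conv2_same(scene, psf)
restored = richardson_lucy(blurred, psf)

rmse = lambda a: float(np.sqrt(np.mean((a - scene) ** 2)))
print(f"RMSE vs. original - blurred: {rmse(blurred):.4f}, restored: {rmse(restored):.4f}")
```

On this noiseless example the restored edge lands much closer to the original than the blurred one; on real captures, noise is amplified with each iteration, which is why regularization and an accurate PSF sigma matter so much.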

Cheers,
Bart
theguywitha645d
« Reply #4 on: March 16, 2013, 10:44:49 AM »

Yes, the problem is very much overstated. Partly that is because the evaluation is done at pixel level, as if there were no difference between a 2MP sensor and an 80MP sensor, or between formats; and partly because diffraction is always shown in a comparative way, where it would probably not be seen in a single image.
BartvanderWolf
« Reply #5 on: March 16, 2013, 11:39:50 AM »

Yes, the problem is very much overstated. Partly that is because the evaluation is done at pixel level, as if there were no difference between a 2MP sensor and an 80MP sensor, or between formats,

Hi,

Are you suggesting that an image from a 2MP sensor doesn't need more magnification for a given output size than the image from an 80MP sensor? Because that would give the same sized diffraction pattern on the sensor for a same size aperture and focal length.
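The physics behind Bart's question: the diffraction (Airy) pattern has a fixed physical size on the sensor, set only by wavelength and f-number (first-ring diameter ≈ 2.44 λN), so pixel count changes how many sensels the pattern covers and how much the file must be magnified, not how large the pattern is. A sketch with made-up sensor figures (same 36 mm format, two pixel counts):

```python
# Airy disk diameter is fixed by the optics; pixel pitch decides how many
# sensels it spans. Sensor widths/pixel counts below are illustrative only.
WAVELENGTH_UM = 0.55   # green light
F_NUMBER = 16

airy_um = 2.44 * WAVELENGTH_UM * F_NUMBER   # same on any sensor

for name, width_um, pixels_across in [("2MP-class ", 36000, 1632),
                                      ("80MP-class", 36000, 10320)]:
    pitch = width_um / pixels_across
    print(f"{name}: pitch {pitch:4.1f} um, Airy disk {airy_um:.1f} um "
          f"covers {airy_um / pitch:4.1f} sensels")
```

At 100% view the 80MP file looks far more diffraction-limited, yet for a fixed print size the 2MP file needs far more magnification, which is the crux of the disagreement in this exchange.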

Quote
and partly because diffraction is always shown in a comparative way, where it would probably not be seen in a single image.

Of course it is done in a comparative way, because otherwise we cannot show how much is lost to diffraction. Closing one's eyes doesn't make the light go out ...

Cheers,
Bart
Fine_Art
« Reply #6 on: March 16, 2013, 01:11:30 PM »

Bookmarked your slanted edge tool, thanks Bart!
Fine_Art
« Reply #7 on: March 16, 2013, 01:33:07 PM »

Deconvolution is not a panacea. It can put messy artifacts into areas that should be OOF. That is where layers and masking in PS really pay off.
theguywitha645d
« Reply #8 on: March 16, 2013, 02:02:18 PM »

Hi,

Are you suggesting that an image from a 2MP sensor doesn't need more magnification for a given output size than the image from an 80MP sensor? Because that would give the same sized diffraction pattern on the sensor for a same size aperture and focal length.

Output size is a function of format, not pixel pitch, and that is where evaluation at 100% is problematic. An image from a 35mm sensor is not impacted by diffraction to differing degrees because of pixel pitch. The viewer's perception does not change, and viewing distance and print size are not chosen in relation to pixel pitch -- I make a print at the size I want, say 16x20; I don't vary my print size because of how many pixels I have.

Quote
Of course it is done in a comparative way, because otherwise we cannot show how much is lost to diffraction. Closing one's eyes doesn't make the light go out ...

Cheers,
Bart

But if the loss can only be viewed under very tight constraints, then how significant is it? So what you are saying is the loss is overstated--my position?

Just because there is a loss does not mean the loss would be perceived under normal conditions. An image can still be sharp even though there are conditions that are "sharper." Then there is DoF. Which is greater and more significant to the final image, the loss of sharpness through diffraction or the loss through DoF? And this is where the comparison is meaningless. What I want is an image that looks sharp, and the point where my system maximizes MTF is not the only point where a sharp image is made.
BartvanderWolf
« Reply #9 on: March 16, 2013, 04:43:47 PM »

Bookmarked your slanted edge tool, thanks Bart!

Hi Arthur,

You're welcome. It was discussed in a bit more detail here.

Cheers,
Bart
BartvanderWolf
« Reply #10 on: March 16, 2013, 05:08:30 PM »

But if the loss can only be viewed under very tight constraints, then how significant is it? So what you are saying is the loss is overstated--my position?

That totally depends on the circumstances. That's why I won't say it is always overstated, because it sometimes is very relevant, especially in photomacrography and with large-format output, where individual pixel quality counts. When the final image is shrunk, so are many issues ...

Quote
Just because there is a loss does not mean the loss would be perceived under normal conditions. An image can still be sharp even though there are conditions that are "sharper." Then there is DoF. Which is greater and more significant to the final image, the loss of sharpness through diffraction or the loss through DoF? And this is where the comparison is meaningless.

Not necessarily. It could just point to the need for a different technique, such as focus stacking (which allows you to maximize the per-pixel quality).

Cheers,
Bart
David Sutton
« Reply #11 on: March 16, 2013, 05:22:53 PM »

Thanks for your replies folks, and thank you Bart for the link.
Reading your comments, here are some more random thoughts.
While detail may be lost at f/22 even with deconvolution sharpening, if I can't see it, it doesn't matter. YMMV.
Sharpness is no longer on the top of my priorities. It is too often a substitute for content. The quality of the blur in the out-of-focus areas is more important to me.
Many people who worry about diffraction go out and photograph with the lens wide open.
If the camera is hand held all bets are off anyway.
When I have time I'll continue to take multiple photographs and focus blend, as this gives me a lot of control over the plane of focus holding the subject matter, deciding what is sharp and what isn't in that plane.
But otherwise, if I want DOF, f/22 will do fine as long as it doesn't drive my shutter speed down to around the half-second area where the "thump" of the shutter on the 5DII shows up worst.
BartvanderWolf
« Reply #12 on: March 16, 2013, 05:53:32 PM »

Sharpness is no longer on the top of my priorities. It is too often a substitute for content.

Hi David,

You are correct, it shouldn't be a substitute. But diffraction should also not be a distraction, if it can be reduced or avoided. It's all subject dependent IMHO.

Quote
The quality of the blur in the out-of-focus areas is more important to me.

To me as well, ugly bokeh can seriously distract from the subject of focus (pun intended).

Quote
But otherwise if I want DOF f/22 will do fine as long as it doesn't drive my shutter speed down to around the half second area where the "thump" of the shutter on the 5DII shows up worst.

Yes, it's all a balancing act, also between artistry and technical skills. However, I'd suggest a comparison between f/18 or f/20 versus f/22. It will probably not change the DOF much, while the improvement in overall image quality (based on the 40D camera specs) is almost a given (because deconvolution will be more effective, and also because you can use a shorter shutter speed).

Cheers,
Bart
David Sutton
« Reply #13 on: March 16, 2013, 07:06:25 PM »

However, I'd suggest a comparison between f/18 or f/20 versus f/22. It will probably not change the DOF much, while the improvement in overall image quality (based on the 40D camera specs) is almost a given

Yes, that's a sensible suggestion.
Cheers
Jack Hogan
« Reply #14 on: March 18, 2013, 09:30:21 AM »

The challenges with deconvolution are:
a. Finding the optimal model of the Point spread function (PSF) with which to deconvolve,

The good thing about deconvolution is that PSFs can usually be broken down into a product of sub-PSFs, which can then be applied separately to the image in any desired sequence - much like any real number can be broken down into a product of prime numbers and applied commutatively in a division or multiplication.  The other good thing is that in any natural situation there will typically be a decent-sized random (Gaussian) component to the total PSF - which I believe is what most deconvolvers target and try to undo.  Having gotten the easy Gaussian component of the PSF out of the way first, the next challenge is figuring out the other specific PSF components, which I assume include fixed ones like those for aperture, AA, pixel pitch and shape - and who knows what other variable ones.

Aside from the varying difficulty of determining each of these 'component' PSFs (given focal length and f/number it should be pretty easy to calculate the aperture's, for instance), I suspect the key to good-looking capture sharpening is determining the appropriate strength of each and not getting carried away with the sliders.  In addition, I think we should also not get carried away with trying to calculate PSFs too precisely, because our lenses and systems are not perfect, they cannot be modelled perfectly and, last but not least, we are not dealing with monochromatic light.  So we need to maintain enough wiggle room in the system to accommodate our real-life situation, lest our PSFs invent stuff that was never there in the first place.
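Jack's decomposition point can be checked numerically: convolving two Gaussian PSFs yields another Gaussian whose variance is the sum of the component variances (sigmas add in quadrature), which is why component blurs can be modelled and undone separately, in any order. A small sketch with arbitrary example sigmas:

```python
import numpy as np

def gaussian(sigma, n=201):
    """Normalized 1-D Gaussian kernel centered on n // 2."""
    x = np.arange(n) - n // 2
    g = np.exp(-x**2 / (2.0 * sigma**2))
    return g / g.sum()

# Two component PSFs (sigmas chosen arbitrarily for the demo).
g1, g2 = gaussian(1.2), gaussian(2.0)
combined = np.convolve(g1, g2, mode="same")

# Measure the standard deviation of the combined kernel directly.
x = np.arange(combined.size) - combined.size // 2
sigma_measured = np.sqrt(np.sum(combined * x**2))
sigma_predicted = np.hypot(1.2, 2.0)   # sqrt(1.2**2 + 2.0**2)
print(f"measured sigma {sigma_measured:.4f} vs predicted {sigma_predicted:.4f}")
```

The same additivity of variances is also why the many small independent blur sources in a camera chain tend to merge into one approximately Gaussian total PSF.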

I've made available a free tool to determine point a. for a given camera/lens/aperture combination. That will allow you to determine the correct settings for Topaz InFocus ...

Nice tool Bart.  I like InFocus and use it (with a light hand) on many of my landscapes.  How would you use the output of your tool to determine the correct settings for it?

Jack
« Last Edit: March 18, 2013, 09:37:15 AM by Jack Hogan »
xpatUSA
« Reply #15 on: March 18, 2013, 11:34:41 AM »

. . . . I think though that we should also not get carried away with trying to calculate PSFs too precisely, because our lenses and systems are not perfect, they cannot be modelled perfectly and, last but not least, we are not dealing with monochromatic light.

One of my favorite images is this one, which is worth a look whenever one is getting too obsessive about detail. It predates DSLRs, I believe.

[MTF chart image]

A little difficult to interpret at first glance, because the frequency (X) axis is normalized for each f-number. It shows how close each f-number gets to the dashed theoretical diffraction-limited curve. Quite some surprises there.

Ted
BartvanderWolf
« Reply #16 on: March 18, 2013, 01:12:52 PM »

The good thing about deconvolution is that PSFs can usually be broken down into a product of sub-PSFs, which can then be applied separately to the image in any desired sequence - much like any real number can be broken down into a product of prime numbers and applied commutatively in a division or multiplication.  The other good thing is that in any natural situation there will typically be a decent-sized random (Gaussian) component to the total PSF - which I believe is what most deconvolvers target and try to undo.

Hi Jack,

It gets even better: convolving several different waveforms converges to a Gaussian-shaped distribution, even if the underlying waveforms are non-Gaussian. This chapter of a free book on Digital Signal Processing explains the differences between parallel and sequential filtering in a bit more detail (check out the section on the Central Limit Theorem at the end).
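Bart's Central Limit Theorem point is easy to see numerically: repeatedly convolving a decidedly non-Gaussian (flat/box) kernel with itself converges to a Gaussian shape, with the variances adding at each step. A minimal sketch:

```python
import numpy as np

box = np.ones(9) / 9.0        # flat, very non-Gaussian kernel
kernel = box.copy()
for _ in range(5):            # convolve the box with itself 5 more times
    kernel = np.convolve(kernel, box)

# Compare against a Gaussian with the same (summed) variance.
x = np.arange(kernel.size) - kernel.size // 2
var = np.sum(kernel * x**2)            # variances add: 6 * (9**2 - 1) / 12 = 40
gauss = np.exp(-x**2 / (2.0 * var))
gauss /= gauss.sum()

max_diff = float(np.abs(kernel - gauss).max())
print(f"variance {var:.1f}, max deviation from Gaussian: {max_diff:.5f}")
```

After only six box blurs the result is already within a fraction of a percent of a true Gaussian, which is why a single-Gaussian PSF model works so well for a chain of stacked blur sources.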

Quote
Having gotten the easy Gaussian component of the PSF out of the way first, the next challenge is figuring out the other specific PSF components, which I assume include fixed ones like those for aperture, AA, pixel pitch and shape - and who knows what other variable ones.

That may not even be necessary if the main component is close enough to a single Gaussian (which it often is, as my tool will show). In my experience, the combination of several Gaussians can sometimes add a tiny bit more accuracy, but the improvement is usually marginal.

Quote
Aside from the varying difficulty of determining each of these 'component' PSFs (given focal length and f/number it should be pretty easy to calculate the aperture's, for instance), I suspect the key to good-looking capture sharpening is determining the appropriate strength of each and not getting carried away with the sliders.  In addition, I think we should also not get carried away with trying to calculate PSFs too precisely, because our lenses and systems are not perfect, they cannot be modelled perfectly and, last but not least, we are not dealing with monochromatic light.  So we need to maintain enough wiggle room in the system to accommodate our real-life situation, lest our PSFs invent stuff that was never there in the first place.

The "800 pound gorilla" in the room is sensels that do area sampling instead of point sampling (even the diffraction pattern itself starts to look a bit more Gaussian due to area sampling), in combination with some 'defocus'. Anything not in the plane of focus will have a defocus blur added to its signal. That means that as we get closer to the edges of the DOF zone, the influence of defocus will start to grow (demanding a larger-radius Gaussian-based deconvolution). Combining the overall effect of diffraction and the aperture sampling of our sensels will basically turn all blur sources combined into a rather Gaussian-like blur.

Quote
Nice tool Bart.  I like InFocus and use it (with a light hand) on many of my landscapes.  How would you use the output of your tool to determine the correct settings for it?

The important notion is that InFocus combines 2 operations: first a generic (or deblur) deconvolution, and second an optional sharpening operation. One should probably not try and solve the entire blur issue with only a single deconvolution (because you have no influence on the 'strength' of the effect); they are supposed to work in tandem (although I'd rather have more control over the deconvolution process). The optimal deconvolution radius setting seems to correspond reasonably well with the Sigma radius that my tool determines; dialing in a tad smaller radius can help avoid the generation of excessive artifacts. It is not obvious which deconvolution method (generic/deblur) would be best. Deblur seems a bit more aggressive; it appears to be more than just a different PSF shape. Many unsatisfied reactions are caused by using too large a deconvolution radius. Assuming the two operations are executed in sequence, the sharpening radius to use should then be smaller than the deconvolution radius, unless one is after creative sharpening.

Cheers,
Bart
Wayne Fox
« Reply #17 on: March 18, 2013, 05:50:28 PM »

if I can't see it, it doesn't matter. YMMV

Just because you can't see it, and so don't know you lost it, does that necessarily mean it doesn't matter?

I agree with your overall premise that you can stop down much further than the optimal f/stop and use sharpening to get a great image.  But like many other things, at some point it becomes a trade-off ... an acceptable loss of some information for the sake of better information, such as depth of field.  But doing a focus stack at a more optimal setting certainly may preserve some detail, as discussed by Bart ... "may" being the operative word, based on the scene itself.

For me, if things are static and I'm not in a hurry so I can focus stack, I will usually do it.  But sometimes that's not possible, and I'm certainly not going to skip shooting something just because I can't shoot it at f/8 or f/11.  I know at f/22 the D800 is very soft; hard to believe I'm not losing something.  f/16 is substantially better and I'm very comfortable shooting there.  On the Phase gear, f/22 is similar to f/16 on the Nikon, and even f/32 isn't as soft as the D800 is at f/22, so I will use those when I need to.
David Sutton
« Reply #18 on: March 19, 2013, 12:11:32 AM »

Hello Wayne. I quite agree. I too will continue to focus stack when I can. I like to choose from the different layers what will be sharp and what will be blurred.
Though I have certainly lost detail at f/22, if I can't see it in a print up to around 20 x 30 inches, does that necessarily mean it doesn't matter? Does it matter that I don't know what has been lost?
There are so many trade-offs about which we have to make personal decisions. After using the same camera for over three years I have a good idea what to expect from it visually, so I can usually tell, for example, if I've hand-held.  As I am more interested in exploring my memory of what I saw, and not so much in what the camera recorded (often a lot of pixel mangling involved), I'll take the "it doesn't matter" approach.  This wouldn't suit lots of folks.
So if I were only displaying on the web I'd certainly go to f/32. The fine detail is total mush, but once posted on the web I can't see it on a 1920x1200 screen.
Wayne Fox
« Reply #19 on: March 19, 2013, 01:13:33 AM »

Though I have certainly lost detail at f/22, if I can't see it in a print up to around 20 x 30 inches, does that necessarily mean it doesn't matter? Does it matter that I don't know what has been lost?

But it seems maybe you are "assuming" you would lose it anyway when it goes to print, so it doesn't matter.  I guess I'm not sure that's always the case.  Sometimes you may lose it and it doesn't matter, but there certainly may be times when it would improve the image if it were preserved, and it might be more visible than one might think.

I think we're basically on the same page ... just a different perspective on it.