deejjjaaaa


« Reply #100 on: July 31, 2010, 01:12:13 AM » 

We now have it on Eric Chan's authority that, when the detail slider is cranked up to 100%, the sharpening in ACR 6 is deconvolution based.

Actually, he was saying that it always involves deconvolution as long as the Detail slider is > 0; at 100 it is pure deconvolution, and between 0 and 100 it is a blend of the outputs of USM and deconvolution... unless Eric Chan wants to provide any further clarification.



Logged




Schewe


« Reply #101 on: July 31, 2010, 01:43:08 AM » 

But I would ask again, why do you want to throw dirt on deconvolution methods if you are lavishing praise on ACR 6.1?

I'm not throwing dirt on deconvolution methods other than to state that in MY experience (which is not inconsiderable) the effort does not bear the fruit that advocates seem to expound on. Read that to mean: I can't seem to find a solution better than the tools I'm currently using without going through EXTREME effort.

ACR 6.1 seems pretty darn good to me, how about you? You got any useful feedback to contribute? What do YOU want in image sharpening? Do you think computational solutions will solve everything? Have you actually learned how to use ACR 6.1? How many hours do YOU have in ACR 6.1? (The odds are I've prolly got a few more hours in ACR 6.1/6.2 than you might, and have worked to improve the ACR sharpening more than most people may have.)







ejmartin


« Reply #102 on: July 31, 2010, 01:55:27 AM » 

I'm not throwing dirt on deconvolution methods other than to state that in MY experience (which is not inconsiderable) the effort does not bear the fruit that advocates seem to expound on–read that to mean, I can't seem to find a solution better than the tools I'm currently using without going through EXTREME effort.
ACR 6.1 seems pretty darn good to me, how about you?
You got any useful feedback to contribute?
What do YOU want in image sharpening?
Do you think computational solutions will solve everything?
Have you actually learned how to use ACR 6.1?
How many hours do YOU have in ACR 6.1 (the odds are I've prolly got a few more hours in ACR 6.1/6.2 than you might–and worked to improve the ACR sharpening more than most people may have).

A clumsy attempt to change the subject. You still seem to be making an artificial distinction between deconvolution methods and ACR 6.x.




emil



Schewe


« Reply #103 on: July 31, 2010, 02:08:43 AM » 

A clumsy attempt to change the subject. You still seem to be making an artificial distinction between deconvolution methods and ACR 6.x.

No, I was responding to the actual results posted by Ray, which compared the 1K deconvolution output with ACR 6.1. What are you responding to? Simply the fact that I'm actually posting a response in this thread?







joofa


« Reply #104 on: July 31, 2010, 03:28:13 AM » 

Well, Joofa, you obviously appear to know what you are talking about. I confess I have almost zero knowledge about the Gibbs phenomenon, but I can appreciate that it may be useful to be able to identify and name any artifacts one may see in an image, especially if one is examining an X-ray of someone's medical condition, or indeed searching for evidence of alien life on a distant planet.

Hi Ray, I never said anything regarding the comparison of Bart's and your images. I just mentioned that not all "ringing" artifacts are Gibbs; in the usual deconvolution, if any ringing is found, it may not be Gibbs but may arise for other reasons.

A more technical note: the deconvolution problem is typically ill-posed, at least initially. In the continuous domain the usual distortion due to the blurring effect acts as an integral operator, and the problem statement boils down to a Fredholm integral equation of the first kind. In the discrete domain, in which we usually operate due to digitization, the inherent ill-posedness is inherited, while some of the problems are ameliorated. More well-behaved solutions can be obtained by introducing some sort of "smoothness" or regularization criterion at this stage.

Richardson-Lucy deconvolution converges to maximum-likelihood (ML) estimation. Maximum-likelihood techniques just do the analysis of the image data, and hence in general may not be smooth enough. However, some regularization is imparted by incorporating some notions regarding the a priori (default) distribution of the image data, converting the problem to max a priori (MAP) estimation, which might provide more acceptable results. Under assumptions of Gaussianity of certain image parameters (NOTE: not necessarily Gaussianity of the blur function), some equivalence of minimum mean square error (MMSE) estimation, linearity, and MAP estimation can be obtained.

Further optimizations can be introduced by using a more realistic non-stationary form of the blur function and variations of the image data distribution and noise distribution; the drawback is that one might have to forgo some quick operations in the form of fast Fourier transforms (FFT) embedded somewhere in many deconvolution techniques.
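Joofa's point that Richardson-Lucy iterates toward the maximum-likelihood estimate can be illustrated with a toy implementation. This is a minimal sketch assuming a known Gaussian PSF and periodic image edges; it is not what ACR or any other product actually does, and the helper names are invented for the demo.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def gaussian_psf(size=9, sigma=1.5):
    # Normalized Gaussian blur kernel; a stand-in PSF for the demo.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def conv2(img, k):
    # Same-size convolution via FFT (periodic edges -- fine for a demo).
    kp = np.zeros_like(img)
    s = k.shape[0]
    kp[:s, :s] = k
    kp = np.roll(kp, (-(s // 2), -(s // 2)), axis=(0, 1))
    return np.real(ifft2(fft2(img) * fft2(kp)))

def richardson_lucy(blurred, psf, iters=50, eps=1e-12):
    # The multiplicative RL update; iterating drives the estimate toward
    # the maximum-likelihood solution under Poisson noise, which is also
    # why noise gets amplified as the iteration count grows.
    blurred = np.maximum(blurred, 0.0)
    est = np.full_like(blurred, blurred.mean())
    psf_flip = psf[::-1, ::-1]
    for _ in range(iters):
        ratio = blurred / (conv2(est, psf) + eps)
        est = est * conv2(ratio, psf_flip)
    return est
```

On a synthetic blurred test chart this recovers edge contrast; on a real raw file one would need regularization (the "smoothness" criterion joofa mentions) to keep the noise in check.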


« Last Edit: July 31, 2010, 03:35:03 AM by joofa »





John R Smith


« Reply #105 on: July 31, 2010, 06:11:41 AM » 

A more technical note: the deconvolution problem is typically ill-posed, at least initially. In the continuous domain the usual distortion due to the blurring effect acts as an integral operator, and the problem statement boils down to a Fredholm integral equation of the first kind. In the discrete domain, in which we usually operate due to digitization, the inherent ill-posedness is inherited, while some of the problems are ameliorated. More well-behaved solutions can be obtained by introducing some sort of "smoothness" or regularization criterion at this stage. Richardson-Lucy deconvolution converges to maximum-likelihood (ML) estimation. Maximum-likelihood techniques just do the analysis of the image data, and hence in general may not be smooth enough. However, some regularization is imparted by incorporating some notions regarding the a priori (default) distribution of the image data, converting the problem to max a priori (MAP) estimation, which might provide more acceptable results. Under assumptions of Gaussianity of certain image parameters (NOTE: not necessarily Gaussianity of the blur function), some equivalence of minimum mean square error (MMSE) estimation, linearity, and MAP estimation can be obtained. Further optimizations can be introduced by using a more realistic non-stationary form of the blur function and variations of the image data distribution and noise distribution; the drawback is that one might have to forgo some quick operations in the form of fast Fourier transforms (FFT) embedded somewhere in many deconvolution techniques.

I suppose, sometimes, if I start feeling a bit pleased with myself and think that I know quite a lot about photography, it is probably good for me to be taken down a peg or two and realise that there are people in this world with whom I would actually be unable to communicate, except on the level of "Would you like a cup of tea?"

John




Hasselblad 500 C/M, SWC and CFV-39 DB and a case full of (very old) lenses and other bits



ErikKaffehr


« Reply #106 on: July 31, 2010, 08:50:55 AM » 

Hi,

My conclusion from the discussion is that:

1) It is quite possible to regain some of the sharpness lost to diffraction using deconvolution, even if the PSF (Point Spread Function) is not known. It also seems to be the case that we have deconvolution built into ACR 6.1 and LR 3.

2) Setting "Detail" high and varying the radius in LR3 and ACR 6.1 is a worthwhile experiment, but we may need to gain some more experience in how these tools should be used. My experience is that "deconvolution" in both ACR 6.1 and LR3 amplifies noise; we need to find out how to use all the settings to best effect.

Best regards
Erik

I suppose, sometimes, if I start feeling a bit pleased with myself and think that I know quite a lot about photography, it is probably good for me to be taken down a peg or two and realise that there are people in this world with whom I would actually be unable to communicate, except on the level of "Would you like a cup of tea?"

John







deejjjaaaa


« Reply #107 on: July 31, 2010, 09:01:10 AM » 

My experience is that "deconvolution" in both ACR 6.1 and LR3 amplifies noise; we need to find out how to use all the settings to best effect.

We can just ask Mr. Schewe, can't we? With his endless hours spent on sharpening with ACR he can just tell us and we will be all set.







John R Smith


« Reply #108 on: July 31, 2010, 09:14:01 AM » 

Hi,
My conclusion from the discussion is that:
1) It is quite possible to regain some of the sharpness lost to diffraction using deconvolution, even if the PSF (Point Spread Function) is not known. It also seems to be the case that we have deconvolution built into ACR 6.1 and LR 3.
2) Setting "Detail" high and varying the radius in LR3 and ACR 6.1 is a worthwhile experiment, but we may need to gain some more experience in how these tools should be used.
My experience is that "deconvolution" in both ACR 6.1 and LR3 amplifies noise; we need to find out how to use all the settings to best effect.
Best regards Erik

Erik, thank you so much for this summary, which even I can understand.

John







Ray


« Reply #109 on: July 31, 2010, 09:36:10 AM » 

I suppose, sometimes, if I start feeling a bit pleased with myself and think that I know quite a lot about photography, it is probably good for me to be taken down a peg or two and realise that there are people in this world with whom I would actually be unable to communicate, except on the level of "Would you like a cup of tea?"
John

A more technical note: the deconvolution problem is typically ill-posed, at least initially. In the continuous domain the usual distortion due to the blurring effect acts as an integral operator, and the problem statement boils down to a Fredholm integral equation of the first kind. In the discrete domain, in which we usually operate due to digitization, the inherent ill-posedness is inherited, while some of the problems are ameliorated. More well-behaved solutions can be obtained by introducing some sort of "smoothness" or regularization criterion at this stage. Richardson-Lucy deconvolution converges to maximum-likelihood (ML) estimation. Maximum-likelihood techniques just do the analysis of the image data, and hence in general may not be smooth enough. However, some regularization is imparted by incorporating some notions regarding the a priori (default) distribution of the image data, converting the problem to max a priori (MAP) estimation, which might provide more acceptable results. Under assumptions of Gaussianity of certain image parameters (NOTE: not necessarily Gaussianity of the blur function), some equivalence of minimum mean square error (MMSE) estimation, linearity, and MAP estimation can be obtained. Further optimizations can be introduced by using a more realistic non-stationary form of the blur function and variations of the image data distribution and noise distribution; the drawback is that one might have to forgo some quick operations in the form of fast Fourier transforms (FFT) embedded somewhere in many deconvolution techniques.

I sympathise with your frustration here, John, but let's not be intimidated by poor expression. Here's my translation, for what it's worth, sentence by sentence.

(1) "The deconvolution problem is typically ill-posed." means: The sharpening problem is often poorly defined. (That's easy.)

(2) "In the continuous domain the usual distortion due to the blurring effect acts as an integral operator and the problem statement boils down to a Fredholm integral equation of the first kind." means: The analog world, which is a smooth continuum, is different from the digital world with discrete steps. You need complex mathematics to deal with this problem, such as a Fredholm integral equation. (Whatever that is.)

(3) "In the discrete domain, in which we usually operate due to digitization, the inherent ill-posedness is inherited, while some of the problems are ameliorated." means: We're now stuck with the digital domain. There's a hangover from the analog world with incorrect definitions, but we can fix some of the problems. There's hope.

(4) "More well-behaved solutions can be obtained by introducing some sort of 'smoothness' or regularization criterion at this stage." means: We can achieve a balanced result by sacrificing detail for smoothness.

(5) "Richardson-Lucy deconvolution converges to maximum-likelihood (ML) estimation." means: The Richardson-Lucy method attempts to provide the best result, in terms of detail.

(6) "Maximum-likelihood techniques just do the analysis of the image data, and hence in general may not be smooth enough." means: The best result may introduce noise.

(7) "However, some regularization is imparted by incorporating some notions regarding the a priori (default) distribution of the image data, converting the problem to max a priori (MAP) estimation, which might provide more acceptable results." means: With a bit of experimentation we might be able to fix the noise problem.

(8) "Under assumptions of Gaussianity of certain image parameters (NOTE: not necessarily Gaussianity of the blur function), some equivalence of minimum mean square error (MMSE) estimation, linearity, and MAP estimation can be obtained." means: Gaussian mathematics is used to get the best estimate for sharpening purposes. (Gauss was a German mathematical genius, considered to be one of the greatest mathematicians who has ever lived. Far greater than Einstein, in the field of mathematics.)

(9) "Further optimizations can be introduced by using a more realistic non-stationary form of the blur function and variations of the image data distribution and noise distribution; the drawback is that one might have to forgo some quick operations in the form of fast Fourier transforms (FFT) embedded somewhere in many deconvolution techniques." means: You can get better results if you take more time and have more computing power.

Okay! Maybe I've missed a few nuances in my translation. No one's perfect. Any improved translation is welcome.







John R Smith


« Reply #110 on: July 31, 2010, 09:55:00 AM » 

Well, good shot at it, Ray. I'm afraid I never got any further than O-Level maths, and I only just scraped that.
Don't mind me, do carry on chaps
John







crames


« Reply #111 on: July 31, 2010, 11:32:24 AM » 

I'm afraid that sharpening cannot overcome the hard limit on resolution due to diffraction.

Here are versions of some of the posted images where the high and low frequencies have been separated into layers. They show what is being sharpened: only the detail that remains below the diffraction limit. The detail above the diffraction limit is lost and is not being recovered.

- Original Crop (Undiffracted)
- Diffracted Crop
- Lucy Deconvolution
- Ray FM Sharpened

The Lowpass layers include all of the detail that is enclosed within the central diffraction-limit "oval" seen in the spectra I posted before. The Hipass layers include everything else outside of the central diffraction oval. The following is a comparison of the Lowpass layers. This is where the sharpening is taking place, and it amounts only to approaching the quality of the Lowpass of the Original Crop.

Look at the Original Crop Hipass layer. This shows all the fine detail that the eye is craving, but which hasn't come back with any of the sharpening attempts. For fun, paste a copy of the original Hipass layer in Overlay mode onto any of the sharpened versions. Or double the Hipass layer for a super-sharp effect.

Since diffraction pretty much wipes out the detail outside of the diffraction limit, deconvolution sharpening is generally limited to massaging whatever is left within the central cutoff. From what I've read, detail beyond the diffraction cutoff has to be extrapolated (the "Gerchberg method", for one), or otherwise estimated from the lower-frequency information. The methods are generally called "superresolution". The Lucy method, due to a nonlinear step in the processing, is supposed to have an extrapolating effect, but I'm not sure if it's visible here.

Cliff
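The Lowpass/Hipass split described above can be mimicked with an ideal circular filter in the Fourier domain. A minimal sketch in Python; the circular cutoff here is an assumed stand-in for the actual diffraction-limit "oval" of a real lens, and the function name is invented for the demo.

```python
import numpy as np
from numpy.fft import fft2, ifft2, fftfreq

def split_at_cutoff(img, cutoff):
    # Split an image into the detail below and above a spatial-frequency
    # cutoff (in cycles/pixel) using an ideal circular lowpass mask.
    # Everything inside the circle goes to the Lowpass layer, everything
    # outside to the Hipass layer; the two layers sum back to the image.
    fy = fftfreq(img.shape[0])[:, None]
    fx = fftfreq(img.shape[1])[None, :]
    mask = (fx**2 + fy**2) <= cutoff**2
    F = fft2(img)
    low = np.real(ifft2(F * mask))
    high = np.real(ifft2(F * ~mask))
    return low, high
```

Pasting `high` from an undiffracted original over a diffracted copy is the Fourier-domain equivalent of the Overlay-layer experiment suggested above.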


« Last Edit: July 31, 2010, 11:33:17 AM by crames »


Cliff



joofa


« Reply #112 on: July 31, 2010, 11:45:50 AM » 

However, some regularization is imparted by incorporating some notions regarding the a priori (default) distribution of image data, and hence, converting the problem to max a priori (MAP) estimation,

Sorry, I made a typo. The above should read maximum a posteriori estimation, not a priori estimation.

Hi Ray, you did an interesting translation.

Since diffraction pretty much wipes out the detail outside of the diffraction limit, deconvolution sharpening is generally limited to massaging whatever is left within the central cutoff. From what I've read, detail beyond the diffraction cutoff has to be extrapolated (the "Gerchberg method", for one), or otherwise estimated from the lower-frequency information. The methods are generally called "superresolution". The Lucy method, due to a nonlinear step in the processing, is supposed to have an extrapolating effect, but I'm not sure if it's visible here.

Yes, the Gerchberg technique is effective in theory (because a band-limited signal is analytic and hence extrapolatable), but in practice noise limitations stop such solutions from becoming very effective.
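The Gerchberg idea is simple to sketch: alternately re-impose the measured in-band spectrum and the known finite spatial support of the object. A toy 1-D version, assuming noiseless data (which, as noted above, is exactly the assumption that fails in practice); names and parameters are invented for the demo.

```python
import numpy as np
from numpy.fft import fft, ifft, fftfreq

def gerchberg_extrapolate(measured, passband, support, iters=500):
    # Gerchberg-Papoulis band-limited extrapolation.
    # measured: lowpassed signal; passband: boolean mask over FFT bins
    # that were actually observed; support: boolean mask where the
    # object is known to live. Alternating the two constraints slowly
    # rebuilds out-of-band frequencies -- with noise it breaks down fast.
    known = fft(measured)
    est = measured.copy()
    for _ in range(iters):
        spec = fft(est)
        spec[passband] = known[passband]   # keep what was measured
        est = np.real(ifft(spec))
        est[~support] = 0.0                # enforce finite support
    return est
```

Each iteration is a projection onto a constraint set, so the error to the true signal never increases; the catch is that the out-of-band gain is exactly what amplifies noise, hence the huge SNR requirements mentioned later in the thread.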


« Last Edit: July 31, 2010, 11:49:09 AM by joofa »





ejmartin


« Reply #113 on: July 31, 2010, 12:16:03 PM » 

Hi Cliff,
A rather illuminating way of looking at things.
I don't think one is expecting miracles here, like undoing the Rayleigh limit; zero MTF is going to remain zero. But nonzero microcontrast can be boosted back up to close to pre-diffraction levels, and the deconvolution methods seem to be doing that rather well.
I am wondering whether a good denoiser (perhaps Topaz, which seems to use NL-means methods) can help squelch some of the noise amplified by deconvolution without losing recovered detail such as the venetian blinds.







joofa


« Reply #114 on: July 31, 2010, 01:18:54 PM » 

zero MTF is going to remain zero.

Not in theory. For bounded functions the Fourier transform is an analytic function, which means that if it is known over a certain range then techniques such as analytic continuation can be used to extend the solution to the whole frequency range. However, as I indicated earlier, such resolution-boosting techniques have difficulty in practice due to noise. It has been estimated in a particular case that to succeed in analytic continuation an SNR (amplitude ratio) of 1000 is required.

I don't think one is expecting miracles here, like undoing the Rayleigh limit

IIRC, even in the presence of noise, it has been claimed that a twofold to fourfold improvement of the Rayleigh resolution in the restored image over that in the acquired image may be achieved, if the transfer function of the imaging system is known sufficiently accurately, using these resolution-boosting analytic continuation techniques.


« Last Edit: July 31, 2010, 01:54:07 PM by joofa »





eronald


« Reply #115 on: July 31, 2010, 02:34:48 PM » 

Ray,

Unfortunately, from about (5) my feeling is that your jargon-reduction algorithm is oversmoothing and losing semantic detail. But then, what do I know?

Edmund







eronald


« Reply #116 on: July 31, 2010, 02:37:14 PM » 

IIRC, even in the presence of noise, it has been claimed that a twofold to fourfold improvement of the Rayleigh resolution in the restored image over that in the acquired image may be achieved if the transfer function of the imaging system is known sufficiently accurately and using the resolution boosting analytic continuation techniques.

"It has been claimed"... references, please?

Edmund








crames


« Reply #118 on: July 31, 2010, 03:17:32 PM » 

I don't think one is expecting miracles here, like undoing the Rayleigh limit; zero MTF is going to remain zero. But nonzero microcontrast can be boosted back up to close to pre-diffraction levels, and the deconvolution methods seem to be doing that rather well.
I am wondering whether a good denoiser (perhaps Topaz, which seems to use NL-means methods) can help squelch some of the noise amplified by deconvolution without losing recovered detail such as the venetian blinds.

Hi Emil,

No, I agree, deconvolution sharpening is certainly useful, since most images don't have f/32 diffraction, and there is real detail that can be restored. The RL sharpening of RawTherapee does a very good job, with just a Gaussian kernel. I wonder if knowing the exact PSF for deconvolution could be any better? Somehow I doubt it (except in the case of motion blur, which has PSFs like jagged snakes).

Simple techniques that boost high frequencies can also do the job, exposing detail as long as the detail is there in the first place. A simple way that I use to sharpen is a variation on high-pass sharpening. Instead of the High Pass filter, I convolve with an inverted "Laplacian" kernel in the PS Custom Filter. I think it reduces haloing:

 0  -1  -2  -1   0
 0  -2  12  -2   0
 0  -1  -2  -1   0

Scale: 4, Offset: 128

This filter has a response that slopes up from zero, roughly the opposite of the slope of a lens MTF. (The strength can be varied by changing Scale.) I copy the image to a new layer, change the mode to Overlay (or Hard Light, etc.), then run the above filter on the layer copy. Noise can be controlled by applying a little Surface Blur on the filtered layer. With a little tweaking the results can approach FM and be even less noisy.

Although this usually works pretty well, it didn't on Bart's f/32 diffraction image, hence the little investigation...

Cliff
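The Custom-filter recipe can be reproduced outside Photoshop. A sketch in Python (numpy/scipy assumed); the surround weights are written as negatives so the kernel sums to zero, which is what produces the zero-DC, rising high-frequency response described for this filter. Function name and defaults are invented for the demo.

```python
import numpy as np
from scipy.signal import convolve2d

# Inverted-Laplacian kernel: positive center, negative surround,
# summing to zero so flat areas map exactly to the Offset gray.
KERNEL = np.array([
    [0, -1, -2, -1, 0],
    [0, -2, 12, -2, 0],
    [0, -1, -2, -1, 0],
], dtype=float)

def ps_custom_filter(img, kernel=KERNEL, scale=4.0, offset=128.0):
    # Photoshop's Custom filter computes (image convolved with kernel)
    # divided by Scale, plus Offset, clipped to the 8-bit range.
    # Overlaying the result on the original boosts high frequencies.
    out = convolve2d(img, kernel, mode="same", boundary="symm")
    return np.clip(out / scale + offset, 0.0, 255.0)
```

On a flat area the output sits exactly at mid-gray (128), so an Overlay blend leaves smooth regions untouched while edges are pushed above and below mid-gray.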







ejmartin


« Reply #119 on: July 31, 2010, 03:35:25 PM » 

The RL sharpening of RawTherapee does a very good job, with just a Gaussian kernel. I wonder if knowing the exact PSF for deconvolution could be any better? Somehow I doubt it (except in the case of motion blur, which has PSFs like jagged snakes.)

Thanks for the PS tip.

Well, for lens blur I imagine it could be a bit better to use something more along the lines of a rounded-off "top hat" filter (perhaps more of a bowler) rather than a Gaussian, since that more accurately approximates the structure of OOF specular highlights, which in turn ought to reflect (no pun intended) the PSF of lens blur.

Another thing that RT lacks is any kind of adaptivity in its RL deconvolution; that could mitigate some of the noise amplification if done properly. The question is whether that would add significantly to the processing time. It's on my list of things to look into.
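The "top hat" alternative to a Gaussian PSF is easy to sketch: a uniform disc, which is the classic geometric-optics model of defocus blur (softening its rim would give the "bowler"). This is an invented helper for illustration, not RawTherapee code.

```python
import numpy as np

def pillbox_psf(radius, size=None):
    # Uniform disc ("top hat") kernel: every pixel whose center lies
    # within `radius` of the kernel center gets equal weight, matching
    # the flat, hard-edged look of out-of-focus specular highlights.
    if size is None:
        size = 2 * int(np.ceil(radius)) + 1
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = (xx**2 + yy**2 <= radius**2).astype(float)
    return psf / psf.sum()
```

Swapping this in for the Gaussian in a Richardson-Lucy loop is a one-line change; whether it actually restores lens blur better depends on how well the disc matches the real aperture shape.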


« Last Edit: July 31, 2010, 04:57:30 PM by ejmartin »





