Pages: « 1 ... 8 9 [10] 11 12 ... 18 »
Author Topic: Deconvolution sharpening revisited  (Read 110974 times)
ejmartin
Sr. Member
Posts: 575
« Reply #180 on: September 05, 2010, 08:59:12 PM »

And then there is the OLPF convolved with the Airy pattern before it gets to the box blur of the sensels.  It all makes me suspect that a Gaussian is going to be a reasonable approximation in the end, given all the inaccuracies introduced all along the way.  Do you think that the difference between using the precise PSF and Gaussian is going to be noticeable?

emil
BartvanderWolf
Sr. Member
Posts: 3765
« Reply #181 on: September 06, 2010, 04:50:22 AM »

Quote
And then there is the OLPF convolved with the Airy pattern before it gets to the box blur of the sensels.  It all makes me suspect that a Gaussian is going to be a reasonable approximation in the end, given all the inaccuracies introduced all along the way.

Correct. The issue is indeed that there are several cascading PSFs involved. They may or may not produce local maxima or minima when combined, so it is hard to predict how important the difference will be. One approach to estimating the PSF of the unknown OLPF is to factor out known components, such as diffraction and/or defocus, from the combined system PSF. That makes the resulting mix of PSF contributions more predictable and steerable with a simpler (Gaussian) model. The recombined PSF can then be used for efficient processing.
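To make the cascade concrete, here is a small Python sketch (all kernels and values are illustrative assumptions, not measured camera data). Convolving the component PSFs gives the system PSF, and since variances add under convolution, a single Gaussian with the summed variance is the natural approximation discussed above (essentially the central limit theorem at work):

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel(n, sigma):
    ax = np.arange(n) - n // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

# Three stand-in component PSFs (values are illustrative assumptions):
diffraction = gaussian_kernel(9, 0.9)            # Airy core, roughly Gaussian
olpf = np.zeros((3, 3))
olpf[0, 0] = olpf[0, 2] = olpf[2, 0] = olpf[2, 2] = 0.25   # 4-dot OLPF split
box = np.full((3, 3), 1.0 / 9.0)                 # coarse sensel box blur

# Cascaded system PSF: convolve the components in sequence
system = convolve2d(convolve2d(diffraction, olpf), box)

def var_of(k):
    """Per-axis second moment of a normalized, centered kernel."""
    n = k.shape[0]
    ax = np.arange(n) - n // 2
    xx, yy = np.meshgrid(ax, ax)
    return (k * (xx**2 + yy**2)).sum() / 2

# Variances add under convolution, so one Gaussian approximates the cascade
sigma_eq = np.sqrt(var_of(diffraction) + var_of(olpf) + var_of(box))
approx = gaussian_kernel(system.shape[0], sigma_eq)
print("equivalent sigma:", round(float(sigma_eq), 3))
print("max abs difference:", float(np.abs(system - approx).max()))
```

The printed difference gives a feel for how much the 4-dot OLPF structure deviates from a pure Gaussian end model.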

Quote
Do you think that the difference between using the precise PSF and Gaussian is going to be noticeable?

The difference will not be huge most of the time, but probably noticeable when the best result is required. The examples earlier in the thread show that even the more general solutions do a reasonably good job, but there is more potential to be utilized with a 'perfect' PSF. Also note that I've only used the Richardson-Lucy deconvolution algorithm because it's readily available in several programs and allows people to do their own experiments, but there are modern variants available that perform better. And who knows what the future has in store ... I think that if it is possible to get closer to the actual PSF by doing a little extra preprocessing, then the effort is justified and will pay off in the end (even if only as an option for less time-critical or processor-intensive jobs). The insights may also lead to new efficient shortcuts.

It is probably my experience with quality control that has taught me that sloppiness early in the process takes more effort to set straight at the end. That's why I tend to seek optimization early in the chain of events (which also means when taking the image); with cascading deterioration it's best to intervene early.

Cheers,
Bart
« Last Edit: September 06, 2010, 12:57:08 PM by BartvanderWolf »
XFer
Newbie
Posts: 12
« Reply #182 on: September 06, 2010, 09:19:16 AM »

Ok but let's not forget special-purpose deconvolution.
It's not only about inverting diffraction or box blur; it's also about spherical aberration, coma, defocusing, motion blur.
That's why we need a way to comfortably explore different PSFs.
I have this small utility that, at the moment, can accept hardcoded kernels, but I can extend it to load a kernel from file.
The problem is having meaningful PSFs to experiment with.

Right now I have an Excel sheet which can compute a 9x9 diffraction kernel (input parameters are pixel pitch, f-number, wavelength), but that's too limited.
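For reference, the same calculation can be sketched outside a spreadsheet. The following Python snippet (function name and defaults are assumptions; it point-samples the Airy intensity at pixel centers rather than integrating over the sensel aperture) builds a normalized 9x9 diffraction kernel from pixel pitch, f-number and wavelength:

```python
import numpy as np
from scipy.special import j1   # Bessel function of the first kind, order 1

def airy_psf_kernel(pitch_um=6.4, f_number=32.0, wavelength_nm=564.0, size=9):
    """Point-sampled Airy diffraction kernel at the sensel pitch (a sketch)."""
    wl_um = wavelength_nm / 1000.0
    ax = (np.arange(size) - size // 2) * pitch_um   # radial coords in microns
    xx, yy = np.meshgrid(ax, ax)
    r = np.hypot(xx, yy)
    # Airy intensity: (2*J1(x)/x)^2 with x = pi * r / (wavelength * N)
    x = np.pi * r / (wl_um * f_number)
    with np.errstate(invalid="ignore", divide="ignore"):
        k = (2.0 * j1(x) / x) ** 2
    k[r == 0] = 1.0          # limit of (2*J1(x)/x)^2 as x -> 0
    return k / k.sum()       # normalize to unit volume

kernel = airy_psf_kernel()
print(kernel.shape)
```

With these defaults (564 nm, f/32) the first Airy minimum falls at a radius of about 22 microns, i.e. a 44 micron diameter, consistent with the f/32 kernel discussed later in this thread.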
BartvanderWolf
Sr. Member
Posts: 3765
« Reply #183 on: September 06, 2010, 12:50:40 PM »

Quote
Ok but let's not forget special-purpose deconvolution.
It's not only about inverting diffraction or box blur; it's also about spherical aberration, coma, defocusing, motion blur.
That's why we need a way to comfortably explore different PSFs.

I agree, and that's why (as I've disclosed earlier) I'm working on a flexible tool to do just that.

Quote
I have this small utility that, at the moment, can accept hardcoded kernels, but I can extend it to load a kernel from file.
The problem is having meaningful PSFs to experiment with.

Since I've already invested considerable resources in this project, I hope you can understand that I'm not going to give everything away for free, but when my PSF generator (part of a much larger set of integrated software solutions) is a bit further in its development I will need some beta testers. I'm making progress, but there is a lot to do to make it usable for normal human beings, and I unfortunately have to protect my intellectual property against copyright violations and reverse engineering. I will probably need to apply for some patent protection for the proprietary stuff further down the line as well.

Quote
Right now I have an Excel sheet which can compute a 9x9 diffraction kernel (input parameters are pixel pitch, f-number, wavelength), but that's too limited.  Undecided

Yes, I know the frustration. I've been using spreadsheets for a long time as well, but one really needs better integration with other software. There is also the feeling that the image-processing industry has been dragging its feet (remember Pixmantec's Rawshooter, Photomatix, etc.), and it's up to the smaller innovators to really get things moving forward. That's why I started my project. It's just that my resources are limited, so it can take a bit longer before it's commercially available, but the potential looks promising.

If you need certain kernels to play with, I'm willing to help and make a few available as data files, just like the f/32 diffraction kernel (for a 6.4 micron sensel pitch) I shared in this thread. Send me a PM about what you are thinking of, and we can work something out so you can continue your investigations which might e.g. help RawTherapee.

Cheers,
Bart
XFer
Newbie
Posts: 12
« Reply #184 on: September 07, 2010, 09:06:20 AM »

Hi Bart,
thanks for the offer; anyway I'm tooling up to build the basic PSF I need for now (Airy patterns, Gaussians, convolutions of the two).

I have a few questions regarding how to get discrete kernels from these continuous functions.

1) Do you just evaluate the function at the grid points? This would be like using a rectangular filtering window for the sampling. Doesn't this lead to issues? Or are you using more sophisticated ways to get the samples (triangular windows, Chebyshev windows etc.)?
2) For Airy patterns which emulate strong diffraction (F/32 and beyond), even a 9x9 kernel leaves out a certain percentage of the total signal intensity.
Are you just truncating the function at the edges of the kernel, or do you perform some kind of smoothing? I think that just truncating could lead to ripples -> ringing on the image.
3) Especially for "tight & pointy" PSFs (think small-radius Gaussians), I have the feeling that a grid with a pitch of 1 pixel is too rough. Too much approximation from the continuous function to the kernel. I think we're going to need sub-pixel accuracy to avoid some artifacts (mosquitos around high-contrast details, ringing, edge overshooting, noise amplification, hot pixels).
What do you think about it?

Thanks a lot.

Fernando

PS: if anyone is interested, I can publish actual kernels of the type mentioned above.
MichaelEzra
Sr. Member
Posts: 659
« Reply #185 on: September 07, 2010, 10:21:42 AM »

Fernando,

you might also look at ImageJ (http://rsbweb.nih.gov/ij/)

and PSF generator plugin:

http://bigwww.epfl.ch/deconvolution/?p=plugins

Cheers,
BartvanderWolf
Sr. Member
Posts: 3765
« Reply #186 on: September 07, 2010, 11:20:20 AM »

Hi Fernando,

I'll try to avoid boring the other readers of this thread with (too many) programming details, but I also don't want to give the (wrong) impression that I'm avoiding an answer. I like to help others where possible, so I'll answer in general terms and propose to deal with the specifics in a PM if needed.

Quote
I have a few questions regarding how to get discrete kernels from these continuous functions.

1) Do you just evaluate the function at the grid points? This would be like using a rectangular filtering window for the sampling. Doesn't this lead to issues? Or are you using more sophisticated ways to get the samples (triangular windows, Chebyshev windows etc.)?

I evaluate the functions over the finite area of the sensel apertures. For reasons of calculation efficiency (=speed) that may be done either by integration in the spatial domain or by filtering in the frequency domain.

Quote
2) For Airy patterns which emulate strong diffraction (F/32 and beyond), even a 9x9 kernel leaves out a certain percentage of the total signal intensity.
Are you just truncating the function at the edges of the kernel, or do you perform some kind of smoothing? I think that just truncating could lead to ripples -> ringing on the image.

In principle I do not use a specific fixed size for the filter kernels (unless dictated by other software implementations), but due to the significant impact on processing time, one does need to accept some sort of trade-off sooner or later. Fortunately most defects can be tackled with reasonably sized kernels before we are faced with diminishing returns. When the filter exhibits significant signal at the edges, it is wise to use a windowing function to suppress the potential ringing. Whether to choose windowed functions or larger kernels depends on the particular dimensions and goals. Heuristics can be used to switch between the methods.
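As a sketch of such a windowing step (the Hann taper and the renormalization are my assumptions, not a description of Bart's tool): the kernel is tapered toward zero at its edges and renormalized, so a hard truncation boundary does not introduce ringing.

```python
import numpy as np

def hann_window_2d(size):
    """Separable Hann taper for a square kernel; the interior of a slightly
    longer window is used so the outermost cells are damped but not zeroed."""
    w1 = np.hanning(size + 2)[1:-1]
    return np.outer(w1, w1)

def windowed(kernel):
    """Taper a truncated PSF kernel toward zero at its edges, then
    renormalize, to suppress the ringing a hard cutoff can cause (a sketch)."""
    k = kernel * hann_window_2d(kernel.shape[0])
    return k / k.sum()

# e.g. taper a (here trivially flat) 9x9 kernel
k9 = windowed(np.ones((9, 9)) / 81.0)
```

Applied to a real truncated Airy kernel, the taper trades a little accuracy at the kernel edge for suppressed ripple in the restored image.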

Quote
3) Especially for "tight & pointy" PSFs (think small-radius Gaussians), I have the feeling that a grid with a pitch of 1 pixel is too rough. Too much approximation from the continuous function to the kernel. I think we're going to need sub-pixel accuracy to avoid some artifacts (mosquitos around high-contrast details, ringing, edge overshooting, noise amplification, hot pixels).
What do you think about it?

From a mathematical point of view it's not important as long as enough calculation precision is maintained, e.g. by clever programming, or by using floating-point or rational numbers. Of course there is a limit to the usefulness of very small kernels, but because of their limited support it is also not too expensive to process them in floating point, even in the spatial domain.
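A sketch of the aperture-integration idea in Python (names and the oversampling factor are assumptions): each kernel cell averages many sub-samples of the continuous PSF across the pixel area, which bears on questions 1 and 3 above — point sampling overshoots the peak of a tight Gaussian, while integrating over the sensel aperture flattens it.

```python
import numpy as np

def supersampled_kernel(psf, size=9, oversample=8):
    """Discretize a continuous PSF by integrating over each sensel aperture.

    psf(x, y) is a continuous 2-D function in pixel units. Each kernel cell
    averages oversample x oversample sub-samples across the pixel area,
    approximating the aperture integral (a sketch with assumed names).
    """
    n = size * oversample
    # sub-sample coordinates at sub-pixel centers, in pixel units
    ax = (np.arange(n) + 0.5) / oversample - size / 2.0
    xx, yy = np.meshgrid(ax, ax)
    fine = psf(xx, yy)
    # box-average each oversample x oversample block down to one kernel cell
    k = fine.reshape(size, oversample, size, oversample).mean(axis=(1, 3))
    return k / k.sum()

# Example: a "tight & pointy" Gaussian with sigma = 0.45 pixels
g = lambda x, y, s=0.45: np.exp(-(x**2 + y**2) / (2 * s**2))
k_point = supersampled_kernel(g, size=3, oversample=1)   # naive point sampling
k_integ = supersampled_kernel(g, size=3, oversample=16)  # aperture-integrated
print(np.round(k_point, 4))
print(np.round(k_integ, 4))
```

Comparing the two printed kernels shows the center weight dropping once the sub-pixel structure is integrated out.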

Cheers,
Bart
« Last Edit: September 07, 2010, 12:14:07 PM by BartvanderWolf »
XFer
Newbie
Posts: 12
« Reply #187 on: September 07, 2010, 05:24:55 PM »

A quick example, on a diffraction-limited image.
100% crops (raw-converted with Raw Therapee 3.0alpha and ejmartin's AMaZE algorithm).

First picture.
On the left a shot taken with a Canon 5DmkII and 135/2L at f/5.6.
On the right, same stuff at f/22.
Both shots unsharpened.

The difference is obvious.
The right image is diffraction-limited (the 135/2L is a very sharp lens).

Second picture: same images, sharpened with R-L deconvolution.
f/5.6 on the left, f/22 on the right.

The f/5.6 shot is still sharper, but look at aliasing artifacts.
The lens transmitted spatial frequencies far beyond the Nyquist limit and the AA filter could not do much about it.
Smaller patterns are totally destroyed by aliasing.
The f/22 is a bit softer, but almost entirely aliasing-free; I'd say that almost all the usable details are there, and smaller patterns are much more gracefully handled.

Fernando
BartvanderWolf
Sr. Member
Posts: 3765
« Reply #188 on: September 08, 2010, 04:59:23 AM »

Quote
Second picture: same images, sharpened with R-L deconvolution.
f/5.6 on the left, f/22 on the right.

The f/5.6 shot is still sharper, but look at aliasing artifacts.
The lens transmitted spatial frequencies far beyond the Nyquist limit and the AA filter could not do much about it.

Well, it shows that the OLPF does not prevent all aliasing, although it does reduce the risk of it occurring. It also shows that the dreaded stories about destructively blurring one's image are grossly exaggerated compared with the effects of poor technique (or unavoidable DOF compromises).

What type of PSF did you use for the RL restoration? Was it diffraction only, or a (mix with a) Gaussian type?

Quote
Smaller patterns are totally destroyed by aliasing.
The f/22 is a bit softer, but almost entirely aliasing-free; I'd say that almost all the usable details are there, and smaller patterns are much more gracefully handled.

The restoration also shows that additional precautions need to be taken to avoid processing low S/N areas, to constrain the grittiness when extreme processing is required. While diffraction can be used as an AA tool, it does have a negative effect on per-pixel microdetail. An interesting fact is that slight defocus has a very dramatic effect on aliasing, so that can be used for certain (flat) structures, whereas diffraction has a more gentle effect on AA suppression. When a subject is positioned at e.g. 5 metres (some 16.4 feet) distance, shifting the focus plane to 5.10 metres will, with a 135mm lens at f/5.6, create a blur disc of 13.12 micron diameter, which is more than 2 sensel widths on a 6.4 micron sensel pitch sensor array. It will effectively halve the resolution capability, although deconvolution can restore part of that (with reduced aliasing). So using a wider aperture will kill more moiré, except in the plane of focus, while a narrower aperture will kill moiré even in the plane of optimal focus.
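The blur-disc figure above can be checked with thin-lens geometry; a quick sketch (variable names are my own):

```python
# Thin-lens check of the defocus blur disc quoted above.
f = 135.0        # focal length, mm
N = 5.6          # f-number
s_subj = 5000.0  # subject distance, mm
s_foc = 5100.0   # focus-plane distance, mm

# image distances from the thin-lens equation 1/f = 1/s + 1/v
v_subj = f * s_subj / (s_subj - f)
v_foc = f * s_foc / (s_foc - f)

# blur disc at the sensor (which sits at v_foc) by similar triangles
A = f / N                                # aperture diameter, mm
c = A * abs(v_subj - v_foc) / v_subj     # blur circle diameter, mm
print(round(c * 1000, 2), "micron")      # ~13.1 micron, > 2 sensels of 6.4
```

The result lands within rounding of the 13.12 micron figure, i.e. slightly more than two 6.4 micron sensels.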

Cheers,
Bart
« Last Edit: July 13, 2011, 05:38:56 AM by BartvanderWolf »
Christoph C. Feldhaim
Sr. Member
Posts: 2509
There is no rule! No - wait ...
« Reply #189 on: September 08, 2010, 05:31:14 AM »

That sounds interesting!
So - thinking about scanning negatives it would probably
make sense to manually defocus a bit to reduce the gritty sky
and then use deconvolution later to get back details.
I have a Nikon LS 9000 scanner which allows for controlled, manual defocusing,
but I'm always fighting with pseudo-aliasing (film grain/scanner CCD interaction).
BartvanderWolf
Sr. Member
Posts: 3765
« Reply #190 on: September 08, 2010, 08:06:41 AM »

Quote
That sounds interesting!
So - thinking about scanning negatives it would probably
make sense to manually defocus a bit to reduce the gritty sky
and then use deconvolution later to get back details.
I have a Nikon LS 9000 scanner which allows for controlled, manual defocusing,
but I'm always fighting with pseudo-aliasing (film grain/scanner CCD interaction).


Hi Christoph,

Film scans are a bit different in this respect, because we are talking about a second-generation capture (analog camera on film, analog film on discrete sampling sensor) and the layered grain filaments (or clouds of dye) are the image. Noise or grain in the presence of signal hampers the successful restoration of only the signal component. For film scans the best approach is to avoid grain aliasing by oversampling (scanning at 6000-8000 PPI, or at least at the maximum native scanner resolution). Then, after pre-blurring (=convolution) and adaptive noise reduction, downsampling can take place, followed by final output sharpening. It's there that I see the most use for deconvolution, after a number of preprocessing steps.
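The order of operations in that workflow can be sketched as follows (sigma and factor are assumed values, and the adaptive noise-reduction step between them is omitted):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def scan_downsample(scan, factor=0.5, preblur_sigma=1.0):
    """Pre-blur, then downsample — a sketch of the grain-aliasing-avoiding
    order of operations described above. The pre-blur acts as the low-pass
    filter that the resampling step needs to avoid aliasing the grain."""
    return zoom(gaussian_filter(scan, preblur_sigma), factor, order=3)

scan = np.random.default_rng(1).random((64, 64))   # stand-in for a film scan
small = scan_downsample(scan)
print(small.shape)
```

Final output sharpening (e.g. deconvolution) would then be applied to the downsampled result.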

With direct digital capture we are faced with a number of processing steps that all introduce loss of resolution due to the digitization. Just like with film, it already starts with the lens and aperture used, but the AA filter (if present) and the CFA also add their specific fingerprint to the mix, as does the Raw converter. That digitization process is well-explored territory in Digital Signal Processing. It's the application to general photographic imaging that seems to be picking up way too slowly; it is in fact holding back (creative) progress in some areas.

Properly addressing it also has more applications than many realise. Not only restoring sharpness and recovering from motion blur, but also resampling, and even adjustable/variable DOF and glare reduction can be tackled with PSFs and deconvolution.

Exciting times are ahead...

Cheers,
Bart
XFer
Newbie
Posts: 12
« Reply #191 on: September 08, 2010, 11:08:02 AM »

Hi Bart,

for this example I used a basic 3x3 Gaussian approximation; I ran my "strange" version of R-L which allows me very small sigmas (0.45 here) and few iterations (10 here).

I can't seem to avoid some ringing when using "true" R-L.

I still think that sub-pixel processing and smooth sampling windows are needed to fully exploit R-L with larger kernels.

Fernando
XFer
Newbie
Posts: 12
« Reply #192 on: September 08, 2010, 11:11:04 AM »

Quote
I have a Nikon LS 9000 scanner which allows for controlled, manual defocusing,
but I'm always fighting with pseudo-aliasing (film grain/scanner CCD interaction).

I posted a sample a few days ago: film scanning with Nikon 8000 and R-L deconvolution vs. standard USM

In case you missed it, it may be of interest:

http://www.luminous-landscape.com/forum/index.php?topic=45038.msg384006#msg384006

Fernando
sjprg
Newbie
Posts: 41
« Reply #193 on: December 20, 2010, 09:05:46 PM »

Did this subject die out? Haven't seen an entry since September 2010.

Paul
Galleries: www.sjprg.us
              www.pbase.com/sjprg
feppe
Sr. Member
Posts: 2909
Oh this shows up in here!
« Reply #194 on: December 21, 2010, 12:40:10 PM »

Quote
Did this subject die out? Haven't seen an entry since September 2010.

It was until you started grave-digging.
eronald
Sr. Member
Posts: 4121
« Reply #195 on: December 21, 2010, 03:05:37 PM »

Quote
It was until you started grave-digging.

Nah, the topic got oversharpened, and suffered from ringing and a noise explosion.

As Steve would say:

Divide by Zero, (Apple)Core dumped!

Edmund
David Ellsworth
Newbie
Posts: 11
« Reply #196 on: February 01, 2011, 04:30:34 PM »

I hope no one minds my entering this a bit late. I've written a small C program that does deconvolution by Discrete Fourier Transform division (using the library "FFTW" to do Fast Fourier Transforms). To me this seems to beat all the other deconvolution algorithms that have been presented in this thread... I'd like to see what others think.

My algorithm currently only works on square images, and deals with edge effects very badly, so I added black borders to Bart's max-quality jpeg crop making it 1115x1115, then applied the convolution myself using his provided 9x9 kernel. I operated entirely on 16-bit per channel data. The exact convoluted image that I worked from can be downloaded here (5.0 MB PNG file). My result, saved as a maximum-quality JPEG (after re-cropping it to 1003x1107): 0343_Crop+Diffraction+DFT_division_v2.jpg (1.2 MB)

My algorithm takes a single white pixel on a black background the same size as the original image, applies the PSF blur to it, and divides the DFT of the blurred pixel by the DFT of the pixel (element by element, using complex arithmetic); this takes advantage of the fact that the DFT of a single white pixel in the center of a black background has a uniformly gray DFT. Then it takes the DFT of the convoluted image and divides this by the result from the previous operation. Division on a particular element is only done if the divisor is above a certain threshold (to avoid amplifying noise too much, even noise resulting from 16-bit quantization). An inverse DFT is done on the final result to get a deconvoluted image.

This algorithm is very fast and does not need to go through iterations of successive approximation; it gets its best result right off the bat.
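Assuming the description above maps onto a standard inverse filter, a minimal NumPy sketch of DFT-division deconvolution with a hard noise threshold might look like this (David's actual C/FFTW code is not shown in the thread; the names and the threshold value are assumptions):

```python
import numpy as np

def dft_division_deconvolve(blurred, kernel, threshold=1e-3):
    """Deconvolution by DFT division, in the spirit of the method described
    above (a sketch). Frequencies where the kernel's transform falls below
    `threshold` are passed through unchanged instead of divided, to avoid
    amplifying noise."""
    h, w = blurred.shape
    kh, kw = kernel.shape
    otf = np.zeros((h, w))
    otf[:kh, :kw] = kernel
    # shift the kernel center to (0, 0) so the result is not translated
    otf = np.roll(otf, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    H = np.fft.fft2(otf)
    B = np.fft.fft2(blurred)
    mask = np.abs(H) > threshold
    H_safe = np.where(mask, H, 1.0)
    X = np.where(mask, B / H_safe, B)
    return np.real(np.fft.ifft2(X))

# quick round trip with circular blurring, so edges wrap cleanly
rng = np.random.default_rng(0)
img = rng.random((64, 64))
ax = np.arange(9) - 4
g = np.exp(-ax**2 / (2 * 0.8**2))
k = np.outer(g, g); k /= k.sum()            # 9x9 Gaussian test PSF

otf = np.zeros((64, 64)); otf[:9, :9] = k
otf = np.roll(otf, (-4, -4), axis=(0, 1))
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(otf)))

restored = dft_division_deconvolve(blurred, k)
err = float(np.abs(restored - img).max())
print("max restoration error:", err)        # tiny for this noiseless case
```

With real, noisy, non-periodic images the threshold and edge handling become the hard part, exactly as noted in the post.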

Quote
1. I've taken a crop of a shot taken with my 1Ds3 (6.4 micron sensel pitch + Bayer CFA) and the TS-E 90mm at f/7.1 (the aperture where the diffraction pattern spans approx. 1.5 pixels).
0343_Crop.jpg (1.203kb) I used 16-b/ch TIFFs throughout the experiment, but provide links to JPEGs and PNGs to save bandwidth.

2. That crop is convolved with a single diffraction (at f/32) kernel for 564nm wavelength (the luminosity weighted average of R, G and B taken as 450, 550 and 650 nm) at a 6.4 micron sensel spacing (assuming 100% fill-factor). That kernel was limited to the maximum 9x9 kernel size of ImagesPlus, a commercial Astrophotography program chosen for the experiment because a PSF kernel can be specified and the experiment can be verified. That means that only a part of the infinite diffraction pattern (some 44 micron, or 6.38 pixel widths, in diameter to the first minimum) could be encoded. So I realise that the diffraction kernel is not perfect, but it covers the majority of the energy distribution. The goal is to find out how well certain methods can restore the original image, so anything that resembles diffraction will do.
The benefit of using a 9x9 convolution kernel is that the same kernel can be used for both convolution and deconvolution, so we can judge the potential of a common method under somewhat ideal conditions (a known PSF, and computable in a reasonable time). It will present a sort of benchmark for the others to beat.
Crop+diffraction (5.020kb !) This is the subject to restore to its original state before diffraction was added.

3. And here (945kb) is the result after only one Richardson Lucy restoration (although with 1000 iterations) with a perfectly matching PSF. There are some ringing artifacts, but the noise is almost the same level as in the original. The resolution has been improved significantly, quite usable for a simulated f/32 shot as a basis for further postprocessing and printing. Look specifically at the Venetian blinds at the first floor windows in the center. Remember, the restoration goal was to restore the original, not to improve on it (that will take another postprocessing step).
I have supplied a link to the data file.
« Last Edit: February 01, 2011, 04:39:23 PM by David Ellsworth »
eronald
Sr. Member
Posts: 4121
« Reply #197 on: February 01, 2011, 04:48:13 PM »

Quote
I hope no one minds my entering this a bit late. I've written a small C program that does deconvolution by Discrete Fourier Transform division (using the library "FFTW" to do Fast Fourier Transforms). To me this seems to beat all the other deconvolution algorithms that have been presented in this thread... I'd like to see what others think.

My algorithm currently only works on square images, and deals with edge effects very badly, so I added black borders to Bart's max-quality jpeg crop making it 1115x1115, then applied the convolution myself using his provided 9x9 kernel. I operated entirely on 16-bit per channel data. The exact convoluted image that I worked from can be downloaded here (5.0 MB PNG file). My result, saved as a maximum-quality JPEG (after re-cropping it to 1003x1107): 0343_Crop+Diffraction+DFT_division_v2.jpg (1.2 MB)

My algorithm takes a single white pixel on a black background the same size as the original image, applies the PSF blur to it, and divides the DFT of the blurred pixel by the DFT of the pixel (element by element, using complex arithmetic); this takes advantage of the fact that the DFT of a single white pixel in the center of a black background has a uniformly gray DFT. Then it takes the DFT of the convoluted image and divides this by the result from the previous operation. Division on a particular element is only done if the divisor is above a certain threshold (to avoid amplifying noise too much, even noise resulting from 16-bit quantization). An inverse DFT is done on the final result to get a deconvoluted image.

This algorithm is very fast and does not need to go through iterations of successive approximation; it gets its best result right off the bat.


Sounds simple. Results are really nice.
Can you post the code, please, to avoid my having to recreate it?

Edmund
EricWHiss
Sr. Member
Posts: 2427
« Reply #198 on: February 01, 2011, 05:44:29 PM »

David,
The results from your method looked impressive!
Eric

Authorized Rolleiflex Dealer:
Find product information, download user manuals, or purchase online - Rolleiflex USA
David Ellsworth
Newbie
Posts: 11
« Reply #199 on: February 01, 2011, 09:33:33 PM »

Thanks Edmund, and thanks Eric. It is indeed simple, which makes me wonder why seemingly nobody else has thought of it. However I do think it has a lot of room for improvement, for example dealing with a noisy or quantized image — my current solution is to cut off frequencies that are noisy, but that results in ringing artifacts. Maybe I can add an algorithm that fiddles with the noisy frequencies in order to reduce the appearance of ringing (not sure at this point how to go about it, though). And there's of course the issue of edge effects, which I haven't tried to tackle yet.

I intend to post the source code, but it's rather messy right now (the main problem is that it uses raw files instead of TIFFs), so I'd like to clean it up first. Unless you'd really like to play with it right away, in which case I can post it as-is...

Meanwhile, I've improved the algorithm: 1) Do a gradual frequency cutoff instead of a threshold discontinuity; 2) Use the exact floating point kernel for deconvolution, instead of using a kernel-blurred white pixel rounded to 16 bits/channel.
The result: 0343_Crop+Diffraction+DFT_division_v3.jpg (1.2 MB)
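A minimal sketch of improvement (1), the gradual frequency cutoff, in Wiener-filter style (this is not David's actual code; `eps` is an assumed tuning constant):

```python
import numpy as np

def gradual_division(B, H, eps=1e-3):
    """Wiener-style gradual roll-off instead of a hard threshold. Where
    |H|^2 >> eps this approaches plain division B/H; as |H| approaches
    zero the gain rolls off smoothly instead of jumping, which reduces
    the ringing a discontinuous cutoff produces."""
    return B * np.conj(H) / (np.abs(H) ** 2 + eps)

# toy frequency bins: one strong, one nearly-vanished transfer value
H = np.array([1.0, 0.01])
B = np.array([2.0, 2.0])
out = gradual_division(B, H)
print(out)   # first entry ~2 (safe division); second strongly attenuated
```

Compared with a hard threshold, the transition between "divided" and "left alone" frequencies is continuous, which is exactly what suppresses the ringing from a discontinuity in the spectrum.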

David
« Last Edit: February 01, 2011, 10:01:06 PM by David Ellsworth »