Pages: « 1 ... 9 10 [11] 12 13 ... 18 »
Author Topic: Deconvolution sharpening revisited  (Read 99992 times)
bradleygibson
« Reply #200 on: February 01, 2011, 09:42:00 PM »

David, what PSF did you use on your sample?  A simple Gaussian, or something more complex?
Logged

David Ellsworth
« Reply #201 on: February 01, 2011, 10:28:46 PM »

David, what PSF did you use on your sample?  A simple Gaussian, or something more complex?

Brad, I used the same exact one that Bart used, both for convolution and deconvolution:

I have supplied a link to the data file. You can read the .dat file with WordPad or a similar plain-text viewer. You can input those numbers (rounded to 16-bit values, or converted to 8-bit by dividing by 65535, multiplying by 255, and rounding to integers). A small warning: the lower the accuracy, the lower the output quality will be. For convenience I've added a 16-bit greyscale TIFF (convert to RGB mode if needed). I have turned it into an 11x11 kernel (9x9 plus a black border) because the program you referenced apparently (from its description) requires a zero background level.

I used the floating point data file, not the TIFF.

My algorithm can work on a PSF up to the size of the image itself, in this case 1115x1115... but I'd have to synthesize an Airy disk myself, and haven't learned how to do that yet. So I just padded Bart's 9x9 one with black.
« Last Edit: February 01, 2011, 10:33:04 PM by David Ellsworth » Logged
MichaelEzra
« Reply #202 on: February 01, 2011, 10:46:04 PM »

David, this looks really interesting, especially considering that the algorithm is fast.

It would be lovely to see something like this in a tool like RawTherapee, which provides a great image processing platform.
If you would be interested, I could help with UI implementation there.
Logged

David Ellsworth
« Reply #203 on: February 01, 2011, 11:21:06 PM »

David, this looks really interesting, especially considering that the algorithm is fast.

It would be lovely to see something like this in a tool like RawTherapee, which provides a great image processing platform.
If you would be interested, I could help with UI implementation there.

Michael,

Thanks for your interest. I'm not sure my algorithm is ready for that, though!

When I said it's "very fast" I meant relative to other deconvolution algorithms.
It takes about 2.5 seconds on my Intel E8400 (3.0 GHz) with DDR2 800 to process a 1115x1115 image single-threaded. It probably scales roughly as O(N log N) where N=X*Y, the number of pixels, since most of its time is taken calculating FFTs.
It takes 39 seconds to process a 4096x4096 image (yep, that's O(N log N) all right). But with multi-threading it could be much faster, so maybe it could indeed be practical for raw conversion...

But first things first. I need to write some code to make it adapt its frequency cutoff according to which frequencies are most garbled by noise, I need to address the awful edge effects it currently has, and of course I need to make it work with non-square images (EDIT: it now works with non-square images).
« Last Edit: February 02, 2011, 01:18:36 AM by David Ellsworth » Logged
ErikKaffehr
« Reply #204 on: February 02, 2011, 12:12:06 AM »

Hi!

Your effort is really appreciated! Thanks for sharing ideas, results and code.

Best regards
Erik

Thanks Edmund, and thanks Erik. It is indeed simple, which makes me wonder why seemingly nobody else has thought of it. However, I do think it has a lot of room for improvement. For example, when dealing with a noisy or quantized image, my current solution is to cut off the frequencies that are noisy, but that results in ringing artifacts. Maybe I can add an algorithm that fiddles with the noisy frequencies to reduce the appearance of ringing (not sure at this point how to go about it, though). And there's of course the issue of edge effects, which I haven't tried to tackle yet.

I intend to post the source code, but it's rather messy right now (the main problem is that it uses raw files instead of TIFFs), so I'd like to clean it up first. Unless you'd really like to play with it right away, in which case I can post it as-is...

Meanwhile, I've improved the algorithm: 1) Do a gradual frequency cutoff instead of a threshold discontinuity; 2) Use the exact floating point kernel for deconvolution, instead of using a kernel-blurred white pixel rounded to 16 bits/channel.
The result: 0343_Crop+Diffraction+DFT_division_v3.jpg (1.2 MB)

David
Logged

Graham Mitchell
« Reply #205 on: February 02, 2011, 01:34:16 AM »

Great results so far. I look forward to playing around with it some day (assuming the code is released).
Logged

Graham Mitchell - www.graham-mitchell.com
BartvanderWolf
« Reply #206 on: February 02, 2011, 03:14:02 AM »

I hope no one minds my entering this a bit late. I've written a small C program that does deconvolution by Discrete Fourier Transform division (using the library "FFTW" to do Fast Fourier Transforms). To me this seems to beat all the other deconvolution algorithms that have been presented in this thread... I'd like to see what others think.

Hi David,

The more the merrier, I'm glad you joined.

Quote
My algorithm currently only works on square images, and deals with edge effects very badly, so I added black borders to Bart's max-quality jpeg crop making it 1115x1115, then applied the convolution myself using his provided 9x9 kernel. I operated entirely on 16-bit per channel data. The exact convoluted image that I worked from can be downloaded here (5.0 MB PNG file). My result, saved as a maximum-quality JPEG (after re-cropping it to 1003x1107): 0343_Crop+Diffraction+DFT_division_v2.jpg (1.2 MB)

David, the results look too good to be true. Could you verify that you (your software) used the correct (convolved) file as input? It's not that I don't like the results, it's that all other algorithms I've tried cannot restore data that has been lost (too low S/N ratio) to f/32 diffraction. Maybe you didn't save the intermediate convolved/diffracted result, thus truncating the accuracy to 16-bit at best. Maybe you were performing all subsequent steps while keeping the intermediate results in floating point?

Quote
My algorithm takes a single white pixel on a black background the same size as the original image, applies the PSF blur to it, and divides the DFT of the blurred pixel by the DFT of the pixel (element by element, using complex arithmetic); this takes advantage of the fact that the DFT of a single white pixel in the center of a black background has a uniformly gray DFT. Then it takes the DFT of the convoluted image and divides this by the result from the previous operation. Division on a particular element is only done if the divisor is above a certain threshold (to avoid amplifying noise too much, even noise resulting from 16-bit quantization). An inverse DFT is done on the final result to get a deconvoluted image.

Yes, that's basic restoration by deconvolution in Fourier space.

Quote
This algorithm is very fast and does not need to go through iterations of successive approximation; it gets its best result right off the bat.

The iterations that were mentioned are part of the Richardson-Lucy algorithm, which is an iterative Bayesian maximum-likelihood method. Regular deconvolution is faster, but poses other challenges. One of the things you can do to improve the edge performance is padding with e.g. mirrored image data, and making a final crop after deconvolution.
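A minimal sketch of that mirrored padding (grayscale, row-major; reflecting without repeating the edge pixel is my assumption, and `pad` must be smaller than each image dimension):

```c
#include <stdlib.h>

/* Grow a w x h image by `pad` pixels on every side, filling the border
   by reflecting image data across the edges; the deconvolved result is
   cropped back afterwards. Returns a malloc'd (w+2*pad) x (h+2*pad)
   buffer, or NULL on allocation failure. */
double *mirror_pad(const double *src, int w, int h, int pad) {
    int W = w + 2 * pad, H = h + 2 * pad;
    double *dst = malloc((size_t)W * (size_t)H * sizeof *dst);
    if (!dst) return NULL;
    for (int y = 0; y < H; y++) {
        int sy = y - pad;
        if (sy < 0)  sy = -sy;              /* reflect across the top    */
        if (sy >= h) sy = 2 * h - 2 - sy;   /* reflect across the bottom */
        for (int x = 0; x < W; x++) {
            int sx = x - pad;
            if (sx < 0)  sx = -sx;          /* reflect across the left   */
            if (sx >= w) sx = 2 * w - 2 - sx;
            dst[(size_t)y * W + x] = src[(size_t)sy * w + sx];
        }
    }
    return dst;
}
```

Mirroring removes the abrupt step at the frame boundary that the FFT would otherwise see, at the cost of transforming a somewhat larger image.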

Cheers,
Bart
Logged
David Ellsworth
« Reply #207 on: February 02, 2011, 12:12:01 PM »

Hi Bart,

David, the results look too good to be true. Could you verify that you (your software) used the correct (convolved) file as input? It's not that I don't like the results, it's that all other algorithms I've tried cannot restore data that has been lost (too low S/N ratio) to f/32 diffraction. Maybe you didn't save the intermediate convolved/diffracted result, thus truncating the accuracy to 16-bit at best. Maybe you were performing all subsequent steps while keeping the intermediate results in floating point?

Well, in a way, it is too good to be true. It's VERY sensitive to tiny changes in the convoluted input, and the edges need to be completely intact and fade out to black. But my program is indeed using a 16-bit/ch data file as input (the PNG I posted). I'm absolutely sure of this because 1) the convolution and deconvolution are done in separate runs, with only files on the hard disk as input and output; 2) I've tried messing with the convoluted input, which thoroughly ruins the deconvoluted output unless the algorithm is made less sensitive by increasing the frequency drop-off threshold; and 3) I've tried pasting your 16-bit/ch convoluted PNG on top of mine, and the deconvolution still looks good but has a bit of noise near the edges, probably because of the difference between the JPEG crop and the original crop.

Here's what happens if I pollute the convoluted input with just one white pixel, all else being equal: 0343_Crop+Diffraction+DFT_division_v3_whitepixel.jpg (1.5 MB)
I added the white pixel in an image editor, not using my program.

I would enjoy it if you posted another 16-bit/ch PNG of a convoluted image, without posting the original, but this time with edges that fade to black (i.e., pad the original with 8 pixels of black before applying a 9x9 kernel).

Best regards,
David
Logged
BartvanderWolf
« Reply #208 on: February 02, 2011, 01:24:12 PM »

I would enjoy it if you posted another 16-bit/ch PNG of a convoluted image, without posting the original, but this time with edges that fade to black (i.e., pad the original with 8 pixels of black before applying a 9x9 kernel).

Hi David,

Okay, here it is, a 16-b/ch RGB PNG file with an 8 pixel black border, convolved with the same "N=32" kernel as before:


Cheers,
Bart
Logged
David Ellsworth
« Reply #209 on: February 02, 2011, 02:42:55 PM »

Hi David,

Okay, here it is, a 16-b/ch RGB PNG file with an 8 pixel black border, convolved with the same "N=32" kernel as before:

Tada:


But I really want to get this working with cropped-edge images. Right now that's the Achilles heel of the algorithm: having missing edges corrupts the entire image, not just the vicinity of the edges. I tried masking edges by multiplying them by a convoluted white rectangle, but that still left significant ringing noise over the whole picture. I'll try that mirrored-edge idea, but I doubt it'll work (EDIT: indeed, it didn't work). I have another idea, of tapering off the inverse PSF so that it doesn't have that "action at a distance", but that might remove its ability to reconstruct fine detail... it's a really hard concept to wrap my mind around. It seems that this algorithm works on a gestalt of the whole image to reconstruct even one piece of it, even though the PSF is just 9x9.

BTW, the edges can actually be any color, as long as it's uniform. I can just subtract it out (making some values negative within the image itself), and then add it back in before saving the result.

Cheers,
David
« Last Edit: February 02, 2011, 03:51:02 PM by David Ellsworth » Logged
ErikKaffehr
« Reply #210 on: February 02, 2011, 02:48:27 PM »

+1

Great results so far. I look forward to playing around with it some day (assuming the code is released).
Logged

eronald
« Reply #211 on: February 02, 2011, 03:47:15 PM »

David,


Seeing that the algorithm works, I think that some code in Matlab would significantly advance the topic at this point.
Maybe you could post the existing code, and then we could recode the thing in Matlab?

I'm a bit confused by the fact that you said you operate on the Raw; how are you doing the Raw deconvolution?

Edmund

Tada:


But I really want to get this working with cropped-edge images. Right now that's the Achilles heel of the algorithm: having missing edges corrupts the entire image, not just the vicinity of the edges. I tried masking edges by multiplying them by a convoluted white rectangle, but that still left significant ringing noise over the whole picture. I'll try that mirrored-edge idea, but I doubt it'll work. I have another idea, of tapering off the inverse PSF so that it doesn't have that "action at a distance", but that might remove its ability to reconstruct fine detail... it's a really hard concept to wrap my mind around. It seems that this algorithm works on a gestalt of the whole image to reconstruct even one piece of it, even though the PSF is just 9x9.

BTW, the edges can actually be any color, as long as it's uniform. I can just subtract it out (making some values negative within the image itself), and then add it back in before saving the result.

Cheers,
David
Logged

Edmund Ronald, Ph.D. 
BartvanderWolf
« Reply #212 on: February 02, 2011, 04:20:30 PM »

Tada:


Pretty close. And here is (a max quality JPEG version of) the original before convolution:



Quote
But I really want to get this working with cropped-edge images. Right now that's the Achilles heel of the algorithm: having missing edges corrupts the entire image, not just the vicinity of the edges. I tried masking edges by multiplying them by a convoluted white rectangle, but that still left significant ringing noise over the whole picture. I'll try that mirrored-edge idea, but I doubt it'll work. I have another idea, of tapering off the inverse PSF so that it doesn't have that "action at a distance", but that might remove its ability to reconstruct fine detail... it's a really hard concept to wrap my mind around. It seems that this algorithm works on a gestalt of the whole image to reconstruct even one piece of it, even though the PSF is just 9x9.

As your experiment with the single white pixel shows, each and every pixel contributes to the reconstruction of the entire image, although the amplitude reduces with distance. The ringing at edges has to do with the abrupt discontinuity of the pixels contributing to the image. An image is presumed to have infinite dimensions, which can be approximated by creating a symmetric image, such as in the "Reflected" image padding example at:
http://reference.wolfram.com/mathematica/ref/ImagePad.html

This also allows one to low-pass filter the larger Fourier transform with an appropriate function (a Gaussian is a simple one), but the highest spatial frequencies are sacrificed to prevent other artifacts.

It is also important to note that here the PSF is known to high accuracy, whereas in real-world situations we can only approximate it, and noise can spoil the party.

Cheers,
Bart
« Last Edit: February 02, 2011, 04:23:26 PM by BartvanderWolf » Logged
crames
« Reply #213 on: February 03, 2011, 08:19:29 AM »

David, the results look too good to be true...

Inverse filtering can be very exact if the blur PSF is known and there is no added noise, as in these examples.

But what is really surprising is how much detail is coming back - it's as though nothing is being lost to diffraction. Detail above what should be the "cutoff frequency" is being restored.

I think what is happening is that the 9x9 or 11x11 Airy disk is too small to simulate a real Airy disk. It is allowing spatial frequencies above the diffraction cutoff to leak past. Then David's inverse filter is able to restore most of those higher-than-cutoff frequency details as well as the lower frequencies (on which it does a superior job).

To be more realistic I think it will be necessary to go with a bigger simulated Airy disk.

--typo edited
« Last Edit: February 03, 2011, 12:54:41 PM by crames » Logged

Cliff
eronald
« Reply #214 on: February 03, 2011, 08:44:22 AM »

I'm vaguely following this - however, I would have thought that if the Fourier transform of the blur function is invertible, then it's pretty obvious that you'll get the original back, with some uncertainty in areas with a lot of high-frequency content due to noise in the original and computational approximation.
Edmund

Inverse filtering can be very exact if the blur PSF is known and there is no added noise, as in these examples.

But what is really surprising is how much detail is coming back - it's as though nothing is being lost to diffraction. Detail above what should be the "cutoff frequency" is being restored.

I think what is happening is that the 9x9 or 11x11 Airy disk is too small to simulate a real Airy disk. It is allowing spatial frequencies above the diffraction cutoff to leak past. Then David's inverse filter is able to restore most of those higher-than-cutoff frequency details as well as the lower frequencies (on which it does a superior job).

To be more realistic I think it will be necessary to go with a bigger simulated Airy disk.
Logged

Edmund Ronald, Ph.D. 
PierreVandevenne
« Reply #215 on: February 03, 2011, 10:14:05 AM »

I think (possibly wrongly) that the main problem is estimating the real world PSF, both at the focal plane and ideally in a space around the focal plane (see Nijboer-Zernike). Inverting known issues is relatively easy in comparison. This is only based on my experience trying to fix astronomical images but I suspect (photographically vs astrophotographically) optical issues are equivalent, tracking errors are similar to camera movement and turbulence errors are similar to subject movement. Depth of field should make things more complex for photos. And of course, one doesn't lack distorted point sources in astro images.

Apologies if I missed the point, I haven't re-read the whole thread.
Logged
crames
« Reply #216 on: February 03, 2011, 12:47:41 PM »

I'm vaguely following this - however, I would have thought that if the fourier transform of the blur function is invertible then it's pretty obvious that you'll get the original back - with some uncertainty in areas with a lot of higher frequency due to noise in the original and computational approximation.

The frequencies above the cutoff frequency should be zero, so they should not be invertible (1/zero is undefined).
Logged

Cliff
eronald
« Reply #217 on: February 03, 2011, 02:29:52 PM »

The frequencies above the cutoff frequency should be zero, so they should not be invertible (1/zero is undefined).

Speaking without experience, again, I would assume that is always the case. When you Fourier transform or DFT or whatever, you will truncate at the cutoff or twice the cutoff or whatever - after that it doesn't make sense anymore.

Maybe I should go and try actually doing some computational experiments with Matlab and an image processing book, and refresh my knowledge,  rather than spout nonsense.
Edmund
Logged

Edmund Ronald, Ph.D. 
David Ellsworth
« Reply #218 on: February 03, 2011, 04:19:57 PM »

I think what is happening is that the 9x9 or 11x11 Airy disk is too small to simulate a real Airy disk. It is allowing spatial frequencies above the diffraction cutoff to leak past. Then David's inverse filter is able to restore most of those higher-than-cutoff frequency details as well as the lower frequencies (on which it does a superior job).

To be more realistic I think it will be necessary to go with a bigger simulated Airy disk.

Wow, you are absolutely right. It makes a HUGE difference in this case; 9x9 was far too small to nullify the higher-than-cutoff frequencies in the same way the full Airy disk does. I tried a 127x127 kernel and indeed, my algorithm now cannot recover those frequencies at all. Also I notice that global contrast is quite visibly reduced using the 127x127 kernel, which didn't happen with the 9x9 for obvious reasons.

I used the Airy disk formula from Wikipedia, using the libc implementation of the Bessel function, double _j1(double). My result differed slightly from Bart's in the inner 9x9 pixels. Any idea why? Bart, was your kernel actually an Airy disk convolved with an OLPF?

I attempted to save the 127x127 kernel in the same format as Bart posted his 9x9 (except for using a PNG instead of a TIFF for the picture version):
Airy127x127_6.4micron_564nm_f32.zip (data file)
Airy127x127_6.4micron_564nm_f32.png (16-bit PNG)

Here are the original, convoluted, and deconvoluted Fourier transforms (logarithm of the absolute value of each color channel); click one to see the full DFT as a max-quality JPEG:


The residual frequencies outside the ellipse are due to the use of a 127x127 kernel instead of one as big as the picture.

Convoluted 16-bit/channel PNG: 0343_Crop+Diffraction127x127.png (5.1 MB)
Deconvoluted max-quality JPEG: 0343_Crop+Diffraction127x127+DFT_division.jpg (1.1 MB)

So as you can see, my algorithm doesn't do so well with a real Airy disk. It has significant ringing. Do the other algorithms/programs demonstrated earlier in the thread deal with this 127x127-convoluted 16-bit PNG just as well as they dealt with the 9x9-convoluted one?

BTW, Bart, do you have protanomalous or protanopic vision? I notice you always change your links to blue instead of the default red, and I've been doing the same thing because the red is hard for me to tell at a glance from black, against a pale background.

Cheers,
David
« Last Edit: February 03, 2011, 06:22:16 PM by David Ellsworth » Logged
eronald
« Reply #219 on: February 03, 2011, 05:05:26 PM »

I guess I can ask Norman whether Imatest could dump some real-world diffraction kernels.
Logged

Edmund Ronald, Ph.D. 