Author Topic: Deconvolution sharpening revisited  (Read 112330 times)
BartvanderWolf

« Reply #160 on: August 23, 2010, 11:30:47 AM »

Please guide me to where the original crop png file can be found.

Here it is.

I need something like Smart sharpening, advanced/lens blur, amount 500, radius 1.3 to get something a bit more usable.

Cheers,
Bart
« Last Edit: August 23, 2010, 01:52:19 PM by BartvanderWolf »
XFer

« Reply #161 on: August 24, 2010, 10:15:10 AM »

Hi Bart, how are you?
We used to write in comp.periph.scanners, some years ago. :-)

I'm playing with deconvolution a bit.
I've studied RawTherapee sources and put up a quick hack in C to experiment with various kernels (PSFs).
If you have images and PSFs to play with, I would be really happy to show the results.
I can deal with float PSFs of square shape and any size. Only grayscale pictures at the moment.
Hopefully we can work out a set of PSFs to complement the 3x3 Gaussian approximation that RT is using right now.
The quick hack is command-line and really ugly, with lots of limitations, so I would be ashamed to share it for now; but I can download test images and upload the results.

Fernando
ced

« Reply #162 on: August 24, 2010, 11:38:06 AM »

Xfer, the post from Bart above yours leads to an image you can use for testing.
XFer

« Reply #163 on: August 24, 2010, 11:45:57 AM »

Sorry, I didn't make myself clear.
I'm looking for specific images and associated PSFs to try together (for example: a picture taken at a very small aperture with a diffraction PSF model, a defocused picture with a defocus PSF model, etc.)
I don't have any PSF to use as of now (apart from the standard Gaussian approximation, not very interesting).
BartvanderWolf

« Reply #164 on: August 24, 2010, 11:51:10 AM »

Hi Bart, how are you?
We used to write in comp.periph.scanners, some years ago. :-)

Hi Fernando,

Doesn't time fly; it's 'a bit more' than some years by now.

Quote
I'm playing with deconvolution a bit.
I've studied RawTherapee sources and put up a quick hack in C to experiment with various kernels (PSFs).
If you have images and PSFs to play with, I would be really happy to show the results.
I can deal with float PSFs of square shape and any size. Only grayscale pictures at the moment.
Hopefully we can work out a set of PSFs to complement the 3x3 Gaussian approximation that RT is using right now.

IMHO there are 3 obvious fundamental candidates:
  • A mix of Gaussians
  • Defocus blur (DOF related or plain OOF)
  • Diffraction dominated

It is usually some sort of mix between them, although the mix of Gaussians can be tailored to approximate a lot of different scenarios (though it's not simple to find the mix by trial and error).
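As a rough illustration of the first candidate, a mixture-of-Gaussians PSF can be tabulated directly on a grid. This is a generic sketch: the function name and the (weight, sigma) parameterization are mine, not RawTherapee's or anyone's code from this thread.

```python
import math

def gaussian_mix_kernel(size, components):
    """Tabulate a normalized PSF kernel from a weighted mix of 2-D
    Gaussians.  `components` is a list of (weight, sigma) pairs and
    `size` should be odd so the kernel has a central sample."""
    c = size // 2
    kernel = [[sum(w * math.exp(-((x - c) ** 2 + (y - c) ** 2) / (2.0 * s * s))
                   for w, s in components)
               for x in range(size)]
              for y in range(size)]
    total = sum(sum(row) for row in kernel)
    # Normalize so the kernel sums to 1 (conserves total image brightness)
    return [[v / total for v in row] for row in kernel]
```

A narrow plus a wide component, e.g. `gaussian_mix_kernel(7, [(0.7, 0.6), (0.3, 1.5)])`, gives the kind of peaked-core, heavy-tail shape that a single Gaussian cannot.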
 
Quote
The quick hack is commandline and really ugly with lots of limitations, so I would be ashamed to share it for now; but I can download test images and upload the results.

Frankly, I'm in the process of programming a "PSF generator" application, but I'm not ready yet. So it shouldn't be too difficult to generate all sorts of mixes in the foreseeable future. I have to do a bit more coding before it's usable enough to release, but I also want to build in intelligence for deriving the needed PSF from an actual image (although that's a lot harder to program). Of course, once that's done, the logical next step is doing the actual deconvolution with it ...

In this thread I posted an image crop that has the diffraction of f/32 added, and the PSF (as data and as an image) that was used to convolve the original with. There are also a number of results from various methods, so it would make most sense for the short term to start with that. We can add some more as we go, if there is enough interest in the subject.

Cheers,
Bart
« Last Edit: August 24, 2010, 11:55:27 AM by BartvanderWolf »
XFer

« Reply #165 on: August 24, 2010, 03:51:39 PM »


Doesn't time fly; it's 'a bit more' than some years by now.

So true!
BTW, I've added a couple of drum scanners, a Nikon 8000 and a V700 to my collection (besides the Minolta 5400, Epson 2450 and Microtek 120).

Quote
IMHO there are 3 obvious fundamental candidates:
  • A mix of Gaussians
  • Defocus blur (DOF related or plain OOF)
  • Diffraction dominated

What about segmenting the image and applying a different PSF to each relevant portion?
I was talking about that with ejmartin in the RT forum.
Real lenses have different issues near center, at borders and at corners.
Maybe trying to compensate all different effects and aberrations with a single PSF across the whole frame is asking too much.
Example: a fast lens shooting at a large aperture may show strong coma at the edges and just some spherical aberration at the center.

Quote
I'm in the process of programming a "PSF generator" application

Now, this is such a wonderful idea!! :-D
I'm driving myself crazy trying to write discrete kernels for my tests.

Quote
In this thread I posted an image crop that has the diffraction of f/32 added

Here we have a couple of tests of mine.
Please note that since my dirty little app only manages gray images at this time, I had to convert to Lab and deconvolve Lightness only.

First test: RT Gaussian approximation (3x3 kernel).
Radius (sigma) = 1.2, 2000 iterations, no damping.


Your deconvolution has more hi-freq details, but more ringing. See the angled white bar near the bottom of the tree.


Second test: same kernel, but I tried a special "turbo" mode, modifying the R-L implementation so that I can use a very narrow Gaussian (small sigma) and very few iterations.
radius = 0.35, 60 iterations, no damping.


This method is very fast but really "nervous"; it can diverge easily. :-D

EDIT: Mmmm, I don't understand why my images are downsized; I'm sure the original links are full-res!

Well, here you can find direct http links:
http://img840.imageshack.us/img840/3705/cropdiffractionrtlr.jpg
http://img830.imageshack.us/img830/4939/cropdiffractionrtlrturb.jpg
« Last Edit: August 24, 2010, 04:01:18 PM by XFer »
MichaelEzra
« Reply #166 on: August 24, 2010, 04:22:43 PM »

Second test: same kernel, but I tried a special "turbo" mode, modifying the R-L implementation so that I can use a very narrow Gaussian (small sigma) and very few iterations.
radius = 0.35, 60 iterations, no damping.

XFer, is this code on googlecode by any chance? I would love to compile it and try it out!

XFer

« Reply #167 on: August 24, 2010, 04:46:56 PM »

XFer, is this code on googlecode by any chance? I would love to compile it and try it out!

Hi Michael,

No, not yet: it's just an ugly hack in single-threaded C (the original RT code is multithreaded C++).
I quickly put it together just to experiment with some PSFs.
It's so ugly I can't release it just now (I would be hunted down by Kernighan and Ritchie!), but I'll polish it up a bit in the coming days. :-)
BartvanderWolf

« Reply #168 on: August 24, 2010, 05:51:50 PM »

What about segmenting the image and applying a different PSF to each relevant portion?

Yes, that will help, but it does require adaptive PSF generation. Another approach is determining a different PSF for the center and the corners, and then blending between them.

Quote
Here we have a couple of tests of mine.
Please note that since my dirty little app only manages gray images at this time, I had to convert to Lab and deconvolve Lightness only.

Yes, that's fine for now, but one ultimately needs to do either 3 layers or just the luminosity. The 3-layer approach will allow addressing things like diffraction even better, although my application handles that for luminosity as well, via a weighted combination of R/G/B.

Quote
First test: RT Gaussian approximation (3x3 kernel).
Radius (sigma) = 1.2, 2000 iterations, no damping.

That's not bad, given the small kernel size. It really requires a 7x7 or 9x9 kernel to capture most of the power of the diffraction pattern.


Quote
Your deconvolution has more hi-freq details, but more ringing. See the angled white bar near the bottom of the tree.

That's right, compromises, compromises. Still no free lunch ...

Quote
Second test: same kernel, but I tried a special "turbo" mode, modifying the R-L implementation so that I can use a very narrow Gaussian (small sigma) and very few iterations.
radius = 0.35, 60 iterations, no damping.
This method is very fast but really "nervous"; it can diverge easily.

It's quite a small radius, but the result is getting close to what can be achieved with this algorithm.

Cheers,
Bart
XFer

« Reply #169 on: August 24, 2010, 06:17:27 PM »

Yes, that will help, but it does require adaptive PSF generation. Another approach is determining a different PSF for the center and the corners, and then blending between them.

Emil Martinec suggested the same thing: subdividing the image into tiles and using an interpolated PSF according to the tile position (given a PSF for the center and 4 for the corners).
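The center-plus-corners interpolation could be sketched roughly as below. Everything here (the function name, the particular center-weight falloff) is a hypothetical illustration of the scheme under discussion, not code from RT or from anyone in this thread.

```python
def blend_psfs(center, corners, u, v):
    """Interpolate a per-tile PSF from one center kernel and four corner
    kernels (top-left, top-right, bottom-left, bottom-right), all the
    same size.  (u, v) are normalized image coordinates in [0, 1]."""
    w_tl = (1.0 - u) * (1.0 - v)      # bilinear weight of each corner
    w_tr = u * (1.0 - v)
    w_bl = (1.0 - u) * v
    w_br = u * v
    # Center weight falls from 1 at the image center to 0 at the edges.
    w_c = 1.0 - max(abs(2.0 * u - 1.0), abs(2.0 * v - 1.0))
    tl, tr, bl, br = corners
    n = len(center)
    out = [[w_c * center[y][x] + (1.0 - w_c) *
            (w_tl * tl[y][x] + w_tr * tr[y][x] +
             w_bl * bl[y][x] + w_br * br[y][x])
            for x in range(n)] for y in range(n)]
    total = sum(sum(row) for row in out)
    return [[val / total for val in row] for row in out]
```

Each tile would then be deconvolved with its own blended kernel; the renormalization at the end keeps every interpolated PSF summing to 1.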

Quote
That's right, compromises, compromises. Still no free lunch ...

You have actually measured the PSF, right?
It's strange; it resembles a Gaussian so closely, instead of the Airy disc I would have expected from a heavily diffraction-limited image. I see no fringes in the PSF.

Fernando
BartvanderWolf

« Reply #170 on: August 24, 2010, 06:38:13 PM »

You have actually measured the PSF, right?
It's strange; it resembles a Gaussian so closely, instead of the Airy disc I would have expected from a heavily diffraction-limited image. I see no fringes in the PSF.

I generated the PSF from an integrated 3D Airy-disc function, and convolved the original (unblurred) image with it. All for the purpose of having a perfect PSF to work with, and determining the benefit of prior knowledge for the RL algorithm or others. The only drawback is the limitation of the (ImagesPlus) software that I used, which is limited to a maximum 9x9 kernel size. As it happens, at f/32 with a sensel pitch of 6.4 micron and a 564 nanometer wavelength, the first minimum of the Airy diffraction pattern just fits within that limitation. So we have something like 84% of the total power covered.

The center of the pattern can be approximated reasonably well by a Gaussian, but defocus has a markedly different shape.

Cheers,
Bart
ejmartin

« Reply #171 on: August 25, 2010, 11:48:10 AM »

What's a good PSF for defocus?

emil
BartvanderWolf

« Reply #172 on: August 25, 2010, 12:15:53 PM »

What's a good PSF for defocus?

Hi Emil,

I'd say a disc of somewhat uniform intensity would come close. It's a bit like taking a slice (the focal plane) out of a cone of light whose focal point is in front of or behind the focal plane. Of course a real-world defocus PSF is a combination of many things, but if one wants to isolate defocus, a disc seems appropriate.
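A uniform disc kernel of this kind can be built with a little supersampling, so that pixels only partially covered by the disc get fractional weight. A minimal sketch, with helper names of my own choosing:

```python
def disc_psf(size, radius):
    """Uniform-intensity disc kernel for pure defocus blur.  Each cell is
    supersampled 4x4 so partially covered edge pixels get fractional
    weight; the result is normalized to sum to 1."""
    c = (size - 1) / 2.0
    sub = 4
    kernel = [[0.0] * size for _ in range(size)]
    for y in range(size):
        for x in range(size):
            hits = 0
            for sy in range(sub):
                for sx in range(sub):
                    # Midpoint of each sub-sample, relative to the kernel center
                    dx = x - c + (sx + 0.5) / sub - 0.5
                    dy = y - c + (sy + 0.5) / sub - 0.5
                    if dx * dx + dy * dy <= radius * radius:
                        hits += 1
            kernel[y][x] = hits / float(sub * sub)
    total = sum(sum(row) for row in kernel)
    return [[v / total for v in row] for row in kernel]
```

For a polygonal iris one would replace the circle test with a point-in-polygon test, but as noted, the plain disc is probably good enough to start with.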

I think that a model of a PSF can be split into some quantifiable contributors, such as diffraction and defocus, and the remaining bit to add can be a Gaussian. By extracting some known contributors, it should be easier to find the residual contribution. Another approach is to take multiple Gaussians, where one with a large sigma could simulate the diffraction or defocus part, and a small sigma could represent some more localized blur.

Cheers,
Bart
ejmartin

« Reply #173 on: August 25, 2010, 12:26:59 PM »

Yes, well I was imagining it should look like the little disks of OOF specular highlights, those being extreme versions of OOF point sources. But there one sees some structure near the edges, perhaps diffraction off the edge of the aperture blades? As well, of course, as a slight polygonal shape due to the aperture blades. But I'm not sure any of those are significant, and a disk is perhaps good enough. I was just wondering if there was any discussion, e.g. in the literature or in some online source.

In the service of keeping it simple: since, apart from the side lobes of the Airy pattern, the central peak is fairly well approximated by a Gaussian, one could use a suitable combination (the successive convolution) of a disk, a Gaussian, and a line (for motion deblur).

emil
BartvanderWolf

« Reply #174 on: August 25, 2010, 03:41:32 PM »

Yes, well I was imagining it should look like the little disks of OOF specular highlights, those being extreme versions of OOF point sources. But there one sees some structure near the edges, perhaps diffraction off the edge of the aperture blades? As well, of course, as a slight polygonal shape due to the aperture blades. But I'm not sure any of those are significant, and a disk is perhaps good enough. I was just wondering if there was any discussion, e.g. in the literature or in some online source.

That's right, there is also some irregularity caused by (the correction of) lens aberrations, such as spherical aberration, vignetting, and other phenomena. See an explanation by Paul van Walree for some background. Therefore it would be nice to be able to extract the PSF information from the image itself, which would allow making spatially determined PSFs (although one might not want to treat an OOF background, but rather the slight misfocus of the main subject).

Quote
In the service of keeping it simple: since, apart from the side lobes of the Airy pattern, the central peak is fairly well approximated by a Gaussian, one could use a suitable combination (the successive convolution) of a disk, a Gaussian, and a line (for motion deblur).

Yes, that alone will already allow a huge improvement. Of course, for speed and accuracy, one might want to use a single run with a combined PSF, but it's not an absolute necessity.

Cheers,
Bart
XFer

« Reply #175 on: August 25, 2010, 04:07:40 PM »

OK, I can now try any kernel on any image (including 48-bpp RGB).
Here's another quick comparison, from a scan (Nikon 8000 @ 4000 dpi).
Original screenshot:

http://a.imageshack.us/img830/5715/sharpen01.jpg

Reduced size (sic!):


Deconvolution was with RT gaussian approximation. Radius = 0.45, 15 iterations, "turbo mode", no damping.

Can't wait to test some of your PSFs! :-D
ablankertz

« Reply #176 on: August 30, 2010, 10:47:04 AM »

To gain some insight into how it works, here is a recipe for RL deconvolution using Photoshop commands:

1. Duplicate the ORIGINAL blurry image, call it "COPY1"

2. Duplicate COPY1, call it "COPY2"

3. Blur COPY2 with the PSF. For a Gaussian, use Gaussian Blur. Other PSFs can be defined with the Custom Filter.

4. Divide the ORIGINAL blurry image by COPY2, with the result in COPY2.

5. Blur COPY2 with the PSF (as in step 3).

6. Multiply COPY1 by COPY2, with the result in COPY1. (Apply Image with Blending Mode: Multiply)

7. Go to step #2 and repeat for the number of iterations you want. Each iteration gets a little sharper. The final result is in COPY1.
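For those who prefer code to Photoshop actions, the steps above map onto a few lines. This is a deliberately simple 1-D sketch of my own (clamped borders, symmetric PSF assumed, so the mirrored-PSF blur of classic RL reduces to the same convolution); it is not the RT or ImagesPlus implementation.

```python
def convolve(signal, kernel):
    """1-D convolution with clamped (edge-replicating) borders."""
    k = len(kernel) // 2
    n = len(signal)
    return [sum(kernel[j + k] * signal[min(max(i + j, 0), n - 1)]
                for j in range(-k, k + 1))
            for i in range(n)]

def richardson_lucy(observed, psf, iterations):
    """RL iteration mirroring the recipe: start from the blurry image,
    then repeatedly blur, divide, re-blur, and multiply."""
    eps = 1e-12                      # guard against division by zero (step 4)
    estimate = list(observed)        # steps 1-2: start from the blurry image
    for _ in range(iterations):
        reblurred = convolve(estimate, psf)                           # step 3
        ratio = [o / (b + eps) for o, b in zip(observed, reblurred)]  # step 4
        correction = convolve(ratio, psf)                             # step 5
        estimate = [e * c for e, c in zip(estimate, correction)]      # step 6
    return estimate                  # step 7: the running product
```

Blurring a spike `[0, 0, 0, 4, 0, 0, 0]` with `psf = [0.25, 0.5, 0.25]` and running a couple hundred iterations recovers something close to the original spike, which is exactly the behavior the recipe describes.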

To me, this set of operations implies that deconvolution with a separable kernel is separable. Is it?
BartvanderWolf

« Reply #177 on: August 30, 2010, 01:08:41 PM »

To me, this set of operations implies that deconvolution with a separable kernel is separable. Is it?

I don't know how you conclude that from that procedure, but perhaps it is related to how you view the concept of separability of kernels?

Cheers,
Bart
XFer

« Reply #178 on: September 05, 2010, 10:36:31 AM »

I'd like to build a NxN kernel approximating an Airy pattern (diffraction figure from circular aperture).

Let's say the parameters are:

Lambda = 560nm = 0.56*10^-3 mm
Aperture radius = 1mm

X,Y domain:
-N/2 <= X <= N/2
-N/2 <= Y <= N/2

The function is defined in these terms (if I recall correctly):

R = sqrt(X^2+Y^2)

K = 2 * PI / Lambda
A = 1 (it's the aperture radius)
T = K * A * Sin(R)

Airy(R) = (2 * J1(T) / (T))^2

(where J1() is the Bessel function of degree 1, first kind)

So I need to tabulate Airy(R) on an NxN grid.

Is there a Matlab expert who can help me?
I need a real working example: I found a huge number of so-called "tutorials" on the web, but none of them actually works... for example, note that Airy(R) as defined is singular for R=0; one must somehow tell Matlab that R=0 must give Airy(R) = 1.
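Not Matlab, but the same tabulation can be sketched in plain Python, including the removable singularity at R=0. J1 is evaluated from its standard integral form; the `scale` factor that maps one grid step to the dimensionless Airy argument is left as an assumption, to be derived from pitch, wavelength, and f-number for a real sensor.

```python
import math

def bessel_j1(x, steps=200):
    """J1 from its integral form: J1(x) = (1/pi) * Int_0^pi cos(t - x*sin(t)) dt,
    evaluated with the composite midpoint rule (plenty accurate for a PSF table)."""
    h = math.pi / steps
    total = sum(math.cos((i + 0.5) * h - x * math.sin((i + 0.5) * h))
                for i in range(steps))
    return total * h / math.pi

def airy(t):
    """Airy intensity (2*J1(t)/t)^2, with the removable singularity at t=0."""
    if abs(t) < 1e-9:
        return 1.0                   # limit of (2*J1(t)/t)^2 as t -> 0
    v = 2.0 * bessel_j1(t) / t
    return v * v

def airy_kernel(size, scale):
    """Tabulate the Airy pattern on a size x size grid and normalize it.
    `scale` converts one grid step into the dimensionless Airy argument."""
    c = (size - 1) / 2.0
    k = [[airy(scale * math.hypot(x - c, y - c)) for x in range(size)]
         for y in range(size)]
    total = sum(sum(row) for row in k)
    return [[v / total for v in row] for row in k]
```

The first zero of the pattern sits at t ≈ 3.8317, so `scale` should be chosen so that radius in pixels times `scale` reaches that value where the first dark ring is expected to fall.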

Thanks.

Fernando
BartvanderWolf

« Reply #179 on: September 05, 2010, 12:24:50 PM »

[...]
So I need to tabulate Airy(R) on an NxN grid.

Is there a Matlab expert who can help me?
I need a real working example: I found a huge number of so-called "tutorials" on the web, but none of them actually works... for example, note that Airy(R) as defined is singular for R=0; one must somehow tell Matlab that R=0 must give Airy(R) = 1.

Hi Fernando,

I used Mathematica to develop and test my models, and based the calculation on a module (a self made function) that contains the following core logic:

This is a visual representation as produced by Mathematica, but some code optimizations can be applied, as you did.
SamplePitch and Wavelength are both in the same (micron) units, e.g. 6.4 micron pitch and 0.564 micron wavelength; N is the aperture number, e.g. 32.


However, IMHO there is also an integration of the function required to account for the finite apertures of the sensels (which I presumed to be square, with a 100% fill factor as a result of the microlenses). What I've been struggling with is that there seems to be no simple solution other than basic 2D integration, and a workhorse like Mathematica takes its time doing that because it's apparently not a simple function to integrate with rigorous precision (which is what Mathematica strives for, but which may be overkill in this practical implementation).
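A cheap stand-in for that rigorous 2-D integration is midpoint supersampling over each square sensel; a sketch of my own (the sampling density and function signature are arbitrary choices, not Bart's Mathematica module):

```python
def integrate_over_sensel(psf, cx, cy, pitch=1.0, sub=8):
    """Average a continuous PSF over one square sensel (100% fill factor)
    centered at (cx, cy), using sub*sub midpoint samples.  `psf` is any
    callable psf(x, y) in the same units as `pitch`."""
    step = pitch / sub
    acc = 0.0
    for iy in range(sub):
        for ix in range(sub):
            # Midpoint of each sub-cell inside the sensel aperture
            x = cx - pitch / 2.0 + (ix + 0.5) * step
            y = cy - pitch / 2.0 + (iy + 0.5) * step
            acc += psf(x, y)
    return acc / (sub * sub)
```

Calling this once per kernel cell, with (cx, cy) at each sensel center, gives the pixel-integrated PSF table; increasing `sub` trades speed for precision without the cost of a symbolic integrator.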

Cheers,
Bart
« Last Edit: September 05, 2010, 05:24:48 PM by BartvanderWolf »