Author Topic: New Lens Correction Software  (Read 21111 times)
Jonathan Wienke
Sr. Member
Posts: 5759
« on: November 22, 2009, 03:19:35 AM »

I've started work on a lens correction program. The idea is to duplicate all of the features of DXO, PTLens, and other similar programs to correct the following:
  • Lens blur, coma, and other similar blurring/smearing lens problems
  • Barrel, pincushion, and mustache distortion
  • Both types of chromatic aberration
  • Vignetting/falloff
  • Lens casts (color balance different in the center of the image circle vs the edges)

The main difference between my program and DXO is the handling of lens blur profiles. DXO offers a limited selection of "canned" blur profiles for various camera/lens combinations. If your particular combination isn't on their list, or if your lens behaves differently than the one used to create DXO's profile (either better or worse), you're SOL. I'm designing a method for users to create their own custom blur profiles specific to their own equipment, regardless of how common or obscure it may be. I expect the benefits of this approach to be similar in magnitude to the difference between using "canned" printer profiles compared to custom profiles, especially when using third-party papers.

The proposed workflow at this point is to use Bridge or Lightroom to convert the RAWs to linear DNGs. This demosaics Bayer-matrix images to 16-bit linear RGB, but does not convert the RAW data from the camera color space, so you still have complete flexibility to select a white balance or output color space after the DNG is processed through my program. Once processed, the corrected DNG is saved and can then be opened in any DNG-compatible image editor (LR, ACR, etc.) for final processing.

If there is sufficient interest, future versions of the program may offer bracketed focus/exposure stacking, or possibly panorama stitching capability.

If anyone has feature requests or ideas regarding how you'd like the program to work, please post them here. Thanks in advance.

feppe
Sr. Member
Posts: 2907
« Reply #1 on: November 22, 2009, 04:42:15 AM »

This sounds promising! I've occasionally looked at DXO, but they don't have the camera/lens combos I have, so making my own profiles would be a killer feature.

I think one of the key things for this is to have a seamless, easy, and quick way to integrate it into an existing workflow. DNG sounds like a good start, but batch processing based on EXIF data would take it a step further. I'm not sure how many of the features can be applied automatically, and how many need manual tweaking for each image, though. This could be batched as well, by first running the images through the tweaking dialogs, then doing the actual CPU-intensive adjustments on a second pass based on the earlier inputs.

I know you already have quite a few features to implement, but I'll propose a lens/lighting calibration feature using common color targets. I think this would complement the feature set, making it a pretty full lens and camera calibration suite - only focus calibration would be left for the hardware end.

alain
Sr. Member
Posts: 268
« Reply #2 on: November 22, 2009, 05:13:01 AM »


Why restrict yourself to an Adobe-only format? This seems rather silly to me.


feppe
Sr. Member
Posts: 2907
« Reply #3 on: November 22, 2009, 05:17:53 AM »

Quote from: alain
Why restrict yourself to an Adobe-only format? This seems rather silly to me.

AFAIK there is no working non-proprietary RAW format. The only one a quick Google search turns up is OpenRAW, but their website hasn't been updated since 2006. While DNG is far from open, and as much as I'd like to see a truly open format free from Adobe's yoke (or any one entity's), it's less locked down than CR2 or NEF, for example.
« Last Edit: November 22, 2009, 05:21:10 AM by feppe »

alain
Sr. Member
Posts: 268
« Reply #4 on: November 22, 2009, 06:18:21 AM »

Quote from: feppe
AFAIK there is no working non-proprietary RAW format. The only one a quick Google search turns up is OpenRAW, but their website hasn't been updated since 2006. While DNG is far from open, and as much as I'd like to see a truly open format free from Adobe's yoke (or any one entity's), it's less locked down than CR2 or NEF, for example.

Hi,

Jonathan is using the demosaiced data; only the color space conversion and white balance are not applied at that point. For quite a few corrections, those aren't that important.

It would be nice to have a flexible backbone that could be format-independent. Jonathan could then support several formats, or even a sort of plugin architecture. I suppose quite a few makers of RAW software and/or image editors could be buyers.

Bibble Labs uses Noise Ninja, for example, and has a plugin architecture.


Alain

Jonathan Wienke
Sr. Member
Posts: 5759
« Reply #5 on: November 22, 2009, 09:33:04 AM »

Quote from: feppe
This sounds promising! I've occasionally looked at DXO, but they don't have the camera/lens combos I have, so making my own profiles would be a killer feature.

I think one of the key things for this is to have a seamless, easy, and quick way to integrate it into an existing workflow. DNG sounds like a good start, but batch processing based on EXIF data would take it a step further. I'm not sure how many of the features can be applied automatically, and how many need manual tweaking for each image, though.

The design is based heavily on reading the EXIF data to automate adjustment parameters. The most important EXIF data are camera make/model/serial, lens make/model/serial, focal length, and aperture. The data needed to really correct lens blur properly is far too complex to adjust manually.

I'm sampling 32-64 points from the center to the edge of the image circle. Each data point has an array of values representing blur amount in various directions and at various distances from the sample point, as well as distortion and vignette correction parameters. There are separate arrays for each color channel. The user interface for manually tweaking the data would literally be a screen filled with text boxes or sliders, with no room for labels or captions to explain what they all did. The only manual interaction with the program will be specifying which DNG files are to be processed, choosing an output folder, filling in data not present in EXIF (if you use a lens that doesn't communicate electronically with the body), and entering rise/fall/shift data when using a lens with shift capability.
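A rough sketch of the profile layout described above, 32-64 radial sample points each carrying per-channel directional blur arrays plus distortion and vignetting parameters, might look something like this (Python, with purely illustrative names; this is not the actual program's code):

```python
from dataclasses import dataclass, field

@dataclass
class SamplePoint:
    """Correction data measured at one radius from the image-circle center."""
    radius: float          # normalized: 0.0 = center, 1.0 = corner
    blur: list             # per-channel arrays: blur[channel][direction][distance]
    distortion: float      # radial displacement correction at this radius
    vignette_gain: float   # multiplicative falloff correction at this radius

@dataclass
class BlurProfile:
    """One profile, keyed by the EXIF fields used to look it up."""
    camera: str
    lens: str
    focal_length_mm: float
    aperture: float        # f-number
    samples: list = field(default_factory=list)   # 32..64 SamplePoint entries

# Build a dummy 48-point profile with a mild quadratic falloff
profile = BlurProfile("Canon EOS 5D", "EF 17-40mm f/4L", 17.0, 4.0)
for i in range(48):
    r = i / 47
    profile.samples.append(SamplePoint(r, [], 0.0, 1.0 + 0.3 * r * r))

print(len(profile.samples))  # 48
```

Even this stripped-down layout makes the point about the UI: 48 points times 3 channels times a direction-by-distance blur grid is thousands of numbers, which is why tweaking them by hand is a non-starter.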

Quote
Why restrict yourself to an Adobe-only format? This seems rather silly to me.

DNG is an open file format; the specifications for creating and reading DNG files are freely available, and I can create an application that reads and writes DNG without having to pay license fees to anyone. DNG has already been adopted by several camera manufacturers as their RAW format, and DNGs can be read by many non-Adobe programs. Like it or not, it is the closest thing to a universal open RAW format out there.

Quote
Jonathan then could make it possible to have several formats or even a sort of plugin architecture.

I'm keeping the guts of the program separate from the user interface, to make it easier to integrate into a plugin or whatever for a RAW converter or image editor. I'm going to get the standalone version working first before trying to make plugin versions though.

Quote
I know you already have quite a few features to implement, but I'll propose lens/lighting calibration feature using common color targets.

All of the corrections I'm doing take place in the camera's native space, so if you're using custom DNG camera profiles they will work exactly the same whether the file has been run through my program or not, unless your camera/lens combination has significant vignetting or lens cast issues (different color balance in the center of the image circle vs the edges). In that case, you'll want to run your profiling target image through my program before feeding it to Passport or whatever.

The target for profiling the lens corrections will be completely different from a target used for color profiling; it will be an array of regularly-spaced small white dots on a black background. I'm still working on the design.

alain
Sr. Member
Posts: 268
« Reply #6 on: November 22, 2009, 10:36:09 AM »

Quote from: Jonathan Wienke
...


DNG is an open file format; the specifications for creating and reading DNG files are freely available, and I can create an application that reads and writes DNG without having to pay license fees to anyone. DNG has already been adopted by several camera manufacturers as their RAW format, and DNGs can be read by many non-Adobe programs. Like it or not, it is the closest thing to a universal open RAW format out there.

I'm keeping the guts of the program separate from the user interface, to make it easier to integrate into a plugin or whatever for a RAW converter or image editor. I'm going to get the standalone version working first before trying to make plugin versions though.
...

The target for profiling the lens corrections will be completely different from a target used for color profiling; it will be an array of regularly-spaced small white dots on a black background. I'm still working on the design.
Hi Jonathan

If you keep in mind that the majority of users isn't using DNG... I find it strange that you're not using the original RAW information, since that records the complete color information...

Remember that those targets need to be rather large (Imatest recommends at least 24" on the short side; PTLens recommends buildings), so I think a mostly white background would be far more economical. Making a test setup will be some work too, and it needs space.

Jonathan Wienke
Sr. Member
Posts: 5759
« Reply #7 on: November 22, 2009, 09:39:30 PM »

Quote from: alain
I find it strange that you're not using the original RAW information, since that records the complete color information...

You have no clue what you're talking about here. The linear RGB DNG has all of the original RAW data in it, it just has the two missing interpolated values added to the uninterpolated color channel value. Adding the interpolated values does not destroy or degrade the original uninterpolated values. If you process a linear RGB DNG and the original RAW side by side with the same conversion settings, the results are an exact match. The linear DNG conversion step has zero effect on converted image quality or one's ability to adjust color or tonality.
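Jonathan's point can be illustrated with a toy example: in an RGGB Bayer mosaic each photosite records one channel, and demosaicing only fills in the other two, without touching the measured value. A minimal numpy sketch (the crude mean-fill interpolation is purely illustrative, not how a real demosaicer works):

```python
import numpy as np

# Toy 4x4 RGGB Bayer mosaic of raw photosite values
raw = np.arange(16, dtype=np.float64).reshape(4, 4)

# "Demosaic" to 3-channel linear RGB: each measured value goes into its
# own channel untouched; the two missing channels are interpolated.
rgb = np.zeros((4, 4, 3))
measured = np.zeros((4, 4, 3), dtype=bool)
for y in range(4):
    for x in range(4):
        ch = [[0, 1], [1, 2]][y % 2][x % 2]   # RGGB channel at this site
        rgb[y, x, ch] = raw[y, x]
        measured[y, x, ch] = True
for c in range(3):
    fill = rgb[..., c][measured[..., c]].mean()   # crude stand-in interpolation
    rgb[..., c][~measured[..., c]] = fill

# The measured (uninterpolated) values survive demosaicing exactly
print(np.array_equal(rgb[measured], raw.ravel()))  # True
```

Whatever interpolation scheme fills the gaps, the original photosite values are still sitting in the linear RGB image bit-for-bit, which is the crux of Jonathan's argument.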

Quote
Remember that those targets need to be rather large (Imatest recommends at least 24" on the short side; PTLens recommends buildings), so I think a mostly white background would be far more economical.

Unless your lens blurs significantly differently at close focus distances than at infinity, the target does not need to be building-sized. But the target has to be white dots on a black background, or the image analysis routine that generates the PSF data from the target images can't make accurate calculations. There are some fundamental mathematical principles involved that can't be ignored without seriously compromising the results; it's a signal-to-noise-ratio issue. Making the target background solid black isn't that big of a deal; you only need one target to make all your profiles. If we're talking about printing your own target, the cost of paper and ink for a black-background target is not going to be a deal-breaker. Even if the ink cost $15 (highly doubtful), it would be well worth the investment. Ever heard the phrase "penny wise, pound foolish"?

alain
Sr. Member
Posts: 268
« Reply #8 on: November 23, 2009, 12:05:33 PM »

Quote from: Jonathan Wienke
You have no clue what you're talking about here. The linear RGB DNG has all of the original RAW data in it, it just has the two missing interpolated values added to the uninterpolated color channel value. Adding the interpolated values does not destroy or degrade the original uninterpolated values. If you process a linear RGB DNG and the original RAW side by side with the same conversion settings, the results are an exact match. The linear DNG conversion step has zero effect on converted image quality or one's ability to adjust color or tonality.



Unless your lens blurs significantly differently at close focus distances than at infinity, the target does not need to be building-sized. But the target has to be white dots on a black background, or the image analysis routine that generates the PSF data from the target images can't make accurate calculations. There are some fundamental mathematical principles involved that can't be ignored without seriously compromising the results; it's a signal-to-noise-ratio issue. Making the target background solid black isn't that big of a deal; you only need one target to make all your profiles. If we're talking about printing your own target, the cost of paper and ink for a black-background target is not going to be a deal-breaker. Even if the ink cost $15 (highly doubtful), it would be well worth the investment. Ever heard the phrase "penny wise, pound foolish"?

The problem is identifying the original pixels and separating them from the interpolated ones, which shouldn't be used if you're after maximum resolution.

All the info on barrel, pincushion, and mustache distortion correction that I've seen says that distance plays a role. Try shooting a target with a 17mm lens, and then also think about the flatness of the target versus its size. If you need white inside black, it's easy to use only small black patches. People who don't have a 24" or even 44" wide printer may have access to A0 plotters, but those won't plot a completely black surface. A 70×100 cm photo print on foamcore is about 75 euros here, but I doubt they will print a completely black one for that price.

Another problem is lighting a completely black surface; I suppose it needs to be very evenly lit, without reflections.




Piero
Newbie
Posts: 2
« Reply #9 on: November 23, 2009, 03:49:06 PM »

So basically you want to extract the point spread function of the lens at different locations across the frame, then do deconvolution, right?

You mention in your last post that you assume the PSF will not change with focus distance... Have you tested this assumption before going straight to programming? Because I'm very much afraid that the PSF WILL depend a lot on focus distance.
You should test this! As for targets, I think that setting up your camera in the dark and photographing an LED seen through a small aperture could be a good way to get it.

Do you think you'll have to get different PSFs for different apertures too? This will be a very long procedure!
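For anyone curious, the deconvolution step Piero describes is typically something like Richardson-Lucy iteration. A 1-D toy sketch (the real problem is 2-D with spatially varying PSFs; this just shows the mechanics):

```python
import numpy as np

def richardson_lucy(observed, psf, iters=30):
    """Basic Richardson-Lucy deconvolution on a 1-D signal."""
    psf = psf / psf.sum()
    flipped = psf[::-1]                     # correlation = convolution with flipped PSF
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iters):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate = estimate * np.convolve(ratio, flipped, mode="same")
    return estimate

# A sharp spike blurred by a small known PSF...
truth = np.zeros(21)
truth[10] = 1.0
psf = np.array([0.25, 0.5, 0.25])
observed = np.convolve(truth, psf, mode="same")

# ...is substantially re-sharpened by deconvolution
restored = richardson_lucy(observed, psf)
print(restored.max() > observed.max())  # True: peak energy is re-concentrated
```

On noise-free data like this the iteration converges back toward the spike; on real images the iteration count has to be limited (or regularized) because noise gets amplified, which ties back to Jonathan's signal-to-noise point about the target.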

Misirlou
Sr. Member
Posts: 545
« Reply #10 on: November 23, 2009, 05:13:58 PM »

The version of DxO that came out last week includes a new ability to remove distortions from non-DXO-profiled lens/camera combinations, or so it is written. I'd be surprised if it were much more than a barrel/pincushion stretch, but I haven't tried it yet.

At any rate, Jonathan's effort sounds like it has a lot of potential to me.

tived
Sr. Member
Posts: 674
« Reply #11 on: November 23, 2009, 09:09:02 PM »

Hi Jonathan,

This sounds great! When will you have a trial version ready? What are you expecting the program to cost?

Keep us informed.

Henrik

Eric Myrvaagnes
Sr. Member
Posts: 7473
« Reply #12 on: November 24, 2009, 10:34:10 AM »

Quote from: tived
Hi Jonathan,

This sounds great! When will you have a trial version ready? What are you expecting the program to cost?

Keep us informed.

Henrik
Put me on the potential early adopters list, too.

Eric



-Eric Myrvaagnes

http://myrvaagnes.com  Visit my website. New images each season.

Jonathan Wienke
Sr. Member
Posts: 5759
« Reply #13 on: November 25, 2009, 11:05:39 AM »

Quote from: Piero
As for targets, I think that setting up your camera in the dark and photographing an LED seen through a small aperture could be a good way to get it.

Do you think you'll have to get different PSFs for different apertures too? This will be a very long procedure!

A complete blur profile will need shots spanning the entire range of focal lengths and apertures, but not necessarily every possible combination. If the EXIF data shows a combination of settings that isn't in the PSF database, the set of PSFs used to process the image will be interpolated from the nearest available data. For example, if you have data for f/4 and f/8, and the shot was taken at f/5.6, the f/4 and f/8 data will be interpolated to make a custom PSF set for that image. Focal length adds an additional dimension to the algorithm.
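The f/4-to-f/8 example works out neatly because f/5.6 (really 4×√2) sits exactly halfway between f/4 and f/8 in stops, i.e. on a log2 scale of the f-number. A hypothetical sketch of that interpolation (toy numbers, not the program's actual data or method):

```python
import math

def interp_psf(psf_lo, psf_hi, n_lo, n_hi, n):
    """Linearly blend two PSF datasets by the shot's position (in stops)
    between the two profiled f-numbers n_lo and n_hi."""
    t = (math.log2(n) - math.log2(n_lo)) / (math.log2(n_hi) - math.log2(n_lo))
    return [(1 - t) * a + t * b for a, b in zip(psf_lo, psf_hi)]

psf_f4 = [0.60, 0.30, 0.10]   # toy radial blur weights measured at f/4
psf_f8 = [0.80, 0.15, 0.05]   # ...and at f/8

# f/5.6 (4 * sqrt(2)) is exactly one stop from each neighbor, so t = 0.5
blended = interp_psf(psf_f4, psf_f8, 4.0, 8.0, 4.0 * math.sqrt(2.0))
print(blended)  # the midpoint of the two profiles, ~[0.70, 0.225, 0.075]
```

Focal length would add a second interpolation axis on top of this (bilinear between the nearest profiled focal lengths and apertures).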

I don't doubt that focus distance may affect blur and distortion characteristics to some degree, but my experience is that the focal length (of a zoom lens) and aperture setting have a much greater effect on lens blur than focus distance. For example, my 17-40/4 L lens is not very sharp in the corners at 17mm at any focus distance, but at 40mm it is sharp in the corners regardless of focus distance. I'm focusing (pun intended) on the most significant factors affecting blur first.

As a practical matter, focal length and aperture are pretty much always included in EXIF data, but focus distance is only rarely included. This means that dealing with it would have to be a manual process of entering the focus distance for each image processed.

Profiling a lens won't be too onerous. Set the camera to aperture-priority mode, and adjust exposure compensation so the image is exposed to the right but the RAW data is not clipped. Select a focal length, position the target to fill the frame, and shoot a series of frames covering the entire aperture range. For zoom lenses, increment the focal length, reposition the target, and repeat as needed. When you're done, simply point the program at the folder containing the images, and it will automatically process them and add the resulting PSF datasets to its database for future use. The processing might take a while, but it will be completely automated.
« Last Edit: November 25, 2009, 11:44:40 AM by Jonathan Wienke »

Jonathan Wienke
Sr. Member
Posts: 5759
« Reply #14 on: November 25, 2009, 11:34:01 AM »

Quote from: alain
The problem is identifying the original pixels and separating them from the interpolated ones, which shouldn't be used if you're after maximum resolution.

I'm designing the program to work on target dots that are 10 pixels in diameter or larger. This avoids Bayer-interpolation problems, and also allows PSF data to be calculated at any arbitrary angle from the source pixel--something that cannot be done if the target dot is a single-pixel point source.
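One standard way to recover a PSF from a known dot target (not necessarily the method the program will use) is regularized division in the frequency domain: the observed image is the ideal target convolved with the PSF, so dividing their spectra, with a damping term for the noise floor Jonathan mentions, recovers the blur kernel. A numpy sketch under those assumptions:

```python
import numpy as np

def estimate_psf(observed, ideal, eps=1e-6):
    """Recover the blur kernel given the ideal (unblurred) dot target.
    observed = ideal (*) psf, so the PSF spectrum is roughly O / I;
    the eps term damps frequencies at or below the noise floor."""
    O = np.fft.fft2(observed)
    I = np.fft.fft2(ideal)
    H = O * np.conj(I) / (np.abs(I) ** 2 + eps)   # Wiener-style division
    return np.real(np.fft.ifft2(H))

# Ideal target: one white dot, ~10 px diameter as in the post
n = 64
yy, xx = np.mgrid[:n, :n]
ideal = ((yy - n // 2) ** 2 + (xx - n // 2) ** 2 <= 25).astype(float)

# Simulate the lens with a known asymmetric blur kernel
true_psf = np.zeros((n, n))
true_psf[:2, :2] = [[0.4, 0.2], [0.2, 0.2]]
observed = np.real(np.fft.ifft2(np.fft.fft2(ideal) * np.fft.fft2(true_psf)))

recovered = estimate_psf(observed, ideal)
print(np.max(np.abs(recovered - true_psf)))  # small residual: kernel recovered
```

Because the recovered kernel is a full 2-D array rather than a single width number, it captures exactly the directional blur the dot targets are meant to measure.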

Quote
If you need white inside black, it's easy to use only small black patches. People who don't have a 24" or even 44" wide printer may have access to A0 plotters, but those won't plot a completely black surface. A 70×100 cm photo print on foamcore is about 75 euros here, but I doubt they will print a completely black one for that price.

Small black patches around the white dots are no good; they limit the number of target dots the image can contain, and having large white areas will reduce contrast and skew the results. The farther the PSF analyzer can process image data from a target spot before reaching the noise floor, the better the PSF will be able to compensate for diffraction and other large-radius blur and contrast-reduction phenomena.

I don't see why a 97% black print would be a problem. It won't use much more ink than a normal photo printed the same size; it will just use the dark black ink exclusively instead of a mix of color inks. The cost difference is not the big deal you seem to think it is. I'd definitely recommend a matte surface to prevent stray reflections from affecting the PSF calculations.

Quote
Another problem is lighting a completely black surface; I suppose it needs to be very evenly lit, without reflections.

Yes, just like shooting any other target, whether for color profiling or whatever. It will need to be as evenly lit as possible, and as perpendicular to the camera as possible, and rigid and flat.

Quote
This sounds great! When will you have a trial version ready? What are you expecting the program to cost?

I doubt a full-featured beta version will be ready until February or March of next year. As for cost, I'm envisioning something in the vicinity of $40, assuming the user prints their own target. I figure at that price it will be much easier for casual shooters to justify than the $199 or whatever DXO is charging, and I'll sell enough additional units to make up the price difference. Obviously, the beta will be free to use, but time-limited.
« Last Edit: November 25, 2009, 11:42:22 AM by Jonathan Wienke »

Bradley Proctor
Full Member
Posts: 150
« Reply #15 on: November 25, 2009, 01:27:30 PM »

Quote from: alain
If you keep in mind that the majority of users isn't using DNG... I find it strange that you're not using the original RAW information, since that records the complete color information...

Using DNG seems to be the most logical solution to me.  The DNG is the original RAW information, just stored in a different format.

deejjjaaaa
Sr. Member
Posts: 743
« Reply #16 on: November 25, 2009, 01:39:20 PM »

Quote from: bproctor
Using DNG seems to be the most logical solution to me.  The DNG is the original RAW information, just stored in a different format.

OFFTOPIC

No, it is not, as a matter of how it is obtained... read this thread - http://forums.adobe.com/thread/528900?tstart=0 - DNG files converted by Adobe's own DNG Converter from original raw files do not have all the original information... the DNG Converter just strips some... who knows what it will strip tomorrow without much public fanfare.

Jonathan Wienke
Sr. Member
Posts: 5759
« Reply #17 on: November 25, 2009, 02:17:50 PM »

Deja, you are totally wrong. My read of that thread is that the initial "unofficial support" of the 7D didn't convert all of the metadata from the Canon RAW to DNG correctly, but the issue was fixed when Adobe updated Camera RAW and DNG Converter. The RAW image data was not affected, only the metadata. The sky isn't falling...

deejjjaaaa
Sr. Member
Posts: 743
« Reply #18 on: November 25, 2009, 02:44:20 PM »

Quote from: Jonathan Wienke
Deja, you are totally wrong. My read of that thread is that the initial "unofficial support" of the 7D didn't convert all of the metadata from the Canon RAW to DNG correctly, but the issue was fixed when Adobe updated Camera RAW and DNG Converter. The RAW image data was not affected, only the metadata. The sky isn't falling...

No, I am not, and the sky is in fact falling as usual - please read what I am referring to:

Was that PEF-to-DNG issue with the DNG Converter addressed by Adobe?

http://forums.dpreview.com/...forums/read....essage=32904790


"...The program will not work with DNG files converted from PEF's by the current version of the Adobe DNG Converter application due to this program stripping out the necessary black masked-to-light photosites at the right and bottom borders of the sensor in landscape orientation which are used by the correction algorithm..."

If the claim is correct, then people who converted PEF to DNG did in fact lose some "raw data", right?

---

reply by the author of  Rawnalyze ( http://www.cryptobola.com/PhotoBola/Rawnalyze.htm )

That conversion of K20 PEF files is still erroneous (with 5.6). This is an example of the necessity of keeping the original raw file: not only is the conversion wrong, but it also removes data from the "image" (the masked area). This should never happen.

and

This attitude of Adobe's ("we know better what is useful and what is not") comes up time and again. For example, the masked area is removed from Nikon D90 and D300 files as well; although the pixel values of the image are already black-level corrected, that data still should not be removed. The fact that one doesn't know any use for that data does not mean there can be no use for it.

---

Adobe DNG converter strips the raw data that can be used by other software (see the ref'd thread @ dpreview about the program written by GordonBGood for Pentax raw files @ high ISO).

Do you object? Does the DNG Converter irreversibly strip the data during conversion or not? Very simple question.

Bradley Proctor
Full Member
Posts: 150
« Reply #19 on: November 25, 2009, 03:03:15 PM »

Quote from: deja
No, I am not, and the sky is in fact falling as usual - please read what I am referring to:

Was that PEF-to-DNG issue with the DNG Converter addressed by Adobe?

http://forums.dpreview.com/...forums/read....essage=32904790


"...The program will not work with DNG files converted from PEF's by the current version of the Adobe DNG Converter application due to this program stripping out the necessary black masked-to-light photosites at the right and bottom borders of the sensor in landscape orientation which are used by the correction algorithm..."

If the claim is correct, then people who converted PEF to DNG did in fact lose some "raw data", right?

---

reply by the author of  Rawnalyze ( http://www.cryptobola.com/PhotoBola/Rawnalyze.htm )

That conversion of K20 PEF files is still erroneous (with 5.6). This is an example of the necessity of keeping the original raw file: not only is the conversion wrong, but it also removes data from the "image" (the masked area). This should never happen.

and

This attitude of Adobe's ("we know better what is useful and what is not") comes up time and again. For example, the masked area is removed from Nikon D90 and D300 files as well; although the pixel values of the image are already black-level corrected, that data still should not be removed. The fact that one doesn't know any use for that data does not mean there can be no use for it.

---

Adobe DNG converter strips the raw data that can be used by other software (see the ref'd thread @ dpreview about the program written by GordonBGood for Pentax raw files @ high ISO).

Do you object? Does the DNG Converter irreversibly strip the data during conversion or not? Very simple question.

I've got a simple solution: don't use Jonathan's program. The rest of us will remain interested in and supportive of his efforts.
« Last Edit: November 25, 2009, 03:03:35 PM by bproctor »
