Author Topic: Preserving tonal ratios in Lightroom  (Read 4902 times)
Jim Kasson
Sr. Member

Posts: 918


« Reply #20 on: November 12, 2013, 11:33:31 PM »

 Here's a MUCH younger version of me at the wheel of a '73 LaFrance engine (taken circa '82)

Rand,

Then you probably were born about the time this engine was manufactured (photographed in Sunnyside, Prince Edward Island, with an M240 and the 90mm Summicron).



Jim
Logged

hjulenissen
Sr. Member

Posts: 1683


« Reply #21 on: November 13, 2013, 02:27:07 AM »

Hello,
I took a Johnny-come-lately perusal of this thread. You got some great info from Bart and Eric. I want to address your desire to compute the local noise in the image: perhaps you can succeed with the technique you are trying, but I think there is a more straightforward analysis that will be helpful. First, the noise in the sensels is independent from sensel to sensel, if you can work on the raw values. The key is not to convolve adjacent sensel values. Simply record about 20 identical images and compute the noise at each pixel as the sample standard deviation using Matlab. Hope this helps.
That would work for some important kinds of noise, but not for banding.

-h
Logged
Rand47
Sr. Member

Posts: 568


« Reply #22 on: November 13, 2013, 07:37:18 AM »

Rand,

Then you probably were born about the time this engine was manufactured (photographed in Sunnyside, Prince Edward Island, with an M240 and the 90mm Summicron).



Jim

Jim,

Beautiful ... Thanks for sharing.

Rand
Logged
Rand47
Sr. Member

Posts: 568


« Reply #23 on: November 13, 2013, 07:44:48 AM »

Quote
So that was You doubling for Burt!

LOL. At this stage of my life I'm more often confused with Wilford Brimley.   Grin

Rand
Logged
bjanes
Sr. Member

Posts: 2822



« Reply #24 on: November 13, 2013, 10:30:02 AM »

I've been doing some macro photography, and I'm unsatisfied with the sharpness I'm getting. I've been trying to develop a testing regime aimed at determining the amount of blurring caused by camera vibration, an unflat field, aberrations, diffraction, and the like. To that end, I've created a target which has a fairly broad band of high-spatial-frequency energy. I am photographing the target, processing the images in Lightroom, exporting them as TIFFs, reading them into Matlab, converting them to a linear (gamma = 1) representation, and performing analysis, the critical part of which is measuring the standard deviation (or, if you prefer, root-mean-square noise) of the image.


There's a problem: the technique overcorrects the images exported from Lightroom. I tried turning off everything I could find in the Develop module, including camera calibration. It made a difference, but didn't fix things.

So I created two images that differed in exposure by a stop. I measured the ratio of the mean values of the G channels of the two raw images with RawDigger: it was 2.015. In Lightroom, using PV 2012, the ratio of the linearized Adobe RGB green channels was 1.688. With PV 2010 and PV 2003, it was 1.681. Using all three channels and converting to monochrome in Matlab produces similar results. There seems to be a tone curve applied by Lightroom that keeps ratios in the raw file from being preserved in linear representations of the converted image.

I tried Iridient Developer, and got different, but still incorrect (in the photogrammetric sense), ratios of about 1.65. The ratio varied with the processing options. I thought the raw channel mixer set to green only might produce the right ratio, but no joy.

Using dcraw, the command-line incantation "dcraw -v -4 -a -w -j -T -o1 _D437349.NEF" produces sRGB files with a green-channel mean ratio of 2.015, the same ratio as that of the raw files. Actually, it's not quite the same ratio (it differs in the fifth decimal place), but I attribute that to the change of color space from camera native to sRGB. dcraw users will note that I'm white balancing to average; leaving this out makes little difference.

Jim,

In my testing, I have found it is very difficult to obtain linear results with PV2012. However, if one uses PV2010 and sets the tone curve to linear (sliders on main page = 0 and point curve to linear) one can obtain approximately linear results by rendering into ProPhotoRGB and then converting the image to a custom space using ProPhoto primaries and a gamma of 1.0.

Here are my results with the Nikon D800e using ACR 8.2 and PV2010 with the Adobe Camera Standard profile. I photographed a Stouffer step wedge using the above technique and analyzed the results with Imatest. The results are shown graphically. The resulting gamma is approximately 1.0 and halving the exposure (the steps are 0.1 OD or 1/3 f/stop) results in half the pixel value.
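The wedge measurement above can be checked numerically: with a 0.1 OD step (about 1/3 stop), a gamma-1.0 rendering should halve the pixel value every three steps. A small sketch in Python/NumPy (simulated values, not Bill's actual Imatest data), fitting gamma as the slope of log pixel value vs. log exposure:

```python
import numpy as np

# Relative exposure for each patch of a 0.1-OD step wedge: each step
# transmits 10**-0.1 of the previous one, i.e. about 1/3 f/stop.
steps = np.arange(21)
exposure = 10.0 ** (-0.1 * steps)

# Simulated pixel values from a linear (gamma = 1.0) rendering.
pixel = 65535.0 * exposure

# Fitted gamma is the slope of log(pixel) vs log(exposure); gamma = 1.0
# means halving the exposure halves the pixel value, as reported above.
gamma = np.polyfit(np.log(exposure), np.log(pixel), 1)[0]
```

A non-linear tone curve would show up here as a fitted gamma away from 1.0, or as curvature in the log-log plot.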

Regards,

Bill
« Last Edit: November 13, 2013, 10:31:58 AM by bjanes » Logged
nma
Full Member

Posts: 161


« Reply #25 on: November 13, 2013, 10:41:09 AM »

That would work for some important kinds of noise, but not for banding.

-h

h,

The noise that the OP referred to is random. My impression is that banding is a systematic error, not random.
Logged
Jim Kasson
Sr. Member

Posts: 918


« Reply #26 on: November 13, 2013, 10:41:18 AM »

... if one uses PV2010 and sets the tone curve to linear (sliders on main page = 0 and point curve to linear) one can obtain approximately linear results by rendering into ProPhotoRGB and then converting the image to a custom space using ProPhoto primaries and a gamma of 1.0.

Bill, I'll go back to Lr and try that. But for at least the next week or so, I'm focused (ahem!) on solving my sharpness testing problem with DCRAW so I can get back to making actual photographs.

I did go back and look at my Lr settings, and, thanks to you, I found the setting I had missed: the point curve, which was set to the default of Medium Contrast. So what was happening is that the more heavily ETTR'd images (and the brighter parts of all the images) were being compressed (slope less than one), while the parts of the image nearer middle grey were being expanded (slope greater than one).
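The effect is easy to reproduce with a toy contrast curve (an illustrative S-curve of my own, not Adobe's actual Medium Contrast point curve): two linear values one stop apart no longer differ by a factor of two after the curve is applied.

```python
import numpy as np

def toy_contrast_curve(x):
    """An illustrative S-curve on [0, 1]: slope > 1 near middle grey
    (expansion), slope < 1 toward the endpoints (compression)."""
    return 0.5 + 0.5 * np.tanh(3.0 * (x - 0.5)) / np.tanh(1.5)

lo, hi = 0.3, 0.6                 # two linear values one stop apart
ratio_before = hi / lo            # 2.0 by construction
ratio_after = toy_contrast_curve(hi) / toy_contrast_curve(lo)  # not 2.0
```

Any region where the curve's slope differs from one distorts the ratio; only a straight line through the origin (in linear space) preserves it.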

So, problem solved, subject to my verification.

Thanks!

Jim
Logged

Jim Kasson
Sr. Member

Posts: 918


« Reply #27 on: November 13, 2013, 11:04:12 AM »

I want to address your desire to compute the local noise in the image: perhaps you can succeed with the technique you are trying, but I think there is a more straightforward analysis that will be helpful. First, the noise in the sensels is independent from sensel to sensel, if you can work on the raw values. The key is not to convolve adjacent sensel values. Simply record about 20 identical images and compute the noise at each pixel as the sample standard deviation using Matlab. Hope this helps.

I'm afraid that I've caused confusion by using the term "rms noise" in the original post, and I apologize for that. What I meant by that term is the same algorithm used to calculate the standard deviation. Basically, it's the rms value of the deviations from the mean. Engineers often refer to that calculation as rms noise, even when the thing causing the deviations isn't, strictly speaking, stochastic. That's the way I was using the term, and it was sloppy, and it gave you and others the wrong impression. Another way to think of the calculation is to use a term more often used in one-dimensional signal processing: the rms value of the ac component.
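In code, the identity is immediate (Python/NumPy here purely as a stand-in for the Matlab calculation):

```python
import numpy as np

x = np.array([12.0, 15.0, 11.0, 14.0, 13.0])  # any set of samples

# "RMS noise" in the sense used above: the rms value of the deviations
# from the mean, i.e. the rms of the ac component of the signal.
rms_ac = np.sqrt(np.mean((x - x.mean()) ** 2))

# This is exactly the (population) standard deviation.
std = x.std()
```

The two expressions are the same arithmetic, which is why the term gets used even when the deviations are signal rather than stochastic noise.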

In the images that I am processing, most of the variation from the mean value is signal: that is, it is caused by the target, as interpreted by the imaging system. I am well away from the read noise, so banding is not an issue. However, there is one source of real noise: photon/shot noise. There is another source of what can be thought of as noise: pixel response non-uniformity. The first can be averaged out as you suggest. The second cannot. I am operating in a portion of the dynamic range of the camera where PRNU is at least as large as photon noise.

You got me thinking about this, so I did some testing. I averaged the ten exposure-corrected images in each set, and compared the standard deviation of the averaged image to the average standard deviation of the ten individual images. The errors ranged from 0.05% to 0.14% in the eight-set aperture series I used for this experiment.
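That comparison can be sketched as follows (simulated data in Python/NumPy, not the actual Matlab pipeline): when signal variation dominates the temporal noise, the std dev of the averaged image and the average of the per-image std devs agree to a small fraction of a percent, as in the figures quoted above.

```python
import numpy as np

rng = np.random.default_rng(1)

# A simulated test-target signal plus independent per-frame shot-like noise.
signal = rng.uniform(500.0, 3000.0, size=(32, 32))
frames = [signal + rng.normal(0.0, np.sqrt(signal)) for _ in range(10)]

# Std dev of the averaged image: temporal noise attenuated by sqrt(10).
sd_of_average = np.mean(frames, axis=0).std()

# Average of the per-image std devs: temporal noise fully present.
average_sd = np.mean([f.std() for f in frames])

# With signal variation dominating, the two differ by well under 1%.
error_pct = 100.0 * abs(sd_of_average - average_sd) / average_sd
```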

I could come up with a way to reduce the effect of the photon noise by computing the gain factor from sensor electrons to ADC counts (camera ISO / Unity Gain ISO), computing the photon noise standard deviation as the square root of the average sensor electrons count, and subtracting that in quadrature from the standard deviation I've already computed. I'm resisting that, because it seems unnecessary, and because it will make the whole procedure more complicated to explain.
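The correction described (and resisted) above would look roughly like this; the function name and the DN-per-electron convention are mine, sketched from the description, not Jim's actual code:

```python
import math

def remove_photon_noise(measured_sd_dn, mean_dn, iso, unity_gain_iso):
    """Subtract the estimated photon-noise contribution, in quadrature,
    from a measured standard deviation (both in ADC counts, DN).

    Gain in DN per electron is taken as iso / unity_gain_iso, per the
    convention in the post above.
    """
    gain = iso / unity_gain_iso
    mean_electrons = mean_dn / gain
    photon_sd_dn = math.sqrt(mean_electrons) * gain  # sqrt(e-) -> DN
    # Quadrature subtraction; clamp at zero if the estimate exceeds
    # the measurement (noise-dominated patches).
    return math.sqrt(max(measured_sd_dn ** 2 - photon_sd_dn ** 2, 0.0))
```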

I could also compute a correcting image for each camera by making a bunch of images of a flat field, applying lighting correction, and averaging to get a pixel response map which I could use to correct for PRNU. I am resisting that for the same reasons as above.
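A sketch of that flat-field approach (a hypothetical helper in Python/NumPy, again not Jim's code):

```python
import numpy as np

def prnu_gain_map(flat_frames, lighting=None):
    """Estimate a per-pixel gain map from a stack of flat-field frames.

    flat_frames: (N, H, W) array of flat-field captures.
    lighting: optional (H, W) map of illumination falloff to divide out.
    Returns a unit-mean gain map; dividing an image by it corrects PRNU.
    """
    avg = np.asarray(flat_frames).mean(axis=0)  # averaging removes temporal noise
    if lighting is not None:
        avg = avg / lighting                    # remove lighting non-uniformity
    return avg / avg.mean()                     # normalize to mean 1.0
```

Averaging suppresses the temporal (shot/read) noise, so what survives in the map is the fixed per-pixel response variation, which is exactly the component that cannot be averaged out frame to frame.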

Thanks for your interest, and, again, I apologize for the confusion.

Jim
« Last Edit: November 13, 2013, 11:08:34 AM by Jim Kasson » Logged

MirekElsner
Jr. Member

Posts: 75


« Reply #28 on: November 20, 2013, 11:08:33 AM »

So, while I have a solution for raw conversion that I can use, it's much less convenient for me than doing the conversions in Lightroom.

So, my question is: how do I set up Lightroom so that the tone ratios of the original raw file are preserved in a linear representation of the output file?


Hi Jim, I would think there are so many places in the application, between capture and export, where this can be influenced that this question can only be answered by somebody from the LR team, like Eric. That said, I am trying to figure out how you want to compensate for the blur, and whether there aren't tools already available in LR and PS that do it the same way. LR can do deconvolution sharpening. LR can do a sort of high-radius/low-amount sharpening (your 400x400 matrix?). Photoshop can do several types of deblur, including motion and lens blur. Photoshop and plugins can do DOF stacking.
Logged
hjulenissen
Sr. Member

Posts: 1683


« Reply #29 on: November 20, 2013, 12:39:39 PM »

In the images that I am processing, most of the variation from the mean value is signal: that is, it is caused by the target, as interpreted by the imaging system. I am well away from the read noise, so banding is not an issue. However, there is one source of real noise: photon/shot noise. There is another source of what can be thought of as noise: pixel response non-uniformity. The first can be averaged out as you suggest. The second cannot. I am operating in a portion of the dynamic range of the camera where PRNU is at least as large as photon noise.
If you have a sufficiently "smooth" target and no temporal (signal) variation, and you fire N shots, then (ideally) all sensels within a color plane should be the same. The samples can be seen as a 3-D array with variation ("noise") along the vertical, horizontal, and temporal axes.

Zero-mean noise added to a "smooth" signal (in time and/or space) can be reduced by averaging. But is all "noise" zero-mean? Not all signals are that "smooth".

How can this variation be visualised and interpreted? What does it tell us? Is it used by the big guys presenting sensor analysis?

If you "dissect" the noise response of a given camera at various temperatures and settings into temporal and spatial components, is it possible to improve noise reduction (even for single-image input)?
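One way to dissect the variation along those axes (a sketch with simulated data in Python/NumPy): the temporal component is the per-pixel variance over time, and the fixed-pattern (spatial) component is what remains in the time-averaged frame after accounting for residual temporal noise.

```python
import numpy as np

rng = np.random.default_rng(2)

# N frames of a nominally uniform patch: a fixed per-pixel pattern
# (PRNU/banding-like, constant over time) plus zero-mean temporal noise.
fixed_pattern = rng.normal(0.0, 2.0, size=(16, 16))
frames = np.stack([1000.0 + fixed_pattern + rng.normal(0.0, 5.0, size=(16, 16))
                   for _ in range(50)])

# Temporal component: variance over time, averaged over pixels (~5**2 here).
temporal_var = frames.var(axis=0, ddof=1).mean()

# Spatial (fixed-pattern) component: variance of the time-averaged frame,
# minus the residual temporal variance surviving the average (~2**2 here).
n = frames.shape[0]
spatial_var = frames.mean(axis=0).var(ddof=1) - temporal_var / n
```

The fixed-pattern term is exactly the part that single-image averaging cannot touch, which is why separating the two components matters for noise-reduction design.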

-h
« Last Edit: November 20, 2013, 12:44:44 PM by hjulenissen » Logged
Bill Koenig
Sr. Member

Posts: 354


« Reply #30 on: November 20, 2013, 02:32:10 PM »

Check out Helicon Remote; it automates focus stacking from a laptop, desktop, or Android device.
I use it with my D7000 and Nexus 7 tablet.
If you need 100 shots to get the needed stack, no problem with Helicon Remote.
Do a search; they have a video.
Logged

Bill Koenig,
