Author Topic: Calibrating out Pixel Response Nonuniformity, or not?  (Read 9087 times)
Jim Kasson
« on: April 05, 2013, 10:52:54 PM »
I woke up this morning with an idea: calibrating out pixel response nonuniformity (PRNU). First off, what is it? It's a kind of noise in digital camera systems. It's not actual noise, since it's predictable. It stems from the fact that all photosites in a given color plane of a digital camera don't have the same sensitivity to photons. If a group of photosites on the sensor of a digital camera are all exposed to the same photon flux for the same length of time, some will read higher values than others. If you consider the systematic errors to be a noise signal, the amplitude of the PRNU signal varies directly with the number of photons hitting the photosite, and therefore PRNU is greatest in the highlight areas of an image.

My idea for calibrating it out was bog-simple: construct a map of the pixel sensitivities, invert it, and multiply the values in the raw image by the inverted map. In the dim, distant future, cameras might do that themselves, with the maps generated by the manufacturers and burned into the camera before it is shipped. In the medium term, raw developer coders could have you make a series of calibration images, and use them to correct each image from that camera as it's "developed".
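A minimal sketch of that correction, using a synthetic sensitivity map rather than a measured one (the map size, signal level, and 0.4% spread are all illustrative numbers):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-photosite gain map: mean 1.0, with a 0.4% standard
# deviation, roughly the PRNU magnitude measured below.
sensitivity = 1.0 + 0.004 * rng.standard_normal((100, 100))

# A noiseless "raw" flat exposure: a uniform signal scaled by each
# photosite's gain, so the only variation left is the PRNU itself.
signal = 10000.0
raw = signal * sensitivity

# The calibration step: invert the map and multiply it into the raw values.
corrected = raw * (1.0 / sensitivity)

print(np.std(raw) / np.mean(raw))              # about 0.004 before correction
print(np.std(corrected) / np.mean(corrected))  # essentially 0 after
```

In practice the map would itself have to be estimated from many averaged flat-field exposures, which is where the rest of this post goes.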

Before I got too carried away with the idea, I thought I'd test a couple of cameras to discover the nature of PRNU. I made 256 exposures of a defocused white wall, first with the Nikon D4, and then with the Sony NEX-7. Those are the two cameras I have handy that span the greatest range of photosite density.

Here's how I got to 256 exposures. I figured that the PRNU and the shot noise at full scale might be comparable. If I wanted the PRNU data to be accurate, I should do enough averaging to reduce the shot noise by about a decimal order of magnitude. The square root of 256 is 8 [No, it's 16. Oops!], so that's pretty close. Besides, I just couldn't stomach the idea of making 1024 exposures. I did compute a corrected PRNU by subtracting the shot noise in quadrature, but it turned out to be a small correction; 256 images were enough. Whew!

I used the base ISO (100 on both cameras), and a shutter speed of 1/30 of a second at an aperture which gave me an exposure in the green channels of the raw images about one stop below clipping. I saved the still-mosaiced raw images as TIFFs, and averaged all 256 images for each camera. Then I wrote a program to extract each color plane from the averaged images and compute the mean and standard deviation of the data in a 200x200-pixel central region (that's 10,000 pixels in each of the four color planes).
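In code, that measurement pipeline amounts to something like the following sketch (the frame stack and crop size are placeholders; the actual work was done on averaged TIFFs):

```python
import numpy as np

def prnu_percent(frames, crop=200):
    """Average a stack of mosaiced frames, take a central crop, split it
    into the four Bayer positions, and return std/mean (in %) per plane."""
    avg = np.mean(np.stack(frames), axis=0)  # averaging beats down the shot noise
    h, w = avg.shape
    y0, x0 = (h - crop) // 2, (w - crop) // 2
    region = avg[y0:y0 + crop, x0:x0 + crop]
    planes = [region[0::2, 0::2], region[0::2, 1::2],  # the four CFA positions
              region[1::2, 0::2], region[1::2, 1::2]]
    return [100.0 * np.std(p) / np.mean(p) for p in planes]
```

A 200x200 crop gives 100x100 = 10,000 samples per color plane, matching the numbers above.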

I looked at the histogram of the data and verified that the PRNU appears Gaussian:



I brought the statistics into Excel and computed the standard deviation of the PRNU as a percentage of the signal. Averaged over all four channels, the D4 PRNU was 0.29% of the signal and the NEX-7 PRNU was 0.41%. I then calculated the shot noise, and extrapolated it to what it'd be for a nearly full-scale signal. Then I calculated the ratio of the PRNU to the shot noise at full scale. For the D4 it's 97%, and for the NEX-7 it's 70%.

Then I got a whole lot less excited about this project. I don't hear a lot of people complaining about noise in the highlight values of digital images. Even if the PRNU is the same as the shot noise, calibrating the PRNU out will only reduce the overall highlight noise to 70% of what it was (because shot noise and PRNU are uncorrelated, you can't just subtract the numbers). Doesn't seem like it's worth the effort.
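The 70% figure follows from the way uncorrelated noise sources combine in quadrature; a quick check of the arithmetic:

```python
import math

shot = 1.0   # shot noise at full scale (arbitrary units)
prnu = 1.0   # PRNU of the same magnitude -- roughly the D4 case

total = math.sqrt(shot**2 + prnu**2)  # uncorrelated noises add in quadrature
remaining = shot / total              # fraction left once PRNU is calibrated out

print(round(remaining, 3))  # 0.707 -- about 70% of the original noise
```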

I was taught that negative results were as valuable as positive ones, so I'm posting this. Also, I could certainly have made some errors in my thinking, and I'd like anyone who sees a problem with what I've just written to let me know.
« Last Edit: April 06, 2013, 11:20:00 AM by Jim Kasson »

Jack Hogan
« Reply #1 on: April 06, 2013, 04:44:40 AM »

Very nice, Jim, especially the confirmation that PRNU has a Gaussian distribution. For the D4 I get 0.29% at ISO 100 and 0.32% at ISO 200 from DxO's SNR curves. I typically prefer the ISO 200 values because I am more confident about having gotten a clear view of the shot noise tangent.

It'd be interesting to know how much of PRNU is due to stuff before the silicon (lenses, filters, CFA) vs the rest. As a reference, the D5200 (sensor made by Toshiba) appears to have slightly better PRNU than the D7100 with apparently the same sensor: 0.52% vs 0.62% respectively at ISO 100, and 0.58% vs 0.65% at ISO 200. The D7100 apparently does not have an OLPF/AA at all, suggesting - I think - that we are seeing more clearly the non-uniformities created by and before the AA.

The situation is reversed with the D800e and the D800, which use the same sensor made by Sony - with 0.28% and 0.50% respectively at ISO 100 - and 0.36% and 0.53% at ISO 200. They both have a 4-dot beam splitter in the light path as an OLPF/AA, which is however reversed out in the 'e'.

[EDIT] Since I was at it, I also checked the other camera pair with/without AA filter that I have data for: the Pentax K5II/s, with sensors fabricated by Sony, I believe, of roughly the same pixel pitch as the D800/e. Here again, as for the D800/e pair, the K5IIs with no AA appears to have better PRNU performance than the K5II with AA (0.32%-0.32% vs 0.38%-0.39% at ISO 100 and 200 respectively). Sony's fabrication process is obviously more precise, creating fewer non-uniformities than Toshiba's, although Toshiba's sensels are 30% smaller by area. [/EDIT]

Why does PRNU performance increase without an AA filter in the Sony-made sensors?
« Last Edit: April 06, 2013, 09:18:03 AM by Jack Hogan »
bjanes
« Reply #2 on: April 06, 2013, 07:19:43 AM »


I brought the statistics into Excel and computed the standard deviation of the PRNU as a percentage of the signal. Averaged over all four channels, the D4 PRNU was 0.29% of the signal and the NEX-7 PRNU was 0.41%. I then calculated the shot noise, and extrapolated it to what it'd be for a nearly full-scale signal. Then I calculated the ratio of the PRNU to the shot noise at full scale. For the D4 it's 97%, and for the NEX-7 it's 70%.

Then I got a whole lot less excited about this project. I don't hear a lot of people complaining about noise in the highlight values of digital images. Even if the PRNU is the same as the shot noise, calibrating the PRNU out will only reduce the overall highlight noise to 70% of what it was (because shot noise and PRNU are uncorrelated, you can't just subtract the numbers). Doesn't seem like it's worth the effort.

I was taught that negative results were as valuable as positive ones, so I'm posting this. Also, I could certainly have made some errors in my thinking, and I'd like anyone who sees a problem with what I've just written to let me know.


Jim,

Nice work! I agree that it may not be worth reducing PRNU, since it is not apparent in most images. I'm no visual psychologist, but I understand that the human perceptual system responds to relative differences in luminance, and 0.3% is below the threshold of about 1% predicted by the Weber-Fechner law. See the Norman Koren link for a brief explanation.

Since PRNU is proportional to the signal, the next step in your research would be to derive a coefficient for each pixel that would equalize the gain of each photosite. For a 30MP sensor, this would require a large table, but our computers could easily handle the computation.

Regards,

Bill
Jim Kasson
« Reply #3 on: April 06, 2013, 10:11:29 AM »

For the D4 I get 0.29% at ISO 100 and 0.32% at ISO 200 from DxO's SNR curves.  I typically prefer the ISO 200 values because I am more confident about having gotten a clear view of the shot noise tangent.

Jack,

The way I'm doing the calculation, I'd need 512 exposures at ISO 200 to get the same accuracy as I get with 256 at ISO 100. The PRNU noise would be half as much, measured in electrons, and the shot noise would be 0.707 as much, so I'd need twice as many exposures to get the averaged shot noise down to the same percentage of the PRNU. I suppose I could make that many exposures with the D4. With the NEX-7 it would be a pain. With the M9, it would be excruciating.
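The exposure-count arithmetic above can be sketched like this, under the scaling assumptions stated in the paragraph (each stop of ISO halves the PRNU in electrons and cuts the shot noise by sqrt(2)):

```python
import math

def exposures_needed(base_n, stops_above_base):
    """Frames needed so the averaged shot noise stays at the same fraction
    of the electron-count PRNU as at base ISO. Averaging N frames divides
    shot noise by sqrt(N), so N must scale as (shot/PRNU)**2."""
    prnu_scale = 0.5 ** stops_above_base                 # PRNU halves per stop
    shot_scale = (1 / math.sqrt(2)) ** stops_above_base  # shot noise falls by sqrt(2)
    return base_n * (shot_scale / prnu_scale) ** 2

print(round(exposures_needed(256, 1)))  # 512 at ISO 200
```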

Why does PRNU as a percentage of signal vary with ISO? If the effect of PRNU is entirely on the number of electrons in the well, it shouldn't. There's probably a variation in gain that affects it as well. If we can measure that, we'll have a partial answer to the "does the PRNU all take place before the quantizing to electrons or not" question from the Unity Gain ISO thread.

I wonder if there's a way to compare the accuracy of your method for getting PRNU to mine? If you can describe it to me in detail, I can try both methods on the computer camera model and compare the results to the values programmed into the model.

Jim
« Last Edit: April 06, 2013, 10:32:44 AM by Jim Kasson »

Jim Kasson
« Reply #4 on: April 06, 2013, 10:21:47 AM »

Since PRNU is proportional to the signal, the next step in your research would be to derive a coefficient for each pixel that would equalize the gain of each photosite. For a 30MP sensor, this would require a large table, but our computers could easily handle the computation.

Bill, that was my intent going in. In fact, I have such a table for the D4 and the NEX-7. In the case of the D4, with the PRNU and the shot noise essentially equal at full scale at ISO 100, you don't need to calibrate out the PRNU to see what the image would look like with it calibrated out. Instead, you can average a bunch of images to drive the shot noise way down, and what you're left with is the PRNU. Since both are Gaussian, eliminating the PRNU and eliminating the shot noise should produce essentially the same visual effect.

In order to see the differences between the averaged and un-averaged images, you should probably judge the images on a 10-bit display system. If eight bits is enough for you, you can see some of my averaging results here.

Jim

Chris Warren
« Reply #5 on: April 06, 2013, 11:08:08 AM »

Hi Jim,

This is nice.  I have been doing some similar things with my D40 here:
http://www.luminous-landscape.com/forum/index.php?topic=61636.msg502869#msg502869
I have found that with a FW near 28,000 e and PRNU = 0.57%, shot noise (167 e) and PRNU 'noise' (160 e) are pretty similar at max signal. From this, I went on to take 100 flat fields (remember, sqrt(256) is not 8!) and convince myself that I could eliminate the PRNU and get shot-noise limited. This was tedious, as I split out frames in IRIS and then did the processing and averaging in ImageJ; I wished there were an easier way, like an in-camera button, etc.
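A quick check of those D40 numbers, taking the full-well and PRNU figures above as given:

```python
import math

fwc = 28000     # full well, electrons (the D40 estimate above)
prnu = 0.0057   # PRNU as a fraction of the signal

shot_e = math.sqrt(fwc)  # shot noise at full well is sqrt(signal) electrons
prnu_e = prnu * fwc      # PRNU "noise" scales linearly with the signal

print(round(shot_e), round(prnu_e))  # 167 160
```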

However, the point is whether it is worthwhile to do so. I think it is. I went into some of this in a discussion here:
http://www.dpreview.com/forums/post/38963273
I think that if we have a camera that is capable of 100:1 SNR for a uniform area, but the detail is only 10:1, then it would be good to go for as much SNR as possible, and try to raise this to 150:1 if we can, to try to pull out the details better. IOW, if we pay good money for a sensor that is capable of 150:1, or even 200:1, when shot-noise limited, then why not have this capability or option?

Chris
Jim Kasson
« Reply #6 on: April 06, 2013, 11:21:36 AM »

(remember sqrt(256) is not 8!)

Chris, my face is red! Corrected.

Jim

Jim Kasson
« Reply #7 on: April 06, 2013, 11:29:15 AM »

However, the point is whether it is worthwhile to do so. I think it is. I went into some of this in a discussion here:
http://www.dpreview.com/forums/post/38963273

Chris, thanks for the pointer. Given a few days to add mosaiced sensors and CFAs to my camera simulation, I should be able to model this. I can do it more quickly for a Foveon-like camera. I'll take a look at it. I'll think about a target some more, but right now I kind of like the idea of a sin(x)/x radial pattern superimposed on a grey field. Does that make sense to you?

Jim

Jim Kasson
« Reply #8 on: April 06, 2013, 11:34:36 AM »

I have been doing some similar things with my D40 here:
http://www.luminous-landscape.com/forum/index.php?topic=61636.msg502869#msg502869

I like this post, Chris. One thing that you found out is that PRNU hardly varies at all with ISO. That makes sense if the causes of PRNU are all before the point where the number of electrons in the well is determined.

Jim

Jack Hogan
« Reply #9 on: April 06, 2013, 11:53:08 AM »

Why does PRNU as a percentage of signal vary with ISO?

Good question, Jim, I've been wondering about that myself. I calculate PRNU from DxO Total SNR data at 100%, subtracting the shot noise component quadratically. I estimate shot noise at 100% (and therefore FWC) by finding an area on the SNR curve unaffected by read noise or PRNU and extrapolating from it: the 100% intercept of the 3dB/octave tangent to the SNR curve is shot noise at full scale, assuming that the tangent is sitting on a portion of the curve where there is just shot noise. This area is usually around 0.8-5% of full scale for clean (Sony-sensored) cameras at ISO 100; as gain/ISO is raised, it naturally moves towards full scale until it gets overwhelmed by read noise and becomes unusable for our purposes - usually around ISO 6400.
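The quadrature-subtraction step can be sketched as follows (the SNR figures here are made-up illustrations, not DxO data): relative noise is 1/SNR, and the total noise at 100% is the quadrature sum of the shot-noise and PRNU components.

```python
import math

def prnu_from_snr(snr_total_100, snr_shot_100):
    """Relative noise n = 1/SNR; n_total**2 = n_shot**2 + n_prnu**2,
    so the PRNU fraction is the quadrature difference."""
    n_total = 1.0 / snr_total_100
    n_shot = 1.0 / snr_shot_100
    return math.sqrt(n_total**2 - n_shot**2)

# Illustrative numbers: the tangent extrapolates to a shot-noise-only SNR
# of 200 at full scale, and a 0.4% PRNU drags the measured total below it.
snr_shot = 200.0
true_prnu = 0.004
snr_total = 1.0 / math.hypot(1.0 / snr_shot, true_prnu)

print(round(100 * prnu_from_snr(snr_total, snr_shot), 2))  # recovers 0.4 (%)
```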



It doesn't work well at low ISOs for cameras with large read noise (and/or PRNU), such as the D4 or most Canons, which may not have a shot-noise-only portion in the ISO 100 SNR curve from which to properly extrapolate shot noise/FWC at full scale. The tangent is lower than it should be, so shot noise at 100% and FWC are underestimated. Here are tangents drawn on the D4 DxOMark.com full SNR curves at ISO 100 and 800. Note how the heavy-duty read noise at ISO 100 has pushed the tangent further up the curve than in the case of the cleaner RX-1 above, so that now we are starting to enter the PRNU-polluted portion of the curve. We may never get a reading of just shot noise with the D4 at ISO 100. (On the other hand, at ISO 800 PRNU is too small for an accurate reading, so in these cases the sweet spot for PRNU measurement is usually around ISO 200, sometimes 400.)



It is sometimes evident when the ISO 100 curve does not have a shot-noise-only portion, because FWC, average absolute QE and PRNU come out lower than expected compared to ISO 200 and 400. This is what one gets by reading off tangents and other data for the D4 in the graph above, for instance:



Note FWC, QE and PRNU as outliers at ISO 100.  This suggests that the area of the ISO 100 curve where the tangent rests is polluted by other sources of noise.  By raising FWC to bring PRNU in line with ISO 200, QE starts to look right, and relative gains as calculated through Ssat and FWC fall into place.  We have achieved symmetry - but this does not say much about the accuracy of the figures.



Jack
« Last Edit: April 06, 2013, 12:26:26 PM by Jack Hogan »
Jim Kasson
« Reply #10 on: April 06, 2013, 12:48:57 PM »

I'll think about a target some more, but right now I kind of like the idea of a sin(x)/x radial pattern superimposed on a grey field. 

This?



Or this?



Or maybe even this (produced while debugging)?



Jim

BartvanderWolf
« Reply #11 on: April 06, 2013, 01:08:30 PM »

Hi Jim,

A few suggestions/questions to clarify the procedure.

My idea for calibrating it out was bog-simple: construct a map of the pixel sensitivities,

Normalize to an average of 1.0,

Quote
invert it, and multiply the values in the raw image by the inverted map.

Quote
In the dim, distant future, cameras might do that themselves, with the maps generated by the manufacturers and burned into the camera before it is shipped. In the medium term, raw developer coders could have you make a series of calibration images, and use them to correct each image from that camera as it's "developed".

This is similar to what astrophotographers do when they remove the vignetting from an image: shoot a number of 'flats' and average them (they also subtract the bias), normalize to a maximum of 1 (same for all channels, or separately per channel), and divide the image by that, all while still in linear gamma space and preferably before demosaicing.
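A sketch of that flat-field flow on synthetic arrays (real flats would come from averaged raw files; here the normalization is to a mean of 1.0 rather than a maximum of 1, which suits the PRNU case):

```python
import numpy as np

def build_flat(flats, bias):
    """Average the flats, subtract the bias, and normalize the result so
    dividing by it leaves the overall image level unchanged."""
    master = np.mean(np.stack(flats), axis=0) - bias
    return master / np.mean(master)  # normalize to an average of 1.0

def apply_flat(raw, bias, flat):
    # Divide in linear space, before demosaicing.
    return (raw - bias) / flat
```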
 
Quote
Here's how I got to 256 exposures. I figured that the PRNU and the shot noise at full scale might be comparable. If I wanted the PRNU data to be accurate, I should do enough averaging to reduce the shot noise by about a decimal order of magnitude. The square root of 256 is 8, so that's pretty close.

Probably a typo, the square root of 256 is 16, but maybe you intended to say something else, 8 stops? I saw Chris caught it as well.

Quote
Besides, I just couldn't stomach the idea of making 1024 exposures. I did compute a corrected PRNU by subtracting the shot noise in quadrature, but it turned out to be a small correction; 256 images were enough. Whew!

I get the subtraction in quadrature to reduce pattern noise and other non-random effects, but what do you mean by a small correction? The subtraction gives the shot noise; then what did you do to determine the PRNU, subtract the sum of the earlier averaged signal result and the averaged shot noise from the individual files?

Quote
I looked at the histogram of the data and verified that the PRNU appears Gaussian:


Just to be sure, this is the pattern-noise residual (see above)?

Quote
I brought the statistics into Excel and computed the standard deviation of the PRNU as a percentage of the signal. Averaged over all four channels, the D4 PRNU was 0.29% of the signal and the NEX-7 PRNU was 0.41%. I then calculated the shot noise, and extrapolated it to what it'd be for a nearly full-scale signal. Then I calculated the ratio of the PRNU to the shot noise at full scale. For the D4 it's 97%, and for the NEX-7 it's 70%.

Is that 97% and 70% PRNU? Seems high ...

Quote
I was taught that negative results were as valuable as positive ones, so I'm posting this.

Yes, much appreciated. The rejection of a hypothesis also gives valuable information.

Cheers,
Bart
« Last Edit: April 06, 2013, 03:15:22 PM by BartvanderWolf »
Fine_Art
« Reply #12 on: April 06, 2013, 09:09:00 PM »

It's news to me that this is not normalized as a QC step before the camera leaves the factory. I know they go through a process for removing hot/stuck pixels which is burned into the pixel map. I imagined this as test shots against a screen that should produce a set number in the RAW values. I assumed there would be tests for R, G, B values at high, medium, and low levels, to ensure the cameras are outputting what they should.

The fact that you are getting this variability means their QC tests apparently do not cover this. Does anyone know what kinds of tests are done before a camera ships?
MichaelEzra
« Reply #13 on: April 07, 2013, 04:29:44 AM »

You could try this technique by using raw Flat Field correction in RawTherapee and setting the blur radius to 0.
If you place all your flat fields in the same directory and point to it in the preferences, RawTherapee will average the flat fields automatically, matching by camera manufacturer, camera model, lens, focal length, and aperture, and then apply the flat field correction.
« Last Edit: April 07, 2013, 04:45:15 AM by MichaelEzra »

BartvanderWolf
« Reply #14 on: April 07, 2013, 05:50:38 AM »

You could try this technique by using raw Flat Field correction in RawTherapee and setting the blur radius to 0.
If you place all your flat fields in the same directory and point to it in the preferences, RawTherapee will average the flat fields automatically, matching by camera manufacturer, camera model, lens, focal length, and aperture, and then apply the flat field correction.

Hi Michael,

That RawTherapee will average flat fields automatically is new to me, and that averaging is essential to avoid noise amplification with a small or zero blur radius. AFAIK it isn't mentioned in the manual.

According to the manual:
Quote from: RawTherapee manual
Blur Radius
Blur radius controls the degree of blurring of the flat field data. Default value of 32 is usually sufficient to get rid of localised variations of raw data due to noise. Setting the Blur Radius to 0 skips blurring process and allows to correct for dust and other debris on the sensor (as long as their position has not changed) at the expense of carrying noise from the flat field file into the corrected image. If such correction is desired, it is advisable to create flat field files with minimum amount of noise at the lowest ISO setting and optimal light exposure.

Could you confirm the averaging behavior, and does it apply to Dark frames as well? That would be great news!

Cheers,
Bart
MichaelEzra
« Reply #15 on: April 07, 2013, 06:07:07 AM »

Hi Bart, yes, both flat field and dark frame use averaging if more than a single matching frame is found.

See line 107
http://code.google.com/p/rawtherapee/source/browse/rtengine/ffmanager.cc

See line 110
http://code.google.com/p/rawtherapee/source/browse/rtengine/dfmanager.cc
« Last Edit: April 07, 2013, 06:14:06 AM by MichaelEzra »

BartvanderWolf
« Reply #16 on: April 07, 2013, 06:33:13 AM »

Hi Bart, yes, both flat field and dark frame use averaging if more than a single matching frame is found.

Michael, that's marvelous. How is the match determined?

The remark in the program code says "averaging of flatfields if more than one is found matching the same key". What's the trigger/key? I may find it by studying the program code, but perhaps you can beat me to it with a further explanation.

Cheers,
Bart
« Last Edit: April 07, 2013, 06:34:47 AM by BartvanderWolf »
MichaelEzra
« Reply #17 on: April 07, 2013, 06:48:54 AM »

For flat fields    (ffInfo::key):  camera manufacturer, camera model, lens, focal length, aperture

  The search for the best match is twofold:
  if perfect matches by key are found, then the list is scanned for lesser distance in time
  otherwise if no match is found, the whole list is searched for lesser distance in lens and aperture


For dark frames (dfInfo::key): camera manufacturer, camera model, ISO, shutter speed

  The search for the best match is twofold:
  if perfect matches by key are found, then the list is scanned for lesser distance in time
  otherwise if no match is found, the whole list is searched for lesser distance in iso and shutter
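Michael's two-stage matching can be sketched like this (a simplification of the logic above, not RawTherapee's actual C++; the dictionary field names are made up):

```python
def find_dark_frame(frames, make, model, iso, shutter, taken_at):
    """Exact key matches are ranked by distance in capture time; if there is
    no exact match, fall back to the nearest frame by ISO and shutter speed."""
    key = (make, model, iso, shutter)
    exact = [f for f in frames
             if (f["make"], f["model"], f["iso"], f["shutter"]) == key]
    if exact:
        return min(exact, key=lambda f: abs(f["time"] - taken_at))
    return min(frames,
               key=lambda f: (abs(f["iso"] - iso), abs(f["shutter"] - shutter)))
```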
« Last Edit: April 07, 2013, 08:00:49 AM by MichaelEzra »

BartvanderWolf
« Reply #18 on: April 07, 2013, 07:47:58 AM »

For flat fields    (ffInfo::key):  camera manufacturer, camera model, lens, focal length, aperture

  The search for the best match is twofold:
  if perfect matches by key are found, then the list is scanned for lesser distance in time
  otherwise if no match is found, the whole list is searched for lesser distance in lens and aperture


For dark frames (dfInfo::key): camera manufacturer, camera model, ISO, shutter speed, aperture

  The search for the best match is twofold:
  if perfect matches by key are found, then the list is scanned for lesser distance in time
  otherwise if no match is found, the whole list is searched for lesser distance in iso and shutter

Hi Michael,

Thanks for the clarification of this seemingly undocumented (I didn't notice it in the manual) feature. It's brilliant, and implemented as it should be (EXIF-driven, with a cost function to automatically pick the most appropriate inputs). Kudos to the developers.

RawTherapee was already one of the top raw converters in my book; this feature (averaging multiple flats/darks) can make a real difference for those who want to get the most out of their source files.

Cheers,
Bart
MichaelEzra
« Reply #19 on: April 07, 2013, 08:06:16 AM »

Hi Bart, I just added this info to the "Auto matching logic" sections of the Dark Frame and Flat Field sections of the online manual. Thanks for pointing out that it was missing (it's nice to see that the manual is being read! BTW, the RT manual went through comprehensive updates recently).

Please note that I made a correction to "Key for dark frames" in the post above, it does not include aperture.