Author Topic: Computing Unity Gain ISO from a single exposure  (Read 69010 times)
Jack Hogan
« Reply #160 on: March 31, 2013, 10:54:04 AM »

Are we really sure that, for a given QE, there is some threshold value of photon count below which free electrons can not be produced?

I would have thought the QE factor would be applicable to any number of photons, such that there is a probability that even one photon could produce an output. By this I mean that, over many successive tries (say 100 tries with a QE of 16%), one electron would be produced in 16% of those tries (at some confidence level, depending on the number of tries). Putting that another way: if 1 photon arrives at the D800e sensor, the probability of it producing an electron is 0.16 (16%).

I don't know the exact process by which a color filter filters photons, so you may well be right.  Does anybody know?

Quote
Would be interested to know the difference between "Effective Absolute Quantum Efficiency" and "Absolute Quantum Efficiency"?

Absolute Quantum Efficiency is the term used in the industry by vendors like Kodak to describe the combined effect of transmittance, fill factor and charge conversion efficiency at various signal wavelengths for a particular photographic sensor.  So say your chosen exposure results in 100k photons of daylight, reflected from a neutral object, hitting a green sensel of your camera's sensor.  How many electrons can you expect in the well?

Recall that the green curve in the Absolute QE chart of the previous post has a peak of 40%.  But not all photons are of that wavelength.  Since we know that daylight has a spectral power distribution more or less like this



we simply integrate the product of the curve above with the green sensel curve in the Absolute QE graph of the earlier post.  The answer is around 13%, so one would expect about 13k electrons to be fed to the ADC for that exposure.  What about the neighbouring blue and red sensels?  Slightly different numbers.  The (weighted?) average of the three is what I call Effective Absolute QE, a single number that gives a relative indication of the efficiency of a sensor in converting photons to electrons.
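For anyone who wants to play with that integration, here is a rough Python sketch. The SPD and QE curves below are made-up stand-ins (Gaussian shapes), not the actual data from the charts above, so the number that comes out is only illustrative:

```python
import numpy as np

# Stand-in curves sampled every 10 nm from 400-700 nm; the shapes are
# illustrative only, not the measured SPD/QE data from the charts above.
wl = np.arange(400, 701, 10)                         # wavelength, nm
spd = np.exp(-((wl - 560) / 150.0) ** 2)             # daylight-ish SPD
qe_green = 0.40 * np.exp(-((wl - 530) / 60.0) ** 2)  # green sensel, 40% peak

# Effective Absolute QE: SPD-weighted average of the QE curve.
# (Uniform 10 nm grid, so the wavelength step cancels in the ratio.)
eff_qe = np.sum(spd * qe_green) / np.sum(spd)

photons = 100_000
electrons = photons * eff_qe
print(f"Effective Absolute QE ~ {eff_qe:.1%} -> ~{electrons:,.0f} e-")
```

Swap in tabulated SPD and QE data and the same two lines give the real Effective Absolute QE.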

Jack
xpatUSA
« Reply #161 on: March 31, 2013, 11:20:33 AM »

Hi Ted,

Jim did a good job of outlining the way I've seen it in the literature, even for camera sensors.  The confusion arises when we mix radiometric quantities (flux, power, etc.) and photometric ones (illuminance, exposure, etc.).  The units are different, but they describe exactly the same physical processes.  For instance, I find it useful to think of exposure as a certain number of photons incident on an area during the exposure time.  But to calculate that number one has to do a few backward somersaults with both sets of units (maybe worth a separate thread).
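For the curious, one version of those backward somersaults, sketched in Python. It assumes monochromatic light at the photopic peak (555 nm), where 683 lux corresponds to 1 W/m²; a real daylight spectrum needs the full photometric-to-radiometric integral. The sensel area and exposure values in the example are hypothetical:

```python
import math

h = 6.626e-34   # Planck's constant, J*s
c = 3.0e8       # speed of light, m/s

def photons_from_lux(lux, exposure_s, area_m2, wavelength_m=555e-9):
    """Rough photon count for monochromatic light at the photopic peak.

    At 555 nm, 683 lux corresponds to 1 W/m^2, so irradiance in W/m^2
    is lux / 683. Energy per photon is h*c/lambda.
    """
    irradiance = lux / 683.0                    # W/m^2
    energy = irradiance * exposure_s * area_m2  # joules onto the sensel
    e_photon = h * c / wavelength_m             # joules per photon
    return energy / e_photon

# Example: 1000 lux for 1/100 s on a ~23 um^2 sensel (hypothetical numbers)
n = photons_from_lux(1000, 0.01, 23e-12)
print(f"~{n:.0f} photons")
```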

Illuminance (in lux) provides a certain number of photons per second which get converted into e-/second by the photodiode.  So while a photographic sensor is exposed to a certain illuminance, a current of e-/s is indeed generated within the integrating photodiode, which holds the resulting total charge so that it can be read by the downstream circuitry.  The responsivity diagrams of a solar vs a photographic photodiode look very similar.  What changes are the slopes, which are related to the material used and charge collection efficiency.



Jack

PS Happy Easter

Hello Jack,

Thank you for your lucid explanation, the link to Wikipedia's "all about exposure", and the illuminating graph.

Happy Easter 2U2.

best regards,

Ted
Jim Kasson
« Reply #162 on: March 31, 2013, 12:51:31 PM »

I've been trying to get at the visual effect of turning down the ISO below the unity gain ISO. I created a simulated Nikon D4 with some unusual characteristics: zero read noise and perfect pixel response uniformity. While I was at it, I gave it a Foveon-like characteristic of having all four planes (two green ones) stacked on top of each other, so there's no demosaicing. That's not important now, but it will be if and when I start feeding this simulation some images. Actually, it is somewhat important; it means that there are no values between integer electron counts generated by interpolation.

I excited a 350x350 pixel portion of the sensor with enough D65 photons to average 100 electrons per photosite in the green channels, using the real D4 ratios of red to green and blue to green raw values when faced with a D65-lit gray card.

I made simulated exposures at ISO 640, 320, 160, 80, and 40. Unlike the real D4, the simulated version has enough gain range to make the ISO 80 and 40 settings meaningful.

While still in linear camera space, I multiplied each pixel in each image by 1 for ISO 640, 2 for ISO 320, 4 for ISO 160, and so on, giving me very close to equal average values. Then I multiplied the values in each image by 100, to make the numbers big enough so that I wouldn't need heroic measures in Photoshop. The actual mean and standard deviation of the images when normalized to the range [0,1] are given in the following table:



Bart, this is what I mean about the standard deviation of the photon noise being robust in the face of reduced resolution. The ISO 640 image can resolve every electron, while the ISO 40 image maps counts from 0 to 15 to the same ADC value.
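A quick way to see this numerically is to quantize a Poisson patch at coarser and coarser steps, with the divisors standing in for the below-unity-gain ISO settings (a sketch, not Jim's actual Matlab simulation):

```python
import numpy as np

rng = np.random.default_rng(42)

mean_electrons = 100
patch = rng.poisson(mean_electrons, size=(350, 350))  # shot noise only

# At unity gain (ISO 640 here) each electron is one ADU; each halving of
# ISO doubles the number of electron counts sharing one ADU.
def simulate(electrons, divisor):
    adu = electrons // divisor  # coarse quantization at low ISO
    return adu * divisor        # rescale so the means line up

for iso, divisor in [(640, 1), (320, 2), (160, 4), (80, 8), (40, 16)]:
    img = simulate(patch, divisor)
    print(f"ISO {iso:4d}: mean {img.mean():7.2f}  std {img.std():5.2f}")
```

Even with 16 electron counts collapsed into one ADU at the ISO 40 setting, the standard deviation grows only modestly over the shot-noise value of 10.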

I brought the images into Photoshop as Adobe RGB with an identity conversion matrix (ones on the diagonal, zeros elsewhere), gave them a bit over 1 EV exposure bump, and (over)corrected the greenish color cast. Here's the histogram of the ISO 640 image:



Here's the histogram of the ISO 40 image:



Note the extreme depopulation.

Here's the ISO 640 image, magnified to 700x700 with nearest neighbor to resist JPEG degradation:



Here's the ISO 320 image:



Here's the ISO 160 image:



Here's the ISO 80 image:



Here's the ISO 40 image:




The low ISO images are somewhat clumpier than the Unity Gain ISO one, but the effect is subtle, even though the histograms look dramatically different. This test bends over backwards to give the Unity Gain ISO image a chance to shine, and to me, it's barely glowing. I am coming around to the school of thought that what's really important for ordinary photographers is the SNR, although I can understand the appeal of UGiso if you're going to be doing computations or mathematical processing on your images.

Jim

« Last Edit: March 31, 2013, 02:23:52 PM by Jim Kasson »

Jim Kasson
« Reply #163 on: March 31, 2013, 02:48:22 PM »

Here's a link to a .psd file with a layer for each of the 350x350 images in the preceding post, an exposure layer, and a color balance layer with better settings than those used for the preceding post.

http://www.kasson.com/ll/UGNoiseTest100electrons.psd

I recommend that you load the image into Photoshop, and look at the differences between the layers by turning them on and off while leaving the top two adjustment layers in place. Feel free to change the color balance settings.

Enjoy...

Jim
« Last Edit: March 31, 2013, 03:01:02 PM by Jim Kasson »

Jim Kasson
« Reply #164 on: March 31, 2013, 03:33:47 PM »

Bart, Ted, Jack (and anybody else who cares),

So far, I've been modeling photo response non-uniformity (PRNU) as if each cell had a slightly different quantum efficiency, i.e., the calculation has been done before the signal in each photosite was quantized into discrete electrons. FWIW, I've been using a probabilistic model of converting photons to electrons, not a film-like threshold model; I don't really know if this is right or not.

But this is a post about PRNU. It occurs to me that there's another explanation for it, and if that explanation explains most of it, then PRNU should be considered after the electrons in the well are converted from probabilities to actual electron counts. I don't know a lot about how these imaging chips work, but here's the model I have in my brain. The photodiode converts photons to electrons, the electrons charge a capacitor, the voltage on that cap is buffered by an amp in the cell, and eventually the voltage gets amplified and converted to digital.

If I've got that right, a potential source of PRNU is variation in the sensel capacitance. The voltage, V, on a capacitor as a function of the charge, Q, and capacitance, C, is V = Q/C. You can see that, with the charge held constant, changes in C will cause changes in V, and there's no reason I know of to suspect that the changes in C occur in quanta. The same is true of amplifier gain variation, in both the ISO-programmable amplifiers and the source followers, of which there is presumably one per sensel.
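If it helps, here's a quick Monte Carlo sketch of that capacitance mechanism. The 2 fF sense node and 0.5% manufacturing spread are invented numbers, used only to show how a smooth spread in C appears as a multiplicative, non-quantized noise on V = Q/C:

```python
import numpy as np

rng = np.random.default_rng(0)

n_pixels = 100_000
electrons = rng.poisson(10_000, n_pixels)  # ~1% relative shot noise
q = 1.602e-19                              # electron charge, coulombs

# Hypothetical sense-node capacitance ~2 fF with 0.5% manufacturing spread.
cap = 2e-15 * (1 + rng.normal(0, 0.005, n_pixels))

v = electrons * q / cap  # V = Q/C per sensel

# Fractional spread of V combines shot noise and the capacitance spread
# in quadrature: sqrt(0.01^2 + 0.005^2), a bit over 1.1%.
print(f"relative std of V: {v.std() / v.mean():.4f}")
```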

Anybody have any ideas on what the underlying sources of PRNU are?

Jim
« Last Edit: March 31, 2013, 03:57:29 PM by Jim Kasson »

Jim Kasson
« Reply #165 on: March 31, 2013, 10:14:00 PM »

One more set of simulation images. The camera this time is a D4 with read noise and PRNU (before electron quantizing). Same test pattern as the real D4 image before. One electron in each of the green planes for the bright parts, less for the red and blue planes. D65 illuminant. Raw image multiplied by 1000 while still in raw space with linear gamma. Small exposure correction -- about +1EV -- in Photoshop after conversion to Adobe RGB.  No black point corrections.

ISO 640 image:



ISO 320 image x2 in linear raw:



ISO 160 image x4 in linear raw:



Pretty close to the real thing. I'd give the nod to the Unity Gain ISO image at ISO 640, but it's not head and shoulders above the others.

If you want to see the parameters of the D4 simulation, here's the code for the camera constructor. It's in Matlab, but the code should look pretty familiar to any C++ or Java programmer.



I suppose that I could do another set of simulations with probabilistic electrons coming from a known photon flux, but I get the feeling that I'm beginning to overdo this.

Jim
« Last Edit: March 31, 2013, 10:40:21 PM by Jim Kasson »

xpatUSA
« Reply #166 on: March 31, 2013, 11:59:51 PM »

Quote
...If I've got that right, a potential source of PRNU is variation in the sensel capacitance. ... Anybody have any ideas on what the underlying sources of PRNU are?

My 2e's worth:

Non-uniformity between pixels is no different from any other mass-produced item: no production process is perfect. There are many things that cause variance in pixel characteristics, especially if filters and microlenses are included. All variances are significant, including those of the semiconductor doping and lithographic processes, thickness of the materials, the depletion layer, fill, metalization accuracy, CFA consistency, microlens MTF; the list is almost endless . . .

Therefore, it would perhaps be better not to assign any one factor such as photodiode capacitance (it's not a separate capacitor) as a probable cause - so maybe a constant variance in overall QE between pixels would do it for your purposes? I say "constant" to avoid confusion with the variation of QE with wavelength, photon quantity or rate, or the direction of the wind. I am using "variance" in the statistical sense and "constant" in the sense that any given pixel has a constant % difference from any other given pixel, on average, over thousands of exposures.
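That fixed, per-pixel percentage spread is easy to model. A hedged Python sketch (the 1% spread and signal level are invented) showing that frame averaging removes the shot noise but leaves the fixed pattern, which is why it's called "fixed pattern":

```python
import numpy as np

rng = np.random.default_rng(1)

n_pixels, n_frames = 10_000, 500
gain = 1 + rng.normal(0, 0.01, n_pixels)  # fixed ~1% per-pixel QE spread

mean_e = 1000
# Each frame: independent shot noise around each pixel's own expectation.
frames = rng.poisson(mean_e * gain, size=(n_frames, n_pixels))

avg = frames.mean(axis=0)  # averaging kills shot noise, not the pattern
print(f"residual pattern std: {avg.std() / avg.mean():.4f}")
```

After 500 frames the shot-noise residual is negligible and the ~1% pattern is all that remains.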

In a previous life, I was concerned about resistor values, amongst other things. A 100K resistor born as a 102.3K resistor was OK, so long as it stayed that way during the life of my equipment. Another 100K of a different value, say 99.8K, in the same op-amp circuit was equally acceptable, so long as neither changed much with aging. All of which is why God invented trim-pots ;-)
« Last Edit: April 01, 2013, 12:41:46 AM by xpatUSA »

best regards,

Ted
Jack Hogan
« Reply #167 on: April 01, 2013, 04:52:09 AM »

Quote
...I am coming around to the school of thought that what's really important for ordinary photographers is the SNR, although I can understand the appeal of UGiso if you're going to be doing computations or mathematical processing on your images.

Jim, I don't know what to say.  In this and your next simulation you have modelled the two competing effects (SNR and UG) beautifully.  If I understand correctly, in the first simulation we are looking at a uniform patch about 10.2 stops below clipping, with dithering provided by photon/shot noise only.  To simulate viewing a perfect 8x12" print of them at arm's length, I can suggest zooming the patches on screen so that they are about 2.5" (65 mm ≈ 2 × 500 × tan 0.01° × 350) on each side.
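Spelling out that arithmetic (assuming the 0.01 in the formula is in degrees of visual angle per pixel, at a 500 mm viewing distance):

```python
import math

distance_mm = 500     # arm's length
deg_per_pixel = 0.01  # visual angle per pixel, per the formula above
pixels = 350

# Reproducing the back-of-envelope formula: 2 x distance x tan(angle) x pixels
size_mm = 2 * distance_mm * math.tan(math.radians(deg_per_pixel)) * pixels
print(f"patch size ~ {size_mm:.0f} mm ({size_mm / 25.4:.1f} in)")
```

which lands within a few millimetres of the quoted 65 mm.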
« Last Edit: April 01, 2013, 06:35:24 AM by Jack Hogan »
Jack Hogan
« Reply #168 on: April 01, 2013, 05:31:21 AM »

One more set of simulation images. The camera this time is a D4 with read noise and PRNU (before electron quantizing). ...

Pretty close to the real thing. I'd give the nod to the Unity Gain ISO image at ISO 640, but it's not head and shoulders above the others.

I agree that the ISO 640 image is preferable, as it would be in real life too, because read noise for the D4 decreases rapidly from ISO 100 (18.6e-) to ISO 800 (3.4e-).  I believe that read noise of 18.6e- (4+ ADUs) at ISO 100 would visually swamp any quantization effects.  Did you model this change or was read noise kept constant?

Quote
If you want to see the parameters of the D4 simulation, here's the code for the camera constructor. It's in Matlab, but the code should look pretty familiar to any C++ or Java programmer.

The question that comes to mind is whether you used a QE of 53% for your broad-spectrum illuminant.  I understand that the Sensorgen value assumes a green laser as the illuminant :-)

Quote
I suppose that I could do another set of simulations with probabilistic electrons coming from a known photon flux, but I get the feeling that I'm beginning to overdo this.

Jim

Another improvement to the model, if you haven't done so already, is to separate the read noise into two components: a sensor component and an analog 'gain' component.  They are apparently not correlated.  Here is a thread by Dosdan on the subject.  The reason for doing so is that, from a noise perspective, the performance of a particular sensor can intuitively be understood as, all other things being equal, the DR of the incoming light having to pass through the DR of the sensor, and then through the DR of the analog amplification chain as a function of the gain (ISO) applied.
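A minimal sketch of that two-component model in Python. The parameter values are invented, not measured for any camera; it just shows how a fixed pre-amp (sensor) noise and a fixed post-amp noise combine in quadrature, and why the input-referred total falls as ISO rises:

```python
import math

def read_noise_adu(iso, base_iso=100, pre_e=3.0, post_adu=2.0, gain_at_base=0.2):
    """Two-stage read noise sketch (all parameter values hypothetical).

    pre_e: input-referred sensor noise in electrons (amplified with the signal)
    post_adu: downstream noise in ADU, independent of the analog gain
    gain_at_base: ADU per electron at base ISO
    """
    gain = gain_at_base * iso / base_iso       # ADU per electron at this ISO
    return math.hypot(pre_e * gain, post_adu)  # quadrature sum, in ADU

for iso in (100, 200, 400, 800, 1600):
    gain = 0.2 * iso / 100
    total = read_noise_adu(iso)
    print(f"ISO {iso:4d}: {total:5.2f} ADU = {total / gain:5.1f} e- input-referred")
```

With these made-up numbers the input-referred read noise falls steadily with ISO, qualitatively like the D4's 18.6e- at ISO 100 dropping to 3.4e- at ISO 800.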

Fabulous work, Jim, as visually relevant as any that I've seen on this subject. 

Jack
Jack Hogan
« Reply #169 on: April 01, 2013, 05:32:36 AM »

My 2e's worth:

Good description, Ted.  I wouldn't know what to add, Jim.
bjanes
« Reply #170 on: April 01, 2013, 05:39:40 AM »


I brought the images into Photoshop as Adobe RGB with a identity conversion matrix (diagonal ones, else zeros), gave them a bit over 1 EV exposure bump, and (over) corrected the greenish color cast. Here's the histogram of the ISO 640 image:



Here's the histogram of the ISO 40 image:



Note the extreme depopulation.


Jim,

Nice work as usual, but the histogram of the bottom image needs to be refreshed, as indicated by the little triangle with the exclamation point. See the Adobe help for details.

Regards,

Bill

Jim Kasson
« Reply #171 on: April 01, 2013, 06:08:21 PM »

Quote
...read noise for the D4 decreases rapidly from ISO 100 (18.6e-) to ISO 800 (3.4e-). ... Did you model this change or was read noise kept constant?

I kept read noise constant, referred to the amplifier input. I understand that putting read noise on both sides of the amplifier makes theoretical sense, and fits some data well. I'm having trouble getting it to fit all my camera data. Here's a look at the D4 read noise as I've measured it, plotting both mean and mean plus standard deviation:



That looks like it would be pretty easy to fit the two-stage read noise model to. But now look at the RX-1:



There are two big problems here. The first is that, with the exception of the ISO 100 point, the read noise component on the output side of the amplifier is darned close to zero. The second is that the read noise goes down as the ISO is increased from 100 to 200. With the two-stage model, that shouldn't happen.

The M9 noise can't be modeled with the two-stage model (I've left off the standard deviation line):



The model looks like it could work for the D800E:



and the NEX-7:



I'm scratching my head right now. I'm also having trouble figuring out how to estimate the standard deviations of the portion of the read noise referred to the output of the amplifier in the cases where the base gain is only a stop or two away from unity gain.

Jim



Jim Kasson
« Reply #172 on: April 01, 2013, 06:11:20 PM »

The question that comes to mind is whether you used QE of 53% for your broad spectrum illuminant.  I understand that the Sensorgen value assumes a green laser as the illuminant Smiley

Jack, right now I'm just working with electrons, so the QE number isn't used.

Jim

Jim Kasson
« Reply #173 on: April 01, 2013, 06:13:55 PM »

Quote
...separate the read noise into two components: a sensor component and an analog 'gain' component.  They are apparently not correlated.

Thanks for the link, Jack. Emil Martinec says pretty much the same thing, but, as I posted above, I'm having trouble fitting the model to two cameras.

Jim Kasson
« Reply #174 on: April 01, 2013, 06:17:32 PM »

Quote
...the list is almost endless . . .

Yep, so it is. I'm just trying to figure out whether most of the PRNU occurs before quantizing to electrons or afterward. If the weight of the two components is similar, then I'll never get data to put into a two-stage model.

Jim

Jim Kasson
« Reply #175 on: April 01, 2013, 06:25:59 PM »

...the histogram of the bottom image needs to be refreshed as indicated by the little triangle with the exclamation point. See the Adobe help for details.

Bill,

You're right. Here are the two (refreshed) histograms for the images in the psd file pointed to a few posts above.

ISO 640:



ISO 40:



Jim

xpatUSA
« Reply #176 on: April 02, 2013, 01:02:06 AM »

Quote
...I'm just trying to figure out whether most of the PRNU occurs before quantizing to electrons or afterward.

As one who has only just figured out the basic reason why the shot noise variance (at the ADC output) for low ISOs is less than the mean, my input may not count for much . . .

Mostly, the literature states that PRNU applies to the sensor pixel output and, importantly, is often called "fixed pattern", unfortunately followed by the word "noise" (classic photographic obfuscation?). There I go, drifting off again  .  .  .

So, perhaps you could assign an rms value to define permanent pixel-to-pixel variations, in the same way that lens quality can be defined or indeed the roughness of machined surfaces?

A kindred spirit here? http://harvestimaging.com/blog/?p=916

This from a Paper for the Foveon F7 sensor may be of interest:

Quote
Well capacity is approximately 77,000 electrons per photodiode but the usual operating point (for restricted nonlinearity) corresponds to about 45,000 electrons. Photo response non-uniformity (PRNU) is less than 1%.

Several fixed-pattern and random noise reduction techniques have been incorporated into the F7 design to realize very good noise performance for the CMOS technology. The total fixed pattern noise from all sources is less than 1%. The primary contributor to dark noise is kTC noise from diode reset. This noise is approximately 70 electrons. It is possible to reduce this to about 40 electrons by implementing a reset-read-expose-read cycle for the frame and then subtract the first frame from the second.

OT - have you tried ImageJ yet?

« Last Edit: April 02, 2013, 01:08:24 AM by xpatUSA »

best regards,

Ted
BartvanderWolf
« Reply #177 on: April 02, 2013, 03:54:59 AM »

OT - have you tried ImageJ yet?

Hi Ted, and others,

On that note, I've made an improved ImageJ macro available for download (save with right mouse button click) which will split a Bayer CFA image into its individual R/G1/G2/B channels. The macro can be copied into the 'ImageJ\plugins\Macros' folder, and will show up in the 'Plugins/Macros' pull-down menu at the bottom menu section between the other plugins that you may have installed.

As input it takes the currently active (cropped) image (presumably a 16-bit linear gamma, non-demosaiced Bayer CFA image), and it lets you choose one of four common CFA layouts*. It then produces the separate channels (selectable), and can optionally also generate a file that is the result of subtracting the G2 channel from the G1 channel.

The subtracted G1-G2 result is interesting, because it can be used for an improved noise estimate, where the impact of various systematic errors (e.g. sensor dust and non-uniform lighting) is reduced in the total standard deviation. Sometimes all we have is a single file for analysis, in which case it helps to reduce unwanted pollution of the noise data. All that is left is random noise, calibration errors, and PRNU, assuming we're analyzing a uniformly lit surface. It is also very useful for detecting individual outlier pixels, so we can select a different crop that avoids them. And it is interesting to compare this subtracted G1-G2 result with a proper two-exposure subtraction, which is the preferred way to conduct random noise analysis.
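To illustrate why the G1-G2 trick helps, here is a small synthetic Python example (the illumination gradient and signal level are invented): subtracting the two green channels cancels the shared systematic component, and dividing the standard deviation of the difference by √2 recovers the per-channel random noise:

```python
import numpy as np

rng = np.random.default_rng(7)

# Two green sub-channels of a uniformly lit patch: same signal, independent
# shot noise, plus a shared slow illumination gradient (a systematic error
# that would inflate a naive single-channel std).
shape = (200, 200)
gradient = np.linspace(0.95, 1.05, shape[1])  # non-uniform lighting
signal = 5000 * gradient
g1 = rng.poisson(signal, shape).astype(float)
g2 = rng.poisson(signal, shape).astype(float)

naive_std = g1.std()                     # polluted by the gradient
diff_std = (g1 - g2).std() / np.sqrt(2)  # gradient cancels in G1-G2

print(f"single channel std: {naive_std:.1f}")
print(f"G1-G2 estimate:     {diff_std:.1f}  (shot noise ~ {np.sqrt(5000):.1f})")
```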

There is also an ImageJ DCRaw plugin available to acquire the Raw non-demosaiced source data (DCRaw Document mode) directly from the Raw camera file.  After installation the IJ-DCRaw plugin is available under the 'Plugins/Image IO' menu.

Cheers,
Bart

* Which CFA pattern layout to choose depends on a number of things. One is the camera model; another is whether the masked pixels are included or excluded in the Raw image data. Also important is that image crops are made at the 'correct' pixel offset position, and that image rotation is kept constant. I have yet to find an easy way of automatically determining the pattern, other than using a reference image with clear red, green, and blue features: the lightest channel corresponds to that color. For many cameras, the two green channel files should have very similar mean brightness values, resulting in an almost zero mean value for the G1-G2 subtracted result.
« Last Edit: April 02, 2013, 05:34:41 AM by BartvanderWolf »
Jim Kasson
« Reply #178 on: April 02, 2013, 09:45:55 AM »

I built a two-stage read noise model for the D4. Here's a plot of the mean and mean+standard deviation for the measured and the modeled values:



The values are:



That's more noise at the lower ISOs than I'm seeing on the real camera. Some possibilities are: a) the one-electron test shots were made at 1/8000 sec exposure, whereas the read noise test shots were made at 1/30; b) there's some ACR processing going on that's making a difference; c) there were some artifacts in the read noise test shots, like the read smear in the ISO 640 one-electron shot.

I've got some research to do.

Jim
« Last Edit: April 02, 2013, 10:16:31 AM by Jim Kasson »

xpatUSA
« Reply #179 on: April 02, 2013, 11:15:13 AM »

Quote
...I've made an improved ImageJ macro available for download ... which will split a Bayer CFA image into its individual R/G1/G2/B channels.
Thanks for the post, Bart,

Sorry to tell you that my only serious camera is a Foveon-based SD10 housebrick, but your macro would be quite useful for others, for sure. My other camera, a Micro Four Thirds, is used as a point-and-shoot and I take the JPEGs as they come.

That dcraw plug-in is certainly of interest; I'm currently doing it the hard way, which is, however, the most flexible way.




best regards,

Ted