Author Topic: Computing Unity Gain ISO from a single exposure  (Read 61708 times)
Jim Kasson
« Reply #60 on: March 26, 2013, 10:10:02 AM »

For fun, I applied the theory there to the full SNR curves that can be found at DxO.

Jack, one thing you might do with your curves to make the departures from ideality (Is that a word? My spell-checker doesn't think so.) more obvious is subtract out the 3 dB/octave (or half a stop per stop, if you use log base two axes in both directions like I do) contribution of the photon noise. I wish DxO did this; it would make it easier to see what's going on in the camera, rather than reproducing the physics of photon noise over and over.
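In Python, that subtraction might look like the sketch below (the 10,000-electron signal level and 40 dB figure are made-up illustrations, not from any real camera):

```python
import math

def photon_limited_snr_db(signal_e):
    """Ideal SNR from photon statistics alone: sqrt(signal),
    i.e. the 3 dB/octave (half a stop per stop) slope."""
    return 20 * math.log10(math.sqrt(signal_e))

def excess_snr_db(signal_e, measured_snr_db):
    """Measured SNR with the photon-noise contribution subtracted,
    leaving only the camera's departure from ideality."""
    return measured_snr_db - photon_limited_snr_db(signal_e)

# An ideal sensor sits exactly on the photon-noise line:
print(excess_snr_db(10_000, 40.0))  # 0.0
```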

Here's an example of a similar, but not identical, set of curves, this one showing noise as a function of ISO with the ADC count held constant.



This shows small departures from ideal performance that would be invisible without subtracting out the effects of the photon noise.

Jim

Jim Kasson
« Reply #61 on: March 26, 2013, 10:14:56 AM »

So one way to specify the signal in log 2 fashion familiar to photographers would be as a function of Stops from Clipping.  Another way I've seen it (as in RawDigger) is to assign 0 EV to 12.5% (as per the relative ISO Saturation Speed standard).  This way Saturation is +3 EV and everything else falls into place.

Thanks, Jack. Yeah, I've been sloppy. I like the "Stops from Clipping" description. I've got a workshop coming up at the end of April. I think I'll go with that nomenclature, and see how well people get it.
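For reference, the two conventions can be sketched like this (assuming, purely for illustration, a 14-bit raw scale that clips at 16383):

```python
import math

def stops_from_clipping(raw_value, clip=16383):
    """Signal level as (negative) stops below raw clipping."""
    return math.log2(raw_value / clip)

def rawdigger_ev(raw_value, clip=16383):
    """RawDigger-style scale: 0 EV assigned to 12.5% of clipping,
    so saturation lands at +3 EV."""
    return math.log2(raw_value / (0.125 * clip))

print(stops_from_clipping(16383))   # 0.0  (at clipping)
print(rawdigger_ev(16383))          # 3.0  (saturation = +3 EV)
print(rawdigger_ev(0.125 * 16383))  # 0.0  (the 12.5% reference level)
```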

Jim

Jim Kasson
« Reply #62 on: March 26, 2013, 10:21:22 AM »

Something seems wrong here: the M9 sensor should have roughly the same 60,000 electron full well capacity as other Kodak full frame type sensors with 6.8 micron pixels, like the KAF-31600: http://www.truesenseimaging.com/all/download/file?fid=11.62
so that 30,000 seems too low...

You may be right. The M9 behavior is quite different from the model, even though the other cameras are falling into line better and better as I make the simulation more sophisticated. Maybe there's something about the camera that needs special modeling, or maybe my data is in error (I'll go back and spot-check it).

Check my algebra on my simple-minded full well approximation:

If the number of bits in the ADC is Badc, the base ISO is ISObase, and the Unity Gain ISO is ISOug, then:

Full Well Capacity = (2^Badc) * (ISOug / ISObase)
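A quick numerical sanity check of the approximation (the 14-bit depth, base ISO 100, and unity-gain ISO 366 below are assumed for illustration, not taken from a specific camera):

```python
def full_well_estimate(adc_bits, iso_base, iso_unity):
    """Simple-minded full well approximation:
    FWC = 2^Badc * (ISOug / ISObase)."""
    return (2 ** adc_bits) * (iso_unity / iso_base)

# A hypothetical 14-bit camera with base ISO 100 and unity gain near ISO 366
# lands close to the ~60,000 e- figure quoted for the Kodak 6.8 micron sensors:
print(full_well_estimate(14, 100, 366))  # 59965.44
```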


Thanks,

Jim

Jack Hogan
« Reply #63 on: March 26, 2013, 12:09:16 PM »

Jack, one thing you might do with your curves to make the departures from ideality (Is that a word? My spell-checker doesn't think so.) more obvious is subtract out the 3 dB/octave (or half a stop per stop, if you use log base two axes in both directions like I do) contribution of the photon noise. I wish DxO did this; it would make it easier to see what's going on in the camera, rather than reproducing the physics of photon noise over and over.

Good Idea.

Here's an example of a similar, but not identical, set of curves, this one showing noise as a function of ISO with the ADC count held constant.



This shows small departures from ideal performance that would be invisible without subtracting out the effects of the photon noise.

This shows nicely the effect of read noise at a constant raw value (of 1000, I assume) as the ISO is increased.  Is this the objective of the graph?  If not, and you just want to show deviation from 'normal', you may want to remove the effect due to read noise as well...  However, if you are using DxO's curves as the source of data you may have difficulty showing deviation from 'normal' because I understand that they do some curve fittin' ;-)

PS Jim, did I understand correctly from a previous post of yours that you are the inventor of Subtractive Dithering?  That is one bloody brilliant idea!
Jim Kasson
« Reply #64 on: March 26, 2013, 12:23:56 PM »

This shows nicely the effect of read noise at a constant raw value (of 1000, I assume) as the ISO is increased.  Is this the objective of the graph? 

The purpose of the graph was to find the point where the read noise on the input side of the amplifier stops being constant as the ISO setting is increased. This graph tells me that above ISO 1600, not only does increasing the ISO setting not make things better, it actually starts to make things worse.

Jim

Jim Kasson
« Reply #65 on: March 26, 2013, 12:25:13 PM »

However, if you are using DxO's curves as the source of data you may have difficulty showing deviation from 'normal' because I understand that they do some curve fittin' 

I'm making my own test images in all cases.

Jim

Jim Kasson
« Reply #66 on: March 26, 2013, 12:43:28 PM »

Jim, did I understand correctly from a previous post of yours that you are the inventor of Subtractive Dithering?  That is one bloody brilliant idea!

Jack,

Yes, in a way, although we didn't call it that. The idea was to inject a signal centered at half the Nyquist frequency before digitization and filter it out (that's the subtractive part) after reconstruction. One of the parts I liked was that it didn't matter if part of the signal was aliased, since it was going to get filtered out anyway, so we got twice the effect for the same noise as we would have gotten if the dither signal had been entirely below the Nyquist frequency.

http://www.google.com/patents/US3999129

Modern subtractive dither is more sophisticated.

Jim


xpatUSA
« Reply #67 on: March 26, 2013, 02:30:29 PM »

Hello again Jack,

We may have a misunderstanding. I am only talking about the definition of Unity Gain, per the topic. Since the definition itself excludes photons, they and their distributions, Poisson, etc., are not relevant to the definition. The definition starts at the sensor output and knows nothing of the shot noise and thermal noise generated therein. Thus I am able to answer as below:

Hi Ted,

Ok, good point about energy levels: so you say one photon = one electron?  Does it make a difference as to what the energy of the photon in question is (i.e. 380nm, vs 760nm light) as far as the number of electrons generated?
To be pedantic, one photon = one or no electron. 380nm vs. 760nm makes no difference, other than 380nm has more energy and is more likely to whack an electron. However, there are devices that can produce more electrons than photons, such as photo-multipliers and those night-vision thingies (not IR) much beloved by the military.

Quote
I am all for keeping things simple, however noise is a pretty key element of this discussion as I hope to show below.  I am thinking about the noise that is always inherently present in light, sometimes referred to as shot noise because its distribution is similar to the arrival statistics of shot from a shotgun, which was characterized by a gentleman by the name of Poisson.

Yes, shot noise and sensor thermal noise exist, of course, but they are earlier in the signal chain than the sensor output and are not, therefore, part of the definition.

Quote
So now we can address the integer nature of electrons.  Let's assume that each photosite is a perfect electron counter with Unity Gain. 10 electrons generated by the photosite, the ADC stores a count of 10 in the Raw data with no errors.  Example 1: the sensor is exposed at a known luminous exposure and the output of the photosite in question is found to result in a Raw value of 2.  What is the signal?

By including signal processing downstream of the ADC you are going outside the bounds of the definition. By including illuminance at the sensor face, you are again going outside the bounds of the definition. Therefore, the question is not relevant to "unity gain" and - with respect - by not defining "the signal", you are also rendering the question moot.

Quote
We cannot tell by looking at just one photosite.  The signal could easily be 1,2,3,4,5, 6... For instance if it were 4, shot noise would be 2, and a value of 2 is only a standard deviation away. To know what the signal is, we need to take a uniform sample of neighbouring photosites, say a 4x4 matrix*.  We gather statistics from the Raw values from each photosite and compute a mean (the signal) and a standard deviation (the noise).  In this example it turns out that the signal was 1 electron with standard deviation/noise of 1.  Interestingly, the human visual system works more or less the same way.

Quite so, but I'm sure that the definition is meant to be about the camera ISO setting and not the captured scene. That is to say that the gain of the amplification from the sensor output to the ADC output, i.e. the ISO setting itself, knows nothing of the scene nor anything of subsequent processing. And you will remember that the sensels are sampled one at a time and know nothing of their neighbors' values. Equally, the ADC input only receives sensel values one at a time, and I believe that to be a very important point for you to consider. Remember again that the definition does not include or imply any signal processing after the ADC output. Therefore "the signal", as far as the definition is concerned, is the output of one sensel and one sensel only, not a number thereof.

Quote
Example 2: a new exposure resulting in a signal of 7 electrons for each photosite in the 4x4 matrix on our sensor.  Of course each does not get exactly 7 electrons because photons arrive randomly, and in fact we know thanks to M. Poisson that the mean of the values in our 4x4 matrix should indeed be 7 but with a standard deviation of 2.646 - so some photosites will generate a value of 7 but many will also generate ...2,3,4,5,6,8,9,10,11,12....   The signal is the mean of these values.

No, "the signal" can not be anything but the discrete value of one sensel output. No algebra, statistical or otherwise, is done until after the ADC output and therefore is not part of the definition.

Quote
Example 3: Different exposure. Say we look at our 4x4 matrix of Raw values and end up with a mean of 12.30 and a standard deviation of 3.50.

We can't look at 16 different sensel values and do any calculations on them - the definition only includes a gain term, taken one sensel at a time. Of course, the camera can do what it likes with as many sensel values as it likes and the consequent RAW file can show distributions of all kinds, BUT, being outside of the signal path from the sensor output to the ADC output, that has nothing to do with the definition.

Quote
Using the Effective Absolute QE for the D800e above (15.5%) and ignoring for the sake of simplicity Joofa's most excellent point above, could we say that this outcome resulted from exposing each photosite to a mean of 12.3/0.155 = 79.35 photons?  After all, this number of photons is a mean itself.

No we could not. We can calculate a mean and SD for discrete items such as the number of electrons in a number of photosites, and we can indeed assign fractional values to the said mean and SD. But, sorry, we cannot turn the equation around and come up with something like 79.35 photons. All that figure tells you is that it is perhaps more likely that there were 79 photons than 80 photons. You cannot have a fractional number of photons. Physically impossible.

I find it disturbing that fractional photons are still being mentioned. There can be no such thing. If this basic fact about the nature of light is not understood, then nothing else can be accepted or understood and, with all due respect, our discussion would be at an end.

So, time for a question of my own:

Is it the opinion of your goodself, or indeed of this forum, that fractional photons can exist?

best regards,

Ted
Jack Hogan
« Reply #68 on: March 26, 2013, 04:09:25 PM »

Yes, shot noise and sensor thermal noise exist, of course, but they are earlier in the signal chain than the sensor output and are not, therefore, part of the definition.

I hear you, but that's a pretty narrow definition, as is the definition of signal as the output of a single photosite, or charge collection efficiency as wavelength independent - that's not the way that things work in the real world, and that definition doesn't help to answer the commonplace that I am trying to get to the bottom of: is "one electron the smallest quantum that makes sense to digitize" as Mr. Clark says?  Or is there more of an articulated answer once Information Science is brought to bear? I have no answers, I am just curious.

To help us converse let me define the signal as the mean Raw value from a 4x4 sensel matrix on our sensor illuminated uniformly by an exposure in lx-s that would not break down into an integer number of photons.  The sort of mean signal that you would read off of a 4x4 sampling area in RawDigger and trace back to electrons and photons.
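That 4x4 sampling idea is easy to simulate (numpy's Poisson generator stands in for the photon arrival statistics; the mean of 7 electrons matches the earlier example, and the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# A 4x4 patch of ideal photosites, each with a true mean signal of 7 electrons.
patch = rng.poisson(lam=7, size=(4, 4))

print(patch)
print("sample mean:", patch.mean())       # estimates the signal (7)
print("sample std: ", patch.std(ddof=1))  # estimates the shot noise
print("sqrt(7) =", np.sqrt(7))            # ~2.646, the expected std
```

With only 16 samples the estimates are rough; a larger patch converges on the true mean and its sqrt(7) shot noise.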

I am open to suggestions but my gut says that it is more complex than Mr. Clark suggests, because light, electrons and the human visual system are stochastic systems, based on statistics.  Noise is an integral part of them at every level: in the light itself, in the sensor, in the ADC, and in our visual and processing system.  If it's not there it gets injected (Jim inserted noise in the ADC so that he would have dithering and then figured out how to take it back out later - brilliant stuff, check the text around the bottom figure of this page).

As to your question, nobody is challenging the fact that light comes in quanta.  Physics settled that a century ago and we all agree on it.  I may be completely off base, but what I am trying to understand is whether Unity is just a figment of our imagination because integers are warm and fuzzy and allow us to calculate certain things easily, or whether it is grounded in something deeper.

With appropriately sized noise dithering we are able to record integer values that allow us to represent electrons and fractions thereof in a stochastic process.  What does unity mean in this context?  If instead of 1:1 it is 0.5:1, what's the real difference when the signal is a noise-dithered 7.2 ADU/electrons/photons?  Isn't this the same as going from ISO 400 to 200 with an ISOless sensor while keeping exposure the same?  Half the Raw values but same information captured?

Perhaps Jim could give us a hand?
Jim Kasson
« Reply #69 on: March 26, 2013, 04:39:31 PM »

...what I am trying to understand is whether Unity is just a figment of our imagination because integers are warm and fuzzy and allow us to calculate certain things easily, or whether it is grounded in something deeper...

Jack, let me take a crack at this. I think that the Unity Gain ISO concept, in spite of its unfortunate name, gets at something useful. I think the key to the distinction that you're making is where does the averaging inherent in computing the mean value occur? I know these systems aren't perfect, but let's start with an ideal model: a photosite that, after getting hit by some photons, contains an integer number of electrons, since that's the way electrons come. Let us further imagine that the charge produced by that group of electrons in close proximity can be amplified by a noiseless amplifier. The output of the amplifier will then be quantized, with each electron in the photosite adding to the voltage. Say the output of the amplifier at some gain is such that each electron in the photosite contributes one microvolt to the output of the amplifier. At the output of the amp, we might see 10 microvolts, we might see 12 microvolts, but we won't see 13.5 microvolts. Let's say that the ADC is set up so that a 2 microvolt change in the input produces a 1 LSB change in its output. With the amplifier set up with the gain as above, 10 electrons and 11 electrons will produce the same ADC output. If we double the gain, 10 electrons and 11 electrons will produce outputs that are different by 1 LSB. But if we double the gain again, we'll still only see the same number of possible output states of the ADC, with the code for 11.5 electrons unused, since the ADC never sees the voltage represented by fractional electrons.
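This microvolt example can be played out in a few lines (the 1 µV-per-electron and 2 µV-per-LSB figures come from the description above; everything else is a toy model):

```python
def adc_code(electrons, gain_uv_per_e, lsb_uv=2.0):
    """Ideal noiseless chain: integer electron count -> amplifier -> quantizer.
    lsb_uv is the ADC step size in microvolts (2 uV in the example)."""
    return int(electrons * gain_uv_per_e // lsb_uv)

for gain in (1.0, 2.0, 4.0):
    codes = [adc_code(e, gain) for e in range(8, 13)]
    print(f"{gain} uV/e-: electrons 8..12 -> ADC codes {codes}")
# 1 uV/e-: 10 and 11 electrons collapse to the same code.
# 2 uV/e-: adjacent electron counts differ by exactly 1 LSB.
# 4 uV/e-: codes jump by 2; odd codes ("fractional electrons") go unused.
```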

Averaging takes place in two ways, across photosites and across exposures. But when we talk about a mean value of 13.654 electrons, we're talking about the average of a bunch of integers.

Adding noise messes things up. You could say that electrical noise smaller than one electron's worth of voltage referred to the input of the amplifier creates a dither signal that allows resolution of non-integer mean electron counts as long as the noise is at least comparable to 1 LSB, and you'd be right. So I take this Unity Gain thing with a grain of salt. I figure when the ISO knob on the camera gets a stop or two past what is necessary to get Unity Gain, then I should stop twisting it unless I have a good reason.
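The dither point is easy to demonstrate with a toy model (Gaussian read noise referred to the input, one LSB per electron; the 10.4-electron mean and all other numbers are illustrative):

```python
import random

random.seed(1)

def mean_of_quantized(true_mean_e, read_noise_e, n):
    """Average n quantized samples of a constant input plus Gaussian
    read noise.  With noise comparable to 1 LSB the average resolves
    fractional electron counts; without it, the quantizer hides them."""
    return sum(round(true_mean_e + random.gauss(0, read_noise_e))
               for _ in range(n)) / n

print(mean_of_quantized(10.4, 0.0, 10_000))  # 10.0 - no dither, stuck
print(mean_of_quantized(10.4, 1.0, 10_000))  # ~10.4 - dither recovers it
```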

Also, dithering is not without cost, if we can't remove it later, and it's not clear that we can in this case. The noise removal tools in Lightroom and fancy plugins seem magical sometimes, but they do trade off resolution for noise reduction.

When we get to cameras that oversample the highest frequencies the lens can produce and we're using many camera pixels for every pixel we send the printer, then maybe Unity Gain ISO will have outlived its usefulness.

Does that help?

Jim

PS. Maybe in a further post I'll get into why the shoulder of the DlogE film curve was A Good Thing, and why ETTR images don't have one, and how Lightroom kinda gives it back sometimes. I've got to do some more testing first.

PPS. I don't actually know how the cameras manage to achieve the level of precision that they do; at HP we used to joke that voltage was quantized at the microvolt level, since we had such a hard time making reliable measurements below that.






Jack Hogan
« Reply #70 on: March 26, 2013, 05:01:05 PM »

Adding noise messes things up. You could say that electrical noise smaller than one electron's worth of voltage referred to the input of the amplifier creates a dither signal that allows resolution of non-integer mean electron counts as long as the noise is at least comparable to 1 LSB, and you'd be right. So I take this Unity Gain thing with a grain of salt. I figure when the ISO knob on the camera gets a stop or two past what is necessary to get Unity Gain, then I should stop twisting it unless I have a good reason.

Yes, I believe that most modern DSCs with the latest crop of advanced sensors have input referred read noise of around 1 LSB, which works very well for this purpose.

Also, dithering is not without cost, if we can't remove it later, and it's not clear that we can in this case. The noise removal tools in Lightroom and fancy plugins seem magical sometimes, but they do trade off resolution for noise reduction.

When we get to cameras that oversample the highest frequencies the lens can produce and we're using many camera pixels for every pixel we send the printer, then maybe Unity Gain ISO will have outlived its usefulness.

Does that help?

Yes it does, thank you.  My Information Science base is weak, so it takes me a while to reconcile deterministic and statistical versions of the same laws.  The more I think about it the less I see Unity Gain as a parameter in the calculation of the amount of information captured in this particular channel.  Which of course does not mean that it doesn't have its uses.

Thanks again for your help,
Jack
xpatUSA
« Reply #71 on: March 26, 2013, 05:45:18 PM »

I hear you, but that's a pretty narrow definition, as is the definition of signal as the output of a single photosite, or charge collection efficiency as wavelength independent - that's not the way that things work in the real world, and that definition doesn't help to answer the commonplace that I am trying to get to the bottom of: is "one electron the smallest quantum that makes sense to digitize" as Mr. Clark says?  Or is there more of an articulated answer once Information Science is brought to bear? I have no answers, I am just curious.

To help us converse let me define the signal as the mean Raw value from a 4x4 sensel matrix on our sensor illuminated uniformly by an exposure in lx-s that would not break down into an integer number of photons.  The sort of mean signal that you would read off of a 4x4 sampling area in RawDigger and trace back to electrons and photons.

I am open to suggestions but my gut says that it is more complex than Mr. Clark suggests, because light, electrons and the human visual system are stochastic systems, based on statistics.  Noise is an integral part of them at every level: in the light itself, in the sensor, in the ADC, and in our visual and processing system.  If it's not there it gets injected (Jim inserted noise in the ADC so that he would have dithering and then figured out how to take it back out later - brilliant stuff, check the text around the bottom figure of this page).

As to your question, nobody is challenging the fact that light comes in quanta.  Physics settled that a century ago and we all agree on it.  I may be completely off base, but what I am trying to understand is whether Unity is just a figment of our imagination because integers are warm and fuzzy and allow us to calculate certain things easily, or whether it is grounded in something deeper.

With appropriately sized noise dithering we are able to record integer values that allow us to represent electrons and fractions thereof in a stochastic process.  What does unity mean in this context?  If instead of 1:1 it is 0.5:1, what's the real difference when the signal is a noise-dithered 7.2 ADU/electrons/photons?  Isn't this the same as going from ISO 400 to 200 with an ISOless sensor while keeping exposure the same?  Half the Raw values but same information captured?

Perhaps Jim could give us a hand?

I was going to respond point by point but, as we both know, that can lead to very long and boring posts.

I agree that the condition for Unity Gain being the gain (ISO) where 1 electron moves the ADC output by 1 ADU is indeed a "narrow definition" but that is the only definition that I can find in the literature, so that is the definition I am using. If anyone knows of a different definition I'm sure we would be interested to hear it.

Since this thread appears to be less about Unity Gain per se and more about noise, dithering and other such complications, I'm inclined to bow out - as I have nothing to offer on those subjects.

Before I go, it was said that:

Quote
. . . I am trying to get to the bottom of: is "one electron the smallest quantum that makes sense to digitize" as Mr. Clark says?

I did offer a response earlier to that question re: reducing the DR and resolution but, with so much verbiage in our posts, perhaps it was missed?

best regards,

Ted
BartvanderWolf
« Reply #72 on: March 26, 2013, 06:49:24 PM »

No we could not. We can calculate a mean and SD for discrete items such as the number of electrons in a number of photosites, and we can indeed assign fractional values to the said mean and SD. But, sorry, we cannot turn the equation around and come up with something like 79.35 photons. All that figure tells you is that it is perhaps more likely that there were 79 photons than 80 photons. You cannot have a fractional number of photons. Physically impossible.

I find it disturbing that fractional photons are still being mentioned. There can be no such thing. If this basic fact about the nature of light is not understood, then nothing else can be accepted or understood and, with all due respect, our discussion would be at an end.

So, time for a question of my own:

Is it the opinion of your goodself, or indeed of this forum, that fractional photons can exist?

Hi Ted,

There is no such thing, so I couldn't agree more with your analysis on unity gain. Unity gain also has little to do with things like quantum efficiency. The only thing that counts is the number of electrons that are freed and collected. It's not without reason that astronomers and those active in that field (like Roger Clark) seem to accept the notion of unity gain, because there are usually so few of them (photons, that is) in astronomy. Also, transmitting irrelevantly accurate data across space is costly.

Only those photons that get through the sensor cover-glass and Bayer CFA, past the surface structures/apertures, gates, and transistors (on CMOS devices), and penetrate to the correct depth, get counted. Those are the only ones to be considered in any image-forming statistics.

The concept of Unity Gain is linked to the ADC (offset and gain) and the quantization bit-depth, but I also agree that the input of the ADC and its output are related but different things. It's mainly after the ADC (unless there was a pre-amp) that all sorts of noise is added to the collected charge. The arrival rate of photons during the exposure time is a statistical process described by Poisson distribution statistics, but has no further influence once the electron(s) is (are) recorded.

Cheers,
Bart
BartvanderWolf
« Reply #73 on: March 26, 2013, 07:11:19 PM »

I hear you, but that's a pretty narrow definition, as is the definition of signal as the output of a single photosite, or charge collection efficiency as wavelength independent - that's not the way that things work in the real world and that definition doesn't help to answer the common place that I am trying to get to the bottom of: is "one electron the smallest quantum that makes sense to digitize" as Mr. Clark says?  Or is there more of an articulated answer once Information Science is brought to bear? I have no answers, I am just curious.

Hi Jack,

Each single (one) electron is the only thing that matters once collected (on a per sensel collection area). It helps to quantize it before we can do something useful with it. When the gain is such that 1 or 2 (or more) electrons will produce the same ADU, it will not be unity gain. When each single captured electron produces a different ADU, unity gain is in effect, and the bit depth of the ADC is optimally utilized. The fact that there may be more electrons from other processes (electronic noise, dark current, and what have you) doesn't change the importance of being able to quantize each and every EXPOSURE related electron with enough accuracy. The electronic noise and such can be mostly eliminated by taking multiple samples and averaging them, which leaves the exposure signal itself (and its Poisson noise distribution), which is what interests photographers most.

Cheers,
Bart
Jack Hogan
« Reply #74 on: March 27, 2013, 04:12:58 AM »

Hi Jack,

Each single (one) electron is the only thing that matters once collected (on a per sensel collection area). It helps to quantize it before we can do something useful with it. When the gain is such that 1 or 2 (or more) electrons will produce the same ADU, it will not be unity gain. When each single captured electron produces a different ADU, unity gain is in effect, and the bit depth of the ADC is optimally utilized. The fact that there may be more electrons from other processes (electronic noise, dark current, and what have you) doesn't change the importance of being able to quantize each and every EXPOSURE related electron with enough accuracy. The electronic noise and such can be mostly eliminated by taking multiple samples and averaging them, which leaves the exposure signal itself (and its Poisson noise distribution), which is what interests photographers most.

Cheers,
Bart

Bart, Ted and Jim, very helpful, thank you for indulging me with a less than intuitive subject - I am sure Mr. Clark is a very smart man and I may be completely off base here, but I've been thinking about this for a while, meandering a bit without being able to phrase the question properly.  In a nutshell, are we not making the same mistake as those who think that engineering DR is equal to bit depth, relating a bit to a doubling of signal instead of to a basic unit of information?  As I said, my information science is weak :-)

I understand the importance of being able to record the information of each and every EXPOSURE related electron with enough accuracy.  My question, which you are helping me to clarify even with myself, is whether that means recording electrons to ADUs one for one.  I agree, as Ted mentioned and Jim related in his example, that in the complete absence of noise the answer would be yes.  But what about in a typical real-world situation with shot noise present in the signal and around 1 ADU of input-referred read noise at the ADC?

The example that brought me to this type of thinking is this:  I am in the field with a D7000, I maxed out exposure according to my blur and dof constraints and set ISO to 400 in order to just retain detail in the brightest desired highlight, the top of which now sits just above DN 16000 in the Raw file of a test image I looked at with RawDigger, making the most of my available bits.  Then I remember that the D7000 is ISOless between ISO 400 and 100, so I cut back gain (errr, ISO) to 100 while keeping exposure unchanged to obtain a welcome two additional stops of wiggle room in the highlight Raw data - which can now be better compressed.  Now the brightest desired highlight sits just above DN 4000 in the Raw data and all tones underneath it have been compressed by a factor of four.  Am I degrading IQ perceptibly?  BTW, Unity Gain for the D7000 as calculated by Mr. Clark should be at about ISO 240, if I understand it correctly, because at ISO 100 about 2.38 electrons are needed for 1 ADU.

The intuitive answer is, yes you are degrading IQ - because now you have compressed your original target signal range, using only a quarter of the bits as before to record it, so you are losing resolution in the gradations.  But the real-world answer is no, we are not degrading IQ perceptibly, because dithering occurs as a result of noise unavoidably present in the system.  We are able to record virtually the same information as in the first case plus some more.  This second way simply makes more efficient use of the 14-bit channel width*.
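This thought experiment can be mimicked with a toy model (Gaussian stand-ins for shot and read noise, an assumed mean signal of 1000 electrons and ~1 ADU of read noise; every figure here is illustrative, not measured from a D7000):

```python
import random

random.seed(2)

def raw_patch(mean_e, gain_adu_per_e, read_noise_adu, n):
    """Simulate n sensels: shot noise on the electron count (Gaussian
    approximation to Poisson), amplification, read noise, quantization."""
    vals = []
    for _ in range(n):
        electrons = mean_e + random.gauss(0, mean_e ** 0.5)
        vals.append(round(electrons * gain_adu_per_e
                          + random.gauss(0, read_noise_adu)))
    return vals

n = 100_000
for gain in (1.0, 0.25):  # "ISO 400" vs "ISO 100", exposure unchanged
    vals = raw_patch(1000.0, gain, 1.0, n)
    mean_e = (sum(vals) / n) / gain  # refer the raw mean back to electrons
    print(f"gain {gain} ADU/e-: recovered mean ~ {mean_e:.1f} electrons")
# Both gains recover roughly 1000 electrons: with ~1 ADU of read-noise
# dither, quartering the gain costs almost nothing next to the ~31.6 e-
# of shot noise.
```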

So what gives?  Is unity gain necessary in order to maximize information captured, all other things being equal?  Apparently not in this case.  In the presence of appropriately sized noise, should the desired tonal range fit within the Raw value range, would a gain of 0.5 or 0.25 e-/ADU not allow us to capture the desired information?  Isn't the decision of when to stop raising gain related not just to the size of the smallest quantum but also to the relative size of the noise?  Can we be a bit more precise than when Jim suggested 'I figure when the ISO knob on the camera gets a stop or two past what is necessary to get Unity Gain, then I should stop twisting it unless I have a good reason.'?  And what does "one electron [is] the smallest quantum that makes sense to digitize" mean in this context?

Jack
*  You can tell what next question this portends, right ;-)?
« Last Edit: March 27, 2013, 10:30:21 AM by Jack Hogan » Logged
Jim Kasson
Sr. Member
****
Offline Offline

Posts: 825


WWW
« Reply #75 on: March 27, 2013, 10:29:52 AM »
ReplyReply

*  You can tell what next question this portends, right ;-)?

Jack, thanks for your pursuit of this issue. Answering your questions has not only helped me clarify my own thoughts on this subject, but has helped me prepare for the workshop that I'll be involved in next month. I will get to your larger question, but I need to perform a few experiments first, so let me deal with the implied question that you raised at the end, which -- I think -- is, "What's a good reason for twisting the ISO past the point where all you're doing is amplifying the noise?"

The first reason is to be able to see what's going on in the preview image when chimping. A lot of the time, that's optional. However, when you're using a mirrorless camera like the NEX-7 or the RX-1 (by the way, I am continually awed by the combination of size and IQ of the RX-1; I haven't had a camera that so affected the way I make pictures since the D3 came out; no art yet, though), sometimes you need to turn up the ISO to see enough to frame and focus the picture.

The second reason is to have enough range in your raw processor to boost the "Exposure" in post. Lightroom only offers 5 stops.

There may be more, but those are my top two.

Jim
« Last Edit: March 27, 2013, 10:35:30 AM by Jim Kasson » Logged

xpatUSA
Sr. Member
****
Offline Offline

Posts: 298



WWW
« Reply #76 on: March 27, 2013, 10:35:38 AM »
ReplyReply

Bart, Ted and Jim, very helpful, thank you for indulging me with a less than intuitive subject - I am sure Mr. Clark is a very smart man and I may be completely off base here but I've been thinking about this for a while, meandering a bit without being able to phrase the question properly.  In a nutshell, are we not making the same mistake as those who think that engineering DR is equal to bit depth, relating a bit to a doubling of signal instead of to a basic unit of information?  As I said, my information science is weak  Smiley

I can help with that. The word 'bit' was originally a contraction of 'Binary DigIT', meaning a signal that is either ON or OFF with no other states allowed. It is also used in what is called base-2 arithmetic instead of our more familiar base-10. In base-10 the last digit of a number can have 10 values, 0 to 9; the next over counts tens, the next hundreds, etc. In binary, the last digit, also called the least significant bit, can have two values, 0 or 1; the next over is worth 0 or (not 'to') 2, the next over 0 or 4, etc. In this context, 'bit' means a binary digit place-holder, and e.g. 12-bit means 12 placeholders: the least significant worth 0 or 1, up through the most significant worth 0 or 2048. Relating to your question: if I added a placeholder, there would be 13, and the most significant bit would be worth 0 or 4096. "Only 4096?!" you go, "that's barely more than the 12-bit maximum of 4095?" but remember, all the other bits (placeholders) can also be '1', so a 13-bit binary number can represent up to 8191, given by 2^(number of placeholders) - 1. By the same token, the maximum value for red on your screen is 2^8 - 1 = 255, for 256 possible values if you count zero.

Confusion occurs when 'bit' is also used, by itself, in a different context - that of counting how many ADUs are represented by, e.g., an 8-bit number (also known as a byte): for example, 00001101 represents 13 ADUs of information. Equally, going back to resolution, if we effectively remove one bit from a 12-bit ADC, it becomes an 11-bit ADC having only 2047 positive values, excluding 0, thus halving the resolution and the ADC DR.

A hyphen or, if no hyphen, 'bit' used in the singular should imply the number of placeholders: e.g., a 14-bit ADC has 14 binary digit output lines (placeholders). 'Bits' should relate to a number of ADUs: e.g., a color of 128 bits (ADUs) on your 8-bit (8-placeholder) monitor is half-saturated.

The DR is halved in my earlier example because the minimum positive value out of the ADC becomes 2 (binary 000000000010), and that was because I doubled the ISO relative to the Unity Gain setting. Do remember that one electron, irrespective of its source, causes a discrete increase in sensor output voltage. So, increasing the electron count by one (at, say, 8 µV/electron), you get 8, 16, 24, 32 µV, and so on, and the ADC output's least significant 4 placeholders go 0010 (2 ADU), 0100 (4 ADU), 0110 (6 ADU), 1000 (8 ADU). In other words: if 1 e- causes 1 ADU of output at the Unity Gain ISO then, by going up one stop (doubling the ISO), 1 e- causes 2 ADUs of output, in 2-ADU steps. That halves the best possible ADC DR from 4095/1 (12 stops) to 4095/2 (11 stops). It also halves the best possible scene DR, because the ADC input is now saturated by half the electrons, ergo half the photons, all other things being equal.
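That stepping behavior is easy to sketch; a minimal model of an ideal 12-bit ADC at unity gain (1 ADU/e-) versus one stop past it (2 ADUs/e-):

```python
# Ideal 12-bit ADC model: scale electrons to ADUs and clip at full scale.
# Gains and counts are illustrative, matching the example above.
FULL_SCALE = 4095  # maximum 12-bit code

def adc(electrons, adu_per_electron):
    """Convert an electron count to an ADC code, clipping at full scale."""
    return min(electrons * adu_per_electron, FULL_SCALE)

# One stop past unity gain, the output moves in 2-ADU steps...
print([adc(e, 2) for e in range(1, 5)])   # [2, 4, 6, 8]

# ...and the ADC saturates at half as many electrons:
print(FULL_SCALE // 1, "e- to clip at unity gain,",
      FULL_SCALE // 2, "e- one stop past it")
```

Half the electrons to saturation and a doubled minimum step: both halves of the DR loss fall out of the same scaling.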

ADC noise is not included in the above explanation, but it is often quoted as +/- 1 LSB. VGA (variable gain amplifier) noise/offset is not included either, but it can be significant, see here, as can any VGA output voltage offset due to thermal effects.

From the way Roger Clark writes, I deduce that he is no physicist. But my foregoing paragraphs may illustrate his point about the effect of exceeding the Unity Gain ISO. Although my example of going over by a full stop was extreme (to make the explanation easy), it does illustrate that exceeding the Unity Gain ISO by any amount will have some deleterious effect.

Hope that helps,
« Last Edit: March 27, 2013, 11:32:50 AM by xpatUSA » Logged

best regards,

Ted
Jim Kasson
Sr. Member
****
Offline Offline

Posts: 825


WWW
« Reply #77 on: March 27, 2013, 11:09:45 AM »
ReplyReply

The intuitive answer is, yes you are degrading IQ - because now you have compressed your original target signal range, using only a quarter of the bits as before to record it, so you are losing resolution in the graduations.  But the real world answer is no, we are not degrading IQ perceptibly because dithering occurs as a result of noise unavoidably present in the system.

Jack, there's an assumption in the way you frame the issue: that, no matter how far up I crank the ISO, there's enough noise in the system to fill in the histogram so that all the possible ADC outputs in the middle of the histogram are used. I decided to test that assumption. With a D800E set at ISO 6400, I took an image of what is in danger of becoming my favorite subject, my LCD, at 1/30 second, defocused, average green channel around five stops below clipping (to use my new (thanks to you) formulation). I took a look at the histogram. Here it is at two levels of magnification:






You can see that there are a lot of missing codes. In fact, there are typically three empty codes between each occupied one.

If the gears in your head are turning and you're thinking, "Yes, but with a unity gain ISO of around 300, at ISO 6400 the 'gain' is about 20, and there should be more missing codes than that," well, you're right, but consider the limitations of the experiment. With this test, I am probably exercising all the amplifiers and maybe all the ADCs, all of which have slightly different characteristics. If I could test one amp and one ADC, I'd probably see bigger gaps in the histogram.
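A toy model reproduces the missing-code pattern. The 4-ADU step and Poisson mean below are assumptions chosen to mimic the "three empty codes between each occupied one" observation, not D800E measurements:

```python
import numpy as np

rng = np.random.default_rng(1)

# Far past unity gain, each electron maps to several ADUs, so only
# every k-th output code can occur; here k = 4 (assumed).
adu_per_electron = 4
electrons = rng.poisson(500, 100_000)   # assumed mean electron count
raw = electrons * adu_per_electron

lo, hi = raw.min(), raw.max()
occupied = np.unique(raw)
missing = (hi - lo + 1) - occupied.size
print(f"codes in range: {hi - lo + 1}, occupied: {occupied.size}, "
      f"missing: {missing}")
```

Every occupied code is a multiple of the per-electron step, so roughly three out of every four codes in the populated range go unused, just as in the histogram.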

There is a way that I could test one ADC and one amplifier: I could look at the same pixel in successive exposures. However, I don't have the patience to gather enough samples for that experiment. As my math teachers used to say, it is left as an exercise for the interested student.


We are able to record virtually the same information as in the first case plus some more.

I would phrase it, "We are able to record virtually the same information, plus some more noise, and not as much of either as it looks like we are recording from the broad-brush histogram (because of the missing codes we can't see when we're looking at the whole histogram)."

Jim

« Last Edit: March 27, 2013, 12:01:17 PM by Jim Kasson » Logged

xpatUSA
Sr. Member
****
Offline Offline

Posts: 298



WWW
« Reply #78 on: March 27, 2013, 12:02:14 PM »
ReplyReply


[Addition: if I knew the internal organization of the amps and ADCs in the D800E, I could craft a selection that only encompassed one of each. Anyone know enough to tell me how to do that?]


Unfortunately, Jim, it's likely to be a chip called an analog front end (AFE), beloved by current camera mfgs: a) cheaper, b) smaller, c) cheaper, d) cheaper. Thus analysis will be difficult, because it's a 'black box' thingy with all those amps, ADCs, and more inside. Here's one (PGA = programmable gain amplifier) good for a Foveon sensor (3-channel):

http://www.analog.com/static/imported-files/data_sheets/AD9814.pdf

[edit] The link shows a 14-bit ADC driving an 8-bit chip output, which was puzzling, but I think it delivers a 16-bit output in two shots: the 8 LSBs and then the 8 MSBs, with the 2 most significant of those set to 0.[/edit]
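If that guess about two-byte delivery is right, reassembly on the receiving end would look something like this. The byte order and masking are assumptions for illustration, not taken from the datasheet:

```python
# Hypothetical reassembly of a 14-bit ADC sample delivered over an
# 8-bit bus in two reads (low byte first, then high byte). A 14-bit
# value in a 16-bit frame leaves the top 2 bits of the high byte zero.
def assemble_14bit(lsb: int, msb: int) -> int:
    return ((msb & 0x3F) << 8) | (lsb & 0xFF)

print(assemble_14bit(0xFF, 0x3F))  # full scale: 16383
```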

Love the LCD idea . . must play.

Not sure about "We are able to record virtually the same information, plus some more noise, and not as much of either [as] it looks like we are recording from looking at the broad-brush histogram (because of the missing codes we can't see when we're looking at the whole histogram)" but it does read well  Wink

« Last Edit: March 27, 2013, 12:27:32 PM by xpatUSA » Logged

best regards,

Ted
Jack Hogan
Full Member
***
Offline Offline

Posts: 236


« Reply #79 on: March 27, 2013, 12:22:49 PM »
ReplyReply

I can help with that. [...] Hope that helps,

It does, thank you Ted; I appreciate the excursus.
Logged