Author Topic: What does ISO setting actually do? Michael?  (Read 21809 times)
BJL
Sr. Member
Posts: 5162
« Reply #20 on: November 17, 2006, 05:20:30 PM »

I do not understand the specifics of the "banding" issue, but overall, it does make sense that as far as random noise is concerned, once the signal is amplified to the point that A/D quantization error is of order of one photo-electron or less, more amplification will not help. And if anything, I would expect progress in A/D convertors to reduce the increment in input voltage level corresponding to one step in output level, meaning that pre-amplification to rather less than ISO 800 is likely to be enough. Even counting to within two photo-electrons is probably completely adequate in practice, because photon shot noise will be significantly more than that at any pixel that got enough light to be worth printing as anything other than pure black.

More to the point, DSLR sensor well capacities seem unlikely to go much beyond the current upper limits, so should stay under 2^17=131,072, so a 17-bit A/D convertor could do "exact" counting of every photo-electron up to a well capacity that large, so one pre-amplification level would then cover all cases. Who cares if it is ISO 100 or 3200 or in between? If 16 bit RAW is needed, maybe the decision as to whether to delete one bit from the top or bottom could be done after examining such 17-bit raw data.
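As a quick check on that bit-depth arithmetic, here is a minimal sketch (the function name is mine, and it follows the post's loose convention that a capacity of exactly 2^n is covered by n bits):

```python
import math

def adc_bits_for_full_well(full_well_electrons):
    """Smallest ADC bit depth whose 2**bits output codes span electron
    counts up to the given full-well capacity (one code per electron)."""
    return math.ceil(math.log2(full_well_electrons))

print(adc_bits_for_full_well(131072))  # 2**17 electrons -> 17 bits
print(adc_bits_for_full_well(60000))   # a typical current full well -> 16 bits
```

With one electron per step, a single fixed gain covers every ISO setting; the ISO choice becomes pure metadata.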

And A/D convertors of more than 17 bits exist, for audio recording at least.
jani
Sr. Member
Posts: 1604
« Reply #21 on: November 17, 2006, 05:55:23 PM »

Quote
And A/D convertors of more than 17 bits exist, for audio recording at least.
One of the great revolutions in A/D conversion is the 1-bit converter (a delta-sigma modulator whose quantizer is little more than a D flip-flop), rather than the clunky and complex 24-bit (or what-have-you) Nyquist-rate converters. Bonus: it's simpler!
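The principle is easy to show in a few lines. A minimal sketch of a first-order delta-sigma modulator in error-feedback form (all names and numbers are illustrative, not any particular chip's design):

```python
def delta_sigma_1bit(samples, threshold=0.5):
    """First-order delta-sigma modulator (error-feedback form): encode
    analog samples in [0, 1] as a 1-bit stream whose running average
    tracks the input signal."""
    error = 0.0
    bits = []
    for x in samples:
        v = x + error                    # carry forward accumulated error
        bit = 1 if v >= threshold else 0  # 1-bit quantizer
        error = v - bit                  # new quantization error
        bits.append(bit)
    return bits

# Heavily oversample a constant 0.3 input: the bit-stream mean converges to 0.3.
bits = delta_sigma_1bit([0.3] * 1000)
```

Resolution comes from oversampling and noise shaping rather than from a precise multi-bit quantizer, which is why the hardware can be so simple.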

Jan
bjanes
Sr. Member
Posts: 2822
« Reply #22 on: November 17, 2006, 06:31:42 PM »

Quote
I've never agreed with that conclusion of his.  He ignores the issue of 1-dimensional noise - line banding.  This is *the* demon of Canon high ISO shadows; it is much more visibly potent than random noise alone, in many images. 

John,

Thanks for the information. Very well reasoned as usual. Both you and Roger are considerably more advanced than I in these matters, so I can't really judge. Roger does determine noise by subtracting two frames taken together under identical conditions, so any repeatable pattern noise is not accounted for. However, his approach does eliminate problems from nonuniform targets and illumination when one merely takes the standard deviation of  pixel values for a uniform patch. How do you measure noise?

Quote
The idea that digitization ends at unity (1 ADU = 1 electron) is not logical to me at all.  I believe 3 or 4 ADUs per electron will have slightly less posterized data.  Don't forget, the ADC is not counting electrons, it is digitizing a buffered, amplified facsimile of the charges in the wells.  Oversampling the electrons allows more accurate counting, as the noise can't flip the result as far in either direction.


That makes sense, but there must be a point of diminishing returns when the ADU number is greater than the number of electrons captured. One is after all measuring voltage, which is related to the charge and capacitance of the pixel well together with the analog amplification. However, the voltage should increase in discrete steps with each electron.  Some degree of oversampling makes sense, just like carrying an extra decimal place in calculations so as to avoid rounding errors. If you are using a ruler to measure an object to the nearest millimeter, it doesn't help to calculate with six decimal places.

Bill
BJL
Sr. Member
Posts: 5162
« Reply #23 on: November 20, 2006, 06:12:19 PM »

Quote
The idea that digitization ends at unity (1 ADU = 1 electron) is not logical to me at all.  I believe 3 or 4 ADUs per electron will have slightly less posterized data.
But isn't it true that at any photosite receiving enough light to deserve being rendered as anything except pure black, you need enough photo-electrons that photon shot noise is well over one electron? (RMS photon shot noise is about the square root of the number of photo-electrons.) If so, then A/D quantization errors of one electron will be buried in that other noise, except at pixels that fall below the black-point.

(It is different in technical applications like astro-photography, where they care about counting every photon, even in very dark parts of the image where S/N ratios are horribly low.)
John Sheehy
Sr. Member
Posts: 838
« Reply #24 on: November 20, 2006, 11:30:52 PM »

Quote
But isn't it true that at any photosite receiving enough light to deserve being rendered as anything except pure black, you need enough photo-electrons that photon shot noise is well over one electron?

No.  The significance of unity is an illusion.  It is a simple, appealing concept, with no basis in reality.  It's just the crossing point of the input and output of the square root function.  Hypothetically, if there were no noise other than shot noise, and you had a 1 gigapixel sensor, you could get perfectly usable images when only 20% of the sites received one single photon each.  You could probably recognize a face in an image where only 1% of the pixels received photons.  You could use a 1 gigadot printer, or downsample the image.

The real-world problem is readout noise, which varies RAW values by much more than one electron.  This changes the 1:1 problem a bit, but even here, there is nothing significant about the 1:1 point.  You can truncate the last few bits in an ISO 1600 image, and not notice the difference at all in a normally rendered print; counting individual photons is fuzzy well above the unity level; the more amplification or bit depth you use, the more accurately the signal and noise are rendered, and there is a small, but real, improvement in IQ.  Nothing magical should happen at the 1:1 level.  It really has no significance.  Read noise does NOT occur in discrete units of sensor well electrons.
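The bit-truncation claim is easy to simulate. A sketch with invented numbers (20e mean deep-shadow signal, 6e RMS read noise, one electron per ADU), comparing full-precision quantization against dropping the last two bits:

```python
import random
import statistics

random.seed(0)
# Hypothetical deep-shadow pixels: 20 e- mean signal, 6 e- RMS read noise.
samples = [20 + random.gauss(0, 6) for _ in range(10000)]

fine = [round(s) for s in samples]            # 1 e- per ADU
coarse = [4 * round(s / 4) for s in samples]  # last two bits truncated

# The coarser quantizer adds roughly 4/sqrt(12) = 1.15 e- RMS of error,
# small next to the 6 e- of read noise already present.
extra_rms = statistics.pstdev(c - s for c, s in zip(coarse, samples))
total_fine = statistics.pstdev(fine)
total_coarse = statistics.pstdev(coarse)
```

The total noise barely moves, which is why the truncation is invisible in a normal print; whether the residual difference ever matters in shadow pushes is exactly the point in dispute here.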

Quote
(RMS photon shot noise is about the square root of the number of photo-electrons.) If so, then A/D quantization errors of one electron will be buried in that other noise, except at pixels that fall below the black-point.

An image is a community of pixels.  The better the noise is sampled, the closer you get to a true image.  There is no benefit in ignoring things because they are too noisy.  "Buried in noise" is a metaphor, but isn't really what is happening.  Noise never masks anything by 100%.  Readout noise is not a low wall behind which you can see no signal; it is an uneven ground which alters the height of signal stakes that are resting on it; the more accurately that the uneven ground is represented, the less falsehood is in the heights of the tops of the stakes.
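The "uneven ground" picture can be made concrete: noise ahead of the quantizer acts as dither, so averages across a community of pixels recover sub-electron detail that a noiseless quantizer would destroy. A sketch with invented numbers:

```python
import random
import statistics

random.seed(1)
true_level = 2.3  # a signal lying between two integer electron counts

# Without noise, every sample quantizes to 2 and the 0.3 is lost forever.
noiseless = [round(true_level) for _ in range(50000)]

# With 1.5 e- RMS read noise acting as dither, the mean of the quantized
# samples recovers the fractional level.
dithered = [round(true_level + random.gauss(0, 1.5)) for _ in range(50000)]

print(statistics.fmean(noiseless))  # 2.0 exactly
print(statistics.fmean(dithered))   # close to 2.3
```

This is the sense in which better-sampled noise yields a truer image: the noise is not a wall hiding the signal, it is part of the measurement.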

Quote
(It is different in technical applications like astro-photography, where they care about counting every photon, even in very dark parts of the image where S/N ratios are horribly low.)

Shooting birds in the forest with long, slow telephoto/TC stacks when the foliage is at peak growth is not much different.

Someone gave me two RAW files from a Konica-Minolta DSLR, a K7 I believe: the same under-exposed scene shot at both ISO 1600 and 3200 with identical manual exposure.  Even with more noise, relative to the Canons, I could still see clearer shadows in the 3200, which, unlike on the Canons, is analog in ISO nature.  The 1600 had slightly higher chromatic noise, and more distinct line-banding.
« Last Edit: November 20, 2006, 11:32:23 PM by John Sheehy »
BJL
Sr. Member
Posts: 5162
« Reply #25 on: November 21, 2006, 10:00:28 AM »

Quote
The significance of unity is an illusion.
I agree that unity is an illusion, or at least an extreme that is not a natural practical choice. At most, it limits the worst case quantization error to 1e, and no matter how much more one amplifies, the error will in fact not get any less, because the original error sources all come in quanta of one or more electrons, even if subsequent amplification and charge-to-voltage conversion renders these discrete errors into variations in a continuous voltage signal.
But I think, contrary to you, that counting accurate to one electron is more than enough precision, rather than not enough, by considering the minimum magnitude of the errors already present in the signal reaching the A/D convertor.

Quote
The real-world problem is readout noise, which varies RAW values by much more than one electron.
I agree that the noise levels arriving at the A/D converter are already "much more than one electron", and that is precisely why I believe that an A/D convertor that measures accurate to a single photo-electron ("unity conversion") is already _more_ precision than has any practical value.
The real-world problem is the total of read-out noise plus other noise present in the signal reaching the A/D convertor, including photon shot noise.  You might be right that in practice, read-out noise dominates, but I use photon shot noise only because fundamental physics sets a known, absolute lower limit on noise levels in the signal reaching the A/D convertor. That is, photon noise gives an upper limit on the "precision" of the signal that arrives at the A/D convertor.

No matter how low other noise sources get, the minimum possible noise level (in RMS variation in the count of photo-electrons in the electron well) will be the square root of the number of photo-electrons of signal, the contribution of photon shot noise. This can somewhat arbitrarily be divided into two cases:
Case 1: photo-electron signal less than about 16e [higher, like 25e, might be a more reasonable cut-off?]
Case 2:  photo-electron signal of about 16e or more [or 25e?]

Case 1: Under 16e is 12 or more stops below the maximum possible signal in larger current DSLR photosites, so 8 or 9 stops below mid-tone levels at ISO 100 and 4 or 5 stops below mid-tones at ISO 1600. Total black on a print comes about four stops below mid-tones, so these levels are well into total black on a straight print even from an ISO 1600 image. Moreover, the S/N ratio can be at most the square root of the electron count, so at most 4:1 with so few photo-electrons, and this is so miserably low that I cannot imagine any interest in rendering such extremely dark and noisy pixels as anything except pure black.
That is, the black point would almost surely be set above the 16e level, so that the signal from a pixel receiving such a low illumination level would be transformed to level zero, eliminating any visible print noise in any part of the image receiving such low illumination.
[25e would move minimum S/N ratio up to 5:1, still I suspect uselessly low for artistic photography, and so destined to be zeroed out by the black point. Kodak has suggested 10:1 as the minimum S/N ratio needed for "acceptable" image quality, along with 40:1 for "excellent".]
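The shot-noise arithmetic behind these thresholds is simple enough to spell out. A sketch (function names are mine; the 40,000e full well is an assumed round figure, and the 10:1 and 40:1 thresholds are the Kodak guidelines cited above):

```python
import math

def electrons_for_snr(snr):
    """Shot-noise-limited S/N is sqrt(N), so a target S/N of s needs s**2 electrons."""
    return snr ** 2

def stops_below_full_well(electrons, full_well=40000):
    """How far below saturation a given electron count sits (full well assumed)."""
    return math.log2(full_well / electrons)

print(electrons_for_snr(10))       # 100 e- for "acceptable" (10:1)
print(electrons_for_snr(40))       # 1600 e- for "excellent" (40:1)
print(stops_below_full_well(16))   # ~11.3 stops down: deep in the blacks
```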

Case 2: Signal of 16e or more has photon shot noise of at least 4e RMS, and thus the total "analog domain noise" (photon noise, dark current noise, read-out noise, pre-amplifier noise, and any others I have missed) in the signal entering the A/D convertor already has this much "error". An experimentalist would probably tell you that there is no point in measuring a quantity down to less than 1/4 of the error already present: you already get more than enough precision, "two bits" to spare, if the A/D convertor counts accurate to the level of one photo-electron, as with "unity conversion".

To quantify this, adding 1e RMS of quantization noise to 4e RMS of "analog stage" noise (variances 1 and 16, adding in quadrature) gives a total of sqrt(17) = 4.12e RMS, an increase of 0.12e, or about 3%, in total noise. I doubt that this change, in the deep shadows, would be even slightly visible.
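The quadrature sum is easy to check numerically (a sketch; the function name is mine):

```python
import math

def combined_rms(*rms_sources):
    """Independent noise sources add in quadrature (root-sum-of-squares)."""
    return math.sqrt(sum(r * r for r in rms_sources))

shot = 4.0   # e- RMS shot noise on a 16 e- signal
quant = 1.0  # e- RMS worst-case quantization noise at unity conversion
total = combined_rms(shot, quant)  # sqrt(17), about 4.12 e-
increase = total / shot - 1.0      # about 3% more total noise
```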

And as the signal increases, the effect of an additional 1e RMS quantization error becomes less. For example, setting the black point a bit higher by requiring at least 5:1 S/N ratio at "non-black pixels" increases the minimum electron count to 25e and the minimum "analog stage" noise to 5e RMS; adding 1e RMS to 5e RMS in quadrature gives sqrt(26) = 5.10e, an increase of only about 0.1e, or 2%.

If any posterization in extremely dark parts of the image still arises at the transition from black level to lowest non-zero level (which I doubt), it can be eliminated by interpolation of additional finer levels of near black, a kind of "dithering". There is no need to use the illusory precision of even more "accurate" A/D conversion to produce such levels: that would simply be using random noise in the signal as a form of random dithering.


The weakest point in this argument is the somewhat arbitrary choice of S/N ratio thresholds like 4:1, so I am open to evidence and arguments that very dark pixels with S/N ratios less than 4:1 are worth printing as anything other than pure black. But for now I am skeptical, given expert opinions like the guideline of a 10:1 minimum S/N ratio stated in a Kodak technical paper.
John Sheehy
Sr. Member
Posts: 838
« Reply #26 on: November 21, 2006, 03:54:29 PM »

Quote
John,

Thanks for the information. Very well reasoned as usual. Both you and Roger are considerably more advanced than I in these matters, so I can't really judge. Roger does determine noise by subtracting two frames taken together under identical conditions, so any repeatable pattern noise is not accounted for. However, his approach does eliminate problems from nonuniform targets and illumination when one merely takes the standard deviation of  pixel values for a uniform patch. How do you measure noise?

For blackframes, I just look at the standard deviation of the blackframe, and the blackframe with averaged lines, horizontal and vertical, for banding noise.  The values agree in IRIS and Photoshop, as long as you view the greyscale in PS as color (PS gets confused in greyscale mode, and does not report values properly).  IRIS makes it very easy - you just load the RAW file, and type the stat command, or highlight a rectangle and right click on "Statistics".  For the banding, I measure in PS, after doing an image resizing to one pixel wide or high, and back to full size, with bicubic.
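A sketch of that measurement in numpy rather than IRIS/Photoshop, on a synthetic black frame (all noise figures here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic black frame: 128 ADU bias, 4 ADU RMS random read noise,
# plus 1 ADU RMS of horizontal line banding (same offset across each row).
frame = rng.normal(128.0, 4.0, size=(200, 300))
frame += rng.normal(0.0, 1.0, size=(200, 1))

total_rms = frame.std()  # dominated by the random read noise

# Averaging along each row collapses the random component by sqrt(300),
# leaving mostly the banding - the same idea as resizing to 1 px wide.
row_means = frame.mean(axis=1)
banding_rms = row_means.std()
```

The row-average step is why banding is so visible despite its small RMS: it is correlated along a whole line, so it does not average away the way random noise does.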

For actually exposed areas, I measure noise by shooting a color checker card out of focus, to get rid of any luminance detail in the card itself, then splitting the RAW into the 4 color channels (2 greens), and measuring the mean and standard deviations of the flat areas within the squares (careful not to include bokeh from the black edges or their shadows, or dust spots if present).

Quote
That makes sense, but there must be a point of diminishing returns when the ADU number is greater than the number of electrons captured.

There are certainly diminishing returns, but there is no significance to the unity point per se.

Quote
One is after all measuring voltage, which is related to the charge and capacitance of the pixel well together with the analog amplification. However, the voltage should increase in discrete steps with each electron.  Some degree of oversampling makes sense, just like carrying an extra decimal place in calculations so as to avoid rounding errors. If you are using a ruler to measure an object to the nearest millimeter, it doesn't help to calculate with six decimal places.

Who says we're only interested in the nearest millimeter, though?  We don't look at images to count photons.  We look at images to see what is supposed to be in them, and even "diminished returns" are better than not having those diminished returns, when you really need to see what's down there in those shadows, and as little of the artifacts as possible.