Panopeeper


« Reply #40 on: December 01, 2007, 12:55:48 AM » 
Reply

So you agree with my staircase analogy, Panopeeper? Yes, I do, though I have a small problem with it: one could come to the idea that "floors" represent "stops", and therefore that the levels are always a fraction of a stop. It does not have to be that way: a step could be, for example, 3 stops high.



Logged

Gabor



Ray


« Reply #41 on: December 01, 2007, 03:52:01 AM » 
Reply

Yes, I do, though I have a small problem with it: one could come to the idea that "floors" represent "stops", and therefore that the levels are always a fraction of a stop. It does not have to be that way: a step could be, for example, 3 stops high. Yes, a step could be 3 stops high if you had a low enough bit depth, like 4-bit encoding. But that's going to extremes. However, using f-stops to describe DR is just one particular convention, I agree.



Logged




John Sheehy


« Reply #42 on: December 01, 2007, 10:15:48 AM » 
Reply

I don't see how this can be true. The smallest possible step in a dynamic range is limited by the effect of a single photon. If photodetectors on sensors could actually count photons, which I believe they can't, but let's assume that they can without any interference from noise of any description, then the dynamic range is determined by the maximum number of photons that a sensor can 'process' during any exposure. The context in which I was writing was one without noise, in which bit depth is the limiter. DR applies as well to any computer-generated image, in which case you could double the DR with each extra bit of linear depth. I was thinking of shot noise as a noise in this context. Of course, with a pure photon-counting situation, the maximum number of photons would determine DR. That is just as hypothetical as my noiseless image (for photography; not for computer-generated images). This maximum is determined by the size of the sensor, all else being equal, or if you like, the size of individual photodetectors, all else being equal. In this sense, size or scale is very relevant to DR.
In practice, you have to either subtract or cancel all sources of noise from this maximum signal capacity of the sensor in order to determine a useful DR. That's where your analogy doesn't work; the maximum number of photons minus the "noise floor" does not relate to DR at all, unless you are still thinking about pure photon-counting noise, where the lowest usable level (by any arbitrary but consistent standard) is always the same number of photons, so there is a unique mapping between max minus noise floor and max divided by noise floor. The division is the most useful and straightforward, though. "Differences" just result in a curve that needs to be converted back to a ratio for any useful calculations. As soon as you start doing anything else besides counting photons/electrons, like introducing read noise, the absolute difference between the noise floor and the max has no direct relationship to DR. A 16-bit camera with a noise floor of 32K ADU would have a traditional DR of only 2x, or one stop, although the difference would be 32K ADU, or somewhere from 16K to 400K electrons. An 8-bit camera with a noise floor of 1.5 ADU, or 0.75 to 30 electrons out of a max of 30K electrons, would have much more DR, even though the difference between the noise floor and max signal is much less, both in ADUs and electrons. You seem to have a literal conception of "noise floor". It's a poorly chosen term, IMO, which gives off false connotations. It is not the bottom of anything. Signal always exists below the noise floor and is not totally obscured by the noise. It's just an important turning point for SNRs in the deep shadows. Don't forget, most of this discussion is pixel-centric, and that's fine, as long as we understand what that means. The DR we usually speak of is that of the pixel, but the pixel does not determine the image, and depending upon the pixel frequency of the detail we are interested in capturing, you can get usable signal well below the noise floor.
You can record a fat white letter that almost fills the frame, on a black background, in a clean DSLR, where the level for the white letter is a small fraction of a single photon. In the same way, as we use more and more pixels in our images, the pixel noise, and pixel DR, become less of an issue to image noise and DR. Never forget, the real world of light is individual photons, and any illusion of smooth levels is achieved by mechanical binning and the inability to resolve individual photons. If DR is not 'scale' dependent as you suggest, then you could claim, if it were possible to design a completely noise-free tiny sensor containing, say, 100 photodiodes just 1 micron in diameter, that such a tiny sensor could have the same dynamic range as, say, a 5D or P45+.
This would be clearly ridiculous. Reductio ad absurdum! Yes it would be, but as I said before, my context was one where DR isn't separable from bit depth, and such scenarios can exist, if not in a digital capture. You can do a 3D ray tracing with the output as a linear DNG that looks like a camera capture, but with the only noise/distortion being quantization. In that case, DR would be directly proportional to bit depth. I was just trying to give some balance to the idea that DR has nothing to do with bit depth.
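The ratio-versus-difference point above can be sketched numerically (a Python illustration using the hypothetical cameras from this post; the function name is my own):

```python
import math

def dr_stops(max_signal, noise_floor):
    """Dynamic range as a ratio (max / noise floor), expressed in stops."""
    return math.log2(max_signal / noise_floor)

# Hypothetical 16-bit camera with a noise floor of 32K ADU:
# a huge absolute difference, but only ~1 stop of DR.
print(dr_stops(65535, 32768))   # ≈ 1.0

# Hypothetical 8-bit camera with a noise floor of 1.5 ADU:
# a far smaller absolute difference, but ~7.4 stops of DR.
print(dr_stops(255, 1.5))       # ≈ 7.4
```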



Logged




John Sheehy


« Reply #43 on: December 01, 2007, 10:23:52 AM » 
Reply

Yes, a step could be 3 stops high if you had a low enough bit depth, like 4-bit encoding. But that's going to extremes. However, using f-stops to describe DR is just one particular convention, I agree. If the noise is high enough, though, 4 bits linear is all you need, and a gamma-encoded 4 bits if the noise is even lower. You can barely see quantization effects with a D2X ISO 1600 RAW quantized to 6 linear bits. I'm sure some of the earliest CCD attempts at 1600 would suffice with 4 bits (although a positive black-point offset in the RAW data would be very useful at that extreme).
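The effect described here, noise acting as dither so that coarse quantization is survivable, can be sketched in Python (all numbers are illustrative, not measurements of any real camera):

```python
import random

random.seed(0)

def quantize(value, step):
    """Quantize to the nearest multiple of `step` (coarse linear encoding)."""
    return round(value / step) * step

step = 4096 / 16      # keep only 16 linear levels of a 0..4095 range
true_level = 40.0     # a faint signal, about 1/6 of one quantization step
noise_sigma = 150.0   # heavy noise, as at very high ISO

# Without noise, every pixel quantizes to 0 and the faint signal is lost:
assert quantize(true_level, step) == 0

# With noise acting as dither, averaging many pixels recovers the level:
samples = [quantize(true_level + random.gauss(0, noise_sigma), step)
           for _ in range(200_000)]
print(sum(samples) / len(samples))   # ≈ 40, despite the 16-level encoding
```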



Logged




John Sheehy


« Reply #44 on: December 01, 2007, 10:32:33 AM » 
Reply

That's right. However, the numerical representation does not have to follow that one-to-one.
Example: if the DR is only one stop and the values are stored in 256 levels, then the highest value is 256 times as high as the lowest value, but that represents only twice the lightness. If you are talking about 256 linear levels (8 bits), then a DR of 2.0 would mean that the noise floor was at level 127.5. If you are going to use 0 through 255 to represent what I would think of as 127.5 to 255, then you are clipping away shadows. You can't clip at the noise floor without damaging the capture (not that it is a good one anyway, but further damage is further damage). Likewise, the numerical values from 0 to 7 can represent levels of a dynamic range of 1000:1. Yes, but you might want to add some noise to that, if enough weren't there already, unless you're looking to use quantization as a special effect.
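The 0-to-7 example can be made concrete with a geometric level spacing (a sketch; the spacing rule is my own choice for illustration, not any real camera's encoding):

```python
import math

# Spread 8 stored levels (codes 0..7) geometrically over a 1000:1 range:
ratio = 1000 ** (1 / 7)                   # multiplicative step between codes
intensities = [ratio ** k for k in range(8)]

print(intensities[0], intensities[-1])    # 1.0 ... ≈ 1000.0
print(math.log2(ratio))                   # ≈ 1.42 stops between adjacent codes
```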



Logged




John Sheehy


« Reply #45 on: December 01, 2007, 10:45:44 AM » 
Reply

I read some of Roger Clark's site the other day and I don't think he was claiming that moving from a 12-bit ADC to a 14-bit ADC would give 2 extra stops of dynamic range. He did suggest that the lower read noise possible with a 14-bit (or 16-bit) device could benefit some current cameras, e.g. the Canon 5D and presumably the Nikon D3 etc., giving a higher achievable dynamic range, but not a full 2 stops higher. Part of the issue is that the DR is limited by full-well capacity and the noise floor, which the ADC contributes to. Hence a 'quieter ADC' should improve the lowest readable levels. I think that the ADCs in the high-end cameras already aren't introducing the majority of read/blackframe noise. I'm talking about the ones that have ISO 100 blackframe noises of 1.22 to 1.4 12-bit ADUs (1D* series, D3, 40D). The ones that have 2.0 to 4.0 ADU are possibly using inferior ADCs, but that can also be from the photosite read noise. I think Roger overestimates the role of ADC noise in current DSLRs, especially in Canons. I think that the difference in read noise, in electrons, at different ISOs in Canons occurs right at the photosite readout. Roger was expecting an increase in DR in the 14-bit DSLRs that were announced, but the benefit never materialized. The fact that ISO 100 read noise stayed at almost exactly the same level between the 1Dmk2 and 1Dmk3 suggests that the ADC is probably not a major contributor to noise, IMO.



Logged




John Sheehy


« Reply #46 on: December 01, 2007, 10:53:57 AM » 
Reply

The Leica DMR back for the R8 and R9 is a 16-bit camera. It has excellent dynamic range and gradation and can retain much more shadow detail than a 12- or 14-bit camera. Those who have used both the DMR and 5D (for example) in RAW mode overwhelmingly prefer the DMR's image quality. Is this with the same absolute exposure (IOW, same scene with the same lighting, same Av/Tv combo or equivalent)? Otherwise, the differences could be in absolute exposure. What a camera says its ISO is isn't very exact. The 5D meters for about 1.2x the ISO it says it is at, for example. Also, how deep into the shadows are you talking? An MF back could have low shot noise and high read noise compared to a DSLR, and look better in the shadows down to a point, and then break up faster as it goes deeper into the shadows, so often it depends on how deep you go. Anyway, the 5D is not spectacular at ISO 100. It has 1/2 stop more read noise than the 1D* cameras at ISO 100. The extra pixels make up for it a little, compared to a 1Dmk2 or 3, but not enough to equal them in the deep shadows.



Logged




Ray


« Reply #47 on: December 02, 2007, 12:18:51 AM » 
Reply

The context in which I was writing was one without noise, in which bit depth is the limiter. DR applies as well to any computer-generated image, in which case you could double the DR with each extra bit of linear depth. I was thinking of shot noise as a noise in this context. John, if you have already defined a situation where bit depth is the limiter to DR, then you might be right that DR is not dependent upon scale. However, it's difficult to follow your reasoning. I would have thought the DR of a computer-generated image would be limited by the contrast ratio of the computer monitor. However, the number of shades between the darkest point and the brightest point would depend on the bit depth. The staircase analogy still applies, does it not? Of course, with a pure photon-counting situation, the maximum number of photons would determine DR. That is just as hypothetical as my noiseless image (for photography; not for computer-generated images). Well, it's not entirely hypothetical but rather a matter of accuracy. Current DSLRs produce a DR which is roughly related to a photon count at base ISO, do they not? If you concede that a futuristic camera that was so accurate and sophisticated it could serve a dual purpose as a photon counter would have a dynamic range based on the maximum number of photons it could count during a full exposure, i.e. a DR based on scale, then why should a less accurate camera not have a DR based on scale? Has the nature and definition of DR changed because our capturing device is less accurate and can only give a rough approximation of a photon count? That's where your analogy doesn't work; the maximum number of photons minus the "noise floor" does not relate to DR at all, unless you are still thinking about pure photon-counting noise, where the lowest usable level (by any arbitrary but consistent standard) is always the same number of photons, so there is a unique mapping between max minus noise floor and max divided by noise floor.
Again, I can't follow your reasoning. If I say 'the maximum number of photons minus the noise floor = DR', then I must be thinking about this ultra-accurate, pure photon-counting situation. I don't see how you can move from a position of conceding that at a level of great accuracy DR does relate to scale, but at a level of less accuracy DR has nothing to do with scale.



Logged




Guillermo Luijk


« Reply #48 on: December 02, 2007, 04:50:31 PM » 
Reply

That's right. However, the numerical representation does not have to follow that one-to-one.
Example: if the DR is only one stop and the values are stored in 256 levels, then the highest value is 256 times as high as the lowest value, but that represents only twice the lightness.
Likewise, the numerical values from 0 to 7 can represent levels of a dynamic range of 1000:1. I have to agree with John Sheehy. Panopeeper, as I already said in a post on this thread, we are talking about sensors, i.e. about CAPTURE of dynamic range, not about STORING or PROCESSING. And as long as sensors are LINEAR devices, they simply cannot register more DR than the number of bits used for the LINEAR A/D encoding. Your example is perfectly right to demonstrate that more DR than N f-stops can be coded into a properly designed NON-LINEAR N-bit encoding system, but as far as I know only Leica's M8 RAW files are non-linearly encoded, and after all they are only 8-bit. All Canons, Nikons, etc. use linear ADC encoding, and because of this they can store a maximum of N f-stops of DR in N bits. So for those cameras bit depth is a physical limit to DR. Just an example: if N=16, your LINEAR range is 0..65535:
- Min amount of light we can represent: 1
- Max amount of light we can represent: 65535
- Max contrast we are capable of capturing: 65535/1
What is the corresponding DR? DR = log(65535/1)/log(2) = 16 f-stops = N. You could store 256 f-stops of DR in 8 bits, just assigning 1 level of the 0..255 range to each f-stop, that is OK. But I insist: there is no capture device (sensor) that works in such a non-linear way today. Surely tomorrow there will be (Leica's M8 achieves nearly the same DR with 8 bits as a Canon 5D does with its linear 12-bit RAW files, and it's all thanks to a very intelligent non-linear encoding).
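The limit generalizes directly to other bit depths; a small sketch of the bound (assuming an ideal linear encoder whose smallest non-zero code is 1):

```python
import math

def max_linear_dr_stops(bits):
    """Upper bound on DR (in f-stops) of an ideal linear N-bit encoding:
    the smallest non-zero code is 1, the largest is 2**bits - 1."""
    return math.log2(2 ** bits - 1)

for bits in (8, 12, 14, 16):
    print(bits, round(max_linear_dr_stops(bits), 3))   # ≈ 8, 12, 14, 16 stops
```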


« Last Edit: December 02, 2007, 07:08:07 PM by GLuijk »

Logged




Ray


« Reply #49 on: December 02, 2007, 07:33:44 PM » 
Reply

You could store 256 f-stops of DR in 8 bits, just assigning 1 level of the 0..255 range to each f-stop, that is OK. But I insist: there is no capture device (sensor) that works in such a non-linear way today. Surely tomorrow there will be (Leica's M8 achieves nearly the same DR with 8 bits as a Canon 5D does with its linear 12-bit RAW files, and it's all thanks to a very intelligent non-linear encoding). Ah! Here we have the explanation for this current difference of opinion. DR can be limited by bit depth in the linear method of encoding used in most digital cameras, whereby each doubling of light values is represented by a doubling of encoded levels; but it doesn't have to be. This point was made earlier in the thread, a few times, but seems to have been overlooked or forgotten, at least by me. If you were to put an 8-bit A/D converter in a P45 DB, then you could truly say the dynamic range was limited by bit depth (using the linear method of encoding). However, if you put a 16-bit A/D converter in a current Canon or Olympus DSLR with their current noise levels, then I think it would be true to say that DR was limited by photodiode size. On the other hand, it might just be limited by noise. Either way, in this situation DR is scale limited; i.e., limited by the scale of the noise or the scale of the photoreceptor.



Logged




Panopeeper


« Reply #50 on: December 02, 2007, 10:01:56 PM » 
Reply

we are talking about sensors, i.e. about CAPTURE of dynamic range, not about STORING or PROCESSING We are talking all the time (like yourself, just in this post) about the numerical representation of the values. Raw data is not about the analog values. Just an example: if N=16, your LINEAR range is 0..65535 The numerical range of the stored data is 1..65535. This does not mean anything on its own. Think of the following: you can measure length in inches or in millimeters. If you measure it in inches, then you can express much larger lengths in the same numerical range. Both are linear measurements, with different scales. The meaning of linear in the present context is that every increase of the numerical value by one represents a certain fixed increase of the light intensity. It is not relevant from this point of view how much that intensity difference is. The problem is that you are concentrating on "stop" and "bit", instead of seeing the dynamic range simply as "proportion". Who said that it has to be measured on a base-2 log scale? Why not 3, 10 or 1.5? You are mixing the number representation (binary) with your favourite scale of the dynamic range. What about a computer which can work only with decimal numbers? (There was such a thing.) What about tri-state hardware architecture ("trits" instead of "bits")? Imagine a line with a point on it; let's name it a. Draw markers on top of the line with a fixed distance of d, i.e. at a+d, a+2d, a+3d, a+4d, a+5d, etc. These are the "levels", stored as the raw values. Draw markers at the underside of the line too; pick the first one anywhere to the right of a and name it a*p, then the next one a*p^2, a*p^3, a*p^4, a*p^5, etc. (p > 1). These represent the steps (not necessarily stops) of the dynamic range. If p is 2, then we measure it in stops, but it does not have to be. There is no fixed relationship between the two rows of markers. The linear markers can be much closer or much farther apart than the proportional markers, up to a certain point.
Because p > 1, the proportional scale will "outpace" the linear scale at some point, and that point may fall within the range of our interest, or it may not.
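The two rows of markers can be put into numbers (a sketch; a, d and p are arbitrary illustrative values, not tied to any sensor):

```python
a, d, p = 1.0, 100.0, 2.0

linear    = [a + k * d  for k in range(12)]   # the "levels" (raw values)
geometric = [a * p ** k for k in range(12)]   # the "steps" of the dynamic range

# The proportional scale eventually outpaces the linear one:
crossover = next(k for k in range(12) if geometric[k] > linear[k])
print(crossover)   # first index where a*p**k exceeds a + k*d
```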



Logged

Gabor



Panopeeper


« Reply #51 on: December 02, 2007, 10:04:34 PM » 
Reply

If you were to put an 8-bit A/D converter in a P45 DB, then you could truly say the dynamic range was limited by bit depth (using the linear method of encoding) You could predict that the result will be horrendously posterized, but you could not say anything about the captured dynamic range.



Logged

Gabor



Ray


« Reply #52 on: December 03, 2007, 01:44:17 AM » 
Reply

You could predict that the result will be horrendously posterized, but you could not say anything about the captured dynamic range. You could say something about the captured dynamic range if the encoding is the same as it currently is in most cameras, i.e. 256, 128, 64, 32, 16, 8, 4, 2, 1, which represents 8 intervals, or just 8 stops of DR. The posterization would be horrendous in the lower midtones, producing a practically useful DR of considerably less than the theoretical maximum of 8 stops.
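The breakdown of where the 8-bit linear levels go can be tabulated in a few lines (a sketch of the arithmetic, nothing camera-specific):

```python
# In 8-bit linear encoding, each stop down from clipping gets half the levels:
levels_per_stop = []
hi = 256
for _ in range(8):
    lo = hi // 2
    levels_per_stop.append(hi - lo)   # codes available within this stop
    hi = lo

print(levels_per_stop)   # [128, 64, 32, 16, 8, 4, 2, 1]
```

The bottom stops get only a handful of codes each, which is why the posterization concentrates in the lower midtones and shadows.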



Logged




Guillermo Luijk


« Reply #53 on: December 03, 2007, 04:47:15 AM » 
Reply

We are talking all the time (like yourself, just in this post) about the numerical representation of the values. Raw data is not about the analog values. The numerical range of the stored data is 1..65535. This does not mean anything on its own.
Think of the following: you can measure length in inches or in millimeters. If you measure it in inches, then you can express much larger lengths in the same numerical range. Both are linear measurements, with different scales.
The meaning of linear in the present context is that every increase of the numerical value by one represents a certain fixed increase of the light intensity. It is not relevant from this point of view how much that intensity difference is.
The problem is that you are concentrating on "stop" and "bit", instead of seeing the dynamic range simply as "proportion". Who said that it has to be measured on a base-2 log scale? Why not 3, 10 or 1.5? You are mixing the number representation (binary) with your favourite scale of the dynamic range. What about a computer which can work only with decimal numbers? (There was such a thing.) What about tri-state hardware architecture ("trits" instead of "bits")?
Imagine a line with a point on it; let's name it a. Draw markers on top of the line with a fixed distance of d, i.e. at a+d, a+2d, a+3d, a+4d, a+5d, etc. These are the "levels", stored as the raw values.
Draw markers at the underside of the line too; pick the first one anywhere to the right of a and name it a*p, then the next one a*p^2, a*p^3, a*p^4, a*p^5, etc. (p > 1). These represent the steps (not necessarily stops) of the dynamic range. If p is 2, then we measure it in stops, but it does not have to be.
There is no fixed relationship between the two rows of markers. The linear markers can be much closer or much farther apart than the proportional markers, up to a certain point. Because p > 1, the proportional scale will "outpace" the linear scale at some point, and that point may fall within the range of our interest, or it may not. Panopeeper, the point of talking about doubling and halving levels has no relation to bits. It is simply the definition of "f-stop": the relative range of lightness where the brightest value is twice the lowest light value. Forget now about bits or trits; think that you just have an encoding system which can record light levels from 0 to 65535. And you also know this system responds linearly to light, i.e. if a light source generates a level X on that range, double the light would have generated 2*X and half the light would have generated X/2. With such a linear response system, can you imagine any level distribution that would allow capturing more than log(65535)/log(2) = 16 f-stops of dynamic range? You can't. It's a numerical limit for such a linear capture system. The new 14-bit Canon and Nikon cameras have linear encoding, so if they were hypothetically free of noise they could only distinguish a maximum of 14 f-stops. The practical dynamic range will be quite a bit lower than this figure due to noise, since the lowest f-stops will be full of noise, making them unusable from a photographic point of view.


« Last Edit: December 03, 2007, 07:31:49 AM by GLuijk »

Logged




bjanes


« Reply #54 on: December 03, 2007, 05:44:19 AM » 
Reply

Forget now about bits or trits; think that you just have an encoding system which can record light levels from 0 to 65535. And you also know this system responds linearly to light, i.e. if a light source generates a level X on that range, double the light would have generated 2*X and half the light would have generated X/2. With such a linear response system, can you imagine any level distribution that would allow capturing more than log(65535)/log(2) = 16 f-stops of dynamic range? You can't. It's a numerical limit for such a linear capture system. This has been gone over ad nauseam, and reasoning such as Guillermo's has always prevailed. Further discussion is futile.



Logged




John Sheehy


« Reply #55 on: December 03, 2007, 07:45:30 AM » 
Reply

You can cover a range of 50 stops even with one bit. Yes, but you have to use extreme oversampling; otherwise, all you have is a threshold and the concept of "DR" or even "stops" is meaningless. Real light is always "1-bit", but the receptors in our eyes and the sensor wells and film grain all work towards binning multiple single-bit values into a wider range of shades besides "photon or no photon". A 10MP 1-bit camera would be of limited usefulness; a 500GP 1-bit camera would be quite usable, if handling the data were practical.
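The oversampling idea can be sketched as a toy model (Python; the detector model is a deliberate simplification, each 1-bit element firing independently with probability proportional to the light, which is not how a real sensor behaves):

```python
import random

random.seed(1)

def one_bit_sensor(intensity, n_elements):
    """Sum n independent 1-bit detections; each element fires with
    probability `intensity` (0..1). A toy model of binning, not a real sensor."""
    return sum(random.random() < intensity for _ in range(n_elements))

# Heavy oversampling turns a pure threshold device into smooth shades:
print(one_bit_sensor(0.25, 10_000) / 10_000)   # ≈ 0.25
```

With one element per pixel you get only "photon or no photon"; with thousands of binned elements the summed count behaves like a multi-level reading.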



Logged




John Sheehy


« Reply #56 on: December 03, 2007, 07:55:22 AM » 
Reply

You could predict that the result will be horrendously posterized, but you could not say anything about the captured dynamic range. Again, it depends on what kind of resolution you're looking for. For each pixel, as an independent measuring device, 8-bit linear reduces DR to slightly less than 8 stops without any noise, and even less with noise. When you start looking at groups of pixels, however, their combined values start creating more levels, and potentially more DR, and here a little bit of linearly distributed noise before digitization can actually help if there is not enough existing noise.



Logged




Panopeeper


« Reply #57 on: December 03, 2007, 01:21:39 PM » 
Reply

Yes, but you have to use extreme oversampling; otherwise, all you have is a threshold and the concept of "DR" or even "stops" is meaningless The subject was not practicality but the principle. I could have said "20 stops in ten bits", which is still impractical, but not as extreme as "50 stops with one bit". The concept of "stop" is relevant mainly for photographers; it plays no role whatsoever in the principle.



Logged

Gabor



Panopeeper


« Reply #58 on: December 03, 2007, 02:54:25 PM » 
Reply

With such a linear response system, can you imagine any level distribution that would allow capturing more than log(65535)/log(2) = 16 f-stops of dynamic range? You can't. It's a numerical limit for such a linear capture system I just tried to make it understandable that there is no such connection. You are still concentrating on stops and bits instead of concentrating on the underlying math. The only meaning of the formula log(65535)/log(2) is that 16 divided by 1 is 16. Let's see it a different way. There is a given scene, a given lens and a given sensor. Assume we can count the photons individually; the numerically expressed intensity is derived directly from the number of captured photons. In one case the resulting number representing the intensity equals the number of photons. In another case only every 15th photon counts. In another case only every 200th photon counts, but we can go even further: only every 500th photon counts (i.e. the numerical intensity is the number of photons divided by 500). The 60000 photons of a well would be converted into the numerical range of 0..120, which requires just 7 bits. The numerical representation is linear in all these cases, but the number of levels covering the entire range is vastly different, and so is the number of bits required to store the data. In all these cases the covered dynamic range is the same.



Logged

Gabor



EricV


« Reply #59 on: December 03, 2007, 04:18:56 PM » 
Reply

Let's see it a different way.... In all these cases the covered dynamic range is the same. The point you are missing is that the dynamic range is NOT the same. Dynamic range by definition is the ratio between maximum signal and minimum signal, not simply the value of the maximum signal. (We are of course still ignoring noise; otherwise the definition would be maximum signal divided by noise.) In your example, the maximum signal is the same in all cases (60000 photons), but the minimum signal varies with the method of photon counting. If you only count one photon in 500, then the minimum signal you can count is 500 photons and the dynamic range is 60000/500 = 120, capturable as you state in 7 bits. But if every photon is counted, then the dynamic range increases to 60000/1 and requires 16 bits to capture. This is a real increase in dynamic range, by any reasonable definition.
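The correction amounts to saying the counting granularity sets the minimum signal; in a short sketch (Python, using the numbers from Panopeeper's example; the function name is my own):

```python
import math

def dr(max_photons, photons_per_count):
    """DR = max signal / min resolvable signal; the smallest signal one
    count can represent is `photons_per_count` photons."""
    return max_photons / photons_per_count

full_well = 60000
print(dr(full_well, 500), math.log2(dr(full_well, 500)))  # 120, ≈ 6.9 stops
print(dr(full_well, 1),   math.log2(dr(full_well, 1)))    # 60000, ≈ 15.9 stops
```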



Logged




