Author Topic: Dynamic Range vs bit depth  (Read 11192 times)
jrsforums
« on: February 15, 2013, 09:59:22 AM »

Is bit depth, by definition, a ceiling on the dynamic range an image can contain?

For example, a 14 bit raw image cannot contain more than 14 stops of DR.

An 8 bit jpeg, no more than 8 stops?

Thx, John
hjulenissen
« Reply #1 on: February 15, 2013, 10:17:04 AM »

Quote
Is bit depth, by definition, a ceiling on the dynamic range an image can contain?

For example, a 14 bit raw image cannot contain more than 14 stops of DR.
I shall let the sensor experts comment on that one.

Quote
An 8 bit jpeg, no more than 8 stops?
First, a jpeg is always gamma-processed, meaning that its 8 bits are comparable to something like 12-13 bits of linear coding.

Second, there are no limits to what kind of processing can be done to a jpeg. A bracketed exposure set covering any number of stops can be tonemapped and distributed as an 8-bit jpeg. It would still contain information about a large scene dynamic range.
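To make that tonemapping idea concrete, here is a rough sketch (the 16-stop scene and the log tonemap are just assumptions for illustration) of how a scene far wider than 8 linear stops can still land on distinct 8-bit codes:

```python
import math

# Sketch: a scene spanning 16 stops cannot fit in 8 bits of *linear*
# coding, but a log tonemap squeezes every stop onto a distinct 8-bit code.

scene = [2.0 ** ev for ev in range(0, 17, 2)]   # linear values, 2 stops apart

def tonemap(x, stops=16):
    """Log tonemap: map [1, 2**stops] onto 8-bit codes 0..255."""
    return round(math.log2(x) / stops * 255)

codes = [tonemap(x) for x in scene]
print(codes)   # runs from 0 up to 255; every input stop stays distinguishable
```

The resulting codes increase monotonically from 0 to 255, so shadow and highlight stops that a linear 8-bit coding would clip or crush remain distinct.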

-h
ErikKaffehr
« Reply #2 on: February 15, 2013, 11:14:09 AM »

Hi,

Normally the signal from the sensor is linearly coded. In that case, one bit is needed for each EV of DR.

It is possible to use non-linear coding; Leica does it, and perhaps some other compressed formats do too. So you can cram any dynamic range into an encoded format, but an 8-bit coding will still only hold 256 distinct values per channel.
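As a quick sanity check on the one-bit-per-EV rule for linear coding (a toy calculation, not anything camera-specific):

```python
import math

# With linear coding, the ratio between the largest code (2**n - 1) and
# the smallest nonzero code (1) is just under n stops.

def linear_stops(bits):
    """Stops of range between code 1 and the maximum code, linear coding."""
    return math.log2(2 ** bits - 1)

print(round(linear_stops(8), 2))    # ~8 stops for 8-bit linear coding
print(round(linear_stops(14), 2))   # ~14 stops for 14-bit linear coding
```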

Best regards
Erik

Is bit depth, by definition, a ceiling on the dynamic range an image can contain?

For example, a 14 bit raw image cannot contain more than 14 stops of DR.

An 8 bit jpeg, no more than 8 stops?

Thx, John

Tim Lookingbill
« Reply #3 on: February 15, 2013, 11:19:04 AM »

Quote
Is bit depth, by definition, a ceiling on the dynamic range an image can contain?

Only if you can relate this to USABLE detail captured within a scene's dynamic range, and so far no one has SHOWN a direct relationship, which makes discussions on this topic similar to using Einstein's theory of relativity to prove we can physically time travel. The math makes sense, but the energy required makes it impossible without the traveler being vaporized.

It's a neat story, though, just like this one.
hjulenissen
« Reply #4 on: February 15, 2013, 11:23:41 AM »

I think the take-away point is that for most cameras, "noise" seems to dominate "posterization". It is tempting to use this to conclude that cameras tend to have a sufficient number of bits.

Note that the ADC may very well generate noise internally before it comes around to actually deciding on a discrete code. And the distinction between "sensel", "analog amplification", and "ADC" may be blurry.

-h
Tim Lookingbill
« Reply #5 on: February 15, 2013, 11:37:51 AM »

Quote
I think the take-away point is that for most cameras, "noise" seems to dominate "posterization". It is tempting to use this to conclude that cameras tend to have a sufficient number of bits.

Note that the ADC may very well generate noise internally before it comes around to actually deciding on a discrete code. And the distinction between "sensel", "analog amplification", and "ADC" may be blurry.

-h

In short it's impossible to prove due to a lack of distinction.
IliasG
« Reply #6 on: February 15, 2013, 12:15:45 PM »

Is bit depth, by definition, a ceiling on the dynamic range an image can contain?

For example, a 14 bit raw image cannot contain more than 14 stops of DR.

An 8 bit jpeg, no more than 8 stops?

Thx, John

Hi John

By definition, DR is Max_signal / noise_floor. Well... this is the so-called "engineering DR"; DxO uses a slightly different definition, Max_signal / (the level where SNR = 1).

Keep in mind that noise is a stochastic quantity, so it can be a fraction of the unit used. If noise could be zero, DR could be infinite... In our case (digital photography) there is always some "read noise", but even if in some magic way a manufacturer could eliminate it entirely, an n-bit encoded raw would still contain quantization noise, which is the stdev of a uniform distribution (http://en.wikipedia.org/wiki/Quantization_error) and equals about 0.29 LSB. So the max (engineering) DR that an n-bit file can hold is n + 1.8 stops.
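That arithmetic can be sketched as follows (assuming, as above, that quantization noise of 1/sqrt(12) LSB is the only noise source):

```python
import math

# If the only noise is quantization noise (stdev = 1/sqrt(12) ~ 0.29 LSB,
# the stdev of a uniform distribution one LSB wide), the engineering DR of
# an n-bit linear file is log2(max_signal / noise_floor) = n + ~1.8 stops.

def max_engineering_dr(bits):
    q_noise = 1 / math.sqrt(12)           # ~0.289 LSB
    return math.log2(2 ** bits / q_noise)

print(round(max_engineering_dr(14), 2))   # ~15.79 stops for a 14-bit raw
print(round(max_engineering_dr(8), 2))    # ~9.79 stops for an 8-bit file
```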

http://forum.dxomark.com/index.php/topic,198.0.html
hjulenissen
« Reply #7 on: February 15, 2013, 12:25:14 PM »

Quote
Hi John

By definition, DR is Max_signal / noise_floor. Well... this is the so-called "engineering DR"; DxO uses a slightly different definition, Max_signal / (the level where SNR = 1).

Keep in mind that noise is a stochastic quantity, so it can be a fraction of the unit used. If noise could be zero, DR could be infinite... In our case (digital photography) there is always some "read noise", but even if in some magic way a manufacturer could eliminate it entirely, an n-bit encoded raw would still contain quantization noise, which is the stdev of a uniform distribution (http://en.wikipedia.org/wiki/Quantization_error) and equals about 0.29 LSB. So the max (engineering) DR that an n-bit file can hold is n + 1.8 stops.

http://forum.dxomark.com/index.php/topic,198.0.html
The quantization error is signal-dependent; the uniform distribution is only an engineering simplification, used when the number of bits is high enough that the signal and the quantization noise can be considered uncorrelated. It is trivial to construct a signal that can be quantized with exactly zero error (e.g. a square waveform).

In a perfect photon-counting camera, it would be sufficient for the ADC to have codes corresponding to 1 photon, 2 photons, ... up to the sensor saturation point. Then you would only have photon noise. At least with my classical-physics understanding.

-h
IliasG
« Reply #8 on: February 15, 2013, 12:53:57 PM »

Quote
The quantization error is signal-dependent; the uniform distribution is only an engineering simplification, used when the number of bits is high enough that the signal and the quantization noise can be considered uncorrelated. It is trivial to construct a signal that can be quantized with exactly zero error (e.g. a square waveform).

In a perfect photon-counting camera, it would be sufficient for the ADC to have codes corresponding to 1 photon, 2 photons, ... up to the sensor saturation point. Then you would only have photon noise. At least with my classical-physics understanding.

-h

Isn't the noise floor measured in the absence of light?
jrsforums
« Reply #9 on: February 15, 2013, 01:30:38 PM »

Time out.....

Are we talking apples and oranges?

When I am talking about 'bits', am I not talking about digital data?

Most of the responses, except Erik's, seemed to be discussing the analog data, before the conversion to digital.

Is that correct?  Or am I missing something?

John
Tim Lookingbill
« Reply #10 on: February 15, 2013, 02:45:50 PM »

Quote
Time out.....

Are we talking apples and oranges?

When I am talking about 'bits', am I not talking about digital data?

Most of the responses, except Erik's, seemed to be discussing the analog data, before the conversion to digital.

Is that correct?  Or am I missing something?

John

Bits, in the sense you're discussing, are about the precision with which the ADC separates usable data (detail) from non-usable data (noise). The source, the sensor, matters more for usable data than what the ADC can cull from the sensor's voltage readings with high-bit precision and pass on as 1's and 0's.

By the time you see it as an 8-bit video preview, that culling has already happened, and you can't really control what it delivers unless you can load your own ADC manipulation routine onto its chip, and that's never going to happen with consumer-grade digital cameras.

Just curious: can you show us how this information is going to help you make better photographs? I've never seen it demonstrated in the many discussions on this subject since the 12- and 14-bit concept became associated with digital cameras.
IliasG
« Reply #11 on: February 15, 2013, 02:49:48 PM »

Time out.....

Are we talking apples and oranges?

When I am talking about 'bits', am I not talking about digital data?

Most of the responses, except Erik's, seemed to be discussing the analog data, before the conversion to digital.

Is that correct?  Or am I missing something?

John

Hi John,

we are on the same page... don't worry. All this talk about quantization is about digital data.

BTW, what exactly do you mean by "Dynamic Range"? Can you give your definition?
Tim Lookingbill
« Reply #12 on: February 15, 2013, 03:06:36 PM »

Dynamic range as in more distinguishable detail versus what's seen as noise, mostly in the shadows, because in the highlights the sensor sites are at full saturation.

12-bit uses a fine mesh to sift noise from detail in the shadows, whereas 14-bit uses an even finer mesh during the analog-to-digital conversion. Think of it like sifting for fine gold flakes: 12-bit will still let detail come through but lets in larger clumps of noise (rocks), while 14-bit is more precise, allowing only smaller noise (rocks) in with the shadow detail.

That way, in post, with the data interpolated to 16-bit, the editing tools have an even more refined culling process for bringing out more definition in the shadows that can be seen over the noise.

That's extended dynamic range in relation to bit depth. It still requires our eyes to judge whether there really is more usable detail in the shadows that we as humans would consider more DR.
jrsforums
« Reply #13 on: February 15, 2013, 03:57:54 PM »


BTW what exactly do you mean by "Dynamic Range" ??. Can you give your definition ?.

Thanks, Ilias...

I am sure that I cannot give a definition in any proper scientific way.

If you guys can permit me, I am trying to peel back this "onion" in the simplest terms possible.  Kind of like a child's Big Animal book primer... i.e. horsey, horsey, duckie, duckie....

As Erik said, 8-bit coding only holds 256 values per channel.  I look at that as 8 stops of tonal value... 8 stops of dynamic range (in my layman's terms).

Remember, I talked about a "ceiling": that is, the container (8-bit coding) can (I'm asking) contain a maximum of 8 stops.

How am I doing so far?

If I'm OK, then 14-bit coding can contain up to 14 stops.  If I want to convert this to 8-bit coding, I have to throw away 6 stops of data range.  This may or may not be meaningful data, but it is less range.  How this is significant photographically, and what can be done about it, is a different discussion.

Quantization et al. are important and interesting to those familiar with this area.  However, on a practical basis it is like Newton's laws... very practical, even if not strictly correct.  Of course, they only break down at the extremes, such as when approaching the speed of light.

Why am I thinking of a Big Animal Picture Book... I think it can eventually have some applicability in instruction at the camera club level, where I am chair of programs and instruction... trying to get the "unwashed" to understand.

John

PS... not to muddy the water... I look at DxO's rating of the D800 as having a DR of 14.33, using their definition of DR.  I look at that and say "interesting"... how does the 10 lbs fit in the 5-lb bag?? :-)
Tim Lookingbill
« Reply #14 on: February 15, 2013, 04:08:20 PM »

Don't know how you're going to explain this concept to the "unwashed" at your camera club without showing, in an image capture, how it helps the photographer grab more of the dynamic range that is usable to a photographer.

What does 8 stops look like versus 14, with regard to usable image detail?
jrsforums
« Reply #15 on: February 15, 2013, 04:14:38 PM »

Hey....I am admittedly fishing around.

However, I know telling them the following will not help at all  :-)

Quote
12-bit uses a fine mesh to sift noise from detail in the shadows, whereas 14-bit uses an even finer mesh during the analog-to-digital conversion. Think of it like sifting for fine gold flakes: 12-bit will still let detail come through but lets in larger clumps of noise (rocks), while 14-bit is more precise, allowing only smaller noise (rocks) in with the shadow detail.

However, once you get the basics of the coding differences down, you can enter a discussion of how to fit the meaningful 10 lbs into 5 lbs.
« Last Edit: February 15, 2013, 04:17:10 PM by jrsforums »
EricV
« Reply #16 on: February 15, 2013, 04:18:36 PM »

Dynamic range does not depend simply on bit depth when the encoding (the translation of light into bits) is not linear.

Example of linear encoding:
   Light =    {1,2,4,8,16,32,64,128,256,512,1024}   scene with 11-bit dynamic range
   Output = {1,2,4,8,16,32,64,128,128,128,128}    8-bit camera captures 8-bit dynamic range

Example of non-linear encoding:
   Light =    {1,2,4,8,16,32,64,128,256,512,1024}   scene with 11-bit dynamic range
   Output = {1,10,20,30,40,50,60,70,80,90,100}    8-bit camera captures 11-bit dynamic range
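The two examples can be checked numerically; this sketch just reproduces the tables above (the log rule for the non-linear coding is my guess at a formula that generates EricV's numbers):

```python
import math

# Reproducing EricV's two tables: an 11-stop scene fed through a linear
# coding that clips at 128, and through a non-linear coding that keeps
# every stop distinct within 8 bits.

light = [2 ** i for i in range(11)]           # {1, 2, 4, ..., 1024}: 11 stops

# Linear coding clipped at 128: the top three stops collapse together.
linear_out = [min(x, 128) for x in light]

# Non-linear coding: roughly 10 * log2(light), so every stop stays distinct.
nonlinear_out = [1 if x == 1 else 10 * int(math.log2(x)) for x in light]

print(linear_out)      # 8 distinct values: 8 stops survive
print(nonlinear_out)   # 11 distinct values: all 11 stops survive
```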
FranciscoDisilvestro
« Reply #17 on: February 15, 2013, 04:36:23 PM »

Hi,

Quote
PS....not to muddy the water....I look at DXO's rting of the D800 having a DR of 14.33, using their definition of DR.  I look at that and say "interesting"....how does the 10 lbs fit in the 5 lbs bag...??  :-)

This is a very common misunderstanding. The DR of 14.33 for the Nikon D800 reported by DxO is based on their "Print" concept of resizing the image to 8"x12" at 300 dpi. If you switch to "Screen", then the value you get for the D800 is 13.23, which is less than 14.

Let's keep things simple from a theoretical point of view. (The issues raised by other posters about noise, etc. are valid, but I think you have to understand the basic theory before going on to those advanced concepts.)

The first thing is whether the digital representation (encoding) is linear or not. Linear means that for each doubling of the input signal (in this case, light) you end up with a numerical value that is double the previous one. This linear encoding is typical of most digital cameras' raw formats, and there is a relation between the bit depth and the maximum DR that can be contained. It is important to understand that this relation does not work both ways.

Example: DR = 14 f-stops => theoretically you need at least 14 bits. If you use 13 or fewer, then you lose DR; using 15 or more does nothing. This works like the number of digits you use for your bank balance. If you have a 5-figure balance, then you need at least 5 digits (plus 2 for decimals) to represent it. Using more digits will not increase your balance; using fewer... well, you don't want to do that.

Now, when the encoding is not linear, the issue is different, and it will depend on the mathematical formula used for the encoding.

It can be shown that using gamma 2.2 encoding you could contain up to 16 f-stops of DR in 8 bits of data (that is only doing the math for the values).
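The "doing the math" part can be sketched like this; how I count here (the ratio between the top code and the smallest nonzero code, decoded through the gamma curve) is my assumption, and the exact total depends on the gamma value — a gamma of 2.0 gives the ~16-stop figure, 2.2 a bit more:

```python
import math

# With gamma encoding, the decoded linear value is ~ (code / max_code)**gamma,
# so the linear ratio between code max_code and code 1 is max_code**gamma.

def gamma_coded_stops(bits, gamma):
    """Stops spanned between code 1 and the top code under gamma encoding."""
    max_code = 2 ** bits - 1
    return gamma * math.log2(max_code)

print(round(gamma_coded_stops(8, 2.0), 1))   # ~16.0 stops with gamma 2.0
print(round(gamma_coded_stops(8, 2.2), 1))   # ~17.6 stops with gamma 2.2
```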


jrsforums
« Reply #18 on: February 15, 2013, 06:45:12 PM »

Thanks, Francisco...

If I am correct, most RAW files (CR2, NEF) are linear. 

In camera, it is converted to gamma 2.2 8 bit jpeg.

In ACR/LR that changes to gamma 2.2 at some point, and definitely when converted to a 16-bit TIFF.  If 8-bit can give me 16 stops, what can a 16-bit TIFF?  Of course, this is normally converted to 8-bit for output.

Without the user doing any tone "wrestling", I guess the difference "depends"... depends on what goes on under the covers.

Is it typical, without user involvement, for gamma encoding to contain 16 stops in 8 bits... or is that just a perfect world?

I am really not looking for examples of the perfect mathematical world, but examples of what happens in the practical world... and why we bother with RAW... vs. just taking what the camera gives us :-)

John
JohnCox123
« Reply #19 on: February 15, 2013, 07:02:44 PM »

Out of curiosity what's the dynamic range of a film like Kodak Ektar 100 or Fuji Acros?