Author Topic: Dynamic Range vs bit depth  (Read 11252 times)
sandymc
« Reply #60 on: February 23, 2013, 08:29:20 AM »

I think that perhaps there is some confusion about two different situations here:

1. The number of bits in an image representation (the representation being NEF or CR2 or TIFF or whatever) does not limit DR, because you can use whatever encoding you want to achieve an arbitrary level of DR.

2. The number of bits in the ADC of a camera does (assuming a linear ADC and linear sensor) fundamentally limit DR.

Point being, the number of bits in the camera's ADC, and the number of bits in a representation of an image are not the same thing.

Sandy
Jack Hogan
« Reply #61 on: February 23, 2013, 08:34:11 AM »

The answer is pretty straightforward: no, bit depth is not a ceiling on the amount of information that an image can contain; and yes, a 14-bit raw image can contain more than 14 stops of DR if by DR we mean one of its typical engineering definitions - with linear encoding, without having to resort to non-linear gamma.
I disagree with that. It doesn't matter whether a statistical (or extrapolated-from-a-curve) definition of DR yields DR figures greater than the number of ADC bits: a practical user (photographer) will not be able to allocate and properly render information from a real-world scene of N stops of DR in a RAW file produced by a linear ADC with fewer than N bits.

I am glad we are finally getting to the heart of the matter, after a little cajoling on my part ;-)  So, if I understand correctly, you agree with me in theory.  But in practice.....

In the real world an arbitrary number of stops of DR can be stored in or produced from a linear raw file with a bit depth of one.  A practical example is a B&W image from your typical run-of-the-mill newspaper viewed at arm's length: 1=ink, 0=no ink.  The natural scene was a foggy day in Berlin with a DR of 5 stops.  It was captured and converted for newspaper use to a file with a bit depth of 1.  The viewed image has a DR of 6 stops, determined by the physical properties of ink and paper, independently of the bit depth of the file from which it was produced.  Despite its 1-bit depth, the viewed image also shows several stops of (smoothish) tonal gradations in between, as determined by the physical characteristics of the human visual system.

5 to 1 to 6.  So it appears that file bit depth, Dynamic Range and Tonal Range are not really directly related - with an appropriate noise level and viewing distance we can record many more than N stops of DR in an N-bit linear file.  That's the deterministic answer.  But I know that in fact they are tied together in the statistical dimension, so who's up on Information Science and can tie these quantities together?  We need to hear words like quanta, standard deviation, sample size etc. :)
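The 1-bit newspaper argument can be sketched numerically. The snippet below is my own illustration (not Jack's actual numbers): each "pixel" records only ink/no-ink, but uniform dither noise added before thresholding means that spatial averages of the 1-bit samples cleanly separate two deep-shadow patches one stop apart.

```python
import random

random.seed(0)

# Hypothetical 1-bit "halftone" capture: each pixel stores only 0 or 1,
# but dither noise before thresholding lets spatial averages recover
# intermediate levels.
def capture_1bit(luminance, n_pixels=100_000):
    """Average of 1-bit samples of a constant patch of given luminance (0..1)."""
    hits = sum(1 for _ in range(n_pixels)
               if luminance + random.uniform(-0.5, 0.5) > 0.5)
    return hits / n_pixels

# Two deep-shadow patches one stop apart: 1/64 and 1/32 of full scale.
# The averaged 1-bit data separates them cleanly, despite the 1-bit depth.
print(capture_1bit(1/64), capture_1bit(1/32))
```

With enough samples the averages land within a small fraction of a percent of the true luminances - the statistical tie between sample size, noise and recoverable levels that the post asks about.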

Jack
« Last Edit: February 23, 2013, 08:43:22 AM by Jack Hogan »
jrsforums
« Reply #62 on: February 23, 2013, 09:00:51 AM »

Guys......

1-bit depth...interesting. 64-bit floating point...also interesting.  To some.

Guillermo said what I, the OP, was looking for....."A practical user (photographer)..."  What can they expect....straight out of the camera?

John
Jack Hogan
« Reply #63 on: February 23, 2013, 09:14:23 AM »

"A practical user (photographer)..."  What can they expect....straight out of the camera?

If you are talking about DR, as mentioned it depends on the physical characteristics of your eyes, your output device and your viewing setup.  Most output devices (high-end photo paper, monitors) struggle to produce 9 bits of DR (sometimes expressed in the form of a linear contrast ratio, e.g. 500:1), so that's the most you would typically get by looking at the image OOC - often less.
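As a quick sanity check on that contrast-ratio arithmetic (my own, not from the post): a linear contrast ratio converts to stops as log2(ratio), so 500:1 is indeed roughly 9 stops.

```python
import math

# A linear contrast ratio converts to stops (doublings) as log2(ratio).
def contrast_to_stops(ratio):
    return math.log2(ratio)

print(contrast_to_stops(500))  # ≈ 8.97 stops, i.e. roughly "9 bits of DR"
print(contrast_to_stops(30))   # a dim living-room TV setup: ≈ 4.9 stops
```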

What do you need the additional range that your camera is able to capture for, then?  Well, perhaps you made a mistake in choosing exposure, or perhaps there are some deep shadows that you'd like to bring up in PP to squeeze into the visible DR.  Modern cameras can even do this for you automatically if you turn on the relevant in-camera feature (Nikon calls it ADL).

Jack
« Last Edit: February 23, 2013, 09:21:32 AM by Jack Hogan »
jrsforums
« Reply #64 on: February 23, 2013, 09:47:06 AM »

If you are talking about DR, as mentioned it depends on the physical characteristics of your eyes, your output device and your viewing setup.  Most output devices (high-end photo paper, monitors) struggle to produce 9 bits of DR (sometimes expressed in the form of a linear contrast ratio, e.g. 500:1), so that's the most you would typically get by looking at the image OOC - often less.

What do you need the additional range that your camera is able to capture for, then?  Well, perhaps you made a mistake in choosing exposure, or perhaps there are some deep shadows that you'd like to bring up in PP to squeeze into the visible DR.  Modern cameras can even do this for you automatically if you turn on the relevant in-camera feature (Nikon calls it ADL).

Jack

I understand, which is why I said "before any user manipulation".

When you open an in-camera produced jpeg or a raw image in a postprocessing program, say ACR/LR, each of these images has a general range of DR....whether the monitor can show it or not.  Each has a range of information that the user can manipulate to produce the output they want.

What is the difference between these two sources?

John
Jack Hogan
« Reply #65 on: February 23, 2013, 11:14:09 AM »

When you open an in-camera produced jpeg or a raw image in a postprocessing program, say ACR/LR, each of these images has a general range of DR....whether the monitor can show it or not.  Each has a range of information that the user can manipulate to produce the output they want.

The easy qualitative answer to your question is that there is a lot more information in the Raw file than in the OOC Jpeg - a lot ;)  For instance, if you underexposed and needed to increase brightness in PP, the 8-bit Jpeg would start showing visible posterization and other artifacts very quickly, while the Raw file would probably continue to look pleasing while you increased brightness a few more stops. That's the easy answer, and it depends on qualitative words like 'visible' and 'pleasing'.

But there is no easy quantitative answer; that's why I asked it myself in a different form a couple of pages back, and again a couple of posts up - it's so hard that nobody here seems to be up to the task :(  It depends on the nature of the information and the observer, on noise wrt the size of an ADU, on sample size, etc. There are too many variables involved, and it needs to be addressed by someone who is better versed in Information Science than I am.  Even if someone were able to calculate the capacity of these two specific channels, we probably would not know what to do with that bit of information in practice :)

I can answer only a portion of your question, to help you understand why there is no easy answer:

When you open an in-camera produced jpeg or a raw image in a postprocessing program, say ACR/LR, each of these images has a general range of DR....

The fact is, there is no range of DR inherent in a file of a specific bit depth, whether the data is encoded linearly or not.  With a large enough sample and appropriately sized noise (not too big, not too small - just the size of Montreal) the sky is the limit - remember the 1-bit newspaper image?

Jack
« Last Edit: February 23, 2013, 11:31:29 AM by Jack Hogan »
thierrylegros396
« Reply #66 on: February 23, 2013, 01:38:16 PM »

Think also of HDR imaging software's use of "local contrast enhancement" to obtain fake high DR on paper or screen.

Yes, DR is not linearly related to bit depth ;)

Thierry
bjanes
« Reply #67 on: February 23, 2013, 02:14:55 PM »

I disagree with that. It doesn't matter whether a statistical (or extrapolated-from-a-curve) definition of DR yields DR figures greater than the number of ADC bits: a practical user (photographer) will not be able to allocate and properly render information from a real-world scene of N stops of DR in a RAW file produced by a linear ADC with fewer than N bits.


I am glad we are finally getting to the heart of the matter, after a little cajoling on my part ;-)  So, if I understand correctly, you agree with me in theory.  But in practice.....

In the real world an arbitrary number of stops of DR can be stored in or produced from a linear raw file with a bit depth of one.  A practical example is a B&W image from your typical run-of-the-mill newspaper viewed at arm's length: 1=ink, 0=no ink.  The natural scene was a foggy day in Berlin with a DR of 5 stops.  It was captured and converted for newspaper use to a file with a bit depth of 1.  The viewed image has a DR of 6 stops, determined by the physical properties of ink and paper, independently of the bit depth of the file from which it was produced.  Despite its 1-bit depth, the viewed image also shows several stops of (smoothish) tonal gradations in between, as determined by the physical characteristics of the human visual system.

5 to 1 to 6.  So it appears that file bit depth, Dynamic Range and Tonal Range are not really directly related - with an appropriate noise level and viewing distance we can record many more than N stops of DR in an N-bit linear file.  That's the deterministic answer.  But I know that in fact they are tied together in the statistical dimension, so who's up on Information Science and can tie these quantities together?  We need to hear words like quanta, standard deviation, sample size etc. :)

Jack

Jack,

I'm not up to the task you suggested, but I post these comments for discussion:

  • Posterization occurs when the output has steps that can be distinguished by human perception.
  • The perceptual system differentiates between relative levels, not absolute levels, as described by the Weber-Fechner law, which puts the just-noticeable difference at about 1% (see Norman Koren).
  • This translates to about 70 distinguishable levels per f/stop, since 1.01^70 ≈ 2.
  • Norman states that fewer levels are needed in the dark tones, and he suggests that the darkest f/stop should contain 8 levels. This is due in part to the masking effects of flare light.
  • Gamma encoding uses fewer bits in the higher f/stops and redistributes them to the darker f/stops. See his chart.
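The redistribution point in the list above can be illustrated with a short sketch (the function and numbers are my own illustration, not Koren's chart): count how many code values of an 8-bit encoding land in each of the five f/stops below white, for linear versus gamma-2.2 coding.

```python
# Count how many 8-bit code values fall within each f/stop below white,
# for linear coding (gamma = 1.0) vs gamma-2.2 coding.
def levels_per_stop(bits, gamma):
    n_codes = 2 ** bits
    counts = []
    for stop in range(1, 6):                       # 1st..5th stop below white
        hi, lo = 2.0 ** -(stop - 1), 2.0 ** -stop  # linear-light bounds of the stop
        c_hi = (n_codes - 1) * hi ** (1 / gamma)   # code value at top of the stop
        c_lo = (n_codes - 1) * lo ** (1 / gamma)   # code value at bottom of the stop
        counts.append(round(c_hi - c_lo))
    return counts

print(levels_per_stop(8, 1.0))  # linear: [128, 64, 32, 16, 8] - half the codes in the top stop
print(levels_per_stop(8, 2.2))  # gamma 2.2: [69, 50, 37, 27, 20] - far more even
```

Linear coding spends half of all codes on the brightest stop; the gamma curve moves many of them down into the shadows, which is exactly the redistribution described above.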

Emil Martinec states that raw data are never posterized (presumably because the noise is greater than the quantization step). Posterization occurs during processing when too few bits are used for the file after adjustments are applied. Greg Ward has an interesting post on HDR formats. He uses 1% as the maximal difference in levels, consistent with the Weber-Fechner law, but more stringent than Norman's criterion.

The table below is from his post and describes the DR of various formats in orders of magnitude (log base 10). To convert to f/stops, multiply by log2(10) ≈ 3.32. The math is done for you in the table below. scRGB comes in two versions. One uses 12 bits per channel (36 bits total) and a gamma curve with a linear segment for low levels, and the other uses a linear ramp with 16 bits per channel (48 bits total). Raw files are linear, so the 48-bit linear scRGB figures would apply to 16-bit raw files.

Bill
« Last Edit: February 24, 2013, 06:15:18 AM by bjanes »
hjulenissen
« Reply #68 on: February 24, 2013, 02:58:06 AM »

If you have a hypothetical 1-bit camera with sufficiently dense sensels so as to capture practically all of the information presented by the lens.... what is the dynamic range of your raw file? 6 dB? Bearing in mind that this camera would capture more information about the scene than even the highest-tech current Sony 14-bit sensors.

I am not so sure that it makes much sense to distinguish so abruptly between spatial precision and level precision.

-h
Jack Hogan
« Reply #69 on: February 24, 2013, 05:19:10 AM »

Greg Ward has an interesting post on HDR formats. He uses 1% as the maximal difference in levels, consistent with the Weber-Fechner law, but more stringent than Norman's criterion.

The table below is from his post and describes the DR of various formats in orders of magnitude (log base 10). To convert to f/stops, multiply by log2(10) ≈ 3.32. The math is done for you in the table below. scRGB comes in two versions. One uses 12 bits per channel (36 bits total) and a gamma curve with a linear segment for low levels, and the other uses a linear ramp with 16 bits per channel (48 bits total). Raw files are linear, so the 48-bit linear scRGB figures would apply to 16-bit raw files.

Bill,

Thank you very much for the Greg Ward link and your table, most interesting.  I get his definition of Scene Referred, Human Observed encoding and LogLuv32, although I wonder what negative luminance is ;)  If I understand correctly, with this efficient encoding we could store in a TIFF with 32 bits per pixel (or the equivalent of about 11 bits/channel in RGB) whatever nature could throw at us (126 stops of luminance DR!) so that it would appear to us virtually indistinguishable from the original.  That's a very cool estimate of the information capacity of the human visual system.

Although that is perhaps a bit far from our target, which is limited by the linear RGB world of our Raw files.  How many Scene Referred, Human Observed bits of information can be stored in there?  Are you suggesting that scRGB16 is a good proxy for it, and therefore we could say 11.6?  Seems a bit low at first look.

Jack
« Last Edit: February 24, 2013, 05:30:19 AM by Jack Hogan »
Jack Hogan
« Reply #70 on: February 24, 2013, 05:20:15 AM »

I am not so sure that it makes much sense to distinguish so abruptly between spatial precision and level precision.

Yes! Perhaps we could attempt to tie the two together?
hjulenissen
« Reply #71 on: February 24, 2013, 05:54:21 AM »

  • The perceptual system differentiates between relative levels, not absolute levels, as described by the Weber-Fechner law, which puts the just-noticeable difference at about 1% (see Norman Koren).
This seems to be in agreement with the good old gamma FAQ of Charles Poynton:
http://www.poynton.com/PDFs/GammaFAQ.pdf
Quote
Through an amazing coincidence, vision's response to intensity is effectively the inverse of a CRT's nonlinearity.
...
Projected cinema film, or a photographic reflection print, has a contrast ratio of about 80:1. Television assumes a contrast ratio, in your living room, of about 30:1. Typical office viewing conditions restrict the contrast ratio of a CRT display to about 5:1.
...
At a particular level of adaptation, human vision responds to about a hundred-to-one contrast ratio of intensity from white to black. Call these intensities 100 and 1. Within this range, vision can detect that two intensities are different if the ratio between them exceeds about 1.01, corresponding to a contrast sensitivity of one percent.
To shade smoothly over this range, so as to produce no perceptible steps, at the black end of the scale it is necessary to have coding that represents different intensity levels 1.00, 1.01, 1.02, and so on. If linear light coding is used, the "delta" of 0.01 must be maintained all the way up the scale to white. This requires about 9,900 codes, or about fourteen bits per component.
If you use nonlinear coding, then the 1.01 “delta” required at the black end of the scale applies as a ratio, not an absolute increment, and progresses like compound interest up to white. This results in about 460 codes, or about nine bits per component. Eight bits, nonlinearly coded according to Rec. 709, is sufficient for broadcast-quality digital television at a contrast ratio of about 50:1.
If poor viewing conditions or poor display quality restrict the contrast ratio of the display, then fewer bits can be employed.
If a linear light system is quantized to a small number of bits, with black at code zero, then the ability of human vision to discern a 1.01 ratio between adjacent intensity levels takes effect below code 100. If a linear light system has only eight bits, then the top end of the scale is only 255, and contouring in dark areas will be perceptible even in very poor viewing conditions.
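Poynton's code counts are easy to verify (a quick check of my own, not part of the FAQ): a 100:1 range with 1% discrimination needs about 9,900 fixed-delta codes but only about 460 ratio-stepped codes.

```python
import math

# Quick check of the FAQ's arithmetic for a 100:1 range with 1% discrimination.
linear_codes = (100 - 1) / 0.01               # fixed delta of 0.01 from 1 up to 100
ratio_codes = math.log(100) / math.log(1.01)  # multiplicative 1.01 steps up to 100

print(round(linear_codes), math.ceil(math.log2(linear_codes)))  # 9900 codes -> 14 bits
print(round(ratio_codes), math.ceil(math.log2(ratio_codes)))    # 463 codes -> 9 bits
```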
Jack Hogan
« Reply #72 on: February 24, 2013, 10:29:10 AM »

Greg Ward has an interesting post on HDR formats.

Interesting read.  For those still in doubt, he says

"A 48-bit RGB pixel using a standard 2.2 gamma as found in conventional TIFF images holds at least 5.4 orders of magnitude" of DR.

He gets that by dividing the maximum 16-bit/channel integer value (65535 DN) by the minimum value, which I assume he takes to be at most the RMS quantization error of 0.29 DN (1/√12).  So a 16-bit/channel integer file holds at least log2(65535/0.29) = 17.8 stops of DR (5.4 orders of magnitude), independently of whether the data is gamma encoded or linear, since the maxima and minima are the same.  A 14-bit/channel Raw file would by the same token hold at least log2(16383/0.29) = 15.8 stops.

Why at least?  Because depending on sampling and noise dithering, the quantization error can become much smaller than that - all the way to immaterial, so that the dominant determinant of minimum detectable signal would be factors outside of quantization and data type.
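The figures above can be reproduced in a couple of lines (my own check, using the RMS quantization error of a uniform quantizer, 1/√12 ≈ 0.29 DN):

```python
import math

# Full scale over RMS quantization error (1/sqrt(12) DN) for linear integer data.
q_rms = 1 / math.sqrt(12)  # ≈ 0.29 DN
for bits in (16, 14):
    full_scale = 2 ** bits - 1
    stops = math.log2(full_scale / q_rms)
    orders = math.log10(full_scale / q_rms)
    print(bits, round(stops, 1), round(orders, 1))  # 16 -> 17.8 stops (5.4 orders), 14 -> 15.8
```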

Jack

« Last Edit: February 24, 2013, 10:34:10 AM by Jack Hogan »
bjanes
« Reply #73 on: February 25, 2013, 07:57:27 AM »

Although that is perhaps a bit far from our target, which is limited by the linear RGB world of our Raw files.  How many Scene Referred, Human Observed bits of information can be stored in there?  Are you suggesting that scRGB16 is a good proxy for it, and therefore we could say 11.6?  Seems a bit low at first look.
Jack

Jack,

That figure is low, since Greg's requirement for 1% steps in the shadows is very stringent. We can usually get by with fewer, and Norman Koren suggests 8 steps. This translates to 9% steps (1.09^8 ≈ 2). The take-home point is that a bit depth of 16 with a linear ramp is not true high dynamic range, and a bit depth of 14 would fall even further short of HDR.

Bill
tarlijade
« Reply #74 on: April 24, 2013, 12:40:08 AM »

I have a question; if possible, can anyone answer it for me? I've been researching for hours and am still struggling to understand and word this question:
"How does bit depth influence image reproduction? And what implications does it have in determining exposure?"
I am a beginner and am failing to answer this.
Help would be rather appreciated, thanks x
« Last Edit: April 24, 2013, 12:44:24 AM by tarlijade »
ErikKaffehr
« Reply #75 on: April 24, 2013, 01:21:24 AM »

Hi,

A bit is the smallest value a computer uses; it is either one or zero. Bit depth is how many bits are used to represent a channel (Red, Green or Blue).

Each bit of bit depth corresponds to an EV. Were it not for noise, 12 bits would cover 12 EV and 16 bits would cover 16 EV; but there is always noise, and today's best devices have about 13 EV of dynamic range (maximum signal divided by noise), so there is some merit to 14 bits but none in having more than 14 bits.

The number of bits would not affect exposure at all. If a sensor has a wider dynamic range you can extract more shadow detail.

Best regards
Erik


I have a question; if possible, can anyone answer it for me? I've been researching for hours and am still struggling to understand and word this question:
"How does bit depth influence image reproduction? And what implications does it have in determining exposure?"
I am a beginner and am failing to answer this.
Help would be rather appreciated, thanks x

BartvanderWolf
« Reply #76 on: April 24, 2013, 01:33:56 AM »

"How does bit depth influence image reproduction? and What implications does it have in determining exposure?"

Hi,

When more (smaller) increments/steps in brightness can be encoded, the brightness gradients will be smoother. It also allows you to boost local contrast more accurately, without creating huge jumps and the related noise amplification.

As an analogy, you could think of it as riding down a stone staircase on a bike. The smaller the step-height increments, the smoother the ride.

Also consider that the bit depth encodes the Raw data, which then has to undergo at least a gamma conversion, which amplifies the shadow differences and reduces the highlight differences.
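That gamma point can be made concrete with a sketch (the bit depths and gamma value below are my own illustrative choices, not from the post): one Raw step near black spans roughly three 8-bit output codes after gamma, while one step near white spans far less than one code.

```python
# Map a 14-bit linear Raw value to a (fractional) 8-bit gamma-2.2 output
# code, and compare one input step near black with one near white.
def out_code(dn, bits=14, gamma=2.2):
    return 255 * (dn / (2 ** bits - 1)) ** (1 / gamma)

print(out_code(1) - out_code(0))          # bottom step: ~3 output codes
print(out_code(16383) - out_code(16382))  # top step: ~0.007 output codes
```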

Cheers,
Bart
hjulenissen
« Reply #77 on: April 24, 2013, 06:52:51 AM »

As an analogy, you could think of it as riding down a stone staircase on a bike. The smaller the step-height increments, the smoother the ride.
Except if the staircase is covered with a layer of sand. If that is the case, you may not notice the steps.

-h
hjulenissen
« Reply #78 on: April 24, 2013, 07:06:13 AM »

Why at least?  Because depending on sampling and noise dithering, the quantization error can become much smaller than that - all the way to immaterial, so that the dominant determinant of minimum detectable signal would be factors outside of quantization and data type.
Ignoring band-limiting, filtering and such:
A perfect square wave has only two levels. If those two levels happen to fall on quantizer levels (1 bit would do), the quantization noise would be exactly zero. SNR is infinite.

Standard engineering approaches to discrete sampling assume that the signal and the quantization error are uncorrelated, and that one or both are uniformly distributed, so the quantization "noise" can be calculated independently of the signal. This leads to the SNR ≈ 6.02 dB × (number of bits) formula.
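The ~6 dB-per-bit rule is easy to reproduce empirically (a sketch under the stated assumptions; for a full-scale uniform random signal the theoretical value works out to 6.02 × (bits + 1) dB, and each extra bit adds 6.02 dB):

```python
import math
import random

random.seed(1)

# Empirically measure the SNR of uniform quantization of a full-scale
# uniform random signal.
def quant_snr_db(bits, n=100_000):
    step = 1.0 / (2 ** bits)
    sig_pow = err_pow = 0.0
    for _ in range(n):
        x = random.random()                      # signal uniform in [0, 1)
        q = (math.floor(x / step) + 0.5) * step  # quantize, mid-step reconstruction
        sig_pow += x * x
        err_pow += (x - q) ** 2
    return 10 * math.log10(sig_pow / err_pow)

print(quant_snr_db(8))   # ≈ 54 dB
print(quant_snr_db(9))   # ≈ 60 dB - one more bit, ~6 dB more SNR
```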

Dither is the willful introduction of more noise, usually with the intent of encoding more (low-frequency) levels. Noise shaping can move this noise into frequencies where it is less annoying.

-h
Tim Lookingbill
« Reply #79 on: April 25, 2013, 12:39:47 PM »

I found a demonstration online that might help you understand why this 12-bit vs 14-bit concept, with regard to a camera's analog-to-digital conversion process for Raw capture, is pretty much meaningless and fraught with other variables that make it impossible to prove its efficacy.

The link below is a comparison study of dynamic range (the more DR, the more bits you need to render it) between two cameras with 14-bit A/D "PRECISION" processing, the Canon 7D and the Pentax K5. All you have to look at is the Raw shots of a typical backlit, shaded outdoor scene underexposed by -5EV (compared to a normal-"looking" exposure). Scroll down below these dark-looking shots to the normalized edited versions (of the -5EV shots only), which you'll have to download and examine at 100% view, and note the level of noise and the amount of detail pulled out of the darkest areas in each of the two images.

http://www.pentaxforums.com/reviews/canon-7d-vs-pentax-k-5-review/image-quality.html#drtest

Here is a link to a discussion on the importance of 14 bit that you may find of further interest...

http://www.dpreview.com/forums/post/24452969

If you are referring to the concept of bit depth with regard to choosing 16 bit in ACR/LR or Photoshop's "Mode" menu, that is a totally different concept. It has more to do with adding/interpolating bit levels that aren't in the original 14-bit Raw file, as a way to perform smoother-looking edits to broad gradations such as blue skies and reduce posterization when editing within an 8-bit video preview on the display.

16 bit in your editor has more to do with previews, whereas 12/14 bit has more to do with actual data processing at the source, in the camera's electronics.
« Last Edit: April 25, 2013, 12:44:39 PM by tlooknbill »