Author Topic: DSLR dynamic range  (Read 14529 times)
dkosiur (Newbie, Posts: 12)
« on: February 17, 2008, 08:48:00 PM »
How much difference is 2 bits of dynamic range (12-bit vs. 14-bit) going to make in an image? And where is this difference going to appear? From LuLa videos and other articles, I'd guess that better shadow detail will be one result?

And if 14-bit dynamic range (such as offered by the D300) provides smoother tonal gradations (I think I'm saying that correctly), what difference will that make in a print?

Lots of questions. Just wondering if 14-bit dynamic range in the D300 is a worthwhile advantage.

Thanks,
Dave
Panopeeper (Sr. Member, Posts: 1805)
« Reply #1 on: February 17, 2008, 10:52:42 PM »

The designation "14 bits" refers to bit depth, i.e. finer tonal gradations than 12-bit depth. Furthermore, it carries the potential for a larger dynamic range; however, on its own it does not indicate a higher dynamic range than 12-bit depth.

My estimate is that the DR of the D300 at ISO 100 is about 8.5 EV.
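A rough numeric illustration of Gabor's distinction (my sketch, not from the post): in linear raw encoding, bit depth fixes how many code values exist, and each f-stop below clipping gets half the values of the stop above it, so more bits raise the *potential* DR without guaranteeing more actual, noise-limited DR.

```python
def levels_in_stop(bit_depth, stops_below_clip):
    """Code values available in the f-stop `stops_below_clip` stops
    below sensor saturation (1 = the brightest stop), assuming
    linear integer encoding."""
    return (2 ** bit_depth) // (2 ** stops_below_clip)

print(levels_in_stop(12, 1), levels_in_stop(14, 1))    # 2048 8192
print(levels_in_stop(12, 10), levels_in_stop(14, 10))  # 4 16
```

Whether those extra shadow values carry information, rather than just finer-grained noise, is exactly what the rest of this thread argues about.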
« Last Edit: February 17, 2008, 10:53:34 PM by Panopeeper »

Gabor
dkosiur (Newbie, Posts: 12)
« Reply #2 on: February 18, 2008, 12:13:10 AM »

Quote
The designation "14 bits" refers to bit depth, i.e. finer tonal gradations than 12-bit depth. Furthermore, it carries the potential for a larger dynamic range; however, on its own it does not indicate a higher dynamic range than 12-bit depth.

My estimate is that the DR of the D300 at ISO 100 is about 8.5 EV.

Thanks for the clarification...

One question remains -- what difference is the 14-bit depth likely to have on prints?
John Sheehy (Sr. Member, Posts: 838)
« Reply #3 on: February 18, 2008, 01:19:11 AM »

Quote
How much difference is 2 bits of dynamic range (12-bit vs. 14-bit) going to make in an image? And where is this difference going to appear? From LuLa videos and other articles, I'd guess that better shadow detail will be one result?

And if 14-bit dynamic range (such as offered by the D300) provides smoother tonal gradations (I think I'm saying that correctly), what difference will that make in a print?

Lots of questions. Just wondering if 14-bit dynamic range in the D300 is a worthwhile advantage.

The issues are very complicated.  There is a unique issue in the D300, as well.

No consumer digital camera in current production needs more than 12 bits in the RAW data for the purposes of a single exposure, except the Pentax K10D, which needs (but doesn't have) 13 bits, and only at ISO 100. There is too much noise for the potential extra tonal gradations to be perceived. However, it seems that some RAW converters use extra precision in their calculation engines when working with higher-bit data, and may give better results with 14-bit data; they could also give better results with 12-bit data that has two extra zero bits tacked on, forcing the converter into higher precision.
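The "two extra zero bits" idea above can be sketched in a few lines (hypothetical, just to show the arithmetic): left-shifting 12-bit raw values by two bits puts them on a 14-bit scale without adding any real information, which might nudge a converter into its higher-precision path.

```python
raw12 = [0, 1, 2047, 4095]          # sample 12-bit code values
raw14 = [v << 2 for v in raw12]     # same content on a 14-bit scale
print(raw14)                        # [0, 4, 8188, 16380]
```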

The D300 is different, though: its 12-bit output apparently isn't just the 14-bit output with two bits dropped. The 14-bit mode is a slower, more deliberate readout of the sensor, which would be better than the 12-bit output even if its last two bits were dropped.
bjanes (Sr. Member, Posts: 2756)
« Reply #4 on: February 18, 2008, 08:21:24 AM »

Quote
The designation "14 bits" refers to bit depth, i.e. finer tonal gradations than 12-bit depth. Furthermore, it carries the potential for a larger dynamic range; however, on its own it does not indicate a higher dynamic range than 12-bit depth.

My estimate is that the DR of the D300 at ISO 100 is about 8.5 EV.


In addition, the two extra bits should theoretically result in a 12 dB (2 f/stop) improvement in the signal-to-noise ratio of the ADC and reduce read noise, improving the DR by two stops. Quantization error in the least significant bit is also reduced. See this link for details.

ADC SNR: http://www.edn.com/article/CA419561.html

However, the 14-bit ADCs used in current cameras do not exhibit ideal performance. It is difficult to design a 14-bit ADC that operates at the bit rates needed in high-frame-rate digital cameras such as the 1D MIII and the Nikon D3.

For a good analysis of dynamic range and noise, see this article by Roger Clark. According to Roger, the actual improvement for the Canon 1D is more like 1/2 stop.
lovell (Full Member, Posts: 131)
« Reply #5 on: March 14, 2008, 08:34:23 PM »

A few things... bit depth (8, 12, 14) does not define dynamic range, nor does it define potential dynamic range. All bit depth does is determine the number of slices in the DR pie.

The sensor's output to the A/D converter defines the DR.

For example, if camera (A) provides 12 bits of data per channel, and camera (B) provides 16 bits per channel, does that necessarily mean that camera (B) provides wider DR? No! It could be that the camera providing less bit depth provides the wider DR!

All bit depth does is define the gradation of the analog data converted to digital. So, for example, if camera (A) provides 7 stops of DR and 12 bits/channel, and camera (B) provides 7 stops of DR and 14 bits/channel, then they both provide the exact same DR. What is different between them? The number of gradations of tone: one camera provides more tonal values than the other.

Furthermore, it would be wrong to suggest that one will not be able to see differences between an image that is 12 bits/channel and one that is 14. That may well be true for small prints (assuming the printer can support 14 bits/channel), but as one enlarges a print, one will increasingly see the differences, for example in sky tones, in gradations from one shade of blue to another, in human skin, and in any element of a composition that shows subtle gradations. In other words, more bits is a good thing, a big deal even, even if our eyes cannot tell the difference between tone X and tone X + 1. More bits means smoother transitions and gradations in color, and that we often can see.

The other wonderful thing about more bit depth is reduced rounding error. For example, if you tweak an 8-bit image in Photoshop, you're more likely to get combing of tones, especially toward the left of the histogram, in the shadows. This is because the math used to apply levels, curves, and the like produces rounding errors, and as you go up in bit depth you go down in rounding errors; in other words, you get fewer digital artifacts such as combing, blocked-up shadows, and so on.

After composition, everything else is secondary--Alfred Steiglitz, NYC, 1927.

I'm not afraid of death.  I just don't want to be there when it happens--Woody Allen, Annie Hall, '70s
bjanes (Sr. Member, Posts: 2756)
« Reply #6 on: March 15, 2008, 06:50:04 AM »

Quote
A few things... bit depth (8, 12, 14) does not define dynamic range, nor does it define potential dynamic range. All bit depth does is determine the number of slices in the DR pie.

The sensor's output to the A/D converter defines the DR.

For example, if camera (A) provides 12 bits of data per channel, and camera (B) provides 16 bits per channel, does that necessarily mean that camera (B) provides wider DR? No! It could be that the camera providing less bit depth provides the wider DR!

All bit depth does is define the gradation of the analog data converted to digital. So, for example, if camera (A) provides 7 stops of DR and 12 bits/channel, and camera (B) provides 7 stops of DR and 14 bits/channel, then they both provide the exact same DR. What is different between them? The number of gradations of tone: one camera provides more tonal values than the other.

What you say would be true if one were using floating point encoding, but with linear integer encoding, bit depth and dynamic range are tightly bound. A common analogy is a staircase where the DR is the height of the staircase and the bit depth is the number of steps. With linear integer encoding, the number of steps is determined by the bit depth, but the height of each step is predetermined: the steps are large in the shadows and small in the highlights. With linear integer encoding, a bit depth of N bits can encode N f/stops of DR, which would give only one level in the deepest shadows. For more explanation, please refer to the table and accompanying text on Norman Koren's web site: http://www.normankoren.com/digital_tonality.html

From the table, it is evident that a bit depth of 10 can encode 10 f/stops with only one level in the deepest shadows. If you require 8 levels in the darkest f/stop so as to prevent visible banding, the 10 bits would encode a DR of only seven f/stops.

Bill
Tim Gray (Sr. Member, Posts: 2002)
« Reply #7 on: March 15, 2008, 07:14:12 AM »

Quote
What you say would be true if one were using floating point encoding, but with linear integer encoding, bit depth and dynamic range are tightly bound. A common analogy is with a staircase where the DR is the height of the staircase and bit depth is the number of steps.  With linear integer encoding, the number of steps is determined by the bit depth, but the height of the steps is predetermined. The steps are large in the shadows and small in the highlights. With linear integer encoding, a bit depth of N bits can encode N f/stops of DR. This would give only one level in the deepest shadows. For more explanation, please refer to the table and accompanying text on Norman Koren's web site.

From the table, it is evident that a bit depth of 10 can encode 10 f/stops with only one level in the deepest shadows. If you require 8 levels in the darkest f/stop so as to prevent visible banding, the 10 bits would encode a DR of only seven f/stops.

Bill

I think this is a very important point. When someone says increasing bit depth doesn't increase dynamic range, they risk missing a key point: while the statement is technically true, the ability to represent the full dynamic range depends on the bit depth (within the limits noted above). Take an A/D converter with, say, 8 stops of DR. Map that into a 2-bit space and imagine what the DR would appear to be: lots of blocked shadows and clipped highlights, I'd imagine. Or, back to the ladder analogy: if you only have 2 rungs, do you really care how tall the ladder is?

I have certainly noticed that it's easier to dig detail out of the shadows with 14 bits vs 12 bits.  That may not mean (technically) more DR, but it does mean I'm getting more use out of the DR that's there.  Having said that, given current DR I suspect moving to 16 bits probably won't result in any kind of observable improvement.
John Sheehy (Sr. Member, Posts: 838)
« Reply #8 on: March 15, 2008, 10:37:43 AM »

Quote
I think this is a very important point. When someone says increasing bit depth doesn't increase dynamic range, they risk missing a key point: while the statement is technically true, the ability to represent the full dynamic range depends on the bit depth (within the limits noted above).

I doubt that anyone who knows enough to say that 14 bits doesn't provide any more DR (than 12 bits does in current cameras) fails to realize that more bits *could* allow more DR. They're just pointing out that they currently do not. There is too much read noise in current cameras to provide the kind of DR that would be photonically possible without read noise. The potential of a camera without read noise is staggering. I've been working for the last few days on simulations of pure shot noise, and the usability of shadows goes way beyond my expectations. I'm talking here about 1/2.5" sensors. The idea that a 1:1 SNR is the absolute bottom of the barrel of usable signals is only a read-noise-related assessment. Shot noise is a completely different creature, with completely different characteristics, when you get down around the 1:1 noise floor and lower. I will post my findings in all the forums I participate in when I perfect the code and find the best subjects to use. Patterned subjects fare best way down in the shadows. I can see alternating black and white lines at the Nyquist frequency (1 line thick each, something that should never happen optically) in simulations of a 10MP 1/1.8" sensor at ISO 1 million.

Nothing that we have in the real world currently shows images with just shot noise, so most assessments of noise assume that shot noise is just another source of the same thing that read noise is.  I have, for a couple of years now, felt that this is wrong, but never got around to actually doing thorough simulations (I played with the ideas a few months ago just to get a glimpse).
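A toy version of the kind of simulation described above (my sketch, not John's actual code): pure shot noise is Poisson-distributed, with no read-noise floor. Even where the mean signal is below one photon per pixel, so the per-pixel SNR is below 1:1, the average over an area still tracks the underlying tone, which is why patterned detail can survive that deep in the shadows.

```python
import math
import random

random.seed(42)

def poisson(lam):
    # Knuth's method for drawing a Poisson sample; fine for small lambda
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def patch_mean(photons_per_pixel, n_pixels=100_000):
    """Mean recorded count over a uniform patch lit at the given level."""
    return sum(poisson(photons_per_pixel) for _ in range(n_pixels)) / n_pixels

# Two patches both well below 1 photon/pixel remain distinguishable
dark, darker = patch_mean(0.50), patch_mean(0.25)
print(dark, darker)
```

The per-pixel values are almost all 0 or 1, yet the area averages separate cleanly, which is the shot-noise behaviour the post contrasts with read noise.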

Quote
Take an A/D converter with, say, 8 stops of DR. Map that into a 2-bit space and imagine what the DR would appear to be: lots of blocked shadows and clipped highlights, I'd imagine.

That depends on how much noise is in the underlying analog RAW capture.

Quote
Or back to the ladder analogy - if you only have 2 rungs, do you really care how tall the ladder is?

I have certainly noticed that it's easier to dig detail out of the shadows with 14 bits vs 12 bits.

Can you really know that? Unless you can take a copy of a 14-bit RAW file, turn the two least significant bits into '10' in every pixel, and then run both through the same converter with the same settings, you cannot really tell. Most likely you are comparing two different cameras, or two completely different readout modes with burst-speed tradeoffs (a la the D300), when you think you are comparing "14 bits" to "12 bits".
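The experiment John proposes can be sketched as a bit operation (a hypothetical helper, not an existing tool): keep a 14-bit value's upper 12 bits but force the two least significant bits to the fixed pattern '10', so the file stays nominally 14-bit while carrying only 12 bits of information.

```python
def force_lsbs_to_10(v14):
    """Clear the two least significant bits of a 14-bit value,
    then set them to the fixed pattern '10'."""
    return (v14 & ~0b11) | 0b10

print(force_lsbs_to_10(0b11111111111111))  # last two bits become '10'
```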
John Sheehy (Sr. Member, Posts: 838)
« Reply #9 on: March 15, 2008, 10:39:51 AM »

Quote
From the table, it is evident that a bit depth of 10 can encode 10 f/stops with only one level in the deepest shadows. If you require 8 levels in the darkest f/stop so as to prevent visible banding, the 10 bits would encode a DR of only seven f/stops.

This concerns computer-generated graphics without dithering.  Its application to digital photography is very limited, as noise changes everything.
gdanmitchell (Jr. Member, Posts: 68)
« Reply #10 on: March 15, 2008, 10:44:58 AM »

Extra bits could be used to accomplish two things: describe a larger range of values or more precisely describe values within the existing range.

I understand that 14-bit DSLR technology does the latter rather than the former. In other words, rather than increasing the dynamic range that can be described in the file, it describes the levels more precisely.

This could, in certain circumstances, somewhat reduce banding - but it seems to me that the circumstances would be pretty marginal; perhaps when you radically alter things like levels or curves in post processing.

Dan

G Dan Mitchell
SF Bay Area, California, USA
bjanes (Sr. Member, Posts: 2756)
« Reply #11 on: March 15, 2008, 10:48:04 AM »

Quote
This concerns computer-generated graphics without dithering.  Its application to digital photography is very limited, as noise changes everything.

Dynamic range can be limited by noise or by lack of levels. With most current digital cameras, noise is the limiting factor, but as the cameras improve it will be necessary to use more than 12 bits. Dithering can help to mask banding in the shadows, but it does not add any levels.

Bill
bjanes (Sr. Member, Posts: 2756)
« Reply #12 on: March 15, 2008, 10:54:34 AM »

Quote
Extra bits could be used to accomplish two things: describe a larger range of values or more precisely describe values within the existing range.

I understand that 14-bit DSLR technology does the latter rather than the former. In other words, rather than increasing the dynamic range that can be described in the file, it describes the levels more precisely.

This could, in certain circumstances, somewhat reduce banding - but it seems to me that the circumstances would be pretty marginal; perhaps when you radically alter things like levels or curves in post processing.

Dan

Current DSLRs use linear integer encoding and can accomplish only the former. You do not understand the theory of encoding; I suggest you look at this article: http://www.anyhere.com/gward/hdrenc/hdr_encodings.html
John Sheehy (Sr. Member, Posts: 838)
« Reply #13 on: March 15, 2008, 11:12:49 AM »

Quote
This could, in certain circumstances, somewhat reduce banding - but it seems to me that the circumstances would be pretty marginal; perhaps when you radically alter things like levels or curves in post processing.


It's not going to happen because of the bit depth of the RAW data; any posterization you see in a conversion is the fault of the conversion.  RAW data is nowhere close to being posterized in current cameras, with only two exceptions I can think of; the Pentax K10D should really have a 13th bit at ISO 100, to avoid slight posterization of the very deepest shadows (shadows that are normally rendered black or close to it), and the Sony A700 has smoothed out shadows at high ISO in the RAW data.
John Sheehy (Sr. Member, Posts: 838)
« Reply #14 on: March 15, 2008, 11:20:53 AM »

Quote
Dynamic range can be limited by noise or by lack of levels. With most current digital cameras, noise is the limiting factor, but as the cameras improve it will be necessary to use more than 12 bits. Dithering can help to mask banding in the shadows, but it does not add any levels.

Dithering *does* increase levels; average levels.  If you had a 4-bit RAW camera, and shot a step wedge with 64 levels, you would not have 64 different level patches unless there was sufficient noise in the system.  With no noise at all, and perfect patches and perfectly even lighting, some patches would record exactly the same.  An optimal amount of dithering noise before digitization or further quantization trades a small amount of pixel accuracy for a huge increase in area accuracy.
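The step-wedge thought experiment above can be sketched directly (illustrative numbers only, not from the post): quantizing 64 input levels to 4 bits collapses them to 16 distinct codes, but adding noise before quantization restores distinct *average* levels over an area, trading per-pixel accuracy for area accuracy as John describes.

```python
import random

random.seed(0)

def quantize(v, bits=4, full_scale=64.0):
    """Clamp-and-truncate linear quantizer: 4 input units per code."""
    step = full_scale / (2 ** bits)
    return max(0, min(int(v / step), 2 ** bits - 1))

wedge = range(64)                          # 64 distinct input levels
distinct = {quantize(v) for v in wedge}    # only 16 survive without noise

def dithered_mean(v, n=20_000, noise_sigma=4.0):
    """Average quantized value over a patch of n pixels with Gaussian dither."""
    return sum(quantize(v + random.gauss(0, noise_sigma)) for _ in range(n)) / n

# Inputs 9 and 11 quantize to the same code without noise,
# but their dithered area averages differ.
print(quantize(9), quantize(11), dithered_mean(9), dithered_mean(11))
```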
dkosiur (Newbie, Posts: 12)
« Reply #15 on: March 15, 2008, 04:36:59 PM »

Lots of good stuff here.

OK, here's a phrasing of what should have been my original question in this thread -- "Is there a noticeable difference between 12-bit and 14-bit Raw files, such as those produced by the D300?"

I'd be particularly interested if a raw processor like RPP, which uses higher-precision math, shows differences between the 2 types of Raw files from a D300.

Dave
John Sheehy (Sr. Member, Posts: 838)
« Reply #16 on: March 15, 2008, 07:42:57 PM »

Quote
Lots of good stuff here.

OK, here's a phrasing of what should have been my original question in this thread -- "Is there a noticeable difference between 12-bit and 14-bit Raw files, such as those produced by the D300?"

I'd be particularly interested if a raw processor like RPP, which uses higher-precision math, shows differences between the 2 types of Raw files from a D300.

The difference between 14-bit and 12-bit on the D300 seems to be more than just the number of bits (12 clearly isn't 14 after quantization); the 12-bit is a faster, slightly dirtier readout.  I don't remember where, but someone linked to pairs of RAWs otherwise the same except for 12 vs 14 bits, and the 12 bit samples all had more analog read noise, including line (banding) noise.
Guillermo Luijk (Sr. Member, Posts: 1274)
« Reply #17 on: March 16, 2008, 06:29:28 AM »

Quote
bit depth (8, 12, 14) (...) nor does it define potential dynamic range.

(...)

For example, if camera (A) provides 12 bits of data per channel, and camera (B) provides 16 bits per channel, does that necessarily mean that camera (B) provides wider DR?

If camera A provides 12-bit linear-encoded data, it will never be able to represent more than 12 f-stops of DR; that is a physical limit of such an encoding. In the same way, 16 f-stops of DR is the physical limit for a 16-bit linear encoding.

If we needed 13 f-stops of DR:
- Camera A should be simply discarded.
- Camera B should be designed reasonably noise-free in the higher 13 f-stops of its DR to make them usable for the photographer.

So even if the number of bits does not mean a certain DR, it sets a limit to it and therefore defines potential DR.


Regarding the question asked by the OP: my finding, after analysing DR in Canon 40D, Nikon D3 and Sony A700 RAW files (all 14-bit machines), is that they are not noise-free enough in the lowest f-stops to really start enjoying an improvement in DR that justifies the bit-depth increase.

All three have about 9 f-stops of effective, usable DR in photographic terms (very different from engineering criteria such as Roger Clark's, which yield higher DR values not directly of interest to the photographer). That is higher than the DR of previous-generation cameras, but we still have to wait for a next generation of DSLR sensors whose noise is low enough to make 12 bits definitively insufficient to exploit their full DR potential.

DR table (roughly):
* Canon 350D: 8 f-stops
* Canon 5D: 8.5 f-stops
* Canon 40D, Nikon D3, Sony A700: 9 f-stops
* Fuji S3 Pro: 11 f-stops (double sensor Super CCD)
« Last Edit: March 16, 2008, 06:51:05 AM by GLuijk »

lovell (Full Member, Posts: 131)
« Reply #18 on: March 20, 2008, 01:50:36 PM »

But doesn't a tone value of 0 produce the exact same black in a 2-, 4-, 8-, 12-, or 16-bit image?

And on the highlight side, doesn't an 8-bit value of 255 represent the same white tone as a 12-bit value of 4095?

Sure, more bit depth can support wider DR, but this is not the same thing as saying that more bit depth causes wider DR, right?

Perhaps I'm missing something here, but it seems to me that bit depth and DR are not related directly.
bjanes (Sr. Member, Posts: 2756)
« Reply #19 on: March 20, 2008, 03:01:53 PM »

Quote
But doesn't a tone value of 0 provide the same exact black in a 2,4,8,12, or 16 bit image?

A tone value of 0 would give an infinite dynamic range. With a 12-bit linear raw capture, the maximum value is 4095 and the minimum useful value is 1, so the DR is 4095:1. Expressed in f/stops, that is log2(4095) ≈ 12 stops. It helps to normalize the values to a maximum of 1.0; then the minimum of 1 becomes 1/4095. In density units the darkest shadow would be 3.61, which is far greater than is possible on a reflection print. That is why black point compensation can be helpful.
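The arithmetic above, spelled out (an illustration of Bill's numbers):

```python
import math

max_val = 4095                      # 12-bit linear raw maximum
min_val = 1                         # minimum useful value
dr_ratio = max_val / min_val        # 4095:1
dr_stops = math.log2(max_val)       # ~12 f/stops
density = math.log10(max_val)       # ~3.61 in density units
print(round(dr_stops, 2), round(density, 2))
```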

Look at the table on Norman Koren's web site: http://www.normankoren.com/digital_tonality.html

Quote
Perhaps I'm missing something here, but it seems to me that bit depth and DR are not related directly.

Go back and read Guillermo's post.

Bill