Pages: « 1 2 [3] 4 5 »
Author Topic: Dynamic Range vs bit depth  (Read 7751 times)
Jack Hogan
Full Member
Posts: 195

« Reply #40 on: February 19, 2013, 08:36:14 AM »

Isn't the noise floor measured in the absence of light?
No, in imaging typically Dynamic Range is calculated as the maximum signal divided by the signal at a given SNR.  DxO uses SNR=1 (=0dB).  Bill Claff uses SNR=20 within the Circle of Confusion.
Jack Hogan
Full Member
Posts: 195

« Reply #41 on: February 19, 2013, 08:55:20 AM »

In non-technical language, as I understand it, dynamic range is expressed in terms of Exposure Values (EV).

Dynamic Range is a unitless ratio, typically the maximum signal divided by the minimum useful signal that can be recorded/reproduced - this last bit depends on typical use in the specific discipline.  It can be expressed as the number of doublings (log2).  In photographic circles a doubling of signal is normally referred to as a 'Stop'.

For instance, the maximum signal that can be recorded by an A99 at ISO 50 is about 59,000 electrons, and SNR equals 1 when the signal is about 6.4 electrons.  So engineering DR as used by DxO is about 59000/6.4 ≈ 9200, which is equivalent to about 13.2 stops.  Not bad for a 12-bit sensor ;-)
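Jack's arithmetic can be reproduced in a few lines (a sketch in Python; the electron counts are the A99 figures quoted above):

```python
import math

# Engineering DR from the figures quoted above:
# maximum recordable signal ~59,000 e-, SNR = 1 at ~6.4 e-.
full_well_e = 59000.0    # full-well capacity, electrons
noise_floor_e = 6.4      # signal at which SNR = 1, electrons

dr_ratio = full_well_e / noise_floor_e   # unitless ratio, ~9200
dr_stops = math.log2(dr_ratio)           # number of doublings ("stops"), ~13.2

print(f"DR = {dr_ratio:.0f}:1 = {dr_stops:.1f} stops")
```

The log2 step is what turns the unitless ratio into the photographic "stops" Jack mentions.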
BartvanderWolf
Sr. Member
Posts: 3012

« Reply #42 on: February 19, 2013, 09:05:14 AM »

No, in imaging typically Dynamic Range is calculated as the maximum signal divided by the signal at a given SNR.  DxO uses SNR=1 (=0dB).  Bill Claff uses SNR=20 within the Circle of Confusion.

Hi Jack,

Well, the absence of light leaves only the read noise (plus dark current, which exists above 0 Kelvin and accumulates with time, to be even more accurate), which is used for the engineering definition of DR. Other levels of SNR > 1 are arbitrary and, while they do correspond more to usable/practical shadow noise levels, AFAIK there is no universally accepted minimum SNR (sometimes we need more, sometimes less). An SNR of 20 may be way too noisy for some, while others find it still acceptable (e.g. after using a good noise reduction algorithm that spares detail).

Another issue with SNR>1 is that some cameras can use noise reduction on the image data before writing it to the Raw data file. While that may help to reduce the noise and artificially boost the DR, it no longer says anything about the sensor quality (and whether noise reduction was used, because the reference is missing).

So from a technical point of view, I think that the engineering definition gives the best impression of the quality of the electronics involved, while DR numbers based on arbitrary (low exposure level) noise limits can only give an impression of how noisy the shadows in an image may look at that particular signal level (it will be noisier still at lower levels, though, so how low do you really need to go?).
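The two thresholds being contrasted here can be compared numerically with a simple noise model (a sketch only: it assumes pure shot noise plus read noise, using the A99-like numbers from earlier in the thread):

```python
import math

def signal_at_snr(snr, read_noise):
    # Solve S / sqrt(S + r^2) = snr for S, i.e. the quadratic
    # S^2 - snr^2 * S - snr^2 * r^2 = 0 (shot noise + read noise model).
    b = -snr ** 2
    c = -(snr ** 2) * read_noise ** 2
    return (-b + math.sqrt(b * b - 4 * c)) / 2

full_well, read_noise = 59000.0, 6.4  # electrons, assumed values

dr_eng = math.log2(full_well / signal_at_snr(1, read_noise))     # ~13.1 stops
dr_snr20 = math.log2(full_well / signal_at_snr(20, read_noise))  # ~7.1 stops
```

The SNR=20 criterion lands roughly six stops below the engineering figure, which is why the choice of threshold matters so much in published DR numbers.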

Cheers,
Bart
thierrylegros396
Sr. Member
Posts: 602

« Reply #43 on: February 19, 2013, 10:54:06 AM »

Hi Jack,

Another issue with SNR>1 is that some cameras can use noise reduction on the image data before writing it to the Raw data file. While that may help to reduce the noise and artificially boost the DR, it no longer says anything about the sensor quality (and whether noise reduction was used, because the reference is missing).

Cheers,
Bart

I think there are now a lot of cameras that use that trick to artificially "improve" their sensors.

But the drawback is that the deep shadows are no longer linear.

And when you want to "push" the shadows in your Raw converter, you may have some surprises ;-)

Have a Nice Day.

Thierry
Tim Lookingbill
Sr. Member
Posts: 1055

« Reply #44 on: February 19, 2013, 10:57:15 AM »

Thanks for that "CCD University" link, Bart. Very informative.

The Anti-blooming section explained a lot about my own camera's sensor behavior. I know I don't have an anti-blooming gate, which affects how far I go with ETTR when shooting Raw of sunlit pastel stone texture or tree bark - a PITA to deal with in post, requiring a long session of cloning.

It'll appear on the camera's LCD histogram that I exposed just right, with no clipped-highlight flashing indicators in the in-camera preview, but zooming to 100% in ACR shows tiny white or fully saturated yellow spots peppered all over these kinds of brightly lit textures, indicating I should've reduced exposure by maybe 1/3 of a stop.
Jack Hogan
Full Member
Posts: 195

« Reply #45 on: February 19, 2013, 12:52:08 PM »

Well, the absence of light leaves only the read noise (+dark current which exists above 0 Kelvin and accumulates with time, to be even more accurate), which is used for the engineering definition of DR.

Hey Bart,

Yes, different folks choose different lower limits depending on application; that's why I said 'typically'.  Other than for circuit designers, where I can see the benefit of eDR, I think that a given total SNR is a more relevant indicator of IQ for photographers, hence it is often referred to as such.


So from a technical point of view, I think that the engineering definition gives the best impression of the quality of the electronics involved, while DR numbers based on arbitrary (low exposure level) noise limits can only give an impression of how noisy the shadows in an image may look at that particular signal level (it will be noisier still at lower levels, though, so how low do you really need to go?).

Indeed.  On the other hand the latter may give a better idea of the working range of one's instrument.

Jack
IliasG
Newbie
Posts: 17

« Reply #46 on: February 20, 2013, 02:51:05 PM »

No, in imaging typically Dynamic Range is calculated as the maximum signal divided by the signal at a given SNR.  DxO uses SNR=1 (=0dB).  Bill Claff uses SNR=20 within the Circle of Confusion.

At that stage the discussion was about engineering DR, where the noise floor is measured in the absence of signal.
IliasG
Newbie
Posts: 17

« Reply #47 on: February 20, 2013, 03:04:25 PM »

...

For instance, the maximum signal that can be recorded by an A99 at ISO 50 is about 59,000 electrons, and SNR equals 1 when the signal is about 6.4 electrons.  So engineering DR as used by DxO is about 59000/6.4 ≈ 9200, which is equivalent to about 13.2 stops.  Not bad for a 12-bit sensor ;-)

Inspecting the A99's raw histogram with RawDigger shows that on the dark side (512-2000 ADU) it is 13-bit data in a 14-bit container. Check "Sony ARW2 hack" in Properties to see the correct raw histogram.
Jack Hogan
Full Member
Posts: 195

« Reply #48 on: February 21, 2013, 05:26:39 AM »

Inspecting the A99's raw histogram with RawDigger shows that on the dark side (512-2000 ADU) it is 13-bit data in a 14-bit container. Check "Sony ARW2 hack" in Properties to see the correct raw histogram.

I see, so the A99 is not a good example for the discussion at hand :-)

The FWC and signal at SNR=1 in the post quoted earlier were calculated by graphically extrapolating DxO's full SNR curves, so bit depth or non-linear coding of the Raw files should not be a limiting factor on DR, as long as the curves represent the substance of the SNR information.
Guillermo Luijk
Sr. Member
Posts: 1273

« Reply #49 on: February 21, 2013, 03:23:19 PM »

Is it typical, without user involvement, for gamma encoding to contain 16 stops in 8 bits... or is that just a perfect world?

The discussion 'bits vs DR' only makes sense at the capture stage, i.e. when considering RAW files, which are linear. 8-bit JPEG and 16-bit TIFF files are processed image files, and they could hypothetically contain up to 256 and 65536 stops of DR respectively, as long as you devote a single tonal level to each stop.

The role of gamma is far more interesting than the DR discussion. In an integer encoding (e.g. 8-bit JPEG or 16-bit TIFF), the gamma expansion is what redistributes the available levels (256 or 65536) so that both shadows and highlights are represented by a sufficiently large number of levels.
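This redistribution can be checked by counting how many 16-bit codes fall inside each stop under each encoding (a sketch: the gamma is applied as a simple power function, ignoring the sRGB linear toe):

```python
def levels_per_stop(gamma, bits=16, stops=14):
    """Count the integer codes that land in each stop below clipping."""
    max_code = 2 ** bits - 1
    counts = []
    for k in range(stops):            # k = 0 is the brightest stop
        hi = 0.5 ** k                 # linear upper bound of stop k
        lo = 0.5 ** (k + 1)           # linear lower bound of stop k
        counts.append(round(max_code * hi ** (1 / gamma))
                      - round(max_code * lo ** (1 / gamma)))
    return counts

linear = levels_per_stop(1.0)   # ~32767 codes in the top stop, only 8 twelve stops down
g22 = levels_per_stop(2.2)      # still hundreds of codes that far down
```

With linear encoding half of all codes sit in the brightest stop and the deepest stops get only a handful each, which is exactly where posterization appears.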

See what happens when we encode a real-world scene with about 16 stops of DR (made noise-free thanks to multi-exposure blending):

[image]

into a 16-bit TIFF file with 2.2 gamma:

[image]

and with 1.0 gamma (linear encoding):

[image]

Deep shadows get posterized.

Regards
jrsforums
Sr. Member
Posts: 706

« Reply #50 on: February 21, 2013, 04:10:10 PM »

Thanks, Guillermo...

I know that a lot of magic can be done to fit 10 lbs into a 5 lb bag... dodging/burning, negative development, tonal compression, etc.

My term, 'without user involvement', was poorly chosen... and a lot of the responses took off in interesting, but not necessarily practical, directions... so I sort of gave up.

Let me ask it a little differently.


In camera, the raw image gets converted to a JPEG.  If you started with a Raw image of, say, 12 stops of DR, what would one expect the DR of the JPEG to be?

Take the same Raw image into ACR/LR... on opening, what would the DR be? Roughly - not excruciatingly scientifically correct.

Again, I may not have asked this correctly, but I think you may understand where I am basically heading.

John
Guillermo Luijk
Sr. Member
Posts: 1273

« Reply #51 on: February 21, 2013, 04:50:31 PM »

Let me ask it a little differently.

In camera, the raw image gets converted to a JPEG.  If you started with a Raw image of, say, 12 stops of DR, what would one expect the DR of the JPEG to be?

Take the same Raw image into ACR/LR... on opening, what would the DR be? Roughly - not excruciatingly scientifically correct.

The DR contained in the JPEG file depends on the camera processing, not on the capabilities of the 8-bit JPEG format. A captured RAW file, properly processed into a JPEG, will contain and display all the captured DR. But usually the camera software clips some highlights which were intact in the RAW file, and clips deep shadow information to black, so the information contained and displayed in the JPEG is less than the data in the RAW file. But I insist: this is not a problem or limitation of the 8-bit JPEG format, but the result of camera processing (white balance, contrast curve, saturation, ...).

Regards.
Jack Hogan
Full Member
Posts: 195

« Reply #52 on: February 21, 2013, 05:49:40 PM »

Deep shadows get posterized.

I am not sure I understand.  Gamma only works its magic by encoding more linear shadow bits into fewer non-linear ones - for instance when displaying 16 bit data through an 8 bit, color managed video system.  It does nothing but create rounding errors when storing linear 16 bit data encoded non-linearly in the same 16 bits and then displaying them through an 8 bit, color managed video system.

It'd be interesting to see your last two images as displayed by Photoshop CS's well-behaved ACE color engine, to see whether they show posterization (they shouldn't, but who knows ;-).

Jack
Guillermo Luijk
Sr. Member
Posts: 1273

« Reply #53 on: February 21, 2013, 06:42:05 PM »

I am not sure I understand.  Gamma only works its magic by encoding more linear shadow bits into fewer non-linear ones - for instance when displaying 16 bit data through an 8 bit, color managed video system.  It does nothing but create rounding errors when storing linear 16 bit data encoded non-linearly in the same 16 bits and then displaying them through an 8 bit, color managed video system.

It'd be interesting to see your last two images as displayed by Photoshop CS's well-behaved ACE color engine, to see whether they show posterization (they shouldn't, but who knows ;-).

Jack

They posterize when the 16-bit TIFF is linear. Those images come from Photoshop.
These are the levels devoted to each stop using linear and 2.2 gamma encoding:

[image]

If you have valid information in the very deep shadows (stops numbered above as -12, -13, ...), lifting them with a strong exposure correction makes the linear image display posterization, because of the lack of levels to encode an entire stop.

Regards
Jack Hogan
Full Member
Posts: 195

« Reply #54 on: February 22, 2013, 12:31:49 PM »

They posterize when the 16-bit TIFF is linear. Those images come from Photoshop.
These are the levels devoted to each stop using linear and 2.2 gamma:

If you have valid information in the very deep shadows (stops numbered above as -12, -13, ...), lifting them with a strong exposure correction makes the linear image display posterization, because of the lack of levels to encode an entire stop.

I see.  However, I still do not understand: you may have 19 bits of information, but if that information is originally encoded linearly as 16-bit data, then as long as you stay at 16 bits, gamma 1.0 and gamma 2.2 are going to behave similarly, other than for rounding errors.  For instance, where is the input data to fill in the levels below 424 in the gamma-encoded file below going to come from?



You need linear data of more than 16 bit depth as the input file to take advantage of gamma encoding at 16 bits, which I didn't think was the case here, right?

Imho the posterization in your image is introduced somewhere in the 8-bit video display chain, probably by poorly behaved color management that is unprepared to deal with gamma 1.0.

Jack
IliasG
Newbie
Posts: 17

« Reply #55 on: February 22, 2013, 02:34:43 PM »

I see.  However, I still do not understand: you may have 19 bits of information, but if that information is originally encoded linearly as 16-bit data, then as long as you stay at 16 bits, gamma 1.0 and gamma 2.2 are going to behave similarly, other than for rounding errors.  For instance, where is the input data to fill in the levels below 424 in the gamma-encoded file below going to come from?



You need linear data of more than 16 bit depth as the input file to take advantage of gamma encoding at 16 bits, which I didn't think was the case here, right?

Imho the posterization in your image is introduced somewhere in the 8-bit video display chain, probably by poorly behaved color management which is unprepared to deal with gamma 1.0

Jack

Hi Jack,

Guillermo's point is that with gamma encoding we need fewer bits to keep data unposterized in the darks than with linear encoding. You could see the missing input data (424 down to 0) in a comparison of 19-bit linear vs 16-bit gamma 2.2.

In the case of sRGB, which starts at the darkest tones with a linear segment of slope 12.92, the data density is equal to that of a linear encoding with +3.69 bits of depth. So it is OK for keeping 19-bit linear data unposterized.
In the case of Rec.709 the slope is 4.5, which is equal in data density to +2.17 bits.

It's a pity that we are stuck with only 2-3 bit depths (8- and 16-bit integer plus 32-bit float) for RGB data...
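Those bit-equivalence figures follow from the toe slope alone (a quick check; the slopes are the published sRGB and Rec.709 values):

```python
import math

# A linear toe segment of slope s spaces codes s times more densely near
# black than a straight linear encoding, i.e. log2(s) extra effective bits.
srgb_extra_bits = math.log2(12.92)    # sRGB toe slope   -> ~3.69 bits
rec709_extra_bits = math.log2(4.5)    # Rec.709 toe slope -> ~2.17 bits
```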
Jack Hogan
Full Member
Posts: 195

« Reply #56 on: February 22, 2013, 05:05:31 PM »

Hi Jack,

Guillermo's point is that with gamma encoding we need fewer bits to keep data unposterized in the darks than with linear encoding. You could see the missing input data (424 down to 0) in a comparison of 19-bit linear vs 16-bit gamma 2.2.

In the case of sRGB, which starts at the darkest tones with a linear segment of slope 12.92, the data density is equal to that of a linear encoding with +3.69 bits of depth. So it is OK for keeping 19-bit linear data unposterized.
In the case of Rec.709 the slope is 4.5, which is equal in data density to +2.17 bits.

It's a pity that we are stuck with only 2-3 bit depths (8- and 16-bit integer plus 32-bit float) for RGB data...

Glad to see that Mr. Luijk has lawyers ;-)  I still do not understand how 16-bit linear data encoded with gamma 1.0 results in posterization when the exact same 16-bit linear data encoded with gamma 2.2 does not.  Unless, of course, the fault lies with a poorly behaved color management system rather than with gamma encoding... :-)
Guillermo Luijk
Sr. Member
Posts: 1273

« Reply #57 on: February 22, 2013, 05:55:43 PM »

I still do not understand how 16 bit linear data encoded with gamma 1.0 results in posterization when the exact same 16 bit linear data encoded with gamma 2.2 does not.

I didn't say the source data was 16-bit linear. It was 64-bit floating point, built from a multi-exposure blend (5 shots 3 stops apart). In that situation, after proper conversion to 16-bit integer, linear gamma didn't manage to prevent posterization while 2.2 gamma did.
Jack Hogan
Full Member
Posts: 195

« Reply #58 on: February 23, 2013, 03:29:13 AM »

I didn't say the source data was 16-bit linear. It was 64-bit floating point, built from a multi-exposure blend (5 shots 3 stops apart). In that situation, after proper conversion to 16-bit integer, linear gamma didn't manage to prevent posterization while 2.2 gamma did.

Ah, I see - but that is OT and misleading as far as this thread is concerned.  The question was:

Is bit depth, by definition, a ceiling on the dynamic range an image can contain?
For example, a 14 bit raw image cannot contain more than 14 stops of DR.
An 8 bit jpeg, no more than 8 stops?

The answer is pretty straightforward: no, bit depth is not a ceiling on the amount of information that an image can contain; and yes, a 14-bit raw image can contain more than 14 stops of DR if by DR we mean one of its typical engineering definitions - with linear encoding, without having to resort to non-linear gamma.
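One way to see how a 14-bit file can carry more than 14 stops (a sketch with hypothetical but plausible numbers; the key is read noise below one ADU in electron terms):

```python
import math

full_well_e = 60000.0            # assumed full-well capacity, electrons
bits = 14                        # ADC bit depth
read_noise_e = 2.5               # assumed read noise, electrons

gain = full_well_e / 2 ** bits   # ~3.66 e- per ADU
read_noise_adu = read_noise_e / gain               # ~0.68 ADU: sub-LSB noise
dr_stops = math.log2(full_well_e / read_noise_e)   # ~14.6 stops > 14 bits
```

Because the noise straddles the quantization step, it dithers the lowest levels, and the engineering DR can exceed the nominal bit depth.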

And for anyone starting from a typical raw image today, rendered through a modern raw converter, it makes virtually no difference as far as visible posterization is concerned whether the final 16-bit TIFF contains linear or gamma 2.2 encoded data ;-)

Cheers,
Jack
Guillermo Luijk
Sr. Member
Posts: 1273

« Reply #59 on: February 23, 2013, 07:06:26 AM »

The answer is pretty straightforward: no, bit depth is not a ceiling on the amount of information that an image can contain; and yes, a 14-bit raw image can contain more than 14 stops of DR if by DR we mean one of its typical engineering definitions - with linear encoding, without having to resort to non-linear gamma.

I disagree with that. It doesn't matter whether a statistical (or curve-extrapolated) definition of DR yields figures greater than the number of ADC bits: a practical user (a photographer) will not be able to allocate and properly render the information of a real-world scene of N stops of DR in a RAW file produced by a linear ADC with fewer than N bits.

Unless you use, say, a 24 Mpx sensor to build small 50x50 px icons - where resizing will improve the SNR and the number of tonal values to acceptable levels of DR beyond the number of original bits - bit depth is a limiting factor for DR in real-world photographic applications.