Author Topic: Dynamic Range vs bit depth  (Read 7750 times)
Tim Lookingbill
« Reply #20 on: February 15, 2013, 07:10:07 PM »
Thanks, Francisco...

If I am correct, most Raw files (CR2, NEF) are linear.

In camera, the data is converted to a gamma 2.2, 8-bit JPEG.

In ACR/LR that changes at some point to gamma 2.2, but definitely when converted to a 16-bit TIFF. If 8 bits can give me 16 stops, what can a 16-bit TIFF give me? Of course, this is normally converted to 8 bits for output.

Without the user doing any tone "wrestling", I guess the difference "depends"... depends on what goes on under the covers.

Is it typical, without user involvement, that gamma encoding would contain 16 stops in 8 bits... or is that just a perfect world?

I am really not looking for examples of the perfect mathematical world, but examples of what happens in the practical world... and why we bother with Raw... vs. just taking what the camera gives us :-)

John

You're confusing the 16-bit interpolation in ACR/LR with the 12- and 14-bit precision of the camera's internal ADC, which, as I said previously, can't be controlled.

By the time the Raw image reaches ACR/LR, the meaning of those 16 bits of precision has changed: it now defines how extreme an edit the user can perform in the Raw converter without inducing posterization, not only in broad swaths of blue sky, for example, but also when bringing out definition deep in the shadows without a lot of noise. That is what extending dynamic range encompasses.

It's counterproductive to equate bits with dynamic range until you can see the effect in your Raw converter, which greatly increases your ability to apply the extreme edits that extend DR compared with editing in-camera 8-bit JPEGs, whose default tone curve crushes shadow detail into the noise floor.

I don't read or consult DxO's or dpreview's measured dynamic range claims, because they base their findings on default settings and non-real-world targets. They don't tell me a thing about what anyone can get out of a Raw file in post, which is the only reason to shoot Raw.

All you have to do to test this is shoot a high-dynamic-range scene, exposing to preserve highlights, once as a JPEG and once as Raw, and notice how much more shadow detail you can pull out of the Raw. You can equate that to bits if you want, but there's no way to prove a correlation, so why bother?
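That Raw-vs-JPEG shadow push can also be mimicked numerically. A minimal sketch (the bit depths, pure gamma 2.2 curve, and test gradient are assumptions for illustration, not measurements from any camera): store the same deep-shadow gradient as 14-bit linear data and as 8-bit gamma-encoded data, push both three stops, and count how many distinct tones survive.

```python
import numpy as np

# A smooth gradient in the deep shadows of a scene (linear light, 0..1 scale)
linear = np.linspace(1 / 16, 1 / 8, 1000)

# Path A: 14-bit linear "raw", pushed +3 stops in the converter
raw = np.round(linear * (2**14 - 1))
raw_lifted = np.clip(raw * 8, 0, 2**14 - 1)

# Path B: 8-bit gamma-2.2 "JPEG", decoded, pushed +3 stops, re-quantized
jpeg = np.round((linear ** (1 / 2.2)) * 255)
jpeg_lin = (jpeg / 255) ** 2.2
jpeg_lifted = np.round(np.clip(jpeg_lin * 8, 0, 1) * 255)

# Distinct tones surviving the push: the raw path keeps far more gradation
print(len(np.unique(raw_lifted)))
print(len(np.unique(jpeg_lifted)))
```

The raw path keeps hundreds of distinct levels in this shadow band, while the 8-bit path collapses to a few dozen, which is the posterization Tim describes seeing when tweaking the S-curve on the JPEG.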

« Last Edit: February 15, 2013, 07:11:56 PM by tlooknbill »
FranciscoDisilvestro
« Reply #21 on: February 15, 2013, 07:37:29 PM »


Is it typical, without user involvement, that gamma encoding would contain 16 stops in 8 bits... or is that just a perfect world?


That's just the ideal world; once you account for all the other issues, especially noise, you'll get less.
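The ideal-world ceiling is easy to pin down. Assuming a pure power-law gamma of 2.2 (not the true sRGB curve, which has a linear toe), the darkest nonzero 8-bit code (1/255) decodes to (1/255)^2.2 of full scale, so the noiseless encodable range is 2.2 × log2(255):

```python
import math

bits, gamma = 8, 2.2
darkest = (1 / (2**bits - 1)) ** gamma    # linear value decoded from code 1
stops = math.log2(1 / darkest)            # equals gamma * log2(255)
print(round(stops, 1))                    # 17.6: the noiseless ideal only
```

So "16 stops in 8 bits" is possible on paper, but noise eats most of it in practice.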

Tim Lookingbill
« Reply #22 on: February 15, 2013, 08:46:57 PM »

Below is a high-dynamic-range scene I shot Raw (53mm, 1/250s, f/8, ISO 200) with my 6-year-old Pentax K100D 6MP DSLR, which only captures at 12 bits internally. Its dynamic-range capture capability may not be as wide as more modern cameras'. The full-frame version is the JPEG preview I extracted from the Raw PEF using "Instant JPEG From Raw" at full resolution, downsized for the web.

The second shot is a 400% zoomed-in screen capture of the shadow detail: the 16-bit ProPhoto RGB preview of the Raw in ACR on the left versus the 8-bit Adobe RGB JPEG on the right in Photoshop. Note the clumps of JPEG compression artifacts even at the camera's high-quality setting. Both have a huge S-curve applied to brighten the shadows.

Both previews look fairly similar, with the Raw showing less green spill-over than the JPEG, but the real differences lie in how the edits behave: tweaks to the S-curve, including the black-point slider in ACR, are far smoother and easier to control than editing the S-curve on the JPEG in gamma-encoded Photoshop.

With all the variances that influence the preview, including changing DNG profiles on the Raw, I couldn't tell whether 12-bit, 14-bit or 16-bit had any part in it.
« Last Edit: February 15, 2013, 08:58:08 PM by tlooknbill »
Simon J.A. Simpson
« Reply #23 on: February 16, 2013, 11:21:47 AM »

Is bit depth, by definition, a ceiling on the dynamic range an image can contain?

For example, a 14 bit raw image cannot contain more than 14 stops of DR.

An 8 bit jpeg, no more than 8 stops?

Thx, John

The simple answer is that the dynamic range is determined by:
a)  the ability of the camera sensor
b)  the colour space into which the RAW data is converted (e.g. sRGB can accommodate about 5.3 stops of dynamic range and Adobe RGB approximately 8 stops).

But scientifically this is not quite correct, since the 'dynamic range' of colour spaces is defined in a different way than 'stops' (a blizzard of scientific correction will probably follow).  This is not to say that these colour spaces cannot accommodate wider dynamic ranges (e.g. 14 stops) – they just have to compress the data in cunning ways.

The number of bits determines how that dynamic range is represented (i.e. the number of different levels of tone – the more bits, the more levels of tone between maximum black and maximum white).  Posterisation will most likely only become visible in a low-bit non-RAW image which has been manipulated a lot, or in a very low bit-depth image.
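Stop figures quoted for a colour space are usually just a contrast ratio converted with a base-2 logarithm, and that is separate from the count of tone levels a bit depth provides. The ratios below are hypothetical round numbers chosen to reproduce the 5.3- and 8-stop figures above, not values taken from the specs:

```python
import math

def contrast_to_stops(ratio: float) -> float:
    """Stops spanned by a display or colour-space contrast ratio."""
    return math.log2(ratio)

print(round(contrast_to_stops(40.0), 1))    # a 40:1 ratio -> ~5.3 stops
print(round(contrast_to_stops(256.0), 1))   # a 256:1 ratio -> 8.0 stops
print(2**8, 2**16)    # tone levels at 8 and 16 bits (gradation, not range)
```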

See the excellent 'Real World Photoshop' books for a clear and well illustrated explanation.
« Last Edit: February 16, 2013, 11:32:19 AM by Simon J.A. Simpson »
Guillermo Luijk
« Reply #24 on: February 17, 2013, 08:25:53 AM »

Is bit depth, by definition, a ceiling on the dynamic range an image can contain?

For example, a 14 bit raw image cannot contain more than 14 stops of DR.

An 8 bit jpeg, no more than 8 stops?

An 8-bit JPEG can contain up to 256 stops as long as the source data was appropriately mapped onto the 8-bit file. If you map each one-stop interval of the original real-world scene to exactly one level in your JPEG file, you'll be encoding 256 stops. Each of them would be poorly represented, though: only one level per stop (no gradation at all).

Bit depth is a DR-limiting factor in the capture stage. And as long as the encoding is linear, yes, no more than N stops can be captured with an N-bit linear ADC. However, the real limiting factor is usually noise. So even the best 14-bit sensors cannot capture more than 11 stops of effective DR in photographic applications (i.e. 11 stops with a sufficiently high SNR to make textures distinguishable).
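Both halves of that argument are easy to put into numbers. A sketch with an assumed read-noise figure (the 3 DN is illustrative, not a measured value for any sensor):

```python
import math

bits = 14
levels = 2**bits                    # 16384 linear levels
# Linear encoding: each stop down halves the signal, so the ceiling is
# log2(full_scale / smallest_step) = N stops for an N-bit ADC.
print(math.log2(levels))            # 14.0

# Noise-limited ("engineering") DR: full scale over the read-noise floor.
read_noise_dn = 3.0                 # assumed read noise, in raw data numbers
print(round(math.log2(levels / read_noise_dn), 1))   # ~12.4 stops
```

Raising the SNR criterion from "signal equals noise" to something photographically useful pushes the usable floor up further, which is how the effective figure lands nearer 11 stops.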

IliasG
« Reply #25 on: February 17, 2013, 09:58:00 AM »

The simple answer is that the dynamic range is determined by:
a)  the ability of the camera sensor
b)  the colour space into which the RAW data is converted (e.g. sRGB can accommodate about 5.3 stops of dynamic range and Adobe RGB approximately 8 stops).

Where can we find the calculations that arrive at results like 5.3 stops for sRGB and 8.0 stops for Adobe RGB?
IliasG
« Reply #26 on: February 17, 2013, 10:05:01 AM »

Bit depth is a DR-limiting factor in the capture stage. And as long as the encoding is linear, yes, no more than N stops can be captured with an N-bit linear ADC. However, the real limiting factor is usually noise. So even the best 14-bit sensors cannot capture more than 11 stops of effective DR in photographic applications (i.e. 11 stops with a sufficiently high SNR to make textures distinguishable).


Guillermo,

DxO measured screen-DR higher than the bit depth for some 12-bit models, like the Sony NEX-6 (12.61) and NEX-7 (12.59)...
hjulenissen
« Reply #27 on: February 17, 2013, 10:14:06 AM »

If it is the DxO downscaling to 8MP that allows 12-bit cameras to have more than 12 stops of DR, does this mean that if they had been completely noiseless, the measurement would be limited to 12 stops of DR after all?

If there was no noise, there would be no value in averaging pixels, either?

-h
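Quantifying that intuition: with no noise, every pixel in a patch quantizes identically and averaging adds nothing, so the 12-bit ceiling would indeed stand. With noise, averaging N uncorrelated pixels cuts it by the square root of N, worth 0.5 × log2(N) extra stops. A sketch following DxO's 8 MP print normalization (the megapixel counts are assumptions):

```python
import math

def downscale_dr_gain(src_mp: float, dst_mp: float = 8.0) -> float:
    """Stops of DR gained by averaging uncorrelated per-pixel noise
    when downscaling src_mp megapixels to dst_mp."""
    return 0.5 * math.log2(src_mp / dst_mp)

# e.g. a 24 MP sensor downscaled to the 8 MP "print" size
print(round(downscale_dr_gain(24.0), 2))   # ~0.79 stops
```

That ~0.8-stop gain is in line with the 13.39 print-DR vs 12.59 screen-DR gap quoted for the NEX-7 later in the thread.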
Tim Lookingbill
« Reply #28 on: February 17, 2013, 11:18:04 AM »

What does a stop look like?

What image detail is contained in a stop?

How do you define dynamic range?

IliasG
« Reply #29 on: February 17, 2013, 01:04:15 PM »

If it is the DxO downscaling to 8MP that allows 12-bit cameras to have more than 12 stops of DR, does this mean that if they had been completely noiseless, the measurement would be limited to 12 stops of DR after all?

If there was no noise, there would be no value in averaging pixels, either?

-h

The point is that DxO shows DR figures greater than the bit depth even without downscaling: the NEX-7's screen-DR score (per pixel) is 12.59 stops, and after downscaling to 8MP (print-DR) it is 13.39.
That said, their per-pixel score does not come from a single pixel; it is the average level over a patch which can contain 1000 pixels.
FranciscoDisilvestro
« Reply #30 on: February 17, 2013, 03:42:26 PM »

Hi,

It seems that Sony applies a non-linear encoding to the raw values, as shown here.

Linear encoding is perhaps the most straightforward method (not easy, but the easiest to perform calculations with), but in a way it is a "brute force" approach.
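The usual rationale for non-linear raw encoding is that photon shot noise grows as the square root of the signal, so upper-stop levels can be merged without visible loss. The curve below is a toy square-root companding sketch to illustrate the idea; it is not Sony's actual scheme:

```python
import numpy as np

FULL = 2**14 - 1                           # 14-bit linear full scale

def compress(x):
    """Square-root companding: ~16384 linear levels -> 129 codes."""
    return np.round(np.sqrt(x)).astype(int)

def expand(code):
    return code ** 2

x = np.arange(FULL + 1)
err = np.abs(expand(compress(x)) - x)      # round-trip quantization error
shot_noise = np.sqrt(np.maximum(x, 1))     # photon-noise scale at each level
print(err.max())                           # worst-case error, in raw levels
print(bool((err <= shot_noise).all()))     # error never exceeds shot noise
```

Even though the worst-case error is large in absolute terms, it stays at or below the shot noise already present at that signal level, which is why such curves can be visually lossless.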

Guillermo Luijk
« Reply #31 on: February 17, 2013, 04:00:45 PM »

Guillermo,

DxO measured screen-DR higher than the bit depth for some 12bit models like Sony NEX-6 (12.61) and NEX-7 (12.59) ...

Sure, but that is a statistical measure of no use to the photographer. DxO's SNR criterion is 0dB, and a 0dB image is useless in conventional photography. If you use DxO's SNR plots to recalculate the DR with a 12dB criterion, the calculated DR will be much lower.

DxO's calculations are correct (the way they measure DR can yield higher DR values than the number of bits), but interpreting their figures requires some statistical knowledge that a non-technical audience lacks.
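That recalculation can be sketched for an idealized sensor whose noise is shot noise plus read noise (the full-well and read-noise numbers below are assumptions for illustration, not any camera's measurements):

```python
import math

def dr_stops(full_well: float, read_noise: float, snr_db: float) -> float:
    """Stops between full well and the lowest signal (in electrons)
    whose shot-plus-read-noise SNR meets the chosen criterion."""
    t = 10 ** (snr_db / 20)                     # threshold as a linear SNR
    # Solve S / sqrt(S + r^2) = t for the shadow-floor signal S
    s_min = (t**2 + math.sqrt(t**4 + 4 * t**2 * read_noise**2)) / 2
    return math.log2(full_well / s_min)

fw, rn = 50_000, 3.0                            # assumed sensor parameters
print(round(dr_stops(fw, rn, 0.0), 1))          # 0dB criterion: ~13.8 stops
print(round(dr_stops(fw, rn, 12.0), 1))         # 12dB criterion: ~11.1 stops
```

Same sensor, same data: tightening the SNR criterion from 0dB to 12dB drops the quoted DR by more than two and a half stops, which is Guillermo's point.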
« Last Edit: February 17, 2013, 04:04:07 PM by Guillermo Luijk »

Simon J.A. Simpson
« Reply #32 on: February 18, 2013, 03:20:43 AM »

Where can we find the calculations that arrive at results like 5.3 stops for sRGB and 8.0 stops for Adobe RGB?

They are contained within the IEC definition of the colour spaces.

See attached documents (IEC and Adobe).
Simon J.A. Simpson
« Reply #33 on: February 18, 2013, 03:27:08 AM »

What does a stop look like?

What image detail is contained in a stop?

How do you define dynamic range?

What does a stop look like?
It's a hole.

What image detail is contained in a stop?
All of the image detail at which the hole is pointed (within the limits of the lens/pinhole).

How do you define dynamic range?
The dynamic range of what?  Different ‘whats’, different definitions.  Also, different assumptions produce different definitions.  Now we’re getting complicated!  See Ansel Adams’ books (The Negative, The Print) for a discussion of this.

 Grin Grin Grin
« Last Edit: February 18, 2013, 03:28:52 AM by Simon J.A. Simpson »
Ray
« Reply #34 on: February 18, 2013, 06:31:12 AM »

In non-technical language, as I understand it, dynamic range is expressed in terms of Exposure Values (EV).

Although the term EV is synonymous with 'stop', it has nothing to do with DoF and refers only to the amount of exposure the sensor receives, regardless of whatever combination of F/stop and shutter speed is used to achieve such exposure.

Using the term 'stop' instead of EV is also a bit sloppy because all lenses used at the same f/stop do not let through the same amount of light at the same shutter speed. There is a varying degree of transmission loss due to the opacity of the glass and the number of elements.

If Camera A is claimed to have 2EV, or 2 stops better dynamic range than Camera B, then Camera B would need to receive two more EVs, or two stops' greater exposure than Camera A in order for the noise in the deepest shadows, at the limits of the DR, to appear the same as in Camera A, all else being equal of course, including ISO sensitivity.

However, if Camera B receives 2 stops more exposure at the same base ISO, it is likely that SNR in the midtones, including lower and upper midtones, will be better in Camera B than in Camera A, especially if Camera A is a recent Nikon, and Camera B is a Canon.

For example, you'll notice on the DXOMark site that the SNR at 18% figures (SNR around the midtones) are approximately equal for the 5D3 and the D800.  If you overexpose the 5D3 shot to get the deep shadow detail as clean as in the D800 shot, then sure you'll get cleaner midtones than the D800, but you'll also get blown highlights.
sandymc
« Reply #35 on: February 18, 2013, 06:32:04 AM »

They are contained within the IEC definition of the colour spaces.

See attached documents (IEC and Adobe).

Not so. The Adobe spec has a contrast ratio in it, but that is for the reference viewing environment, not the color space itself.

Sandy
Tim Lookingbill
« Reply #36 on: February 18, 2013, 09:39:57 AM »

What does a stop look like?
It's a hole.

What image detail is contained in a stop?
All of the image detail at which the hole is pointed (within the limits of the lens/pinhole).

How do you define dynamic range?
The dynamic range of what?  Different ‘whats’, different definitions.  Also, different assumptions produce different definitions.  Now we’re getting complicated!  See Ansel Adams’ books (The Negative, The Print) for a discussion of this.

 Grin Grin Grin

You just come up with that on your own or did you need help? Grin
Simon J.A. Simpson
« Reply #37 on: February 18, 2013, 01:58:11 PM »

Not so. The Adobe spec has a contrast ratio in it, but that is for the reference viewing environment, not the color space itself.

Sandy

Sandy, with respect, the contrast ratio refers to the "Reference Display", not the viewing environment (see 4.1 and 4.2.3).  In a leap of faith I assumed the contrast ratio of the "Reference Display" was defined in order to encompass the 'dynamic range' of the colour space.  Perhaps I am mistaken?

Using 'stops' (or better, EVs) to define dynamic range is, I know, scientifically incorrect; but for photographers like me (and perhaps others too) it is a useful way of approximating the dynamic range of one thing to another – a kind of rule of thumb, if you will.  Otherwise one is forced to compare maximum densities, contrast ratios, sensitivities – all scientifically defined in entirely different ways.  I know this is heresy and I humbly apologise.
Simon J.A. Simpson
« Reply #38 on: February 18, 2013, 01:59:26 PM »

You just come up with that on your own or did you need help? Grin

I need help; lots of help, all the time.  And, yes, I am taking the tablets.
 Grin
Jack Hogan
« Reply #39 on: February 19, 2013, 08:33:39 AM »

Sure, but that is a statistical measure of no use to the photographer.

If you are saying that the 0dB lower signal range is of no direct use to a photographer you are probably right.  On the other hand there is some information worth recording there, as you yourself have shown in the past.

There is no way to know what the noise or the signal is if all we have is a single pixel.  Noise, signal and many other physical properties of light, are statistical in nature and therefore require a larger sample to determine them.  How large?  Every human (photographer or not) physically 'views' things by averaging light within a circle of confusion.   Photographic output for instance is typically viewed in samples of a few tens of pixels, if we take the typical definition of the CoC for APS-C or FF sized sensors - more or less what Bill Claff uses for his calculations. 

So the question 'How many bits of information can I record through an n-bit linear ADC?' seems to me to depend as much on sample size as on the relative size of random noise to an ADU.  I would venture that with appropriate noise and sample sizes, one could encode a very large dynamic range even with n=1, let alone n=14 (witness your average newspaper image, or 1-bit ADCs in audio).  So DxO's readings are very useful as they are, and spot on.  The only question is how large a sample they use to calculate their data.  Anyone?

My Information Science is somewhat lacking.  Does anyone here know how to derive a formula that can answer the question in terms of sample size and noise present in the channel?

Jack
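The n=1 case is simple to simulate: add dither spanning exactly one quantization step before a 1-bit threshold, and the mean of the bits converges on the signal. As for the formula, with ideal dither the standard error of the estimate is sqrt(p(1-p)/N), so every 4x increase in sample size buys roughly one extra stop. A sketch (the threshold and uniform noise model are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def one_bit_estimate(signal: float, n_samples: int) -> float:
    """Recover a signal in [0, 1] from n single-bit measurements by
    adding uniform dither spanning one quantization step."""
    dither = rng.uniform(-0.5, 0.5, n_samples)
    bits = signal + dither > 0.5      # the 1-bit "ADC"
    return bits.mean()                # fraction of 1s converges on the signal

# Levels far finer than the 1-bit step are resolved, given enough samples:
print(round(one_bit_estimate(0.600, 100_000), 3))
print(round(one_bit_estimate(0.605, 100_000), 3))
```

With 100,000 samples the standard error is around 0.0015, so those two levels, only 1/200th of a step apart, come out reliably distinguishable.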