Author Topic: Just curious about color bit depth  (Read 3071 times)
sanfairyanne
Full Member, Posts: 233
« on: November 05, 2013, 12:31:01 AM »

I'm a 5D2 user, but as time goes by I am increasingly interested in what the larger resolution of MF digital backs can offer. There are many questions I could ask, but one that intrigues me is the advertised 16-bit color depth of the IQ260. I read an article stating that the Nikon D800 has a color bit depth of 25.3 bits, as seen in this article:
http://snapsort.com/learn/sensor/dxo-mark/color-depth

I also added the attached image from that site. I feel I'm missing something in this statement from the website:
Color depth refers to how many variations of color the camera can capture, it is expressed in bits, with large values like 24 bits being excellent, and small values like 16bits being poor

Please forgive me if this is a stupid question. Thanks in advance.
ErikKaffehr
Sr. Member, Posts: 7655
« Reply #1 on: November 05, 2013, 02:28:13 AM »

Hi,

What the IQ260 has is a 16-bit signal path; it has nothing to do with color depth. It is really a nonsense marketing term. No one has ever demonstrated any advantage of a 16-bit signal path over, say, a 14-bit signal path on a CCD sensor, except perhaps for astronomical sensors deep-frozen with liquid nitrogen.

DxO measurements are something entirely different. You can find a description on the DxO site, I will try to post a link later.

If you are interested in IQ260, try to test it or rent one for a couple of days.

Best regards
Erik


sanfairyanne
« Reply #2 on: November 05, 2013, 04:20:43 AM »

Erik,

I'd appreciate the link if you have time; I Googled DxO, as I'd never heard of the site before. Any Phase One is out of my price range. I just wanted to learn a bit about what the top kit offers, because given time it will filter down to more affordable kit. For instance, perhaps Canon will bring out an MF camera next year.
ErikKaffehr
« Reply #3 on: November 05, 2013, 05:14:03 AM »

Hi,

This link: http://www.dxomark.com/About/In-depth-measurements/Measurements/Color-sensitivity

It may not be really helpful.

Best regards
Erik



Christoph C. Feldhaim
Sr. Member, Posts: 2509
There is no rule! No - wait ...
« Reply #4 on: November 05, 2013, 05:53:18 AM »

I have an additional question - I hope I am not hijacking the thread with this:

Do modern digital cameras capture ALL visible colors and just compress them into the camera colorspace (e.g. like Adobe RGB with a perceptual rendering intent), or do they clip?

If they do clip - what is known about the gamut they can capture?

sanfairyanne
« Reply #5 on: November 05, 2013, 06:43:10 AM »

Thanks Erik, I will look at it in depth after work.

Kind Regards
ErikKaffehr
« Reply #6 on: November 05, 2013, 11:40:28 AM »

Hi,

Camera sensors don't really have a color space. They collect all colors and convert them into three channels: R, G, B. At this stage the camera does not clip. Each signal will be the integral of the QE (Quantum Efficiency) of the sensor multiplied by the spectral intensity of the subject. If the camera converts to JPEG, it converts to a color space, normally sRGB or Adobe RGB; the raw data is not affected by this.

Best regards
Erik



ErikKaffehr
« Reply #7 on: November 05, 2013, 12:01:20 PM »

Hi,

I don't think the link I gave you provides much useful information, so here is a short explanation.

When you photograph something you get a signal, and you also get some noise. Most of the noise comes from the light itself: light arrives in quanta, called photons. A pixel captures photons and converts them to electrons. A fairly normal pixel may hold something like 50000 electrons, and you need 16 bits to count that high. But you also have noise. The major source is shot noise, the natural variation in the number of photons, which is proportional to the square root of that number. So if, say, an average of 64 photons fell on each of 100 pixels, the signals would vary roughly between 56 and 72 (somewhat simplified). That really means the last few bits are pretty meaningless.

Another form of noise is readout noise. This can be very low on modern CMOS sensors (around +/- 3 electrons) but quite a bit higher (around +/- 15 electrons) on the CCDs typically used in MFD. Shot noise is normally quite smooth, while readout noise is uglier (salt-and-pepper type).

So the readout channel on an MFDB may be 16 bits, but of those 16 bits about 4 will be noise. The real information is around 12 bits.

With color there are three channels, so theoretically an MFDB would be able to separate 2^(3*12), that is about 69 billion (US), colors. The real number is much lower; a more typical value is around 25 bits, which still means about 33 million different colors.

MFDB sensors are larger, so they capture more photons. The total noise can therefore be lower than on smaller sensors even if the noise per area is higher.
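The arithmetic above can be checked with a quick simulation. This is a minimal sketch assuming the figures from the post (a 50000-electron full well, roughly +/- 15 e read noise for an MFD CCD and +/- 3 e for a modern CMOS); all names are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

FULL_WELL = 50_000   # electrons a typical pixel can hold (from the post)
READ_CCD = 15.0      # readout noise in electrons, typical MFD CCD (assumed)
READ_CMOS = 3.0      # readout noise in electrons, modern CMOS (assumed)

def simulated_spread(mean_photons, read_noise, n_pixels=100_000):
    """Simulate n_pixels seeing the same mean signal and return the
    standard deviation of the recorded values (shot + read noise)."""
    photons = rng.poisson(mean_photons, n_pixels)       # shot noise
    readout = rng.normal(0.0, read_noise, n_pixels)     # readout noise
    return np.std(photons + readout)

# The 64-photon example: sigma of about 8, i.e. values mostly 56..72.
print(simulated_spread(64, 0.0))

# "Useful" bits ~ log2(full well / read noise): about 11.7 for the CCD,
# which matches the "around 12 bits of information" estimate above.
print(np.log2(FULL_WELL / READ_CCD))
print(np.log2(FULL_WELL / READ_CMOS))
```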

Best regards
Erik






Vladimirovich
Sr. Member, Posts: 1320
« Reply #8 on: November 05, 2013, 12:06:09 PM »

Do modern Digital cameras capture ALL visible colors
but then you have metameric failures
Christoph C. Feldhaim
« Reply #9 on: November 05, 2013, 01:23:26 PM »

So, when the light is finally rasterized into RGB values in the raw file, do these represent the whole visible spectrum?
The whole horseshoe?
And is the compression of these colors into a colorspace then done in the raw conversion?
I mean, in the end the RGB values coming from the digitizer must somehow be interpreted and clipped / compressed.
How exactly is that done?

ErikKaffehr
« Reply #10 on: November 05, 2013, 01:45:30 PM »

Hi,

I cannot answer that fully. The sensors themselves are sensitive to both UV and IR, so they see a wider spectrum than the human eye; that is why an IR filter is needed.

The visible spectrum is generally taken to run from 390 nm to 700 nm, and this is covered by a normal CFA; see the figure from Dalsa below.
The spectral sensitivity of the eye is here:

So I would say the sensor sees the same colors the human eye does, but does not see them the same way.

So you do some mathematical transforms to convert from the native color space to, say, Lab, and then you convert from Lab to a smaller color space.

That is the best answer I can give without reading all the math. It is simple math, but you still need to understand it...

Best regards
Erik




Christoph C. Feldhaim
« Reply #11 on: November 05, 2013, 01:54:25 PM »

So I would say, the sensor sees the same colors the human eye does, but does not see them the same way.
So you do some mathematical transforms to convert from the native color space to, say, Lab, and then you convert from Lab to a smaller color space.

Well, the math is not all that complicated to me.

So the processing chain is:

Light spectrum: the light => camera sensor sensel buckets: a potential => A/D converter: raw RGB values => (what comes here?) => TIFF or JPEG with an assigned colorspace

In the (what comes here?) part somewhere is the raw file, a camera profile and some conversion, but what boggles me is: what is the rationale for how these conversions are done?
Once we have a tagged file, everything is settled for postprocessing. But if the camera can see all these colors in the horseshoe that are not represented by the colorspaces we use for processing or printing... what happens to those colors, and how?
* Christoph C. Feldhaim scratches head


digitaldog
Sr. Member, Posts: 9191
« Reply #12 on: November 05, 2013, 02:08:30 PM »

In the (What comes here?) part somewhere is the RAW file, a camera profile and some conversion, but what boggles me is, what is the rationale how these conversions are done?
Each raw converter has to make some assumption about the source "color space" here in order to render into whatever RGB processing color space is to be used (in ACR that is ProPhoto RGB primaries).

Andrew Rodney
Author “Color Management for Photographers”
http://digitaldog.net/
Christoph C. Feldhaim
« Reply #13 on: November 05, 2013, 02:24:48 PM »


Wouldn't it make sense to invent a colorspace on the fly, designed for the file depending on its color distribution?
Imagine an image with lots of reds and oranges: the tip of the colorspace triangle would then sit more to the right of the horseshoe,
and an image with lots of blues and greens would get a colorspace with the tip more to the left, and so on.

Or is ProPhoto already so big that it encompasses all printable colors?

ErikKaffehr
« Reply #14 on: November 05, 2013, 02:29:06 PM »

Or is ProPhoto already so big that it encompasses all printable colors?

Hi,

Yes, I think so.

It is possible to construct a color space with primaries outside the horseshoe diagram, but those primaries don't physically exist. The number of stimuli could be increased, though.

Best regards
Erik

Christoph C. Feldhaim
« Reply #15 on: November 05, 2013, 02:33:30 PM »


Yes, but it would come at the cost of coding efficiency: you would have these abstract nonexistent colors, plus a lot of junk colors lying between them outside the horseshoe.

So the (hopefully last) question would be:
When the raw file leaves the camera, has the mapping of the RGB values to a usable colorspace already been done, or is it done in the raw developer?


ErikKaffehr
« Reply #16 on: November 05, 2013, 02:42:53 PM »

Raw developer, I think. But the color is essentially set by the CFA. The RGB values are integrals over the spectrum of QE times the subject spectrum. The RGB signals are then converted into normalized values and multiplied by a color transformation matrix. That matrix is part of the raw file, I think; there is a lot of info in the raw file.
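The linear step described here can be sketched as follows. The matrix values below are invented for illustration; a real converter takes the camera's own matrix from the raw file's metadata or from a camera profile:

```python
import numpy as np

# Hypothetical camera-RGB -> XYZ matrix; real values come from the raw
# file's metadata or the converter's camera profile.
CAM_TO_XYZ = np.array([
    [0.70, 0.15, 0.10],
    [0.25, 0.70, 0.05],
    [0.02, 0.15, 0.90],
])

def develop_linear(raw_rgb, white_level=65535):
    """Normalize integer raw RGB to [0, 1] and apply the color matrix,
    i.e. the linear part of a raw conversion (demosaicing omitted)."""
    normalized = np.asarray(raw_rgb, dtype=float) / white_level
    return CAM_TO_XYZ @ normalized

# A fully exposed "white" pixel; the middle (Y-like) row sums to 1.0 here.
print(develop_linear([65535, 65535, 65535]))
```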

Best regards
Erik



Christoph C. Feldhaim
« Reply #17 on: November 05, 2013, 02:47:04 PM »

That makes sense, and would possibly explain why having different camera profiles for different jobs is a good idea.

hjulenissen
Sr. Member, Posts: 1683
« Reply #18 on: November 05, 2013, 02:54:23 PM »

Raw developer, I think. But the color is essentially set by the CFA. The RGB values are integrals over the spectrum of QE times the subject spectrum. The RGB signals are then converted into normalized values and multiplied by a color transformation matrix. That matrix is part of the raw file, I think; there is a lot of info in the raw file.
Yes. The camera might be seen as a set of three "bandpass filters" that each pass part of the spectrum.

When developing a raw file (done inside the camera for in-camera JPEGs, in your computer's raw developer otherwise), the developer basically has to map, using its knowledge of this camera-specific response, to some standard representation suitable for further editing and display or print.

The camera does not have very detailed information about the "visible spectrum", nor does it record its three channels in exactly the same way a human does (I believe there may be slight inter-human differences as well). Thus two colors that appear different in person might be recorded as the exact same set of RGB values, or two objects that appear to have the same color in person might be recorded as two different sets of RGB values. So it is an approximate process where a lot of engineering and subjective effort can be spent (see Foveon sensors).

The selection of camera filters presents some challenges: finding the right materials for a given response can be non-trivial, given that you want to deposit small dots in the CFA on an assembly line at low cost, for a camera that lasts 10 years or more. Also, having the "perfect" color response might counter the desire for low-noise luminance recordings; by sacrificing one, you might improve the other (Canon vs Sony might be relevant examples).
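The metamerism point (two different spectra recorded as the same RGB triple) can be demonstrated directly: any spectral component in the null space of the three filter responses is invisible to the camera. Everything below is invented for illustration (Gaussian filters, 31 spectral bands):

```python
import numpy as np

bands = np.linspace(400, 700, 31)   # nm, 10 nm steps

# Three hypothetical Gaussian "bandpass" channel responses (B, G, R).
filters = np.stack(
    [np.exp(-((bands - center) / 35.0) ** 2) for center in (450, 550, 610)]
)

spectrum_a = np.exp(-((bands - 580) / 30.0) ** 2)   # some smooth spectrum

# Any vector orthogonal to all three filter rows produces zero response,
# so adding it yields a metamer: a different spectrum, identical RGB.
# (The result may dip negative, so it is a mathematical metamer only.)
_, _, vt = np.linalg.svd(filters)
spectrum_b = spectrum_a + 0.2 * vt[10]   # vt rows 3..30 span the null space

rgb_a = filters @ spectrum_a
rgb_b = filters @ spectrum_b
print(np.allclose(rgb_a, rgb_b))   # the camera cannot tell them apart
```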

-h
Christoph C. Feldhaim
« Reply #19 on: November 05, 2013, 03:00:37 PM »


Yes, humans definitely differ; that's why the standard observer was created back in the day.

It's funny: I didn't manage to find any data about the spectral response of the camera channels, though it should be easy to create it for any camera by photographing a spectrally defined color set and doing some math. (I think a principal component analysis for every color channel should do the job.)
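The proposal above can be sketched as a regularized least-squares fit (rather than PCA): photograph patches with known reflectance spectra, record one channel's response per patch, and solve for that channel's spectral sensitivity. All data here is synthetic and the patch count, band count, and regularization weight are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_patches, n_bands = 24, 31   # e.g. a 24-patch chart, 400..700 nm in 10 nm steps

# Synthetic "known" reflectance spectra, one row per patch.
spectra = rng.uniform(0.05, 0.95, (n_patches, n_bands))

# A "true" channel sensitivity we pretend not to know (Gaussian band-pass).
wavelengths = np.linspace(400, 700, n_bands)
true_response = np.exp(-((wavelengths - 550) / 40.0) ** 2)

# Camera readings: integral of spectrum * sensitivity, plus a little noise.
readings = spectra @ true_response + rng.normal(0, 0.01, n_patches)

# 24 equations, 31 unknowns: underdetermined, so regularize (ridge).
lam = 0.1
design = np.vstack([spectra, lam * np.eye(n_bands)])
target = np.concatenate([readings, np.zeros(n_bands)])
estimate, *_ = np.linalg.lstsq(design, target, rcond=None)

# Inspect how well the recovered curve matches the true sensitivity.
print(np.corrcoef(estimate, true_response)[0, 1])
```

With only 24 patches for 31 unknowns the fit cannot be exact; more patches, smoother regularization, or a monochromator sweep would tighten it.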


