Author Topic: What is the quantitative definition of "gamut" with respect to 'bit depth'  (Read 2312 times)
bwana
« on: June 06, 2014, 08:07:50 PM »

Of course I've read that gamut is the range of colors that a display or device can reproduce. And of course color is defined by the wavelength of light.

So if a device can display more colors, it has a 'wider' gamut. I have read that wider-gamut color spaces simply have more saturated colors. So to encode more colors, a panel has to use more numbers, which implies that it needs more bits.

But I have read countless articles stating that bit depth has nothing to do with gamut.
What does that mean? You cannot represent a larger collection of colors without more bits.

Then we use the same word 'gamut' to discuss color spaces, and here I really get confused. sRGB and Adobe RGB are said to have the same number of shades that can be specified, but the range of shades is greater for Adobe RGB. In this context, gamut refers to the 'breadth' of colors in the collection, like two different boxes of crayons: each box may have 32 crayons, but one box might be just shades of red, covering fewer frequencies. Does this mean, in a quantitative sense, that Adobe RGB should consist of colors that include higher and lower frequencies of light than sRGB?
Sheldon N
« Reply #1 on: June 06, 2014, 09:19:22 PM »

Think of this analogy.... it's like a staircase.

Gamut is how tall the staircase is. Bit depth is how many steps there are on the way to the top.

You can have a very tall staircase with just a few big steps. You can have a shorter staircase with lots of little steps. The number of steps is not tied to the height of the staircase.

Gamut is how saturated a color can be displayed; bit depth is how finely the transitions between colors and tones are broken down (think smoother gradients).
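The staircase analogy can be put into numbers. A minimal sketch, where the "heights" are made-up illustrative values rather than anything measured:

```python
# Staircase analogy: the same number of steps (bit depth) can span
# different heights (gamut). Only the step size changes.
def step_size(gamut_height: float, bits: int) -> float:
    """Size of one quantization step for a given 'height' and bit depth."""
    return gamut_height / (2 ** bits - 1)

# Two hypothetical staircases, both encoded with 8 bits (255 steps):
narrow = step_size(1.0, 8)   # shorter staircase -> smaller steps
wide   = step_size(2.0, 8)   # taller staircase  -> bigger steps

assert wide == 2 * narrow    # twice the height, same steps, double the step size
```

The step count never changes with the height; only the spacing between steps does.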

bwana
« Reply #2 on: June 06, 2014, 09:29:15 PM »

Thanks. Going with this analogy, don't I need bigger numbers to specify a taller staircase? Hence a larger gamut should require more bits to describe its breadth, no?
BartvanderWolf
« Reply #3 on: June 07, 2014, 05:10:38 AM »

Quote
Thanks. Going with this analogy, don't I need bigger numbers to specify a taller staircase? Hence a larger gamut should require more bits to describe its breadth, no?

Hi,

No. Zero can be the minimum and one can be the maximum. Likewise, 0 can be the same minimum and 255 (8 bits/channel) can be the same maximum; more bits just allow finer intermediate gradations to be quantified. The same goes for 0 and 65535 (16 bits/channel).
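Bart's point can be shown directly: normalize any integer code by its maximum and the endpoints coincide at every bit depth, while the smallest representable step shrinks as bits are added.

```python
# The endpoints of the encoding are fixed regardless of bit depth;
# only the fineness of the intermediate gradations changes.
def to_normalized(code: int, bits: int) -> float:
    return code / (2 ** bits - 1)

# Same minimum and same maximum at 8 and 16 bits per channel:
assert to_normalized(0, 8) == to_normalized(0, 16) == 0.0
assert to_normalized(255, 8) == to_normalized(65535, 16) == 1.0

# But the smallest step above zero differs:
print(to_normalized(1, 8))    # ~0.0039
print(to_normalized(1, 16))   # ~0.000015
```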

Cheers,
Bart
bjanes
« Reply #4 on: June 07, 2014, 06:08:53 AM »

Quote
Thanks. Going with this analogy, don't I need bigger numbers to specify a taller staircase? Hence a larger gamut should require more bits to describe its breadth, no?

The analogy of a staircase is often used for explaining the encoding of the dynamic range of a digital sensor, where the DR is the height of the staircase and the bit depth determines the size of the individual steps. However, the analogy breaks down in the case of linear integer encoding such as is used for the raw files of a digital camera. In this case, the size of the steps is fixed in linear terms but variable in perceptual terms: each successive stop down contains half as many levels as the one above. The DR of the encoding is therefore limited by the bit depth, and encoding N stops of DR requires a bit depth of N bits.

This limitation can be overcome by the use of a power or log function, as explained in this excellent article by Greg Ward.
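The uneven distribution of levels across stops in a linear encoding is easy to tabulate. A simple illustration (12 bits chosen arbitrarily, not tied to any particular camera):

```python
# In linear integer encoding, each stop down contains half the code values
# of the stop above it, which is why N stops of DR need roughly N bits.
bits = 12
levels = 2 ** bits  # 4096 code values

for stop in range(bits):
    top = levels >> stop      # upper bound of this stop's code values
    bottom = top // 2         # lower bound (one stop = a halving of signal)
    print(f"stop {stop + 1}: {top - bottom} levels")

# The brightest stop gets 2048 levels; the darkest of the 12 gets just 1.
```

A power or log encoding redistributes those levels more evenly, which is the point of the article bjanes refers to.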

Bill

« Last Edit: June 07, 2014, 06:27:49 AM by bjanes »
BartvanderWolf
« Reply #5 on: June 07, 2014, 06:42:03 AM »

Quote
The DR of the encoding is limited by the bit depth, and encoding of N stops of DR requires a bit depth of N bits.

This limitation can be overcome by the use of a power or log function, as explained in this excellent article by Greg Ward.

Indeed, and Bruce Lindbloom offers a quick levels calculator. But this is still about the number and distribution of steps (= the precision of intermediate values), not the gamut (= the range limits).

However, the loss from such a conversion (from linear to a non-linear working-space encoding, e.g. gamma with a straight-line toe, and then to another output colorspace) exceeds what the simple calculation suggests. The re-quantization adds aliasing artifacts, which lead to banding in smooth gradients.
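The re-quantization loss is easy to demonstrate: round-tripping 8-bit codes through a tone-curve change leaves fewer than 256 distinct output levels. The gamma values 2.2 and 1.8 below are chosen purely for illustration:

```python
# Re-quantizing 8-bit values through a tone-curve change loses levels:
# some input codes collapse onto the same output code, and gaps open up
# elsewhere -- both of which show as banding in smooth gradients.
codes = range(256)
converted = {round(((x / 255) ** (2.2 / 1.8)) * 255) for x in codes}

print(len(converted))      # fewer than 256 distinct levels survive
assert len(converted) < 256
```

Working at 16 bits (or in floating point, as Bart suggests) makes the surviving levels dense enough that the banding becomes invisible.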

Ultimately, we'll need to migrate to a floating-point representation of color coordinates in working spaces to minimize losses due to the lack of precision when converting to other colorspaces. Raw converters such as RawTherapee already use floating point for their internal calculations.

Cheers,
Bart
« Last Edit: June 07, 2014, 06:54:01 AM by BartvanderWolf »
bwana
« Reply #6 on: June 07, 2014, 09:37:09 AM »

Thank you for your explanations. I do get the fact that gamut is the way we describe the set of colors in a collection. But quotes like this confuse me:
Quote
...In this case, the size of the steps is fixed in linear terms but variable in perceptual terms...

The analogy that comes to my mind is a deck of cards. If we propose that sRGB is the set of hearts (13 cards), then which better describes Adobe RGB:
1) the set of hearts and diamonds, or
2) a specific subset of hearts and diamonds consisting of thirteen cards?

Adobe RGB includes more colors than sRGB, but at the same time I have read that each color space is limited to the number of colors that the bit depth of an image will allow. So it seems that color spaces are only definable by real numbers (an infinite set within an upper and lower bound), and somehow this gets mapped to a finite set of integers as defined by the bit depth of the image.

Another more flavorful analogy might be ice cream. sRGB is the infinite collection of all shades of light-colored ice cream and Adobe RGB is the infinite collection of all light- and dark-colored ice creams. An 8-bit image is a small cone and a 16-bit image is a large cone.

So are the colors that exist in one Adobe RGB image a different set than the colors that exist in another Adobe RGB image? And is the same true for sRGB?
BartvanderWolf
« Reply #7 on: June 07, 2014, 09:55:57 AM »

Quote
Adobe RGB includes more colors than sRGB, but at the same time I have read that each color space is limited to the number of colors that the bit depth of an image will allow. So it seems that color spaces are only definable by real numbers (an infinite set within an upper and lower bound), and somehow this gets mapped to a finite set of integers as defined by the bit depth of the image.

That's the essence. For the outer boundaries, you can also think of a balloon, with dots on it describing the discrete bit positions that are available to precisely specify color coordinates. One can blow up the balloon to get a larger gamut, but the dots become less precise as they grow apart, with gaps in between. One can also deflate the balloon: the gamut gets smaller, but the dots get closer and more densely spaced, i.e. more precise. The dots are also inside the balloon, not only on the hull.

Cheers,
Bart
Jim Kasson
« Reply #8 on: June 07, 2014, 10:32:31 AM »

Quote
And of course color is defined by the wavelength of light.

This may be more detail than you need, but color is not a phenomenon of physics, but rather one of psychology. Color is not defined by the spectral characteristics of the stimulus -- the wavelength(s) of light -- but by the human response to those stimuli. Since there is a large range of commonality in the way that men see color, and an even larger range in the case of women, we don't need people to turn spectra into colors; we can do it with sensors and tables formed according to the results of experiments on people. That's how light, a phenomenon of many dimensions, gets transformed into color, which has just three.

Before we can talk about the relationship of gamut to bit depth, we need to pick a color space, which defines the three axes. Let's pick sRGB. Think of it as a cube, with one corner black, one corner white, three corners particular shades of red, green, and blue, and three corners equal mixtures of those that we call cyan, magenta, and yellow. Those eight vertices, and the planes that form the faces of the cube, define the gamut of sRGB: it can't encode colors outside of that cube. You will usually see gamuts displayed in other than the color space under consideration, but I'm going to ignore that here to keep it simple.

Now to the bit depth. If the bit depth of each color is one bit, we can only encode the corners of the cube: one red, one green, one blue, etc. Now let's say the bit depth is two for each color plane. Along the red axis, we can have black, a dark red, a lighter red, and the full-on red at the corner of the cube. The same goes for blue, green, cyan, magenta, and yellow. We can also get four neutral-axis colors by setting R=G=B: black, dark gray, lighter gray, and white. There are additional colors with other combinations of the two bits in each color plane, for a total of 64 possibilities. The colors in the interior of the cube don't increase the gamut of the color space measured in itself, though you could argue either way about the colors on the faces. That's where bit depth and gamut interact, albeit weakly.

If we use three bits for each color plane, we can make 8x8x8 = 512 different colors. With four, we can get 16x16x16 = 4096 colors. With 8 bits, it's more than 16 million. Also, at eight bits we have more than 65,000 colors on each of the faces, and the weak interaction between gamut and bit depth becomes inconsequential.
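Jim's counts follow directly from the cube picture, since each channel contributes 2^bits codes independently:

```python
# Number of distinct codable colors in an RGB cube at a given bit depth:
def color_count(bits_per_channel: int) -> int:
    return (2 ** bits_per_channel) ** 3

assert color_count(1) == 8            # just the cube's eight corners
assert color_count(2) == 64
assert color_count(3) == 512
assert color_count(4) == 4096
assert color_count(8) == 16_777_216   # "more than 16 million"

# Colors on one face of the cube at 8 bits (one channel pinned to 0 or max):
assert 256 * 256 == 65_536            # "more than 65000"
```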

Jim
« Last Edit: June 07, 2014, 10:36:16 AM by Jim Kasson »

digitaldog
« Reply #9 on: June 07, 2014, 11:40:23 AM »

Quote
But I have read countless articles stating that bit depth has nothing to do with gamut. What does that mean?

A wider gamut means more saturated possible colors, not a greater number of colors. The number of colors is based on the encoding of the data (the number of stairs in the staircase analogy). You can use simple math to divide up the bits that are supposed to represent numbers (of colors): 24-bit color equates to the ability to encode, or define, 16.7 million colors. You can't see that many colors, for one thing. And a 24-bit image of a gray card has a vastly different number of actual colors than a 24-bit image of a field of colorful flowers.

Now make even more steps in the staircase. Use 16 bits per color and we're talking billions of (possible) colors; again, you can't see that number. The gamut, the length of the staircase, hasn't changed, only the steps you use to divide up the distance. More bits means more possible addressable color values, not necessarily a wider range (gamut). In the simplest terms, one has nothing to do with the other, much as having 16 or 32 steps in a 16-foot staircase doesn't alter the distance you have to climb from start to finish.

Bit depth is a theoretical value; that's important to consider. How many colors that you can actually see does a 16-bit-per-color image contain if it's a document filled with R128/G128/B128? In sRGB or ProPhoto RGB? In 8 bits per color or 16 bits per color?

Quote
Adobe RGB includes more colors than sRGB, but at the same time I have read that each color space is limited to the number of colors that the bit depth of an image will allow.
No, it doesn't contain more colors. If both are 24-bit documents, the Adobe RGB (1998) document doesn't contain more colors than its sRGB cousin; it does, however, have a wider gamut. It's able to produce more saturated color, as the RGB primaries that define the color space are farther apart than the sRGB primaries when plotted in a gamut map.

Andrew Rodney
Author “Color Management for Photographers”
http://digitaldog.net/
bwana
« Reply #10 on: June 07, 2014, 02:22:54 PM »

Thank you all for your replies. Because I am a quant and think of everything as a number, I was trying to interpret color spaces as an array of colors. Since any given color space is an infinite collection of real numbers (each number specifying a color) I was trying to ascertain the 'bit depth' of a color space when in fact there can be as many shades as the variable you choose- for example if you assign a 32 bit variable as the container for a color space in a computer program, then you can only name 2^32 colors. A 64 bit variable will hold many more shades. In any event, the actual bounds of the color space are set by numbers that are determined when the device is calibrated with a spectrophotometer. And there are many more of them than can be included in an image. For example, in a 16 bit file, each pixel can assume one of 2^16 colors, that is much less than the colors contained in an array of a single integer variable in a 32 bit OS. I do not know how a color space is actually represented in software - whether there is a math function that can generate it or if it IS simply an array. Also, I do not know how monitors work, I assume they use integer values since they are digital. Analog connections (VGA, composite,etc) are excluded from this discussion.
Jim Kasson
« Reply #11 on: June 07, 2014, 02:48:59 PM »

Quote
Because I am a quant and think of everything as a number, I was trying to interpret color spaces as an array of colors. Since any given color space is an infinite collection of real numbers (each number specifying a color) I was trying to ascertain the 'bit depth' of a color space when in fact there can be as many shades as the variable you choose- for example if you assign a 32 bit variable as the container for a color space in a computer program, then you can only name 2^32 colors. A 64 bit variable will hold many more shades. In any event, the actual bounds of the color space are set by numbers that are determined when the device is calibrated with a spectrophotometer. And there are many more of them than can be included in an image. For example, in a 16 bit file, each pixel can assume one of 2^16 colors, that is much less than the colors contained in an array of a single integer variable in a 32 bit OS. I do not know how a color space is actually represented in software - whether there is a math function that can generate it or if it IS simply an array. Also, I do not know how monitors work, I assume they use integer values since they are digital. Analog connections (VGA, composite,etc) are excluded from this discussion.

I don't think you're going to find clarity thinking of a color as a one-dimensional quantity. Do the same analysis you just did in three dimensions, and I think it will make more sense to you.

Colors in images in computers are -- forgetting indexed color -- represented as triples, usually in 8- or 16-bit unsigned integers, or 32- or 64-bit floating point. Each element of the triple, considered throughout the image, is called a color plane. Thus, images are represented as MxNx3 matrices, or arrays if you prefer, of numbers, where M and N are the dimensions of the image in pixels. The information that says which color space the values are in is supplied as metadata in a separate field.
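A minimal sketch of that representation, using NumPy (the array shape and the metadata field are illustrative, not any particular file format):

```python
import numpy as np

# An image as an M x N x 3 array of 8-bit unsigned integers,
# with the color space carried separately as metadata.
M, N = 4, 6
image = np.zeros((M, N, 3), dtype=np.uint8)
image[..., 0] = 255                  # fill the red color plane

metadata = {"color_space": "sRGB"}   # the numbers alone don't name a color

print(image.shape)                   # (4, 6, 3): M x N pixels, 3 planes
```

The same array with `metadata["color_space"] = "Adobe RGB (1998)"` would describe different colors, which is Jim's point about the metadata field.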

Digital monitors take color data in three 8-bit-wide chunks, which together define a pixel value. A few monitors take data in three 10-bit chunks; confusingly, they are referred to as 30-bit monitors.

In what is commonly referred to as a 16-bit file, each pixel can assume one of (2^16)*(2^16)*(2^16) colors, not (2^16). The 16 bits refer to the depth of each color plane.

Jim
« Last Edit: June 07, 2014, 02:51:52 PM by Jim Kasson »

bwana
« Reply #12 on: June 08, 2014, 03:57:18 PM »

I do not really know how colors are represented in software. Some places even refer to 8 dimensions (variables): white, black, R, G, B, C, M, Y.
But I get your point. My misunderstanding was that bit depth can apply to images and that 'bit amount' applies to the size of the variable used for color spaces (when they are represented in computers). My conclusion from the preceding discussion is that the number of colors in a given image is defined by its bit depth. The choice of colors the image is allowed to show is represented by its color space. When you convert from one color space to another, the numbers that each pixel contains that define its colors are actually changed so that the colors are as close as possible. When a color space is assigned to an image, the numbers do not change, and the colors the pixels actually contain are therefore different, because the same numbers in different color spaces are different colors. Additional confusion arises in the conversion process because the user is given some control by assigning a colorimetric or perceptual rendering intent, relative and absolute, but that's a whole 'nother can of worms.
Jim Kasson
« Reply #13 on: June 08, 2014, 04:14:53 PM »

Quote
I do not really know how colors are represented in software. Some places even refer to 8 dimensions (variables): white, black, R, G, B, C, M, Y.

Colors are represented in three dimensions. Colorants, typically those used in printing, are represented in the same number of dimensions as the number of inks. Usually, that all takes place "underneath the covers" of the driver and printer, and the photographer doesn't have to worry about it, and can't do much about it if she wants to.

I think we're talking about colors here, right?

Quote
But I get your point. My misunderstanding was that bit depth can apply to images and that 'bit amount' applies to the size of the variable used for color spaces (when they are represented in computers).

I'm not getting the distinction between "bit depth" and "bit amount". Can you elaborate?

Quote
My conclusion from the preceding discussion is that the number of colors in a given image is defined by its bit depth. The choice of colors the image is allowed to show is represented by its color space.

I wouldn't have said it that way, but that's right.

Quote
When you convert from one color space to another, the numbers that each pixel contains that define its colors are actually changed so that the colors are as close as possible.

With infinite precision, if the color in question is within the gamut of both color spaces and the conversion is model-based, such as is used to convert from one RGB space to another, the numbers are changed but the color remains identical.
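A sketch of that kind of model-based conversion, using the standard published linear-RGB-to-XYZ matrices for sRGB and Adobe RGB (1998) (D65, rounded to four decimals): the RGB numbers change, but the XYZ coordinates -- i.e. the color itself -- do not.

```python
import numpy as np

# Linear-light RGB -> XYZ matrices (D65); standard published values.
M_srgb = np.array([[0.4124, 0.3576, 0.1805],
                   [0.2126, 0.7152, 0.0722],
                   [0.0193, 0.1192, 0.9505]])
M_argb = np.array([[0.5767, 0.1856, 0.1882],
                   [0.2973, 0.6274, 0.0753],
                   [0.0270, 0.0707, 0.9911]])

srgb = np.array([0.8, 0.2, 0.1])        # a linear sRGB triple
xyz  = M_srgb @ srgb                    # the device-independent color
argb = np.linalg.inv(M_argb) @ xyz      # same color, Adobe RGB coordinates

assert abs(srgb[0] - argb[0]) > 0.01    # the numbers change...
assert np.allclose(M_argb @ argb, xyz)  # ...but the color (XYZ) is identical
```

(A full conversion would also apply each space's tone curve; this shows only the linear-light matrix step.)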

Quote
When a color space is assigned to an image, the numbers do not change, and the colors the pixels actually contain are therefore different, because the same numbers in different color spaces are different colors.

Yes.


Quote
Additional confusion arises in the conversion process because the user is given some control by assigning a colorimetric or perceptual rendering intent, relative and absolute, but that's a whole 'nother can of worms.

It certainly is. Those things are mostly to tell the conversion software what to do with colors that are out of gamut in the target space (rendering intent), and to handle white point differences (relative and absolute).

Jim
« Last Edit: June 08, 2014, 04:48:22 PM by Jim Kasson »

bwana
« Reply #14 on: June 08, 2014, 04:26:17 PM »

You are kind and generous with your time and explanations. Thank you, Jim. The distinction between bit depth and bit amount is not really a distinction at all but an attempt to keep two concepts separate. The bit depth of an image is how many bits are used to specify one of the three colors. Similarly, in representing a color space we need three variables (R, G, B), and these are usually 32-bit or 64-bit integers.
Jim Kasson
« Reply #15 on: June 08, 2014, 04:44:09 PM »

Quote
You are kind and generous with your time and explanations. Thank you, Jim. The distinction between bit depth and bit amount is not really a distinction at all but an attempt to keep two concepts separate. The bit depth of an image is how many bits are used to specify one of the three colors. Similarly, in representing a color space we need three variables (R, G, B), and these are usually 32-bit or 64-bit integers.

I'm still not getting it.

BTW, the most common integer bit depths are 8 and 16 bits, and the integers are unsigned. The most common floating-point bit depths are 32 (for HDR, mostly) and 64 (for scientific work, chosen so that roundoff errors on intermediate quantities will be small).

But to your main distinction: if each of the colors in an image is represented by three 8-bit integers, we say the bit depth of the image is 8.

To represent a color space (the metadata that defines the space, not the colors of the pixels in the image), we need numbers, but we haven't talked about their precision yet. Is that what you're calling bit amount?

Jim

bwana
« Reply #16 on: June 09, 2014, 06:40:12 PM »

Yes. Precision is determined by the size of the variable used.

It comes down to actually knowing how colors are represented in Photoshop's guts. If 0 is black, and 11111111111111111111111111111111 is maximum red as represented by a 32-bit integer, how does the computer know which 'red' to send to the monitor? For example, what would shade no. 2556 look like? Somewhere in there is a table that says to convert this number to another number depending on the color space. Is the colorspace assignment the last thing that happens?

The whole purpose of my question has to do with choosing the working colorspace. When I am manipulating an image (curves, levels, white balance adjustments, gradients, masks, sharpening, etc.) I wonder how much I am mangling the result by working in the 'wrong' colorspace. Everywhere you read that ProPhoto has the widest gamut and should be the colorspace of choice to work in. So I was wondering why. Are there more 'bits' used in representing ProPhoto than sRGB? No, there cannot be. And after all this discussion, I have an intuitive feeling that it shouldn't matter what colorspace I work in. Colors are not really relevant when Photoshop is doing its stuff; we are just manipulating numbers. Colorspace just matters at the end, where I have to choose smoother gradients (sRGB) or a more saturated red (Adobe RGB).
Jim Kasson
« Reply #17 on: June 09, 2014, 07:15:05 PM »

Quote
Yes. Precision is determined by the size of the variable used.

It comes down to actually knowing how colors are represented in Photoshop's guts. If 0 is black, and 11111111111111111111111111111111 is maximum red as represented by a 32-bit integer, how does the computer know which 'red' to send to the monitor? For example, what would shade no. 2556 look like? Somewhere in there is a table that says to convert this number to another number depending on the color space. Is the colorspace assignment the last thing that happens?

The whole purpose of my question has to do with choosing the working colorspace. When I am manipulating an image (curves, levels, white balance adjustments, gradients, masks, sharpening, etc.) I wonder how much I am mangling the result by working in the 'wrong' colorspace. Everywhere you read that ProPhoto has the widest gamut and should be the colorspace of choice to work in. So I was wondering why. Are there more 'bits' used in representing ProPhoto than sRGB? No, there cannot be. And after all this discussion, I have an intuitive feeling that it shouldn't matter what colorspace I work in. Colors are not really relevant when Photoshop is doing its stuff; we are just manipulating numbers. Colorspace just matters at the end, where I have to choose smoother gradients (sRGB) or a more saturated red (Adobe RGB).

Now we're getting to the heart of the matter. First, set your bit depth in Ps to 16. Ps doesn't support a 32-bit integer bit depth, and it is limited in what it can do with 32-bit floating point.

In the bad old days before Ps could do much at 16-bit depth, your concerns were quite valid. With a bit depth of 8, the really big color spaces like CIELab and ProPhotoRGB could often exhibit posterization -- aka contouring -- under some manipulations, as the number of bits wasn't sufficient to quantize the large space finely enough.

This is not a problem with a bit depth of 16 bits. Feel free to use PPRGB if you want.

Don't use sRGB unless your only output medium is a web page. There are plenty of colors you can print on almost any decent printer that can't be represented in sRGB, and you'll be making images with one hand tied behind your back. Get a monitor that can show you the whole AdobeRGB gamut, and work in a space that's at least that big.

When you convert from a working space that's bigger than your output space, the color management software makes (mostly) intelligent choices about how to map the out-of-gamut colors in the working space into the output space. You can help it along using layers and soft proofing. If your working space is smaller than the output space, you're out of luck: you're not going to be able to use the whole output space.

Make sense?

Jim

Jim Kasson
« Reply #18 on: June 09, 2014, 11:20:43 PM »

Quote
Colors are not really relevant when Photoshop is doing its stuff; we are just manipulating numbers.

That may have been true in 1987 when scanner operators were adjusting color on B&W monitors. It is most certainly not true if you want to see on your monitor the effects of what you are doing with the Ps tools.

Jim

MarkM
« Reply #19 on: June 10, 2014, 10:04:01 PM »

Quote
And after all this discussion, I have an intuitive feeling that it shouldn't matter what colorspace I work in. Colors are not really relevant when Photoshop is doing its stuff; we are just manipulating numbers. Colorspace just matters at the end, where I have to choose smoother gradients (sRGB) or a more saturated red (Adobe RGB).

I don't think your intuition is right in this case. The colorspace often matters because we often want to capture or print colors that can only be represented in a larger space. If you are working in a small space like sRGB and go to print on a device with a larger gamut, the conversion from small to large doesn't inflate the colors to fill the larger gamut. Likewise, if you capture a scene with a digital camera able to record a very wide range of color and you choose to work with that image in a small space, the colors that fall outside the smaller gamut are irretrievably lost in the conversion. And it's still true even if we are just manipulating numbers, because in the conversion you multiply by a matrix to get from one space to another. Going from a large space to a small space can produce values larger than the space can represent (i.e. > 255 in 8-bit images); these are then clipped or scaled depending on the rendering intent.
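The clipping case can be sketched with the same standard published D65 matrices mentioned above: a fully saturated Adobe RGB red, expressed in linear sRGB, needs a red value greater than 1.0, which sRGB cannot represent.

```python
import numpy as np

# Standard published linear-light RGB -> XYZ matrices (D65, 4 decimals).
M_srgb = np.array([[0.4124, 0.3576, 0.1805],
                   [0.2126, 0.7152, 0.0722],
                   [0.0193, 0.1192, 0.9505]])
M_argb = np.array([[0.5767, 0.1856, 0.1882],
                   [0.2973, 0.6274, 0.0753],
                   [0.0270, 0.0707, 0.9911]])

adobe_red = np.array([1.0, 0.0, 0.0])                # saturated Adobe RGB red
srgb = np.linalg.inv(M_srgb) @ (M_argb @ adobe_red)  # express it in linear sRGB

print(srgb)                     # the red channel comes out greater than 1.0
assert srgb[0] > 1.0            # out of the sRGB gamut...
clipped = np.clip(srgb, 0.0, 1.0)                    # ...so it gets clipped
```

This is the simple "clip" case; a perceptual rendering intent would instead compress a whole range of colors to keep their relationships intact.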

As far as computer representation goes, everything I've ever seen is pretty simple. 24-bit RGB images are represented by 3 bytes per pixel, giving 8 bits per color channel. These are often typed arrays for efficiency; for example, Chrome and Safari deal with Uint8ClampedArray objects that are just a string of bytes, one per pixel channel.
