Author Topic: RGB values of an 18% Gray Card?  (Read 29782 times)
Peter_DL
« Reply #20 on: April 15, 2008, 04:37:59 PM »
Quote
The middle gray can be important for calibration and proofing purposes as explained in some detail in this post ISO Standards for Museum Imaging
Bill
Interesting:
so there’s now a ProStarRGB (http://www.cdiny.com/color_profiles.html) space = ProPhotoRGB + L* TRC.
A while ago we created L*-beta-RGB = Bruce Lindbloom’s beta-RGB + L* TRC.

Anyway, a straight 2.2 gamma was never a good choice, for several reasons:
see ProPhotoRGB with its 1.8 gamma
see sRGB with a "shadows bump", i.e. a gamma < 2.2 near black
see Adobe’s color engine, which suppresses a straight 2.2 gamma in the deep shadows by means of a slope-limiting feature

However, as we know, gamma was never invented to comply with human perception, but to compensate for the non-linear response (grid voltage vs. luminance on screen) of CRTs:
>> In early days of the television industry it was decided that the non-linearity of the cathode ray tube (CRT) called gamma, would be taken into account in the video-camera by applying an inverse transfer function to the video-signal in order to compensate the CRT gamma. This way expensive non-linear signal processing was not needed in the TV receivers so the cost of the receivers was kept at minimum. At those days non-linear analog signal processing hardware was very expensive and difficult to realize.  << by Timo Autiokari
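The compensation Autiokari describes can be sketched numerically; assuming an idealized pure power-law CRT with gamma 2.2 (no offset or flare), the camera-side inverse transfer makes the end-to-end system linear:

```python
# Idealized sketch (assumption: a pure power-law CRT, no offset or flare).
CRT_GAMMA = 2.2

def camera_encode(scene_luminance):
    """Inverse transfer applied in the camera to a 0..1 signal."""
    return scene_luminance ** (1.0 / CRT_GAMMA)

def crt_display(signal):
    """CRT response: displayed luminance rises as signal ** gamma."""
    return signal ** CRT_GAMMA

# The two non-linearities cancel, so the cheap receiver needs no correction:
for y in (0.01, 0.18, 0.5, 1.0):
    assert abs(crt_display(camera_encode(y)) - y) < 1e-9
```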

Bill - just taking the opportunity to compile some related links.


Quote
Well considering that the data is in a linear encoding, that the numbers [in LR] are using a 2.2 encoding, there's a disconnect here and L* is pretty immaterial. The numbers and histogram are based on a color space that is neither being used under the hood or what you'd end up with in Photoshop.  But that's a different issue, one that sometimes troubles me just a tad.
Agreed - also regarding "troubles me just a tad".

As I understand it, this ultimately means taking care with the "Blacks" slider in LR, and in ACR as well, so as not to collect too many pixels in the left corner close to black clipping.

Peter

--
bjanes
« Reply #21 on: April 15, 2008, 05:40:55 PM »

Quote
Well considering that the data is in a linear encoding, that the numbers are using a 2.2 encoding, there's a disconnect here and L* is pretty immaterial. The numbers and histogram are based on a color space that is neither being used under the hood or what you'd end up with in Photoshop.  But that's a different issue, one that sometimes troubles me just a tad.

Yes, the Lightroom pixel reporting doesn't make good sense to me either. I much prefer the system that Mr. Knoll uses in ACR, where you can select the space into which the file will be rendered (usually ProPhoto RGB for me). As long as one understands what is being reported, allowances can be made. L* may not be of importance to you, but it is to many. RGB color spaces based on L* rather than gamma are coming into increasing use, especially in Europe. The ISO has just approved eciRGBv2.

Bill
digitaldog
« Reply #22 on: April 15, 2008, 05:41:32 PM »

Quote
Interesting:
so there’s now a ProStarRGB space = ProPhotoRGB + L* trc.
A while ago we created L*-beta-RGB = Bruce Lindbloom’s beta-RGB + L* trc

I'm going to have to ping Bruce and see if he did this purely as an interesting exercise or whether he feels it's useful, because just about every color geek I respect in the US thinks this L* stuff is a bunch of baloney. There was a rash of long but interesting discussions last month on all this, with some good posts by both Karl Lang and Chris Murphy which basically point out that this is a solution in search of a problem. It's a big deal in Europe, but the bottom line is, no one there from the list had anything useful to prove that this is of use.

Lars Borg (the color scientist at Adobe, a guy who knows a few things about color) wrote this:

Quote
L* is great if you're making copies. However, in most other
scenarios, L* out is vastly different from L* in.  And when L* out is
different from L* in, an L* encoding is very inappropriate as
illustrated below.

Let me provide an example for video. Let's say you have a Macbeth
chart. On set, the six gray patches would measure around  L* 96, 81,
66, 51, 36, 21.

Assuming the camera is Rec.709 compliant, using a 16-235 digital
encoding, and the camera is set for the exposure of the Macbeth
chart, the video RGB values would be 224,183,145,109,76,46.

On a reference HD TV monitor they should reproduce at L* 95.5, 78.7,
62.2, 45.8, 29.6, 13.6.
If say 2% flare is present on the monitor (for example at home), the
projected values would be different again, here: 96.3, 79.9, 63.8,
48.4, 34.1, 22.5.

As you can see, L* out is clearly not the same as L* in.
Except for copiers, a system gamma greater than 1 is a required
feature for image reproduction systems aiming to please human eyes.
For example, film still photography has a much higher system gamma
than video.

Now, if you want an L* encoding for the video, which set of values
would you use:
96, 81, 66, 51, 36, 21 or
95.5, 78.7, 62.2, 45.8, 29.6, 13.6?
Either is wrong, when used in the wrong context.
If I need to restore the scene colorimetry for visual effects work, I
need 96, 81, 66, 51, 36, 21.
If I need to re-encode the HD TV monitor image for another device,
say a DVD, I need 95.5, 78.7, 62.2, 45.8, 29.6, 13.6.

In this context, using an L* encoding would be utterly confusing due
to the lack of common values for the same patches.  (Like using US
Dollars in Canada.)
Video solves this by not encoding in L*. (Admittedly, video encoding
is still somewhat confusing. Ask Charles Poynton.)

When cameras, video encoders, DVDs, computer displays, TV monitors,
DLPs, printers, etc., are not used for making exact copies, but
rather for the more common purpose of pleasing rendering, the L*
encoding is inappropriate as it will be a main source of confusion.

Are you planning to encode CMYK in L*, too?

Lars
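Lars's numbers can be checked with a short script. This is a minimal sketch, assuming the CIE L*-to-Y inverse relation and the Rec.709 OETF (1.099·Y^0.45 − 0.099) with 16-235 "studio swing" 8-bit quantization:

```python
# Sketch: reproduce the gray-patch video codes from the quote above.
# Assumptions: CIE L* inverse (valid for L* > 8) and the Rec.709 OETF
# branch for Y >= 0.018, quantized to 8-bit studio swing (16-235).

def L_star_to_Y(L):
    return ((L + 16.0) / 116.0) ** 3

def rec709_oetf(Y):
    return 1.099 * Y ** 0.45 - 0.099

def video_code(L):
    return round(16 + 219 * rec709_oetf(L_star_to_Y(L)))

codes = [video_code(L) for L in (96, 81, 66, 51, 36, 21)]
print(codes)  # -> [224, 183, 145, 109, 76, 46], matching Lars's values
```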

Andrew Rodney
Author “Color Management for Photographers”
http://digitaldog.net/
digitaldog
« Reply #23 on: April 15, 2008, 05:58:35 PM »

Quote from: bjanes, Apr 15 2008, 03:40 PM
Yes, the Lightroom pixel reporting doesn't make good sense to me either. I much prefer the system that Mr. Knoll uses in ACR, where you can select the space into which the file will be rendered (usually ProPhoto RGB for me).

The problem is workflow. In ACR, you pick the encoding color space and boom, you end up with the rendered image in Photoshop based on what you asked for. In LR, you may never leave it, since you can print and then export that data at any point. So do you make a UI of preferences that says "use this encoding color space for all the work"? Or do you use some non-existent color space for the numbers and histogram because you fear showing the actual linear-encoded data would confuse the users the product is aimed at? Personally, I think it might be useful to see ProPhoto RGB linear values, but the histogram would be whacked out.

I also really like how I can SEE in the ACR histogram the effect of the encoding color space option I make in terms of gamut clipping. Of course, we could just force all LR users to work with only ProPhoto RGB (something that was suggested internally by one alpha tester).

I agree it's not as seamless as it could be. At first I was put off by the scale, but now I'm totally hip to percentages; I'd just like them to be based on something I'm actually doing. Melissa RGB is a name for an imaginary color space that no one is using, although anyone who wishes could build one in Photoshop. But why?

Quote
L* may not be of importance to you, but it is to many. RGB color spaces based on L* rather than gamma are coming into increasing usage, especially in Europe. The ISO has just approved eciRGBv2.

I didn't make a personal judgement about whether it was useful to me or not, only that a pretty smart group of people on this side of the pond think it's much ado about nothing; and, as discussed on the ColorSync list, they challenged proponents to provide some empirical evidence of its use, something that was ignored. Case in point, this post from Chris Murphy:

Quote
I have no problem with research and testing these ideas. My complaint  
is the grandiose language used in unproven statements, and the  
premature establishment and recommendation by the ECI and others for  
real world workflows that are not, and should not be, test beds for  
research. I find it inappropriate.

It is in exactly the wrong order of the way things are to be done in a  
scientific manner. Research, hypothesis, test, refinement, theory  
development including beta test. Then publish a recommendation for end  
users, while working on making the recommendation an ISO standard.

I'm quite frankly mystified how eciRGBv2 became an ISO standard and  
what the point is, absent any semblance of compelling information on  
why yet another flavor of the year RGB color space is necessary, and  
that the goals it wishes to achieve are relevant and also not possible  
any other way.

My complaint is primarily about the process followed thus far,  
although increasingly it is based on what is being presented (i.e.  
what is not being presented) to back up grandiose claims. Case in  
point from the ECI web site:

"'conversion losses' between data and the human eye are thus a matter  
of the past"

"substantial advantages in the shadows"

"risk of posterization effects –  is significantly reduced"

"Errors caused by colour space conversions – are minimized as much as  
currently technically feasible"

These are not proven. There is slim to no context given. No test  
parameters have been published so people can reproduce the test and  
the findings. And there appears to be no metric. These grand
statements are based on what empirical data?

The obvious conclusion most end users will read into this is that the  
opposite must be true with respect to eciRGBv1 and that is:

There are conversion losses that are visible
There are substantial disadvantages in the shadows.
There are significant risks for posterization.
There are errors in color space conversions.

Now these are obviously not true with respect to eciRGBv1 workflows.

It is confusing whether L* based intermediate space advocates are  
talking about 8bpc workflows, 16bpc workflows, or both. The arguments  
would naturally be different, but advocates seem to flip flop on this  
and just say it's great for both, missing the relevance of bitdepth in  
the entire discussion.

Peter_DL
« Reply #24 on: April 15, 2008, 06:36:07 PM »

Quote
I'm going to have to ping Bruce and see if he did this purely as an interesting exercise or whether he feels it's useful, because just about every color geek I respect in the US thinks this L* stuff is a bunch of baloney. There was a rash of long but interesting discussions last month on all this, with some good posts by both Karl Lang and Chris Murphy which basically point out that this is a solution in search of a problem.
Actually, Karl Lang created this space back in 2005 (with Bruce L.'s permission) after I bothered him long enough. It happens that this is just a local call for me. Perhaps you remember the initial discussion in the robgalbraith forum.


Quote
Its a big deal in Europe, but the bottom line is, no one there from the list had anything useful to prove that this is of use.
No problem. It's after midnight here, so allow me just to quote from an earlier discussion (with Gernot Hoffmann):

>> Further, for introduction: tonal settings in Photoshop can be calculated on a pure RGB basis, completely ignoring the underlying gamma. I won't say that Photoshop operates this way, but it's possible to describe it this way.
For example, moving the Levels black-point slider to the right subtracts a respective value from all RGB data in the deep shadows. The more general equation is:
RGBout = a x (RGBin - BP)
a = 255 / (255 - BP)
where BP is the Levels black-point setting.

Now let's take a brief, 3-step grayscale in Adobe RGB with dark grays of RGB 9, 16 and 21. In a copy file (!) converted to sRGB (!), this makes 4, 7 and 13. Understandable, due to the lower local gamma of sRGB. Nonetheless, both grayscales should look just the same on screen.

The next step is the aforementioned 'image editing practice': move the Levels black-point slider to the value where the first patch reaches zero. In the case of Adobe RGB, the BP setting is 9. In the case of sRGB, it is just 4 (see the numbers above).

Now, what is the result? The grayscale in Adobe RGB turns into RGB 0, 7 and 12, whereas the copy file in sRGB yields a new grayscale of 0, 4 and 9. Big difference!

Zero to 12 in Adobe RGB corresponds to an L* of 1 or below: a critical range, and the grayscale tends to disappear on screen. Zero to 9 in sRGB goes up to L* 3, so the steps should stay clearly visible on screen or in print.

Just give it a try.<<
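For illustration, the black-point arithmetic quoted above can be put into a few lines (hypothetical helper function, simple rounding assumed):

```python
# Sketch of the quoted Levels math: RGBout = a * (RGBin - BP), a = 255 / (255 - BP)
def levels_blackpoint(rgb, bp):
    a = 255.0 / (255.0 - bp)
    return [max(0, round(a * (v - bp))) for v in rgb]

# The Adobe RGB grayscale from the example, with BP = 9:
print(levels_blackpoint([9, 16, 21], 9))  # -> [0, 7, 12], as in the text
```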

Another aspect is that a numerically lower BP setting is less damaging for color integrity (http://www.c-f-systems.com/Phototips.html#RAW) a la David Dunthorn.


Andrew - we really don't have to argue here. At the end of the day, I find the implementation in LR/ACR (acting on linear data while showing a gamma-encoded histogram) better than combining the two contradictory requirements in one working space. In Photoshop itself, though, I would not use a straight 2.2-gamma space, even though I notice that many like Adobe RGB.

Please note, I’m not at all talking about this “L* monitor calibration & profiling” thing here.

Peter

--
digitaldog
« Reply #25 on: April 15, 2008, 07:21:17 PM »

Quote
Actually Karl Lang created this space back in 2005 (with permission of Bruce L.) after I bothered him long enough.

Well, credit to you for bothering him; apparently, more recently, he's not at all hip to the L* idea:
Quote
  Chris is also right, a printing press has dot gain, the input space 
of a press (the neg or plate) is transformed by dot gain on press. If 
you use an L* curve to encode the file you will create significant 
loss when you go through the profile to compensate for the dot 
gain. A 1.8 gamma brightens the mid-dark tones which the profile 
would have to do to compensate for dot gain anyway. Because the 
encoding curve is much closer to the profile compensated curve much 
less loss will occur. This is the reason ColorMatch is 1.8 (it's also 
why in 1982 Apple and Xerox PARC chose 1.8 [in their case laser 
printer dot gain.])

Quote
Andrew, - we really don’t have to argue here.

Who's arguing? So far, I'm just posting older posts from another public forum on the subject. I have no opinion, but I think what Chris said about proof of concept would be useful.

Peter_DL
« Reply #26 on: April 16, 2008, 01:05:59 AM »

Quote
Well, credit to you for bothering him; apparently, more recently, he's not at all hip to the L* idea:
Sorry - confused the name; it was the other Karl: Karl Koch of basICColor (http://www.color-solutions.de/english/index_E.htm).

Peter

--
Peter_DL
« Reply #27 on: April 16, 2008, 10:12:25 AM »

Quote
I have no opinion, but I think what Chris said about proof of concept would be useful.
We could turn the question around and ask why Adobe uses this linear ProPhoto space as an intermediate in LR/ACR.

Of course, I can't answer for your engineers, but there's an inherent issue with black-point setting, which is most pronounced in a regular 2.2-gamma space such as Adobe RGB, and less pronounced in any 1.x-gamma space, starting with 1.0 ProPhoto but also including sRGB, L*-type RGB working spaces and most of Joseph Holmes' spaces (http://www.josephholmes.com/propages/AboutRGBSpaces.html), which all bear such a low local gamma in the shadows (not to be confused with the midtone gamma as indicated e.g. by Photoshop's Custom Profile function).

Say we have a dark gray of linear RGB 2 which shall be brought to zero. In a linear space, it just requires setting the black-point slider 2 levels to the right. Done.

Now, in a regular 2.2-gamma space, the same dark gray is encoded as RGB 28. That means, to get it to zero (and zero is always the same, independent of gamma), we have to move the Levels black-point slider 28 levels to the right.

Q:  So what?
A:  The problem with this much larger move is a considerably stronger effect on all other RGB data. It's an unnatural operation, so to speak. In detail, this can result in:
a.) posterization of shadow details (see my post above),
b.) an undesired increase in color saturation, starting from the shadows and decreasing in significance towards the highlights,
unless of course you find this particularly pleasing.
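The 2-versus-28 relation above is just the 2.2 power law at work; a one-liner (assuming a pure power-law space, ignoring sRGB's linear toe) shows it:

```python
# Sketch: encode an 8-bit linear value into a pure 2.2-gamma space.
def encode_gamma(linear_8bit, gamma=2.2):
    return round(255 * (linear_8bit / 255.0) ** (1.0 / gamma))

print(encode_gamma(2))  # -> 28: the slider must travel 14x further than in linear
```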

Hope this helps to clarify things.

Peter

--
digitaldog
« Reply #28 on: April 16, 2008, 10:43:42 AM »

Quote
We could turn the question around, to ask why Adobe uses this linear ProPhoto space as an intermediate in LR/ACR.

Intermediate?

Peter_DL
« Reply #29 on: April 16, 2008, 11:24:11 AM »

Quote
Intermediate?

>> White Balance (Color Temperature and Tint), in addition to any adjustments with Camera Raw’s Calibrate tab, actually tweak the conversion from native camera space to an intermediate, large-gamut processing space

(This intermediate space uses ProPhoto RGB primaries and white point, but with linear gamma rather than native ProPhoto RGB gamma of 1.8.)

… It’s simply impossible to replicate these corrections in Photoshop, so it’s vital that you take advantage of Camera Raw to set the white balance and, if necessary, to tweak the calibration for a specific camera.

… The remaining operations are carried out in the intermediate linear-gamma version of ProPhoto RGB.<<
[comment: in the whole context it becomes clear that this excludes the Exposure slider and the Lens tab controls as well].

Cited from: Camera Raw with Adobe Photoshop CS, Peachpit Press, 2005, by Bruce Fraser, pp. 29-30.

Peter

--
digitaldog
« Reply #30 on: April 16, 2008, 11:30:22 AM »

Quote
>> White Balance (Color Temperature and Tint), in addition to any adjustments with Camera Raw’s Calibrate tab, actually tweak the conversion from native camera space to an intermediate, large-gamut processing space

OK, I understand the term now. But I don't know that having access to the data prior to this would be useful (I'll have to think about that one). It might be difficult for users to decipher the color numbers, and certainly the histogram, of the native space (whatever that might be). And if we're using different camera models, would this imply that they would be different? That might be an issue for users. With this ProPhoto space, all users work with the same color space.

madmanchan
« Reply #31 on: April 16, 2008, 06:25:02 PM »

Camera Raw uses a linear intermediate space for raw processing because many of the image processing algorithms operate better on the linear data, esp. in the early stages.

Peter_DL
« Reply #32 on: April 17, 2008, 02:43:28 PM »

Quote
OK I understand the term now. But I don't know that having access to the data prior to this would be useful (I'll have to think about that one).
Quote
Camera Raw uses a linear intermediate space for raw processing because many of the image processing algorithms operate better on the linear data, esp. in the early stages.
It was reported that >> a little noise reduction, and any chromatic aberration corrections, are also done in the native camera space. (Chromatic aberration corrections could cause unwanted color shifts if they were done later on in a non-native space.) << The native camera space is supposed to be linear by nature, as the sensor responds linearly to the amount of light received.

Then comes the conversion to linear ProPhoto. As explained by Thomas Knoll: the sliders for Exposure, Temperature and Tint, as well as the 3x2 hue/sat controls of the Calibrate tab, tweak the nine degrees of freedom of the 3x3 camera matrix, which is then used to process the data.

Regarding all the further "creative rendering tools" which are subsequently applied within this intermediate linear-gamma version of ProPhoto RGB: well, there's probably more than one way to skin a cat. The idea of a strict metadata editor may be the reason why this platform is kept closed.
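Those "nine degrees of freedom" are simply the entries of a 3x3 matrix taking linear camera RGB to the linear working space. A sketch of the operation follows; the matrix values below are purely hypothetical, chosen only to show the shape of the computation (real matrices are measured per camera model):

```python
# Hypothetical camera-to-linear-working-space matrix; each row sums to 1
# so that a neutral gray in camera space stays neutral after conversion.
CAM_TO_WORKING = [
    [0.80, 0.15, 0.05],
    [0.10, 0.85, 0.05],
    [0.05, 0.10, 0.85],
]

def apply_matrix(m, rgb):
    """Multiply a 3x3 matrix by a linear RGB triplet."""
    return [sum(m[r][c] * rgb[c] for c in range(3)) for r in range(3)]

gray = [0.18, 0.18, 0.18]  # linear camera-space mid-gray
print(apply_matrix(CAM_TO_WORKING, gray))  # neutral in, neutral out
```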

Peter

--
madmanchan
« Reply #33 on: April 17, 2008, 02:58:04 PM »

Quote
Regarding all further "creative rendering tools" which are subsequently applied in the frame of this intermediate linear-gamma version of ProPhoto RGB, well, there’s probably more than one way to skin a cat.

Yes, but there are additional image-processing algorithms besides the ones you mention that operate better in a linear space. (Others don't matter so much and work fine in either a linear space or a tone-mapped space, with different but equally plausible results.)

Quote
The idea of a strict metadata editor may reason why this platform is kept closed.

I don't understand this comment at all. Clarify, please?

Peter_DL
« Reply #34 on: April 18, 2008, 08:02:05 AM »

Quote
Quote
The idea of a strict metadata editor may be the reason why this platform is kept closed.
I don't understand this comment at all. Clarify, please?
Blending of images, e.g. for HDR purposes, application of plug-ins, and some more Photoshop tools which I like to use at times are currently not supported within Camera Raw, only afterwards (or let's call it downstream) in Photoshop.

So we are again at the question of why some of us do such fancy things, i.e. skip many of the creative tools in ACR in order to start from scratch with a "scene-referred image" in Photoshop. Actually, I came to this with ACR version 2.x, where many of the nice rendering controls which we now appreciate simply did not exist at that point in time.
Nonetheless, I still find this route to be a viable option, e.g. for operations such as blending of images or noise reduction, which often benefit from "virgin" data that have not yet been squeezed through the notorious tone curve and the other tools belonging to the category of "creative processing" at the ProPhoto-linear stage of ACR (as opposed to those controls which are really essential for the matrix conversion and what is called color reconstruction, or which are even applied earlier, in the native camera space).

One could think about ways of assembling and integrating both environments, i.e. Photoshop + ACR tools, more seamlessly on the platform of this linear ProPhoto space, which seems to be inherently present in ACR anyway. However, the technical realization might jeopardize the idea of storing all changes in metadata. IMHO.

Peter

--
bjanes
« Reply #35 on: April 18, 2008, 09:49:54 AM »

Quote
I don't understand this comment at all. Clarify, please?

Blending of images e.g. for HDR purposes, application of Plug-ins and some more Photoshop tools which I like to use at times are currently not supported within Camera Raw, just afterwards (or let’s call it downstream) in Photoshop.

So we are again at the question why some of us do such fancy things i.e. to skip many of the creative tools in ACR in order to start from scratch with a "scene-referred image" in Photoshop.  Actually I came to this with ACR version 2.x where many of the nice rendering controls which we are now appreciating were simply not given at this point of time.

Nonetheless, I still find this route to be a viable option e.g. for some operations such as blending of images or noise reduction which often benefit from "virgin" data  --  which have not been squeezed so far through the notorious tone curve and other tools which are supposed to belong to the category of "creative processing" at the stage of ProPhoto linear with ACR (as opposed to those controls which are really essential for matrix conversion and what is called color reconstruction, or which are even applied before in the native camera space).

One could think about ways for a more seamless assembly and integration of both environments, i.e. Photoshop + ACR tools, on the platform of this linear ProPhoto space which seems to be inherently present with ACR anyway. However, technical realization might jeopardize the idea of storing all changes in the metadata. IMHO.

Peter


As usual, Peter is thinking outside the box. The linear ProPhoto space of ACR is essentially the RIMM color space (http://www.colour.org/tc8-05/Docs/colorspace/PICS2000_RIMM-ROMM.pdf), as stated by Thomas Knoll in a post on the Adobe forums:

"Camera Raw uses both RIMM and ROMM in its processing path (ignoring the gamma encoding). After the white balance and initial camera profile application, the data is basically in RIMM space. The tone curve and other user controls transform the image into ROMM space, which is then converted to the final selected RGB working space.

The big difference is RIMM is *input* (or scene) referred, and ROMM is *output* referred. Otherwise, they are really the same color space. Most of the controls that raw converters have are control knobs for the process of converting scene-referred data to output-referred data."

This RIMM space has sufficient dynamic range for current raw camera input, but if several files are merged into a high dynamic range image, then one must use a true HDR encoding. Various HDR encodings are discussed by Greg Ward.

The Microsoft linear scRGB is similar to the linear ProPhotoRGB, but it uses sRGB primaries. Greg notes:

"The scRGB standard is broken into two parts, one using 48 bits/pixel in an RGB encoding and the other employing 36 bits/pixel either as RGB or YCC. In the 48 bits/pixel substandard, scRGB specifies a linear ramp for each primary. Presumably, a linear ramp is employed to simplify graphics hardware and image-processing operations. However, a linear encoding spends most of its precision at the high end, where the eye can detect little difference in adjacent code values. Meanwhile, the low end is impoverished in such a way that the effective dynamic range of this format is only about 3.5 orders of magnitude, not really adequate from human perception standpoint, and too limited for light probes and HDR environment mapping."

In view of the above, if we want a scene-referred format for our data, it would be best to use a true HDR format with log or floating-point encoding. In such an encoding, half of the data would no longer be in the brightest f/stop of the capture, and one rationale for exposing to the right would be removed; but one should still use ETTR for better dynamic range and signal-to-noise ratio.

I am bringing up these points for discussion.

Bill
Dinarius
« Reply #36 on: May 12, 2008, 02:30:45 PM »

Fascinating thread which I've only just stumbled on.

As someone who shoots for galleries and museums for a living, the topic is right up my street. Thanks for the link to the document entitled "Adopting ISO Standards for Museum Imaging." I would appreciate some more links to info on eciRGBv2.

As to grey cards...

The basICColor grey card, as I posted in another thread, is the best I know of.

http://www.color-solutions.de/english/Orde...isTargets_E.htm

It has a Lab value of 60, 0, 0, which is 143 in Adobe RGB (1998). So, if you set your white and black points with a photo of an oil on canvas, for example, you can then adjust luminance until the card reads 143. Never fails.

It is also spectrally neutral, so you can use it under any combination of lighting. It's brilliant for shooting interiors under multiple light sources. Even if you don't like what the click of the white-balance tool produces, it always acts as a starting point.
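The 60-to-143 correspondence checks out numerically; a minimal sketch, assuming a pure 2.2 power law for Adobe RGB (1998) and the CIE L*-to-Y inverse relation:

```python
# Sketch: L* 60 expressed as an 8-bit Adobe RGB (1998) gray value.
def L_star_to_Y(L):
    return ((L + 16.0) / 116.0) ** 3  # CIE inverse, valid for L* > 8

def adobe_rgb_gray(L):
    return round(255 * L_star_to_Y(L) ** (1.0 / 2.2))

print(adobe_rgb_gray(60))  # -> 143, the value quoted for the card
```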
D.