Author Topic: Temperature and Tint  (Read 9962 times)
bjanes
« Reply #20 on: November 18, 2011, 05:00:40 PM »

Quote
Exactly. Having all the numbers colorimetrically correct doesn't ensure that anyone, especially the image creator, will like the rendering. It's subjective. This goes back to the recent, long and convoluted thread where the term "accurate color" was dismissed by some (especially me) while we were told it's not subjective.

You are comparing apples and oranges. Accurate reproduction of a scene under field conditions can be difficult and may require some subjective input for pleasing results, as with a sunset. However, if I photograph an X-Rite ColorChecker and render it into a defined color space such as ProPhoto RGB, I can compare the observed values in the file to those measured from the target by a laboratory-grade spectrophotometer, determine the accuracy of the rendering, and give a mathematical analysis. For more practical work, one could assume that the ColorChecker is reasonably accurate. It helps to know the white balance for the illumination used to take the photograph. What is subjective about this?

Regards,

Bill
Tim Lookingbill
« Reply #21 on: November 18, 2011, 05:46:15 PM »

Quote
As Eric stated, color and tint are a good starting point, but why not merely take a reading from a neutral card illuminated by the same light as the subject (if that is possible).

Well, I've done this, adjusting the color temp sliders in ACR by clicking on a WhiBal card lit by overcast cloudy light on my window sill. What I find is that there can be a wide variation in the numbers, and thus in the appearance of the color cast, when adjusting the Kelvin and Green/Magenta sliders; different combinations still deliver an R=G=B readout on the card. So much for nailing it on a consistent basis.

In actuality, to my eyes the WhiBal gray portion of the card has a warm-ish, desaturated, camel-colored hue under this overcast light. It turns slightly blue-ish when I click for R=G=B. I used "Cloudy" as an in-camera preset, which gave the most accurate-to-scene appearance but still required white balance adjustment to get rid of the overly warm cast.
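For readers unfamiliar with what "clicking" a gray card actually does, here is a minimal sketch (not ACR's actual algorithm, and the patch values are hypothetical): given the average RGB of the gray-card patch, compute per-channel gains, normalized to green, that force the patch to R=G=B. Tim's point is that many different (temperature, tint) slider pairs can reach an equivalent neutral readout while shifting the rest of the image differently.

```python
# Hypothetical average RGB read from a warm-ish gray-card patch (R > G > B).
patch = (183.0, 160.0, 149.0)

r, g, b = patch
gains = (g / r, 1.0, g / b)  # scale R and B toward G, leaving G untouched

balanced = tuple(round(c * k, 1) for c, k in zip(patch, gains))
print(gains)
print(balanced)  # (160.0, 160.0, 160.0) -- neutral by construction
```

Note that this only guarantees the card itself reads neutral; what happens to every other color in the frame depends on how the raw converter maps those gains onto its temperature and tint controls.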
digitaldog
« Reply #22 on: November 18, 2011, 05:48:14 PM »

Quote
Andrew, do you know where these color scientists working on this can be contacted?

I'd start on Apple's ColorSync user lists (end user and developer).


Andrew Rodney
Author “Color Management for Photographers”
http://digitaldog.net/
digitaldog
« Reply #23 on: November 18, 2011, 05:54:24 PM »

Quote
However, if I photograph an X-Rite ColorChecker and render it into a defined color space such as ProPhoto RGB, I can compare the observed values in the file to those measured from the target by a laboratory-grade spectrophotometer, determine the accuracy of the rendering, and give a mathematical analysis.

So the light source for the spectrophotometer and the light under which the Macbeth chart resides match, or the differences are correlated how? The spectrophotometer provides spectral data; you could convert it into Lab, presumably? Does the white point specification play any role? Because you have to end up giving us ProPhoto RGB values, it is coming from Lab, right? What about the known issues (some would call them bugs, like Karl, whom you quoted) with Lab? If you nail ALL the numbers, as everyone agrees, is there then some issue shooting outside this controlled condition, like your sunset example?

You mention rendering the data into a defined color space such as ProPhoto RGB for this specific set of observable values. That's rendered exactly how, to maintain the math? What converter? Set how?
MarkM
« Reply #24 on: November 18, 2011, 06:21:50 PM »

Quote
Andrew, do you know where these color scientists working on this can be contacted?

There is lots of good stuff going on at the Munsell Color Science Lab at RIT. One of their faculty members, Mark Fairchild, has a great book, "Color Appearance Models," which is a good place to start to get the gist of where color scientists are heading. It's an excellent (though expensive) book that starts from the basics of human vision and psychophysics and works the reader through some of the newer color appearance models.

bjanes
« Reply #25 on: November 18, 2011, 08:17:28 PM »

Quote
So the light source for the spectrophotometer and the light under which the Macbeth chart resides match, or the differences are correlated how? The spectrophotometer provides spectral data; you could convert it into Lab, presumably? Does the white point specification play any role? Because you have to end up giving us ProPhoto RGB values, it is coming from Lab, right? What about the known issues (some would call them bugs, like Karl, whom you quoted) with Lab? If you nail ALL the numbers, as everyone agrees, is there then some issue shooting outside this controlled condition, like your sunset example?

One could use a handheld device, as shown on the X-Rite site. The Spectrodensitometer 530 would do the job and costs US $5000. It has its own light source and can produce Illuminants A, C, D50, D55, D65, D75, F2, F7, F11, and F12. By the way, it has a white calibration target. It can read out directly in L*a*b. Since L*a*b (as used in ICC workflows) is referenced to D50, I would use that illuminant to avoid problems with chromatic adaptation.

Quote
You mention rendering the data into a defined color space such as ProPhoto RGB for this specific set of observable values. That's rendered exactly how, to maintain the math? What converter? Set how?

To measure the system color error (DeltaE), the rendering engine is not critical, and one would not have to use D50 illumination for the ColorChecker. D50 bulbs are available, but in my experiments I merely used Solux bulbs and white balanced. One is merely comparing the rendered value with the true value. For now, L*a*b is the standard, even though it has some issues when pushed to extremes, but the ColorChecker is not that taxing a test. The use of ProPhoto RGB also eliminates problems with chromatic adaptation, since its white point is also D50.

If one does not have the $5000 for the 530, one can use the published L*a*b or ProPhoto RGB values for the ColorChecker. This would provide reasonable accuracy. You can nitpick all you want, but such measurements are done all the time and can produce an acceptable measure of accuracy for testing various camera profiles and raw converters. Color scientists could refine the test methodology.
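The comparison Bill describes reduces to a color-difference calculation between the rendered patch and its reference. A minimal sketch using the CIE76 DeltaE (the simple Euclidean form; newer metrics like DeltaE 2000 weight the terms differently); the patch values below are hypothetical stand-ins, not measured data:

```python
import math

def delta_e76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in L*a*b*."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))

# Hypothetical values for one ColorChecker patch:
reference = (37.99, 13.56, 14.06)   # e.g. a published Lab value for the patch
rendered  = (38.50, 12.90, 15.00)   # value read from the rendered file

print(round(delta_e76(reference, rendered), 2))  # 1.26
```

Repeating this over all 24 patches and averaging (or taking the maximum) gives the kind of single-number accuracy figure for a camera profile or raw converter that Bill refers to.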

Regards,

Bill
Tim Lookingbill
« Reply #26 on: November 18, 2011, 08:47:00 PM »

Thanks, Mark and Andrew, for the tips on where to find color appearance model color science discussions.
bjanes
« Reply #27 on: November 18, 2011, 08:50:29 PM »

Quote
Well, I've done this, adjusting the color temp sliders in ACR by clicking on a WhiBal card lit by overcast cloudy light on my window sill. What I find is that there can be a wide variation in the numbers, and thus in the appearance of the color cast, when adjusting the Kelvin and Green/Magenta sliders; different combinations still deliver an R=G=B readout on the card. So much for nailing it on a consistent basis.

Yes, there are an infinite number of spectra that can produce a correlated color temperature of 11000K, or whatever your cloudy light might be. I'm not sure that cloudy light lies on the Planckian locus, but if it does, the tint would be zero and only one temperature would produce R=G=B.
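To make the terms concrete: correlated color temperature projects a chromaticity onto the nearest point of the Planckian locus, and "tint" is the offset perpendicular to it. A rough sketch of how CCT can be estimated from CIE 1931 xy chromaticity, using McCamy's well-known approximation (reasonable only in roughly the 2000K to 12500K range); the xy values used here are the standard D65 chromaticity, not anything from this thread:

```python
def mccamy_cct(x, y):
    """Approximate correlated color temperature (McCamy's formula)
    from CIE 1931 xy chromaticity. Valid roughly 2000-12500 K."""
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

# CIE D65 chromaticity -- should come out near 6504 K:
print(round(mccamy_cct(0.3127, 0.3290)))
```

Because this maps a two-dimensional chromaticity down to one number, infinitely many chromaticities (and infinitely many spectra behind each of them) share the same CCT, which is exactly why temperature alone cannot pin down a white balance.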

Quote
In actuality, to my eyes the WhiBal gray portion of the card has a warm-ish, desaturated, camel-colored hue under this overcast light. It turns slightly blue-ish when I click for R=G=B. I used "Cloudy" as an in-camera preset, which gave the most accurate-to-scene appearance but still required white balance adjustment to get rid of the overly warm cast.

Perhaps your vision was adapted to the indoor light and not to the cloudy light. Also, the illumination was likely mixed, outdoor and indoor. As pointed out in the Fairchild article on human color vision that Marcin referenced, color constancy (the everyday perception that the colors of objects remain unchanged across significant changes in illumination color and luminance levels) is actually quite poor when careful observations are made. Of course, the scene appearance is a perception, and perceptions can't be quantified. One can measure stimuli with great precision, but the appearance could only be estimated by color matching under controlled conditions. This is how the CIE determined the characteristics of the standard human observer.

Regards,

Bill
RFPhotography
« Reply #28 on: November 19, 2011, 08:03:52 AM »

Bill, all these measurbatory machinations you speak of may produce 'reasonably accurate' results (seems you're hedging here from earlier discussions) in a very tightly controlled environment. But in the real world, where most of us live and work, the theory falls down. How do you use a 'paint by WB numbers' approach to generate your 'reasonably accurate' colour reproduction in an environment that has a wide range of lighting colour temperatures? Even in a studio with controlled lighting there will be variations based on, as you noted, the different reflective surfaces the light from the 'accurate' studio lights will bounce off of. As I and, more importantly, others have pointed out, things like an X-Rite CC are a decent starting point, but there's still a level of subjectivity and adjustment that may be required to get the required or desired 'colour match' (as closely as that's ever possible), taking into account the manner in which the image is to be reproduced (e.g., screen, high-gloss art paper, magazine stock), not even counting the illuminant under which it will be viewed.
bjanes
« Reply #29 on: November 19, 2011, 10:01:40 AM »

Quote
Bill, all these measurbatory machinations you speak of may produce 'reasonably accurate' results (seems you're hedging here from earlier discussions) in a very tightly controlled environment.

Of course, totally accurate results are not possible with current cameras, since they have metameric error due to imperfect CFA filters, and I never claimed perfect accuracy was attainable. However, my point was that accuracy under controlled conditions with a relatively simple target (the ColorChecker) can be measured and expressed as DeltaE or DeltaC, and that white balance is a critical first step. With the ColorChecker, the WB can be read directly from a neutral patch of the target, but a neutral area is often not available in field scenes, and it is a good idea to perform a custom white balance before shooting under these conditions. The camera records light reflected off the subject, and the color of that light is critical in assessing the actual color of the subject. In practical photography we are merely interested in obtaining a tristimulus metameric match.

Quote
But in the real world, where most of us live and work, the theory falls down. As I and, more importantly, others have pointed out, things like an X-Rite CC are a decent starting point, but there's still a level of subjectivity and adjustment that may be required to get the required or desired 'colour match' (as closely as that's ever possible), taking into account the manner in which the image is to be reproduced (e.g., screen, high-gloss art paper, magazine stock), not even counting the illuminant under which it will be viewed.

Yes, an X-Rite CC used with the Passport software is useful in creating a custom profile under the existing field conditions. White balance from a neutral card is also useful. In my experience, and in the experience of others (satisfied WhiBal users), a good WB reading is a very useful first step. The camera produces a scene-referred image, and the color accuracy of the raw file is of primary interest. Of course, rendering into a defined color space using an accurate camera profile is important, but this can be refined post-capture. The metric measured in my prior posts is the accuracy of the rendered raw file as compared to the actual colors of the target, which are known or can be measured if one has the equipment; the print and the illuminant under which it is viewed are irrelevant here. Chromatic adaptation of the viewer is also irrelevant.

Regards,

Bill
RFPhotography
« Reply #30 on: November 19, 2011, 11:02:11 AM »

We're back to the same, tired, circular discussion, Bill. What you're referring to as 'accuracy' isn't. And the world doesn't work in the theory of controlled conditions. Tim alluded to a perfect example earlier. Water flowing in a river, part in shade, part in sunlight or daylight. Where do you place the CC card? In the daylight or the shade? Put it in the shade and the daylight area will have a very heavy colour cast. Put it in the daylight and the river in shade will have a very deep colour cast. Is either 'accurate'? No. Is it even practical or possible to walk out and put it in either location? Quite possibly not.
Tim Lookingbill
« Reply #31 on: November 19, 2011, 11:21:23 AM »

An interesting concept with regard to placing the CC chart in shade or direct sunlight would be to get two charts, one for each, and photograph them in one image.

The question would be: if you make the CC chart look correct in the sunlit version (with a camera profile and/or edits) and let the shaded CC chart's colors fall where they may, would that shaded chart's colors deliver a colorimetrically or perceptually correct rendering? Or would there be extreme edits required, maybe having to selectively edit each color patch, to get it to look as it should under shaded light?
digitaldog
« Reply #32 on: November 19, 2011, 11:22:13 AM »

Quote
We're back to the same, tired, circular discussion, Bill. What you're referring to as 'accuracy' isn't. And the world doesn't work in the theory of controlled conditions.

It is "accurate" when he says it is and not when it doesn't suit the conditions: (Accurate reproduction of a scene under field conditions can be difficult and may require some subjective input for pleasing results, as with a sunset.)

Or:

(This would provide reasonable accuracy. You can nit pick all you want, but such measurements are done all the time and can produce an acceptable measure of accuracy for testing various camera profiles and raw converters. Color scientists could refine the test methodology.)

It is reasonable, acceptable and accurate when we’re told it is. When conditions exist when it is not accurate, it’s not.

Reminds me of the analysis Bruce Lindbloom made about another specious argument with similar circular logic:  If one takes this technique to its logical conclusion, Dan's 16-bit challenge would become "When considering all images showing no 16-bit advantage, 16-bit images show no advantage." Substitute accurate for 16-bit.

How accurate or inaccurate, in specific situations, using any defined metric, is never explained. But as you point out, we've been down this rabbit hole before; let's move on.
RFPhotography
« Reply #33 on: November 19, 2011, 11:42:14 AM »

Quote
An interesting concept with regard to placing the CC chart in shade or direct sunlight would be to get two charts, one for each, and photograph them in one image.

The question would be: if you make the CC chart look correct in the sunlit version (with a camera profile and/or edits) and let the shaded CC chart's colors fall where they may, would that shaded chart's colors deliver a colorimetrically or perceptually correct rendering? Or would there be extreme edits required, maybe having to selectively edit each color patch, to get it to look as it should under shaded light?

Exactly my point, Tim. In order to get each area 'accurate' (let's assume for the sake of discussion that the neutral grey patch at R=G=B is 'accurate'), there'd have to be some heavy masking and editing. But even when 'accurate' is achieved, is that a desirable result?

The idea of WB accuracy also completely ignores the idea of using WB creatively.  

Here's an example. I really liked the texture of the bark on this old tree. The first shot is an 'accurate' WB. It's a kind of muddled grey/brown. I knew, however, that if I used my WB creatively I could come up with something much different and, to me, much more interesting. That's the second shot. It's had additional processing to clone out a couple of things, sharpen, curves, etc. Those same edits could have been made to the 'accurate' WB shot, but it still would have been bleh. I'll add a second set as well. The third image is with the As Shot WB. It's pretty 'accurate' in terms of colour rendition. The grass around the trees was green and the bark of the trees was a muddled grey/brown. The second shot of that set uses WB to creatively render the scene.
« Last Edit: November 19, 2011, 11:50:55 AM by BobFisher »
Hening Bettermann
« Reply #34 on: November 19, 2011, 12:56:31 PM »

Uff! Uff! Now I have done exactly what I tried to avoid: triggered a new long and convoluted discussion, including a new showdown between Andrew and Bill Janes. I consciously kept my question out of that thread, even though it belonged in that context. I deliberately presented my innocent little question without mentioning what I wanted to use the answer for. And now… posts keep coming faster than I can reply. This one belongs after post #19.

First off: Many thanks to all of you for your concern.

@Czornyj:
The limitations you mention in post #16 are obviously all true - but still, on one and the same paper or screen, some renderings look more natural than others. I am aware that what I try/tried to achieve can only be a very rough approximation. It is/was just a first attempt to come up with an alternative to AWB - which for its part suffers from the very same limitations. And then some: if I understand it right, it has by definition to assume an average color balance and try to achieve that.

I would like to share a key experience which greatly confirmed my suspicion of automatisms - if that was ever needed. In the days of film, I took two images of a scene in northern Sweden, one around noon, one shortly before sunset. As expected, the transparency of #2 was much redder than #1. Later, I had these two images scanned on an Imacon. In the default rendering of the FlexColor software, #2 was *more green* than #1! Obviously, the automatism thought "No, that much red cannot be true," and obviously, it was wrong.

@Andrew
> In theory maybe. In practice not entirely.
> There are color scientists working on color appearance models that could greatly aid in getting closer to your goal. But we’re not there yet.

See this sounds much better to me than just "It's subjective."

@tlooknbill

With regard to the bluish cast: most of the images on my web site are processed according to my workflow so far, that is, shot with AWB and processed As Shot. Only recently have I changed the WB to Daylight in some of them. Beyond these two alternatives, I have not tried to modify color in any way. I hope the images shown look reasonably natural. I don't find them bluish.

> Please don't take what I'm saying as criticism of your work.
I certainly don't! Your diagnosis "You really don't like saturation. I see you prefer the 'natural' look." is entirely in accordance with my self-understanding.

A little aside: ditching your fellow Texican countryman Brian Griffith's Raw Developer for its default color rendering sounds to me like throwing away a jewel because one doesn't like the color of the wrapping paper. As said, I don't like that default rendering either, for reasons opposite to yours. But why use the default rendering? Yes, RD does not support DNG profiles; you have to make ICC profiles, and I had to learn a little bit of Unix to use Argyll for that. I find the deconvolution sharpening the trump card of RD.

@bjanes:
Thank you for this informative post.
 
> why not merely take a reading from a neutral card illuminated by the same light as the subject (if that is possible).

Maybe I should try that. Would it be more precise than the rough scale quoted in my first post?  

The color event, I read, consists of 3 parts: the incident light, the light reflected from the objects, and my brain.

In relationship to this triangle, the photographer shooting art for reproduction, or objects for a catalogue, may try to achieve accuracy by standardizing shooting light and viewing light to make them "fall out of the equation", so that, under standard viewing light, the image will look the same as the object.

In (my) landscape photography, on the other hand, the color of the light is part of the subject - I want to catch it, not to standardize it out.

I read that my brain has some sort of AWB - but it is not 100%. I remember walking on a late summer afternoon in the inner city of Copenhagen with its half-timbered white houses. In the sun of that late afternoon, these houses did NOT seem white to me, but pink. It is this pink I want to capture. The grey card method would eliminate it, wouldn't it? I think it would leave the problem: to what degree should I follow it? But I will try it.

> but one must realize that daylight is a combination of sunlight, skylight, light scattered from clouds, and reflected light from the ground, vegetation, and surrounding buildings (in the case of a more urban environment).

If I conceive the latter as the light I do NOT want to filter out (if any), then indeed my intended "method" may not be that bad after all.

Good light! - Hening.
« Last Edit: November 19, 2011, 02:38:31 PM by Hening Bettermann »

digitaldog
« Reply #35 on: November 19, 2011, 01:04:34 PM »

FWIW and going back OT, I’m also a big fan of Raw Developer.
bjanes
« Reply #36 on: November 19, 2011, 02:34:33 PM »

Quote
We're back to the same, tired, circular discussion, Bill. What you're referring to as 'accuracy' isn't.

The Wikipedia definition of accuracy is consistent with my usage: "In the fields of science, engineering, industry and statistics, the accuracy of a measurement system is the degree of closeness of measurements of a quantity to that quantity's actual (true) value." This is exactly what I am doing: comparing the measured ColorChecker values with those of the target (either measured or per spec). What is your definition, and how am I using the term incorrectly?

Quote
And the world doesn't work in the theory of controlled conditions. Tim alluded to a perfect example earlier. Water flowing in a river, part in shade, part in sunlight or daylight. Where do you place the CC card? In the daylight or the shade? Put it in the shade and the daylight area will have a very heavy colour cast. Put it in the daylight and the river in shade will have a very deep colour cast. Is either 'accurate'? No. Is it even practical or possible to walk out and put it in either location? Quite possibly not.

The example is a poor one, since the lighting of the scene is mixed and there are two illuminants. An accurate white balance would be for one illuminant, and two WBs would be needed for your case. You could take two shots with the proper WBs and blend them. No single white balance would reproduce neutrals as neutral in your mixed scene, which is what you want WB to do. The camera would produce a scene-referred image reproducing the two areas of the scene accurately within the limitations of the camera. A human observer could account for the mixed illumination via chromatic adaptation, use of memory colors, color constancy, and discounting of the illuminant. This is where the art of rendering the image comes in, and the procedure to be followed would depend on how you wish the scene to appear in the photograph. You would probably want the sunlit area to have a neutral white balance. If you wanted the shaded area of the image to appear somewhat blue, as it would appear to an observer with incomplete adaptation, you might use a WB in that area of the image that would yield a bluish cast.
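The two-shot approach Bill describes amounts to a masked blend of two renderings of the same frame, one white-balanced per illuminant. A minimal sketch under stated assumptions: the two tiny arrays below are hypothetical stand-ins for the sun-balanced and shade-balanced renderings, and the mask would in practice be painted by hand or derived from the image rather than hard-coded:

```python
import numpy as np

# Hypothetical stand-ins: the same 2x2-pixel scene rendered twice,
# once white-balanced for sunlight, once for shade.
sun_wb   = np.array([[[200, 180, 160], [200, 180, 160]],
                     [[ 90, 110, 140], [ 90, 110, 140]]], dtype=float)
shade_wb = np.array([[[230, 175, 130], [230, 175, 130]],
                     [[115, 112, 108], [115, 112, 108]]], dtype=float)

# Mask: 1.0 where the scene is in shade, 0.0 where it is sunlit.
mask = np.array([[0.0, 0.0],
                 [1.0, 1.0]])[..., None]

# Per-pixel linear blend: shaded pixels come from shade_wb, sunlit from sun_wb.
blended = mask * shade_wb + (1.0 - mask) * sun_wb
print(blended[0, 0], blended[1, 0])
```

A feathered (fractional) mask along the shade boundary would avoid a hard seam between the two white balances.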

Regards,

Bill
bjanes
« Reply #37 on: November 19, 2011, 02:42:54 PM »

Quote
The idea of WB accuracy also completely ignores the idea of using WB creatively.

Here's an example. I really liked the texture of the bark on this old tree. The first shot is an 'accurate' WB. It's a kind of muddled grey/brown. I knew, however, that if I used my WB creatively I could come up with something much different and, to me, much more interesting.

I am not talking about creative use of WB, but about the use of WB to produce an image that reflects how the object appeared to the eye. For example, if a botanist wished to publish the appearance of the bark in a book on trees, the first WB would probably be best, and it would reproduce the colors of the tree within the accuracy limitations of the camera, WB, camera profile, and rendering engine. Personally, I find your "creative" images horrendous. If one wanted to reproduce a Monet painting with the colors as Monet intended, one certainly would not want to use the BobFisher creative WB to produce a ghastly effect.

Regards,

Bill
bjanes
« Reply #38 on: November 19, 2011, 02:49:28 PM »


Quote
It is reasonable, acceptable and accurate when we're told it is. When conditions exist when it is not accurate, it's not.

Reminds me of the analysis Bruce Lindbloom made about another specious argument with similar circular logic: if one takes this technique to its logical conclusion, Dan's 16-bit challenge would become "When considering all images showing no 16-bit advantage, 16-bit images show no advantage." Substitute accurate for 16-bit.

How accurate or inaccurate, in specific situations, using any defined metric, is never explained. But as you point out, we've been down this rabbit hole before; let's move on.

The Bruce Lindbloom analogy is specious. I have explained my measurement of accuracy: comparing the color of the ColorChecker to that obtained in the image. You just can't seem to appreciate that. And in my testing, I don't WB on obviously blown neutral areas.

Regards,

Bill
MarkM
« Reply #39 on: November 19, 2011, 03:00:39 PM »

Quote
The Wikipedia definition of accuracy is consistent with my usage: "In the fields of science, engineering, industry and statistics, the accuracy of a measurement system is the degree of closeness of measurements of a quantity to that quantity's actual (true) value." This is exactly what I am doing.

It's still problematic, Bill, because it's not clear which actual (true) value you are trying to measure.

Consider Hening's last example of photographing white houses in late afternoon light. What does it mean to photograph or measure the color 'accurately'?

There are a couple of options:
1. You could take a spectrophotometer with its built-in light source and measure the houses. (Let's pretend the house paint is really perfectly neutral white for the sake of simplicity.) The spectro will give you a nice flat distribution across the spectrum; in other words, white. This IS accurate in a sense: if you went to the paint manufacturer and showed an image with perfectly white houses, they would say yes, that's the color of the paint. But this is not what Hening saw, and it does NOT accurately describe the scene. He saw pink houses because of the late afternoon light.

2. You could measure the light reflecting from the white wall with a spectrophotometer. You'll get a totally different color that is essentially the spectral distribution of the light source. This stands a better chance of accurately describing what Hening saw. But you have a new problem now: in order to convert that SPD into a colorimetric value, you need a white point. What do you choose? The right answer is the white point to which Hening's eyes are chromatically adapted, but you can't know that. So you are back to making a subjective choice.

3. Using a grey card is essentially the same as option 1. You can neutralize the grey to its known value, but then you will have stripped the pink out of the walls, and again it won't be accurate in the sense of describing the scene.

As far as I can tell, there is no objective way to reproduce Hening's subjective experience. Only Hening, with the help of his memory, can do that.
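Mark's option 2 turns entirely on the white point chosen when the measured light is converted to Lab. A small sketch of that dependence using the standard CIE XYZ to L*a*b* conversion (the formula is textbook colorimetry, not something from this thread): the same stimulus, here light whose tristimulus values equal the D65 white, reads as perfectly neutral against a D65 reference white but distinctly bluish against a D50 one.

```python
def xyz_to_lab(xyz, white):
    """Standard CIE XYZ -> L*a*b* conversion relative to a reference white."""
    def f(t):
        # Cube root above the CIE threshold, linear segment below it.
        return t ** (1.0 / 3.0) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = (f(c / w) for c, w in zip(xyz, white))
    return (116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz))

D50 = (0.9642, 1.0000, 0.8249)
D65 = (0.9505, 1.0000, 1.0890)

stimulus = D65  # light with exactly the D65 white's tristimulus values

print(xyz_to_lab(stimulus, D65))  # (100.0, 0.0, 0.0): perfectly neutral
print(xyz_to_lab(stimulus, D50))  # b* strongly negative: reads as bluish
```

Since neither answer is wrong colorimetrically, picking the reference white amounts to picking the adaptation state you assume for the observer, which is exactly the subjective choice Mark identifies.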
« Last Edit: November 19, 2011, 03:50:59 PM by MarkM »
