Author Topic: Is accurate color possible in non-standard light?  (Read 9673 times)
Hening Bettermann
« Reply #100 on: October 12, 2013, 01:28:10 PM »

Hi Andrew!

Thank you for your reply.
It seems you neglect the difference between accuracy and precision, and the fact that they are only loosely tied to each other. What you require is not just accuracy, but accuracy AND precision at the spectrometer level.

And I think you exaggerate the degree of imprecision introduced by subjectivity. As I wrote above, I think that "everybody who has ever seen this kind of light/scene will at once be able to identify it as being more realistic than e.g. the one obtained by the AWB." [What I meant was the example in post #23, and it turns out it was not obtained with AWB, but with a grey card.]

>hours or days later when we process the image to match 'what we recall'?
You are re-introducing the time-memory factor which is overcome in grade 2.

>So if we both do this but end up with different results because a large part of the process is subjective, who's more accurate? We can't say.
 
No, we can't. But there are two buts:
1 - My point is that we are both very probably more accurate than other persons who just use a freehand estimate, and even more accurate than another two persons who do not do it on location but rely on their memory.
And in light like the one under discussion here, all will be more accurate than a WB obtained with the grey card, as shown in post #23, example one. The grey card method is inaccurate not because of a poor precision level, but because it is conceptually wrong (for landscape).
2 - When we both measure the length of, say, a football field with a tape measure, the results will differ. Does that mean the term 'accurate' cannot be used for the method? I think not. I think it only shows that EVERY measurement has an error, small or large. Our two measurements will differ by decimeters, maybe even meters - but they are very probably BOTH more accurate than a freehand estimate of the length.

That said, I don't really need the word 'accurate' to be happy. I can do with 'realistic' or 'natural'. The only thing I insist on is that grades 1 and 2 are conceptually different from minus one and zero, and that their results are not just arbitrarily pleasing.

They may in fact be less pleasing than those obtained with grades -1 and 0, I agree on that one. If they are, I have different choices: I can 'fix' them or trash them. But at least I'll know approximately 'where I am'.

Too long again!



 

digitaldog
« Reply #101 on: October 12, 2013, 01:50:45 PM »

It seems you neglect the difference between accuracy and precision, and the fact that they are only loosely tied to each other. What you require is not just accuracy, but accuracy AND precision at the spectrometer level.
Sounds like those new car commercials (I prefer Sweet and Sour Chicken to just Sour Chicken). So yes, I desire accuracy and precision. The thread is about the accuracy of color in non-standard light. I didn't write that question; I have only commented on the use of 'accuracy' in it.
Quote
And I think you exaggerate the degree of imprecision introduced by subjectivity. As I wrote above, I think that "everybody who has ever seen this kind of light/scene will at once be able to identify it as being more realistic than e.g. the one obtained by the AWB." [What I meant was the example in post #23, and it turns out it was not obtained with AWB, but with a grey card.]
That may be true. It doesn't change the issue of the language the OP is using, and the need to express some metric of accuracy, which as yet hasn't been, and probably cannot be, stated.
Quote
>hours or days later when we process the image to match 'what we recall'?
You are re-introducing the time-memory factor which is overcome in grade 2.
Overcome how, and if so, by how much?

Quote
1 - My point is that we are both very probably more accurate than other persons who just use a freehand estimate, and even more accurate than another two persons who do not do it on location but rely on their memory.
Probably but how to prove and by what degree? You know the old saying about opinions.
Quote
2 - When we both measure the length of, say, a football field with a tape measure, the results will differ. Does that mean the term 'accurate' cannot be used for the method?
It's more accurate than guessing. It's more accurate than using your right foot. We can pull out all kinds of devices, each more accurate than the last, to attempt to dismiss the previous measurement. In terms of color accuracy, we at least have a metric below which anything finer is invisible to us. If we measure a color and one instrument reports the difference between it and another as a dE of 0.5, while a second instrument reports 0.05, it's moot: we can't see the difference in either case. It's interesting to know that one reference-grade device is far more accurate than the other, but if we can't see the smaller difference, who cares?

If your goal is to come up with the length of a football field because you have to build one, does it matter if one measurement differs from the other by 0.05 inches? Or even half a foot? Whoever is paying for the field probably has some metric of error or accuracy they desire. At some point, it doesn't matter. Is +/- 1 mm important? With the subjectivity issues I raise, we have no metric to even agree upon as a useful goal.
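To make the "invisible difference" point concrete, here is a minimal sketch (assuming the simple CIE76 delta-E formula; the Lab readings are hypothetical): both a dE of roughly 0.5 and one of 0.05 fall below the common rule-of-thumb just-noticeable difference of about 1.0, so the extra instrument precision buys nothing visible.

```python
import math

def delta_e_cie76(lab1, lab2):
    """Euclidean distance in CIELAB space (the CIE76 delta-E)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

reference = (50.0, 10.0, 10.0)            # hypothetical measured color
instrument_a = (50.4, 10.2, 10.2)         # reads ~0.49 dE away
instrument_b = (50.04, 10.02, 10.02)      # reads ~0.049 dE away

de_a = delta_e_cie76(reference, instrument_a)
de_b = delta_e_cie76(reference, instrument_b)
# Both differences sit below the ~1.0 dE just-noticeable-difference
# rule of thumb, so neither is visible to an observer.
```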
Quote
I think not. I think it only shows that EVERY measurement has an error, small or large. Our two measurements will differ by decimeters, maybe even meters - but they are very probably BOTH more accurate than a freehand estimate of the length.
But you did measure the item, with differing methods, to at least come up with a value - and one of them can produce a value we agree is acceptable! We do not have that in this discussion, because nothing was measured and everything is subjective. If we follow your argument above, we have no use for measuring anything, because someone can always use a higher-grade device to disprove the first measurement - or if not disprove it, suggest it's not accurate. And this is useful to a degree, because until we measure and have a metric of what is acceptable, who says the football field has to be within +/- half a foot, or 1 mm?
Quote
That said, I don't really need the word 'accurate' to be happy.
And I said good for you. But there are others here too, and the OP who asked the question using that specific term: accuracy.
Quote
But at least I'll know approximately 'where I am'.
How do you express that to others? There's no metric. There's no way to agree on the premise, which might be the goal <g>. Again, the question was: is accurate color possible in non-standard light? I submit the question should be reworded, or we need some means of coming up with an accuracy metric. I don't know how you do that without measuring.
« Last Edit: October 12, 2013, 01:52:36 PM by digitaldog »

Andrew Rodney
Author “Color Management for Photographers”
http://digitaldog.net/
Hening Bettermann
« Reply #102 on: October 12, 2013, 02:54:12 PM »

Only a short reply this time.

>>>hours or days later when we process the image to match 'what we recall'?
>>You are re-introducing the time-memory factor which is overcome in grade 2.
>Overcome how, and if so, by how much?

Well, if I 'measure' the WB at shooting time, I don't rely on memory. By how much depends on the precision of the 'measurement', which is limited, but better, I believe, than anything else I know of.

digitaldog
« Reply #103 on: October 12, 2013, 03:02:04 PM »

Well, if I 'measure' the WB at shooting time, I don't rely on memory. By how much depends on the precision of the 'measurement', which is limited, but better, I believe, than anything else I know of.
Measuring with what, getting what data (CCT, which is a range of colors)?
Ever see the CCT values of the camera and the raw converter disagree by a fairly large numeric value? Pretty common for me. I've also found that, very often, the reported CCT value and what I end up with after mucking around with Tint/Temp to get a color appearance I prefer are quite different. Are the original values correct, or is my aim point for color appearance?

bjanes
« Reply #104 on: October 12, 2013, 03:34:35 PM »

Measuring with what, getting what data (CCT, which is a range of colors)?
Ever see the CCT values of the camera and the raw converter disagree by a fairly large numeric value? Pretty common for me. I've also found that, very often, the reported CCT value and what I end up with after mucking around with Tint/Temp to get a color appearance I prefer are quite different. Are the original values correct, or is my aim point for color appearance?

CCT is measured along the Planckian locus, and a more exact color temperature includes a tint adjustment. Even so, differing WB algorithms will give different results for a given Kelvin value and tint.
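Why a single CCT number underdetermines the white point can be sketched with McCamy's xy-to-CCT approximation (the chromaticity values below are just example inputs): D65 and a visibly greener chromaticity land on nearly the same CCT, which is exactly the gap the tint axis has to fill.

```python
def mccamy_cct(x, y):
    """Approximate correlated color temperature from CIE 1931 xy
    chromaticity using McCamy's (1992) cubic formula."""
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

# D65 and a slightly greener point (shifted off the Planckian locus)
# report almost identical CCTs despite looking different:
cct_d65 = mccamy_cct(0.3127, 0.3290)      # ~6505 K
cct_greenish = mccamy_cct(0.3127, 0.3350)  # within ~50 K of D65
```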

Bill
Hening Bettermann
« Reply #105 on: October 12, 2013, 03:45:02 PM »

@ Andrew, post 103

>Measuring with what, getting what data

In post #99 you had accepted the term 'measurement' for my visual comparison of the scene to camera-recorded white balances:

>>In this sense, when I compare the view of the scene to a stepped scale of white balances, I 'measure' the WB. It is not very precise, but it is subjective only in the sense that the precision may differ from person to person.
>Agreed.

hjulenissen
« Reply #106 on: October 14, 2013, 01:27:49 AM »

It seems to me that, to get h's image-as-open-window effect, what you want to do is duplicate in the image the stimuli that your eyes received at the original scene. In colorimetric terms, if the image can duplicate the same XYZ tristimulus values as the original scene, the cones in your eyes will be stimulated in exactly the same way that they were at the original scene. No need to get all spectral about it; this is the basis of color matching.

So the task boils down to reproducing the XYZ tristimulus values of the scene. A standard input ICC profile provides the means of converting the camera RGB values to XYZs. A complication is that color management wants to adapt the raw XYZ numbers to simulate how the colors might appear in an environment with a specific white point, usually D50. To fool the color management system into letting the original raw XYZ values of the scene pass through without adaptation, I think it would only be necessary to set the camera white balance to D50 (either in the camera settings for JPEG, or in the raw converter when converting raws). You also need to avoid any other changes to white balance. (There is info at the ICC web site about this kind of "Scene-Referred" workflow.)

At this point you should have your original scene XYZs intact. Now there is a problem with displaying them on a monitor: because monitor white points are usually around D65, the scene XYZs would be shifted bluer by the monitor. So the need now is to perform a conversion to the monitor profile using Absolute Intent. This will adjust the XYZs so that the original scene XYZs are reproduced on the monitor despite its D65 white point.
So what you are saying is: use calibration/profiling to make camera and display behave like (approximations to) ideal XYZ devices. Make sure that the raw WB rendering matches the white-point of the display (by choosing the native WP as WB target, or by calibrating the display WP to D50?).
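The chromatic adaptation that Absolute Intent bypasses can be sketched in a few lines. This is a hedged illustration: the Bradford matrix and D50/D65 white points are the usual published values, but the plain von Kries scaling here is a simplification of what a real CMM performs.

```python
def matvec(m, v):
    """3x3 matrix times 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

# Standard Bradford forward and inverse matrices (XYZ -> cone-like space).
BRADFORD = [[0.8951, 0.2664, -0.1614],
            [-0.7502, 1.7135, 0.0367],
            [0.0389, -0.0685, 1.0296]]
BRADFORD_INV = [[0.986993, -0.147054, 0.159963],
                [0.432305, 0.518360, 0.049291],
                [-0.008529, 0.040043, 0.968487]]

D50 = [0.9642, 1.0, 0.8249]
D65 = [0.9505, 1.0, 1.0890]

def adapt(xyz, src_white, dst_white):
    """Von Kries-style adaptation in Bradford cone space - the step a
    relative colorimetric conversion applies and Absolute Intent skips."""
    src = matvec(BRADFORD, src_white)
    dst = matvec(BRADFORD, dst_white)
    cone = matvec(BRADFORD, xyz)
    scaled = [c * d / s for c, d, s in zip(cone, dst, src)]
    return matvec(BRADFORD_INV, scaled)

# Adapting a D50-relative value to a D65 display shifts it bluer (Z up);
# the D50 white itself maps onto the D65 white.
adapted = adapt(D50, D50, D65)
```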
Quote
If I'm not mistaken, this should give you an approximation of the original scene XYZs on a monitor. If the monitor can be taken to the original scene, the colors should appear the same as in the scene, provided the monitor is capable of achieving the same brightnesses as in the original scene. When you have the monitor out in the original scene, you will be viewing the monitor in the same viewing conditions as the scene. It is only under identical viewing conditions that matching XYZ tristimulus values will have the same color appearance.

For the window effect, you will need a monitor capable of matching the brightness of outdoor scenes - it is unlikely you will have one, but high dynamic range monitors are available for $$$. However it is generally ok to scale the XYZs to a lower relative Brightness (which is called Lightness) and still have a good appearance - after all it is usual in photography to display images at other than the original scene brightnesses.

Your example is better than mine was. Place the monitor in the original landscape. Ensure that it is capable of the brightness/gamut that the scene has. Position it in such a way that it covers the same field of view as the image that it is reproducing. Does it _now_ appear as a "window into the landscape"?


If it does, then it seems to me that one can be fairly confident that any color issues in reproducing the scene in different contexts (a print hanging on an exhibition wall, the LCD of your laptop at the mall, ...) are about perceptually motivated re-targeting of the rendering for the new context (which may or may not be as much "art" as "engineering", but is critical to artistic and/or commercial success nonetheless).

My gut feeling is that even this (granted, artificial) exercise can be difficult. I have wrestled with some (saturated) reds and greens that cause my camera, raw editor, wide-gamut display, dye printer and profiling to (collectively) behave in unpredictable (for me, at least) ways.

I just framed an image in a large, black passepartout. The difference from viewing the image soft-proofed on screen was striking (in a negative way). I do (of course) agree that image surround affects our perception. Rather than treat everything as a "big magic black box that can never be fully understood by humanity", I want to break it down into manageable pieces.

-h
« Last Edit: October 14, 2013, 01:37:46 AM by hjulenissen »
Czornyj
« Reply #107 on: October 14, 2013, 06:53:06 AM »

Yes, if you change the illuminant, you are either changing the stimuli, or changing the viewing conditions, depending on whether you are talking about reflectances x the illuminant, or about a self-luminous display. The colors generally won't match under either condition, hence the need for CATs and CAMs.

I don't get your point.


The point is that you capture the colour values as if they were under XYZ viewing conditions, and you don't have any information about how they should be corrected by a CAM.

Czornyj
« Reply #108 on: October 14, 2013, 07:01:43 AM »

Does it _now_ appear as a "window into the landscape"?

It will never look like a "window into the landscape"; it can only more or less recall the original scene (after long and heavy editing). Take a laptop with a decent display, try that "window" trick outdoors, and you'll soon realise that things are far more complicated than you'd wish they were.
« Last Edit: October 14, 2013, 07:11:06 AM by Czornyj »

digitaldog
« Reply #109 on: October 14, 2013, 11:02:39 AM »

It will never look like a "window into the landscape"; it can only more or less recall the original scene (after long and heavy editing). Take a laptop with a decent display, try that "window" trick outdoors, and you'll soon realise that things are far more complicated than you'd wish they were.
Agreed! If you display scene-referred data on the laptop, it's not going to look anything like the scene itself. If you display output-referred data on the laptop, well, it's been processed by someone or something, so we're back to square one (someone or something like the camera itself altering the data). The potentially huge differences between scene and display in gamut, contrast ratio and brightness should not be underestimated either.
MiSwan
« Reply #110 on: October 14, 2013, 03:13:38 PM »

Hmm…


……….and I thought I was drunk. What are you smoking, you wackos? Mushrooms? You should chew them instead.

I want some too. Give it to me now.
crames
« Reply #111 on: October 14, 2013, 06:49:30 PM »

So what you are saying is: use calibration/profiling to make camera and display behave like (approximations to) ideal XYZ devices. Make sure that the raw WB rendering matches the white-point of the display (by choosing the native WP as WB target, or by calibrating the display WP to D50?).

Yes, effectively using the camera/ICC profile combination as a colorimeter, along with exposure information to obtain absolute XYZ values from the original scene. You don't want color management to alter the XYZ by performing WB or CATs from one assumed white point to another. If white points are kept the same throughout the processing, then color management will have no reason to change anything. Sure, you could calibrate the display to D50, which would simplify things a little.

Quote
Your example is better than mine was. Place the monitor in the original landscape. Ensure that it is capable of the brightness/gamut that the scene has. Position it in such a way that it covers the same field of view as the image that it is reproducing. Does it _now_ appear as a "window into the landscape"?


If it does, then it seems to me that one can be fairly confident that any color issues in reproducing the scene in different contexts (a print hanging on an exhibition wall, the LCD of your laptop at the mall, ...) are about perceptually motivated re-targeting of the rendering for the new context (which may or may not be as much "art" as "engineering", but is critical to artistic and/or commercial success nonetheless).

Yes, any changes in viewing conditions (contexts) will result in a different appearance.

As an example, if you don't have an HDR display, you will not be able to reproduce the exact colors (XYZs) of a bright outdoor scene. Displaying the outdoor image at a much lower luminance on a normal display will never match the original scene, due to perceived differences in brightness, contrast and colorfulness. The traditional photographic appearance tweaks for such a situation are to increase the colorfulness (saturation) and the contrast to help make up the difference.
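As a toy illustration of that "dim it down, then boost colorfulness" tweak (the numbers and the crude chromaticity push are purely hypothetical - real appearance models do far more), working in xyY:

```python
def dim_with_boost(x, y, Y, luminance_scale, chroma_boost=1.1,
                   wx=0.3127, wy=0.3290):
    """Scale luminance down for an LDR display, then push the xy
    chromaticity slightly away from the (D65) white point to offset
    the loss of perceived colorfulness at lower luminance.
    Illustrative only - not a real appearance model."""
    return (wx + (x - wx) * chroma_boost,
            wy + (y - wy) * chroma_boost,
            Y * luminance_scale)

# A bright outdoor sample shown at one tenth of its scene luminance:
x2, y2, Y2 = dim_with_boost(0.40, 0.35, 1000.0, 0.1)
# Luminance drops tenfold while chromaticity moves away from white.
```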

Quote
My gut feeling is that even this (granted, artificial) exercise can be difficult. I have wrestled with some (saturated) reds and greens that cause my camera, raw editor, wide-gamut display, dye printer and profiling to (collectively) behave in unpredictable (for me, at least) ways.

I just framed an image in a large, black passepartout. The difference from viewing the image soft-proofed on screen was striking (in a negative way). I do (of course) agree that image surround affects our perception. Rather than treat everything as a "big magic black box that can never be fully understood by humanity", I want to break it down into manageable pieces.

-h

Although it is difficult to reproduce bright outdoor scenes because of LDR displays, it should be doable in less demanding conditions such as indoor scenes, nightscapes, etc.

Color appearance models break it down into manageable pieces - complicated but manageable.

Cliff
crames
« Reply #112 on: October 14, 2013, 07:10:59 PM »

The point is that you capture the colour values as if they were under XYZ viewing conditions, and you don't have any information about how they should be corrected by a CAM.


A CAM needs the XYZs of whatever it is that you want to match, plus information about the original viewing conditions and the viewing conditions of the reproduction. There is no limitation of "XYZ viewing conditions", whatever those are. The viewing conditions are whatever they happen to be at the original scene when the camera took the shot, and when viewing the reproduction: things like the average luminance, the background luminance, the XYZs of white, and the relative brightness of the surround. In order to use a CAM you have to note, or estimate, the viewing conditions, whatever they are.
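One concrete piece of that machinery, as a sketch: CIECAM02 derives a degree-of-adaptation factor D from the surround factor F and the adapting luminance L_A (the luminance values below are arbitrary examples, not from any measured scene). Brighter viewing conditions push D toward complete adaptation to the adopted white.

```python
import math

def degree_of_adaptation(F, L_A):
    """CIECAM02 degree of adaptation:
    D = F * (1 - (1/3.6) * exp((-L_A - 42) / 92)).
    D = 1 means the observer is assumed fully adapted to the white."""
    return F * (1.0 - (1.0 / 3.6) * math.exp((-L_A - 42.0) / 92.0))

d_bright = degree_of_adaptation(1.0, 318.0)  # e.g. a bright outdoor scene
d_dim = degree_of_adaptation(1.0, 20.0)      # e.g. dim indoor viewing
# d_bright approaches 1.0; d_dim stays noticeably lower.
```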
crames
« Reply #113 on: October 14, 2013, 07:31:28 PM »

It will never look like a "window into the landscape"; it can only more or less recall the original scene (after long and heavy editing). Take a laptop with a decent display, try that "window" trick outdoors, and you'll soon realise that things are far more complicated than you'd wish they were.

Assuming your laptop doesn't have an HDR display that can match the bright scene luminance, it must have looked dark, something like below, while editing on site. Although there was nothing you could do about the brightness difference, you did boost the color and brightness quite a bit - probably much as a CAM would.

Your edited image can't really show anyone what the scene looked like to you, since you edited it to appear the way it did under those specific conditions. Looking at it now on my monitor it can't possibly appear the way it did to you when you edited it.

If you were to repeat your interesting experiment with a scene with luminances that your laptop can actually reproduce, is there a reason you couldn't make it look like a window?
« Last Edit: October 14, 2013, 07:42:14 PM by crames »
crames
« Reply #114 on: October 14, 2013, 07:59:11 PM »

Agreed! If you display scene-referred data on the laptop, it's not going to look anything like the scene itself. If you display output-referred data on the laptop, well, it's been processed by someone or something, so we're back to square one (someone or something like the camera itself altering the data). The potentially huge differences between scene and display in gamut, contrast ratio and brightness should not be underestimated either.

Those are all good reasons for it not matching. The biggest difference is going to be the brightness. But if you can get a colorimetric match and a brightness match, there is no reason the laptop won't reasonably match the scene under the same viewing conditions.
D Fosse
« Reply #115 on: October 15, 2013, 07:51:17 AM »

Very interesting discussion, because it makes you question all the things you sort of took for granted.

So here's one thing that I always took for granted: a photograph isn't meant to be an "accurate" representation of reality; in fact, that notion is fundamentally flawed to begin with. The whole point is the interpretation, the translation from one realm to another. That is the art of it - what to leave in and what to take out; what to emphasize and what to ignore. The mere act of framing in the viewfinder begins this interpretation and removes the image from reality.

IOW my conclusion is that "accurate color" is a contradiction in terms. There's no such thing - even if it were technically possible (which I believe it isn't).

---

I meet this head-on every day, since I'm a photographer at an art museum and accurate color is the basic requirement underlying almost everything I do. So I dutifully calibrate and profile and measure everything to the best of my abilities. But I quietly gave up on accuracy a long time ago. What I aim for is equivalence.
« Last Edit: October 15, 2013, 07:54:57 AM by D Fosse »
hjulenissen
« Reply #116 on: October 15, 2013, 08:14:11 AM »

So here's one thing that I always took for granted: a photograph isn't meant to be an "accurate" representation of reality; in fact, that notion is fundamentally flawed to begin with. The whole point is the interpretation, the translation from one realm to another. That is the art of it - what to leave in and what to take out; what to emphasize and what to ignore. The mere act of framing in the viewfinder begins this interpretation and removes the image from reality.
"Isn't meant to be an "accurate" representation" by whom? The photographer? The camera manufacturer? The raw developer? I'd say that a photograph cannot currently be an "accurate" representation of general physical scenes because (among other things):
-Any movement is frozen
-Focus is fixed
-The rendering is 2d/non-stereoscopic
-DR is limited compared to many scenes

Might it be possible to do accurate reproduction of painted art if there is no visible structure in the paint (and all kinds of expensive, impractical measures are taken - I am sure you know the trade-offs better than I do)? But for general photographers, that might be too limiting.

I don't see why one should not _try_ to make the illusion as "accurate" as possible. It might prove impossible to get all of the way, but getting 90% of the way might be of value.
Quote
IOW my conclusion is that "accurate color" is a contradiction in terms. There's no such thing - even if it were technically possible (which I believe it isn't).
If one believes that our perception of vision is only affected by the photons hitting our eyes, then absolute accuracy would consist of merely recording and re-emitting those photons. If we further restrict ourselves to a limited range of wavelengths and a somewhat stochastic/quantized description of those photons (position, energy, etc.), the problem is slightly less impractical. So how far do we have to take these simplifications in order to do something that can actually be built and is still perceptually transparent (or, failing that, at least non-annoying)?

If the OP is often able to get where he wants (the impression of the print matching his memory of the scene) by manually tweaking the two common WB parameters, we seem to be quite close - we only need to find two scalars automatically. Now, if the print were presented side-by-side with the original scene, the bar would probably be raised.

-h
« Last Edit: October 15, 2013, 08:25:53 AM by hjulenissen »
D Fosse
« Reply #117 on: October 15, 2013, 08:33:16 AM »

"Isn't meant to be an "accurate" representation" by whom?

Poor choice of words on my part. What I really mean is "can never be" - and for exactly the reasons you list.

Quote
I don't see why one should not _try_ to make the illusion as "accurate" as possible

Me neither, and I do try. But it can never be a perfect match, and that's why I used the term "equivalent". If I hit colors that are equivalent, people will accept them as accurate and be happy. And to take it further, that's why it doesn't have to be a perfect match.
digitaldog
« Reply #118 on: October 15, 2013, 09:35:01 AM »

If one believes that our perception of vision is only affected by the photons hitting our eyes, then absolute accuracy would consist of merely recording and re-emitting those photons?
If they believe that, they need to do a lot more reading on the perception of vision! That's a simplistic view (no pun intended) and far from reality.
Hening Bettermann
« Reply #119 on: October 15, 2013, 01:30:03 PM »

Some thoughts on some terms.

Accuracy.
@myself post #100
> It seems you neglect the difference between accuracy and precision, and the fact that they are only loosely tied to each other. What you require is not just accuracy, but accuracy AND precision at the spectrometer level.

Well, it turns out that my view is yesterday's truth. It refers to this model: fig. 1
(source: http://en.wikipedia.org/wiki/Accuracy_and_precision)
which is now superseded by this one:
figs. 2 and 3
In the new terminology, accuracy consists of precision plus what is now called trueness, so precision is in fact part of the definition of accuracy. So noted.
So from now on, my goal will be called 'true' colors.
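The new split can be made concrete with a small sketch (the kelvin readings and the 5200 K reference below are made-up numbers): trueness is the bias of the mean from a reference value, precision is the spread of repeated readings, and accuracy, in the ISO 5725 sense, names their combination.

```python
import math

def trueness_and_precision(measurements, reference):
    """Split accuracy the ISO 5725 way: trueness is the bias of the
    mean relative to a reference; precision is the spread (standard
    deviation) of the repeated measurements."""
    n = len(measurements)
    mean = sum(measurements) / n
    bias = mean - reference                                # trueness
    spread = math.sqrt(sum((m - mean) ** 2 for m in measurements) / n)  # precision
    return bias, spread

# Hypothetical WB estimates (kelvin) against a 5200 K reference:
bias, spread = trueness_and_precision([5150, 5250, 5200, 5180, 5220], 5200)
# Here the method is perfectly "true" (zero bias) yet imprecise
# (readings scatter by a few tens of kelvin).
```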


Subjectivity.
That readings from two persons differ does not make a method subjective. It only means there is a person-to-person error, which is part of the overall error. This is true of all measurements performed by humans, and is principally different from 'subjective' in the sense in which 'pleasing' is subjective by definition. The statement 'I like this color better' can of course never be measured, validated, or falsified.

Trueness could, to some degree. Torger has already given a sketch in
post #54:
> say if I have a large number of non-photographers with me when I make a shot, and then show them the print I made afterwards and ask them if they think it's an honest reproduction of the scene - if they think this is close to "how it actually looked" and most say yes, then I've succeeded.
This might - theoretically - be improved by choosing painters rather than ordinary mortals, etc.

Good light and 'true' colours!
Of course I forgot the attachments:
« Last Edit: October 15, 2013, 01:34:51 PM by Hening Bettermann »
