Author Topic: Is accurate color possible in non-standard light?  (Read 11419 times)
digitaldog
Sr. Member
Posts: 9093
« Reply #80 on: October 09, 2013, 02:59:10 PM »
The OP clearly did not like the solutions of "move sliders until you are happy".
I'm aware of that. So his options are: don't move the sliders and get what he gets; set the camera to capture a JPEG and get what he gets; or hope for some science-fiction-like product or device that gives him an exact replica of his memory. Might I point out that I hate flying and would prefer a transporter device, but as yet, that's all science fiction. I fly, he moves sliders (or doesn't).
Quote
Taken to the extreme, this is an argument against all color management.
Taken to the extreme, this is an argument against taking photos. You can take anything to the extreme, including speculating about science fiction, but today that's not a useful process.
Quote
Just keep printing and pushing sliders until you are happy?

Or any similar control to edit your images, otherwise, accept what the automatic system provides and move on.
Quote
Czornyj has explained how stuff works today.

Indeed, and unless you want to speculate and move into the realm of science fiction, a subject that's probably more appropriate in another forum on a different web site, why go there?
Quote
He did not explain why it has to be so, or (as far as I have seen) suggest a solution.
He and I did. Just get a tiny and inexpensive spectrophotometer built into the camera that triggers each time the shutter does, write that EXIF data to the raw, and get the spectral sensitivity of the chip. You're pretty much there, at least in terms of defining the color of the scene and the capture device. There's more work to be done; we still need a good color appearance model because, again, current technology doesn't treat colors in context like we do.
Quote
It seems that spectral sensitivities of camera sensors are available.
Where (and by whom)? And how will you account for the capture colorimetry portion? It's not impossible, it's massively expensive and complex. But then I'm told a transporter isn't impossible either, but rather:
Quote
In 1993 an international group of six scientists, including IBM Fellow Charles H. Bennett, confirmed the intuitions of the majority of science fiction writers by showing that perfect teleportation is indeed possible in principle, but only if the original is destroyed

Andrew Rodney
Author “Color Management for Photographers”
http://digitaldog.net/
ErikKaffehr
Sr. Member
Posts: 7406
« Reply #81 on: October 09, 2013, 03:08:02 PM »
Hi,

I don't really know. The samples the OP posted make me consider mixed light. Part of the image is illuminated by skylight and part by orange-shifted sunlight. The humidity of the air affects the amount of blue removed from the sunlight. So we have, say, 20000K skylight and 3000K sunlight illuminating different parts of the image, and this dual illumination is part of the visual impact of the image.

I guess that modern DSLRs probably handle lighting like this well, basically choosing a white balance with both satisfactory reds and blues. Getting the right WB is the first-order solution; after that we may need to fix some second-order effects.

Best regards
Erik
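As a toy illustration of this dual-illuminant problem, one could imagine blending two white-balance corrections per pixel with a mask. Everything below (the gains, the mask, the image) is invented for the sketch; real raw converters do not expose such a per-pixel pipeline:

```python
import numpy as np

def apply_wb(img, gains):
    """Scale linear R, G, B by per-channel white-balance gains."""
    return img * np.asarray(gains, dtype=float)

def blended_wb(img, gains_sun, gains_sky, mask):
    """Blend two white-balance corrections per pixel.

    mask is 0.0 where a pixel is lit by warm sunlight and 1.0 where it
    is lit by blue skylight; in-between values mix the two corrections.
    """
    m = np.asarray(mask, dtype=float)[..., np.newaxis]  # broadcast over RGB
    return (1.0 - m) * apply_wb(img, gains_sun) + m * apply_wb(img, gains_sky)

# Toy 2x2 linear-RGB frame: left column sunlit, right column skylit.
img = np.array([[[0.8, 0.5, 0.3], [0.3, 0.5, 0.8]],
                [[0.8, 0.5, 0.3], [0.3, 0.5, 0.8]]])
gains_sun = (0.6, 1.0, 1.8)   # cools the warm ~3000K region (made-up numbers)
gains_sky = (1.8, 1.0, 0.6)   # warms the blue ~20000K region (made-up numbers)
mask = np.array([[0.0, 1.0],
                 [0.0, 1.0]])
out = blended_wb(img, gains_sun, gains_sky, mask)
```

The hard part, of course, is not the blend but estimating the mask and the two illuminants from a single capture.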



hjulenissen
Sr. Member
Posts: 1678
« Reply #82 on: October 10, 2013, 05:12:30 AM »
I'm aware of that. So his options are don't move the sliders and get what he gets, set the camera to capture a JPEG and get what he gets, hope there is some science fiction like product or device that gives him an exact replica of his memory. Might I point out that I hate flying and would prefer a transporter device but as yet, that's all science fiction.
If one cannot (or should not) discuss why planes are the way they are, then how can planes ever improve? Rather than trying to ridicule my posts by calling them "science fiction", why don't you use your color expertise to explain why you think it is the wrong direction?

You still haven't explained what the _fundamental_ difference of a 3ch color device vs a 256ch color device is, except the obvious (number of channels). Why is it that a 3ch device cannot "predict the spectrum", while a 256ch device can? If so, where is the limit? Can a 4ch device "predict the spectrum"? Or a 23ch device?
Quote
I fly, he moves sliders (or doesn't). Taken to the extreme, this is an argument against taking photos. You can take anything to the extreme, including speculating about science fiction, but today that's not a useful process.
Or any similar control to edit your images, otherwise, accept what the automatic system provides and move on.
Taking things to the extreme can be an efficient method to make authors of poorly founded claims think things over.
Quote
 
Indeed, and unless you want to speculate and move into the realm of science fiction, a subject that's probably more appropriate in another forum on a different web site, why go there? He and I did. Just get a tiny and inexpensive spectrophotometer built into the camera that triggers each time the shutter does, write that EXIF data to the raw, and get the spectral sensitivity of the chip. You're pretty much there, at least in terms of defining the color of the scene and the capture device. There's more work to be done; we still need a good color appearance model because, again, current technology doesn't treat colors in context like we do. Where (and by whom)? And how will you account for the capture colorimetry portion? It's not impossible, it's massively expensive and complex. But then I'm told a transporter isn't impossible either, but rather:
Your continued attempts at ridicule make for a less interesting discussion and make you look less competent than I think you are. Can you please stop?

I don't get why we need a color appearance model if the goal is to reproduce an image as if one was looking at the scene through a window. Can you explain this?

I think that part of the confusion stems from unwritten assumptions about the desirability of making a landscape image appear as if one is looking through a window. Do you think that this can be desirable, or do you think that all images should be "perceptually corrected"?

-h
« Last Edit: October 10, 2013, 05:16:09 AM by hjulenissen »
digitaldog
Sr. Member
Posts: 9093
« Reply #83 on: October 10, 2013, 09:35:01 AM »
If one cannot (or should not) discuss why planes are the way they are, then how can planes ever improve? Rather than trying to ridicule my posts by calling them "science fiction", why don't you use your color expertise to explain why you think it is the wrong direction?
I have, several times. Did you not note the colors-in-context discussion? Or that we need spectral data of the scene illuminant and the sensor to even approach the data needs of some kind of perceptual color modeling? Or that color is something that happens between your ears, using that large organ called your brain? In the context of this post, color is not a particular wavelength of light. It isn't a set of numbers that defines a single color pixel.
Quote
You still haven't explained what the _fundamental_ difference of a 3ch color device vs a 256ch color device is, except the obvious (number of channels).

You are asking the wrong question. The question should be: do we have spectral data? With the camera, we do not. So you might have missed my point about needing a built-in spectrophotometer in our cameras, recording actual spectral data of the illuminant with each capture.
Quote
Your continued attempts at ridicule makes for a less interesting discussion and makes you look less competent than I think you are.
With all due respect, please take my comment in context to this: you obviously mistake me as someone who gives a s*&t.
Quote
I don't get why we need a color appearance model if the goal is to reproduce an image as if one was looking at the scene through a window. Can you explain this?
I told you at least twice above. We're dealing with numbers for solid colors without reference to other colors; the current technology has a slew of perceptual flaws, some of which people are working to overcome with better color appearance models. Color is a phenomenon that happens inside your brain, not thousands of solid colors that, when viewed on a display, resemble an image. Not what you recall you thought you saw at the scene. I posted a URL on the extreme difference between scene- and output-referred imagery. Start here (http://www.michaelbach.de/ot/col_mix/index.html and then http://en.wikipedia.org/wiki/Color_model) and try to understand how complex human perception is to model, considering the slew of conditions where our vision is fooled.
Quote
Do you think that this can be desirable, or do you think that all images should be "perceptually corrected"?
What the OP wants IS desirable, but unfortunately about as likely as transporting humans at this point in time. I have no idea what you mean by "perceptually corrected", but if that is supposed to mean we have to manually adjust images, based on the current tools and capture technology, to get a pleasing image (an image that isn't anything close to being colorimetrically correct), the answer is yes (and I said that already, probably more than once here).

Hening Bettermann
Sr. Member
Posts: 566
« Reply #84 on: October 10, 2013, 04:06:56 PM »
But if I compare the camera screen to the real scene at shooting time, are not the "things that happen inside my brain" the same for both the scene and the screen (omitting the bias of the JPEG for now)? Could this trick not move the image a bit from "what I recall I thought I saw at the scene" to what I really saw?
I note that Czornyj, who seems to be your peer in knowledge, and with whom you agree in describing the problem, arrives at a conclusion quite different from yours: "The problem is the difference of scene contrast, display gamut, and the fact, that you can spoil some colours by editing others, but the overall impression was much closer to the look of the scene. I suppose at this moment direct comparison is the simplest solution to get the "right" colours." (Post #73).
That isn't QUITE the same as "pull the sliders until you're happy", is it?

Czornyj
Sr. Member
Posts: 1411
« Reply #85 on: October 10, 2013, 04:22:41 PM »
That isn't QUITE the same as "pull the sliders until you're happy", is it?

I pulled the sliders until I got a similar look on the screen; that's the only difference. I did it just to illustrate, in another discussion, the fact that the camera renders the "wrong" colours. The result was that I was criticised for achieving an artificial effect by people who hadn't even seen the original scene, but stated that the JPEG rendered by the camera looked more realistic :D
« Last Edit: October 10, 2013, 04:24:13 PM by Czornyj »

digitaldog
Sr. Member
Posts: 9093
« Reply #86 on: October 10, 2013, 04:30:56 PM »
But if I compare the camera screen to the real scene at shooting time, are not the "things that happen inside my brain" the same for both the scene and the screen (omitting the bias of the JPEG for now)?
That bias is huge and, again, it's output-referred. It's squashed, so to speak, to fit the range of that screen (squashed in terms of dynamic range, gamut, etc.). Consider the illuminant of the LCD and the role it plays in what you're viewing at that moment.
Quote
I note that Czornyj, who seems to be your peer in knowledge, and with whom you agree in describing the problem, arrives at a conclusion quite different from yours: "The problem is the difference of scene contrast, display gamut, and the fact, that you can spoil some colours by editing others, but the overall impression was much closer to the look of the scene.
That's indeed a big part of the 'problem', along with the other issues I brought up. Our conclusions are the same; we're just talking about the different problems that make this so difficult.

Hening Bettermann
Sr. Member
Posts: 566
« Reply #87 on: October 10, 2013, 04:57:19 PM »
I cannot see that your conclusion is the same as Czornyj's. You reject everything if it is not colorimetry; he accepts a partial improvement based on visual comparison. So do I. I think that a visual comparison done at shooting time has a chance to come closer to what I saw than one done from memory, even with all the biases. That's all. That's what I can do with today's tools, and I find it a little more satisfactory than sheer arbitrariness.

digitaldog
Sr. Member
Posts: 9093
« Reply #88 on: October 10, 2013, 05:51:26 PM »
I cannot see that your conclusion is the same as Czornyj's. You reject everything if it is not colorimetry; he accepts a partial improvement based on visual comparison.
I thought my writing here and elsewhere, in other posts, was clear. Let me try again. I reject the term 'accurate color' without colorimetry; without colorimetry we have no basis for the use of the term accurate. I accept pleasing or desired color that isn't colorimetrically correct. By its very nature, pleasing output-referred color is rarely if ever colorimetrically correct. If you look at the LCD (which is a canned rendering), dismiss the illuminant of that device, but see that it matches what you view next to it, that's fine. I've got no problem with that. Calling it accurate is simply not true and can't be proven with any metric. It's subjective. I have no issue with subjective and pleasing color. It IS what nearly everyone here is producing, with differing degrees of work.
Quote
I think that a visual comparison done at shooting time has a chance to come closer to what I saw than one done from memory - even with all the biases. That's all. That's what I can do with today's tools, and I find it a little more satisfactory than sheer arbitrariness.
And that's fine! But it isn't accurate color. It's what you want, expect, accept, are pleased by. Unless you can measure a number of items and 'do the math', calling it accurate isn't an accurate statement. Is that a clearer picture of my POV?

papa v2.0
Full Member
Posts: 196
« Reply #89 on: October 11, 2013, 05:07:15 AM »
Hi
May I chip in here? I have been following this thread, and the OP's question is "Is accurate color possible in non-standard light?"

(Or do you mean 'accurate colour appearance'? There is a difference, but that's a whole new can of worms.)

Almost is the answer, but it's not practical with a single-shot capture and current white balance algorithms.

I think the main problem here is white balancing the scene.

Humans can view a scene with different light sources and are constantly white balancing as we look around (remember most of our colour vision is in the 2° field) and discounting the illuminant, so in effect we perceive a sort of uniform white balance and a perceived colour constancy across the scene.

A camera (or post software) must make a single white balance prediction across the frame, not a pixel by pixel choice.

White point estimation from a single-shot RGB capture is an under-constrained problem.

To calculate the illumination colour we need the surface spectral reflectance properties and the RGB device responses.
We only have the device RGB values.
There are several white balance algorithms of varying complexity: Grey World, Max RGB, Hybrid Retinex, etc. They can produce good results but can fail drastically if the range of scene colours is limited, as in the OP's scene.
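The two simplest estimators named above can be sketched in a few lines. The image and gains are made up; note that both estimators return the "right" answer here only because the toy scene's average really is grey under the illuminant, which is exactly the assumption that fails when the scene's colours are limited:

```python
import numpy as np

def gray_world(img):
    """Grey World: assume the average scene reflectance is achromatic,
    so the per-channel means estimate the illuminant colour."""
    illum = img.reshape(-1, 3).mean(axis=0)
    return illum / illum[1]                  # normalise to green

def max_rgb(img):
    """Max-RGB (white patch): assume the brightest value per channel
    comes from a white or specular surface reflecting the illuminant."""
    illum = img.reshape(-1, 3).max(axis=0)
    return illum / illum[1]

def white_balance(img, illum):
    """Divide out the illuminant estimate (simple von Kries gains)."""
    return img / illum

# Toy linear-RGB frame lit by a warm, red-heavy illuminant.
img = np.array([[[0.9, 0.6, 0.3], [0.45, 0.3, 0.15]],
                [[0.6, 0.4, 0.2], [0.3, 0.2, 0.1]]])
est = gray_world(img)            # estimated illuminant: [1.5, 1.0, 0.5]
balanced = white_balance(img, est)
```

A scene dominated by one hue (a sunset, a forest) breaks the grey-average assumption, so the estimate drags the dominant colour toward neutral, which is the drastic failure mode described above.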


If you are shooting raw, WB is done in post, but it will rely on 'memory of the scene' if done off-site; if done in the field, it would require a well-calibrated monitor with as wide a gamut as possible under the appropriate viewing conditions, preferably not a laptop! But again you are white balancing across the whole scene, so after chromatic adaptation to the output-space white point (D65 or D50), some parts may look correct while others will be out.

Multi-channel capture will be better at WB than a 3-channel capture. A 256-channel device would be like a spectrophotometer! Good, but impractical.

As mentioned by Andrew, a small spectro in the camera at time of capture would help, but the problem of multiple scene illuminants would still arise.

Introducing a colour appearance model would go some way toward achieving the OP's goal, but this again requires robust white point estimation to calculate scene appearance prior to rendering. (See attached.)

One would really need pixel-by-pixel WB and appearance modelling, so come on Nikon, Canon, Adobe, get this sorted.

Iain
hjulenissen
Sr. Member
Posts: 1678
« Reply #90 on: October 11, 2013, 06:02:14 AM »
With all due respect, please take my comment in context to this: you obviously mistake me as someone who gives a s*&t.
With all due respect, I believed that you had something to contribute, other than cursing and ridicule. Have a good day, sir.

-h
« Last Edit: October 11, 2013, 06:27:58 AM by hjulenissen »
hjulenissen
Sr. Member
Posts: 1678
« Reply #91 on: October 11, 2013, 06:18:10 AM »
I think the main problem here is white balancing the scene.

Humans can view a scene with different light sources and are constantly white balancing as we look around (remember most of our colour vision is in the 2° field) and discounting the illuminant, so in effect we perceive a sort of uniform white balance and a perceived colour constancy across the scene.
I agree that WB seems like a problem.

If you can bear with my analogies: a 13"x19" window on the wall of my cabin might cover a large or small part of our vision (depending on how closely you view it). It may also have radically different WB/spectral properties from the cabin interior, so that vista may appear "blue cast". Or any other tint. My thesis is that this "color cast" is acceptable, and perhaps desirable, in this particular setting. Now swap that window with an LCD screen or a passively illuminated print. Do we still want that "color cast"? Again, my thesis is "yes", at least for some images.

How do we recreate the appearance of looking through that window (or, stricter but sufficient: how do we recreate the physics of the light that was flowing through the window, that triggered our perceptual response), and how does this differ from the usual goals of photography?
Quote
A camera (or post software) must make a single white balance prediction across the frame, not a pixel by pixel choice.

White point estimation is, using a single shot RGB capture, an under-constrained problem.

To calculate the illumination colour we need the surface spectral reflectance properties and the RGB device responses.
We only have the device RGB values.
What if the only illumination in the scene is a decidedly "red" LED? Does it still make sense to think about white-point, or is it better to make the best-informed spectral recreation that we are able to do?

Quote
Multi-channel capture will be better at WB than a 3-channel capture. A 256-channel device would be like a spectrophotometer! Good, but impractical.
I agree that the more channels, the higher the spectral resolution/confidence. So with 256 channels you would know _more_ about the spectrum; with 3 channels you would know _something_ about the spectrum.

If the scene is spectrally "smooth" (and this is known or correctly guessed), then the amount of information to be gained by increasing the number of channels should be smaller. Just as, behind an unsharp lens, a high-MP camera cannot be distinguished (very much) from a low-MP camera.
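That intuition can be sketched numerically: if the spectrum is assumed to lie in a smooth, low-dimensional model, three channel responses can pin it down exactly. All the curves below (sensor sensitivities, basis functions, test spectrum) are invented for the illustration:

```python
import numpy as np

wl = np.linspace(400, 700, 61)                    # wavelengths in nm

def bump(center, width):
    """A smooth Gaussian bump, used for both the made-up sensor
    sensitivities and the smooth spectral basis."""
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Made-up 3-channel camera: broad R, G, B sensitivity curves.
S = np.stack([bump(600, 40), bump(540, 40), bump(460, 40)])

# A 3-function smooth basis for candidate spectra. Three channel
# responses can pin down three weights; a richer basis (or a spiky
# spectrum outside it) would leave the problem under-determined.
B = np.stack([np.ones_like(wl), bump(450, 80), bump(620, 80)]).T

true_spd = 0.4 + 0.5 * bump(620, 80)              # a smooth reddish spectrum
responses = S @ true_spd                          # the 3 recorded channel values

A = S @ B                                         # 3x3 channel-by-basis matrix
w = np.linalg.solve(A, responses)                 # fit the basis weights
reconstructed = B @ w

err = np.max(np.abs(reconstructed - true_spd))    # tiny: spectrum lies in the basis
```

The catch is in the assumption: the recovery is only as good as the smoothness prior, and a spiky spectrum (a narrow-band LED, say) falls outside the basis and is silently mapped to the nearest smooth impostor.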
Quote
As mentioned by Andrew, a small spectro in the camera at time of capture would help, but the problem of multiple scene illuminants would still arise.
Are you thinking about a spatially averaged, spectrally high-resolution recording, or a more focused "color picker"? If the sky is the brightest surface in the scene (and a prominent part of the image real estate), and the sky has spectral properties that fall within colors that humans on site would classify as "pastel", where would you map this illuminant in the output rendering?

-h
« Last Edit: October 11, 2013, 06:25:11 AM by hjulenissen »
crames
Full Member
Posts: 210
« Reply #92 on: October 11, 2013, 01:28:38 PM »
It seems to me that, to get h's image-as-open-window effect, what you want to do is duplicate in the image the stimuli that your eyes received at the original scene. In colorimetric terms, if the image can duplicate the same XYZ tristimulus values as the original scene, the cones in your eyes will be stimulated in exactly the same way that they were at the original scene. No need to get all spectral about it; this is the basis of color matching.

So the task boils down to reproducing the XYZ tristimulus values of the scene. A standard input ICC profile provides the means of converting the camera RGB values to XYZs. A complication is that color management wants to adapt the raw XYZ numbers to simulate how the colors might appear in an environment with a specific white point, usually D50. To fool the color management system into letting the original raw XYZ values of the scene pass through without adaptation, I think it would only be necessary to set the camera white balance to D50 (either in the camera settings for JPEG, or in the raw converter when converting raws). You also need to avoid any other changes to white balance. (There is info at the ICC web site about this kind of "Scene-Referred" workflow.)

At this point you should have your original scene XYZs intact. Now there is a problem with displaying them on a monitor: because monitor white points are usually around D65, the scene XYZs would be shifted bluer by the monitor. So the need now is to perform a conversion to the monitor profile using Absolute Intent. This will adjust the values so that the original scene XYZs are reproduced on the monitor despite its D65 white point.

If I'm not mistaken, this should give you an approximation of the original scene XYZs on a monitor. If the monitor can be taken to the original scene, the colors should appear the same as in the scene, provided the monitor is capable of achieving the same brightnesses as in the original scene. When you have the monitor out in the original scene, you will be viewing the monitor in the same viewing conditions as the scene. It is only under identical viewing conditions that matching XYZ tristimulus values will have the same color appearance.
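The difference between the two intents can be sketched with the standard Bradford adaptation, assuming a D50 PCS and a D65 monitor. The matrix and white points are the standard published values; the scene colour is made up:

```python
import numpy as np

# Bradford matrix: XYZ to sharpened "cone-like" responses.
M = np.array([[ 0.8951,  0.2664, -0.1614],
              [-0.7502,  1.7135,  0.0367],
              [ 0.0389, -0.0685,  1.0296]])

XYZ_D50 = np.array([0.9642, 1.0000, 0.8249])      # ICC PCS white
XYZ_D65 = np.array([0.9505, 1.0000, 1.0890])      # typical monitor white

def bradford_adapt(xyz, src_white, dst_white):
    """Media-relative rendering: von Kries scaling in Bradford space,
    mapping colours from the source white to the destination white."""
    gains = (M @ dst_white) / (M @ src_white)
    return np.linalg.inv(M) @ (gains * (M @ xyz))

scene = np.array([0.35, 0.40, 0.30])              # some scene colour in XYZ

relative = bradford_adapt(scene, XYZ_D50, XYZ_D65)  # adapted toward D65
absolute = scene.copy()                             # absolute intent: unchanged
```

Relative rendering maps the D50 PCS white exactly onto the monitor's D65 white (raising Z, i.e. pushing the numbers bluer so they *look* neutral on the bluer display), while absolute intent hands the scene XYZs through untouched, which is the pass-through behaviour described above.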

For the window effect, you will need a monitor capable of matching the brightness of outdoor scenes - it is unlikely you will have one, but high dynamic range monitors are available for $$$. However it is generally ok to scale the XYZs to a lower relative Brightness (which is called Lightness) and still have a good appearance - after all it is usual in photography to display images at other than the original scene brightnesses.

Now as to Torger's OP, the exact matching XYZ color stimuli of a sunset will appear different when viewed under different viewing conditions. Viewing conditions being the color of the light in the viewing environment, the relative luminance of the room light compared to the display, etc. There are Color Appearance Models (such as CIECAM02) that can take the viewing conditions into account and modify the XYZs accordingly in order to have colors appear the same under different viewing conditions. Included are controls for the degree of adaptation, meaning that the models can predict that under stronger colors of light your eyes will not be fully adapted, resulting in a residual color cast in the reproduction. Color Appearance Models haven't really caught on yet in general photography, but might be something interesting to explore.

Cliff
Czornyj
Sr. Member
Posts: 1411
« Reply #93 on: October 11, 2013, 01:59:15 PM »
So the task boils down to reproducing the XYZ tristimulus values of the scene

This is the most common misunderstanding of the problem. We believe that XYZ values ultimately define how we see colours, while it's a VERY simplified model, limited to what we perceive when we observe a D50 2-degree stimulus on a grey background in a bright surround. In the real world our perception is much more complicated than a simple colour matching function.
« Last Edit: October 11, 2013, 02:05:10 PM by Czornyj »

digitaldog
Sr. Member
Posts: 9093
« Reply #94 on: October 11, 2013, 02:32:05 PM »
This is the most common misunderstanding of the problem. We believe that XYZ values ultimately define how we see colours, while it's a VERY simplified model, limited only to what we perceive when we observe a D50 2 degree stimuli on a grey background in bright surround. In a real world our perception is much more complicated than a simple colour matching function.
And unless I'm mistaken, it is limited to what we perceive when we observe a D50 2-degree stimulus on a grey background in a bright surround, for a single color sample.

The answer to the OP's question (the subject line) is simple: no. Not based on the common and affordable solutions used by most people on this forum. IF the question is "Is pleasing color possible in non-standard light?", the answer is yes, with some work involved.

crames
Full Member
Posts: 210
« Reply #95 on: October 11, 2013, 03:01:08 PM »
This is the most common misunderstanding of the problem. We believe that XYZ values ultimately define how we see colours, while it's a VERY simplified model, limited to what we perceive when we observe a D50 2-degree stimulus on a grey background in a bright surround. In the real world our perception is much more complicated than a simple colour matching function.

I guess you didn't read the rest of my post.

I would say it's a common misunderstanding to confuse the physical stimulus (XYZ) with the color perception.

XYZ tristimulus values tell you nothing about how a color will be perceived. But if the viewing conditions match, colors with the same XYZ values will be perceived to match in appearance.

XYZ is based on color matching studies and Grassmann's laws, and is not limited to D50, grey background, and bright surround as you say. To quote from Fairchild's Color Appearance Models:

Quote
if the signals from the three cone types are equal for two stimuli, they must match in color when seen in the same conditions, since no additional information is introduced within the visual system to distinguish them
Czornyj
Sr. Member
Posts: 1411
« Reply #96 on: October 11, 2013, 03:20:58 PM »
The three cone types are capable of independent sensitivity regulation, see p. 8
http://www.cis.rit.edu/fairchild/PDFs/AppearanceLec.pdf

So the signals from the three cone types are not equal for two stimuli under different illuminants.
« Last Edit: October 11, 2013, 03:29:12 PM by Czornyj »

crames
Full Member
Posts: 210
« Reply #97 on: October 11, 2013, 04:03:27 PM »
The three cone types are capable of independent sensitivity regulation, see p. 8
http://www.cis.rit.edu/fairchild/PDFs/AppearanceLec.pdf

So the signals from the three cone types are not equal for two stimuli under different illuminants

Yes: if you change the illuminant, you are either changing the stimuli or changing the viewing conditions, depending on whether you are talking about reflectances times the illuminant or about a self-luminous display. The colors generally won't match under either condition, hence the need for CATs and CAMs.
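The independent sensitivity regulation being discussed here is essentially the von Kries model underlying those CATs. A sketch with invented cone curves and illuminants: the raw cone signals for one surface differ under the two lights, and dividing each cone class by its own response to the illuminant brings them approximately back together:

```python
import numpy as np

wl = np.linspace(400, 700, 61)                 # wavelengths in nm

def bump(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Made-up L, M, S cone sensitivity curves (illustrative only).
cones = np.stack([bump(565, 50), bump(540, 50), bump(445, 40)])

reflectance = 0.3 + 0.4 * bump(550, 60)        # one surface
warm = 0.4 + 0.6 * (wl - 400) / 300            # illuminant rich in long waves
cool = 1.0 - 0.6 * (wl - 400) / 300            # illuminant rich in short waves

def cone_signals(illum):
    """Cone responses to the surface under a given illuminant."""
    return cones @ (reflectance * illum)

raw_warm, raw_cool = cone_signals(warm), cone_signals(cool)
# The raw signals differ substantially: the light reaching the eye changed.

# Von Kries adaptation: each cone class rescales by its own response to
# the illuminant, modelling independent sensitivity regulation.
adapted_warm = raw_warm / (cones @ warm)
adapted_cool = raw_cool / (cones @ cool)
# After adaptation the two nearly agree: approximate colour constancy.
```

The agreement after adaptation is only approximate, which is the whole point of the thread: von Kries scaling models the first-order constancy mechanism, and the residual mismatch is what fuller appearance models try to handle.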

I don't get your point.
Hening Bettermann
Sr. Member
Posts: 566
« Reply #98 on: October 11, 2013, 05:04:08 PM »
@ Andrew, post #88

Thanks for your reply. The picture of your POV is clear, but I think it was so even before. It's either colorimetry or nothing at all. No, you don't reject the *use* of nothing-at-all; on the contrary, you endorse it; but you do not acknowledge that there could be grades of accuracy within nothing-at-all. Would(n't) this, too, be a correct description of your POV?

I have used neither 'accurate' nor 'measuring' in this thread. - I remember the very first lesson in physics at high school. The message was: to 'measure' is to 'compare'. There was no requirement of a particular (high) level of precision.

In this sense, when I compare the view of the scene to a stepped scale of white balances, I 'measure' the WB. It is not very precise, but it is subjective only in the sense that the precision may differ from person to person.

Let me expand on the grades of accuracy as I see them.

grade minus 1. Artistic freedom. Consciously deviating from any strict relationship to the perceived colors of the scene.

grade zero. I pull the sliders until I'm happy; I may not be conscious about what I try to achieve: realistic? just pleasing? or I don't care.

grade 1. The attempt to be 'realistic' (to avoid the term 'accurate'). There is one particular view I consciously try to 'hit', and I discover that there are no readily available tools to do it; I'm referred to my memory. This is the grade that Torger has demonstrated here to great effect. The result cannot be measured with a colorimeter, but it could easily be validated: everybody who has ever seen this kind of light/scene will at once be able to identify it as more realistic than, e.g., the one obtained by the AWB.

grade 2. This is where I - forgive the vanity - see my own attempt. Like grade 1, with the little addition that the memory factor is eliminated. Edit: and the addition of a stepped scale to aid the visual judgement.

grade 3. Colorimetry. You know all about this one.

But you tell me that -1 to 2 are all the same, because they cannot be 'measured' with a spectrometer. I still don't buy it.
« Last Edit: October 12, 2013, 12:10:49 PM by Hening Bettermann »

digitaldog
« Reply #99 on: October 11, 2013, 06:06:16 PM »

Quote
It's either colorimetry or nothing at all.
That isn't what I wrote. I thought I was very clear that pleasing color, which isn't 'nothing at all', is just fine, and something most of us strive for and nearly all of us produce.
 
I reject the term 'accuracy' in a process that is subjective. How can something subjective have any accuracy metric?

We both go to a location together and set up our cameras side by side. We capture the scene at precisely the same moment. We go back to our offices and process the image as we feel we saw the scene. Do they perfectly match? What if they don't? Who's 'correct'? Being subjective, without any means to measure what was there, let alone what happened in our brains, who's to say one capture is more accurate than the other? Using colorimetry, we can easily measure the differences in the two renderings. That doesn't tell us who's closer to the reality of the scene, but it tells us how differently we processed the data.
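To make 'measure the differences in the two renderings' concrete: given Lab values for the same patch in both renderings, the classic CIE76 color difference is just a Euclidean distance in CIELAB. A minimal sketch; the Lab numbers are made up for illustration.

```python
import math

def delta_e76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

mine  = (52.0, 10.0, -8.0)  # hypothetical patch in my rendering
yours = (50.0,  7.0, -3.0)  # same patch in your rendering
print(round(delta_e76(mine, yours), 2))  # → 6.16
```

Note what this does and doesn't buy you: it quantifies how far apart the two renderings are, not which one is closer to the scene. Later formulas (CIE94, CIEDE2000) weight the terms to better match perception, but the point stands either way.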
Quote
I have used neither 'accurate' nor 'measuring' in this thread.
Good!
Quote
- I remember the very first lesson in physics at high school. The message was: To 'measure' is to 'compare'. There was no requirement of a particular (high) level of precision.
That's fine as a start. And it is useful to define the level of precision. But without any measurement, we've put the cart before the horse. How do you measure subjectivity?
Quote
In this sense, when I compare the view of the scene to a stepped scale of white balances, I 'measure' the WB. It is not very precise, but it is subjective only in the sense that the precision may differ from person to person.
Agreed. So if we both do this but end up with different results, because a large part of the process is subjective, who's more accurate? We can't say. We can argue, but we can't come to a firm conclusion. We can ask a third party (who might be biased); they can subjectively tell us which they prefer. None of this helps define which is more accurate in terms of what was there at the scene, let alone what we think we remember of the scene. What if you had 10 hours of sleep the night before and I smoked 5 bong loads just before we took the shot <g>? You think that might affect what we remember seeing hours or days later, when we process the image to match 'what we recall'?
Quote
Let me expand on the grades of accuracy as I see them.
grade minus 1. Artistic freedom. Consciously deviating from any strict relationship to the perceived colors of the scene.
How is the accuracy defined? It isn't.
Quote
But you tell me that -1 to 2 are all the same, because they can not be 'measured' with a spectrometer. I still don't buy it.
They are not exactly the same, but they are processes that have no means to measure accuracy. #3 does (or could). Further, what do we do when we use colorimetry to produce a numeric match and it doesn't look like what we recalled, or looks butt ugly? The reality is that this is often the case, and I illustrated it with actual examples early in this thread. Great, we have a colorimetric match and it looks awful. But it's accurate. The pleasing rendering is, well, more pleasing. You can use the old 'rate this image on a scale from 1 to 10', but guess what: that's subjective.

Bottom line: asking a question like 'Is accurate color possible in non-standard light?' is kind of the wrong question, because 'accurate' is in the question and the process is so damn subjective.
« Last Edit: October 11, 2013, 06:08:08 PM by digitaldog »

Andrew Rodney
Author “Color Management for Photographers”
http://digitaldog.net/