Author Topic: Is accurate color possible in non-standard light?  (Read 11039 times)
Tony Jay
« Reply #60 on: October 05, 2013, 06:47:59 PM »

Quote
And where it existed. Color is a perceptual property, so if you can't see it, it's not a color. Color is not a particular wavelength of light; we define colors based on perceptual experiments. Excitation of photoreceptors, followed by retinal processing and ending in the visual cortex: this all happens in our heads, and that is where the colors exist. Cameras operate on a different level with respect to 'color'.
This is exactly correct.

Tony Jay

hjulenissen
« Reply #61 on: October 05, 2013, 11:25:08 PM »

Quote
And where it existed. Color is a perceptual property, so if you can't see it, it's not a color. Color is not a particular wavelength of light; we define colors based on perceptual experiments. Excitation of photoreceptors, followed by retinal processing and ending in the visual cortex: this all happens in our heads, and that is where the colors exist. Cameras operate on a different level with respect to 'color'.
Even though the word "color" is used to describe a perceptual thing, the world that enables us to sense colors is a very physical thing. Recreate all of the physical stimuli leading to some perceptual response, and chances are you will be as close as you will ever get to recreating that perceptual response once more.

If you have this pastel-colored scene referenced by the OP, I don't get why color philosophy is needed to make it realistic. Rather, it seems to me that the standard practice of "white balancing away the light" is the problem, something which there should be pragmatic cures for?

-h

Hening Bettermann
« Reply #62 on: October 06, 2013, 04:06:20 AM »

> Rather, it seems to me that the standard practice of "white balancing away the light" is the problem, something which there should be pragmatic cures for?

But obviously, there are not! I think the reason is that the standard practice is tailored to something like catalogue photography and art reproduction, where "White balancing away the light" is indeed needed - not to landscape shooting, where one wants to picture that same light and its reflections.

Good light and true color! - Hening

Czornyj
« Reply #63 on: October 06, 2013, 09:01:00 AM »

Quote
If you have this pastel-colored scene referenced by the OP, I don't get why color philosophy is needed to make it realistic. Rather, it seems to me that the standard practice of "white balancing away the light" is the problem, something which there should be pragmatic cures for?

Philosophy aside, you'd need some multispectral camera and an intelligent colour appearance model as a "pragmatic cure" in the above-mentioned situation.
« Last Edit: October 06, 2013, 09:30:59 AM by Czornyj »

digitaldog
« Reply #64 on: October 06, 2013, 11:40:31 AM »

Quote
Even though the word "color" is used to describe a perceptual thing, the world that enables us to sense colors is a very physical thing.
Yes, to some degree. We can measure that color, and by doing so we have a pretty good way to discuss inaccuracy using dE. We're talking about single colors, not colors in context. For calibrating and profiling device behavior, this model, colorimetry, works well.
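To make the dE idea concrete, here is a minimal sketch of the simplest colour-difference formula, CIE76 (dE94 and dE2000 add perceptual weighting on top of the same idea). The Lab values are invented for illustration:

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance between two
    CIELAB colours. One number quantifying how far apart two
    isolated colours are -- no colour-in-context effects."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Invented example: a measured patch vs. its reference value.
reference = (50.0, 0.0, 0.0)   # L*, a*, b*
measured  = (51.0, 1.0, -1.0)
print(round(delta_e_76(reference, measured), 2))  # 1.73
```

A dE around 1-2 is near the threshold of what a trained observer can see for side-by-side solid patches, which is why it works well for judging calibration and profiling of devices.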

Andrew Rodney
Author “Color Management for Photographers”
http://digitaldog.net/

hjulenissen
« Reply #65 on: October 07, 2013, 12:25:19 AM »

Quote
Yes, to some degree. We can measure that color, and by doing so we have a pretty good way to discuss inaccuracy using dE. We're talking about single colors, not colors in context. For calibrating and profiling device behavior, this model, colorimetry, works well.
So, for recreating the effect of "looking through a mountain lodge window"*) onto the scene... Calibrating and profiling the camera and display/print method would be sufficient (of course, you would be limited to the gamut/DR/... of the imaging chain)? Is this similar to using a standard observer as a reference, and minimising the error (given some constraints/regularization) that reproducing the scene using the current camera/display/print would give?

Does the display have to be self-illuminant (e.g. LCD) for this to work, or is it possible to use a (profiled) print along with a profile/guesstimate of the room lighting? (Constrained by the radiated power/reflectance available at each wavelength; a yellow-tinted candle lit interior might not be able to illuminate a convincing simulation of a blue-cast snowy landscape unless that landscape was close to darkness)

-h
*) If my window reference seems odd: I just want to separate the errors that stem from filling the human FOV with one consistent scene from those of filling part of our view with one scene and the rest with a (potentially highly) different scene. A mountain lodge window seems like a real-world reference that we can relate to better than "5 degree standard observers" or something like it.
« Last Edit: October 07, 2013, 12:30:04 AM by hjulenissen »

hjulenissen
« Reply #66 on: October 07, 2013, 12:42:30 AM »

Quote
Philosophy aside, you'd need some multispectral camera and an intelligent colour appearance model as a "pragmatic cure" in the above-mentioned situation.
Why is that?

A crude engineering approach might be:
1. Obtain a profile of the camera (wavelength vs sensitivity for each primary)
2. Obtain a profile for your display (energy vs wavelength for each primary)
3. For a given camera raw file, apply a linear/nonlinear transform that minimizes the squared error within [390, 700] nm (make each output pixel the closest possible spectrum to what is known about the scene)

A less crude approach might be to minimize the error within bands corresponding to "red", "green" and "blue".
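The least-squares step in 3. can be sketched for a single pixel. Everything below is a toy illustration, not a real camera or display profile: three invented triangular display-primary spectra, and a target spectrum approximated by the least-squares weights from the 3x3 normal equations.

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][3] / M[i][i] for i in range(3)]

def best_weights(primaries, target):
    """Least-squares weights w minimising ||target - sum_i w_i * primary_i||^2."""
    G = [[sum(p * q for p, q in zip(primaries[i], primaries[j]))
          for j in range(3)] for i in range(3)]          # Gram matrix
    rhs = [sum(p * t for p, t in zip(primaries[i], target)) for i in range(3)]
    return solve3(G, rhs)

# Invented primaries: three overlapping triangular emission curves
# over 31 spectral samples.
N = 31
def tri(center, width):
    return [max(0.0, 1 - abs(i - center) / width) for i in range(N)]

primaries = [tri(5, 6), tri(15, 6), tri(25, 6)]
# A target spectrum that happens to lie in the span of the primaries:
target = [0.3 * a + 0.5 * b + 0.2 * c for a, b, c in zip(*primaries)]
w = best_weights(primaries, target)
print([round(x, 3) for x in w])  # -> [0.3, 0.5, 0.2]
```

For a target outside the primaries' span, the same weights give the closest achievable spectrum and a nonzero residual; that residual is exactly the "error" being minimised in step 3.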

Having a robust multispectral camera would probably make things easier, but I don't see that it is critical unless the scene contains some "irregular" and "hairy" spectra. Are the OP's images likely to be irregular, or are they more probably smooth and easily characterised?

-h

Czornyj
« Reply #67 on: October 07, 2013, 03:06:30 AM »

The problem is that "images acquired with RGB cameras contains on the contrary a systematic color error, since no basic linear dependency can be found between the spectral sensitivity of RGB cameras and the spectral sensitivity of human observers":
http://www.lfb.rwth-aachen.de/en/research/basic-research/multispectral/

But the major problem is that the observer model is "wrong" - it works for D50-illuminated colours on a 20% grey background in a bright surround. Different colour contexts, viewing conditions, brightness, chromatic adaptation, and countless other factors make it fail:
http://www.cis.rit.edu/fairchild/PDFs/AppearanceLec.pdf

When we take all these factors into account, it becomes obvious that there's not even the slightest chance to get "right" colours in the OP's image, due to limitations of the camera sensor and RAW developing methods.
« Last Edit: October 07, 2013, 03:20:03 AM by Czornyj »

hjulenissen
« Reply #68 on: October 07, 2013, 03:45:16 AM »

Quote
The problem is that "images acquired with RGB cameras contains on the contrary a systematic color error, since no basic linear dependency can be found between the spectral sensitivity of RGB cameras and the spectral sensitivity of human observers":
http://www.lfb.rwth-aachen.de/en/research/basic-research/multispectral/
That sentence seems odd to me. Is there _no_ linear dependency? I would assume that there is one, but that it does not allow for a complete transformation (i.e. there might be an "optimal" 3x3 matrix, but you would still have errors). I am an engineer; errors are a fact of life. The question is how much of a problem those errors are.
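The "optimal 3x3 matrix, but still with errors" idea can be demonstrated numerically. The spectral sensitivities below are invented Gaussian lobes, not measured camera or observer data; the point is only that the best least-squares matrix leaves a residual that is nonzero yet far from total:

```python
import math

# Invented sensitivities sampled 400-700 nm in 5 nm steps: a "camera"
# and an "observer", each three Gaussian lobes. Real curves differ,
# but the shape of the argument is the same.
wl = [400 + 5 * i for i in range(61)]
def lobe(center, width):
    return [math.exp(-((w - center) / width) ** 2) for w in wl]

camera   = [lobe(460, 30), lobe(540, 35), lobe(610, 30)]
observer = [lobe(450, 25), lobe(555, 40), lobe(600, 35)]

def solve3(A, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][3] / M[i][i] for i in range(3)]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

# Best least-squares fit of each observer curve as a linear
# combination of the three camera curves: the rows of the 3x3 matrix.
G = [[dot(camera[i], camera[j]) for j in range(3)] for i in range(3)]
M3 = [solve3(G, [dot(camera[i], obs) for i in range(3)]) for obs in observer]

def sq_residual(target, coeffs):
    fit = [sum(c * camera[j][k] for j, c in enumerate(coeffs))
           for k in range(len(wl))]
    return sum((t - f) ** 2 for t, f in zip(target, fit))

err = sum(sq_residual(observer[i], M3[i]) for i in range(3))
sig = sum(v * v for row in observer for v in row)
# err is strictly positive (no exact linear dependency) but a modest
# fraction of the signal energy (a useful approximate dependency).
print(err > 0.0, err < 0.5 * sig)
```

So "no basic linear dependency" is better read as "no exact linear dependency": an optimal matrix exists, and the engineering question is whether its residual matters for a given scene.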
Quote
But the major problem is that the observer model is "wrong" - it works for D50-illuminated colours on a 20% grey background in a bright surround. Different colour contexts, viewing conditions, brightness, chromatic adaptation, and countless other factors make it fail:
http://www.cis.rit.edu/fairchild/PDFs/AppearanceLec.pdf
If I want my framed image/LCD to look as if it were a window peeping into a landscape, most of those problems would go away, would they not?

I would think that the low-level spectral sensitivity of my sight is relatively stable (cones and rods). High-level processing is probably highly context-dependent.
Quote
When we take all these factors into account, it becomes obvious that there's not even the slightest chance to get "right" colours in the OP's image, due to limitations of the camera sensor and RAW developing methods.
So there is not the slightest chance that results for the OP can be improved by a mere change of software algorithms/rendering goals?

It would be neat to have a Lightroom slider that went from "neutral illumination (relative colours)" to "original illumination (absolute colours)".

-h
« Last Edit: October 07, 2013, 03:48:43 AM by hjulenissen »

Czornyj
« Reply #69 on: October 07, 2013, 04:11:22 AM »

Quote
If I want my framed image/LCD to look as if it were a window peeping into a landscape, most of those problems would go away, would they not?
But how? The sensor introduces some "errors", and then the colours are interpreted as D50 2-degree stimuli on 20% grey in a bright surround, which is 100% not the case for the OP's image.

To capture the "right" colours, we would need a camera with a sensor that better matches the XYZ curves (and maybe a spectroradiometer on top of it), and a RAW developer that takes into account viewing conditions, colour context, scene brightness, and so on, and then renders the properly interpreted colours into the editing space. Or maybe something even more complicated.

Quote
It would be neat to have a Lightroom slider that went from "neutral illumination (relative colours)" to "original illumination (absolute colours)".
Lightroom doesn't know what the "original illumination" really was; that's part of the problem.
« Last Edit: October 07, 2013, 04:26:42 AM by Czornyj »

digitaldog
« Reply #70 on: October 07, 2013, 09:18:26 AM »

Quote
So, for recreating the effect of "looking through a mountain lodge window"*) onto the scene... Calibrating and profiling the camera and display/print method would be sufficient (of course, you would be limited to the gamut/DR/... of the imaging chain)?
I don't know what calibrating and profiling the camera implies, but rarely are the desired or obtained results colorimetrically correct, and they may not match something else.

If you want to know the behavior of an Epson printer today and in a year, with the current tools, that is very useful and doable. You print out a target and measure it today and in a year; you do so on a device that calibrates to a reference and has good repeatability. You define how many patches and where in color space to sample the output. You can plot the daily dE2000 behavior of your printer, compare just the paper (media dE), understand dry-down behavior, etc. None of this has anything to do with color appearance, or colors in context.
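That kind of drift tracking can be sketched in a few lines. The patch values below are invented, and plain CIE76 dE stands in for the dE2000 a real workflow would use (dE2000 adds perceptual weighting of lightness, chroma, and hue):

```python
import math

def de76(lab1, lab2):
    """Plain CIE76 colour difference; a stand-in for dE2000 here."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# The same printed target measured at profiling time and again later.
# Values are invented. The first patch is unprinted paper, so its dE
# alone tracks paper-white drift ("media dE") separately from the inks.
baseline = [(95.0, 0.2, 1.1), (50.5, 30.2, -10.4), (20.1, 5.0, 4.8)]
today    = [(94.6, 0.4, 1.9), (50.9, 29.0, -10.0), (20.3, 5.2, 5.5)]

diffs = [de76(a, b) for a, b in zip(baseline, today)]
print(f"media dE {diffs[0]:.2f}  "
      f"mean dE {sum(diffs) / len(diffs):.2f}  max dE {max(diffs):.2f}")
```

Plotting these numbers per day is exactly the "behavior over time" report described above; none of it requires a colour appearance model, only repeatable measurement.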

Andrew Rodney
Author “Color Management for Photographers”
http://digitaldog.net/

Hening Bettermann
« Reply #71 on: October 07, 2013, 03:46:11 PM »

@czornyj
>The sensor introduces some "errors", and then the colours are interpreted as D50 2 degree stimuli on 20% grey in bright surround which is 100% not the case of OP image.

And yet the OP, Torger, has managed to re-create colors that are very, very close (image in post #55), or at least very trustworthy and natural-looking, by memory. Now if he were to eliminate the memory factor by comparing the actual view with a stepped scale of camera-recorded white balances at shooting time, he might come even closer (if that is at all possible), couldn't he?

hjulenissen
« Reply #72 on: October 09, 2013, 02:18:06 AM »

Quote
But how? The sensor introduces some "errors", and then the colours are interpreted as D50 2-degree stimuli on 20% grey in a bright surround, which is 100% not the case for the OP's image.
The figure below seems to depict the camera raw file signal vs. scene spectrum:

[figure not included]

The figure below seems to depict LCD (using "white" and "RGB" LED backlight) spectral intensity:

[figure not included]
Using information similar to the two figures (for a given camera and display), I guess that one might try to recreate (what is known about) the scene spectrum, with some error, using the freedom in the display. The engineering approach might be to minimize the squared error. I assume that something similar can be done for the 10 or so inks found in photo ink printers, except that you would have to measure the spectrum of the combined ink+paper+illumination.

I don't think that D50, 2-degree stimuli, bright surround, etc. come into play for this example as formulated by me.

Perceptual stuff will probably be relevant once you try to make the error criterion better suited to the goal of making images that are to be viewed by humans (some spectral errors might be more critical than others).

Still, if you could measure the spectrum of every pixel within 400-800 nm with a very small squared error, and recreate it with a very small squared error (using multi-spectral means), a human would probably think the scene was "accurately" recreated if framing, size, etc. were kept constant. Doing the same with a (less accurate) 3-channel colour apparatus is conceptually the same, only with larger error.

-h
« Last Edit: October 09, 2013, 02:27:32 AM by hjulenissen »

Czornyj
« Reply #73 on: October 09, 2013, 04:16:10 AM »

Quote
And yet the OP, Torger, has managed to re-create colors that are very, very close (image in post #55), or at least very trustworthy and natural-looking, by memory. Now if he were to eliminate the memory factor by comparing the actual view with a stepped scale of camera-recorded white balances at shooting time, he might come even closer (if that is at all possible), couldn't he?
Yes, to some degree, like I did myself in case of sample posted here:
http://www.luminous-landscape.com/forum/index.php?topic=82738.msg668928#msg668928
The problem is the difference in scene contrast, the display gamut, and the fact that you can spoil some colours by editing others; but the overall impression was much closer to the look of the scene. I suppose at this moment direct comparison is the simplest solution for getting the "right" colours.

Czornyj
« Reply #74 on: October 09, 2013, 04:38:10 AM »

Quote
Using information similar to the two figures (for a given camera and display), I guess that one might try to recreate (what is known about) the scene spectrum, with some error, using the freedom in the display. The engineering approach might be to minimize the squared error. I assume that something similar can be done for the 10 or so inks found in photo ink printers, except that you would have to measure the spectrum of the combined ink+paper+illumination.
Of course we can't. We have completely no idea what was the spectrum captured by sensels of the camera. And even if we knew it, we couldn't recreate it with the display pixels.

Quote
I don't think that D50, 2-degree stimuli, bright surround, etc. come into play for this example as formulated by me.
Of course it does - we correct the difference between the sensor's spectral curves and the XYZ curves of the standard D50 2-degree observer using camera profiles, then we display colours that produce a metameric match on a monitor. The colours on the display have nothing to do with the original spectra, which, BTW, are unknown.

« Last Edit: October 09, 2013, 04:41:56 AM by Czornyj »

hjulenissen
« Reply #75 on: October 09, 2013, 04:54:40 AM »

Of course we can't. We have completely no idea what was the spectrum captured by sensels of the camera. And even if we knew it, we couldn't recreate it with the display pixels.
If one knows the spectral sensitivity of each color channel of the camera, one _does_ have an idea what the spectrum was.  The measurement may not be very "accurate", but that was never my claim. Claiming that you have "completely no idea what was the spectrum" is clearly wrong.
Quote
Of course it does - we...
Who are "we"? Me? You? Adobe? I was trying to be very clear about my suggestion in my post, yet you seem to read all kinds of assumptions in between my lines. Please don't do that. Please re-read my post and interpret it literally. If I am unclear or you are confused, please ask instead of assuming.
Quote
...correct the difference between the sensor's spectral curves and the XYZ curves of the standard D50 2-degree observer using camera profiles, then we display colours that produce a metameric match on a monitor. The colours on the display have nothing to do with the original spectra, which, BTW, are unknown.
Who mentioned XYZ? Who mentioned camera profiles?

-h
« Last Edit: October 09, 2013, 04:59:03 AM by hjulenissen »

Czornyj
« Reply #76 on: October 09, 2013, 05:53:38 AM »

I'm just trying to explain how it works.

Please rethink it once again: how could we predict the original spectra of the scene, when countless different spectra can induce the same RGB signal?

How could we reproduce the original spectra using a display that has a matrix with only three colour filters?
« Last Edit: October 09, 2013, 06:00:39 AM by Czornyj »

hjulenissen
« Reply #77 on: October 09, 2013, 06:19:56 AM »

I'm just trying to explain how it works.
And I was offering suggestions as to how the OP's problem might be improved.

I am not a color scientist, so my suggestions might be naive or impractical. If so, I'd like to understand why it is so.
Quote
Please, rethink it once again - how could we predict the original spectra of the scene, when countless different spectra can induce the same RGB signal?
How does a carpenter measure the length of a wall when the measure does not have infinite precision? He takes the best reading that he can, makes the best of it, and (ideally) keeps a record of the uncertainty.
Quote
How could we reproduce the original spectra using display that has a matrix with only three colour filters?
You use what you have to the best of your abilities.

An "RGB" camera or display may be seen as a somewhat irregular 3-channel filter bank (spectrally). By adjusting the gain of the 3 channels, you can get some level of approximation to some spectrum. If you had a 256-channel uniform filter bank, the _problem_ would not be fundamentally different (in my view), but the _precision_ of the reconstruction would be a lot better.

Think about it: even with e.g. 256 uniform spectral channels, there will still be limited spectral precision. You will still have ambiguity, in that two different physical scenes can produce the same sampled signal (and thus be impossible to separate or recreate accurately).
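That ambiguity (metamerism, in effect) is easy to show with idealised sensitivities. The "boxcar" channels below are an invented toy, far cruder than real overlapping camera or cone responses, but they make the point that a smooth spectrum and a "hairy" one can produce identical sensor output:

```python
# Toy sensor: three idealised "boxcar" channels, each integrating the
# spectrum over one third of the 30 samples. Invented for illustration;
# real camera/cone sensitivities are smooth and overlap.
N = 30

def respond(spectrum):
    """Channel k integrates samples 10k .. 10k+9."""
    return [sum(spectrum[10 * k:10 * k + 10]) for k in range(3)]

flat  = [1.0] * N          # a smooth, flat spectrum
spiky = [2.0, 0.0] * 15    # a very different, spiky spectrum

# Identical channel responses: the sensor cannot tell them apart.
print(respond(flat), respond(spiky))  # [10.0, 10.0, 10.0] twice
```

The same construction works for any finite number of channels: as long as the channel count is smaller than the number of spectral degrees of freedom in the scene, some pairs of distinct spectra collapse to the same reading.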


I am not convinced that the problem of the OP is really that the spectra are "hard" to record accurately (i.e. "hairy" spikes that can only be recorded using multi-spectral means, or at least a color-sensing apparatus that is a very good approximation to human visual perception). Rather, I have a feeling that the colors are (in a way) accurately recorded by his camera and representable by his display/printer, but that somewhere along the line our "best practice" color processing fails his task, e.g. by attempting to remove the absolute WB. If an "absolute scene white-point" could be established (by camera profile, or by pointing a physical "white picker" towards the pastel sky), perhaps simple changes to color processing could make this the reference point (instead of "neutral white")?

-h
« Last Edit: October 09, 2013, 06:33:52 AM by hjulenissen »

digitaldog
« Reply #78 on: October 09, 2013, 09:36:22 AM »

Quote
And I was offering suggestions as to how the OP's problem might be improved.
Not using colorimetry or the current behavior and options we have to capture images in the field. This was a problem in search of a solution (the actual solution, based on today's tools and techniques: move the sliders and make the images look as you prefer). Now we're stretching into science fiction in terms of what might be possible in an attempt to solve the problem. Czornyj has explained the issues well and asked the right questions (how could we predict the original spectra of the scene, when countless different spectra can induce the same RGB signal?). IF we had the spectral data captured with the image, IF we had the spectral sensitivities of the sensor, we'd be in a different position here. We have neither.
Quote
How does a carpenter measure the length of a wall when the measure does not have infinite precision?
Doesn't wash. We know what data we need just to start this process. You should be asking: how does a carpenter measure the length of a wall when the measurement is only provided as a weight, not a distance? Answer: he can't.

Andrew Rodney
Author “Color Management for Photographers”
http://digitaldog.net/

hjulenissen
« Reply #79 on: October 09, 2013, 02:09:32 PM »

Quote
Not using colorimetry or the current behavior and options we have to capture images in the field. This was a problem in search of a solution (the actual solution, based on today's tools and techniques: move the sliders and make the images look as you prefer).
The OP clearly did not like the solution of "move sliders until you are happy". Taken to the extreme, this is an argument against all color management: just keep printing and pushing sliders until you are happy? Not a very predictable or satisfying process.
Quote
Czornyj has explained the issues well and asked the right questions
Czornyj has explained how things work today. He did not explain why it has to be so, or (as far as I have seen) suggest a solution.
Quote
(how could we predict the original spectra of the scene, when countless different spectra can induce the same RGB signal?).
I am surprised that I am so poor at getting this point across. Are you disputing that 3 parameters can be used to describe a spectrum (with some precision)?
Quote
IF we had the spectral data captured with the image, IF we had the spectral sensitivities of the sensor, we'd be in a different position here. We have neither. Doesn't wash. We know what data we need just to start this process. You should be asking: how does a carpenter measure the length of a wall when the measurement is only provided as a weight, not a distance? Answer: he can't.
A weight does not relate to a tape measure the way 256 channels relate to 3 channels. Weights and tape measures measure two different things; 3-channel and 256-channel cameras measure the same thing with different precision.

It seems that the spectral sensitivities of camera sensors are available. If your Antarctica trip was ruined by poor colors, I am sure you can afford to have a lab measure the sensitivity of your camera with great precision. I agree that true spectral data for every point of a given scene is more of a long shot.

-h
« Last Edit: October 09, 2013, 02:15:56 PM by hjulenissen »