Author Topic: Is accurate color possible in non-standard light?  (Read 9958 times)
digitaldog
« Reply #20 on: October 03, 2013, 10:52:06 AM »

Let me rephrase. There are color models and methods that try to achieve some sort of well-defined color reproduction in controlled situations, used in for example art reproduction.
Key word in the above sentence is try. Trying and achieving (to a hopefully well defined metric anyone can agree upon) are two different things! And what color models do you refer to and do they succeed in providing the match without massive efforts by some human to produce what someone says is a match?
Quote
When you photograph a painting for posterity you don't want it to be some personal taste of the photographer that decides how the colors are rendered. Instead you want to have some standard based on experiments on the color-seeing population so you will render something that is close to the colors most people see when they look at the original painting.
What standard are you referring to?
Quote
From this discussion it seems that my assumption was right -- that there are no such methods, and all color models which strive for some sort of accuracy (or realism perhaps is a better word) work in a fairly narrow color temperature range.

Yes and no (more yes than no). Today's color management makes a lot of assumptions that may not pan out and produce what you would like to consider accurate color. A few examples: Nearly all ICC profiles assume a D50 viewing condition when that may not be the case. Or that only one object in our solar system is capable of producing D50, and further, D50 is a sampling of colors taken all over the planet. Or that the current models used for color management have plenty of issues that produce color matching problems (for example, Lab's attempt at a perceptually uniform color space isn't perceptually uniform: it exaggerates the distances in yellows and consequently underestimates the distances in blues), again producing color matching issues. Or that we have conditions where metameric failure occurs, even with the observer. That there are color appearance models still in the works and not implemented in current color management solutions. That our capture systems can capture 'color' we can't see, or we can see colors they can't capture. Etc, etc, etc.
Quote
Ie, landscape photographers working in dusk or dawn and wanting realism must trust their good color memory and post-processing skills. So I'm not missing out on some color management method I as a landscape photographer should know about, which was my main concern and the reason I opened this thread.
My advice is to continue making lovely images and not get caught up in an accuracy and color-matching rabbit hole. You shot film once? Was that an issue? None of the films reproduced the scene as it appeared to you, and they all had renderings built to convince you to buy that film product. Digital isn't much different. We have large numbers of users who can't get a decent print-to-screen match, let alone reproduce a single print that is accurate to the scene. An emissive display and a reflective print will never match perfectly. To extrapolate past that 'issue' and ask how to produce a capture, then a print, that is accurate to the scene seems awfully difficult, and further, is it even necessary?

Andrew Rodney
Author “Color Management for Photographers”
http://digitaldog.net/
torger
« Reply #21 on: October 03, 2013, 11:25:00 AM »

I'm a little bit familiar with color management but no expert, so it's risky for me to point out any standard in particular. Let's say there are various organizations and companies that work with color and try to solve challenges related to accuracy, and different standards and methods have arisen to calibrate cameras, screens and printers and link them together in a color managed workflow, with ICC profiles, DCPs, color checkers, spectrophotometers and the like. I have myself a color managed workflow from screen to print. I know color management is about reducing the errors, not eliminating them, because you can't.

I'm not planning to get caught up in anything, just checking what the possibilities are. If it were possible to make something more repeatable than me guesstimating white balance and individual color adjustments from shot to shot, I'd love to use it. It would speed up my workflow instead of giving me white-balance-picking angst. It would also help my images keep a similar and stable look from year to year instead of guesstimating freshly each time.

There sure are impossible special cases; the northern lights, for example, are recorded very differently by the camera than by the eyes. Narrow-band mixed artificial light is also an impossible case. I just thought that there might be a chance of better methods for dusk and dawn light.

I don't think the reason such methods don't exist is lack of need or interest, but that it's hard, impractical or perhaps even impossible to make anything better than the photographer randomly pulling the temp and tint sliders, which indeed seems to be the established method :-).
digitaldog
« Reply #22 on: October 03, 2013, 11:50:20 AM »

Let's say there are various organizations and companies that work with color and try to solve challenges related to accuracy, and different standards and methods have arisen to calibrate cameras, screens and printers and link them together in a color managed workflow, with ICC profiles, DCPs, color checkers, spectrophotometers and the like. I have myself a color managed workflow from screen to print. I know color management is about reducing the errors, not eliminating them, because you can't.
The bottom line, and the reason I posted, was to let you know that the term accuracy, in relation to color management, is mostly a marketing term, and without some metric for the accuracy goal you desire, you're going to get caught up in semantics or, far worse, marketing semantics.
If you tell me you want to make a print today and one in a year, and you desire that the printer exhibit no more than an average dE2000 of 1, AND you define how many patches you wish to use so that we have a firm idea of what the average represents, that's very doable with today's tools and software. If you tell me you want your capture and a print to be accurate, I can try to sell you all kinds of products and blow smoke up your behind, which doesn't serve you well.
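The "average dE over a defined set of patches" idea above can be sketched in a few lines. This is an illustrative sketch only: it uses the much simpler CIE76 delta E (plain Euclidean distance in Lab) instead of the dE2000 formula discussed here, and all the patch readings are invented numbers.

```python
import math

def delta_e_cie76(lab1, lab2):
    """Euclidean distance in CIELAB (the older, simpler CIE76 metric)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Hypothetical (L*, a*, b*) patch readings: same chart measured today
# and a year later on the same printer.
patches_today = [(50.0, 2.0, -3.0), (70.0, -1.0, 1.5), (30.0, 0.5, 0.0)]
patches_later = [(50.4, 2.3, -2.6), (69.8, -0.7, 1.2), (30.5, 0.2, 0.4)]

avg_de = sum(delta_e_cie76(p, q)
             for p, q in zip(patches_today, patches_later)) / len(patches_today)
print(f"Average dE76 over {len(patches_today)} patches: {avg_de:.2f}")  # ~0.61
```

The point of stating the patch count alongside the average, as the post says, is that a small average over three patches means much less than the same average over a thousand.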
Quote
If it was possible to make something more repeatable than me guesstimating white balance and individual color adjustments from shot to shot, I'd love to use it.
Yes, you could set up a test lab, capture known color values today, analyze the numbers, and repeat the test in a week to see the dE differences. How that will aid you in the field is highly questionable. And even if we learned that your camera shows a dE2000 difference of X over some number of colors shot today and in a week, due to say temperature or something else, now what? It is one thing to be doing astrophotography or some laboratory capture process where the dE values have to be low, but since you're instead going to use the device to make pleasing images, is this useful? Can you even do anything about this behavior?
Quote
It would speed up my workflow instead of having white balance picking angst.

Get something like a Passport or similar target and photograph it before each capture. Now you come back from shooting a sunset and guess what result you'll get if you white balance? Probably not what you desire and certainly not accurate color to the scene (scene colorimetry). In the studio under the same strobes: that could be a very useful tool and workflow.
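For what it's worth, the gray-card/Passport white balancing being discussed boils down to per-channel scaling of linear raw data so the gray patch comes out neutral. A minimal sketch, with hypothetical patch values (real raw data must still be linear, before gamma, for this to be valid):

```python
def wb_multipliers(gray_rgb):
    """Per-channel gains that map a gray patch to neutral (R = G = B)."""
    r, g, b = gray_rgb
    return (g / r, 1.0, g / b)  # conventionally normalized to green

def apply_wb(rgb, gains):
    return tuple(c * k for c, k in zip(rgb, gains))

gray_patch = (0.42, 0.50, 0.61)   # hypothetical bluish cast: B > G > R
gains = wb_multipliers(gray_patch)
balanced = apply_wb(gray_patch, gains)
print(balanced)  # all three channels now equal 0.50
```

As the thread points out, this makes the light source neutral; it does not reproduce the cast your eye still perceived at the scene.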
Quote
It would also contribute to make my images have a more similar and stable look from year to year instead of guesstimating freshly each time.
Yes, if you always captured in a controlled studio condition with the same strobes. If you are doing product photography, that would be quite useful!
Quote
There sure are impossible special cases, the northen lights for example, are recorded very different on camera than with the eyes.

Begging the question: do you want the image to appear as you thought you saw it (and thus probably inaccurate), or as the camera saw it, which could also be inaccurate due to the limitations or idiosyncrasies of the device and not what you wish to represent in your image?
Quote
The reason it does not exist I don't think is because there's no need or interest, but because it's hard, impractical or perhaps even impossible to make anything better than the photographer randomly pulling the temp and tint slider, which indeed seems to be the established method Smiley.
I'd say both, and I'd say it highly depends on what you're doing with a camera. If you're photographing cytology under a microscope and hoping to show others what you saw by sending them an image, that's one thing. If you're shooting the northern lights and your goal is to express your photographic vision or what you remember you saw, that's a different story.
torger
« Reply #23 on: October 03, 2013, 01:28:50 PM »

I've mentioned several times in this thread that using a color checker as reference won't work, since at extreme color temperatures (indeed, even moderately off 5500K) the eye/brain does not fully compensate and starts to see a cast on objects that look white under 5500K.

I've attached a very clear example of this. Snow is experienced as (quite close to) pure white at midday. The attached picture shows the same snow and ice shot on the same occasion, minutes apart, but one has a "creative" white balance, ie the snow is used as white reference, while the other I have manually tuned to the best of my ability to be a realistic representation of what the eye saw at the scene. The result would be very similar if I white balanced with a color checker or grey card, ie very very far from realistic. Sure you can make a pleasing image, but I also like to be able to make something realistic, preferably in a structured, repeatable way.

Today I don't work so much with pulling individual colors. I try to remember what the cast of white was, or how pink the sky was, etc, and then I pull the white balance to try to get a reasonable match with memory. Then I have a couple of DCP/ICC profiles (made for 5500K I guess) from different software makers and the plain 5500/6500 color matrix (which I think Adobe has come up with), and I do trial and error with those to see if it becomes better or worse. Sometimes the color matrices give me better results than the profiles. When I shoot with the DSLR I sometimes tune the white balance on the camera so the LCD image looks similar to what I see, so I have some sort of reference embedded in the raw to support my memory.

Relying on my memory and pulling sliders to match is the least bad method so far, and now I know that there probably are no better methods that could help me get a realistic color rendering baseline. Tough luck. I want to capture and show the unique, beautiful light as it is up here in northern Sweden (you don't see this kind of dusk light even in the southern parts of Sweden; the light becomes extremely pink/blue), but that only makes sense if I'm able to render it realistically. If I just did it by chance or as I pleased, I could have shot it in much different light and just tuned it. So I don't think it's strange to desire realism, but I understand that with current technology and understanding of brain/eye function in these types of light conditions there are not many tools for the photographer to use.
« Last Edit: October 03, 2013, 01:40:57 PM by torger »
digitaldog
« Reply #24 on: October 03, 2013, 01:50:12 PM »

I've mentioned several times in this thread that using a color checker as reference won't work, since at extreme color temperatures (indeed, even moderately off 5500K) the eye/brain does not fully compensate and starts to see a cast on objects that look white under 5500K.
The Passport has differing colored whites (cool to warm) so you can season to taste, which isn't accurate color (it's pleasing color). You're shooting raw, right? The WB settings have zero effect on that data; you can render it any way you desire.
Quote
I've attached a very clear example of this. Snow is experienced as (quite close to) pure white under midday.
Visually or numerically or both? What numbers, the actual scene referred values or what you end up with after altering the settings?
Quote
The attached picture shows the same snow and ice shot on the same occasion, minutes apart, but one has a "creative" white balance, ie the snow is used as white reference, while the other I have manually tuned to the best of my ability to be a realistic representation of what the eye saw at the scene.

I would submit that unless you measured the illuminant at the scene, both are creative; you apparently prefer one over the other. Again, unless you can measure the colors at the scene and of the final, there's no actual colorimetry happening, and everything else is just a set of numbers you either prefer or you don't. That's subjective.
Quote
Sure you can make a pleasing image, but I also like to be able to make something realistic, preferably in a structured repeatable way.
Tell us what the differences are. What makes the one you call realistic and accurate, rather than merely pleasing?
Quote
I try to remember what the cast of white was, or how pink the sky was etc and then I pull the white balance to try to get a reasonable match with memory.
How is that accurate? What metric can you use to suggest it's accurate since there's nothing measured?
Quote
Sometimes the color matrices give me better results than the profiles.
Better as preferred. That's subjective.

What I'm suggesting is that you consider what you're asking for (accurate color) and how to get it. What you're describing is a desire for a pleasing rendering. You can't record what you remember of the scene inside your brain. What you ate for breakfast could affect your perception that morning, before we even get into the impossibility of defining memory color. Calibrate and profile your display, then adjust to taste to produce a rendering that you, and only you, feel represents the color you recall. I understand you want this to happen more automatically, but short of capturing a reference target as a start, or having a few decent camera profiles as a starting point, there isn't much more you can do. A large part of photography is rendering a scene the way you wish to represent it to your viewer: http://wwwimages.adobe.com/www.adobe.com/products/photoshop/family/prophotographer/pdfs/pscs3_renderprint.pdf

Read this piece (http://www.color.org/ICC_white_paper_20_Digital_photography_color_management_basics.pdf) and examine the dark and unattractive Figure 1. It's vastly more accurate than the other two figures: it's scene referred. But does anyone prefer that rendering over the other two images, which are output referred and numerically far less accurate? I doubt it.
torger
« Reply #25 on: October 03, 2013, 02:26:18 PM »

Yes, I shoot raw :-). The camera rendering still gets embedded as a preview image in most raw formats.

As you can see from the images I attached, the color differences are not subtle, they are huge. So yes, there is no doubt that my manual rendering is more realistic than the snowy white balance, regardless of what I had for breakfast. I had a fellow photographer with me and her renderings look similar to mine; although there is a lot of psychology involved, trained healthy eyes get to the same ballpark. I know memory certainly is unreliable, but what to do? As you say, there's not much else. It does not seem attractive to me to replace an inconsistent 'okay' (my memory) with a consistently bad one (color checker). I think I'll give the checker another chance though; the reference could be used for at least something.

When it comes to the dullness of "accurate" renderings, I know about that, but it's still a good starting point for processing, and often a contrast curve and a slight saturation increase is all that is needed. Getting the hues reasonably realistic is the property I desire most.
digitaldog
« Reply #26 on: October 03, 2013, 02:34:31 PM »

Yes, I shoot raw :-). The camera rendering still gets embedded as a preview image in most raw formats.
You mean the proprietary JPEG rendering that the camera provides, which has no direct bearing on the raw data itself?
Quote
As you can see from the images I attached the color differences are not subtle, they are huge.

They are, but they don't have to be. That's why you capture raw data.
Quote
So yes there is no doubt that my manual rendering is more realistic than the snowy white balance, regardless what I had for breakfast.
No argument, other than that your manual rendering is more to your preference; accuracy can't be attached to this without first having some measurements of the scene. Measurements that will not match what you produced, because you produced an output referred image.
Quote
I had a fellow photographer with me and her renderings look similar to mine; although there is a lot of psychology involved, trained healthy eyes get to the same ballpark.
And that's not a surprise but it also doesn't have any direct correlation to accuracy.
Quote
I know memory certainly is unreliable, but what to do? As you say there's not much there is. It does not seem attractive to me to replace an inconsistent 'okay' (my memory) with a consistent bad (color checker). I'll think I'll give the checker another chance though, the reference could be used for at least something.
If the addition of a color checker doesn't help, don't use it. You could just use that in-camera JPEG, which may have no relationship to reality in terms of scene accuracy, as a starting point. That is again just another rendering interpretation that isn't any more accurate. Just as if you captured the scene with Velvia, Ektachrome and Agfachrome: all would be different renderings. You'd prefer one. None are accurate. You pick a film stock based on the rendering you prefer.
Hening Bettermann
« Reply #27 on: October 03, 2013, 04:25:41 PM »

Torger,

I am not a color scientist, but I share your view of the problem (as well as the love for Lappland :-) ). May I humbly point you to an attempt I have described here:
http://www.luminous-landscape.com/forum/index.php?topic=73620.0

I have not yet had the time to experiment with it in more detail. It might be interesting to hear if/how it works for you, who have daily access to "extreme" white balance situations.

Good light - and "true" color!

Vladimirovich
« Reply #28 on: October 03, 2013, 05:37:53 PM »

Help me out with some color management basics;

As far as I understand most systems are calibrated for some standard light


Consider that the Bayer CFA on top of a particular sensor is fixed once and for all when the camera is designed and manufactured, and the CFA is designed with specific properties for its "R"-"G1"-"B"-"G2" filters... it is reasonable to assume that you could design a "better" set of filters for a specific spectrum of incoming light, but you can't alter the CFA's properties on the fly to account for different types of illumination... so yes, it is designed for something like "daylight" and it is naive to think that post-processing can fix that 100%... the "damage" (metameric failures, etc) is already done before any software or profiles come into the picture.
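The fixed-CFA point can be illustrated numerically: each raw channel is (roughly) a sum over wavelengths of illuminant x reflectance x filter sensitivity, so changing the illuminant changes the raw response in a way that is baked in before any profile is applied. All the spectra and sensitivities below are invented toy numbers on a coarse 5-band grid, purely for illustration.

```python
import numpy as np

# Hypothetical CFA sensitivities on 5 coarse bands (~400-700nm); rows = R, G, B.
sens = np.array([
    [0.0, 0.1, 0.2, 0.6, 0.9],
    [0.1, 0.5, 0.9, 0.5, 0.1],
    [0.9, 0.6, 0.2, 0.1, 0.0],
])
reflectance = np.array([0.4, 0.5, 0.6, 0.5, 0.4])   # one fixed surface
daylight = np.array([0.9, 1.0, 1.0, 1.0, 0.9])      # flat-ish spectrum
tungsten = np.array([0.2, 0.4, 0.7, 1.0, 1.3])      # red-heavy spectrum

def camera_rgb(illuminant):
    # Each channel integrates illuminant * reflectance * filter sensitivity.
    return sens @ (illuminant * reflectance)

print(camera_rgb(daylight))
print(camera_rgb(tungsten))  # same surface, different raw RGB
```

Two spectrally different stimuli that happen to produce the same three sums are indistinguishable to the camera, which is exactly the metameric failure mentioned above.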
torger
« Reply #29 on: October 04, 2013, 01:12:55 AM »

Torger,

I am not a color scientist, but I share your view of the problem (as well as the love for Lappland :-) ). May I humbly point you to an attempt I have described here:
http://www.luminous-landscape.com/forum/index.php?topic=73620.0

I have not yet had the time to experiment with it in more detail. It might be interesting to hear if/how it works for you, who has daily access to "extreme" white balance situations.

Good light - and "true" color!

I see that you use the same support technique as I do, estimating the white balance by matching the LCD/live view, just a little bit more elaborate than I've done so far. I also see that you get the expected critique, that the screen is a poorly tuned sRGB etc, but anyone who has worked with this knows that the casts and matching errors of screens (with a few exceptions; use a good camera model!) are much smaller than the range of white balances you can be confused about in post when you work with these types of scenes and have no recorded reference.

One has to distinguish between small errors and large errors. Some seem to think that since there will always be small errors (and they are good at pointing those out) you could just as well skip it altogether and have very large errors, and then they go on questioning why one would want realistic color at all. I don't think that's a very helpful approach. Some of us maybe want to have realism as a style, just as others have surrealism. And even if I want surrealism, I at least prefer to have a realistic starting point so I know what I'm doing. So I'm glad to see that there's someone else who has this problem and has found a technique that makes the post-processing challenge a bit easier.

I like Schewe's anecdote in your thread, about the lecture on white balance in Antarctic light. Polar light is special and you get an unusual white balance challenge. The lecturer's solution was to do the "from memory" thing until it looks right, but I think your method of manually tuning and screen matching on site provides good support for finding a fairly realistic starting point in post. The possibility of doing so is fairly new, as one needs high quality live view renderings, so it's certainly a technique worth evaluating further.
« Last Edit: October 04, 2013, 01:38:32 AM by torger »
hjulenissen
« Reply #30 on: October 04, 2013, 02:28:29 AM »

The physical equivalent of a "color picker" sounds like a neat thing. Use a modified colorimeter or some other gadget, for either very small or very large AOV. Point it towards the sky, the land or the sun. Get a reading. Use it afterwards as a reference point.

These things are commonly >3 bands, are they not?

-h
torger
« Reply #31 on: October 04, 2013, 03:30:52 AM »

The physical equivalent of a "color picker" sounds like a neat thing. Use a modified colorimeter or some other gadget, for either very small or very large AOV. Point it towards the sky, the land or the sun. Get a reading. Use it afterwards as a reference point.

These things are commonly >3 bands, are they not?

Pro spectrophotometers are very expensive of course ($10K++), but a consumer model like my ColorMunki is like $500 and measures in 10nm bands. With optics it could possibly become a spot-meter-style color picker. The trouble is what to do with those values, and how to get them into the workflow so they translate into more realistic and repeatable color reproduction. Maybe a full spectrum reading of a gray card would be easier to use and provide enough data to make something better than current guesstimating techniques, but we would need to develop color models that translate "extreme" spectra into some sort of white balance. The current models don't deal with extreme spectra as far as I know.
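As one small piece of the "what to do with those values" puzzle: once a spectral reading has been integrated against the CIE 1931 color matching functions to get an (x, y) chromaticity, a correlated color temperature can be estimated with McCamy's well-known 1992 approximation. A sketch (the CMF integration step is assumed and omitted here):

```python
def mccamy_cct(x, y):
    """Approximate correlated color temperature in kelvin from CIE 1931
    (x, y) chromaticity, using McCamy's 1992 cubic formula. Reasonable
    roughly between 2000K and 12500K for points near the blackbody locus."""
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

# Sanity checks against known white points:
print(round(mccamy_cct(0.3127, 0.3290)))  # D65, ~6505
print(round(mccamy_cct(0.3457, 0.3585)))  # D50, ~5001
```

Note this only yields a temperature/tint-style summary of the illuminant; as the post says, it does not by itself model how the eye adapts to an extreme spectrum.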

I suspect that a live view camera with a good display and manual matching on site would be a simpler technique giving about the same size of error. This method is exactly the same as the established "pull sliders randomly in post until it looks like you remember it", but you do it on site with the real scene as a guide.

I know the method is open to lots of criticism concerning calibration of the camera display and the very strange viewing conditions, but the fact is that the range of white balance confusion ("should I use this or that?") that can arise for raw files shot under, say, arctic dusk light is very large, and as far as I can see much larger than the errors introduced by the matching method. When I post-process I use a calibrated screen, but I have noted that even when looking at images on an uncalibrated screen you see the same types of color errors (tints) as with the calibrated screen, ie calibration is only necessary for really fine-tuned work and print matching; the errors we're talking about here, when trying to represent extreme landscape light in a realistic manner, are on a much grander scale. Therefore I think it could actually be quite a good method, but I need to evaluate it more, and winter is coming :-).
« Last Edit: October 04, 2013, 03:33:15 AM by torger »
stamper
« Reply #32 on: October 04, 2013, 03:43:06 AM »

torger, you seem very knowledgeable about the subject, which makes me wonder why you posed the question in the first place?

ErikKaffehr
« Reply #33 on: October 04, 2013, 04:03:20 AM »

Hi,

My guess is that it is best to find a color temperature and tint that works.

Part of the problem is that you have a mix of blue skylight and yellow-red sunlight, both add to perception.

Best regards
Erik



hjulenissen
« Reply #34 on: October 04, 2013, 04:20:01 AM »

Pro spectrophotometers are very expensive of course ($10K++), but a consumer model like my ColorMunki is like $500 and measures in 10nm bands. With optics it could possibly become a spot-meter-style color picker. The trouble is what to do with those values, and how to get them into the workflow so they translate into more realistic and repeatable color reproduction.
If you point a colorimeter (or spectrophotometer) converted to spot use towards some point in the sky, you would get a precise measurement of the spectral characteristics of that point, yes? (If you had a lot of spare time, you could raster-scan your scene like an old video camera and get a spatially crude but spectrally accurate representation.)

Print the image, hang it up on the wall under some lighting, and do a second reading of that same spot (if the sky is smooth, spatial accuracy may not be that important). Or trust the profiles of this printer to be "accurate".

You wish for those two readings to be similar, right? So what operation is needed on the image data to make the print have the closest possible spectral characteristics at that point to those of the original scene (within the degrees of freedom in the ink, paper, lighting etc)? Many operations would likely get you there, but you'd want one that does not mess with tonality, perhaps a simple 3x3 color channel matrix.
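The "simple 3x3 color channel matrix" suggested above is typically found by least squares over a set of measured patches. A sketch of that fit; all the patch numbers here are invented for illustration:

```python
import numpy as np

# Fit a 3x3 matrix M so that M @ camera_rgb approximates target_xyz,
# in the least-squares sense, over N measured patches.
camera = np.array([[0.20, 0.30, 0.50],
                   [0.60, 0.55, 0.10],
                   [0.35, 0.40, 0.25],
                   [0.10, 0.15, 0.70]])   # N x 3 camera responses (invented)
target = np.array([[0.25, 0.28, 0.55],
                   [0.55, 0.58, 0.12],
                   [0.36, 0.41, 0.27],
                   [0.14, 0.13, 0.72]])   # N x 3 reference values (invented)

# lstsq solves camera @ X ~= target; X is the transpose of our matrix M.
M_T, _res, _rank, _sv = np.linalg.lstsq(camera, target, rcond=None)
M = M_T.T
predicted = camera @ M.T
print(np.abs(predicted - target).max())  # worst-case fit error over the patches
```

With more patches than the nine free matrix entries can fit exactly, some residual error always remains, which is one reason a single matrix cannot perfectly correct a camera whose filters do not match human cone responses.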

What I am suggesting (without really knowing what I am talking about) is a poor man's substitute for a multispectral camera, rendered with "absolute" spectral response in mind. If your sky had a spectral peak at 620nm and a valley at 640nm etc, and you want to recreate just that, then why mess around with concepts such as white balance, illuminant, human perception etc any more than necessary? If for nothing else, it would be really interesting to see if a "camera color system designed by a physics academic with little interest in human perception" could capture your scene more to your liking.

My working hypothesis all along is that if you can recreate all of the (perceptually relevant) physical stimuli that caused such an impression in the real scene, then your vision should respond the same way when presented with the same stimuli at a later time. Generally, this assumes that you cover your entire FOV and that there is no coupling between eyes, ears, taste, knowledge etc. In the absence of anything better, I think that is fair.

-h
« Last Edit: October 04, 2013, 04:29:46 AM by hjulenissen »
stamper
« Reply #35 on: October 04, 2013, 04:40:45 AM »

Generally, this assumes that you cover your entire FOV and that there is no coupling between eyes, ears, taste, knowledge etc. In the absence of anything better, I think that is fair.
 
And you forgot about having a good memory... especially if you took about 100 shots on that particular day? ;-)

hjulenissen
« Reply #36 on: October 04, 2013, 05:02:21 AM »

And you forgot about having a good memory....especially if you took about 100 shots in that particular day? Wink
I am curious as to why these problems appear; in such cases I have been known to offer suggestions and thought experiments that can be impractical, investigate asymptotic behaviour, etc.

Would a multi-spectral camera (if one suitable were available) coupled to a multi-spectral workflow and printer solve the problem? Or is it more philosophical about the idea that one can do a 2-d "image" of a scene, move it to another time another place and expect to recreate the same perceptual response?

-h
Logged
torger
Sr. Member
****
Offline

Posts: 1376


« Reply #37 on: October 04, 2013, 06:44:48 AM »
Reply

I am curious as to why these problems appear; in such cases I have been known to offer suggestions and thought-experiments that can be impractical, investigate asymptotic behaviour, etc.

Would a multi-spectral camera (if a suitable one were available) coupled to a multi-spectral workflow and printer solve the problem? Or is the problem more philosophical, about the idea that one can make a 2-d "image" of a scene, move it to another time and another place, and expect to recreate the same perceptual response?

One problem is the brain's evening-out effect, most clearly demonstrated with mixed artificial light. If you photograph that and want a realistic representation, you will feel that you need multiple white balances to bring the lights closer together, as they were experienced on scene.

The simplest way to see this is to assume the brain fully compensates, i.e. performs "auto white balance" so that the neutral is seen as pure white; to model this behavior one can shoot a gray card and set the white balance with a color picker. However, it is more correct to say that the brain brings the "neutral" color closer to pure white, but not all the way. In arctic dusk light the "white" is experienced as very far from white, so this simple model will not yield a realistic result.
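The "closer to pure white but not all the way" behavior described above is essentially partial chromatic adaptation, and it can be sketched as a von Kries-style transform whose adaptation degree is less than one. A minimal illustration in Python/NumPy using the Bradford matrix; the right `degree` value for any given scene is an assumption you would have to tune by experiment, not an established model:

```python
import numpy as np

# Bradford chromatic adaptation matrix (XYZ -> cone-like response).
BRADFORD = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])

def adapt(xyz, white_src, white_dst, degree=1.0):
    """Von Kries-style adaptation from white_src to white_dst.

    degree=1.0 models a fully adapted observer (classic full white
    balance); degree<1.0 leaves some of the illuminant cast in place,
    mimicking the eye's incomplete adaptation in extreme light.
    """
    lms_src = BRADFORD @ white_src
    lms_dst = BRADFORD @ white_dst
    # Blend the per-channel gains toward 1.0 (no correction) as degree falls.
    gains = degree * (lms_dst / lms_src) + (1.0 - degree)
    M = np.linalg.inv(BRADFORD) @ np.diag(gains) @ BRADFORD
    return M @ xyz
```

With degree=1.0 this reduces to an ordinary full white balance, and with degree=0.0 the colors are left untouched, so for something like arctic dusk the realistic setting would presumably lie somewhere in between.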

As far as I know there are no models of the brain's "white balance" function, so even if you have a full spectral recording you don't have a model that can translate it into how the eye experienced it. I do think it would be possible to make such a model based on experiments. Another aspect is that the color response may change more drastically (like it does in low light) than just a white balance difference, so you would need a totally separate DCP/ICC for more extreme light. The standard DCPs are typically made for 5500K and 6500K. I don't really know how well those translate to extreme light. Probably not perfectly, but my guess is that it's not too bad either; if you can secure the white balance you have come a long way.
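For reference, the dual-illuminant scheme used by DNG camera profiles interpolates between the two calibration matrices linearly in inverse correlated color temperature, which is why the choice of the two calibration points (e.g. 5500K and 6500K) matters when the scene light falls far outside them. A rough sketch with hypothetical matrices, assuming a simple clamp outside the calibrated range:

```python
import numpy as np

def interp_matrix(m_a, m_b, cct_a, cct_b, cct):
    """Interpolate two camera calibration matrices linearly in 1/CCT,
    in the manner of the DNG dual-illuminant scheme.

    m_a/m_b are the matrices profiled at cct_a/cct_b (Kelvin);
    cct is the estimated scene color temperature.
    """
    # Clamp to the calibrated range: outside it we just extrapolate flat.
    cct = min(max(cct, min(cct_a, cct_b)), max(cct_a, cct_b))
    w = (1.0 / cct - 1.0 / cct_b) / (1.0 / cct_a - 1.0 / cct_b)
    return w * m_a + (1.0 - w) * m_b
```

Because the weight is computed in inverse temperature, two calibration points as close together as 5500K and 6500K give the interpolation very little leverage for extreme light: anything outside that span is effectively rendered with one of the endpoint matrices.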

The eye/brain has other "evening out" effects too, for example that you can much more easily see past reflections on a window (it's about depth perception) than in a photograph, or that you don't find haze as disturbing in real life as in a photograph, which means that you may use a polarizer to reduce reflections and slightly reduce blue channel content on distant objects to get more realism.

And then we have all other sorts of lighting conditions, low light, narrow band atmospheric phenomena like northern lights etc.

(And we have print paper and viewing conditions which affect the experience, but I think people have a tendency to emphasize these issues way too much compared to color differences occurring in the post-processing step. Whether the paper is warm-white or cool-white has a much smaller impact on the realism of an arctic winter photograph than choosing an appropriate white balance in post-processing. Those are fine-tuning problems.)

About here people get bored, think "it will never be accurate, so why care?" and just make something they find pleasing, not caring about realism at all. I'm fine with that, but some of us find this interesting, and I prefer being able to talk about the subject without getting into meta discussions about what photography really should be about Smiley. Sometimes I use the camera response creatively (long exposures in low light are very different from the eye's response, for example), and sometimes I want a realistic rendering of the eye's experience at the scene, such as my arctic winter example. I think it becomes more rewarding to capture extraordinary light if it can be realistically reproduced.

In one respect, capturing extraordinary light on film was more rewarding than it is on digital. Film did not necessarily render it realistically (it had less potential to do so than digital), but the color conversion was the same, so something extraordinary in real life became something extraordinary in the picture. With digital you can just play around with contrast and saturation so everything looks extraordinary in the same way. You can be disciplined and limit your manipulation, but in extreme light I find that a challenge arises in finding a realistic starting point.

Even in less extreme light I'd be glad if I could have a more realistic representation of colors. I've been shooting fall colors recently with my medium format system and I just did not manage to replicate a color response that seemed true to the scene: everything was either too brown or too green, and I couldn't find a proper mixture of yellows and greens. I'm not sure if it was my memory failing me or something else. In the end I chose something that looked pleasing, although I'd prefer the original if I could have replicated it. So I have become interested in what techniques there are to reproduce realistic colors. Due to the brain's "evening out" effects, maybe the most true-to-the-original-scene impression is not given by the most accurate colors; there may be some psychovisual effects involved.
« Last Edit: October 04, 2013, 07:03:34 AM by torger » Logged
hjulenissen
Sr. Member
****
Offline

Posts: 1666


« Reply #38 on: October 04, 2013, 07:02:00 AM »
Reply

One problem is the brain's evening-out effect, most clearly demonstrated with mixed artificial light. If you photograph that and want a realistic representation, you will feel that you need multiple white balances to bring the lights closer together, as they were experienced on scene.
Not sure if you got my point. If you had a 13"x19" window in your darkened mountain cabin offering a view of the arctic valley below, would you feel a need to WB it? If you swapped this window for a 13"x19" print (with identical spectral characteristics), would you feel a need to WB it? What if you lit the interior of your cabin with candlelight, and/or swapped the cabin for a city flat in summer?

I think that the experience-based reliance upon WB concepts may be counter-productive in cases like this. I might be wrong.

-h
Logged
torger
Sr. Member
****
Offline

Posts: 1376


« Reply #39 on: October 04, 2013, 07:16:47 AM »
Reply

Not sure if you got my point. If you had a 13"x19" window in your darkened mountain cabin offering a view of the arctic valley below, would you feel a need to WB it? If you swapped this window for a 13"x19" print (with identical spectral characteristics), would you feel a need to WB it? What if you lit the interior of your cabin with candlelight, and/or swapped the cabin for a city flat in summer?

I think that the experience-based reliance upon WB concepts may be counter-productive in cases like this. I might be wrong.

If I look at reality through a window in a dark room there's no difference from standing outside; you'll still see that the arctic dusk light is indeed blueish. If the room is lit with ordinary room lights you'll experience the brain's "evening out" effect: you'll see that the outdoor light really is blue and the indoor light is much, much warmer, but it does not really look strange. If you take an indoor-white-balanced photograph of the same scene, the outdoors will look much more extremely blue than it did to the eye, and if you adjust the WB so the outdoors looks realistic the indoors looks wrong. In that case you need dual white balance, a problem architecture photographers often have to deal with.
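The dual-white-balance case can be sketched as blending two renderings of the same raw frame, one balanced for the indoor light and one for the window light, weighted by a per-pixel mask. How the mask is built (hand-painted, luminance-based, etc.) is left open; this is just an illustration of the blend, not an established workflow:

```python
import numpy as np

def dual_wb(raw_rgb, gains_indoor, gains_outdoor, outdoor_mask):
    """Blend two white-balanced renderings of one raw frame.

    raw_rgb:       HxWx3 linear raw data
    gains_*:       per-channel WB multipliers (length-3 arrays)
    outdoor_mask:  HxW weights in [0, 1], 1 = fully lit by window light
    """
    indoor = raw_rgb * gains_indoor
    outdoor = raw_rgb * gains_outdoor
    m = outdoor_mask[..., None]          # broadcast mask over channels
    return m * outdoor + (1.0 - m) * indoor
```

In practice one would want a soft, feathered mask so the transition between the two balances is as gradual as the brain's own evening-out effect makes it appear on scene.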

Let's say we make a print with identical spectral characteristics; we would have to make it a backlit print for that to work. I think it would be experienced as looking out through the window. To make a regular print that works in ordinary display conditions you may need to make some adjustments, probably even it out a bit, i.e. maybe reduce the blueishness of the light slightly, but then we're in the fine-tuning phase.
« Last Edit: October 04, 2013, 07:21:22 AM by torger » Logged