Author Topic: Separating 'RAW' functions in a RAW converter  (Read 27656 times)
NikosR (Sr. Member, Posts: 622)
« Reply #40 on: May 05, 2008, 08:26:01 AM »

Quote
Nikos, either it is or it isn't, or it's the perspective from which one looks at it, or it depends on the context, so yes there can be some lack of clarity because there may be different equally valid ways of looking at it.

But whatever, I'm still having trouble seeing specifically what practical differences it would make to my workflow knowing which way to interpret this information. Sometimes in these discussions we bear witness to angels dancing on the heads of pins with zero significance to anything that matters except to code writers.

To use the example you helped me think of: why should I WB when I can colour balance? Or, why should I use EV compensation when I can use 'brightness'?

(I pose these as rhetorical questions and food for thought)
« Last Edit: May 05, 2008, 08:26:22 AM by NikosR »

Nikos
bjanes (Sr. Member, Posts: 2794)
« Reply #41 on: May 05, 2008, 09:07:15 AM »

Quote
Page 4 of Fraser/Schewe "Real World Camera Raw.":

<No matter what the filter arrangement, the raw file simply records the luminance values for each pixel, so the raw file is a grayscale image. It contains color information... so raw converters know whether a given pixel... represents red, green, or blue... but it doesn't contain anything humans can interpret as color.>

<The raw converter interpolates the missing color information for each pixel from its neighbors, a process called demosaicing...>

An RGB image in TIFF format is gray scale by the same reasoning: it is three gray scale channels of luminance values, and nothing in any one of them would be perceived as color by humans. However, together the channels contain tristimulus values for red, green, and blue, and these are perceived in full color when properly decoded by software. In the raw file the color information is in mosaic form; in the TIFF image it is present in three separate channels.
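
Bill's distinction can be sketched in a few lines of numpy. This is a toy illustration only (the 4×4 values and the RGGB layout are invented, not taken from any real camera):

```python
import numpy as np

# Toy 4x4 sensor readout: one luminance value per photosite.
raw = np.array([
    [10, 20, 12, 22],
    [30, 40, 32, 42],
    [11, 21, 13, 23],
    [31, 41, 33, 43],
], dtype=np.uint16)

# Which color each photosite "saw" is given by the CFA layout (RGGB),
# not by anything stored in the value itself.
cfa = np.array([
    ['R', 'G', 'R', 'G'],
    ['G', 'B', 'G', 'B'],
    ['R', 'G', 'R', 'G'],
    ['G', 'B', 'G', 'B'],
])

# A rendered TIFF instead stores three full-resolution grayscale planes.
# Each raw value lands in exactly one plane; the two missing values at
# every site are what demosaicing must interpolate from neighbors.
tiff_like = np.zeros((3, 4, 4), dtype=np.uint16)
for ch, name in enumerate('RGB'):
    tiff_like[ch][cfa == name] = raw[cfa == name]

print(raw.size)        # 16 measured values in the mosaic
print(tiff_like.size)  # 48 slots in the three-channel image
```

So both are "grayscale numbers"; the difference is one value per site with an implied color versus three explicit values per site.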

Bill
Mark D Segal (Contributor, Sr. Member, Posts: 6944)
« Reply #42 on: May 05, 2008, 09:38:15 AM »

Quote
To use the example you helped me think of: why should I WB when I can colour balance? Or, why should I use EV compensation when I can use 'brightness'?

(I pose these as rhetorical questions and food for thought)

Nikos, rhetorical or not, you've raised a couple of interesting examples, so I'll respond to them. Re the White Balance: find one of your images which seriously needs to be "white-balanced". Do it in Camera Raw using the white balance tools in the first section of the Basic tab, render the image and save it as an RGB PSD or TIFF. Then go back to the raw file, undo the white balance you just created there, and render the file "non-balanced" into Photoshop. Then use whatever techniques you wish in Photoshop for correcting the colour balance. Compare the results on two dimensions: (1) perceived accuracy of rendition across the colour gamut, and (2) ease of workflow. I've tried this a number of times and Camera Raw always wins.

As for "EV compensation", there is a contradiction in terms here and a technical matter. Firstly, I assume we are talking about the stage of scene capture, where EV is Equivalent Value - it trades off shutter speed against aperture to maintain the equivalent exposure - so there is no such thing as "EV compensation". We apply compensation at capture time to change the exposure, altering the histogram. The technical fact is that we always want the best possible exposure at capture, and to rely less on Brightness or Exposure in either Camera Raw or Photoshop after the fact. The most usual example is the argument made for Expose To The Right (ETTR), also discussed extensively in these forums, and a correct recommendation. Under-exposures boosted in post-capture processing will usually display more noise than proper ETTR exposures which don't require such boosting.

In both your examples, doing the right thing at the right time is important, hence I believe the major reason for the interest underlying this discussion, and in both cases we find our answers with simple experiments and observation.

Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....." http://www.luminous-landscape.com/reviews/film/scanning_workflows_with_silverfast_8.shtml
NikosR (Sr. Member, Posts: 622)
« Reply #43 on: May 05, 2008, 09:40:53 AM »

Quote
Nikos, rhetorical or not, you've raised a couple of interesting examples, so I'll respond to them. [...]

I believe you have come very close to answering your own question.
Mark D Segal (Contributor, Sr. Member, Posts: 6944)
« Reply #44 on: May 05, 2008, 09:56:36 AM »

Quote
An RGB image in TIFF format is gray scale by the same reasoning. [...]

Bill

Fine, but what have we bought ourselves? Does this mean it doesn't matter whether one adjusts luminosity and colour in ACR or in PS? Isn't there more to it? A different matter, but one I believe also relevant to a discussion of what is best processed where: the raw file is linear, while the rendered file is a gamma-encoded three-channel construct. Wouldn't this create differences in tonal response which influence what element is best adjusted at which stage in the workflow?
NikosR (Sr. Member, Posts: 622)
« Reply #45 on: May 05, 2008, 09:58:44 AM »

Quote
Fine, but what have we bought ourselves? [...]

Maybe.
Who knows?

Now you have finally joined me in my premise.
Mark D Segal (Contributor, Sr. Member, Posts: 6944)
« Reply #46 on: May 05, 2008, 10:08:32 AM »

Quote
Maybe.
Who knows?

Now you have finally joined me in my premise.

Well, if we've come full circle that's fine, but I'm not sure what your premise was. Your original post asked whether any readers are frustrated by the state of documentation on raw conversion. I've indicated that I'm not, because I've found that the best results come from good ETTR technique at time of capture and from maximizing the feasible use of the raw converter for adjusting the image before rendering it into Photoshop. If all that works best for us, we have an optimal workflow without knowing all the ingredients of the secret sauce in the raw converter. So, re-answering your original question: no, I'm not frustrated - not by that matter, anyhow. Amen.
Panopeeper (Sr. Member, Posts: 1805)
« Reply #47 on: May 05, 2008, 10:10:32 AM »

Quote
All you would have to do to white balance the original raw data would be to multiply each pixel of the raw file by its RGB multiplier. No demosaicing would be necessary

This is how Rawnalyze does it, because Rawnalyze does not perform any de-mosaicing (Jeff Schewe tried to call the way Rawnalyze creates the composite color a "demosaicing", but that's nonsense).

However, the camera's color space is not orthogonal: every (or at least most) pixel value carries some of the other two color components. Multiplying, for example, the "red" raw pixel values implicitly multiplies the green and blue components of those pixels as well.
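
Gabor's point about non-orthogonality can be sketched like this. The sensitivity coefficients below are invented for illustration; real cameras' spectral responses differ:

```python
import numpy as np

# Toy sensor with overlapping spectral sensitivities: each "channel"
# also responds a little to the other two primaries.
sensor = np.array([
    [1.00, 0.20, 0.10],   # "red" photosite response to scene R, G, B
    [0.15, 1.00, 0.20],   # "green" photosite
    [0.05, 0.25, 1.00],   # "blue" photosite
])

scene = np.array([0.4, 0.6, 0.3])   # scene light in a reference RGB
raw = sensor @ scene                # what the three photosites record

# Scaling the "red" raw value scales everything that photosite saw --
# including the green and blue light that leaked through the red filter.
raw_scaled = raw.copy()
raw_scaled[0] *= 2.0
print(raw[0], raw_scaled[0])
```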

Quote
why should I WB when I can colour balance?

WB is a qualitatively different operation: after WB the greys remain grey, whatever you do with the colour adjustments.

Quote
why should I use EV compensation when I can use 'brightness'?

The so-called EV compensation in ACR (the name is a misnomer) is a linear operation, at least by my observation. The "brightness" adjustment is non-linear.

Gabor
bjanes (Sr. Member, Posts: 2794)
« Reply #48 on: May 05, 2008, 10:15:37 AM »

Quote
Fine, but what have we bought ourselves? Does this mean it doesn't matter whether one adjusts luminosity and colour in ACR or PS? Isn't there more to it? Different matter, but I believe also relevant to a discussion of what is best processed where, the raw file is linear, and the rendered file a gamma-encoded three channel construct. Wouldn't this create differences in tonal response which influence what element is best adjusted at which stage in the workflow?
[a href=\"index.php?act=findpost&pid=193577\"][{POST_SNAPBACK}][/a]

The matter of color information in raw files is more than semantics; there are practical implications. For example, one can white balance undemosaiced data, since one does not need to wait for color to appear during demosaicing. Under the Schewe-Fraser concept this would not be possible, as you stated previously in this thread.

Similarly, Schewe maintains that raw files lack a color space, in part because they are gray scale and lack color, despite assertions by Thomas Knoll and Chris Murphy to the contrary. ACR and other raw converters convert from the camera space to the internal working space (XYZ or linear ProPhoto RGB) using a 3 by 3 matrix. Such matrices are used to convert from one color space to another; how would this be possible if the camera lacked a color space? Thomas Knoll has stated that the question of demosaicing is a "red herring".

Your comment about what is done where is most appropriate. For example, white balance is much easier to accomplish in a linear space, since one can simply use RGB multipliers; in a gamma 2.2 space the process would be nonlinear. Lightroom and the newer versions of ACR can white balance JPEGs; I think they first apply a reverse gamma curve to convert the image back to linear.
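
Bill's two points can be sketched together. The multipliers and the plain 2.2 gamma below are illustrative assumptions, not Adobe's actual numbers:

```python
import numpy as np

# Hypothetical white-balance multipliers (scene- and camera-specific;
# invented here for illustration).
wb = {'R': 2.0, 'G': 1.0, 'B': 1.5}

# Balancing the *undemosaiced* mosaic: scale each photosite by the
# multiplier for its CFA color. No demosaicing is required.
raw = np.array([[100.0, 200.0],
                [200.0,  80.0]])    # one RGGB quad
cfa = np.array([['R', 'G'],
                ['G', 'B']])
balanced = raw.copy()
for name, mult in wb.items():
    balanced[cfa == name] *= mult

# Balancing an already gamma-encoded image: linearize first, scale,
# then re-encode (a plain 2.2 gamma is assumed for illustration).
def wb_gamma_encoded(v, mult, gamma=2.2):
    linear = (v / 255.0) ** gamma                # back to linear light
    linear = np.clip(linear * mult, 0.0, 1.0)    # linear WB scaling
    return 255.0 * linear ** (1.0 / gamma)       # re-encode

print(balanced)  # [[200. 200.] [200. 120.]]
```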

Bill
Mark D Segal (Contributor, Sr. Member, Posts: 6944)
« Reply #49 on: May 05, 2008, 11:44:56 AM »

Quote
The matter of color information in raw files is more than semantics and there are practical implications. [...]

Bill

Bill, yes I agree it is more than semantics and there are practical implications. We've just been bantering about whether we find them out more practically from knowing the code (difficult for obvious reasons) or by experiment. But I don't understand how one can white balance non-demosaiced data. This eludes me.

No doubt Jeff can speak for himself in response to your second paragraph, but can you point us to where Thomas and Chris have said as you indicated here? I'd like to see the context.

Your rejoinder on the relevance of linearity to white balancing - agreed - much easier to get this right in ACR than in PS.
madmanchan (Sr. Member, Posts: 2108)
« Reply #50 on: May 05, 2008, 12:47:57 PM »

In general you want certain operations in a raw converter to be done in a linear space. These certainly include white balance and exposure compensation (which is just a linear scaling of the data). Other things are more debatable, but may be convenient to have while you're in the raw conversion interface anyway.

bjanes, you correctly describe the matrix operations used to obtain XYZ coordinates from camera RGB (or GMCY) coordinates. You can think of the camera RGB/GMCY space as a color space but it's not colorimetrically defined. The matrices provide the colorimetric interpretation.
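
The matrix step bjanes and madmanchan describe looks like this in outline. The coefficients below are invented; real converters use per-camera matrices derived from profiling the sensor:

```python
import numpy as np

# A 3x3 matrix gives camera RGB a colorimetric meaning by mapping it
# into XYZ. These coefficients are made up for illustration only.
cam_to_xyz = np.array([
    [0.41, 0.36, 0.18],
    [0.21, 0.72, 0.07],
    [0.02, 0.12, 0.95],
])

cam_rgb = np.array([0.5, 0.4, 0.1])  # one demosaiced, white-balanced pixel
xyz = cam_to_xyz @ cam_rgb           # the colorimetric interpretation
print(xyz)
```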

NikosR, your recent questions seem to be more about understanding how to use CR's controls than about needing to know the underlying details of how CR works. For example, with regard to your question about Exposure vs. Brightness, Panopeeper covered it: the former is linear and hence preserves linear tonal relationships, but can cause clipping; the latter is a non-linear tone-curve adjustment (which therefore doesn't preserve linear tonal relationships) but doesn't clip. Two different ways to adjust tonality, each with pros and cons.
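
The Exposure-versus-Brightness distinction can be shown with toy curves. The gamma-style "Brightness" curve below is an assumption for illustration; ACR's actual curve is not published:

```python
import numpy as np

vals = np.array([0.1, 0.4, 0.8])   # linear values, 1.0 = clipping point

# "Exposure"-style: a linear gain (+1 stop). Preserves the ratios
# between unclipped tones, but can push values into clipping.
exposure = np.clip(vals * 2.0, 0.0, 1.0)

# "Brightness"-style: a nonlinear curve that lifts midtones while
# mapping [0, 1] back onto [0, 1], so in-range values never clip.
brightness = vals ** (1.0 / 1.8)

print(exposure)    # the 0.8 input has clipped to 1.0
print(brightness)  # all three stay below 1.0
```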

bjanes (Sr. Member, Posts: 2794)
« Reply #51 on: May 05, 2008, 01:44:19 PM »

Quote
[...] can you point us to where Thomas and Chris have said as you indicated here? I'd like to see the context. [...]

Knoll and Murphy Responses: http://luminous-landscape.com/forum/index.php?showtopic=22471&st=0&p=168050&#entry168050
Mark D Segal (Contributor, Sr. Member, Posts: 6944)
« Reply #52 on: May 05, 2008, 02:28:12 PM »

OK Bill, thanks - I remember that thread now - probably one of the more torturous ones in this Forum - and good to see the usual suspects haven't given up yet!
Schewe (Sr. Member, Posts: 5479)
« Reply #53 on: May 05, 2008, 02:46:30 PM »

Quote
OK Bill, thanks - I remember that thread now - probably one of the more torturous ones in this Forum - and good to see the usual suspects haven't given-up yet!


Considering that that particular thread resulted in my choosing to ignore bjanes as a LL forum user, I won't be responding to anything he posts either directly (since I don't see his posts) or indirectly because somebody else quotes him. I'm done talking to that individual...
bjanes (Sr. Member, Posts: 2794)
« Reply #54 on: May 05, 2008, 03:17:30 PM »

Quote
Considering that that particular thread resulted in my choosing to ignore bjanes as a LL forum user, I won't be responding to anything he posts either directly (since I don't see his posts) or indirectly because somebody else quotes him. I'm done talking to that individual...

I guess there are worse things than being ignored by Schewe. It is difficult to disagree with him without drawing abuse and it is also a waste of time to do so, since he seldom retracts demonstrably incorrect positions. No more abuse from the motorcycle contingent, not a bad thing.

Bill
Panopeeper (Sr. Member, Posts: 1805)
« Reply #55 on: May 05, 2008, 03:37:38 PM »

Quote
The latter is a non-linear tone curve adjustment (and hence doesn't preserve linear tonal relationships) which doesn't clip

IMO this needs a bit of explanation (not for madmanchan, who certainly knows this).

"Brightness" does not cause clipping the way the "exposure" adjustment does, i.e. by pushing the linear values out of range. However, it does cause the *RGB* values to grow up to 255 (depending on the image and other settings), which triggers the clipping indication if turned on.

My point here is that the user cannot see the difference between clipping from different sources:

1. raw pixel saturation

2. induced clipping of the linear values (due to WB, saturation, contrast, exposure adjustment) and

3. reaching the top RGB value in the mapping.
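
Gabor's three clipping sources can be sketched as a toy classifier. The 14-bit maximum and the tone curve below are invented for illustration:

```python
RAW_MAX = 16383   # assumed 14-bit sensor saturation point

def classify_clipping(raw_val, wb_gain, tone_curve):
    """Report which of the three sources clipped a pixel (toy model)."""
    if raw_val >= RAW_MAX:
        return "1: raw pixel saturated at capture"
    linear = (raw_val / RAW_MAX) * wb_gain
    if linear >= 1.0:
        return "2: clipped by linear adjustments (WB/exposure)"
    if round(255 * tone_curve(linear)) >= 255:
        return "3: hit 255 only in the output mapping"
    return "not clipped"

# A toy tone curve that boosts highlights slightly.
curve = lambda x: min(1.0, 1.1 * x ** (1 / 2.2))

print(classify_clipping(16383, 1.0, curve))   # source 1
print(classify_clipping(9000, 2.0, curve))    # source 2
print(classify_clipping(15000, 1.0, curve))   # source 3
print(classify_clipping(5000, 1.0, curve))    # not clipped
```

An on-screen clipping warning shows only the final result, so all three cases light up identically.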
Panopeeper (Sr. Member, Posts: 1805)
« Reply #56 on: May 05, 2008, 04:51:14 PM »

Quote
It would be useful to have 2 highlight clipping indicators:
1. (red) What we have now (presumably) - hard-clipped pixels per raw
2. (orange) "Soft-clipped" pixels that are clipped as a result of current settings

This is not as simple as it sounds, for the raw "color" channels do not translate into RGB red, green and blue. A raw pixel does not even translate into a single RGB pixel.

Quote
I would find this a useful tool to help know whether to mess around trying to recover "blown" highlights

I may be wrong, but my understanding is that the "true recovery" - i.e. estimating the clipped pixels from their context - does not depend on the "recovery" slider; it always happens.

The "recovery" slider reduces the highlights whether or not there was raw clipping.
Mark D Segal (Contributor, Sr. Member, Posts: 6944)
« Reply #57 on: May 05, 2008, 05:47:31 PM »

Quote
I think this is an interesting point.  It would be useful to have 2 highlight clipping indicators:
1. (red) What we have now (presumably) - hard-clipped pixels per raw
2. (orange) "Soft-clipped" pixels that are clipped as a result of current settings.

I would find this a useful tool to help know whether to mess around trying to recover "blown" highlights.

GB, given their limited time and resources between program updates, I would not recommend this as a high-priority item. The distinction you're making between "hard" and "soft" clipping seems to me rather elusive. I think it boils down to this: if all three channels are clipped, no recovery is possible, because there simply isn't any information on which to build. If at least one channel has some information, Recovery can work, as explained on page 38 of Fraser/Schewe. "Messing around" in this context simply means moving the slider to see what it does - or doesn't.
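
Mark's "at least one channel survives" condition is exactly what a naive recovery would rely on. Here is a sketch of the idea only: ACR's actual Recovery algorithm is not published, and the ratio-from-a-neighbor scheme below is a made-up stand-in:

```python
def naive_recover(pixel, neighbor, clip=1.0):
    """Estimate clipped channels of `pixel` from an unclipped neighbor,
    using the ratio in the channel(s) that survived. If all three
    channels are clipped, there is no information to build on."""
    clipped = [v >= clip for v in pixel]
    if all(clipped):
        return list(pixel)            # nothing left to work from
    if not any(clipped):
        return list(pixel)            # nothing to do
    # Scale factor derived from the surviving channel(s).
    ratios = [p / n for p, n, c in zip(pixel, neighbor, clipped) if not c]
    scale = sum(ratios) / len(ratios)
    return [n * scale if c else p
            for p, n, c in zip(pixel, neighbor, clipped)]

# Red and blue clipped; green survived and anchors the estimate.
print(naive_recover([1.0, 0.9, 1.0], [0.8, 0.6, 0.7]))
# The estimates land above 1.0: guessed "true" scene values, to be
# compressed back into range by the tone mapping.
```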
bernie west (Full Member, Posts: 132)
« Reply #58 on: May 05, 2008, 08:40:03 PM »

Quote
I may be wrong, but my understanding is that the "true recovery" - i.e. estimating the clipped pixels from their context - does not depend on the "recovery" slider; it always happens.

The "recovery" slider reduces the highlights whether or not there was raw clipping.

Now wouldn't it be nice if we could have some documentation from Adobe explaining what is going on here?  Surely this isn't a commercial confidence issue?
Mark D Segal (Contributor, Sr. Member, Posts: 6944)
« Reply #59 on: May 05, 2008, 09:12:39 PM »

Quote
[...] Surely this isn't a commercial confidence issue?

What makes you think it isn't? Pretty neat feature when you see it working, isn't it?