Author Topic: Expose to right, it is as simple as  (Read 35127 times)
crames
Full Member
Posts: 210
« Reply #200 on: September 05, 2011, 12:03:49 PM »

Yes, that’s a quite academic definition of "color gamut",
which also seems to assume that a camera is supposed to react to any light source, e.g. a spectrally pure laser beam, even though such a stimulus may lie outside of what we draw as a camera color space.

I could imagine that a camera with "weak" color filters on the Bayer array would have trouble distinguishing a spectrally pure laser beam from a less pure color. (Instead of calling them color spaces, I have read camera spaces described as "spectral spaces", since their responses differ from the eye's.)

Quote
Anyway, the final conclusion that it doesn’t matter here for ETTR sounds logical,
and Guillermo is of course right when saying that any +Exposure can be undone by linear down-scaling of the (unclipped) Raw RGB data.

+1
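The quoted point, that an unclipped +Exposure push is exactly reversible, amounts to a single linear scale factor on the raw data. A minimal numpy sketch with synthetic values (not data from any real camera):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear scene values, scaled so a +1 stop capture still
# fits under a 12-bit white level (illustrative numbers only).
scene = rng.uniform(0.0, 1800.0, size=(4, 4, 3))
WHITE = 4095.0

capture_base = scene.copy()                    # "normal" exposure
capture_ettr = np.minimum(scene * 2.0, WHITE)  # +1 stop, ETTR

assert capture_ettr.max() < WHITE   # nothing clipped in this example

# Undoing the push is one linear scale factor on the raw data.
recovered = capture_ettr / 2.0
print(np.allclose(recovered, capture_base))    # True: +1 stop undone exactly
```

The only caveat is the clamp at the white level: once a photosite clips, no down-scaling recovers the lost data, which is why the "unclipped" condition matters.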

Quote
Could still be interesting to hear the "defense" of Andrey Tverdokhleb and Iliah Borg
who brought up this thesis about "ETTR is very harmful for colors - midpoint is the most colorful place in A900 gamut" as quoted here.

In that quote Andrey says:

Quote
Veiling glare from lens and sensor are the culprits here.

Interesting that they don't blame the shape of the RGB color space. I just don't see how exposure can affect the proportion of veiling glare in the image, other than by changing the aperture, which of course changes other things too. If you increase the exposure time without clipping (what's that called again?), how could that change the percentage of flare/glare in the image? Hopefully Andrey or Iliah can elaborate.
Cliff

PierreVandevenne
Sr. Member
Posts: 510
« Reply #201 on: September 05, 2011, 03:52:35 PM »

Could still be interesting to hear the "defense" of Andrey Tverdokhleb and Iliah Borg
who brought up this thesis about "ETTR is very harmful for colors - midpoint is the most colorful place in A900 gamut" as quoted here.

ETTR is about getting the optimal exposure from the sensor, on cameras that were calibrated to actually ETTL (from the point of view of optimal sensor exposure). Cameras were exposing to the left essentially for two reasons:
- sensors and their back-end electronics had a somewhat limited DR compared to what the human eye expected to see;
- engineers decided it was better to protect against over-exposure (where data is simply lost) than to fully exploit low-light situations.
Sensors and their back-end pipeline have improved, drastically in some cases, and that has extended the DR margins on both sides to the point that, in some recent cameras, you just don't care.
To summarize:

- there are situations where ETTR is very useful (typically, Canon 5D MK II)
- there are situations where ETTR is either almost useless or harmful: the cameras that don't ETTL from the sensor point of view in the first place.

Don't forget that sensors that are very different in terms of raw capabilities are normalized for the average photographer's comfort. ISO 100 is an important reference point for photographers and should yield, give or take a bit, a similar result on all cameras. But once you've got enough DR, for example because you handle read noise better than your competitor, you can adjust gain to put it where you want in your optimal sensor linear range.

ETTR will become more and more useless as sensors improve across all brands. No need to try to find esoteric arguments against it.
hjulenissen
Sr. Member
Posts: 1683
« Reply #202 on: September 05, 2011, 04:09:26 PM »

ETTR will become more and more useless as sensors improve across all brands. No need to try to find esoteric arguments against it.
The same could be said about resolution; yet as we get ever more resolution, many still crave more.

As the public taste for HDR grows (and good-quality tonemapping becomes available), the possibility of easily doing "HDR" for demanding scenes, even when there is movement and the tripod is at home, could be important.

No matter how good image capture is, there will always be someone wanting even more accurate capture, and ETTR seems to be something to be aware of if you want to optimize "SNR".

-h
Guillermo Luijk
Sr. Member
Posts: 1291
« Reply #203 on: September 05, 2011, 05:04:50 PM »

The same could be said about resolution; yet as we get ever more resolution, many still crave more.

As the public taste for HDR grows (and good-quality tonemapping becomes available), the possibility of easily doing "HDR" for demanding scenes, even when there is movement and the tripod is at home, could be important.

No matter how good image capture is, there will always be someone wanting even more accurate capture, and ETTR seems to be something to be aware of if you want to optimize "SNR".

I don't agree here. There will be a day when a user will say 'STOP'. You don't bracket an LDR scene, do you? Even if you can afford to. When a time comes in which HDR scenes can be captured in a single shot, users won't bother to bracket (bye bye to misalignment issues and ghosting artifacts, fewer resources needed, handheld HDR).

The same applies to ETTR (for the same reason: sensor noise lowering). If you can afford to underexpose and still get a high-quality image, you won't bother to ETTR. In fact you'll probably try not to ETTR, in order to make sure highlights are preserved, without caring too much about exposure. This will allow you to shoot faster and concentrate on other aspects like composition.

This image was underexposed by 6 stops on a Pentax K5 (the 6 upper stops of the histogram contain no information):



(left camera JPEG, right processed RAW)


It's an extreme non-ETTR, but the resulting image was still quite OK. If the underexposure had been just 3 stops (still a huge gap from perfect ETTR), the IQ would in practice have been the same as with ETTR.

Regarding resolution, I am happy Olympus stayed at 12Mpx (unfortunately they'll probably go to 16Mpx for marketing reasons). Not only do I not need more; in this case I don't want more, because more Mpx means a lot of disk space and processing power with nearly no quality improvement for my pictures. Others will stop at 24Mpx, but sooner or later the pixel-count race will come to an end for every user. Digital photography is a young technology and the race is still in progress, that's all.

Regards
« Last Edit: September 06, 2011, 02:53:18 AM by Guillermo Luijk »

jrsforums
Sr. Member
Posts: 744
« Reply #204 on: September 06, 2011, 09:32:31 AM »


The same applies to ETTR (for the same reason: sensor noise lowering). If you can afford to underexpose and still get a high-quality image, you won't bother to ETTR. In fact you'll probably try not to ETTR, in order to make sure highlights are preserved, without caring too much about exposure. This will allow you to shoot faster and concentrate on other aspects like composition.


As I read Emil's guidance, I thought he said it was always an advantage to ETTR, as it would maximize the exposure.  The question was whether there was any advantage to using ISO to achieve this with some of the newer CMOS sensors (and the CCD sensors).

I agree that if one can "still have a high quality image", you may want to concentrate on other aspects.  However, that does not diminish the ability of ETTR, properly applied, to improve image quality.

John
« Last Edit: September 06, 2011, 10:25:49 AM by jrsforums »

John

hjulenissen
Sr. Member
Posts: 1683
« Reply #205 on: September 06, 2011, 01:13:56 PM »

I don't agree here. There will be a day when a user will say 'STOP'. You don't bracket an LDR scene, do you? Even if you can afford to. When a time comes in which HDR scenes can be captured in a single shot, users won't bother to bracket (bye bye to misalignment issues and ghosting artifacts, fewer resources needed, handheld HDR).
No disagreement here; we both seem to argue for "single-shot" HDR: camera and exposure techniques that, when combined, record a large DR in a single shot.

But how much sensor DR is enough? How small does the noise contribution have to be before you and everyone else think "nah, good enough now"? And as a consequence, when are sensors good enough that you can afford to be sloppy with exposure (or choose to use insane margins against clipping)? My guess is "never". Even a very low-DR scene could benefit from a low noise level.

I see ETTR more as a question of "how, and how much, should I ETTR this scene/sensor", not "if the upper 5% quantile of the histogram crosses the 95% clipping limit of the sensor, then and only then do you have ETTR". For SLR auto-exposure, targeting an 18% grey, or N stops below the clipping point, or whatever, may be a sane general mechanism. For digital compacts (where exposure can be based on image-sensor readouts), perhaps something different. For manual exposure where the user has artistic intentions with the scene, some other choices may fit.

In the end, what matters is the exposure time, aperture and in-camera ISO, and how they affect the raw file and the possibilities when developing that raw file. What I gather from these discussions is that there is (often, but not always) a free but small lunch to be had, IQ-wise, by increasing either aperture or exposure time if you can avoid clipping with some certainty. Sometimes this is a trained skill where you know that +1 stop of EC is OK; other times you may consult the in-camera histogram (after reverting as much as possible of the in-camera JPEG processing); other times it may result from simply doing brute-force bracketing.
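One way to make "how much should I ETTR this scene" concrete is to read a push off a raw-domain histogram. A hypothetical rule of thumb in numpy (the percentile and the safety margin are arbitrary illustrative choices, not anything proposed in the thread):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic linear raw values for a scene (12-bit, white level 4095).
WHITE = 4095.0
raw = rng.uniform(0.0, 900.0, size=100_000)

# One possible rule: push exposure until the 99.9th percentile sits
# just under clipping, keeping a small safety margin in reserve.
bright = np.percentile(raw, 99.9)
headroom_stops = np.log2(WHITE / bright)
safety = 0.3   # stops kept in reserve (arbitrary choice)

push = max(headroom_stops - safety, 0.0)
print(f"suggested exposure push: +{push:.1f} stops")
```

For these synthetic values the rule suggests roughly a +2 stop push; with a different percentile or margin it becomes more or less conservative, which is exactly the "how much" judgment being discussed.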

-h
« Last Edit: September 06, 2011, 01:15:34 PM by hjulenissen »

Guillermo Luijk
Sr. Member
Posts: 1291
« Reply #206 on: September 06, 2011, 02:44:16 PM »

that does not diminish the ability of ETTR, properly applied, to improve image quality.

If the improvement is not visible, the improvement doesn't exist in practice.

If a computer can perform your daily task 10 times faster than another, you might think it's a good idea to pick the fast one. But if the slow computer performs the task in a millisecond and the fast one in 1/10 of a millisecond, spending a single extra euro on the fast computer is a waste of money, because you'll never enjoy the advantage in practice.

Turning to photography: if your sensor with ETTR has an SNR of 60dB, and without ETTR an SNR of 40dB, you are wasting your time putting any effort into ETTR, because to your eyes both exposures will look equally good.

This is what I mean by ETTR becoming less and less useful (as will bracketing for HDR). ETTR will always reduce noise, but at some point that reduction will not be worth the effort of achieving ETTR. When? That will be decided progressively by each user; it's a matter of balance: the effort and disadvantages of ETTR versus less visible noise.
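For what it's worth, the dB figures can be tied to a simple photon-noise model (illustrative numbers only, not measurements from any camera):

```python
import math

def snr_db(signal, noise_sigma):
    """SNR in dB for a mean signal and a noise standard deviation."""
    return 20.0 * math.log10(signal / noise_sigma)

# Hypothetical mid-grey patch, photon-noise limited: +2 stops of
# exposure quadruples the signal but only doubles the shot noise.
signal_lo, signal_hi = 1000.0, 4000.0
print(round(snr_db(signal_lo, math.sqrt(signal_lo)), 1))  # 30.0 dB
print(round(snr_db(signal_hi, math.sqrt(signal_hi)), 1))  # 36.0 dB
# A guaranteed ~6 dB gain; whether it is *visible* is the real question.
```

The ~6 dB (one bit) improvement is real in every case, which is Guillermo's point: the gain is always there, but past some SNR it stops being worth the effort.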


when are the sensors good enough that you can afford to be sloppy with exposure (or choose to use insane margins against clipping)? My guess is "never"

Hey, I wonder whether you had a look at the 6-stops-underexposed example.

Regards
« Last Edit: September 06, 2011, 02:48:29 PM by Guillermo Luijk »

bjanes
Sr. Member
Posts: 2823
« Reply #207 on: September 07, 2011, 09:25:23 AM »

Yes, that’s a quite academic definition of "color gamut",
which also seems to assume that a camera is supposed to react to any light source, e.g. a spectrally pure laser beam, even though such a stimulus may lie outside of what we draw as a camera color space.

The definition of a camera gamut may seem academic, but examination of the spectral response of existing cameras shows that they do respond to wavelengths of 4,000 to 7,000 Å (400 to 700 nm), pretty much the range of human vision. The graph shown below is from Christian Buil's web site. This is consistent with the RIT statement that a digital camera will give a response to whatever is put in front of it. Perhaps you could say that the gamut of these cameras is 4,000 to 7,000 Å.

You referenced an earlier thread in which I was involved, "Does a Raw File Have a Color Space?". That depends on the definition of what constitutes a color space. The camera does have RGB sensors, and we use matrix math to convert from the camera "space" to CIE XYZ or some other well-defined space. CIE XYZ gives exact results, but the camera "space" does not, because of metameric error. The matrix coefficients used to convert to XYZ are merely a best-fit approximation. In this regard, you could say that the camera space is not a fully defined color space. These considerations are well covered in a paper by Doug Kerr in which he discusses the Luther-Ives conditions, metameric error and other technicalities of digital sensors.
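The "best fit approximation" above can be sketched numerically: fit a 3x3 matrix from camera RGB to XYZ by least squares and look at the leftover error. Everything below is fabricated (an invented matrix and synthetic patches), purely to show why some residual always remains when the mapping isn't exactly linear:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training data: camera RGB for 24 patches (in practice
# these would come from shooting a chart such as a ColorChecker).
cam_rgb = rng.uniform(0.05, 1.0, size=(24, 3))

# Fabricate reference XYZ that is *almost* a linear map of camera RGB,
# plus a small nonlinear term standing in for metameric error.
M_true = np.array([[0.6, 0.3, 0.1],
                   [0.2, 0.7, 0.1],
                   [0.0, 0.1, 0.9]])
xyz = cam_rgb @ M_true.T + 0.01 * cam_rgb**2

# Best-fit 3x3 conversion matrix in the least-squares sense.
M_fit, _, *_ = np.linalg.lstsq(cam_rgb, xyz, rcond=None)
M_fit = M_fit.T

# The fit is close but not exact: the leftover error cannot be removed
# by any single matrix, which is the point about camera "spaces".
err = xyz - cam_rgb @ M_fit.T
print(np.abs(err).max())   # small, but not zero
```

With a camera that satisfied the Luther-Ives conditions the nonlinear term would vanish and the residual would be zero; a real sensor's residual is irreducible metameric error.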

Regards,

Bill
joofa
Sr. Member
Posts: 488
« Reply #208 on: September 07, 2011, 10:25:26 AM »

That depends on the definition of what constitutes a color space.

In colorimetry there is a well-defined notion of a linear color space:

(1) Specification of primaries.
(2) Specification of white point.

Let's not worry about other stuff such as gamma, LUTs, etc., as they are used for further encoding a color value, and are not inherently needed for the definition of a linear color space.

If you think that the above two are satisfied for a camera then one may be able to say that there is a camera color space.

Sincerely,

Joofa
Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins

ejmartin
Sr. Member
Posts: 575
« Reply #209 on: September 07, 2011, 11:52:43 AM »

In colorimetry there is a well-defined notion of a linear color space:

(1) Specification of primaries.
(2) Specification of white point.

Let's not worry about other stuff such as gamma, LUTs, etc., as they are used for further encoding a color value, and are not inherently needed for the definition of a linear color space.

If you think that the above two are satisfied for a camera then one may be able to say that there is a camera color space.

The issue is that, unless the Luther-Ives conditions are met (fancy way of saying that the spectral responses are linearly related to the CIE standard observer responses), then the camera primaries and XYZ define different three-dimensional linear subspaces of the much higher dimensional linear space of all possible spectral responses.  So yes a digital camera has intrinsically a linear space of spectral responses, but it is not related a priori to the spectral responses that define any of the color spaces used in colorimetry.  So if the term 'color space' is reserved to the CIE's definition of color, tristimulus, etc, then no, a camera typically does not have a color space.  If the term is meant more loosely, then one might say that it does have one.
emil

joofa
Sr. Member
Posts: 488
« Reply #210 on: September 07, 2011, 12:03:25 PM »

The issue is that, unless the Luther-Ives conditions are met (fancy way of saying that the spectral responses are linearly related to the CIE standard observer responses), then the camera primaries and XYZ define different three-dimensional linear subspaces of the much higher dimensional linear space of all possible spectral responses.  So yes a digital camera has intrinsically a linear space of spectral responses, but it is not related a priori to the spectral responses that define any of the color spaces used in colorimetry.  So if the term 'color space' is reserved to the CIE's definition of color, tristimulus, etc, then no, a camera typically does not have a color space.  If the term is meant more loosely, then one might say that it does have one.

Correct.  I agree with you.

Sincerely,

Joofa
Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins

digitaldog
Sr. Member
Posts: 9189
« Reply #211 on: September 07, 2011, 05:41:30 PM »

A camera can capture and encode, as unique values, colors that are imaginary to humans. They don't exist for us (are they colors?). There are also colors we can see but the camera can't capture; those colors are imaginary to it. So what is to be done with these captured values that humans can't see? They may get mapped to something, possibly something that another, rather different SPD is already mapped to. Two reds that appear the same to us but have different SPDs, the camera captures as different colors, so it encodes them with two different RGB values. A camera can capture all sorts of different primaries: two different primaries may be captured as the same values by a camera, and the same primary may be captured as two different values (if the SPDs of the primaries are different). If we had spectral sensitivities for the camera, that would make the job of mapping to XYZ better and easier, but we'd still have decisions to make about what to do with the colors the camera encodes that are imaginary to us.
Andrew Rodney
Author “Color Management for Photographers”
http://digitaldog.net/

bjanes
Sr. Member
Posts: 2823
« Reply #212 on: September 07, 2011, 09:57:18 PM »

A camera can capture and encode, as unique values, colors that are imaginary to humans. They don't exist for us (are they colors?). There are also colors we can see but the camera can't capture; those colors are imaginary to it. So what is to be done with these captured values that humans can't see? They may get mapped to something, possibly something that another, rather different SPD is already mapped to. Two reds that appear the same to us but have different SPDs, the camera captures as different colors, so it encodes them with two different RGB values. A camera can capture all sorts of different primaries: two different primaries may be captured as the same values by a camera, and the same primary may be captured as two different values (if the SPDs of the primaries are different). If we had spectral sensitivities for the camera, that would make the job of mapping to XYZ better and easier, but we'd still have decisions to make about what to do with the colors the camera encodes that are imaginary to us.

Good points, but many digital cameras have little response to wavelengths outside the range of human vision, roughly 400-700 nm. The spectral responses of the cameras studied by Christian Buil (referenced in my previous post in this thread) are roughly 400-700 nm. Most digital sensors have extended infrared sensitivity, and infrared cut-off filters are usually employed to limit the response to visible wavelengths. The lack of such a filter causes problems in color rendering, as evidenced by the Leica M8 (which lacked one); Leica had to supply a filter that fits in front of the lens. I haven't read about the effect of extended ultraviolet sensitivity. Perhaps someone can comment.

Regards,

Bill
hjulenissen
Sr. Member
Posts: 1683
« Reply #213 on: September 07, 2011, 10:52:22 PM »

Good points, but many digital cameras have little response to wavelengths outside the range of human vision, roughly 400-700 nm. The spectral responses of the cameras studied by Christian Buil (referenced in my previous post in this thread) are roughly 400-700 nm. Most digital sensors have extended infrared sensitivity, and infrared cut-off filters are usually employed to limit the response to visible wavelengths. The lack of such a filter causes problems in color rendering, as evidenced by the Leica M8 (which lacked one); Leica had to supply a filter that fits in front of the lens. I haven't read about the effect of extended ultraviolet sensitivity. Perhaps someone can comment.
If I point an IR remote at any camera, it will usually record the light, while if I point it into my eyes I cannot see anything.

So cameras have 3 sets of spectral bandpass filters that are similar, but not identical, to the standardized human response. And the response of my vision might be slightly different from yours. Did you know that something like 10% of all women have 4 sets of bandpass filters in their eyes? (Although it is unknown whether they are able to make use of that information, I don't doubt it when discussing clothes or interior design with them.)

Besides being academically interesting, how does this affect the relatively simple question of "how should I expose my digital camera sensor to maximize image quality"?

-h
madmanchan
Sr. Member
Posts: 2110
« Reply #214 on: September 08, 2011, 12:35:29 AM »

Besides being academically interesting, how does this affect the relatively simple question of "how should I expose my digital camera sensor to maximize image quality"?

Simply:  It doesn't.  

I'll say it again: ETTR has nothing to do with color.  It has everything to do with maximizing signal-to-noise, as permitted by shooting conditions.  This is independent of the spectral sensitivities of the cameras, potential UV/IR response, whether or not the filters are "colorimetric", etc.  Example: The Leica M8, Canon S90, Nikon D3, and Phase One P65+ have very different spectral sensitivities.  However, they have something in common: the more light you capture with them, the higher the signal-to-noise, and the cleaner the results will be.  So, whichever of these cameras you may have, the advice remains the same: when the situation allows, capture as much light as possible while avoiding clipping in the areas of the image that you care about.

Mapping the raw color values into standard working spaces (or display & printer spaces) is a matter of gamut mapping as implemented by the raw conversion software.  As long as you didn't clip something in the raw data that you care about, then the information is all there and you can make use of it.
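Emil's core claim, that more captured light means higher SNR whatever the spectral sensitivities, can be checked empirically with a Poisson (shot-noise) simulation:

```python
import numpy as np

rng = np.random.default_rng(2)

def measured_snr(mean_photons, n=100_000):
    """Empirical SNR (mean/std) of a uniform patch under shot noise."""
    samples = rng.poisson(mean_photons, size=n).astype(float)
    return samples.mean() / samples.std()

# One stop more light gives ~sqrt(2) better SNR, regardless of what
# the CFA dyes pass: shot noise only cares about collected photons.
low, high = measured_snr(1000), measured_snr(2000)
print(round(high / low, 1))   # ~1.4
```

The ratio is independent of which channel or which camera the photons land in, which is why the ETTR advice is the same for the M8, S90, D3 and P65+ despite their very different spectral sensitivities.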
« Last Edit: September 08, 2011, 12:37:04 AM by madmanchan »

hjulenissen
Sr. Member
Posts: 1683
« Reply #215 on: September 08, 2011, 06:26:50 AM »

Mapping the raw color values into standard working spaces (or display & printer spaces) is a matter of gamut mapping as implemented by the raw conversion software.  As long as you didn't clip something in the raw data that you care about, then the information is all there and you can make use of it.
At least from an information-theoretical viewpoint.

If you use a raw developer that does not allow you to accurately change exposure (multiply the raw input data) without messing with the colors at the same time, then your options are to:
1. Change your raw developer
2. Live with whatever color change happens
3. Expose in-camera closer to how you would like the end exposure on paper/screen (i.e. not use ETTR)

-h
bjanes
Sr. Member
Posts: 2823
« Reply #216 on: September 08, 2011, 08:13:17 AM »

At least from an information-theoretical viewpoint.

If you use a raw developer that does not allow you to accurately change exposure (multiply the raw input data) without messing with the colors at the same time, then your options are to:
1. Change your raw developer
....

Another option is to learn how to properly use the raw converter that you do have. For example, see the post by Sandymc summing up his investigation, with Eric Chan, of the problems with hue twist in ACR. The point about the black slider set to zero is a bit confusing, since Eric had stated in another post that the blacks were set prior to the application of the 3D lookup table. What is the official recommendation?

Regards,

Bill
madmanchan
Sr. Member
Posts: 2110
« Reply #217 on: September 08, 2011, 08:48:26 AM »

I cannot speak for other raw converters.  But if you're using ACR or Lightroom, my personal (not official) recommendation is simply to use Exposure to compensate for whatever additional in-camera exposure you may have used.  For example, if you did ETTR by adding +1 stop of exposure at time of capture, try setting Exposure -1 in ACR/LR.  This will not result in hue shifts compared to if you hadn't used ETTR.
Peter_DL
Sr. Member
Posts: 421
« Reply #218 on: September 08, 2011, 11:00:43 AM »

But if you're using ACR or Lightroom, my personal (not official) recommendation is simply to use Exposure to compensate for whatever additional in-camera exposure you may have used.  For example, if you did ETTR by adding +1 stop of exposure at time of capture, try setting Exposure -1 in ACR/LR.  This will not result in hue shifts compared to if you hadn't used ETTR.

Wouldn't this require a profile with lightness-dependent hue/saturation corrections?

I'm referring to the first table, the one for accuracy, such as in particular the one from the Chart Wizard,
not any 3D Look table which may come later on, after the Exposure slider in the processing pipe.

Peter

--
dreed
Sr. Member
Posts: 1260
« Reply #219 on: September 08, 2011, 03:13:15 PM »

One of the "gripes" and "desires" is that the histogram isn't based on raw data (which would allow for clip detection), and that it should be...

From the news story by Nick Rains comes the following from his "interview" with Leica:

Quote
There is a rumour on the Internet Forums that the histogram on the S2 is raw based – is this true?

No, it is based on the JPEG preview. A histogram from raw data is not possible, as the image cannot be said to exist yet. A white balance needs to be chosen, and the red, blue and green channels need to be combined to give a meaningful histogram. So we always give you more than you think: if your histogram is good, then you will actually have more data in the raw file than you expect.
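Whatever one makes of Leica's answer, a per-channel clipping check on the raw mosaic needs no white balance or demosaic at all; here is a sketch on a synthetic RGGB mosaic (real cameras differ in black/white levels and CFA layout):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic 12-bit Bayer mosaic (RGGB pattern), white level 4095.
WHITE = 4095
mosaic = rng.integers(0, WHITE + 1, size=(8, 8))

def cfa_channels(m):
    """Split an RGGB mosaic into its four CFA planes."""
    return {"R":  m[0::2, 0::2], "G1": m[0::2, 1::2],
            "G2": m[1::2, 0::2], "B":  m[1::2, 1::2]}

# Per-channel histogram and clipped fraction, straight off the mosaic.
for name, plane in cfa_channels(mosaic).items():
    hist, _ = np.histogram(plane, bins=16, range=(0, WHITE + 1))
    clipped = np.mean(plane >= WHITE)
    print(f"{name}: top-bin count = {hist[-1]}, clipped fraction = {clipped:.3f}")
```

Each plane is examined before any white balance or channel combination, which is exactly the raw-domain clip detection the "gripe" asks for.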