Pages: « 1 2 [3] 4 5 »
Author Topic: Can you expose too far to the right even if not clipping?  (Read 17144 times)
hjulenissen
Sr. Member, Posts: 1678
« Reply #40 on: December 16, 2010, 04:25:56 AM »
Quote from: sandymc
Yes, completely sound theory.

But the point is, the mapping is not done linearly anymore - three years ago, conversion was linear for most raw converters. It's not anymore - LR and ACR have been non-linear ever since the second-generation camera profiles with 3D HueSatMap and Look tables that enabled "hue twists" came in. C1 doesn't publicly document its pipeline, but it's almost certainly the same.

Sandy
If camera response is linear, what is the point of nonlinear color correction in raw converters?

-h
NikoJorj
Sr. Member, Posts: 1063
« Reply #41 on: December 16, 2010, 05:56:22 AM »

Quote from: hjulenissen
If camera response is linear, what is the point of nonlinear color correction in raw converters?
I'd think it's there to mimic a perceptual bias of our vision...
BTW, some have also found such a (color) non-linearity in the processing of images taken with Highlight priority on, using Canon DPP (see this discussion in French).

Nicolas from Grenoble
A small gallery
thierrylegros396
Sr. Member, Posts: 652
« Reply #42 on: December 16, 2010, 07:33:38 AM »

The camera sensor and ADC are almost linear (small non-linearities are unavoidable).

But your vision is logarithmic - that's why gamma correction exists!

And yes, in the low lights and high lights there are tint shifts and gamut variations.

So converting RAW data to a JPEG file is not an easy task!

Niko, the article in French is very interesting.

Thierry
bjanes
Sr. Member, Posts: 2794
« Reply #43 on: December 16, 2010, 08:41:58 AM »

Quote from: thierrylegros396
The camera sensor and ADC are almost linear (small non-linearities are unavoidable). But your vision is logarithmic - that's why gamma correction exists!
That is a common misconception: accurate reproduction of a scene requires a 1:1 relationship between input and output. The application of gamma allows recording and processing in a perceptually uniform way and requires fewer bits than linear encoding. However, the gamma encoding is reversed for viewing, so that the output is linear. "Gamma correction" is somewhat of a misnomer - one should use the term gamma encoding.

This illustration from the Gamma FAQ makes the point, but it is geared to CRTs which automatically perform the inverse gamma function.
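bjanes' point - that gamma encoding is undone on output, leaving the end-to-end response linear - can be sketched in a few lines. A simple 2.2 power law is assumed here rather than the exact piecewise sRGB curve:

```python
# Gamma encoding is reversed on output, so the end-to-end transfer
# is linear even though the stored values are not.

def gamma_encode(linear, gamma=2.2):
    return linear ** (1.0 / gamma)

def gamma_decode(encoded, gamma=2.2):
    return encoded ** gamma

scene = 0.18                      # mid-grey, in linear light
stored = gamma_encode(scene)      # ~0.46: what the file actually holds
displayed = gamma_decode(stored)  # the display reverses the encoding

assert abs(displayed - scene) < 1e-12   # end-to-end response is 1:1
```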

« Last Edit: December 16, 2010, 08:45:56 AM by bjanes »

madmanchan
Sr. Member, Posts: 2108
« Reply #44 on: December 16, 2010, 12:47:43 PM »

Sensors are (mostly) linear, but human vision and subjective preferences are not.
hjulenissen
Sr. Member, Posts: 1678
« Reply #45 on: December 16, 2010, 01:35:31 PM »

Quote from: bjanes
That is a common misconception ... one should use the term gamma encoding.
I do believe that the end-to-end gamma is usually different from 1:1. Supposedly this compression simply looks good on low-dynamic-range displays.

-h
hjulenissen
Sr. Member, Posts: 1678
« Reply #46 on: December 16, 2010, 01:39:29 PM »

Quote from: thierrylegros396
The camera sensor and ADC are almost linear ... So converting RAW data to a JPEG file is not an easy task!

Gamma exists partly due to the native response of CRT displays, and partly because it maps noise/distortion in a perceptually uniform manner.

And I don't see the relevance to my question. I was wondering why one would do complex nonlinear stuff in the color-correction part, where the native camera response is transformed into some reference color representation (e.g. XYZ). Gamma, JPEG, etc. are done later.
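For reference, the linear color correction hjulenissen is describing (and which dcraw implements) is just a 3x3 matrix applied to camera-native linear RGB. A sketch with made-up matrix values - not any real camera's calibration - showing that a matrix-only pipeline scales with exposure and so cannot twist hues:

```python
# Linear colour correction: camera-native linear RGB -> XYZ via a
# 3x3 matrix. Matrix values below are invented for illustration.

CAM_TO_XYZ = [
    [0.41, 0.36, 0.18],
    [0.21, 0.72, 0.07],
    [0.02, 0.12, 0.95],
]

def cam_to_xyz(rgb):
    return [sum(row[i] * rgb[i] for i in range(3)) for row in CAM_TO_XYZ]

# Linearity: scaling the input (i.e. exposure) scales the output
# identically, so the ratios between channels - the "colour" - are
# preserved. A matrix-only pipeline cannot produce hue twists.
a = cam_to_xyz([0.1, 0.2, 0.3])
b = cam_to_xyz([0.2, 0.4, 0.6])   # same patch, one stop more exposure
assert all(abs(b[i] - 2 * a[i]) < 1e-12 for i in range(3))
```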

-h
digitaldog
Sr. Member, Posts: 9093
« Reply #47 on: December 16, 2010, 02:25:49 PM »

Quote from: madmanchan
Sensors are (mostly) linear, but human vision and subjective preferences are not.

And neither are nearly all our output devices (display and print).

Andrew Rodney
Author “Color Management for Photographers”
http://digitaldog.net/
Guillermo Luijk
Sr. Member, Posts: 1290
« Reply #48 on: December 16, 2010, 02:56:29 PM »

Quote from: hjulenissen
Gamma exists partly due to the native response of CRT displays, and partly because it maps noise/distortion in a perceptually uniform manner.

There is absolutely no need for gamma per se applied to digital images. The correct mapping of the RGB values so that they display right can be done in software, in the output device (monitor plus video card), or both working together.

Gamma in digital images is only necessary to avoid posterization in the shadows if they are encoded with integer values (for instance 16-bit TIFF files, and especially 8-bit JPEG files). But an image editor working with floating-point numbers doesn't need gamma at all; images can be linear, and deep shadows would be represented as richly as the highlights. It's a matter of encoding efficiency.

These two images display correctly thanks to the software (Photoshop is interpreting the 2.2 gamma of the image on the left, and the 1.0 gamma of the image on the right), which is sending to the output video devices the appropriate values for correct rendering. However, the RGB values are totally different - just look at the histograms.

Gamma exists because of the non-linear behaviour of the original output devices (basically CRTs), and has no relation to the way human vision works. To correctly mimic real life, any imaging system must be linear end to end. If gamma exists at some point, it is to compensate for an inverse non-linearity somewhere else in the system, and this has no relation to the way we perceive light.

Today, in digital imaging, gamma is only needed because integer formats are still the most widely used.
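Guillermo's posterization argument can be checked numerically, by counting how many of an 8-bit file's 256 codes land in the darkest two stops for linear versus gamma-2.2 encoding. A rough sketch, assuming a pure power-law curve:

```python
# How many distinct 8-bit code values fall in the deep shadows
# (linear values below 0.25, i.e. the bottom two stops)?

def codes_in_shadows(gamma, threshold=0.25, bits=8):
    levels = 2 ** bits
    count = 0
    for code in range(levels):
        linear = (code / (levels - 1)) ** gamma   # decode code value to linear light
        if linear < threshold:
            count += 1
    return count

linear_codes = codes_in_shadows(gamma=1.0)   # 8-bit linear encoding -> 64 codes
gamma_codes = codes_in_shadows(gamma=2.2)    # 8-bit gamma-2.2 encoding -> 136 codes
assert gamma_codes > linear_codes            # gamma spends far more codes on shadows
```

With more than twice the code values devoted to the same shadow range, the gamma-encoded file posterizes much later - which is exactly why integer formats want gamma while floating-point ones don't.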

Regards


« Last Edit: December 16, 2010, 03:15:08 PM by Guillermo Luijk »

digitaldog
Sr. Member, Posts: 9093
« Reply #49 on: December 16, 2010, 03:10:38 PM »

Quote from: Guillermo Luijk
These two images display correctly thanks to the software ... which is sending to the output video devices the appropriate values for correct rendering.

Yup, all you need is a profile to define that condition and it previews fine. I use a similar example in classes: loading an untagged gamma 1.0 image (which looks awful), putting some Photoshop samples on the image, and showing how Assign Profile with the correct profile makes the image look fine without altering the numbers a lick. Most people who say "linear images look dark" don't realize that without the proper embedded profile, the image is assumed to be gamma-corrected and so looks just awful.

Of course, without ICC-aware apps, this kind of falls apart.

Poynton sums up the debate about this in his FAQ (http://www.poynton.com/notes/colour_and_gamma/GammaFAQ.html#gamma):

Quote
21. Should I do image processing operations on linear or nonlinear image data?

If you wish to simulate the physical world, linear-light coding is necessary. For example, if you want to produce a numerical simulation of a lens performing a Fourier transform, you should use linear coding. If you want to compare your model with the transformed image captured from a real lens by a video camera, you will have to "remove" the nonlinear gamma correction that was imposed by the camera, to convert the image data back into its linear-light representation.

On the other hand, if your computation involves human perception, a nonlinear representation may be required. For example, if you perform a discrete cosine transform on image data as the first step in image compression, as in JPEG, then you ought to use nonlinear coding that exhibits perceptual uniformity, because you wish to minimize the perceptibility of the errors that will be introduced during quantization.

The image processing literature rarely discriminates between linear and nonlinear coding. In the JPEG and MPEG standards there is no mention of transfer function, but nonlinear (video-like) coding is implicit: unacceptable results are obtained when JPEG or MPEG are applied to linear-light data. In computer graphic standards such as PHIGS and CGM there is no mention of transfer function, but linear-light coding is implicit. These discrepancies make it very difficult to exchange image data between systems.

When you ask a video engineer if his system is linear, he will say Of course! - referring to linear voltage. If you ask an optical engineer if her system is linear, she will say Of course! - referring to linear luminance. But when a nonlinear transform lies between the two systems, as in video, a linear transformation performed in one domain is not linear in the other.

Andrew Rodney
Author “Color Management for Photographers”
http://digitaldog.net/
ejmartin
Sr. Member, Posts: 575
« Reply #50 on: December 16, 2010, 03:33:25 PM »

Quote from: madmanchan
Sensors are (mostly) linear, but human vision and subjective preferences are not.

Indeed, but I think the question is where the software should become nonlinear. I would think that exposure compensation adjustments should take place in the linear domain. For instance, with the D7000 there should be no difference (at least for non-clipped raw data) between choosing ISO 800 in the camera and converting with 0 EV exposure compensation, and ISO 200 pushed two stops in the converter - or for that matter ISO 1600 pulled one stop (even though the latter makes little sense from the point of view of noise). Of course, after the initial EC to get the histogram in the right rendering range, all sorts of nonlinear transformations may help achieve a better rendering; but shouldn't this initial exposure map take place in the linear domain?
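Emil's equivalence can be stated as a one-line operation: under the linearity assumption, exposure compensation is just a multiplication by 2^EV on the raw linear values. A sketch with made-up raw levels:

```python
# Exposure compensation in the linear domain is a pure scale factor.
# Raw values below are invented for illustration; a real comparison
# would also have to account for read noise and clipping.

def push_ev(linear_value, ev):
    return linear_value * (2.0 ** ev)

iso200_raw = 0.05   # some unclipped linear raw level at ISO 200
iso800_raw = 0.20   # the same scene at two stops more analog gain

# ISO 200 pushed +2 EV should match ISO 800 at 0 EV:
assert abs(push_ev(iso200_raw, 2) - iso800_raw) < 1e-12
# ...and ISO 800 pulled -1 EV should match ISO 200 pushed +1 EV:
assert abs(push_ev(iso800_raw, -1) - push_ev(iso200_raw, 1)) < 1e-12
```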

emil
joofa
Sr. Member, Posts: 488
« Reply #51 on: December 16, 2010, 04:09:17 PM »

Quote from: Guillermo Luijk
There is absolutely no need of gamma per se applied to digital images. ... Today, in digital imaging, gamma is only needed because integer formats are still the most widely used.

"Digital imaging" is not a narrow term that applies only to the acquisition stage. Gamma - essentially a non-linearity of a certain type - is used for several reasons in digital images, including compression.

Quote from: ejmartin
Indeed, but I think the question is where should the software become nonlinear ... shouldn't this initial exposure map take place in the linear domain?

Yes, if the acquisition device is known to be linear.

Joofa

Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins
Guillermo Luijk
Sr. Member, Posts: 1290
« Reply #52 on: December 16, 2010, 04:26:19 PM »

Quote from: joofa
Digital imaging is not a small term applied to only acquisition stage. Gamma, essentially a non-linearity of a certain type, is used for several reasons in digital images, including compression.

Compression of the RGB data is not necessary when a floating-point format is used. Can you elaborate on those reasons for which gamma is used?
dmerger
Sr. Member, Posts: 686
« Reply #53 on: December 16, 2010, 04:29:07 PM »

Quote from: digitaldog
Yup, all you need is a profile to define that condition and it previews fine.

I don't have the expertise to add much to this discussion, but I have an example of using a profile to handle a linear image. My Minolta 5400 scanner and software permit the output of a 16-bit linear file. When viewed without assigning the proper Minolta profile, the image indeed looks very dark. Apply the proper profile, and the image looks normal.

Dean Erger
BartvanderWolf
Sr. Member, Posts: 3643
« Reply #54 on: December 16, 2010, 04:35:26 PM »

Quote from: ejmartin
Of course after the initial EC to get the histogram in the right rendering range, all sorts of nonlinear transformations may help achieve a better rendering, but shouldn't this initial exposure map take place in the linear domain?

Hi Emil,

Indeed, there are several operations that are best done in linear-gamma space: black-point correction, exposure compensation, and white balancing come to mind, to name a few obvious ones. Once we enter the realm of human perception (color appearance models), though, other transforms may be needed. Efficiency is the reason for such transforms, but they should not introduce color shifts unless doing so models human vision better.
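The linear-space operations Bart lists are all simple per-channel arithmetic. A toy sketch, with the black level and white-balance gains invented purely for illustration:

```python
# Black-point subtraction, white balance, and exposure compensation
# are all per-channel scalar operations on linear data.
# BLACK_LEVEL and WB_GAINS are hypothetical, not from any real camera.

BLACK_LEVEL = 0.02            # hypothetical sensor black point
WB_GAINS = (2.0, 1.0, 1.5)    # hypothetical R, G, B white-balance multipliers

def develop_linear(rgb, ev=0.0):
    push = 2.0 ** ev          # exposure compensation is a pure scale factor
    return tuple((c - BLACK_LEVEL) * g * push for c, g in zip(rgb, WB_GAINS))

# A grey patch as the (hypothetical) sensor saw it: channel levels chosen
# so that the WB gains bring them back to equal, i.e. neutral.
neutral = (0.12, 0.22, 0.02 + 0.2 / 1.5)
r, g, b = develop_linear(neutral, ev=1.0)
assert abs(r - g) < 1e-9 and abs(g - b) < 1e-9   # white-balanced to neutral
```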

Cheers,
Bart
joofa
Sr. Member, Posts: 488
« Reply #55 on: December 16, 2010, 04:40:49 PM »

Quote from: Guillermo Luijk
Compression of the RGB data is not necessary when a floating point format is used. Can you speak about those reasons for which gamma is used?

Compression is dictated by output-device parameters such as bitrate, bucket capacity, etc. You are mixing up an input-stage issue (acquisition) with an output-stage technique (compression). However, I just wanted to let you know that the term "digital imaging" is not to be used loosely for a certain domain-specific computation while giving the impression that it is true for all of its domain.

Now a little OT: floating point's resolution is not always greater than an integer's! For the same number of bits, the usual floating-point format will have more resolution in the lower numbers, but rest assured that after some crossover point the integer has more resolution. For example, for IEEE 32-bit floating point the crossover occurs at 24 bits (the 23-bit stored fraction plus the implicit leading bit). In usual calculations that situation should not occur; in other words, make sure to keep your calculations below 2^24 if using IEEE 32-bit floats.

Now coming back to your question. Gamma is used in many different places: (1) image-data visualization - say in Lab space, which uses an exponent of 1/3; (2) compression - linear data is not good for compression such as JPEG/wavelet, etc., so a gamma of an appropriate variety is often applied; (3) SNR-range "stabilization" in certain systems - that is more of an audio technique, but just to let you know that it is there; (4) etc.
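joofa's float32 crossover can be verified directly: IEEE 754 single precision carries a 24-bit significand, so every integer up to 2^24 is exact, and no further.

```python
# Round-trip an integer through a 32-bit float to see where exactness ends.
import struct

def to_f32(x):
    """Round a Python float to IEEE 754 single precision and back."""
    return struct.unpack('f', struct.pack('f', x))[0]

assert to_f32(2**24) == 2**24          # 16777216 survives the round trip
assert to_f32(2**24 + 1) != 2**24 + 1  # 16777217 cannot be represented...
assert to_f32(2**24 + 1) == 2**24      # ...it rounds back to 16777216
```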

Joofa

Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins
sandymc
Sr. Member, Posts: 269
« Reply #56 on: December 16, 2010, 11:04:12 PM »

Quote from: hjulenissen
If camera response is linear, what is the point of nonlinear color correction in raw converters?

Like Eric Chan said - "Sensors are (mostly) linear, but human vision and subjective preferences are not."

Non-linear processing is intended to give results that are perceptually "better".

Sandy
hjulenissen
Sr. Member, Posts: 1678
« Reply #57 on: December 17, 2010, 02:10:14 AM »

Quote from: sandymc
Like Eric Chan said - "Sensors are (mostly) linear, but human vision and subjective preferences are not." Non-linear processing is intended to give results that are perceptually "better".
I think that you are all mixing different things. Producing an XYZ-space "image" is not so much perception as mathematics, I believe; dcraw does no strange stuff in producing XYZ. Now, after getting rid of camera quirks and having an image that is somewhat "absolutely referred" (within sensor limits, and on a normalized perceptual basis), it is time to make it "look good". That is when all of the proprietary stuff comes in (which e.g. Adobe could apply to all cameras, since the image has a generic representation in XYZ space). Am I not right?

-h
hjulenissen
Sr. Member, Posts: 1678
« Reply #58 on: December 17, 2010, 02:21:05 AM »

Quote from: Guillermo Luijk
There is absolutely no need of gamma per se applied to digital images. ... Gamma exists because of the original output devices' (basically CRTs') non-linear behaviour, and has no relation to the way human vision works. To correctly mimic real life, any imaging system must be linear end to end.
Gamma is not only used for still images. If you read Poynton's books, I think you will find that he supports my claim that gamma exists for primarily two reasons:
  • The first reason is to compensate for the native behaviour of CRTs (and it was cheaper to do this once in the camera instead of in every customer's TV set, back in the days of analog signal processing).
  • The second reason is to make things easier for lossy compression, or any other DSP that wants to work on "perceptually linear" data. In DSP terms, this could be called a homomorphic transform.
Non-linear end-to-end functions exist and are described by Poynton:
Quote
Television is usually viewed in a dim environment. If an image's correct physical intensity is reproduced in a dim surround, a subjective effect called simultaneous contrast causes the reproduced image to appear lacking in contrast, as demonstrated above. The effect can be overcome by applying an end-to-end power function whose exponent is about 1.1 or 1.2. Rather than having each receiver provide this correction, the assumed 2.5-power at the CRT is under-corrected at the camera by using an exponent of about 1/2.2 instead of 1/2.5. The assumption of a dim viewing environment is built into video coding.
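The arithmetic behind that quote is compact: encoding with an exponent of 1/2.2 and displaying through an assumed 2.5-power CRT leaves a deliberate end-to-end exponent of 2.5/2.2, which is about 1.14.

```python
# (L ** (1/2.2)) ** 2.5 == L ** (2.5/2.2): the exponents simply multiply,
# leaving the slightly-above-unity end-to-end power Poynton describes
# for dim-surround viewing.
camera_exponent = 1 / 2.2
crt_exponent = 2.5

end_to_end = camera_exponent * crt_exponent
assert 1.1 < end_to_end < 1.2   # the "about 1.1 or 1.2" in the quote
```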
sandymc
Sr. Member, Posts: 269
« Reply #59 on: December 17, 2010, 04:36:38 AM »

Quote from: hjulenissen
I think that you are all mixing different things. ... Am I not right?

No, actually I'm not mixing things. Adobe (and others) are! *grin*

If you use a camera profile with hue twists in it, then you have non-linear processing. To be more precise - before we get hung up on what "non-linear processing" is - you have hue varying with luminance. In the case of Adobe's profiles, that happens to occur in HSL space, but the space in which it occurs is irrelevant. What is relevant is where in the pipeline the non-linearity occurs. The point for ETTR is whether the non-linearity occurs before or after exposure correction. If the non-linearity is before, you have variations in hue as a result of ETTR. And for many, although not all, Adobe profiles, the hue twists are configured to occur before the exposure settings take effect.

Where there is confusion is around what I would call "accidental" versus "deliberate" non-linearity. Accidental non-linearity is a result of sensor imperfections or whatever, and can (and should) be corrected for somewhere in the image-processing pipeline. So generally the accidental non-linearities are not a major problem for ETTR, because they're usually quite small, and corrected for anyway. However, hue twists (and other similar mechanisms) are deliberate non-linearities, introduced to enhance perceived image quality. They are generally large and, by definition, not corrected for, and so are a problem for ETTR.

BTW, DCRaw doesn't have any hue twists, or anything similar to them - it does a straight linear conversion. Old school!! But DCRaw isn't the problem.
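Sandy's before/after distinction can be shown with a toy, entirely made-up "hue twist" - a hue shift that grows with luminance. Applied before exposure correction, an ETTR shot pulled down in the converter ends up with a different hue than a normally exposed shot; applied after, the two agree:

```python
# Hypothetical hue twist: hue shifts with luminance. None of the numbers
# here come from any real camera profile; they only illustrate ordering.

def hue_twist(hue, luminance):
    return hue + 10.0 * luminance          # made-up luminance-dependent shift

def twist_then_ec(hue, lum, ev):
    return hue_twist(hue, lum)             # twist sees the capture luminance;
                                           # EC afterwards leaves hue alone

def ec_then_twist(hue, lum, ev):
    return hue_twist(hue, lum * 2.0 ** ev) # EC first; twist sees final luminance

# Normal exposure vs ETTR (+2 EV at capture, pulled -2 EV in the converter):
normal = twist_then_ec(30.0, 0.18, 0.0)
ettr = twist_then_ec(30.0, 0.18 * 4, -2.0)
assert normal != ettr                      # hue now depends on how you exposed

# With the twist placed after exposure correction, the two renderings agree:
assert ec_then_twist(30.0, 0.18, 0.0) == ec_then_twist(30.0, 0.18 * 4, -2.0)
```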

Sandy