Pages: « 1 2 3 [4]
Author Topic: Correcting images in LAB vs RGB  (Read 22063 times)
joofa
Sr. Member
Offline
Posts: 488

« Reply #60 on: February 27, 2010, 10:54:43 PM »

Quote from: digitaldog
Most color scientists will point out that Lab exaggerates the distances in the yellows while it underestimates the distances in the blues. Lab assumes that hue and chroma can be treated separately. There's an issue where hue lines bend with increasing saturation, perceived by viewers as both an increase in saturation and a change in hue, when that's not really supposed to be occurring.  ..... The result is mathematically the same hue as the original, but it ends up appearing purple to the viewer. This is unfortunately accentuated in the blues, causing a well-recognized shift towards magenta.

Andrew, there has been progress in resolving some of these issues, e.g. by warping the Lab space to correct for non-linearity in hue. Please see the following reference:

G.J. Braun, F. Ebner, and M.D. Fairchild, "Color Gamut Mapping in a Hue-Linearized CIELAB Color Space," Proc. of IS&T/SID 6th Color Imaging Conference, Scottsdale, pp. 163-168 (1998).
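For context, the hue that such warping tries to linearize is the CIELAB hue angle. A minimal Python sketch of the hue/chroma decomposition (the function names are mine, not from the paper):

```python
import math

def lab_chroma_hue(a, b):
    """CIELAB chroma C*ab and hue angle h_ab (degrees) from a*, b*."""
    chroma = math.hypot(a, b)
    hue = math.degrees(math.atan2(b, a)) % 360.0
    return chroma, hue

def scale_chroma(a, b, factor):
    """Increase chroma while holding the CIELAB hue angle constant.
    In a hue-linearized space this would also hold *perceived* hue
    constant; in plain CIELAB it does not (e.g. saturated blues
    drift toward purple)."""
    return a * factor, b * factor
```

Holding h_ab constant while scaling C*ab is exactly the operation whose perceptual failure the hue-linearized space is meant to repair.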
« Last Edit: February 27, 2010, 11:00:06 PM by joofa » Logged

Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins
Peter_DL
Sr. Member
Offline
Posts: 423

« Reply #61 on: February 28, 2010, 06:02:56 AM »

Lab became less interesting for me (years ago)
when I discovered the RGB-based HSL blend modes in Photoshop
i.e. Luminosity and Saturation.

But then came Simon Tindemans and his HS-L*/Y curves.
It’s not a Color Appearance Model, but still one step ahead of current technology – IMO.

Peter

Logged
ErikKaffehr
Sr. Member
Online
Posts: 8024

« Reply #62 on: February 28, 2010, 06:47:14 AM »

Quote from: ErikKaffehr
Hi,

Outputting to sRGB removes the colors falling outside the sRGB gamut. The advantage is that you keep as much information as possible. Also, whatever you do in ACR or Lightroom (which share the same processing engine) is guaranteed to be done in the right order, at least according to the views held by Adobe.

Actually, there are a few other issues. ACR/LR does not use ProPhoto RGB as such; it uses "ProPhoto primaries in a linear space". So it has the same gamut as ProPhoto RGB but a gamma of one (i.e. linear).
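For readers unsure what "gamma equal one" means in practice, here is a minimal Python sketch of a pure power-law transfer curve (the actual ProPhoto RGB spec also includes a small linear segment near black, omitted here for simplicity):

```python
def encode_gamma(linear, gamma=1.8):
    """Encode a linear [0,1] value with a pure power-law transfer curve.
    With gamma=1.0 this is the identity, i.e. the 'linear space'
    ACR/LR works in internally."""
    return linear ** (1.0 / gamma)

def decode_gamma(encoded, gamma=1.8):
    """Invert the power-law encoding back to linear light."""
    return encoded ** gamma
```

The two spaces share the same primaries and gamut; only the tone encoding of each channel value differs.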

In most cases the differences will be subtle. Dan Margulis, a well-known authority on image processing, holds the view that 16-bit processing is not needed; most other image-processing experts say that using more bits is beneficial. The way I see it, it's a good approach to keep as much information as possible. Of course, having a parametric workflow based on raw images essentially means that nothing is lost, except in the final stage, when a picture is processed for its intended use.

Best regards
Erik
Logged

ErikKaffehr
Sr. Member
Online
Posts: 8024

« Reply #63 on: February 28, 2010, 06:48:58 AM »

Hi,

I didn't find the info I was thinking about, so I modified my original posting.


Quote from: ejmartin
And those are...?
Logged

EsbenHR
Newbie
Offline
Posts: 39

« Reply #64 on: February 28, 2010, 07:26:13 AM »

Well, we know L*a*b*, in its original form, sucks as a color appearance model.
We use it anyway in all kinds of settings it was not designed for, mostly for lack of something better.

I doubt that any color space (existing or future) designed to retain a simple relationship between Euclidean distance and the number of just-noticeable differences between any two colors can simultaneously work well as an appearance model for editing.
Nor do I see any reason such a space should exist.
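The "simple relationship to Euclidean distance" is exactly what the CIE76 color-difference formula assumes. A minimal Python sketch:

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 color difference: plain Euclidean distance in CIELAB.
    A delta-E around 1 is roughly one just-noticeable difference,
    but the correspondence is not uniform across the gamut, which
    is the non-uniformity discussed above. Later formulas such as
    CIEDE2000 add correction terms rather than fix the space."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))
```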

It is a bit like making a flat map of the earth: you can preserve lengths, angles, or areas, but you cannot get all of them simultaneously.

What we lack, in my opinion, is a dataset that can be used to create a color space suited to a given application.
Unfortunately, that is a huge and expensive job to do right. It would likely take hundreds of test subjects (including enough people with the common types of color blindness) and a huge amount of time to exhaustively test the entire visual range for sensitivity and color matching under the various relevant viewing conditions.

Until we have such a (public) data set, I think the choice of color-space remains a subjective choice where any choice is arguably right.
Logged
BernardLanguillier
Sr. Member
Offline
Posts: 8388

« Reply #65 on: February 28, 2010, 09:04:16 AM »

Quote from: Ishmael.
I've recently discovered the power of the LAB color space and it almost seems too good to be true. I'm sure certain adjustments should be done in RGB, but for the most part I am getting significantly more powerful images out of LAB than RGB. Given that I'm relatively new to Photoshop, I would really like to know from you veterans out there why someone would correct an image in RGB instead of LAB.

There is little doubt that a conversion from RGB to LAB and back is an entropic process that results in less color information (some pixels that initially carried different color values end up carrying the same value). The only question of importance, though, is whether this is really an issue on actual photographs.

There are some things that the LAB space is very useful for, such as improving the color separation in mostly monochromatic images with dominantly reddish or greenish tints. Some will argue that the same can be done in RGB, but I have yet to find a method as fast and straightforward as a symmetric steepening of the curve in the a or b channel in LAB. That being said, it should be used mostly on 16-bit images and is best kept as one of the last steps in a workflow before printing.
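To make the symmetric steepening concrete, here is a minimal per-pixel Python sketch of the idea (the function name and slope value are illustrative, not a Photoshop recipe):

```python
def steepen_ab(value, slope=1.3):
    """Symmetrically steepen an a* or b* channel value about the
    neutral axis (a* = b* = 0). Pixels that differed slightly in
    tint move further apart (better color separation) while true
    neutrals stay neutral; results clip to the usual [-128, 127]
    channel range."""
    return max(-128.0, min(127.0, value * slope))
```

Because the curve pivots on zero, it behaves like a steepened straight line through the midpoint of the a or b curve dialog.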

For some reason that still eludes me, it seems some people think it is a photographer's duty to choose a camp for or against LAB... as if we didn't have enough of that with the Canon vs. Nikon discussions.

Cheers,
Bernard
Logged

A few images online here!
bjanes
Sr. Member
Offline
Posts: 2882

« Reply #66 on: February 28, 2010, 09:36:41 AM »

Quote from: digitaldog
It most certainly can be (it's not perfect and there are caveats). For one, you can define “colors” with numeric values that we can't see (hence, they are not colors).

Of course, the apices of the ProPhoto RGB color triangle have to lie well outside the CIE horseshoe so that all visible colors can be encoded. This leads to encoding inefficiency, but memory and disc space are cheap these days, so the inefficiency is not a significant disadvantage. In L*a*b*, the a and b values also extend beyond the visible range. In both systems, injudicious editing can produce imaginary colors.
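A quick way to see such imaginary colors numerically is to push extreme a*/b* values through the standard CIELAB-to-XYZ conversion; negative tristimulus values correspond to "colors" outside the spectral locus that no light can produce. A minimal Python sketch (D50 white point assumed):

```python
def lab_to_xyz(L, a, b, white=(0.9642, 1.0, 0.8249)):
    """CIELAB -> XYZ with a D50 white point. Extreme a*/b* values
    can yield negative X or Z, i.e. imaginary colors."""
    def f_inv(t):
        d = 6.0 / 29.0
        return t ** 3 if t > d else 3.0 * d * d * (t - 4.0 / 29.0)
    fy = (L + 16.0) / 116.0
    fx = fy + a / 500.0
    fz = fy - b / 200.0
    return tuple(w * f_inv(t) for w, t in zip(white, (fx, fy, fz)))
```

For example, L* = 50 with a* = -250 produces a negative X: an encodable Lab triplet with no corresponding visible color.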

A more fundamental problem with an RGB space is that it does not take into account how information is processed in the retina and the brain. The retina has three types of cone photoreceptors, roughly corresponding to red, green, and blue (ignoring tetrachromats for the time being); this is the basis of the Young-Helmholtz theory of color vision. Opponent processing then occurs in the neural network of the retina and in the brain, as described by Hering's theory of color vision: red opposes green and yellow opposes blue. This opponency is captured in the L*a*b* model. How this relates to color processing in editing is not immediately apparent, and perhaps others can comment.

It was formerly thought that opponency was hard-wired in the retina, but recent studies (see Scientific American, February 2010) have shown that opponency can be overcome in certain situations, and red-green and blue-yellow can be perceived as new colors. How would these colors be expressed in current models?

Hue twists with changes in luminance in L*a*b* are well known and must be taken into account in any color appearance model. Bruce Lindbloom has published a uniform perceptual Lab space implemented with lookup tables. RGB spaces use a power curve (gamma) for perceptual uniformity of luminance. The L* tone curve of L*a*b* is designed to be perceptually uniform, and an exponent of 2.2 for a power curve most closely approximates the L* curve; the exponent of 1.8 used for ProPhoto RGB is suboptimal (see Bruce Lindbloom's companding calculator). Computing power is now sufficient that one can edit in one color space and have the results on screen and in the info palette simulate another. For example, in Lightroom the working space is linear ProPhoto RGB, but the RGB values and screen preview use ProPhoto primaries with an sRGB tone curve. The actual working space may become less relevant, since its imperfections can be corrected in the color appearance model. So far as I know, most CAMs use L*a*b* or a similar CIE space as the reference space (not ProPhoto RGB).
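The claim that a 2.2 power curve tracks L* more closely than a 1.8 exponent can be checked numerically; a minimal Python sketch (the comparison range and 0-100 scale are my choices, not Lindbloom's):

```python
def l_star(y):
    """CIE lightness L* (0-100) from relative luminance Y in [0,1]."""
    d = 6.0 / 29.0
    f = y ** (1.0 / 3.0) if y > d ** 3 else y / (3.0 * d * d) + 4.0 / 29.0
    return 116.0 * f - 16.0

def gamma_lightness(y, gamma):
    """Power-curve 'lightness' on the same 0-100 scale for comparison."""
    return 100.0 * y ** (1.0 / gamma)

def max_error(gamma):
    """Worst deviation from L* over luminances 0.01 .. 0.99."""
    return max(abs(l_star(y / 100.0) - gamma_lightness(y / 100.0, gamma))
               for y in range(1, 100))
```

Over that range the gamma-2.2 curve stays within a few L* units of the true lightness function, while gamma 1.8 diverges far more in the midtones.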
Logged
bjanes
Sr. Member
Offline
Posts: 2882

« Reply #67 on: February 28, 2010, 09:41:16 AM »

Quote from: bjanes
Of course, unmentioned in this discussion, is the fact that ProPhotoRGB is not a color appearance model either.

Quote from: digitaldog
No, it's not; it's an RGB working space (which apparently isn't obvious).
And L*a*b is a reference color space, not a CAM, which should be equally obvious, but is ignored by you.
Logged
digitaldog
Sr. Member
Offline
Posts: 9315

« Reply #68 on: February 28, 2010, 11:50:42 AM »

Quote from: bjanes
And L*a*b is a reference color space, not a CAM, which should be equally obvious, but is ignored by you.

I don’t recall implying that Lab was a CAM...
Logged

Andrew Rodney
Author “Color Management for Photographers”
http://digitaldog.net/
BernardLanguillier
Sr. Member
Offline
Posts: 8388

« Reply #69 on: February 28, 2010, 04:12:21 PM »

Quote from: DPL
Lab became less interesting for me (years ago)
when I discovered the RGB-based HSL blend modes in Photoshop
i.e. Luminosity and Saturation.

But then came Simon Tindemans and his HS-L*/Y curves.
It’s not a Color Appearance Model, but still one step ahead of current technology – IMO.

Thanks for the link.

Cheers,
Bernard
Logged

A few images online here!
Hening Bettermann
Sr. Member
Offline
Posts: 586

« Reply #70 on: March 01, 2010, 08:16:19 AM »

Hi Cliff

This CIECAM02 plug-in sounds very interesting to me. On your site, I read

"Another motivation to create the plug-in was to explore the use of CIECAM02 as a perceptually-uniform image editing space. For example, it is often desirable to be able to increase the colorfulness of an image, uniformly for all hues (see Evans), and without causing hue shifts."

Does this mean that you have implemented Bruce Lindbloom's Perceptually Uniform Lab space?

Kind regards - Hening.
Logged

crames
Full Member
Offline
Posts: 210

« Reply #71 on: March 01, 2010, 05:23:48 PM »

Quote from: Hening Bettermann
Does this mean that you have implemented Bruce Lindbloom's Perceptually Uniform Lab space?
No; Lindbloom's space is implemented by him as a color profile (for a fixed viewing condition).

CIECAM02 is something else: a comprehensive color appearance model that predicts how colors look under different viewing conditions. (Link to the CIE) It is much more uniform than CIELAB, probably about as uniform as Lindbloom's profile - blues do not turn purple, etc.

Unfortunately it can be 10 times more complicated than regular color management, which is not a good thing. But it has the potential to solve some sticky problems for photographers, like screen-to-print matching.
Logged

Cliff
papa v2.0
Full Member
Offline
Posts: 198

« Reply #72 on: March 03, 2010, 12:08:22 PM »

CIELAB was developed as a color space for the specification of color differences on reflective media. It was not exactly perceptually uniform, and this non-uniformity was addressed in subsequent colour-difference formulae.
I don't think it was designed with image editing in mind; hence the problem.
CIELAB is, however, used in the ICC profile format as one of the Profile Connection Spaces.

CIECAM02 is a more perceptually uniform colour space and is used for gamut mapping. It is quite complicated to use, requiring several input and output parameters. I have been using it in a raw image pipeline with reasonable results. My approach is to use a camera-sensor RGB to XYZ matrix (optimised by error reduction in CIECAM02 space) to achieve accurate scene-referred colorimetry before passing the data to CIECAM02.
I have not, however, used it as an editing space, although it wouldn't be too hard to implement or to design an interface for. Cliff Rames has produced a good CIECAM02 plug-in for Photoshop, as mentioned earlier in this thread.
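The RGB-to-XYZ matrix step described above is just a 3x3 linear transform applied per pixel. A minimal Python sketch, using the standard sRGB/D65 matrix as a stand-in for a camera-specific, CIECAM02-optimised one:

```python
# Standard linear-sRGB (D65) -> XYZ matrix, used here only as a
# stand-in for a camera-specific matrix obtained by optimisation.
SRGB_TO_XYZ = [(0.4124, 0.3576, 0.1805),
               (0.2126, 0.7152, 0.0722),
               (0.0193, 0.1192, 0.9505)]

def rgb_to_xyz(rgb, matrix=SRGB_TO_XYZ):
    """Apply a 3x3 colour matrix to a linear RGB triplet,
    yielding CIE XYZ tristimulus values."""
    return tuple(sum(m * c for m, c in zip(row, rgb)) for row in matrix)
```

The same code shape works for any such matrix; only the nine coefficients change with the sensor and the optimisation target.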

This can also now be done using ACR (see Creating scene-referred images using Photoshop CS3  and ) and then passing to the CIECAM02 plugin. The scene viewing conditions would then need to be entered.

CIECAM02 offers several sets of coordinates, for example:

JMh - Lightness, Colourfulness and Hue
JCh - Lightness, Chroma and Hue
QMh - Brightness, Colourfulness and Hue
QCh - Brightness, Chroma and Hue
Jab - Lightness, redness-greenness and yellowness-blueness

It also has a correlate for saturation.
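The cylindrical (JCh) and rectangular (Jab) coordinates listed above are related in the same way as C*ab/h_ab and a*/b* are in CIELAB. A minimal Python sketch:

```python
import math

def jch_to_jab(J, C, h_deg):
    """Convert cylindrical JCh correlates (lightness, chroma, hue
    angle in degrees) to rectangular Jab (lightness,
    redness-greenness, yellowness-blueness)."""
    h = math.radians(h_deg)
    return J, C * math.cos(h), C * math.sin(h)
```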

CIECAM02 is used as a gamut-mapping space and might yet become an ICC PCS.
Logged
Hening Bettermann
Sr. Member
Offline
Posts: 586

« Reply #73 on: March 03, 2010, 03:51:23 PM »

Hi papa v2.0!

Thank you very much for your post and this highly interesting link. I am very eager to explore this, substituting Raw Developer for ACR. If I find the learning curve of CIECAM02 too steep, I may just use the profiles and begin with Simon Tindemans' Tonability plug-in.
http://21stcenturyshoebox.com/tools/tonability.html

Good light! - Hening.
« Last Edit: March 03, 2010, 03:52:36 PM by Hening Bettermann » Logged
