Not exactly, and Guillermo is right - L*a*b is NOT perceptually uniform, so there can be certain color shifts. The most common example is the well-known "blue turns purple" problem.
Absolutely! It's supposed to be perceptually uniform; that was its original goal when it was developed mathematically from CIEXYZ. It wasn't entirely successful and, like the deltaE calculations, there have been attempts to "fix" it over the years.
Aside from that, Lab has been oversold, and it's been oversold for a long time. Here we have Bruce Fraser discussing this way back in 1999 (in only the way Bruce could do so):
Here's a simple experiment you can try that clearly demonstrates the
shortcomings of Lab, no matter how good your perceptual rendering is.
Make a square image, and make half of the background L*48 a*4 b*-77. Make
the other half of the background L*76 a*36 b*86.
Then, place a small square on each background color, and fill it with L*65
You'll find that the green square on the orange background looks darker
than the green square on the blue background. If you want to make the
greens match, you'll have to edit one of them -- no pixel-by-pixel
automatic conversion will do it. The greens are colorimetrically identical,
but they look different.
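Fraser's experiment is easy to reproduce numerically. The sketch below converts his Lab values to sRGB in pure Python (assuming a D65 white point and the standard sRGB matrix); the green's a*/b* values are placeholders, since the quote above only gives L*65. The point the code makes is that the two green patches are byte-identical after conversion, so any visible mismatch is purely perceptual.

```python
# Sketch of the two-background experiment. Assumptions: D65 white point,
# standard XYZ->sRGB matrix; the green's a*/b* are hypothetical (the
# quote only specifies L*65).

def lab_to_srgb(L, a, b):
    # CIELAB -> XYZ (D65 white point)
    fy = (L + 16) / 116
    fx = fy + a / 500
    fz = fy - b / 200
    def finv(t):
        return t ** 3 if t > 6 / 29 else 3 * (6 / 29) ** 2 * (t - 4 / 29)
    X, Y, Z = 0.95047 * finv(fx), 1.0 * finv(fy), 1.08883 * finv(fz)
    # XYZ -> linear sRGB
    r = 3.2406 * X - 1.5372 * Y - 0.4986 * Z
    g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
    bl = 0.0557 * X - 0.2040 * Y + 1.0570 * Z
    # clamp to gamut and apply the sRGB transfer curve
    def gamma(u):
        u = min(max(u, 0.0), 1.0)
        return 12.92 * u if u <= 0.0031308 else 1.055 * u ** (1 / 2.4) - 0.055
    return tuple(round(255 * gamma(c)) for c in (r, g, bl))

blue_bg   = lab_to_srgb(48,   4, -77)
orange_bg = lab_to_srgb(76,  36,  86)
green     = lab_to_srgb(65, -45,  45)   # placeholder a*/b*; quote gives only L*65
# Both green patches use the exact same Lab triple, so their converted
# pixels are identical -- any apparent mismatch is simultaneous contrast.
print(blue_bg, orange_bg, green)
```

Filling two halves of an image with `blue_bg` and `orange_bg` and placing a `green` square on each reproduces the effect Fraser describes.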
This is why I feel unable to trust any automatic conversion, whether it's
done in the RIP or elsewhere, and no matter what color space you use.
Lab may be device-independent, but so are the RGB working spaces in
Photoshop 5. Given the current limitations on implementation, I prefer to
use calibrated RGB, but even if those limitations were to disappear, you'd
still have to edit the image to make the greens match if you worked in Lab.
If you want to argue the superiority of LCH as an editing interface, that's
fine. I think people should edit in the interface with which they're most
comfortable.
Let me make it clear that I'm not adamantly opposed to Lab workflows. If
they work for you, that's great, and you should continue to use them.
My concern is that Lab has been oversold, and that naive users attribute to
it an objective correctness that it does not deserve.

Even if we discount the issue of quantization errors going from device space
to Lab and vice versa, which could be solved by capturing some larger number
of bits than we commonly do now, (though probably more than 48 bits would be
required), it's important to realise that CIE colorimetry in general, and
Lab in particular, have significant limitations as tools for managing color
appearance, particularly in complex situations like photographic imagery.
CIE colorimetry is a reliable tool for predicting whether two given solid
colors will match when viewed in very precisely defined conditions. It is
not, and was never intended to be, a tool for predicting how those two
colors will actually appear to the observer. Rather, the express design goal
for CIELab was to provide a color space for the specification of color
differences. Anyone who has really compared color appearances under
controlled viewing conditions with delta-e values will tell you that it
works better in some areas of hue space than others.
When we deal with imagery, rather than matching plastics or paint swatches,
a whole host of perceptual phenomena come into play that Lab simply ignores.
Simultaneous contrast, for example, is a cluster of phenomena that cause the
same color under the same illuminant to appear differently depending on the
background color against which it is viewed. When we're working with
color-critical imagery like fashion or cosmetics, we have to address this
phenomenon if we want the image to produce the desired result -- a sale --
and Lab can't help us with that.
Lab assumes that hue and luminance can be treated separately -- it assumes
that hue can be specified by a wavelength of monochromatic light -- but
numerous experimental results indicate that this is not the case. For
example, Purdy's 1931 experiments indicate that to match the hue of 650nm
monochromatic light at a given luminance would require a 620nm light at
one-tenth of that luminance. Lab can't help us with that. (This phenomenon
is known as the Bezold-Brücke effect.)
Lab assumes that hue and chroma can be treated separately, but again,
numerous experimental results indicate that our perception of hue varies
with color purity. Mixing white light with a monochromatic light does not
produce a constant hue, but Lab assumes it does -- this is particularly
noticeable in Lab modelling of blues, and is the source of the blue-purple
problem.
There are a whole slew of other perceptual effects that Lab ignores, but
that those of us who work with imagery have to grapple with every day if our
work is to produce the desired results.
So while Lab is useful for predicting the degree to which two sets of
tristimulus values will match under very precisely defined conditions that
never occur in natural images, it is not anywhere close to being an adequate
model of human color perception. It works reasonably well as a reference
space for colorimetrically defining device spaces, but as a space for image
editing, it has some important shortcomings.
One of the properties of LCH that you tout as an advantage -- that it avoids
hue shifts when changing lightness -- is actually at odds with the way our
eyes really work. Hues shift with both lightness and chroma in our
perception, but not in LCH.
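The hue constancy Fraser objects to is baked into the geometry: Lab-to-LCH is just a polar re-parameterisation of the a*/b* plane, so changing L (or C) cannot move the hue angle H by construction. A short sketch:

```python
# Sketch: Lab <-> LCH conversion. Because H is defined purely from the
# a*/b* coordinates, editing L while holding C and H fixed leaves the hue
# angle untouched by construction -- the constancy our eyes don't have.
import math

def lab_to_lch(L, a, b):
    C = math.hypot(a, b)
    H = math.degrees(math.atan2(b, a)) % 360
    return L, C, H

def lch_to_lab(L, C, H):
    return L, C * math.cos(math.radians(H)), C * math.sin(math.radians(H))

L, C, H = lab_to_lch(48, 4, -77)
lighter = lch_to_lab(L + 20, C, H)              # raise lightness, hold C and H
assert math.isclose(lab_to_lch(*lighter)[2], H)  # hue angle is unchanged
```

In LCH the lightened colour keeps its exact hue angle, whereas the Bezold-Brücke results above say its perceived hue should drift.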
None of this is to say that working in Lab or editing in LCH is inherently
bad. But given the many shortcomings of Lab, and given the limited bit depth
we generally have available, Lab is no better than, and in many cases can be
worse than, a colorimetrically-specified device space, or a colorimetrically
defined abstract space based on real or imaginary primaries.
For archival work, you will always want to preserve the original capture
data, along with the best definition you can muster of the space of the
device that did the capturing. Saving the data as Lab will inevitably
degrade it with any capture device that is currently available. For some
applications, the advantages of working in Lab, with or without an LCH
interface, will outweigh the disadvantages, but for a great many
applications, they will not. Any time you attempt to render image data on a
device, you need to perform a conversion, whether you're displaying Lab on
an RGB monitor, printing Lab to a CMYK press, displaying scanner RGB on an
RGB monitor, displaying CMYK on an RGB monitor, printing scanner RGB to a
CMYK press, etc.
Generally speaking, you'll need to do at least one conversion, from input
space to output space. If you use Lab, you need to do at least two
conversions, one from input space to Lab, one from Lab to output space. In
practice, we often end up doing two conversions anyway, because device
spaces have their own shortcomings as editing spaces.
The only real advantage Lab offers over tagged RGB is that you don't need to
send a profile with the image. (You do, however, need to know whether it's
D50 or D65 or some other illuminant, and you need to realise that Lab (LH)
isn't the same thing as Lab.) In some workflows, that may be a key
advantage. In many, though, it's a wash.
One thing is certain. When you work in tagged high-bit RGB, you know that
you're working with all the data your capture device could produce. When you
work in Lab, you know that you've already discarded some of that data.