L* is the "rage" for some (many in Europe). But many in the US are asking for concrete examples of the benefits, in a white paper or as empirical data that shows an advantage. This was discussed at length on the ColorSync list last year. One of the most useful posts was from Lars Borg, the chief color scientist at Adobe:
L* is great if you're making copies. However, in most other
scenarios, L* out is vastly different from L* in. And when L* out is
different from L* in, an L* encoding is very inappropriate, as the
following example shows.
Let me provide an example for video. Let's say you have a Macbeth
chart. On set, the six gray patches would measure around L* 96, 81,
66, 51, 36, 21.
Assuming the camera is Rec.709 compliant, uses a 16-235 digital
encoding, and is set for the exposure of the Macbeth chart, the
video RGB values would be 224, 183, 145, 109, 76, 46.
On a reference HD TV monitor they should reproduce at L* 95.5, 78.7,
62.2, 45.8, 29.6, 13.6.
If, say, 2% flare is present on the monitor (for example at home),
the projected values would be different again, here: 96.3, 79.9,
63.8, 48.4, 34.1, 22.5.
As you can see, L* out is clearly not the same as L* in.
Except for copiers, a system gamma greater than 1 is a required
feature for image reproduction systems aiming to please human eyes.
For example, film still photography has a much higher system gamma.
Now, if you want an L* encoding for the video, which set of values
would you use:
96, 81, 66, 51, 36, 21 or
95.5, 78.7, 62.2, 45.8, 29.6, 13.6?
Either is wrong, when used in the wrong context.
If I need to restore the scene colorimetry for visual effects work, I
need 96, 81, 66, 51, 36, 21.
If I need to re-encode the HD TV monitor image for another device,
say a DVD, I need 95.5, 78.7, 62.2, 45.8, 29.6, 13.6.
In this context, using an L* encoding would be utterly confusing due
to the lack of common values for the same patches. (Like using US
Dollars in Canada.)
Video solves this by not encoding in L*. (Admittedly, video encoding
is still somewhat confusing. Ask Charles Poynton.)
When cameras, video encoders, DVDs, computer displays, TV monitors,
DLPs, printers, etc., are not used for making exact copies, but
rather for the more common purpose of pleasing rendering, the L*
encoding is inappropriate as it will be a main source of confusion.
Are you planning to encode CMYK in L*, too?
Color Geek Chris Murphy had this to say about the proposal to make L* an ISO Spec (and the request for proof of concept):
On 3/11/08 7:16 AM, "Jan-Peter Homann" <homann at colormanagement.de>
wrote:
"The idea behind L* based RGB working spaces is a perceptually
uniform encoding of lightness."
I believe Wyszecki & Stiles state that L*a*b* is an approximately
uniform color space. I don't know that they'd suggest it is
perceptually uniform. So out of the gate there is at least some
recognition that it might not be ideal or exact. The question then is,
if it's not, is that a problem? Or can it be ignored? And that depends
on the context of its usage.
"As the profile connection space between ICC profiles is mostly Lab
(for LUT profiles...), profile conversion from an L* based
working space to the PCS and from there to an L* calibrated output
device will avoid unnecessary tonal conversions, especially for
8-bit data."
Yes, but even eciRGBv2, with its L* based tone reproduction curve
(TRC), uses an XYZ PCS. I think it's been demonstrated for practical
purposes that, as these conversions all occur at a minimum of 16bpc
precision, the tone reproduction curve of the PCS is not relevant.
In practice this bears out by the fact we have non-trivial processing
and editing occurring in real world situations where the TRC is
linear, not based on L*.
Depending on the CMM, the data may be converted through XYZ and/or
Lab, or it may go directly from source space to destination space,
without actually being converted into the PCS color space at all.
In practice, input devices, output devices, and display devices do not
have a tone response curve described by an L* function. Could they be
compelled to have a natural TRC defined by L*, by inherent design? I'm
not an engineer. They behave the way they behave, and that isn't
described by L*. If we compel them to do otherwise, there is loss of
levels to coerce them away from their natural behavior in favor of a
different TRC. It's easy to quantify these losses mathematically.
It's perhaps not as easy to quantify them in visual terms, as human
vision can be quite tolerant of loss of tonality. Up to a point.
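The loss is indeed easy to demonstrate. A small sketch (the parameters are my own illustration, not Murphy's): take an 8-bit ramp whose native TRC is gamma 2.2, re-encode it with an L* TRC at 8 bits, and count the surviving levels:

    # Count distinct codes after re-encoding an 8-bit gamma 2.2 ramp as L*.
    def Y_to_lstar(Y):
        return 116 * Y ** (1 / 3) - 16 if Y > 0.008856 else 903.3 * Y

    src = [(c / 255) ** 2.2 for c in range(256)]           # decode gamma 2.2
    dst = {round(255 * Y_to_lstar(Y) / 100) for Y in src}  # re-encode as L*
    print(len(dst))  # fewer than 256 distinct codes: levels were lost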
"We now have several vendors of high-end monitors and high-end
monitor calibration/profiling solutions offering L* based monitor
calibration, which is used in eciRGBv2 workflows, especially in
Germany, in the area of post production for photography and prepress."
Such displays have sufficient precision in their internal LUTs to
compel their TRC to follow a complex function such as L* rather than
a simple gamma function. I don't see a problem with this, although I
have yet to read a compelling case for it. The benefit of doing so
has yet to be demonstrated.
Unless the press condition is also defined by L* there will be loss of
levels from source to destination. It's another matter if this is a
problem visually. But the advocates of L* based workflows have yet to
present any information that demonstrates the advantages or
disadvantages of such a workflow.
Further, the ECI web site contradicts itself. On the one hand it says:
"The focus of the eciRGB_v2 profile still is on the print and
but then says
"In general, ECI now recommends to always use the eciRGB_v2 profile
for new projects or when creating new data. This is especially true
when converting from RAW data or from 16 bit image data."
1. The print and publishing industries clearly run predominantly
8bpc workflows. They are not 16bpc workflows.
2. Why, in the face of its being focused on the print and publishing
industries, is eciRGBv2 especially applicable to high-bit imagery? Why
does encoding matter at all with high-bit data?
"They also created an L* based version of ProPhoto RGB, called
ProStarRGB."
Well, on the face of it, I find it ridiculous for several reasons:
1. It ignores the origin of ProPhoto RGB being ROMM. This is an
output-centric color space. It is not an input-centric color space.
For that we should be looking to RIMM or ERIMM or scRGB or what
OpenEXR uses, or the like.
2. Kodak had some explicit reasons for deciding upon the tone
reproduction curve for ROMM, and it had to do with reversibility.
Since it is an intermediate color space, we have images that are
converted into that space and out of that space. The ROMM
non-linearity, strictly speaking, is not defined by a gamma 1.8 function.
But at the time, due to a limitation in Photoshop where a LUT based
editing space could not be defined, they went with a two part
solution. First the ICC profile established the TRC with a gamma 1.8
function, and second the CMM used a limiting function to deal with
loss at the dark end when the curve is inverted.
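For reference, the ROMM RGB encoding as published is a two-part curve: a linear toe with slope 16 below a threshold of 1/512, then a pure 1/1.8 power. A minimal sketch:

    # ROMM/ProPhoto encoding per the ROMM RGB spec: not a plain gamma 1.8.
    E_T = 1 / 512  # threshold where the linear toe meets the power curve

    def romm_encode(x):
        """Linear value (0..1) to ROMM-encoded value (0..1)."""
        return 16 * x if x < E_T else x ** (1 / 1.8)

    print(romm_encode(0.001), romm_encode(0.18), romm_encode(1.0))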
The L* function, in particular in an 8bpc context, is a more complex
curve and is more aggressive than a gamma 1.8 or gamma 2.2 function,
depending where you are on the curve. Now I don't know that this is a
problem, but have any of the advocates bothered to investigate it? And
what did they find? I haven't read any data indicating even a modicum
of serious scrutiny to its proposal.
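To make the "more aggressive" point concrete, here is a comparison (my illustration) of the three encodings at a few luminance levels; L* lifts the shadows harder than either gamma function:

    # Encoded values for L*, gamma 1.8, and gamma 2.2 at sample luminances.
    def Y_to_lstar(Y):
        return 116 * Y ** (1 / 3) - 16 if Y > 0.008856 else 903.3 * Y

    print("Y      L*/100  Y^(1/1.8)  Y^(1/2.2)")
    for Y in (0.01, 0.05, 0.18, 0.50, 0.90):
        print(f"{Y:.2f}   {Y_to_lstar(Y)/100:.3f}   {Y**(1/1.8):.3f}      {Y**(1/2.2):.3f}")
    # at Y=0.05: L* gives 0.267 vs 0.189 (gamma 1.8) and 0.256 (gamma 2.2)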
3. While ProPhoto RGB has an 8bpc implementation, it is considered
best practice to use a 16bpc workflow with ProPhoto RGB. In
that context, again, why do we care about encoding? Why is the
definition of the TRC important, within obviously reasonable limits?
Does it really matter when we're talking about 16bpc?
So if the advantage to L* is in an 8bpc context, how is it relevant in
a 16bpc context, if at all? Who is advocating an 8bpc "ProStarRGB"
workflow? Anyone? If not, what is the point of changing the TRC?
What's the advantage? What is made simpler? What is made faster? What
are the alternatives to having yet another flavor-of-the-day editing
space?
"A strong opinion against L* based working spaces and output
calibration comes from the well-known color expert Chris Murphy. He
states that the 'native gamma' or native tone reproduction curve of
printing processes is better represented by a gamma of 1.8, not by
L*. But to date, he has not pointed to scientific research to verify
this statement."
It is considered conventional wisdom. That conventional wisdom may be
mistaken, but it is present nevertheless. Even the document you cited
in your post states this conventional wisdom:
"Apple®, on the other hand adopted a display gamma function of 1.8 in
an attempt to drive the display closer to the gamma of printed
material. This decision is probably why Apple® Macintosh™ computers
became so prevalent in print production."
It is not my burden to point to scientific research when it comes to
conventional wisdom. The advocates of L* based workflows have that
burden. And so far it has fallen completely flat. And that is why I
say it is uncompelling.
It is false to suggest that I'm against someone doing the necessary
work to demonstrate its possible usefulness. I just haven't seen
that work.
Have any of the questions I've asked on several lists, including this
very posting, been asked by the proponents of L* based workflows? What
were the answers to those questions? Where is the data? What were the
test parameters? What was the metric for success? Under what
conditions is there improvement in quality? Under what conditions was
there a reduction in quality? What were the conditions? What
equipment? What alternatives were explored? What were the results?
"My personal view on this topic: the L* based concept for an RGB
working space makes the most sense if we also have L* based monitor
AND printer calibration."
Why does that make more sense than matching TRCs at the front end and
back end of the workflow, which has been done since the early 1990s?
Why and how is this new idea better?
"So I think the initiative for ProStarRGB makes a lot of sense and
should be integrated into ISO 22028. We still need more research
about the native tone reproduction curves of printing processes,
e.g. ISO 12647-2 / FOGRA data, G7, standard inkjet drivers, or
standard settings of digital printing systems."
Well, I would suggest more research first, before advocating something
that changes people's workflows, let alone proposing it as an ISO
standard. Otherwise you're putting the cart before the horse. Let's
demonstrate the benefits first.
I think in digital imaging we have other things to be more concerned
about. In particular in the photography and museum markets, the issues
are well beyond L*. L*a*b* is by definition based on relative
luminance and this is not exactly helpful when it comes to describing
scene-referred data, let alone HDR imagery.
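The dependence is baked into the definition: L* is a function of Y relative to a reference white Yn, and scene-referred or HDR data has no single meaningful Yn. A tiny sketch (standard CIE formula; the HDR framing is the point above):

    # CIE L* is defined only relative to a reference white Yn.
    def lstar(Y, Yn):
        t = Y / Yn
        return 116 * t ** (1 / 3) - 16 if t > 0.008856 else 903.3 * t

    print(lstar(500, 100))  # a brighter-than-white scene value: L* ~ 182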
It is interesting to note that Microsoft's WCS implementation makes no
use of L*a*b* whatsoever. Its device profiles are XYZ based, and then
on top of that they have implemented CIECAM02. So their mapping all
depends on JCh, not L*a*b*. Now, regardless of other issues regarding
Microsoft and WCS, I think everyone understands that the color
scientists and engineers who worked on WCS are quite competent
individuals. I find the lack of dependency on L*a*b* relevant to
point out because there are aspects of WCS that I would like to see
adopted as we move forward.
The entire series of posts is in the ColorSync Archives for those who want to go further into the "debate".