Author Topic: Bit precision  (Read 8582 times)
Jonathan Wienke
« Reply #20 on: August 11, 2006, 06:05:12 AM »

Quote
Simple ideas are often most challenging to understand.

Obviously, or we wouldn't still be having this discussion.

Quote
You’re changing the rules while the game is running.  We were starting with an 8 bit file.  Also, you persistently refuse to consider the sequence I suggest (layers first, 16 bit then, flatten last).

I'm not changing the rules, I'm pointing out a glaring flaw in your editing method, and offering a much better alternative. If you take my 16-bit sample TIFF, convert it to 8-bit, and then duplicate the action steps via adjustment layers as you suggest, the resulting image is going to have heavily posterized shadows, just like the JPEG posted on my web site showing the result after running the action in 8-bit mode. So your proposal accomplishes nothing. OTOH, if you convert my sample TIFF to 8-bit or start out with the JPEG, convert to 16-bit, run a Neat Image pass, and then run the action or apply the equivalent stack of adjustment layers, you'll get a result that won't be as good as the one derived from RAW processed 100% 16-bit, but will be much better than anything processed in 8-bit mode, whether via actions or adjustment layers.

Quote

You're confusing image layers and adjustment layers; the techniques you cited save the results in separate layers so you can adjust the intensity of the edit by changing the opacity of the layer, but there is no such thing as a "Neat Image noise reduction adjustment layer" or "unsharp mask adjustment layer" in Photoshop's adjustment layer palette. Just because a tool saves its results in a separate layer does NOT mean it uses an adjustment layer to create the result.

Image layers contain actual image data; adjustment layers do not--all they are is an instruction to perform a specific adjustment (level, curve, hue/saturation, etc.) on whatever image layer(s) are below them in the layer stack, with an optional layer mask to limit the adjustment  to specific areas of the image if desired.
Logged

PeterLange
« Reply #21 on: August 11, 2006, 02:29:40 PM »

Quote
...
You seem hellbent on trying to convince yourself that camera jpgs are less bad than they are...

Here’s an inspiring article:
http://www.robgalbraith.com/bins/multi_pag...cid=7-6468-7844
‘All of Majoli's pictures are captured in JPEG format…’

Why not just say that Photoshop is an excellent tool for making the best of a non-ideal situation like JPEGs from in-camera conversion.

Honestly, I would not like to see Photoshop as a plug-in for Camera Raw.
In particular IF I decide to buy a Powershot S3 IS.


Quote
... and that somehow, magically, converting to 16 bit for edits gives you something. It gives you a "little bit" of enhanced precision only in the color/tone adjustments but nowhere near "high bit precision". About all you can say is it's less bad than pure 8 bit editing...but not by much.

For example, let’s take a ready processed Raw file in ProPhoto RGB at 16 bit. Provided that there are no out-of-sRGB colors, do you see a difference on screen when you convert to sRGB and then change to 8 bit mode?

I guess not.  Because the 8-bit data are skillfully arranged through gamma-encoding. And, the sRGB gamut is small enough to avoid perceivable 8-bit quantization errors.

So why should an 8-bit JPEG released from a camera herald the end of days?  Some highlight details are probably missing due to the S-curve applied during in-camera processing (to accomplish an output-referred rendition), but the situation should improve when the shot is done with the camera set to Low Contrast.  Sure, there are further issues to address, like noise and a lack of sharpness…

But with, e.g., a first step of noise reduction in PS – whether done this way or that way, as long as it is executed at 16 bit – you have high-bit data again (data which no longer have an integer 8-bit equivalent) to work with…

Peter

--
PeterLange
« Reply #22 on: August 11, 2006, 02:37:21 PM »

Quote
8-bit gamma adjusted data can have a tremendous amount of DR.  Its highest value with 2.2 gamma is about 196,000x as high as its lowest value, as opposed to about 4000:1 for 12-bit RAW data.

Uh, would you mind disclosing the equations behind these insights?

Rest assured that some math would be no problem.

--
PeterLange
« Reply #23 on: August 11, 2006, 02:51:01 PM »

Quote
If you take my 16-bit sample TIFF, convert it to 8-bit, and then duplicate the action steps via adjustment layers as you suggest, the resulting image is going to have heavily posterized shadows, just like the JPEG posted on my web site showing the result after running the action in 8-bit mode. So your proposal accomplishes nothing.

OTOH, if you convert my sample TIFF to 8-bit or start out with the JPEG, convert to 16-bit, run a Neat Image pass, and then run the action or apply the equivalent stack of adjustment layers, you'll get a result that won't be as good as the one derived from RAW processed 100% 16-bit, but will be much better than anything processed in 8-bit mode, whether via actions or adjustment layers.

So your option uses Neat Image and mine does not.

And btw, where are the 16 bits in my option?

Nice comparison.

--
John Sheehy
« Reply #24 on: August 11, 2006, 07:45:36 PM »

Quote
Uh, would you mind disclosing the equations behind these insights?

Rest assured that some math would be no problem.
--

255^2.2 = 196964.7

1^2.2 = 1
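John's figures can be checked with a quick sketch (pure Python; the convention is mine but follows his calculation: decode each code value with a 2.2 power law and take the ratio of the top and bottom non-zero codes):

```python
# Dynamic range conveyed by the extreme non-zero code values:
# decode with the given gamma and take the ratio of top to bottom.

def decoded_ratio(levels, gamma):
    """Ratio of linear luminances for the highest vs. lowest non-zero code."""
    return (levels ** gamma) / (1 ** gamma)

print(decoded_ratio(255, 2.2))   # ~196964.7 for 8-bit gamma 2.2
print(decoded_ratio(4095, 1.0))  # 4095 for 12-bit linear
```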
« Last Edit: August 11, 2006, 07:46:44 PM by John Sheehy »
PeterLange
« Reply #25 on: August 12, 2006, 04:19:38 AM »

Quote
255^2.2 = 196964.7

1^2.2 = 1
Uhh......

Gamma encoding uses normalized data, always divided by (2^bpc – 1), with bpc = bits per channel. Also, the exponent is the inverse of what is commonly called gamma:

255 x (255_linear / 255)^(1/2.2) = 255_encoded

255 x (128_linear / 255) ^(1/2.2) = 186_encoded

255 x (56_linear / 255) ^(1/2.2) = 128_encoded

255 x (0_linear / 255)^(1/2.2) = 0_encoded

You see that the lowest value of zero and the highest value of 255 don’t change when you go from a linear to a gamma encoded state. Just the distribution of data in-between changes significantly.
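These encode equations can be sketched directly (a minimal pure-Python version of the power-law formula above; rounding to the nearest integer code is my assumption):

```python
def gamma_encode(v_linear, gamma=2.2, levels=255):
    """Simple power-law gamma encoding of an integer code value (0..levels)."""
    return round(levels * (v_linear / levels) ** (1 / gamma))

# Reproduce the example values: endpoints are fixed, midtones shift upward.
for v in (255, 128, 56, 0):
    print(v, "->", gamma_encode(v))   # 255->255, 128->186, 56->128, 0->0
```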

IF you follow the equations as suggested by Norman Koren, you will find that a 12-bit A-to-D converter can adequately describe a potential dynamic (input) range of about 9 f-stops, which means a contrast ratio of 2^9:1 = 512:1 (not talking about noise here).

The same is still approximately correct for an 8-bit file (gamma-encoded) due to the trick explained above: gamma-encoding on a 12-bit basis first, reduction to 8 bit then. This leads to a kind of ideal distribution of levels per f-stop; i.e. there are 69 levels in zone 1, 50 in zone 2, and so on.  In the deepest shadows there's nearly a 1:1 maintenance of levels (see my post above, or again Norman Koren's table).
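The levels-per-zone distribution can be reproduced with a short sketch (counting how many 8-bit gamma-2.2 codes fall within each f-stop below clipping; zone 1 = brightest f-stop, following Koren's table):

```python
def encode(lin_frac, gamma=2.2):
    """Map a linear fraction of full scale to the 8-bit gamma-encoded scale."""
    return 255 * lin_frac ** (1 / gamma)

# Zone n spans the linear range [2^-n, 2^-(n-1)] of full scale.
levels_per_zone = [round(encode(2.0 ** -(n - 1)) - encode(2.0 ** -n))
                   for n in range(1, 10)]
print(levels_per_zone)  # dwindles toward the shadows, starting around 69, 50, ...
```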

Possible concerns about missing levels fall apart considering that the output dynamic range is smaller anyway; e.g. 85 cd/m2 : 0.34 cd/m2 = 250 : 1.  And that’s the real problem: DR compression from e.g. a luminous landscape of DR 10000 : 1 to an output of e.g. 250:1.

Recommended reading (authored by Andrew Rodney et al.).

In practice this means that the representation on screen is much darker than reality.  Again, this has nothing to do with ‘gamma’ or bit precision. It’s a physical thing: the white luminance of a monitor is easily 100x lower than a reflective white on a sunny day.

That’s why all Raw conversion software (also in-camera) likes to apply a brightening S-curve. Note, the emphasis lies on ‘brightening’, in order to achieve a pleasing output-referred rendition. The S-shape can be seen as a minimum-damage strategy.  However, if the upper shoulder compresses the highlights too much (loss of perceivable distances), it’s time to outsource this curve to Photoshop in order to add a contrast mask (or, to consider HDR from multiple exposures).


As a personal note:  There are really good reasons to shoot Raw. I would list chromatic aberration correction among the first.  Also, white balance is easier to handle on the basis of native linear data.  Hey, I'm mostly shooting Raw. But wrong reasoning and Raw arrogance won't be accepted.

Peter

--
Jonathan Wienke
« Reply #26 on: August 12, 2006, 08:03:55 AM »

Quote
Here’s an inspiring article:
http://www.robgalbraith.com/bins/multi_pag...cid=7-6468-7844
‘All of Majoli's pictures are captured in JPEG format…’

Why not just say that Photoshop is an excellent tool for making the best of a non-ideal situation like JPEGs from in-camera conversion.

Honestly, I would not like to see Photoshop as a plug-in for Camera Raw.
In particular IF I decide to buy a Powershot S3 IS.

I've never said that one can't get reasonably good results from editing camera JPEGs in Photoshop. They won't be as good from a technical perspective as 100% 16-bit edited RAWs (which is why most technically astute photographers shoot RAW), but that doesn't mean they're worthless, either. However, the best way to process JPEGs well is not to do the major edits in 8-bit mode, even with adjustment layers as was originally proposed. You'll get a lot closer to the technical quality of 100% 16-bit edited RAWs if you convert your 8-bit image to 16-bit as soon as you open it, then edit.

Quote
For example, let’s take a ready processed Raw file in ProPhoto RGB at 16 bit. Provided that there are no out-of-sRGB colors, do you see a difference on screen when you convert to sRGB and then change to 8 bit mode?

I guess not.  Because the 8-bit data are skillfully arranged through gamma-encoding. And, the sRGB gamut is small enough to avoid perceivable 8-bit quantization errors.

No, because all of the edits have been done while in 16-bit mode, and the only quantization errors left in the data are the +/- 0.5 level rounding errors inherent to the conversion from 16-bit to 8-bit anyway.

The gamut of sRGB is just small enough that 8 bits per color channel is enough to avoid visible posterization, but when using larger color spaces like ProPhoto, 16 bits are needed to avoid visible posterization and banding. Since many common subjects contain colors well outside sRGB (fall foliage, car shows, rock formations, concerts and theatrical events with colored stage lighting, flowers, fireworks, sunsets, etc), limiting yourself to 8-bit sRGB is a good way to cripple yourself as a photographer. Why bother, when 16-bit mode is just a menu click away?

Quote
So why should an 8-bit JPEG released from a camera herald the end of days?  Some highlight details are probably missing due to the S-curve applied during in-camera processing (to accomplish an output-referred rendition), but the situation should improve when the shot is done with the camera set to Low Contrast.  Sure, there are further issues to address, like noise and a lack of sharpness…

But with, e.g., a first step of noise reduction in PS – whether done this way or that way, as long as it is executed at 16 bit – you have high-bit data again (data which no longer have an integer 8-bit equivalent) to work with…

This is why I shoot RAW. Yes, you can reduce the amount of highlight detail thrown away by the camera by setting the contrast as low as possible, and minimize sharpening artifacts by setting in-camera sharpening as low as possible. You can even reduce the amount of edit-induced posterization and banding by immediately switching to 16-bit mode and running noise reduction before performing any edits. But all of these are second-best compromises. Even with contrast set to minimum, the camera will still throw away highlights you might prefer to keep. And 16-bit interpolations of 8-bit data can never be as good as the original 16-bit data.
« Last Edit: August 12, 2006, 08:05:37 AM by Jonathan Wienke »

John Sheehy
« Reply #27 on: August 12, 2006, 09:00:44 AM »

Quote
Even with contrast set to minimum, the camera will still throw away highlights you might prefer to keep.

With the 20D I've determined this to be about 0.3 stops green, 0.7 stops blue, and 1.2 stops red with daylight WB and -2 contrast.  Even what is retained just below the clipping point, however, is extremely posterized in terms of its real-world luminance, so you really want to use this shoulder area for catching highlights to be viewed as-is as a JPEG; not as a medium for extreme ETTR.
PeterLange
« Reply #28 on: August 12, 2006, 10:31:48 PM »

Quote
The gamut of sRGB is just small enough that 8 bits per color channel is enough to avoid visible posterization, but when using larger color spaces like ProPhoto, 16 bits are needed to avoid visible posterization and banding.
Yep.


Quote
Since many common subjects contain colors well outside sRGB (fall foliage, car shows, rock formations, concerts and theatrical events with colored stage lighting, flowers, fireworks, sunsets, etc), limiting yourself to 8-bit sRGB is a good way to cripple yourself as a photographer.
"Bear in mind that the in-camera conversions almost certainly don't use straight matrix profiles, rather they all use highly proprietary gamut compression techniques that the camera vendors definitely consider part of their crown jewels."

Quoted from here (post #10).

And that’s not the only trick as far as I can tell.  In order to reach a pleasing rendition, colors are moved in 3D on a per-hue basis, at least regarding saturation + brightness (aside from the tone curve).  Interesting chapter, and I would love to know if, and what kind of, Color Appearance Models stand behind it…

Sure, sRGB is a tiny space. No need to enter into gamut comparisons, etc.


Peter

--
« Last Edit: August 12, 2006, 10:43:29 PM by PeterLange »
PeterLange
« Reply #29 on: August 12, 2006, 10:40:07 PM »

I’ve just tortured a Granger rainbow a little bit (starting w/8-bit, sRGB assigned) by adding two nice adjustment layers:

Layer 1)  A duplicated layer in Color blend mode with a Gaussian blur of 4 pixels applied (though there’s probably less need for color noise reduction on a Granger rainbow).
Layer 2)  A Levels adjustment layer w/Shadows 20, Mids 5.0, Highlights 220 (OK, that’s strong).

The image at 100% magnification shows dark blotches and a kind of posterization along the black border at the bottom.  However, these irregularities disappear instantly when changing to 16-bit mode... !!

It’s even possible to go back and forth on the History palette to repeat this effect. The updated histogram is also worth a look each time.

In a reference test, however, there was no additional benefit from changing the Granger rainbow to 16-bit mode immediately, before creating both layers.

For me, this short trip to 16-bit is still a nice trick - in the sense of an ‘insurance’ with regard to real-world images.

That said, there are also limitations.  For example, any functional layer which is created via New Layer + Merge Visible.  Or the use of third-party software like e.g. NoiseNinja or PK Sharpener.  In such cases it’s certainly best to change from 8 to 16-bit immediately, from the beginning.
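The quantization effect behind this observation can be simulated without Photoshop. Here is a pure-Python sketch (using my own darken-then-brighten curve pair, not the exact Levels settings above): chaining two opposing tone curves at an 8-bit working depth collapses the shadows into a handful of levels, while a 16-bit working depth preserves them.

```python
def curve(v, gamma, depth):
    """Apply a power-law tone curve to a code value at the given bit depth."""
    top = 2 ** depth - 1
    return round((v / top) ** gamma * top)

def round_trip(v8, depth):
    """Darken then re-brighten an 8-bit value, working at `depth` bits."""
    top = 2 ** depth - 1
    v = round(v8 / 255 * top)        # promote to working depth
    v = curve(v, 2.5, depth)         # strong darkening curve
    v = curve(v, 1 / 2.5, depth)     # inverse brightening curve
    return round(v / top * 255)      # back to 8-bit output

shadows = range(64)                  # deep-shadow portion of a 0..255 ramp
out8 = {round_trip(v, 8) for v in shadows}
out16 = {round_trip(v, 16) for v in shadows}
print(len(out8), len(out16))         # far fewer distinct levels survive at 8 bit
```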

Peter

--
John Sheehy
« Reply #30 on: August 13, 2006, 07:48:35 AM »

Quote
Uhh......

Gamma encoding uses normalized data, always divided by (2^bpc – 1), with bpc = bits per channel. Also, the exponent is the inverse of what is commonly called gamma:

255 x (255_linear / 255)^(1/2.2) = 255_encoded

255 x (128_linear / 255) ^(1/2.2) = 186_encoded

255 x (56_linear / 255) ^(1/2.2) = 128_encoded

255 x (0_linear / 255)^(1/2.2) = 0_encoded

You see that the lowest value of zero and the highest value of 255 don’t change when you go from a linear to a gamma encoded state. Just the distribution of data in-between changes significantly.

Nope.  Zero is irrelevant.  One is the lowest meaningful "step", not zero.  Zero is The Joker, The Fool, and has no relevance to DR.

Now, you may not want to count exactly from 1, since there is nothing to resolve below it. But the fact of the matter is, if you translate 12-bit linear data to 8-bit 2.2-gamma-adjusted data, the linear values are very sparse relative to the gamma-adjusted ones in the deepest shadows. So even if you choose the third or fourth lowest recorded level as the center of your usable lowest value, it is still quite a bit denser on the 8-bit gamma-adjusted scale, regardless of whether you use 4095 or 2048 (standard) as the whitepoint for the 12-bit linear.  Here is the correspondence of the first 26 2.2-gamma-adjusted values (grey column is 8-bit, white column is scaled to 4095 linear values):
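The correspondence described here is easy to regenerate (a sketch under the stated convention: 8-bit codes decoded with a 2.2 power law and scaled to a 4095 linear white point):

```python
def to_linear_4095(code8, gamma=2.2):
    """Decode an 8-bit gamma-2.2 code to its 12-bit-linear equivalent (white = 4095)."""
    return (code8 / 255) ** gamma * 4095

for code in range(1, 11):
    print(code, round(to_linear_4095(code), 2))
# In the deep shadows, many 8-bit gamma codes crowd into the span
# covered by just a few 12-bit linear steps.
```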


How can my saying that 8-bit gamma-adjusted data can convey more DR than RAW can, be interpreted as "RAW arrogance"?
« Last Edit: August 13, 2006, 07:50:30 AM by John Sheehy »
PeterLange
« Reply #31 on: August 13, 2006, 03:35:36 PM »

Quote
... if you translate 12-bit linear data to 8-bit 2.2-gamma-adjusted data, the linear values are very sparse relative to the gamma-adjusted ones, in the deepest shadows ... (grey column is 8-bit, white column is scaled to 4095 linear values):
I’m glad to see that the math (table) is now correct and corresponds to mine; according to:
RGB_8bit_gamma-encoded = 255 x (RGB_12bit-linear / 4095)^(1/2.2)

The purpose was to illustrate that an 8-bit gamma-encoded scale can even hold the deepest shadows of a 12-bit linear scale. As a tradeoff, fewer levels are dedicated to the highlights, where it’s visually less important. It’s simply a skillful distribution, making the best of limited resources – under consideration of human vision (though gamma-encoding was never meant to comply with human vision; it was invented to compensate for the non-linear input/output relationship of CRTs).

If you’re really picky, you would now have to consider that sRGB does not represent a regular 2.2 gamma.  The TRC tags in fact hold a curve which starts significantly less steep, then goes up to a ‘simplified gamma’ of 2.2 (the latter is what the custom profile function in PS indicates).  The equation in detail can be found somewhere at Bruce Lindbloom’s and/or Gernot Hoffmann’s website.

Anyway, this will almost solve the problem which you now see with the ‘sparse’ linear scale.
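The exact sRGB transfer curve alluded to here is public (IEC 61966-2-1): a short linear segment near black joined to a 2.4-exponent power section, which together approximate an overall gamma of about 2.2. A minimal sketch comparing it to a pure power law:

```python
def srgb_encode(lin):
    """Linear light (0..1) -> sRGB-encoded value (0..1), per the sRGB spec."""
    if lin <= 0.0031308:
        return 12.92 * lin
    return 1.055 * lin ** (1 / 2.4) - 0.055

def simple_gamma_encode(lin, gamma=2.2):
    return lin ** (1 / gamma)

# Near black the linear segment rises far less steeply than a pure
# 1/2.2 power law -- exactly the softened start described above.
print(srgb_encode(0.001), simple_gamma_encode(0.001))
```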


Quote
...  You can easily get at least 2, maybe 3 stops more DR from the 8-bit, if the source of data can provide it.
Breaking up the DR into zones is not the most useful way of looking at things, IMO.  It is an easy way for people who are not good with math to repeat what they read, though.

Nope.  It’s just the other way round.

A contrast ratio is quite a poor tool to describe DR, because minor changes of the denominator drastically change the ratio. For example, let’s take two monitors with different black points of 0.25 and 0.5 cd/m2.  In one case you get 85 cd/m2 : 0.25 cd/m2 = 340:1; in the other case the result is 170:1.  The same image on both monitors won’t look as different as the ratios seem to suggest.  White luminance is much more important than such fluctuations around the black point.

Coming back to the input DR of cameras, Norman Koren’s procedure and table are completely valid: calculate through the zones, do it right, and decide if you have enough levels per zone for a meaningful description with regard to perception. The Weber-Fechner law is a reasonable starting point, though it’s not valid in the shadows (fewer levels are needed there).

A 12 bit A-to-D converter is good for about 9 zones.
8-bit gamma-encoded data can barely hold this.
However, output DR with monitors & prints is the limiting factor anyway.


Quote
How can my saying that 8-bit gamma-adjusted data can convey more DR than RAW can, be interpreted as "RAW arrogance"?
Peter wrote:  “Face it, camera engineers know what they do (mostly).”

John wrote:  “Yes, they know that the camera will still sell if its JPEGs are not good representations of the sensor capture.”

Anyway, the whole question is not whether Raw is better.  We’ve gone through this: it is, in terms of ultimate image quality.  The challenge is to understand why JPEG captures can be, to some extent, surprisingly reasonable – less bad than perhaps expected.  Or: what are the core strengths of Raw, and which JPEG issues can easily be overcome with a little bit of Photoshop wizardry…

And in this context, it might not be wrong to risk a look at different opinions:

- Tom Niemann on his rich website (‘So for my purposes I shoot sRGB JPEGs’).
- Ken Rockwell’s polarizing point of view.
- Noel Carboni, whom former RG-members will certainly remember.
- Ron Bigelow, after writing a three-part article on his understanding of the Raw advantage, finally comes out with an example which in my eyes could rather be suited to show the opposite (see the purple flower).
- Canon seems to be extremely proud of their Digic II processor and picture styles.
- And finally – may he forgive being listed here – Michael Reichmann liked to review the Powershot S3 IS even though it does not offer Raw recording.


Peter

--
« Last Edit: August 13, 2006, 03:40:30 PM by PeterLange »