Author Topic: Is the 16bit advantage a bit of a myth?  (Read 16681 times)
digitaldog
Sr. Member
Posts: 9225
« Reply #20 on: February 09, 2008, 03:46:12 PM »

Quote
Jonathon,

Can you post the raw file for the image you used in your example? I think that would be of interest to people wanting to investigate this further.

You mean you don't recall the Raws I mentioned on Dan Margulis' silly list years ago, Peter? I was pretty sure you subscribed to his list (if no longer, I can't at all blame you).

They are on my iDisk. Bruce Lindbloom has a link on his famous page that dismisses Dan's 16-bit challenge nonsense (http://www.brucelindbloom.com/):

http://www.retouchpro.com/forums/input-output-workflow/4826-reconsidering-16-bit.html#post104205

If you read down, you'll get to hear all about Dan's lame reasons why the proof provided didn't suit him (moving the goal posts in mid-game again). He didn't buy the use of "Ultra Wide gamut working spaces" like ProPhoto RGB. He wrote:

Quote
In early September, Andrew Rodney posted his own "real-world" example of 8-bit vs. 16-bit editing. As soon as it appeared, it was dismissed both by me and by Lee Varis because it depended on an exotic RGB definition, the ultra-wide gamut ProPhoto RGB, where the perceived impact of tiny variations is much larger than in the RGB definitions used by almost everyone. Andrew has known for at least five years that I consider testing in such RGBs irrelevant--see "The Attempts to Obfuscate" below.

That's the pot calling the kettle black (Dan using the term Obfuscate in terms of the proof).

Anyway, you can read all his bullshit in the link above. The Raw illustrates data loss and image degradation in an 8-bit image that doesn't occur in the 16-bit document. It's in a folder called 16bit challenge. You can download the Raws and do all the edits as described, or just open a smaller TIFF of the two images processed as described.

My public iDisk:

thedigitaldog

Name (lower case): public
Password (lower case): public

To go there via a web browser, use this URL:

http://idisk.mac.com/thedigitaldog-Public

Getting back to some recent comments here, some have correctly said you may (MAY) introduce banding in 8-bit documents at some point, depending on the edits. And that's important, as we don't know WHEN or HOW we may take a perfectly good 8-bit document and push it over the edge with an edit that introduces banding on some output device that may not even be on the market yet. 16-bit ensures this will not happen. There's only ONE downside of high-bit files today: their size. Everything done in a Raw converter is happening in high bit. Just about every global tone and color correction on an existing rendered image (and many selective tools) works in high bit. It's an insurance policy that you can send the best 8-bit data to any device today and in the future.
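The "insurance" argument above is easy to sketch numerically. Here's a toy example (my own, not from the thread): apply a gamma adjustment and then its inverse to a full gradient, once at 8 bits and once at 16 bits, and count how many distinct tone levels survive. The rounding at 8 bits merges levels that never come back.

```python
# Darken a gradient with a gamma curve, then apply the inverse curve.
# In 8-bit each step rounds to 256 integer levels, so tones merge and
# are lost for good; in 16-bit the extra precision absorbs the rounding.

def apply_gamma(levels, gamma, depth):
    """Apply a gamma curve to integer pixel values at a given bit depth."""
    top = (1 << depth) - 1
    return [round(top * (v / top) ** gamma) for v in levels]

def unique_levels(levels):
    return len(set(levels))

ramp8 = list(range(256))                 # full 8-bit gradient
ramp16 = [v * 257 for v in range(256)]   # the same gradient in 16-bit

# Darken (gamma 2.2), then "undo" it (gamma 1/2.2)
edited8 = apply_gamma(apply_gamma(ramp8, 2.2, 8), 1 / 2.2, 8)
edited16 = apply_gamma(apply_gamma(ramp16, 2.2, 16), 1 / 2.2, 16)

print("8-bit levels left: ", unique_levels(edited8))   # well under 256
print("16-bit levels left:", unique_levels(edited16))  # essentially all 256
```

The 8-bit version ends up with far fewer distinct levels, which is exactly the kind of gap-toothed tonal range that can band on a future output device.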
Andrew Rodney
Author “Color Management for Photographers”
http://digitaldog.net/
Jonathan Wienke
Sr. Member
Posts: 5759
« Reply #21 on: February 09, 2008, 03:57:31 PM »

Quote
Jonathan -
For the purposes of these tests I have done no processing in the RAW converter

That's impossible. Everything a RAW converter does is some kind of processing, unless you regularly work with undemosaiced linear images. You can't avoid 16-bit processing when working with RAWs. Without seeing examples, I have no way of knowing what non-optimal adjustments you made during conversion that had to be "fixed" in 8-bit mode for your tests. Check out Andrew's RAWs for test subject material.

Quote
There may well be a difference when dealing with in-camera JPGs, but that's not what I'm concerned about here - I work entirely in RAW, so I wanted to test starting from that point.

And the reason you shoot RAW instead of JPEG, whether you think consciously in such terms or not, has a great deal to do with the >8-bit advantage that RAW workflow offers.
pcox
Full Member
Posts: 154
« Reply #22 on: February 09, 2008, 04:38:58 PM »

Firstly, apologies for the edits. I had a brain fart and worked on the wrong source action (one of my own tests).
Secondly, let's try to keep the discussion civil - no need for the testiness and personal nature of some of the recent comments.

I've gone back and run the right action, and here are the results:

8 bit:
[image attachment]

16 bit:
[image attachment]
I honestly can't see the difference here. This was done by opening the .CRW, accepting the edits in the included .xmp and opening in Photoshop in ProPhoto, 16 bit.

I then ran the actions provided, one to create and process the 8 bit copy and the other on the 16 bit image.

For fun, I also opened the CRW as ProPhoto, 8 bit and ran the basic adjustments on it - no difference (not that I was expecting any).

Jonathan -
You're picking nits - the meaning of my saying 'I have done no processing in the RAW converter' was that I had not moved any sliders, merely accepted the defaults.

The other tests were intended to compress the histogram as much as I could in RAW, and then expand it significantly in Photoshop.

I'm also well aware of the higher-than-8-bit advantage of RAW capture - but capture depth isn't the issue. It's processing in 8 vs. 16 bit that's the subject of this debate.

Cheers,
Peter
« Last Edit: February 09, 2008, 04:56:05 PM by pcox »

Peter Cox Photography
Photography Workshops in Ireland
Fine Art Landscape Photographs
www.petercox.ie
digitaldog
Sr. Member
Posts: 9225
« Reply #23 on: February 09, 2008, 05:12:46 PM »

Quote
I honestly can't see the difference here. This was done by opening the .CRW, accepting the edits in the included .xmp and opening in Photoshop in ProPhoto, 16 bit.

Well I sure can. Look in the center area of the crop of the two, the opening of the bird feeder. Look at the green bottom of the feeder below that; one's much smoother than the other. Or process both and subtract them. It may not be huge, but it's visually there, and one can only wonder what further editing on the 8-bit image would produce.
bernie west
Full Member
Posts: 132
« Reply #24 on: February 09, 2008, 05:23:14 PM »

Qualifier: I work in a large university engineering library which contains 100's of digital signal and digital image processing texts, which I have a habit of browsing through whenever I shelve one.

From my reading I've discovered that humans can only differentiate about 6-bits of grayscale tones. Fiddling around in Photoshop using the posterize command, I can just start to see posterizing around 7 bits (although I'm not sure how accurate using this command is for the purposes of this argument). What happens when you throw colour into the mix? I'm not sure. But from my reading I recall seeing that normal humans (trichromats) can differentiate about 1-3 million or so colours. Now 7-bit RGB can represent about 2 million colours. So I infer from all this that anything much more than 7 bits for VIEWING is most probably wasted.

Now obviously the question is about editing, not viewing. But how much real-world editing does it take to reduce an 8-bit image to a 7-bit image (i.e. half the number of levels)? Some examples have been shown of how this can happen, and obviously 16-bit, like Andrew said, will guarantee no visible degradation occurs, but in the world of only small edits surely 8 bits is enough.

It's worth remembering that in the end, no matter what bit depth your display or printer is capable of, normal humans are only going to be able to differentiate 6- or 7-bits of data anyway (hopefully my assessment of colour depth was accurate; if not, I'm more than happy to be corrected, as I'm certainly not a colour scientist).
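The arithmetic behind the "7-bit RGB is about 2 million colours" estimate above is easy to check directly (a trivial sketch, not from the post):

```python
# Tone levels per channel and total RGB colors at a given bit depth.
# 7 bits per channel already exceeds the ~1-3 million colors a typical
# trichromat is estimated to be able to distinguish.

def levels(bits):
    return 2 ** bits

def rgb_colors(bits_per_channel):
    return levels(bits_per_channel) ** 3

for b in (6, 7, 8, 16):
    print(f"{b}-bit: {levels(b)} levels/channel, "
          f"{rgb_colors(b):,} RGB colors")
```

7 bits gives 128 levels per channel and 128³ = 2,097,152 colors, which is where the "about 2 million" figure comes from.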
digitaldog
Sr. Member
Posts: 9225
« Reply #25 on: February 09, 2008, 05:28:26 PM »

Quote
It's worth remembering that in the end, no matter what bit depth your display or printer is capable of, normal humans are only going to be able to differentiate 6- or 7-bits of data anyway (hopefully my assessment of colour depth was accurate; if not, I'm more than happy to be corrected, as I'm certainly not a colour scientist).

Indeed, and the point is, we want to send the best 8 bits to the printer. If we start with only 8 bits, that's not necessarily going to happen. Or, to put another spin on this, we have no guarantee that we'll send 8 good bits to this device. With high-bit data, it's a non-issue.
DarkPenguin
Guest
« Reply #26 on: February 09, 2008, 05:46:46 PM »

I smell the dullest episode of Myth Busters ever.
bernie west
Full Member
Posts: 132
« Reply #27 on: February 09, 2008, 06:05:19 PM »

Quote
Indeed, and the point is, we want to send the best 8 bits to the printer. If we start with only 8 bits, that's not necessarily going to happen. Or, to put another spin on this, we have no guarantee that we'll send 8 good bits to this device. With high-bit data, it's a non-issue.

But I guess the question is, what does it take to reduce 8 bits to 7 or 6 bits (one quarter the levels)?  Obviously it can be done, but how likely is it in the normal way of things?  Actually, on that Lab thing, Jonathan mentioned that 87% of colours can be lost in a Lab conversion.  13% of 16.7 million equals about 2.2 million colours, more than enough for human vision.  Of course this depends on which colours are lost.  If they are equally spaced (probably not the right term) then we can probably wear the 87% loss.  However, if they are grouped in some way then this could become a problem.  Just some food for thought.
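For anyone who wants to poke at that Lab figure themselves, here's a rough sketch (my own code, not Jonathan's test). It uses the standard CIE sRGB and Lab formulas with a D65 white point; Photoshop's Lab mode actually uses D50, so treat the exact counts as illustrative only. The idea: round-trip one full-resolution slice of the 8-bit RGB cube through 8-bit Lab and count how many distinct colors survive.

```python
# RGB -> 8-bit Lab -> RGB round trip, counting surviving colors.
# Standard CIE formulas, D65 white point (an assumption; Photoshop uses D50).

def srgb_to_linear(c):
    c /= 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    c = min(max(c, 0.0), 1.0)
    s = 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055
    return round(s * 255)

WHITE = (0.95047, 1.0, 1.08883)  # D65 reference white

def f(t):
    return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29

def finv(t):
    return t ** 3 if t > 6 / 29 else 3 * (6 / 29) ** 2 * (t - 4 / 29)

def rgb_to_lab8(r, g, b):
    rl, gl, bl = map(srgb_to_linear, (r, g, b))
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    fx, fy, fz = f(x / WHITE[0]), f(y / WHITE[1]), f(z / WHITE[2])
    L, a, bb = 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)
    # Quantize the way an 8-bit Lab file does: L 0..100 -> 0..255, a/b offset by 128
    return round(L * 255 / 100), round(a) + 128, round(bb) + 128

def lab8_to_rgb(L8, a8, b8):
    L, a, bb = L8 * 100 / 255, a8 - 128, b8 - 128
    fy = (L + 16) / 116
    fx, fz = fy + a / 500, fy - bb / 200
    x, y, z = finv(fx) * WHITE[0], finv(fy) * WHITE[1], finv(fz) * WHITE[2]
    rl = 3.2406 * x - 1.5372 * y - 0.4986 * z
    gl = -0.9689 * x + 1.8758 * y + 0.0415 * z
    bl = 0.0557 * x - 0.2040 * y + 1.0570 * z
    return tuple(linear_to_srgb(c) for c in (rl, gl, bl))

# One full-resolution slice of the RGB cube (blue fixed at 128): 65,536 colors
before = {(r, g, 128) for r in range(256) for g in range(256)}
after = {lab8_to_rgb(*rgb_to_lab8(*c)) for c in before}
print(f"{len(before)} colors in, {len(after)} colors out")
```

Fewer colors come out than go in, because in parts of the slice (the shadows especially) the 8-bit Lab steps are coarser than one RGB step, so neighbouring RGB values collapse onto the same Lab code. Whether the loss hits 87% depends on the exact quantization, which is why this is only a sketch.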

By the way, in case I am inadvertently placing myself in the Margulis camp, I actually do edit in 16-bit as much as possible, mainly for the reason Andrew said: it guarantees no visible degradation.
« Last Edit: February 09, 2008, 06:08:01 PM by bernie west »
pcox
Full Member
Posts: 154
« Reply #28 on: February 09, 2008, 06:36:44 PM »

Andrew -
I grant you there is a very slight degradation of that small part of the image for a pretty hefty gamma and sharpening adjustment, and a modest saturation boost. And that gamma change would have been better done in RAW anyway. Personally, I wouldn't use an image that needed that much processing, as neither the 8- nor 16-bit versions have resulted in a good quality image.

I think that there's a lot of hyperbole about this whole issue, and many people take the stance that you _must_ use 16 bits or you lose a whole lot of quality.
We've seen here that this is just not the case.

You should only require 16 bits if you are interested in the absolute pinnacle of quality in the most demanding of applications, if you need to make radical adjustments to an image in order to attempt to rescue it from the bin, or if you don't mind using the space and just want to cover your bases.

Now as I said - personally I'm going to keep using 16 bit in my entire workflow (and yes, Jonathan, that includes using 16-bit mode for my Z3100). This is because storage is cheap, and while the advantages are slim, they are there.

My approach to my students will be to tell them about editing in 16 bit, but state the facts - it's only necessary under very narrow circumstances, and if they can't afford the space, 8 bits is just fine.

Thanks all for your help in figuring this out.

Cheers,
Peter
Panopeeper
Sr. Member
Posts: 1805
« Reply #29 on: February 09, 2008, 06:47:45 PM »

Quote
I honestly can't see the difference here

Let's not argue about whether the *visible* differences are important enough. There is a matter of principle here: a counter-example does not prove that there cannot be other, positive examples by the millions.

If you ignore the existing differences, then you have proven only that this image and these adjustments would not justify 16-bit processing.

However, when I start editing an image, I do not know whether my subsequent adjustments will justify 16 bits or not. So I start out with 16 bits and archive the almost completely processed images in 16-bit format.
« Last Edit: February 09, 2008, 06:48:40 PM by Panopeeper »

Gabor
digitaldog
Sr. Member
Posts: 9225
« Reply #30 on: February 09, 2008, 07:01:52 PM »

Quote
And that gamma change would have been better done in RAW anyway.

I totally agree. It reinforces the idea that all the heavy lifting should be done in high bit, linear encoded Raw processing (despite the fellow who dismisses high bit editing and Raw processing).

The point wasn't to suggest otherwise, the point was to dismiss a silly 16-bit challenge that has been going on far too long. And the shocking result was the challenger saying the exercise was faulty due to edits made in an ultra wide gamut space.

Quote
Personally, I wouldn't use an image that needed that much processing as neither the 8 nor 16 bit versions have resulted in a good quality image.

And neither would I. This was, if memory serves, the default rendering of the converter. But the challenger this image was addressed to suggests we SHOULD set the processor to such a default mode, then "fix" the rendered pixels in Photoshop (in 8-bit, no less). He also said "anyone who knows what they are doing can fix a JPEG faster and better in Photoshop than a Raw in Camera Raw". Nonsense, I say, and when I challenged him to prove it, he dismissed this.

Read the original URL from Bruce Lindbloom about this 16-bit challenge; it sums up the nonsense that Dan has proposed from day one. A challenge that changes whenever he sees fit. Once again, the images I uploaded were simply to address this challenge, not to suggest it was best practice. We should render the best possible quality from our Raw converters.

Quote
I think that there's a lot of hyperbole about this whole issue, and many people take the stance that you _must_ use 16 bits or you lose a whole lot of quality.

The potential to lose quality is there. We don't know when, we don't know why one edit may produce the damage. As I've said from day one, high-bit editing is cheap insurance. The other side says "I challenge you to prove there's a benefit", not "I will prove there is no benefit", which is quite a different challenge. Worse, when someone does attempt to prove the point, either using simple math or an image, it's dismissed. The math is undeniable. The printed results are not always so clear cut.

Quote
Only if you are interested in the absolute pinnacle of quality in the most demanding of applications, or if you need to make radical adjustments to an image in order to attempt to rescue it from the bin should you require the use of 16 bits - or if you don't mind using the space and just want to cover your bases.

If your goal is to produce a catalog of 1000 images of widgets on a white background, 3x3 on a 150-linescreen CMYK page, working in high bit probably isn't a good idea. I understand the need to get the job done quickly, based on the final reproduction requirements. If the work is for your portfolio, or a very important image you may not know how you'll ultimately reproduce, then high-bit editing is simply good insurance with little penalty. That's not the mindset of the challenger of the 16-bit workflow. He states it's simply not necessary. At least he did until some of us attempted to prove him otherwise, and now he has modified his stance somewhat to say "sometimes" and points to those who use unnecessary (his words) ultra wide gamut, "dangerous" working spaces like ProPhoto RGB.

Quote
My approach to my students will be to tell them about editing in 16 bit, but state the facts - it's only necessary under very narrow circumstances, and if they can't afford the space that 8 bits is just fine.

I'd agree with you on the first part, that it's probably necessary under some narrow circumstances. I don't agree with "just fine", because what may be fine for you is unacceptable for me. And I don't know when just "fine" becomes not so fine. So, it's far easier to simply keep the data in its original bit depth from the capture device and not worry about when "fine" becomes unacceptable.
« Last Edit: February 09, 2008, 07:03:07 PM by digitaldog »

Panopeeper
Sr. Member
Posts: 1805
« Reply #31 on: February 09, 2008, 08:15:50 PM »

Quote
Qualifier: I work in a large university engineering library which contains 100's of digital signal and digital image processing texts [...] From my reading I've discovered that humans can only differentiate about 6-bits of grayscale tones

1. Perhaps you should contact some of the authors of those papers and ask them how many squares they can distinguish in the attached image.

2. If one can distinguish between two adjacent shades, then it is called posterization. A "continuous color" image has to consist of shades which cannot be distinguished from each other. That way the transitions do not appear as posterization.
« Last Edit: February 09, 2008, 08:16:22 PM by Panopeeper »

Panopeeper
Sr. Member
Posts: 1805
« Reply #32 on: February 09, 2008, 08:19:27 PM »

Somehow I did not manage to attach the image. It can be downloaded from

http://www.panopeeper.com/Demo/100DifferentGrayshades.tif
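A test strip like that is easy to regenerate yourself. Here's a sketch (not Gabor's actual code): it writes N adjacent gray patches as a plain-text PGM file, a format chosen only because it needs no imaging library; the file name is arbitrary. Photoshop and most viewers can open PGM.

```python
# Generate a strip of N adjacent gray patches as a plain (ASCII) PGM file,
# similar in spirit to the 100-shade test image linked above.

def write_gray_strip(path, shades=100, patch_w=8, height=64, maxval=255):
    step = maxval / (shades - 1)          # spread the shades over 0..maxval
    row = []
    for i in range(shades):
        row.extend([round(i * step)] * patch_w)   # one patch per shade
    with open(path, "w") as fh:
        fh.write(f"P2\n{len(row)} {height}\n{maxval}\n")  # PGM header
        for _ in range(height):
            fh.write(" ".join(map(str, row)) + "\n")

write_gray_strip("gray_strip.pgm", shades=100)
```

Vary `shades` (128 for 7 bits, 256 for 8) to test where the patch boundaries stop being visible on your own display.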
bernie west
Full Member
Posts: 132
« Reply #33 on: February 09, 2008, 08:47:33 PM »

Quote
1. Perhaps you should contact some of the authors of those papers and ask them, how many squares they can distinguish between in the attached image.

Not sure what your point is, as 100 shades of gray is between 6- and 7-bits, which is what I have been talking about.

A useful test would be to do 9-bits worth of gray shades and see if we can still distinguish between them.
Panopeeper
Sr. Member
Posts: 1805
« Reply #34 on: February 09, 2008, 09:30:07 PM »

Quote
Not sure what your point is, as 100 shades of gray is between 6- and 7-bits, which is what I have been talking about

1. You mentioned "about 6 bits". 100 is about 7 bits.

2. I tried to make it understandable, that the number of required shades is *much higher* than the number of distinguishable shades.

3. You are totally ignoring the main factor, namely the question of how large the differences between the shades are. If you are looking at a monitor with a contrast ratio of 3000:1 (or 10000:1, they are coming), you can distinguish many more shades than on a cheap laptop LCD with a contrast ratio of 200:1.

Quote
A useful test would be to do 9-bits worth of gray shades and see if we can still distinguish between them

Yes, on a high-end HDTV.
bernie west
Full Member
Posts: 132
« Reply #35 on: February 09, 2008, 09:40:49 PM »

Quote from: Panopeeper, Feb 10 2008, 01:30 PM
1. You mentioned "about 6 bits". 100 is about 7 bits.

No, 128 shades is 7 bit. Look at what I wrote. I have been talking about 6 and 7 bit.

Quote
2. I tried to make it understandable, that the number of required shades is *much higher* than the number of distinguishable shades.

Try writing that next time and I won't have to read your mind anymore. Not sure what you mean anyway. Why would you require more than what you can distinguish? The extra ones won't add any more information to the image.

Quote
3. You are totally ignoring the main factor, namely the question of how large the differences between the shades are. If you are looking at a monitor with a contrast ratio of 3000:1 (or 10000:1, they are coming), you can distinguish many more shades than on a cheap laptop LCD with a contrast ratio of 200:1.

I'm not ignoring anything. I'm just making discussion points. Whatever dialogue you've got going on in your head with yourself, I'm happy for you.

Quote
Yes, on a high-end HDTV.

Actually, in print would be better.
« Last Edit: February 09, 2008, 09:43:11 PM by bernie west »
Jonathan Wienke
Sr. Member
Posts: 5759
« Reply #36 on: February 09, 2008, 11:35:29 PM »

The real issue:

Starting with 8-bit data, you have more colors than the human eye can distinguish between. That's the reason 8-bit image formats are so common; they are good enough to avoid posterization due to the file format itself. But when editing in 8-bit format, quantization rears its ugly head. A single conversion from RGB to LAB can reduce an image to 13% of its original color count. Curves, levels and other adjustments have wildly variable effects ranging from negligible to drastic depending on the parameters of the adjustments.

If you edit in 16-bit from RAW, you are guaranteed to always have the best possible 8 bits to send to 8-bit devices, whether printers, monitors, or an 8-bit end-use file format like a web JPEG. If you edit in 8-bit mode, you are guaranteed NOT to have the best 8 bits available to the end user, as evidenced by toothcomb histograms. Whether that difference is distinguishable in a final print depends on the image content and the editing steps required to process it. But as the quality of monitors and printers continues to increase (like LCD panels going from 6-bit to 10-bit, and the increasing availability of 16-bit printing solutions), the differences will become more obvious. You may not see the difference now, but in 5 years it may be quite obvious, rather like buying a new set of speakers and hearing a guitar riff in the background of a favorite song you never noticed before.
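The "toothcomb" histogram Jonathan describes is simple to reproduce (a toy sketch of my own, not his code): stretch an 8-bit tonal range with a levels-style adjustment and count the histogram bins that end up empty.

```python
# Why an 8-bit levels stretch produces a "toothcomb" histogram:
# remapping input range 0..200 onto 0..255 cannot fill every output
# slot, because 201 input levels cannot cover 256 output levels.

def stretch_8bit(values, in_max):
    """Levels-style stretch: map 0..in_max onto the full 0..255 range."""
    return [round(v * 255 / in_max) for v in values]

flat = list(range(201))            # an image whose pixels use levels 0..200
stretched = stretch_8bit(flat, 200)

used = set(stretched)
gaps = [v for v in range(256) if v not in used]
print(f"{len(used)} levels used, {len(gaps)} empty histogram bins")
```

The 201 input levels land on only 201 of the 256 output slots, leaving 55 empty bins: the regular gaps that show up as comb teeth in Photoshop's histogram. Done in 16-bit, the same stretch has 65,536 slots to work with, so the best 256 levels are still there when the file is finally reduced to 8 bits.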
« Last Edit: February 09, 2008, 11:36:54 PM by Jonathan Wienke »

Ray
Sr. Member
Posts: 8939
« Reply #37 on: February 10, 2008, 01:27:21 AM »
Quote
Whether that difference is distinguishable in a final print depends on the image content and the editing steps required to process it. But as the quality of monitors and printers continues to increase (like LCD panels going from 6-bit to 10-bit, and the increasing availability of 16-bit printing solutions), the differences will become more obvious. You may not see the difference now, but in 5 years it may be quite obvious, rather like buying a new set of speakers and hearing a guitar riff in the background of a favorite song you never noticed before.

Jonathan,
Whilst all that is true, is it necessarily going to be an issue in 5 or perhaps 10 years' time, when perhaps not only will monitors be able to display the full range of ProPhoto RGB colors, but printers might also be able to take advantage of the full gamut of colors and hues in the ProPhoto color space in 16-bit mode?

If this scenario arises, I might prefer to go back to the original RAW files and reprocess them in 32 bit with the enhanced techniques that Adobe will have presumably provided by then using my own presumably enhanced skills.

There is something to be said for not wasting space and time creating a quality for the future which might be irrelevant at the present time. On the occasions that I took some of my slides and negatives to a professional lab for scanning, I was always asked the question, "What size print do you want to make?"

At first, the question seemed a little odd. Why should I want less than the highest quality scans? But I quickly caught on. For the professional, time is money. A builder doesn't build a house that is stronger than the building regulations require, but an amateur or owner builder might. Likewise, you don't spend the time and money scanning a 35mm slide at 8,000 dpi if all you want is an 8x10 print. However, if your purpose is to archive the film, then you do want the maximum quality.

16 bit processing does take more time and does involve more resources but it's not serving any archival purpose. The RAW file is the archive.
bernie west
Full Member
Posts: 132
« Reply #38 on: February 10, 2008, 02:13:52 AM »

Quote
But as the quality of monitors and printers continues to increase (like LCD panels going from 6-bit to 10-bit, and the increasing availability of 16-bit printing solutions), the differences will become more obvious.

No they won't, if it's true that human resolution is only 7-bit (or even 8-bit)! High bit depth is good for editing but will do nothing for viewing.
« Last Edit: February 10, 2008, 03:43:30 AM by bernie west »
NikosR
Sr. Member
Posts: 622
« Reply #39 on: February 10, 2008, 02:39:29 AM »

Quote
Well I sure can. Look in the center area of the crop of the two, the opening of the bird feeder. Look at the green bottom of the feeder below that; one's much smoother than the other. Or process both and subtract them. It may not be huge, but it's visually there, and one can only wonder what further editing on the 8-bit image would produce.

In any such demonstration we are ignoring a variable factor. Jpeg (used for display) is a compressed format. I strongly suspect that the differences you notice result more from Jpeg compression behaviour (of course itself based on differences in the input - 8bit vs 16bit) than from any posterisation effects directly attributable to the 8bit vs 16bit difference.

I'm always suspicious of such comparisons when the demonstration is based on a jpeg final product. I would rather compare TIFF images (rather hard to do on the web).

Nikos