Author Topic: Camera's histogram reliable to the RAW data  (Read 258895 times)
Guillermo Luijk
« Reply #80 on: January 15, 2008, 07:18:21 AM »

Quote
Forget about the bokeh. No camera will make a lens suitable for good bokeh, no matter the DoF. I don't know what "strong" bokeh is, but if you like nice bokeh, you need a suitable lens, and that will not be an F4 lens.

I can't, Panopeeper. I see clearly that the same lens capturing the same scene (same FoV over the subject) will, at maximum aperture, give a shallower DoF on the 5D than on the 40D. This is what I call, maybe wrongly, strong bokeh. And it's what I am looking for in FF (apart from wide angle): a greater ability to separate the subject from the background/foreground.

I agree with the rest of your post, although I think none of the statements means a real and noticeable improvement in the 40D's images. 9 f-stops is simply not enough to really NEED extra bits; 12-bit cameras have shown they can capture 9 f-stops with reasonably good definition (the Sony A700, for instance).
I haven't done tests of the 5D's DR at ISO 100, but I assume it is lower than the 40D's, so on the 5D the extra bits are even less necessary.
For good DR in high-contrast scenes, both cameras need extra exposures. Any possible advantage of 14 bits is definitively gone in those circumstances.

I will try to post a couple of overlapping shots tonight. Didn't have time to check yours, sorry.
« Last Edit: January 15, 2008, 07:39:58 AM by GLuijk »

John Sheehy
« Reply #81 on: January 15, 2008, 07:35:21 AM »

Quote
It is not reasonable to reduce this question to the number of bits; it's not so simple.

The 5D creates ~3570 levels. The 20D creates ~3970. The 40D creates ~12800 levels at ISO 100, and ~15200 at the higher full-stop ISOs.

That is about 3.5 times more at ISO 100 than the 5D's. It is open to debate how many of the 40D's 12800 levels are really informative, but IMO it is clear that the 5D is far underequipped with its 3570 levels.


The 5D needs about 2500 levels at ISO 100 and 1090 levels at ISO 1600 for normal (non-stacking) use.  That's for the RAW data; for conversion, of course, it is always good to force extra precision if the converter loads RAW data aligned by the LSB.
Any bit depth (or number of levels) that brings the standard deviation of a black frame above 1.4 ADU is inefficient for storage purposes.
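The storage-efficiency claim above can be sketched numerically. This is a rough simulation, not 5D data; the levels and sigmas are invented for the demo. The idea: once the noise is around 1 ADU or more on a given grid, it dithers the quantizer and the underlying level survives intact, so finer grids (more bits) buy nothing.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_error_after_quantization(true_level, noise_sigma, n=200_000):
    """Quantize a noisy signal to integer ADUs and measure how far the
    sample mean drifts from the true level.  With noise_sigma around
    1 ADU or more the noise dithers the quantizer and the mean survives;
    well below ~0.5 ADU the values snap toward the nearest code."""
    samples = true_level + rng.normal(0.0, noise_sigma, n)
    return abs(np.round(samples).mean() - true_level)

# A true level a quarter-step off the grid is a bad case for quantization.
print(mean_error_after_quantization(10.25, 1.4))   # tiny: noise dithers
print(mean_error_after_quantization(10.25, 0.2))   # ~0.14 ADU: posterized
```

With enough noise the quantizer is transparent, which is why levels beyond the point where the read noise sits well above ~1 ADU mostly just record noise more precisely.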
Jonathan Wienke
« Reply #82 on: January 15, 2008, 03:01:13 PM »

Quote
Interesting example. Would this be possible to demonstrate visually? I can appreciate that, from a purely technical point of view, one could give the two halves of an image very slightly different hues, such that distinguishing them technically (at the machine level) might need 14-bit or 16-bit capture instead of 12-bit. But would the eye be able to distinguish such subtle differences, the ones that require the huge increase in the number of levels afforded by 14-bit or 16-bit processing?

The level count can cause low-contrast detail to disappear. Try loading my DR test chart in PS and reducing the number of levels and see how many you can throw out before the smallest, lowest-contrast text starts disappearing.

John Sheehy
« Reply #83 on: January 15, 2008, 07:33:16 PM »

Quote
The level count can cause low-contrast detail to disappear. Try loading my DR test chart in PS and reducing the number of levels and see how many you can throw out before the smallest, lowest-contrast text starts disappearing.

Such a test would be totally irrelevant if the decrease in levels weren't in the original RAW data.  Conversions deteriorate much more quickly from quantization.
Jonathan Wienke
« Reply #84 on: January 16, 2008, 07:05:32 AM »

Quote
Such a test would be totally irrelevant, if the decrease in levels weren't in the original RAW data.

The point of such a test would be to get a rough idea of how many levels per stop are needed to avoid visually apparent banding/posterization and the disappearance of low-contrast fine detail.
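That test can be approximated in a few lines. This is a toy model, not Photoshop's posterize: two uniform tones stand in for the low-contrast text and its background, and whether this particular pair merges depends on where the tones fall relative to code boundaries.

```python
def patches_distinct(bg, fg, n_levels):
    """Quantize two tones (on a 0..1 scale) to n_levels codes and report
    whether they still land on different codes, i.e. whether the
    low-contrast detail survives the reduction in levels."""
    step = 1.0 / (n_levels - 1)
    return round(bg / step) != round(fg / step)

bg, fg = 0.342, 0.352                  # two tones 1% of full scale apart
print(patches_distinct(bg, fg, 256))   # True: still distinct at 8 bits
print(patches_distinct(bg, fg, 32))    # False: merged at 5 bits
```

Sweeping n_levels downward shows the same thing the chart test shows: detail vanishes once the quantization step exceeds the local contrast.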

John Sheehy
« Reply #85 on: January 16, 2008, 07:32:18 AM »

Quote
The point of such a test would be to get a rough idea of how many levels per stop are needed to avoid visually apparent banding/posterization and the disappearance of low-contrast fine detail.

But of what?  You replied to Ray, who was talking about RAW bit depth, and consequently, RAW levels.  Quantizing them is not the same thing as quantizing finished conversions.  RAW data is far more quantization-resistant than full RGB images with realistic tone curves.
Jonathan Wienke
« Reply #86 on: January 16, 2008, 07:58:13 AM »

Quote
RAW data is far more quantization-resistant than full RGB images with realistic tone curves.

WTF??? In the highlights, yes, but certainly not in the shadows. Try zeroing out all the lower-order bits of a RAW file so that it is effectively 8-bit, and run it through any RAW converter. Quantization will be much worse than if you reduce to 8-bit after conversion, especially in the shadows.
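The two orders of operation can be compared in a quick sketch. A plain gamma-2.2 power law stands in for a real converter's tone curve here, so the exact counts are illustrative only, but the asymmetry is the point.

```python
import numpy as np

GAMMA = 1 / 2.2                      # toy "conversion": pure power law

raw = np.arange(0, 4096)             # ideal 12-bit linear ramp
shadows = raw[raw <= 256]            # the deepest ~4 stops of the range

# Path 1: zero the 4 low-order bits first (8 effective bits),
# then gamma-encode to 8-bit output.
truncated = (shadows >> 4) << 4
out1 = np.round(255 * (truncated / 4095.0) ** GAMMA)

# Path 2: gamma-encode the full 12-bit data, then quantize to 8 bits.
out2 = np.round(255 * (shadows / 4095.0) ** GAMMA)

print(len(np.unique(out1)), "distinct shadow codes, raw truncated first")
print(len(np.unique(out2)), "distinct shadow codes, converted first")
```

The gamma curve stretches the shadows, so steps discarded in the linear raw domain turn into large visible steps in the output, while quantizing after conversion leaves far more usable shadow codes.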
« Last Edit: January 16, 2008, 07:59:32 AM by Jonathan Wienke »

Panopeeper
« Reply #87 on: January 16, 2008, 02:09:09 PM »

Quote
I can appreciate that from a purely technical point of view, one could have the two halves of an image a very slightly different hue so that in order to technically (at the machine level) distinguish between them, you might need 14 bit or 16 bit capture instead of 12 bit, but would the eye be able to distinguish between such subtle differences that required this huge increase in the number of levels afforded by 14 or 16 bit processing?
It depends on the absolute luminosity of these values as well. The minimum difference between two luminosities (as a proportion, not an absolute value) depends on the range those luminosities are in (in absolute values). See the paper James linked to a few posts above.

Gabor
Panopeeper
« Reply #88 on: January 16, 2008, 02:20:24 PM »

Quote
I see clearly that the same lens capturing the same scene (same FoV over the subject) will, at maximum aperture, give a shallower DoF on the 5D than on the 40D. This is what I call, maybe wrongly, strong bokeh

Well, "strong bokeh" is not a generally used term (at least I don't know it), so you can use it in whatever sense you want. However, "nice bokeh" is a well-known term, and even though it is not well defined, most people agree in their judgement of bokeh.

The "quality" of bokeh depends primarily on the lens; it is a special quality of the lens. A shallower DoF does not guarantee nicer bokeh. For example, the Canon 80-200mm f/2.8L shot in my bokeh collection shows that that lens is not a "bokeh lens". The 50mm f/1.4 yields only a medium-good bokeh, no matter if at f/1.4 or f/2.8.

The reasons a lens renders nice or bad bokeh are often discussed and guessed at, but as far as I can see, this is rather shamanism. I have read that some MTF curves indicate bokeh quality, but I don't see why. The number and shape of the aperture blades do appear to be important, though, except at maximum aperture, I guess.

Gabor
John Sheehy
« Reply #89 on: January 16, 2008, 02:26:05 PM »

Quote
WTF??? In the highlights, yes, but certainly not in the shadows. Try zeroing out all the lower-order bits of a RAW file so that it is effectively 8-bit,

All zeros is not the proper way to do it, unless you're also going to move the black point down to compensate.  "1000" (for 12-bit) is what you need to replace them with.

Quote
and run it through any RAW converter. Quantization will be much worse than if you convert to 8-bit after conversion especially in the shadows.

I was thinking more along the lines of 16 vs. 14 vs. 12, etc.  If you go down to 8 significant bits, you're going to see the quantization (especially at low ISOs).
Panopeeper
« Reply #90 on: January 16, 2008, 02:31:46 PM »

Quote
The 5D needs about 2500 levels at ISO 100 and 1090 levels at ISO 1600 for normal (non-stacking) use.  That's for the RAW data; for conversion, of course, it is always good to force extra precision if the converter loads RAW data aligned by the LSB

With only 2500 levels, quite a few of the 256 RGB levels would be wasted.

Quote
Any bit depth (or number of levels) that brings the standard deviation of a black frame above 1.4 ADU is inefficient for storage purposes.

I wonder how you calculate this, specifically for the 5D.

Gabor
Panopeeper
« Reply #91 on: January 16, 2008, 02:36:14 PM »

Quote
Try zeroing out all the lower-order bits of a RAW file so that it is effectively 8-bit, and run it through any RAW converter. Quantization will be much worse than if you convert to 8-bit after conversion especially in the shadows.

If you read The Dialog, the first and largest section of the new Rawnalyze manual, you will find how to zero out some bits (chapter "Changing the original raw data"). However, shifting the bits out instead of zeroing them is better with Canon cameras, because of the black level (explanation inside).

Gabor
John Sheehy
« Reply #92 on: January 16, 2008, 03:44:52 PM »

Quote
With only 2500 levels, quite a few of the 256 RGB levels would be wasted.

About 1000 of the ~3590 levels currently used are already wasted by read noise.

Quote
I wonder how you calculate this, specifically for the 5D.

"1.4 ADU" is not specific to the 5D.

It is a general figure based on my experiments quantizing RAW data to various bit depths and observing at what read-noise level, in ADUs, the number of levels becomes sufficient.  Anything resulting in much less than about 1.4 ADU starts to get visibly quantized, while anything above 1.4 ADU looks the same as anything around 1.4 ADU.

I'm using 1.4 as a generous safety margin; 1.25 works quite well, as can be seen in the cameras that have 14 bits at about 5 ADU of read noise.
Panopeeper
« Reply #93 on: January 16, 2008, 04:21:32 PM »

Quote
About 1000 of the ~3590 levels currently used are already wasted by read noise

This does not depend on noise, nor on DR. It depends on the transfer function and on the target range. With sRGB this is not a real issue, but with Adobe RGB 98 one would need many more levels (over 10000) in order to utilize all 256 target levels (and that is still far from what 10-bit printers and monitors would need).
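That claim can be checked with a little arithmetic. These are idealized transfer curves, and "levels needed" here means enough linear resolution that no 8-bit output code goes unused, taking the narrowest shadow step as the binding constraint.

```python
def linear_levels_needed(decode):
    """decode maps an 8-bit output code to linear light (0..1).  The
    narrowest linear interval mapping to a single output code is the
    step between codes 1 and 2, deep in the shadows; resolving it sets
    the minimum number of linear input levels."""
    return 1.0 / (decode(2) - decode(1))

def srgb(c):        # sRGB decode, with its linear toe segment
    u = c / 255
    return u / 12.92 if u <= 0.04045 else ((u + 0.055) / 1.055) ** 2.4

def gamma22(c):     # pure 2.2 power law, as in Adobe RGB (1998)
    return (c / 255) ** 2.2

print(round(linear_levels_needed(srgb)))     # ~3300: sRGB's toe is kind
print(round(linear_levels_needed(gamma22)))  # ~55000: well over 10000
```

The linear toe in sRGB is exactly what keeps its requirement modest; a pure power law has an infinite slope at black, which is why the gamma-2.2 figure balloons.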

Quote
1.4 ADU" is not specific to the 5D

I know. I asked for a specific example of your calculation, with raw values and noise levels.

Gabor
bjanes
« Reply #94 on: January 16, 2008, 06:03:22 PM »

Quote
About 1000 of the ~3590 levels currently used are already wasted by read noise.

I would like to see data supporting this assertion. As Roger Clark points out, noise in DSLR cameras consists mostly of shot noise until you get fairly deeply into the shadows. Here is a noise model taken from Roger's analysis of the 1D MII (http://www.clarkvision.com/imagedetail/evaluation-1d2/index.html). I think you overemphasize read noise.

The table shows shot noise and read noise for the 1D MII at ISO 100, with both expressed in electrons on the left and DNs on the right. A 12-stop range is covered, but if the darkest f-stop needs 8 levels, the effective DR is limited to 9 stops by bit-depth considerations alone, disregarding noise, as shown by Norman Koren. The analysis assumes the camera places the highlights at 4095, but in practice it may be lower, as you indicate.

In zones 0 through 7 the noise is predominantly shot noise, and these zones are considered shot-noise limited. There are 4080 levels. Only zone 8 is read-noise limited, and it contains 8 levels. If some posterization is acceptable, you could include 7 more levels for a total of 15. Where do you get the value of 1000?

Of course, DR may be limited by noise as well as posterization, but the S:N of 1.49 in the deepest shadows, while quite low, does contain image information.
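The kind of noise model behind such tables can be sketched as follows. The full-well and read-noise numbers below are placeholders, not Roger Clark's measured 1D MII values; the point is only the shape: shot noise dominates until the signal drops below the square of the read noise.

```python
import math

def total_noise_e(signal_e, read_noise_e):
    """Add shot noise (Poisson: sqrt of the signal in electrons) and
    read noise in quadrature."""
    return math.sqrt(signal_e + read_noise_e ** 2)

full_well, read_e = 52000.0, 16.0     # placeholder ISO 100 figures
for stop in range(12):
    signal = full_well / 2 ** stop
    regime = "shot-limited" if math.sqrt(signal) > read_e else "read-limited"
    snr = signal / total_noise_e(signal, read_e)
    print(f"{stop:2d} stops down: S/N = {snr:7.1f}  ({regime})")
```

With these placeholder numbers the crossover falls between 7 and 8 stops down, matching the zone structure described above.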

« Last Edit: January 16, 2008, 06:06:16 PM by bjanes »
John Sheehy
« Reply #95 on: January 17, 2008, 02:35:54 PM »

Quote
I would like to see data supporting this assertion. As Roger Clark points out, noise in DSLR cameras consists mostly of shot noise until you get fairly deeply into the shadows.

Define "fairly deep(ly)".  Do you realize how far below saturation "middle blue" is in tungsten WB?

Also, if someone wants to state that current cameras are shot-noise limited, it's the kind of thing that can't easily be proven right or wrong, because the point of reference is arbitrary.  My question is, why would someone even say that?  What purpose does a statement like that serve except as academic fluff or filler?

And, as it turns out, shot noise is not the only noise in the highlight areas of cameras.  You should see Emil Martinec's updates to his web pages soon, where he discusses another form of noise present in highlights (usually only visible in the top 1 to 2 stops of the sensor's DR).

Quote
Here is a noise model taken from Roger's analysis of the 1D MII. I think you overemphasize read noise.

Unlike you and Roger, I have actually investigated what pure shot noise would look like in the deep shadows, and it is a relatively beautiful thing, compared to the reality of read noise.  Blacks are actually black.

Quote
The table shows shot noise and read noise for the 1DMII at ISO 100 with shot and read noise expressed in electrons on the left and DNs on the right. A 12 stop range is covered, but if the darkest f/stop needs 8 levels, the effective DR is limited to 9 stops by bit depth considerations alone, disregarding noise, and as shown by Norman Koren.. The analysis assumes that the camera places the highlights at 4095, but in practice it may be lower as you indicate.

In zones 0 through 7, the noise is predominately shot noise, and these zones are considered shot noise limited.

And that is significant because ...?  Would anyone really expect SNR limits to occur in the highlights?

Neither the sensor nor the camera are shot noise limited.

Quote
There are 4080 levels. Only zone 8 is read noise lilmited, and it contains 8 levels. If some posterization is acceptable, you could include 7 more levels for a total of 15. Where do you get the value of 1000?

I wrote 1000 for ISO 1600, not ISO 100, which I said would be served well by 2500 levels at ISO 100.

(Edit: I thought you were referring to my statement about ~1000 levels being enough.  Now that I think of it, you may be talking about the "1000 levels already wasted"; that referred to the fact that about 3500 are used in the 5D at ISO 100 and only about 2500 are needed (not including some room for negative read noise).  I do not mean to imply by this that the levels should be requantized to this amount; I simply mean that the original capture could have used this many levels in the original digitization, for a more compressible file size, with infinitesimal loss of signal.)

The basic idea is that since you only need enough linear levels for the read noise to be at least 1.4 ADU, you can scale the number of levels down by 1.4/readnoise to get a baseline minimum number of levels for digitizing.
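As arithmetic, with the thread's own figures (the ~2.0 ADU read noise below is back-solved from the 2500-level figure, not a measurement):

```python
def min_levels(current_levels, read_noise_adu, floor_adu=1.4):
    """Scale the level count down by floor_adu / read_noise_adu, so the
    read noise lands right at the floor on the new, coarser scale."""
    return round(current_levels * floor_adu / read_noise_adu)

# If the 5D's ISO 100 read noise were ~2.0 ADU on its ~3570-level scale:
print(min_levels(3570, 2.0))   # -> 2499, roughly the 2500 quoted earlier
```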

Quote
Of course, DR may be limited by noise as well as posterization, but the S:N of 1.49 in the deepest shadows, while quite low, does contain image information. [a href=\"index.php?act=findpost&pid=167668\"][{POST_SNAPBACK}][/a]

Of course it does.  You can't improve on it by having more than a certain number of levels, though, as the signal is just an average of lots of noise.  Rounding the noise values to infinitesimally more accurate values with more levels does not help the signal come through the noise.

I have checked this visually, with extreme stretching of the levels, with RAW data from numerous cameras and with simulations, and 1.4 ADU is my safe limit.
« Last Edit: January 17, 2008, 03:35:43 PM by John Sheehy »
bjanes
« Reply #96 on: January 17, 2008, 03:30:11 PM »

Quote
Unlike you and Roger, I have actually investigated what pure shot noise would look like in the deep shadows, and it is a relatively beautiful thing, compared to the reality of read noise.  Blacks are actually black.
And that is significant because ...?  Would anyone really expect SNR limits to occur in the highlights?

Neither the sensor nor the camera are shot noise limited.
I wrote 1000 for ISO 1600, not ISO 100, which I said would be served well by 2500 levels at ISO 100.

OK, here is the same type of analysis showing read and shot noise for the 1D MII at ISO 1600. Again, I would like to see your data supporting your assertions, not some blanket statement from on high.

« Last Edit: January 17, 2008, 03:44:01 PM by bjanes »
John Sheehy
« Reply #97 on: January 17, 2008, 05:05:55 PM »

You replied about a minute too soon.  I just edited the post you replied to: there were two figures of 1000, or close to it, in my post, and I apparently assumed the wrong one in the post you replied to.

Quote
John,

You talk a lot, but your logic is questionable and you present absolutely no data to back up your assertions. Could you give us a link to your data?

You need to be a bit clearer about what you don't believe.  Your previous reply felt very vague to me.

So let me state the core of what I am saying, and then you can object to something specific or ask for proof.  As it stands, I am expected to know what is apparently illogical to you and defend it.  Nothing I've written recently is illogical to me, so I can't figure out what you think is questionable.

Here is a summary of what I have been saying:
"In the face of all the analog noises involved in the readout of a sensor, quantization of any practical significance can only occur when the number of linear levels used is such that the blackframe read noise, in those ADUs, falls significantly below 1.4, for single-exposure RAW images."

As I've stated previously, 1.4 is a conservative value.  We can get away with read noise as low as 1.1 ADU without incident (and that implies even fewer levels needed).

The tools I am using to look at these matters are horrible in terms of the workflow involved in compositing images for comparisons.  You can verify what I say for yourself quite easily, though.  Open two instances of IRIS, and in one, load a RAW from a camera whose read noise is known at that ISO.  Calculate the division needed to bring that read noise down to 1.4 ADU.  Set the threshold sliders to some window down in the shadows, crop an area of interest, and save out the .fit file.  In the second instance of IRIS, load the crop, divide the image by the factor needed to scale the read noise down to 1.4 ADU, then multiply back by that number (or rescale the threshold sliders, if that is more convenient).  What you will see, as far as your eyes can tell, is exactly the same thing in both instances.  Now reload the crop into the second instance and divide by the read noise in original ADUs (making the new noise 1.0 ADU).  *NOW* you can see a little quantization.  Try again, bringing the read noise down to 0.8 and then 0.7 ADU, and things fall apart very rapidly.  This is true regardless of what read-noise level you started at; you only start running out of useful levels when you get down to about 1.4 ADU of noise.

Now try similar things with highlights.  Find the brightest area in a smooth, OOF gradient and measure its sigma.  Then do the same as before to bring this down to 1.4 ADU.  The bright area you took the shot-noise deviation from will look like it lost no smoothness, but any darker areas in the image will show quantization.  Try again with 1.0, 0.8, 0.7, etc.  The same principle keeps applying.

It is easiest to do this with single color channels, because they are easier to crop.  You can verify that the same principle applies to color, however, by cropping carefully so that the RGB CFA patterns in the crops are unaltered, or, if you have plenty of RAM, by not cropping at all and just windowing in the areas to compare (the quantization must happen before color conversion, though).
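The IRIS procedure above can be mimicked in a few lines on synthetic data. The 5.6 ADU read noise and the flat 100 ADU patch are invented for the demo, and the growth in standard deviation is only a crude proxy for what the eye picks up, but the trend is the same.

```python
import numpy as np

rng = np.random.default_rng(1)

def requantize(data, factor):
    """The divide / round / multiply cycle described above: coarsen the
    level grid by `factor`, then restore the original scale."""
    return np.round(data / factor) * factor

read_noise = 5.6                                     # invented ADU figure
shadow = 100.0 + rng.normal(0, read_noise, 500_000)  # deep-shadow patch

def noise_growth(target_noise_adu):
    """Fractional growth in total noise after rescaling so that the
    read noise lands at target_noise_adu on the coarse grid."""
    coarse = requantize(shadow, read_noise / target_noise_adu)
    return np.std(coarse) / np.std(shadow) - 1.0

print(f"{noise_growth(1.4):.1%}")   # ~2%: quantization hides in the noise
print(f"{noise_growth(0.7):.1%}")   # ~8%: the steps start to show
```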

One day you will realize that noise is a hard ruler of appreciable levels, and that all the anecdotes about levels and levels per stop are usually irrelevant in today's noisy digital photography.  Those anecdotes come from noiseless, synthetic graphics and are totally meaningless in digital photography.
« Last Edit: January 17, 2008, 05:11:45 PM by John Sheehy »
John Sheehy
« Reply #98 on: January 17, 2008, 05:09:37 PM »

Quote
OK, here is the same type of analysis showing read and shot noise for the 1DMII at ISO 1600. Again I would like to see your data supporting your assertions, not some blanket statement from high.

I am not telepathic.  What assertion do you believe I am making, which needs to be proven?

In any event, I don't see the relevance of the chart you posted to the topic at hand.  We're talking about necessary levels in digitization, are we not?
bjanes
« Reply #99 on: January 17, 2008, 05:30:25 PM »

Quote
I am not telepathic.  What assertion do you believe I am making, which needs to be proven?

That 1000 levels are lost to read noise.

Quote
In any event, I don't see the relevance of the chart you posted to the topic at hand.  We're talking about necessary levels in digitization, are we not?


The chart clearly shows the relative contributions of read and shot noise across the exposure zones: shot noise predominates in the green area of the chart, which comprises 4080 levels, whereas read noise dominates only in the very deep shadows, comprising at most 15 levels. How can you lose 1000 levels to read noise when it predominates in only 15?