Author Topic: Canon 40D Dynamic Range test: 9 f-stops  (Read 36447 times)
Guillermo Luijk (Sr. Member, 1275 posts)
« on: December 13, 2007, 06:48:34 PM »
In the same way that I did a subjective test to check how much DR is usable in the Nikon D3 (Nikon D3 Dynamic Range Test: 9 f-stops) and the Sony A700 (Sony A700 Dynamic Range Test: 9.5 f-stops), I repeat it here for the Canon 40D. My final verdict: 9 f-stops of usable DR.
The same processing and criteria were applied as in the other tests (see the Nikon link).

This was the high DR scene:




A bit brighter to distinguish what's going on:




And the noise tests on the 7th, 8th, 9th and 10th f-stop areas, marked respectively as -6EV, -7EV, -8EV and -9EV:

« Last Edit: December 13, 2007, 06:50:16 PM by GLuijk »

Panopeeper (Sr. Member, 1805 posts)
« Reply #1 on: December 13, 2007, 07:10:32 PM »

Guillermo,

what about posting the raw file? One should be able to reproduce what you have done (for the Sony A700 as well).

Gabor
Guillermo Luijk
« Reply #2 on: December 13, 2007, 07:12:12 PM »

Quote
Guillermo,

what about posting the raw file? One should be able to reproduce what you have done (for the Sony A700 as well).

OK, I will ask (I think the files are not mine to give; they show the houses of people I don't even know).

Jonathan Wienke (Sr. Member, 5759 posts)
« Reply #3 on: December 13, 2007, 07:51:57 PM »

Guillermo, have you tried playing with my DR test target & method and comparing the results with your tests so far?

Jonathan Ratzlaff (Full Member, 193 posts)
« Reply #4 on: December 13, 2007, 10:42:52 PM »

In all the tests you have posted, you have used different images. If you are comparing different cameras, don't you think you should be using some sort of standard image, so that you are actually testing the response of each camera to a similar scene? For example, with the Sony you used a window with a curtain against it, while with the Nikon D3 you used a window with a background lit by skylight. In the Sony shot the curtain was underexposed by about 1/2 stop, while the Nikon shot was overexposed by at least 3.5 stops. As comparisons go, this is not a particularly scientific test.

If you take a meter reading in the various areas, preferably with an incident meter, what kind of difference in light values do you get?
Guillermo Luijk
« Reply #5 on: December 14, 2007, 05:18:37 AM »

Quote
Guillermo, have you tried playing with my DR test target & method and comparing the results with your tests so far?

Not yet. In any case, I don't have all those cameras, so I could only do it with my 350D. If I find time this weekend I will give it a try, Jon.

Guillermo Luijk
« Reply #6 on: December 14, 2007, 05:23:53 AM »

Quote
In all the tests you have posted, you have used different images. If you are comparing different cameras, don't you think you should be using some sort of standard image, so that you are actually testing the response of each camera to a similar scene? For example, with the Sony you used a window with a curtain against it, while with the Nikon D3 you used a window with a background lit by skylight. In the Sony shot the curtain was underexposed by about 1/2 stop, while the Nikon shot was overexposed by at least 3.5 stops. As comparisons go, this is not a particularly scientific test.

I completely agree; it would be desirable to use the same scene for all of them. Unfortunately it's impossible for me to gather all those cameras together. In fact I didn't shoot the files myself; they were sent to me by different people.

My point is that this can be a good starting reference, just to find out that there are really no huge differences in the practical DR you can expect from today's cameras. Two years ago DR used to be around 8 stops; now it has improved to 9, so DR shouldn't be a critical decision parameter in a purchase (leaving aside the Fuji Super CCD, of course, which is a completely different philosophy and very far from the others).

Regards.
« Last Edit: December 14, 2007, 05:25:43 AM by GLuijk »

Guillermo Luijk
« Reply #7 on: December 14, 2007, 03:07:03 PM »

Quote
Guillermo,

what about posting the raw file? One should be able to reproduce what you have done (for the Sony A700 as well).

Hi Panopeeper (and anyone interested), I was allowed to offer both RAW files:

Sony A700 RAW file: http://www.guillermoluijk.com/download/sonya700_drtest.arw
Canon 40D RAW file

I am especially interested in having you check the Sony A700. In my test I actually hesitated between 9.5 and 10 f-stops of DR, and chose 9.5 to be more conservative. But now I have realised I made a mistake: the zones were incorrectly labelled, and where I said -10EV it was actually -9EV, so the final subjective DR was shifted. It should not be 9.5 but 8.5, or maybe 9 f-stops, being less conservative.

So the final conclusion is that there is no clear winner among the Nikon D3, Sony A700 and Canon 40D in terms of DR; it seems all of last year's cameras have raised DR from 8 to 9 f-stops, and DR at low ISO is not a critical choice parameter.

Regards.
« Last Edit: December 14, 2007, 05:11:11 PM by GLuijk »

Panopeeper
« Reply #8 on: December 14, 2007, 09:16:20 PM »

Guillermo,

I checked out both images.

1. I find both unsuitable for this purpose, not only because of the low exposure (see the histograms), but because of the lack of clear, uniform surfaces in different tones, which are necessary to judge the noise. However, this is a non-issue anyway as long as uncoordinated, casual shots are used, which cannot be compared. So, let's stay subjective.

2. I don't understand how you arrived at the results with the 40D. For example, the top right corner of the TV is below the -10th stop.

3. I don't know the A700. I was looking at it in DNG format (I have not yet bothered to implement the handling of its messy native raw format in my program). I find the low noise in the very deep shadows impressive; on the other hand, I see relatively considerable noise in not-so-dark areas. Unfortunately, there is no good basis for comparison with the 40D (equally uniform areas in equal tones).


[attachment=4257:attachment]

Gabor
Guillermo Luijk
« Reply #9 on: December 15, 2007, 04:01:25 AM »

Quote
2. I don't understand how you arrived at the results with the 40D. For example, the top right corner of the TV is below the -10th stop.

I think it is due to the RAW development criteria I chose, and to a particular problem with the 40D's RAW files and DCRAW:

I applied white balance, and my criterion was to develop the RAW file with multipliers <=1, so that no information is lost, using DCRAW's -H 2 option, which guarantees neutral gray in the blown highlights. With this procedure a gap is always created in the highlights, and my criterion was to eliminate that gap (i.e. increase the linear exposure of the resulting image so the histogram reaches its right end).
I am not 100% happy with this criterion, but I think this approach is the closest to the way I would develop the RAW file with the idea of not losing any information in mind.
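As a sketch of the "multipliers <= 1" criterion (this is not DCRAW's actual code, and the multiplier values are invented for illustration):

```python
# Divide the camera's white-balance multipliers by the largest one, so no
# channel is amplified and no raw level can be pushed past full scale.
def normalize_wb(multipliers):
    m = max(multipliers)
    return [x / m for x in multipliers]

# Hypothetical daylight multipliers, for illustration only:
print(normalize_wb([2.0, 1.0, 1.5]))  # [1.0, 0.5, 0.75] -- all <= 1
```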

What happened with the 40D file (and likewise with a 30D file I have just analysed)? After the highlight clip there is still some more information in the non-WB image. DCRAW preserves it; it would probably be better not to, but I cannot take control of that part of the process, since it happens at the development stage.
See these histograms:

NO WB APPLIED:


AFTER WB WITH -H 2 (highlight preservation and neutral gray blown areas):
 .  

You can see that the end part of the histogram provided by DCRAW is wrong, and that a magenta cast was introduced in the highlights, because DCRAW took as the highlights that needed to be white the negligible information contained in the non-WB gap, while the real bulk of the blown information came after it. I am going to report this white-point issue to David Coffin; it already happened to him with another camera (Fuji or Olympus, I think).
I guess this is not DCRAW's fault; the 40D's RAW files simply contain information after the white point, so it's a design problem in the RAW data produced by the 40D. But it still affects DCRAW and any other RAW developer. This is clear when looking at the linear non-WB histogram:

NO WB APPLIED LINEAR:

There is non-null data after the blown-highlight peak.
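A small, self-contained sketch of what such a check looks like on raw levels (toy data, not the actual 40D file):

```python
from collections import Counter

# Locate the blown-highlight spike (the most frequent level) and count any
# "ghost" samples sitting above it, i.e. data past the apparent white point.
def ghosts_above_peak(values):
    counts = Counter(values)
    peak = max(counts, key=counts.get)   # the clipped-highlight spike
    above = sum(1 for v in values if v > peak)
    return peak, above

# Fake levels: some scene data, a clip spike at 13824, two strays above it.
vals = [5000] * 10 + [13824] * 500 + [13900, 14100]
print(ghosts_above_peak(vals))  # (13824, 2)
```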

Anyway, after getting that histogram, and knowing for certain that the shadows were correctly aligned and only the highlights were wrong, I simply shifted the exposure up again linearly (as I would have done in a normal picture edit) to make the blown areas white instead of magenta. Surely that's why I got a zone distribution higher than yours. But I insist it was the closest approach to what I would have done from a photographic point of view.
I agree with you that there is too much mess to be very confident in the results.

Of course we could have done a non-WB analysis, which is technically correct, but I wanted a demosaiced, white-balanced approach in these tests.
Another option would have been to develop with WB multipliers >=1, leaving the G channel fixed, but again that prevents the G channel from having a proper value in the blown highlights, and another, even more noticeable magenta cast appears:

AFTER WB WITH -H 0 (>=1 multipliers):
 .  

Are you doing all your measurements with undemosaiced data? Did you see anything about this apparent highlight-clip misalignment?
« Last Edit: December 15, 2007, 05:17:45 AM by GLuijk »

Panopeeper
« Reply #10 on: December 15, 2007, 06:55:38 PM »

Guillermo,

I analyze raw images with my own program. White balanced or not, it's only a click, and the program shows the pixel values and the standard deviation of the original raw values as well as of the transformed RGB values.

My program does not do true demosaicing, nor a color-space translation; it simply averages the same-colored pixels of a CFA block (usually two greens, one red and one blue) and uses those values for the mapping (in this mode all four pixels of a CFA block have the same color). This method eliminates the possibility of non-displayable colors due to color-space transformation.
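The averaging mode described above might be sketched like this (a toy illustration assuming an RGGB layout, not Panopeeper's actual program):

```python
# Reduce every 2x2 CFA block to one RGB triple: take the red and blue
# samples directly and average the two greens. No interpolation between
# blocks, no color-space transform -- just a block-wise collapse.
def cfa_to_rgb(cfa):
    h, w = len(cfa), len(cfa[0])
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            r = cfa[y][x]
            g = (cfa[y][x + 1] + cfa[y + 1][x]) / 2
            b = cfa[y + 1][x + 1]
            row.append((r, g, b))
        out.append(row)
    return out

# A single RGGB block:
print(cfa_to_rgb([[100, 200],
                  [220, 50]]))  # [[(100, 210.0, 50)]]
```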

There is an alternative mode: each pixel is either red, green, or blue (i.e. the other components are 0), and that value gets mapped. I use the latter, for example, when judging the sharpness and resolution of a lens.

Anyway, I don't intend to use this occasion to "advertise" my program (it's free anyway, but the manual is outdated).

Back to "your" shot:

1. The saturation level of the 40D at ISO 100 is 13820. The histogram I posted covers the range 0-13900.

2. The useful masked pixel values are between 1010 and 1030 or so (there are many outlandish values). Let's work with a fixed value of 1020.

So, the top possible effective pixel value is about 12800 after discounting the black level. I made a selection on the top right corner of the TV screen (the image is displayed at +5 EV, in channel mode, i.e. not "color mixed"). The average of the reds is 1039, of the greens 1049, and of the blues 1037. Discounting the "universal" black level, these are 19, 29 and 17.

12800 (0 EV), 6400 (-1 EV), 3200 (-2 EV), 1600 (-3 EV), 800 (-4 EV), 400 (-5 EV), 200 (-6 EV), 100 (-7 EV), 50 (-8 EV), 25 (-9 EV), 12 (-10 EV), 6 (-11 EV)

So, the red and blue pixels of the selected area are in the -9th EV; the greens are at the border between the -8th and the -9th. Of course, even slightly changing the selection changes these values; therefore "standardized shots" are necessary to compare cameras.
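The arithmetic above can be redone as a short sketch, using the saturation and black levels quoted in this post (the function name is mine, not from any program mentioned here):

```python
import math

# EV below clipping for a raw pixel value, given the 40D figures above:
# saturation 13820, black level about 1020, so full scale is 12800.
SATURATION = 13820
BLACK = 1020
FULL_SCALE = SATURATION - BLACK  # 12800

def ev_below_clipping(raw_value, black=BLACK, full_scale=FULL_SCALE):
    return math.log2((raw_value - black) / full_scale)

# Channel averages from the selected TV-corner area quoted above:
print(round(ev_below_clipping(1049), 2))  # greens: -8.79 (border of -8/-9)
print(round(ev_below_clipping(1039), 2))  # reds:   -9.4
print(round(ev_below_clipping(1037), 2))  # blues:  -9.56
```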

The above assumes, of course, that halving the pixel value corresponds to reducing the exposure by 1 EV. In a previous thread I disputed that this *has* to be so, not that it *is* so. It could be verified with a step wedge, which I don't have.

Gabor
Guillermo Luijk
« Reply #11 on: December 15, 2007, 08:18:20 PM »

All this info is really interesting; I am going to give your program a try (I already knew it was yours). So my -9EV calculation for the TV set was not so wrong.

Regarding the non-zero values after the presumably blown-highlights peak, did you find them? I am restricted to what DCRAW shows me, and here it was not good enough. There is a function in DCRAW's source (scale_colors()) which scales all levels so that, after the black offset has been subtracted, they extend over 0..65535 in the demosaiced 16-bit data. It seems that for some reason, with the 40D this scaling left the main blown-highlights peak at a value quite a bit lower than 65535. Probably Coffin was using a scaling parameter that was wrong.

I wonder if you were able to see those ghost pixels beyond the blown-highlights peak that appear in my non-WB histograms.
« Last Edit: December 15, 2007, 08:21:46 PM by GLuijk »

Panopeeper
« Reply #12 on: December 15, 2007, 10:17:24 PM »

Quote
Regarding the non-zero values after the presumably blown-highlights peak, did you find them?

After the clipping there is nothing. The question is what gets interpreted as clipping. The histogram I posted does show a thin red and blue line at the very end, but it is barely visible. That is the clipping point; I guess it is at the same location as in your histogram (it has to be, doesn't it?).

I uploaded a layered TIFF showing the relevant area (the window) with some screen captures: http://www.panopeeper.com/Demo/Guillermo_4...t_analysis1.TIF

The first layer is in composite color view (averaged colors), showing only the range 12000 to 14000. This yields very high contrast, but the colors are wrong, of course, because of the clipping. The clipped pixels are set to 0 in the next layer, which shows which pixels are affected by clipping. However, this is not exact, because of the mixing of colors (a pixel appears black only if all three components clipped).

As you can see, *all* pixels in the upper part of the window are clipped, i.e. there is no detail to recover there.

The third layer shows the same data in channel view, i.e. each displayed pixel is either red, green or blue; the color is very bad, but this is much sharper than the composite color view.

The fourth layer is channel view, but with the clipped pixels black. This shows exactly where clipping occurred.

The following four images are in exposure view. Every displayed pixel is either red or black, green or black, or blue or black, depending on which pixel it is in the CFA and whether the pixel value is within the selected range (or outside it, if so chosen). Note that the intensity of the pixel is fixed in this mode; it does not reflect the pixel value.

The first of these shows the pixels outside 13823 (the limiting value always counts as both inside and outside). If your monitor has small pixels, the colors melt together, but if you magnify the image you can see the individual pixels in their own color (I did not want to triple the number of layers to show each color on its own).

The next layer shows the pixels from 13824. See the pixel stats: the difference is not much; only a few pixels are at exactly 13823.

The next layer is with 13825. There are only red pixels there, i.e. 13824 was the clipping point for green and blue, but red did not clip yet. However, the following layer shows that there is nothing at 13826, i.e. the difference between the clipping points is only one level (there are other sensors with much larger differences).
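That per-channel check can be sketched in a few lines (a toy illustration with made-up channel data, not Rawnalyze's actual code):

```python
# The clipping point of a channel is simply its highest recorded level;
# a level present in one channel but absent from the others reveals a
# misaligned clip point, as with the red channel described above.
def clip_points(channels):
    return {name: max(values) for name, values in channels.items()}

# Toy data echoing the findings: red reaches 13825, green and blue 13824.
channels = {"R": [500, 13825, 13825], "G": [600, 13824], "B": [550, 13824]}
print(clip_points(channels))  # {'R': 13825, 'G': 13824, 'B': 13824}
```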

Quote
I am going to give your program a try

You can download it from http://www.cryptobola.com/Photobola/Rawnalyze.htm
I started implementing native raw file support a few days ago; only DNG was supported before. The list of supported native raws is in Rawnalyze.htm (your camera is supported).

The manual is several months old; many features are not even mentioned. I will create a new one in one or two weeks.
« Last Edit: December 15, 2007, 10:27:05 PM by Panopeeper »

Gabor
Guillermo Luijk
« Reply #13 on: December 16, 2007, 02:48:18 AM »

Hi Panopeeper, sorry for burdening you with all these questions.

I have done what I should have done from the beginning: analyze the undemosaiced RAW data (DCRAW's -D option). Now I see clearly that all those considerations about the pixels after the blown-highlights peak are due to some fault in DCRAW's scaling process prior to demosaicing.

In the pure RAW histogram (I still cannot distinguish the R, G and B channels, but that's not necessary here) I can see the RAW file was fine, and found the same as you:
- A peak containing blown channels at exactly level 13825, and from there NOTHING up to 16383
- 3 pixels (I cannot say whether they are R, G or B) at level 0
- An empty gap from 1 to 1002
- The main histogram starting at 1003
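The findings above are easy to verify programmatically; here is a toy sketch (invented levels, not the actual -D output, and the function name is mine):

```python
# Summarize the structure of a list of raw levels: the count of zero
# pixels, the start of the main histogram (first non-zero level, which
# exposes any empty gap below it), and the blown-highlight level.
def raw_structure(levels):
    nonzero = sorted(v for v in levels if v > 0)
    return {
        "zeros": levels.count(0),
        "first_nonzero": nonzero[0],  # start of the main histogram
        "peak": max(levels),          # blown-highlight level
    }

# Toy levels mimicking the findings: 3 zeros, a gap 1..1002, data from 1003.
levels = [0, 0, 0, 1003, 2500, 7000, 13825, 13825]
print(raw_structure(levels))
# {'zeros': 3, 'first_nonzero': 1003, 'peak': 13825}
```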



Just to show you what I was talking about, this is the demosaiced but non-WB (multipliers = 1) linear histogram produced by DCRAW, Ymax = 985 pixels:



The white point is not properly shifted to match 65535; in fact, after subtracting the black level (DCRAW used 1024), it seems DCRAW scaled the channels so that the peak's relative position within the new 16-bit range remained the same. So there is demosaiced garbage after the blown-highlights peak. And the same happens when we develop with WB, producing the undesired magenta highlights.
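A simplified model of that rescaling (not dcraw's actual scale_colors() code) shows how an overestimated white point leaves the clipped peak short of 65535:

```python
# Linear rescale of a raw level into 16-bit output:
#   out = (in - black) * 65535 / (white - black)
# If the assumed white point is higher than the real clipping level,
# the blown-highlight peak lands well below 65535.
def rescale(value, black, white):
    return round((value - black) * 65535 / (white - black))

real_clip = 13825
print(rescale(real_clip, black=1024, white=13825))  # 65535: peak at the top
print(rescale(real_clip, black=1024, white=16383))  # 54620: peak too low
```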


I will definitely give your PhotoBola a try (bola means ball in Spanish). BTW, are you implementing the decoding of the different vendors' RAW formats one by one? I ask because I think the DCRAW code is free for anyone to use, and Coffin puts a lot of effort into decoding every new format that appears on the market, so it could save you work and keep your tool easily updated.

Thanks for your effort.
« Last Edit: December 16, 2007, 04:01:54 AM by GLuijk »

John Sheehy (Sr. Member, 838 posts)
« Reply #14 on: December 16, 2007, 11:59:24 AM »

Quote
All this info is really interesting; I am going to give your program a try (I already knew it was yours). So my -9EV calculation for the TV set was not so wrong.

Regarding the non-zero values after the presumably blown-highlights peak, did you find them? I am restricted to what DCRAW shows me, and here it was not good enough. There is a function in DCRAW's source (scale_colors()) which scales all levels so that, after the black offset has been subtracted, they extend over 0..65535 in the demosaiced 16-bit data. It seems that for some reason, with the 40D this scaling left the main blown-highlights peak at a value quite a bit lower than 65535. Probably Coffin was using a scaling parameter that was wrong.

I wonder if you were able to see those ghost pixels beyond the blown-highlights peak that appear in my non-WB histograms.

I gave up on DCRAW for extracting RAW data until the program was given the -D (document) mode. Even then it wasn't very useful, as the data was mildly degraded going into PS' stupid 15-bit+1 "16-bit" mode, since the data was only using the least significant bits of the 16-bit output to PSD or PGM.
Guillermo Luijk
« Reply #15 on: December 16, 2007, 12:31:26 PM »

Quote
Even then it wasn't very useful, as the data was mildly degraded going into PS' stupid 15-bit+1 "16-bit" mode, since the data was only using the least significant bits of the 16-bit output to PSD or PGM.

I am very interested in this, John, because I always wondered whether the linear bit that we lose going into PS from a linear DCRAW TIFF is lost in the same way when developing in ACR, for instance. Does ACR perform the development and gamma correction in floating point before the final 15-bit rounding, or does it apply a 16-bit linear rounding and then gamma?

The second option would be a bit silly if you can afford the first at the same price, so I guess you get more precision from your RAW files by developing them in a developer with non-linear output, such as ACR, rather than in DCRAW (if the editing is to be done on a gamma-corrected image, of course).

However, I never noticed any kind of loss (posterization in the shadows and so forth) from putting DCRAW's linear files into PS. In fact, when blending several different exposures into one HDR, the tonal richness in the shadows in PS is amazing, even when starting from 16-bit linear.
« Last Edit: December 16, 2007, 12:40:31 PM by GLuijk »

John Sheehy
« Reply #16 on: February 04, 2008, 04:22:02 PM »

Quote
I am very interested in this, John, because I always wondered whether the linear bit that we lose going into PS from a linear DCRAW TIFF is lost in the same way when developing in ACR, for instance. Does ACR perform the development and gamma correction in floating point before the final 15-bit rounding, or does it apply a 16-bit linear rounding and then gamma?

I don't know how ACR works internally, except that it loads all 12-bit RAWs as at least 16(15)-bit, but using the LSBs of the 16(15).
ejmartin (Sr. Member, 575 posts)
« Reply #17 on: February 05, 2008, 08:50:13 AM »

Quote
I don't know how ACR works internally, except that it loads all 12-bit RAWs as at least 16(15)-bit, but using the LSBs of the 16(15).


How do you know that it loads RAW files using LSBs?

emil
John Sheehy
« Reply #18 on: February 05, 2008, 12:16:40 PM »

Quote
How do you know that it loads RAW files using LSB's?

Well, now that I think of it, what I am saying applies to DNGs and not necessarily to original RAWs.

If you make an uncompressed DNG from a 12-bit RAW and put in lines of values that use up to the MSB of the 16-bit data, they will clip, of course, but their demosaicing influence can be seen in neighboring pixels. This means that DNGs, at least, normally load with the 4 MSBs of the 16 bits zeroed, which means the MSB of the original data is shifted 4 bits to the right compared to the MSB of a true 16-bit DNG. The 12-bit RAW, therefore, is apparently discriminated against in terms of conversion precision.

I suspect that this is one reason that in ACR, and in other converters, people are seeing better results with 14-bit cameras, even though the extra 2 bits contain no significant signal.
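The 4-bit shift being discussed can be illustrated numerically (the sample value is made up):

```python
# A 12-bit sample loaded into the low 12 bits of a 16-bit word is a factor
# of 16 smaller than the same sample shifted into the high bits, so a
# converter working at a fixed 16-bit scale sees the two very differently.
raw12 = 0x0ABC        # an arbitrary 12-bit sample (2748)
in_lsbs = raw12       # stored as-is: the 4 MSBs stay zero
in_msbs = raw12 << 4  # shifted to occupy the top of the 16-bit range
print(in_lsbs, in_msbs, in_msbs // in_lsbs)  # 2748 43968 16
```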
Panopeeper
« Reply #19 on: February 05, 2008, 01:28:33 PM »

Quote
If you make an uncompressed DNG from a 12-bit RAW and put in lines of values that use up to the MSB of the 16-bit data, they will clip, of course, but their demosaicing influence can be seen in neighboring pixels. This means that DNGs, at least, normally load with the 4 MSBs of the 16 bits zeroed, which means the MSB of the original data is shifted 4 bits to the right compared to the MSB of a true 16-bit DNG. The 12-bit RAW, therefore, is apparently discriminated against in terms of conversion precision.
I'm afraid you are attributing to the DNG conversion characteristics that are simply not there.

1. There is no difference between compressed and uncompressed DNG, except the method of storage.

2. 12-bit raw data appears in the low-order 12 bits of the DNG data, and that is completely normal. The fact that DNG always keeps either 8 or 12 bits means *nothing* regarding the quality of the stored data. You have to treat the stored pixel values as numbers, without regard to the actual storage capacity (which is 16 bits, except in a few cases).

To interpret these numeric values properly, the WhiteLevel info has to be used. This is one of the numerous errors in the DNG specification: the sensor's bit depth is not apparent from the metadata; either WhiteLevel has to be rounded up, or the camera's proprietary info needs to be used.
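As a sketch of that interpretation rule (hypothetical values; WhiteLevel and BlackLevel are real DNG tags, but the function is invented for illustration):

```python
# Normalize a stored DNG pixel value by the WhiteLevel tag, not by the
# storage container's bit depth; assuming 16-bit full scale instead of
# WhiteLevel makes 12-bit data look 16x darker than it is.
def normalized(value, white_level, black_level=0):
    return (value - black_level) / (white_level - black_level)

# A 12-bit sensor's full-scale value sitting in a 16-bit container:
print(normalized(4095, white_level=4095))   # 1.0: correct interpretation
print(normalized(4095, white_level=65535))  # ~0.0625: wrong, container
                                            # bit depth assumed instead
```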

Gabor