Author Topic: Canon 40D Dynamic Range test: 9 f-stops  (Read 36541 times)
ejmartin
Sr. Member
Posts: 575

« Reply #20 on: February 06, 2008, 01:30:13 AM »

Quote
Well, now that I think of it, what I am saying applies to DNGs and not necessarily to original RAWs.

If you make an uncompressed DNG from a 12-bit RAW and put in lines of values that use up to the MSB of the 16-bit data, they will clip, of course, but their demosaicing influence can be seen in neighboring pixels.  This means that the DNGs, at least, normally load with the 4 MSBs of the 16 bits zeroed, which means that the MSB of the original data is shifted 4 bits to the right compared to the MSB of a true 16-bit DNG.  The 12-bit RAW, therefore, is apparently discriminated against in terms of conversion precision.

I suspect that this is one reason that in ACR, and in other converters, people are seeing better results with 14-bit cameras, even though the extra 2 bits contain no significant signal.


I took a 12-bit raw file, developed it in ACR, and wrote it to TIFF from CS3. I then took the TIFF file into IRIS and generated a list of the levels in the green channel. At the lower levels it looks like this (the first number is the level in the range 0-32767, the second is the number of pixels at that level):


109 0
110 0
111 0
112 494
113 0
114 0
115 0
116 0
117 0
118 0
119 0
120 0
121 0
122 0
123 0
124 0
125 0
126 0
127 0
128 420
129 0
130 0


and going to higher values

1000 0
1001 14407
1002 0
1003 0
1004 0
1005 15338
1006 0
1007 0
1008 0
1009 0
1010 16481
1011 0
1012 0
1013 0
1014 0
1015 17865
1016 0
1017 0
1018 0
1019 18862


and to still higher values

5000 0
5001 642
5002 642
5003 618
5004 602
5005 638
5006 0
5007 636
5008 594
5009 572
5010 663
5011 0

and to higher ones

10000 74
10001 155
10002 83
10003 155
10004 68
10005 81
10006 163
10007 78
10008 157


The large gaps at small levels of the 15+1 bit output are due to the gamma correction applied to the 16-bit linear, demosaiced image data; the gaps get smaller at larger levels until they disappear completely, and then in the highlights two linear levels get squeezed into one output level by the gamma correction.  If one does the same exercise with a 14-bit raw file, the output tiff shows the *same* gaps and highlight compressions.

Undoing the gamma correction, the inferred gaps in the raw data just prior to gamma correction are one part in 2^16 (this can be quite accurately assessed at the point in the raw data where the gaps disappear; at this point a change of one in the output level equals one unit of change in the input level just prior to gamma correction).

The conclusion?  ACR is using the full 16 bits to calculate up to the point of gamma correction. If ACR were able to do a linear raw conversion, the file I analyzed would have no gaps in the 16-bit raw data. It's actually slightly subtler than that: since the output is a 16-bit signed integer, half the values (the negative ones) go unused; the output effectively has 15-bit color depth, while the numbers before gamma correction have 16-bit depth. I presume the population of intermediate 16-bit tonal values is due to the interpolation of 12-bit values during the debayering step, which fills in the missing values by taking averages over neighboring pixels.  Operations on 12-bit tonal values that don't mix data from multiple pixels would not accomplish that.

If ACR were doing calculations with less precision on 12-bit data, I would presume that the gaps in the tiff output would be larger (i.e. exhibit more combing).  I don't see how these results fit with the proposal that ACR populates the LSBs to do its calculations; that would naively force the output to have bigger gaps between populated levels, since both 12-bit and 14-bit files are mapped to the same tonal range 0-32K.

BTW, if one does this same exercise with C1 LE or DPP, one finds no gaps at all in the tiff output.  I interpret this to mean that these converters do their internal math in floating point rather than the 16-bit unsigned integer that ACR uses.
Logged

emil
John Sheehy
Sr. Member
Posts: 838

« Reply #21 on: February 06, 2008, 10:11:51 AM »

Quote
If ACR were doing calculations with less precision on 12-bit data, I would presume that the gaps in the tiff output would be larger (i.e. exhibit more combing).  I don't see how these results fit with the proposal that ACR populates the LSBs to do its calculations; that would naively force the output to have bigger gaps between populated levels, since both 12-bit and 14-bit files are mapped to the same tonal range 0-32K.

What I did, precisely, was open an uncompressed DNG of a blackframe in a hex editor and replace dashes of original pixels with contiguous runs of various values above 4095, from just above 4095 up to near 65K.

I then opened the DNG in ACR and looked for these altered dashes. They all clipped right within the pixels that I re-wrote, but by playing with various parameters to emphasize the demosaicing effect in the surrounding pixels, I was able to see differences between the different dash intensities.  That suggests that when a 12-bit RAW becomes a 16-bit DNG, ACR acknowledges those 4 MSBs, which should be all zeros, within its dynamic range.

I can't imagine why they would do this as a special case; that's why I thought that ACR gave precision to DNGs (at least) by aligning them on the LSB.  Wouldn't it make more sense for ACR to shift 4 bits to the left, knowing that the highest value shouldn't be more than 4095?  If they did do that, why would these 4 MSBs from the DNG still be there?  There are no RAWs with more than 16 bits right now, and I find it hard to believe that Adobe thought that far ahead and dedicated the extra memory for greater-than-16-bit RAW.
« Last Edit: February 06, 2008, 10:16:27 AM by John Sheehy » Logged
John Sheehy
Sr. Member
Posts: 838

« Reply #22 on: February 06, 2008, 10:34:38 AM »

Quote
I'm afraid you are attributing to the DNG conversion characteristics which are simply not there.

1. There is no difference in compressed vs. uncompressed DNG, except the method of storage.

Once again, we fail to communicate.

Nothing I said about the uncompressed DNG implied any difference other than compression/storage.

Quote
2. 12bit raw data appears in the low-order 12 bits of the DNG data, and that is completely normal. The fact that DNG always keeps either 8 or 12 bits means *nothing* relating to the quality of the stored data. You have to treat the stored pixel values as numbers, without regard to the actual storage capacity (which is 16 bits, except in a few cases).

It might be normal for DNG, but it is certainly not a good thing for the user.  It might be advantageous to write a program that takes a DNG from a 12-bit source, bit-shifts the 12 LSBs to the 12 MSBs, and multiplies the embedded whitepoint, blackpoint, and any other levels-related value in the DNG by 16.  This might also help with ACR's 1-dimensional (line or banding) noise reduction.
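The rescaling proposed here amounts to a one-line bit operation plus matching metadata scaling. A hypothetical sketch (the function name and metadata handling are invented for illustration, not taken from any DNG tool):

```c
#include <stdint.h>
#include <stddef.h>

/* Shift 12-bit DNG samples from the low 12 bits to the high 12 bits of
   each 16-bit word, and scale the level-related metadata to match.
   A left shift by 4 is the same as multiplying by 16. */
static void promote_12_to_16(uint16_t *pixels, size_t n,
                             uint32_t *white_level, uint32_t *black_level)
{
    for (size_t i = 0; i < n; i++)
        pixels[i] <<= 4;    /* 0..4095  ->  0..65520 */
    *white_level *= 16;     /* e.g. WhiteLevel 4095 -> 65520 */
    *black_level *= 16;
}
```

A converter that then treats the file as plain 16-bit data would see the samples aligned on the MSB, which is the alignment being argued for.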

Quote
To properly interpret these numeric values, the WhiteLevel info has to be used. This is one of the numerous errors in the DNG specification: the sensor's bit depth is not apparent from the metadata; either WhiteLevel has to be rounded up, or the camera's proprietary info needs to be used.

Yes, but that doesn't tell us how much precision is used.  The converter is still free to either scale the data before processing, or align in memory on the LSB.
Logged
ejmartin
Sr. Member
Posts: 575

« Reply #23 on: February 07, 2008, 10:11:37 AM »

Quote
What I did, precisely, was open an uncompressed DNG of a blackframe in a hex editor and replace dashes of original pixels with contiguous runs of various values above 4095, from just above 4095 up to near 65K.

I then opened the DNG in ACR and looked for these altered dashes. They all clipped right within the pixels that I re-wrote, but by playing with various parameters to emphasize the demosaicing effect in the surrounding pixels, I was able to see differences between the different dash intensities.  That suggests that when a 12-bit RAW becomes a 16-bit DNG, ACR acknowledges those 4 MSBs, which should be all zeros, within its dynamic range.

I can't imagine why they would do this as a special case; that's why I thought that ACR gave precision to DNGs (at least) by aligning them on the LSB.  Wouldn't it make more sense for ACR to shift 4 bits to the left, knowing that the highest value shouldn't be more than 4095?  If they did do that, why would these 4 MSBs from the DNG still be there?

Hmm.  I'm having a tough time reconciling our two observations.  On the one hand, as you say, if ACR shifts the 12-bit raw data to the MSBs internally, then it shouldn't know about your supra-white pixels' specific values.  On the other hand, if it were mapping the data to the LSBs, then the spacing of gaps in the output file should be a part in 2^12 or 2^14 depending on the bit depth, rather than a part in 2^16 as observed.
Logged

emil
Panopeeper
Sr. Member
Posts: 1805

« Reply #24 on: February 07, 2008, 10:33:54 AM »

Quote
It might be normal for DNG, but it is certainly not a good thing for the user.  It might be advantageous to write a program that takes a DNG from a 12-bit source, bit-shifts the 12 LSBs to the 12 MSBs, and multiplies the embedded whitepoint, blackpoint, and any other levels-related value in the DNG by 16.
You are concentrating on bits; however, that's not how ACR works. How many bits should be used, for example, for the 40D at ISO 100 or ISO 160?

Forget about bits and concentrate on the pixel values on their own. The white level specifies the limit of pixel values for the processing. If you reduce the "exposure" (i.e. the pixel values) or change the WB, the contrast, etc., which leads to pixel value changes, pixels which have fallen outside the range will be "pulled back" into the range and thus processed.

Example: take a 40D shot at ISO 200, exposed to the right. ACR believes that the level of clipping is 13600, and accordingly indicates lots of clipping. Reduce the exposure and the "clipped" pixels appear in their full glory.
Logged

Gabor
Guillermo Luijk
Sr. Member
Posts: 1283

« Reply #25 on: February 09, 2008, 02:40:41 AM »

Quote
If ACR were doing calculations with less precision on 12-bit data, I would presume that the gaps in the tiff output would be larger (i.e. exhibit more combing). I don't see how these results fit with the proposal that ACR populates the LSBs to do its calculations; that would naively force the output to have bigger gaps between populated levels, since both 12-bit and 14-bit files are mapped to the same tonal range 0-32K.

BTW, if one does this same exercise with C1 LE or DPP, one finds no gaps at all in the tiff output. I interpret this to mean that these converters do their internal math in floating point rather than the 16-bit unsigned integer that ACR uses.

Your last comment is very interesting.

I developed a RAW file in ACR with neutral settings for everything (except WB, for which I used 'as shot', and the colour space, which had to be chosen somehow; sRGB in this case). It is true that gaps appear, so I guess the maths in the demosaicing and development process are not done in floating point (otherwise it would be silly to round to 16-bit integers right before the final gamma correction...).
So if it is true that C1 and DPP fill all those gaps, and that this is because they work in floating point, is ACR much faster than C1 and DPP?

On the other hand, I am not an expert, but I think a very important part of the process is the colour profile conversion: when I develop images in DCRAW with no output colour profile set (the -o 0 option), I can clearly distinguish Bayer-interpolated levels from captured levels (seen as equally spaced strong peaks).

But when I develop with output conversion to sRGB, the matrix operations on the levels completely hide the peak sequence.

Since the gamma value is deeply related to the final colour space, I wonder if the filling of gaps in DPP and C1 is related more to some different process in the colour profiling than to floating point vs. integer operation.


Another point is that having 16 (or 15+1) bit integer data before gamma correction can be very convenient in terms of speed, since the gamma correction can be quickly implemented through a precalculated gamma[0..65535] integer array, much faster to apply than computing x^(1/2.2) powers.

I heard from Julia Borg (easily found in the DPreview forums) that she had coded a completely floating-point RAW developer (an evolution of DCRAW, I think) to get the highest possible quality of data in a 100% floating-point image workflow.
« Last Edit: February 09, 2008, 04:27:26 AM by GLuijk » Logged

ejmartin
Sr. Member
Posts: 575

« Reply #26 on: February 10, 2008, 11:40:03 AM »

I suppose one scenario consistent with both our observations would be the following:  ACR loads the RAW file into the LSBs, consistent with John's observation, and then does the Bayer interpolation.  It then sets white and black points and rescales the data to span 2^16 levels (though filling only 2^12 or 2^14 of the possible values).  If the conversion from camera color data to XYZ color space (or LAB, ProPhoto, etc.) is now performed, it will be done at 16-bit precision, and since ACR does this via matrix multiplication, the resulting linear combinations of the three color channels will take on all 2^16 possible values, consistent with my observation.  So some early calculations would be done at the bit depth of the camera, and later ones at 16-bit depth.
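The matrix step in this scenario can be sketched like so; the mixing coefficients below are invented for illustration and are not a real camera profile:

```c
#include <stdint.h>

/* Convert one camera-RGB pixel to a working space via a 3x3 matrix.
   Because each output channel is a weighted mix of all three inputs,
   the outputs land between the original sparse levels, which is how the
   matrix step can populate all 2^16 values even from 12-bit input. */
static void apply_matrix(const double m[3][3],
                         const uint16_t in[3], uint16_t out[3])
{
    for (int r = 0; r < 3; r++) {
        double v = m[r][0]*in[0] + m[r][1]*in[1] + m[r][2]*in[2];
        if (v < 0.0)     v = 0.0;      /* clip negatives */
        if (v > 65535.0) v = 65535.0;  /* clip overflow */
        out[r] = (uint16_t)(v + 0.5);
    }
}
```

With an identity matrix the pixel passes through unchanged; with any real mixing matrix the rounded weighted sums fill intermediate levels.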
Logged

emil
bjanes
Sr. Member
Posts: 2784

« Reply #27 on: February 10, 2008, 12:15:48 PM »

Quote
I took a 12-bit raw file, developed it in ACR, written to tiff from CS3. I then took the tiff file into IRIS and generated a list of the levels in the green channel. At the lower levels it looks like this (first number is the level from the range 0-32767, second one is the number of pixels with that level):


Emil,

A very interesting post. BTW, Guillermo has a Histogrammar program (http://www.guillermoluijk.com/software/histogrammar/index.htm) that can show the gaps graphically at the lower end and how they fill in at the upper end of the range. It reads TIFFs directly.

Bill
Logged
Guillermo Luijk
Sr. Member
Posts: 1283

« Reply #28 on: February 10, 2008, 08:54:19 PM »

Quote
I suppose one scenario consistent with both our observations would be the following: ACR loads the RAW file into the LSBs, consistent with John's observation, and then does the Bayer interpolation. It then sets white and black points and rescales the data to span 2^16 levels (though filling only 2^12 or 2^14 of the possible values). If the conversion from camera color data to XYZ color space (or LAB, ProPhoto, etc.) is now performed, it will be done at 16-bit precision, and since ACR does this via matrix multiplication, the resulting linear combinations of the three color channels will take on all 2^16 possible values, consistent with my observation. So some early calculations would be done at the bit depth of the camera, and later ones at 16-bit depth.

Emil, I don't want to believe that the demosaicing and Bayer interpolation process is done in the native 12- or 14-bit range in ACR.
I cannot tell, but I can tell how DCRAW works, which is not floating point but is clearly better than the 12-bit/14-bit approach:

1. DCRAW converts 12/14-bit samples into 16 bit (just a x16 or x4 integer multiplication).
2. Next it calculates the black offset (when needed; some cameras don't have one) and subtracts it.
3. Next, DCRAW is aware of the point at which every camera clips its RGB channels (Panopeeper can tell us a lot about this) and scales them so that the clip point on each channel reaches the maximum (65535). In the same operation (after all, it's all about multiplications) it applies the white balance (individual multipliers on each channel).

All these steps are done in a very important function of DCRAW's code called scale_colors() (see below), which, by the way, is the only one I have looked at.

4. Next comes the whole development process: Bayer demosaicing, highlight recovery if set, and colour profiling if set.

So unfortunately DCRAW is not floating point, but it is 16-bit all the time.

This is a linear TIFF produced by DCRAW (BTW, plotted using the program Bill refers to; find a tutorial here: http://www.guillermoluijk.com/tutorial/histogrammar/index_en.htm):

1. THE RAW FILE
 
This is a real native RAW histogram, prior to demosaicing. All values are in the 0..4095 (i.e. 12-bit) range:
 

 
We can see that not the whole 0..4095 range is actually used. This is because of a DC black offset value all cameras have (usually around 250 levels in my 350D). Some brands subtract that value before saving the final RAW data, or simply don't produce it; I have no idea which is the true version.

 
2. THE DEMOSAICING PROCESS (RAW DEVELOPMENT)
 
For demosaicing, the previous values are scaled by a factor of 2^(16-12)=16, plus an additional WB scaling. And from these scaled values, interpolated values (already in the 16-bit range) are calculated:

(NOTE: for simplicity this is the histogram of only the blue channel. Two blue tones were used to distinguish levels according to their origin.)


 
3. GETTING INTO PHOTOSHOP
 
Our image is now in a real 16-bit range. Look now at what happens to a real histogram when we load it into PS (ACR outputs the same effect) and save it back in 16-bit TIFF format:
 
Original


After just Open and Save in PS

 
PS loses 1 bit of precision. The original image produced by DCRAW is linear; that's why it has no gamma holes, which will appear later in PS when converting to a non-linear colour space.

 
Regards.

PS: DCRAW's scale_colors() function:

void CLASS scale_colors()
{
  unsigned bottom, right, row, col, x, y, c, sum[8];
  int val, dblack;
  double dsum[8], dmin, dmax;
  float scale_mul[4];

  if (user_mul[0])
    memcpy (pre_mul, user_mul, sizeof pre_mul);
  if (use_auto_wb || (use_camera_wb && cam_mul[0] == -1)) {
    memset (dsum, 0, sizeof dsum);
    bottom = MIN (greybox[1]+greybox[3], height);
    right  = MIN (greybox[0]+greybox[2], width);
    for (row=greybox[1]; row < bottom; row += 8)
      for (col=greybox[0]; col < right; col += 8) {
        memset (sum, 0, sizeof sum);
        for (y=row; y < row+8 && y < bottom; y++)
          for (x=col; x < col+8 && x < right; x++)
            FORC4 {
              if (filters) {
                c = FC(y,x);
                val = BAYER(y,x);
              } else
                val = image[y*width+x][c];
              if (val > maximum-25) goto skip_block;
              if ((val -= black) < 0) val = 0;
              sum[c] += val;
              sum[c+4]++;
              if (filters) break;
            }
        for (c=0; c < 8; c++) dsum[c] += sum[c];
skip_block:
        continue;
      }
    FORC4 if (dsum[c]) pre_mul[c] = dsum[c+4] / dsum[c];
  }
  if (use_camera_wb && cam_mul[0] != -1) {
    memset (sum, 0, sizeof sum);
    for (row=0; row < 8; row++)
      for (col=0; col < 8; col++) {
        c = FC(row,col);
        if ((val = white[row][col] - black) > 0)
          sum[c] += val;
        sum[c+4]++;
      }
    if (sum[0] && sum[1] && sum[2] && sum[3])
      FORC4 pre_mul[c] = (float) sum[c+4] / sum[c];
    else if (cam_mul[0] && cam_mul[2])
      memcpy (pre_mul, cam_mul, sizeof pre_mul);
    else
      fprintf (stderr,_("%s: Cannot use camera white balance.\n"), ifname);
  }
  if (pre_mul[3] == 0) pre_mul[3] = colors < 4 ? pre_mul[1] : 1;
  dblack = black;
  if (threshold) wavelet_denoise();
  maximum -= black;
  for (dmin=DBL_MAX, dmax=c=0; c < 4; c++) {
    if (dmin > pre_mul[c])
      dmin = pre_mul[c];
    if (dmax < pre_mul[c])
      dmax = pre_mul[c];
  }
  if (!highlight) dmax = dmin;
  FORC4 scale_mul[c] = (pre_mul[c] /= dmax) * 65535.0 / maximum;
  if (verbose) {
    fprintf (stderr,_("Scaling with black %d, multipliers"), dblack);
    FORC4 fprintf (stderr, " %f", pre_mul[c]);
    fputc ('\n', stderr);
  }
  for (row=0; row < iheight; row++)
    for (col=0; col < iwidth; col++)
      FORC4 {
        val = image[row*iwidth+col][c];
        if (!val) continue;
        val -= black;
        val *= scale_mul[c];
        image[row*iwidth+col][c] = CLIP(val);
      }
}
« Last Edit: February 10, 2008, 09:04:09 PM by GLuijk » Logged

203
Guest
« Reply #29 on: February 13, 2008, 07:08:39 AM »

GLuijk, did you do a similar test with either the 5D or 1Ds series cameras?
Thanks.
Logged
Guillermo Luijk
Sr. Member
Posts: 1283

« Reply #30 on: February 13, 2008, 08:52:06 AM »

Quote
GLuijk, did you do a similar test with either the 5D or 1Ds series cameras?
Thanks.

Kind of, with the 5D, but the image was not ideal for the test. However, the 5D showed a bit less DR than the 40D, about 8.5 f-stops subjectively speaking.
« Last Edit: February 13, 2008, 08:52:29 AM by GLuijk » Logged

dmward
Jr. Member
Posts: 56

« Reply #31 on: February 13, 2008, 07:20:01 PM »

I remember a dynamic range test that someone suggested in a thread on another forum a few years back. The test was to side-light something near white with texture (a terry cloth towel, for example). Then, starting at the camera's exposure setting (the meter attempting to make the white mid-gray), bracket up and down in half-stop steps until getting a file that is completely blown out and another with no info. Looking at this span of images in Lightroom makes it relatively easy to count the number of images in the span from black to white. 16 images means 8 stops, etc.

This seems to be a more empirical approach than the one taken here, which depends on several factors that may or may not be controllable.

David

Quote
I'm afraid you are attributing to the DNG conversion characteristics which are simply not there.

1. There is no difference in compressed vs. uncompressed DNG, except the method of storage.

2. 12bit raw data appears in the low-order 12 bits of the DNG data, and that is completely normal. The fact that DNG always keeps either 8 or 12 bits means *nothing* relating to the quality of the stored data. You have to treat the stored pixel values as numbers, without regard to the actual storage capacity (which is 16 bits, except in a few cases).

To properly interpret these numeric values, the WhiteLevel info has to be used. This is one of the numerous errors in the DNG specification: the sensor's bit depth is not apparent from the metadata; either WhiteLevel has to be rounded up, or the camera's proprietary info needs to be used.
Logged
Panopeeper
Sr. Member
Posts: 1805

« Reply #32 on: February 13, 2008, 08:11:34 PM »

Quote
starting at the camera's exposure setting (the meter attempting to make the white mid-gray), bracket up and down in half-stop steps until getting a file that is completely blown out and another with no info. Looking at this span of images in Lightroom makes it relatively easy to count the number of images in the span from black to white. 16 images means 8 stops, etc.

I don't know what this has to do with DNG, but that's ok. However, the mentioned empirical method does not work as clear-cut as you may think.

A smaller problem is how much one can trust the camera to expose as expected within an acceptable tolerance. For example, my Canon 20D cannot be trusted.

The bigger problem is determining the low end. What is "black" in a digital image? Pixel intensity zero? That does not work; pixel values close to zero are usually so noisy that they don't reveal anything but noise. In any selected area you can find pixels with very low and with higher (sometimes quite high) values.

Is "black" that which appears close to black on the display? That depends on the raw processor and the respective adjustments.

Is "black" that which cannot be recognized as "intelligible" even after adjusting the intensity? The transition is fluid.

Therefore some measurement of acceptability is necessary. This can be, for example, the standard deviation between the pixels of a uniform area (ignoring the level of details). It can be based on the presence of fine details, as Jonathan suggests, quite ignoring the noise. Obviously there can be different measures, but "no info" is IMO not a useful category.
Logged

Gabor
203
Guest
« Reply #33 on: February 14, 2008, 04:35:02 PM »

Quote
GLuijk, did you do a similar test with either the 5D or 1Ds series cameras?
Thanks.

Thanks, was that with RAW or jpegs?
Logged
Guillermo Luijk
Sr. Member
Posts: 1283

« Reply #34 on: February 15, 2008, 10:31:22 AM »

Quote
Thanks, was that with RAW or jpegs?

I always use RAW. JPEG is a gamma-corrected image, so it's difficult to use it to measure DR the way I do my tests: I take a single shot of a high dynamic range scene and study how noisy the information found in each of the f-stops of the sensor's DR is. Allocating the different regions of the image to the right f-stops would be tricky with JPEG.
« Last Edit: February 15, 2008, 10:32:31 AM by GLuijk » Logged

203
Guest
« Reply #35 on: February 16, 2008, 06:52:53 PM »

Thanks a lot for the info.
I am also wondering if you have conducted these tests on either the 1Ds3 or any of the MF backs?
Thanks!
Logged
Guillermo Luijk
Sr. Member
Posts: 1283

« Reply #36 on: February 16, 2008, 09:16:42 PM »

Quote
Thanks a lot for the info.
I am also wondering if you have conducted these tests on either the 1Ds3 or any of the MF backs?
Thanks!

No I haven't, sorry. Regarding the 1Ds3, I guess it must be around 9 f-stops. Improving DR through better sensor design is a slow process; improving DR by one f-stop means approximately halving the noise level in the shadows, so we won't see miracles in the market.

To really achieve better DR in DSLR cameras today you have to go to Fuji's Super CCD, which is 'something more' than just a low-noise sensor in the shadows, since it captures, in just one shot, two images of different exposure to improve DR. It easily reaches 11 f-stops of DR, clearly beating all the Canons and Nikons. The two images of the Super CCD are 3.6 EV apart, so you can get an idea of the expansion in DR these cameras can provide with respect to a single-exposure image sensor.
« Last Edit: February 16, 2008, 09:19:56 PM by GLuijk » Logged

John Sheehy
Sr. Member
Posts: 838

« Reply #37 on: February 16, 2008, 10:53:29 PM »

Quote
No I haven't, sorry. Regarding the 1Ds3, I guess it must be around 9 f-stops. Improving DR through better sensor design is a slow process; improving DR by one f-stop means approximately halving the noise level in the shadows, so we won't see miracles in the market.

To really achieve better DR in DSLR cameras today you have to go to Fuji's Super CCD, which is 'something more' than just a low-noise sensor in the shadows, since it captures, in just one shot, two images of different exposure to improve DR. It easily reaches 11 f-stops of DR, clearly beating all the Canons and Nikons. The two images of the Super CCD are 3.6 EV apart, so you can get an idea of the expansion in DR these cameras can provide with respect to a single-exposure image sensor.

There's no free lunch there, though.  What do you do with the photosite data that is very noisy when its less sensitive twin is clean?  You discard it, that's what you do.  You have to discard the clipped pixels, too.  So, you've thrown away photons, and overall IQ, to get a peek at the extremes of DR.

There's nothing quite like having high quantum efficiency, and no read noise.
« Last Edit: February 16, 2008, 11:17:18 PM by John Sheehy » Logged
Guillermo Luijk
Sr. Member
Posts: 1283

« Reply #38 on: February 19, 2008, 01:41:11 PM »

Quote
There's no free lunch there, though.  What do you do with the photosite data that is very noisy when its less sensitive twin is clean?  You discard it, that's what you do.  You have to discard the clipped pixels, too.  So, you've thrown away photons, and overall IQ, to get a peek at the extremes of DR.

There's nothing quite like having high quantum efficiency, and no read noise.

But discarding these pixels doesn't result in lower IQ for the collected information, but in less collected information, i.e. less final resolution in Mpx. That's the problem with the Super CCD: it needs 12M photosites to achieve what are actually just 6 Mpx, as the discarded info has to be interpolated.

The perfect camera does not exist, but shooting a Fuji knowing (almost) for certain that your exposure will be fine, even if you are not very careful, is something to think about. It's like a car with tyres that can never get a puncture. For some kinds of photography it's just great.
Logged

Panopeeper
Sr. Member
Posts: 1805

« Reply #39 on: February 19, 2008, 02:02:09 PM »

Guillermo,

an off-topic but important subject.

I guess you borrowed the Sony A700 from someone to make the DR test, right? That camera had firmware version 2 at the time. I found a horrendous issue with it (in that shot): at ISO 200 (and where else? I have only that image) it operates with only 3000 levels in 12-bit mode. With firmware v3 it goes up to 3500 (still crap).
« Last Edit: February 19, 2008, 02:02:26 PM by Panopeeper » Logged

Gabor