Pages: « 1 2 [3] 4 »
Author Topic: MF Digital, myths or facts? A bit of drilling down  (Read 8572 times)
EricV (Full Member, Posts: 128)
« Reply #40 on: November 07, 2012, 06:14:03 PM »

... So you see that this is not "precisely compensated" ...

Which is why I was careful to preface my comment with the disclaimer "Under the assumptions that S/N per pixel is dominated by the number of photons collected ..." Smiley

As you point out, when readout noise is significant, there is a penalty for more numerous smaller pixels, which have a noise component which does not scale with pixel size or collected photons.
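EricV's equivalence under the photon-dominated assumption can be sketched numerically. A hypothetical patch of sensor collects on average 40,000 photoelectrons, read out either as one large pixel or as four quarter-size pixels that are then summed (all numbers are illustrative, not from any real sensor):

```python
import numpy as np

rng = np.random.default_rng(1)
trials = 200_000

# Shot noise only: photoelectron counts are Poisson-distributed.
big = rng.poisson(40_000, trials)                    # one large pixel
small = rng.poisson(10_000, (trials, 4)).sum(axis=1) # four small pixels, summed

snr_big = big.mean() / big.std()
snr_small = small.mean() / small.std()
print(f"large pixel SNR {snr_big:.0f}, four small pixels SNR {snr_small:.0f}")
```

Both come out at about sqrt(40000) = 200, because a sum of four Poisson(10,000) draws has exactly the same distribution as one Poisson(40,000) draw. The difference only appears once a per-readout noise term is added.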
ErikKaffehr (Sr. Member, Posts: 7252)
« Reply #41 on: November 07, 2012, 09:47:48 PM »

Hi,

If you check this posting: http://www.luminous-landscape.com/forum/index.php?topic=72167.msg572836#msg572836

you can see that quadrupling the pixel area has a small effect on DR when normalized to print size, but a very small effect on tonal range.

For me this is a good indication that EricV is right.

Best regards
Erik



ErikKaffehr (Sr. Member, Posts: 7252)
« Reply #42 on: November 07, 2012, 10:31:30 PM »

Hi,

Yes, I agree, but it is pretty much what I say in the article. There is a comparison of two exposures of a color checker which I presume were taken under identical conditions.

My understanding is that color is either accurate or pleasant. Both of the images I tested were oversaturated (technically speaking) and I reduced saturation on both to get close to correct saturation. The article says: "The measured data above actually indicates that the D800E is better in reproducing a color checker card under a given set of conditions. The main difference between the Hassy and the D800E was that the Hassy image processed in LR4.2 was significantly oversaturated. When processing in LR4.2 I pulled back 13 units of saturation on the Hassy and 4 units on the Nikon. Delta E is about half on the Nikon."

I tried to profile the ColorChecker shots I used, but they were both slightly overexposed.

Best regards
Erik


Erik, I think the weakest part of your great article is the section on color accuracy. There is so much that can influence the results, especially in the RAW processor. Every manufacturer imposes their idea of good color into a camera. I guess the best test would be how close could you get the cameras to a target by profiling and then see where the cameras differ from each other. And color accuracy has really two criteria, how accurate it is in absolute and relative terms. You can have very high absolute accuracy and really bad looking (unnatural) color.
« Last Edit: November 07, 2012, 11:36:35 PM by ErikKaffehr »

PierreVandevenne (Sr. Member, Posts: 510)
« Reply #43 on: November 08, 2012, 04:04:50 AM »

Which is why I was careful to preface my comment with the disclaimer "Under the assumptions that S/N per pixel is dominated by the number of photons collected ..." Smiley

As you point out, when readout noise is significant, there is a penalty for more numerous smaller pixels, which have a noise component which does not scale with pixel size or collected photons.

I think we agree, which is why I said that it doesn't matter much for photographic applications with decent exposure. It still does matter slightly in the dark areas of a properly exposed picture, which is why Nikon is essentially castrating them from a signal processing point of view. It did of course matter more when small pixels (say 15000 / 25000 FWC) were suffering from higher read noise (say 15)...

What tickles me a bit is when instead of saying "for a whole lot of complex reasons it ends up not mattering much in practice and the result is roughly equivalent" one says "this is precisely so as demonstrated there".

Very minor issue, I concede, and in the context of photographic education, your approach probably beats mine.
 
hjulenissen (Sr. Member, Posts: 1666)
« Reply #44 on: November 08, 2012, 07:35:04 AM »

1) A pixel is just a number, like 1124. Does a number have noise?
If that number is used to represent some real number between 0 and 2000 (e.g. 1124.42), then I would say that it is a noisy measurement, yes.

-h
bjanes (Sr. Member, Posts: 2756)
« Reply #45 on: November 08, 2012, 10:14:43 AM »

Under the assumptions that S/N per pixel is dominated by the number of photons collected, and that the number of photons collected per pixel is proportional to the pixel area, a sensor of a given size will have the same overall noise performance whether I divide the sensor area into small pixels or large pixels.  The smaller pixels will have worse S/N per pixel, but in the final image that will be precisely compensated for by the increased number of pixels.

This is not exactly correct, since if one bins 4 pixels into one pixel post capture via software, the binned superpixel will have 4 read noise contributions, whereas a larger pixel with 4x the area would have only one read noise contribution. Software binning is the mechanism underlying the DXO screen vs pixel data. Hardware binning is widely used with monochrome scientific CCDs (see here), but the process is considerably more complex for Bayer array sensors and, as far as I know, hardware binning with Bayer sensors is only available with Phase One's Sensor+ technology (see here. Click on the P+ tutorial).
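Bill's four-readouts-versus-one point can be put in numbers. With an assumed read noise of 5 e- RMS per readout (a placeholder figure), the read noise terms add in quadrature, and the binning penalty shows up only in the shadows:

```python
import math

read = 5.0  # read noise per readout, e- RMS (assumed, illustrative)

# Compare a highlight-level and a deep-shadow signal over the same binned area.
for signal in (40_000, 100):  # photoelectrons collected over the binned area
    snr_large = signal / math.sqrt(signal + read**2)       # one readout
    snr_binned = signal / math.sqrt(signal + 4 * read**2)  # four readouts in quadrature
    print(f"{signal:>6} e-: large pixel SNR {snr_large:.1f}, binned SNR {snr_binned:.1f}")
```

At 40,000 e- the two SNRs are essentially equal (shot noise dominates); at 100 e- the binned superpixel is clearly worse, which is Bill's point.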

While a large sensor does collect more photoelectrons, one should remember that the SNR from shot noise increases only as the square root of the number of photons collected. Doubling the sensor area (as in going from an APS sized sensor to a full frame 35 mm sensor) will improve the SNR by a factor of only 1.4. Newer technology CMOS sensors (such as in the Nikon D7000) can compete quite well with older full frame sensors. The same considerations apply to MFDBs. As Erik has pointed out, the MFDBs are hampered by their high read noise, which limits their dynamic range. However, their SNR in the midtones (where read noise does not contribute significantly to the SNR) is quite good.

For ultimate image quality few well informed observers would deny that MFDBs are the way to go, but the price to performance ratio is quite steep.

Regards,

Bill

« Last Edit: November 08, 2012, 10:23:12 AM by bjanes »

ErikKaffehr (Sr. Member, Posts: 7252)
« Reply #46 on: November 08, 2012, 11:08:28 AM »

Hi,

I guess that my findings agree pretty well with Bill's conclusion. MFDBs have a small advantage regarding shot noise in highlights and midtones. I would suggest that this may be hard to illustrate with images, because all the modern sensors are pretty good in this area.

I would expect MFDBs to respond better to sharpening than DSLRs, because I would expect them to have less shot noise. MFDBs are normally not OLP-filtered and they normally don't have microlenses, which may also reduce the need for sharpening.

You really need to look at the whole package. I'm pretty sure that MFDBs have an advantage in the resolution/MTF/microcontrast area. On the other hand I suspect that the DR advantage of MF is by and large a myth. Color reproduction and midtone tonality, I don't know.

Best regards
Erik




ErikKaffehr (Sr. Member, Posts: 7252)
« Reply #47 on: November 08, 2012, 11:16:14 AM »

Hi,

My point is that a pixel in isolation is pretty meaningless. Noise only becomes apparent when you look at many pixels, dozens or hundreds of them.

A pixel with 1124 electron charges would have shot noise of about 33.5 electron charges, so roughly 68% of such pixels would hold between 1090 and 1158 electron charges.
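The one-sigma figure is easy to check with a quick simulation of a uniformly lit patch (the 1124 e- mean is the number used in this exchange; everything else is just Poisson statistics):

```python
import numpy as np

rng = np.random.default_rng(0)
mean = 1124
# A million pixels viewing the same uniform light level: each collects a
# Poisson-distributed number of electrons (shot noise).
pixels = rng.poisson(mean, 1_000_000)

sigma = np.sqrt(mean)  # shot noise ~ sqrt(signal), about 33.5 e-
inside = np.mean(np.abs(pixels - mean) <= sigma)
print(f"shot noise {sigma:.1f} e-, fraction within one sigma: {inside:.3f}")
```

The fraction within one sigma comes out near the familiar 68% of a normal distribution.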

Best regards
Erik



thierrylegros396 (Sr. Member, Posts: 636)
« Reply #48 on: November 08, 2012, 01:15:08 PM »

Had a brief look. I would argue that such a technically detailed article needs references. For example, you claim "Readout noise for CCDs used in MFD digital is about 12 electron charges". Why should this be the case? How can I see that?

Some things are rather confusing, e.g. "If we assume that we have a full frame sensor of 24x36 mm and compare it with a MF sensor of 24x48 mm size the later[sp?] one will have twice the area, so it will collect about the same number of photons". Equal intensity assumed, twice the area gives twice the number of photons.


I thought that the IQ180 sensor size was 40.4x53.7mm, not 24x48mm ?

And, please Erik, how do you calculate SNR 12.5 ?!

Thierry
« Last Edit: November 08, 2012, 01:16:46 PM by thierrylegros396 »
PierreVandevenne (Sr. Member, Posts: 510)
« Reply #49 on: November 08, 2012, 01:45:46 PM »

My point is that a pixel in isolation is pretty meaningless. Noise arises when there are several dozens or hundred pixels.

That pixel (sensel) in isolation is, however, how every sensor's DR is characterized: FWC, read noise, and so on. You can have a one-sensel sensor or a million-sensel sensor...

In your definition, where do you put the limit when noise arises? 24 pixels? 36? 48? 480?

If there is a certain amount of noise at 500 pixels, does it go up or down at 1000 pixels?

Assuming by that definition that there is x amount of noise with y pixels, what's the noise for one pixel: x/y or x·y?

If there is no noise (zero noise) in the one pixel case how come 1000 of them have some amount of noise at all?
ErikKaffehr (Sr. Member, Posts: 7252)
« Reply #50 on: November 08, 2012, 02:31:31 PM »

Hi!

Please check the enclosed pixel and tell me the noise level.

If we make an image of an evenly illuminated surface there will be a statistical variation across the pixels. If we presume the data numbers correspond to electron charges, we would know that about 68% of the pixels would fall between 1090 and 1158 electron charges if the mean was 1124. But a single pixel doesn't have noise.

Best regards
Erik


« Last Edit: November 08, 2012, 02:39:36 PM by ErikKaffehr »

ErikKaffehr (Sr. Member, Posts: 7252)
« Reply #51 on: November 08, 2012, 02:43:24 PM »

Hi,

Added sample images from a Phase One IQ180 processed in LR 4.2 and Capture One, compared with the Nikon D800.

http://echophoto.dnsalias.net/ekr/index.php/photoarticles/71-mf-digital-myths-or-facts?start=3

Best regards
Erik


Hi,

I have published a small article named: MF Digital, myths or facts?

http://echophoto.dnsalias.net/ekr/index.php/photoarticles/71-mf-digital-myths-or-facts

It is intended to drill down in some of the issues, hopefully it is unbiased.

Best regards
Erik
EricV (Full Member, Posts: 128)
« Reply #52 on: November 08, 2012, 03:09:02 PM »

If we make an image of an evenly illuminated surface there will be a statistical variation across the pixels. If we presume the data numbers correspond to electron charges, we would know that about 68% of the pixels would fall between 1090 and 1158 electron charges if the mean was 1124. But a single pixel doesn't have noise.

Single pixel noise is a useful concept and does make sense.  For a group of similar pixels with similar illumination, we agree that noise causes statistical variation in the response across the different pixels.  Similarly for a single pixel, I think we can agree that noise will cause statistical variation in the response of that pixel over time or over repeated measurements.  It is not a great conceptual leap to say that a single pixel has a true value and a measurement error.  The true value is the average light intensity, while the error is the statistical fluctuation expected for that light intensity.  For a single image the best estimate of the true value is the measured value, but many noise reduction techniques rely on capturing multiple images to provide an improved estimate of the true value.

Instead of saying simply "this pixel has a measured value of 1124" it would be more informative to say "this pixel has a measured value of 1124 and a statistical uncertainty of 33".  In scientific publications, when measurement results are reported, it is rather common to see statements like "pixel value = 1124 +- 33".  If there are multiple sources of error, it is even better to say something like "pixel value = 1124 +- 33 (statistical) +- 12 (readout noise)".
ErikKaffehr (Sr. Member, Posts: 7252)
« Reply #53 on: November 08, 2012, 03:39:32 PM »

Hi,

Yes, I absolutely agree with that reasoning. On the other hand, you are still sampling photons, so you are essentially saying that the pixel varies over time. That is pretty similar to the statistical variation across a simultaneous sampling of a number of pixels. You still need several samples to see a variation.

My point, mostly, is that we never do anything useful with a single pixel. We always use a large number of pixels and there will be a statistical variation.

You are right about the readout noise. It would be significant at 1124 electron charges on older sensors. On the latest CMOS sensors readout noise seems to be around 3 electron charges. The noise sources add in quadrature, so shot noise, which would be around 34 charges, would dominate.
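Adding the two sources in quadrature with the figures from this exchange (1124 e- signal, 3 e- read noise for a modern CMOS, 12 e- for an older CCD) shows how thoroughly shot noise dominates at this level:

```python
import math

signal = 1124
shot = math.sqrt(signal)  # shot noise, about 33.5 e-

for read in (3.0, 12.0):  # read noise in e- RMS: modern CMOS vs older CCD
    # Independent noise sources add in quadrature.
    total = math.sqrt(shot**2 + read**2)
    print(f"read noise {read:>4} e-: total noise {total:.1f} e-")
```

Even the 12 e- read noise only lifts the total from 33.5 to about 35.6 e-.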

Best regards
Erik


« Last Edit: November 08, 2012, 03:51:26 PM by ErikKaffehr »

BartvanderWolf (Sr. Member, Posts: 3458)
« Reply #54 on: November 08, 2012, 03:59:05 PM »

Hi!

Please check the enclosed pixel and tell me the noise level.

Hi Erik,

Shoot it several times, or read it several times after resetting it, and its noise can be calculated ...

A noiseless sensel will produce the same output every time (ain't gonna happen).

Cheers,
Bart
« Last Edit: November 08, 2012, 05:43:44 PM by BartvanderWolf »

ErikKaffehr (Sr. Member, Posts: 7252)
« Reply #55 on: November 08, 2012, 04:19:55 PM »

Hi Bart,

Same as EricV says, and I agree. My point is more that a pixel is pretty meaningless without a context. With multiple exposures you add a temporal context, but I still think very few of us would enjoy a single-pixel movie, although the pixel would have both shot noise and readout noise. A couple of million of those pixels, on the other hand, give a nice image.

It would be feasible to build a sensor whose pixels are binary, either black or white. If there were enough of those pixels, the sensor would form a good image. As far as I know such sensor designs have been proposed.
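Erik's binary-pixel idea (an oversampled one-bit sensor) can be sketched: threshold each tiny pixel at one photon, then recover the light level from the fraction that fired. The 0.5 photons-per-pixel mean is an assumed, illustrative value:

```python
import numpy as np

rng = np.random.default_rng(3)

lam = 0.5  # mean photons per binary pixel (assumed)
# Each one-bit pixel reports 1 if it caught at least one photon.
jots = rng.poisson(lam, 200_000) >= 1

p = jots.mean()           # observed P(fire); true value is 1 - exp(-lam)
estimate = -np.log(1 - p) # invert P = 1 - exp(-lam) to recover the mean
print(f"fraction fired {p:.3f}, recovered light level {estimate:.3f}")
```

With 200,000 binary pixels the recovered mean lands very close to 0.5, which is why a huge count of one-bit pixels can form a usable image.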

Best regards
Erik

PierreVandevenne (Sr. Member, Posts: 510)
« Reply #56 on: November 08, 2012, 06:52:41 PM »

It would be feasible to build a sensor whose pixels are binary, either black or white. If there were enough of those pixels, the sensor would form a good image. As far as I know such sensor designs have been proposed.

And realized. One often-cited paper is http://www.ece.rice.edu/~duarte/images/csCamera-SPMag-web.pdf and you might be surprised by the results.

ErikKaffehr (Sr. Member, Posts: 7252)
« Reply #57 on: November 12, 2012, 12:42:01 AM »

Hi,

Sorry for taking time responding.

SNR 12.5 means the SNR three stops below saturation: 2^3 = 8, and 100% divided by eight gives 12.5%.

The 24x48 sensor should have been 36x48 (two 24x36 sensors side by side); it was given as an example of doubling the sensor size. The example was purely hypothetical.

The real calculation is based on the P65+, as data for that sensor is available from sensorgen.

Best regards
Erik
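Erik's three-stops-below-saturation figure can be worked through with placeholder numbers (the FWC and read noise below are assumptions for illustration, not sensorgen's actual P65+ data):

```python
import math

fwc = 50_000  # full-well capacity, e- (assumed, illustrative)
read = 12.0   # read noise, e- RMS (assumed, illustrative)

stops_down = 3
signal = fwc / 2**stops_down                # 12.5% of saturation
snr = signal / math.sqrt(signal + read**2)  # shot noise plus read noise
print(f"signal {signal:.0f} e-, SNR {snr:.1f}")
```

At this signal level the read noise contributes almost nothing; the SNR is essentially sqrt(signal).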




David Watson (Sr. Member, Posts: 394)
« Reply #58 on: November 12, 2012, 06:03:18 AM »

Totally agree with you Bernard.  The amount of light reaching the sensor will be a function of the gathering power of the lens, the internal transmission losses and the percentage of the image circle that actually falls on the sensor.  Perhaps an interesting test would be to compare various 35mm lenses with various MFD lenses in a test rig on both types of sensors so that these variables can be eliminated. 

I shoot with both MFD (H4D-60) and 35mm (D800E). Both are fine instruments and very often I could use either camera for a job. There are situations however where the ease of use and portability of the Nikon make it my tool of choice, and situations where the Hasselblad is my preferred option - usually in the studio. They are both very very good.

However, from a subjective point of view, I like what the combination of Hasselblad back and lenses and their Phocus software produces in a large format print.

David Watson ARPS
thierrylegros396 (Sr. Member, Posts: 636)
« Reply #59 on: November 12, 2012, 06:04:49 AM »


Thanks Erik!

Thierry