Author Topic: MF Digital, myths or facts? A bit of drilling down  (Read 9835 times)
ErikKaffehr
« Reply #20 on: November 07, 2012, 01:52:58 PM »

Hi,

Sorry it should be twice the photons.

Regarding the readout noise, I have chosen 12 electrons/pixel because I wanted to use an optimistic value; I believe the 15-20 range is more probable for MFDBs. Just a few lines below an example is given that is based on data from sensorgen; I will include a reference in the next revision.

Best regards
Erik

Had a brief look. I would argue that such a technically detailed article needs references. For example, you claim "Readout noise for CCDs used in MFD digital is about 12 electron charges". Why should this be the case? How can I see that?

Some things are rather confusing, e.g. "If we assume that we have a full frame sensor of 24x36 mm and compare it with a MF sensor of 24x48 mm size the later[sp?] one will have twice the area, so it will collect about the same number of photons". Equal intensity assumed, twice the area gives twice the number of photons.


deejjjaaaa
« Reply #21 on: November 07, 2012, 01:57:41 PM »

Ultimately, all other things being equal, a bigger sensing area always wins.
but the hard fact is that the technology for smaller sensors (dSLR/P&S/cell phones) is nowadays consistently not the same as for bigger sensors in MF(DB)... so we can't consider all other things equal and argue from sensor area alone (so while the bigger area wins in some areas, in others it doesn't)
Doug Peterson
« Reply #22 on: November 07, 2012, 02:11:21 PM »

given the business you are in, somehow nobody is surprised

I would imagine not. Given that my business is helping photographers pick cameras to make pictures. Not to help scientists diagnose quantum efficiency values.

Don't get me wrong. I think it's a great article. I'm just not in the target audience, nor do I think more than a few % of my customers would be. At least not as written/geared now.

DOUG PETERSON (dep@digitaltransitions.com), Digital Transitions
Dealer for Phase One, Mamiya Leaf, Arca-Swiss, Cambo, Profoto
Office: 877.367.8537
Cell: 740.707.2183
Phase One IQ250 FAQ
ErikKaffehr
« Reply #23 on: November 07, 2012, 02:27:52 PM »

Hi,

My take is that Doug's observation is a valid one. It is about the whole package. I have great respect for Doug and I think that he has made great contributions to these forums.

My intention with the article is to put things in some perspective. Now, we can have different perspectives, depending on experience. I just try to present some facts, keeping bias to a minimum.

Let us just take an example. There is something called thermal noise. It is my understanding that Phase One raw files contain info about ambient temperature, and perhaps also sensor temperature. Phase can use that information to selectively reduce noise in the darks. Unfair advantage to Phase? Probably! Do other vendors do similar things? Probably!

Best regards
Erik




I would imagine not. Given that my business is helping photographers pick cameras to make pictures. Not to help scientists diagnose quantum efficiency values.

Don't get me wrong. I think it's a great article. I'm just not in the target audience, nor do I think more than a few % of my customers would be. At least not as written/geared now.

theguywitha645d
« Reply #24 on: November 07, 2012, 02:31:15 PM »

Regarding the number of photons collected, the only factor that really matters is the number of photons collected. Smaller pixels would collect fewer photons, but there would be more pixels. It matters very little whether you collect 24 000 000 x 1000 photons or 6 000 000 x 4000 photons; you still end up with 24 000 000 000 photons. If you were to print the image at 8x10" at 360 PPI, you would end up with about 2300 photons/pixel in both cases.
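A quick sketch of the arithmetic above (the photon counts are the illustrative figures from the post, not measured data):

```python
# Total photons are the same whether they are spread over 24 MP or 6 MP,
# and so is the count per output pixel once the print size is fixed.

def photons_per_print_pixel(n_pixels, photons_per_pixel, print_pixels):
    """Divide the sensor's total photon count over the pixels of a print."""
    total = n_pixels * photons_per_pixel
    return total / print_pixels

print_px = (8 * 360) * (10 * 360)   # 8x10" at 360 PPI = 10,368,000 print pixels

a = photons_per_print_pixel(24_000_000, 1000, print_px)
b = photons_per_print_pixel(6_000_000, 4000, print_px)

print(round(a), round(b))   # about 2300 photons per print pixel in both cases
```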

So if you cut an image in half, you have half the number of photons? But so what? What is visible is still the same. (Let's ignore the fact that prints and files don't have photons.)

And don't you care about the well capacity and how many photons get in that well? This is what signal is after all. A pixel with a small signal is still a pixel with a small signal--what is around it does not change it into a pixel with more of a signal.

The pixel is a luminance/color data point. It has no spatial information beyond its position in the array. That luminance/color value is directly related to how many photon strikes it receives. The pixels around it are unrelated. S/N of the pixel is important.
ErikKaffehr
« Reply #25 on: November 07, 2012, 02:38:55 PM »

Hi,

A pixel is just a number. By itself it can have no noise.

If you cut an image in half it will lose half of the photons. If you print at the same size it will be 41 percent noisier. It is exactly the same as underexposing by one stop.
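The 41 percent figure follows from photon (shot) noise scaling as the square root of the count; a minimal sketch with made-up photon counts:

```python
import math

# Shot noise is sqrt(N), so relative noise is 1/sqrt(N): halving the photons
# behind a same-size print multiplies relative noise by sqrt(2) ~ 1.41,
# i.e. about 41 percent noisier, the same as one stop of underexposure.

def relative_noise(photons):
    return math.sqrt(photons) / photons   # = 1 / sqrt(photons)

full = relative_noise(20_000)   # illustrative count for the full image
half = relative_noise(10_000)   # half the photons after the crop

print(half / full)   # ~1.414
```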

Using a smaller image you also lose resolution.

Best regards
Erik

So if you cut an image in half, you have half the number of photons? But so what? What is visible is still the same. (Let's ignore the fact that prints and files don't have photons.)

And don't you care about the well capacity and how many photons get in that well? This is what signal is after all. A pixel with a small signal is still a pixel with a small signal--what is around it does not change it into a pixel with more of a signal.

The pixel is a luminance/color data point. It has no spatial information beyond its position in the array. That luminance/color value is directly related to how many photon strikes it receives. The pixels around it are unrelated. S/N of the pixel is important.

ErikKaffehr
« Reply #26 on: November 07, 2012, 02:43:47 PM »

Hi Doug,

Thanks for the nice comment!

I would be glad for suggestions about gearing/writing; my intention is to be objective.

Best regards
Erik

I would imagine not. Given that my business is helping photographers pick cameras to make pictures. Not to help scientists diagnose quantum efficiency values.

Don't get me wrong. I think it's a great article. I'm just not in the target audience, nor do I think more than a few % of my customers would be. At least not as written/geared now.

theguywitha645d
« Reply #27 on: November 07, 2012, 02:50:45 PM »

Hi,

A pixel is just a number. By itself it can have no noise.

But it does have a signal. Are you saying signal is irrelevant? Distributing the exposure over the entire DR of the pixel does not matter?

Quote
If you cut an image in half it will lose half of the photons. If you print at the same size it will be 41 percent noisier. It is exactly the same as underexposing by one stop.

Now you are stretching. S/N ratio does not change. And since the signal does not change, neither does the exposure.

Quote
Using a smaller image you also lose resolution.

Now, you are changing the topic.
ErikKaffehr
« Reply #28 on: November 07, 2012, 02:53:03 PM »

Hi,

What I have seen from my work behind the article is that larger sensors would have smoother tones in light gray areas and an advantage in edge contrast on fine detail.

Best regards
Erik


but the hard fact is that the technology for smaller sensors (dSLR/P&S/cell phones) is nowadays consistently not the same as for bigger sensors in MF(DB)... so we can't consider all other things equal and argue from sensor area alone (so while the bigger area wins in some areas, in others it doesn't)

RFPhotography
« Reply #29 on: November 07, 2012, 03:02:46 PM »

Isn't it true that both the pixel well capacity and the size of the sensor matter?  The pixel well capacity plays a larger part in the dynamic range.  The size of the sensor plays a larger part in the total number of photons captured and thus the overall signal to noise ratio.  So, Eric, in your example, while the two sensors would capture the same number of photons and could have the same S/N as a result, the sensor with the larger pixels should exhibit a better overall dynamic range.  Correct?
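A sketch of the usual engineering definition (full-well capacity over read noise, in stops); the well capacities and read-noise figures below are made-up illustrative values, not specs of any real sensor:

```python
import math

def dr_stops(full_well_electrons, read_noise_electrons):
    """Per-pixel engineering dynamic range in stops (powers of two)."""
    return math.log2(full_well_electrons / read_noise_electrons)

big_pixel   = dr_stops(60_000, 12)   # big well, higher read noise
small_pixel = dr_stops(25_000, 3)    # smaller well, much lower read noise

print(round(big_pixel, 1), round(small_pixel, 1))
# the smaller pixel can still come out ahead if its read noise is low enough
```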

Deeejjjaaa is also correct that not everything is equal between the different formats.  As you point out, Eric, the noise characteristics of CCD and CMOS sensors are different so a direct comparison is somewhat difficult.  Technology such as back-side illumination is also advantageous, allowing more light to be captured (a) in each pixel and (b) in total.
ErikKaffehr
« Reply #30 on: November 07, 2012, 03:16:11 PM »

Hi,

Yes, a larger pixel would have a small advantage in DR. The reason that this is often not the case is that the sensor chips with the highest resolution often use a more advanced chip technology, leading to lower readout noise.

Back side illumination has many advantages, but I don't think it has been implemented in DSLRs yet.

Best regards
Erik






Isn't it true that both the pixel well capacity and the size of the sensor matter?  The pixel well capacity plays a larger part in the dynamic range.  The size of the sensor plays a larger part in the total number of photons captured and thus the overall signal to noise ratio.  So, Eric, in your example, while the two sensors would capture the same number of photons and could have the same S/N as a result, the sensor with the larger pixels should exhibit a better overall dynamic range.  Correct?

Deeejjjaaa is also correct that not everything is equal between the different formats.  As you point out, Eric, the noise characteristics of CCD and CMOS sensors are different so a direct comparison is somewhat difficult.  Technology such as back-side illumination is also advantageous, allowing more light to be captured (a) in each pixel and (b) in total.
« Last Edit: November 07, 2012, 03:18:26 PM by ErikKaffehr »

ErikKaffehr
« Reply #31 on: November 07, 2012, 03:39:18 PM »

Hi,

1) A pixel is just a number, like 1124. Does a number have noise?

2) I'm not stretching. If you crop the image it will be enlarged more. So each pixel in the print will "see" fewer photons, and noise will increase.

Best regards
Erik

But it does have a signal. Are you saying signal is irrelevant? Distributing the exposure over the entire DR of the pixel does not matter?

Now you are stretching. S/N ratio does not change. And since the signal does not change, neither does the exposure.

Now, you are changing the topic.

RFPhotography
« Reply #32 on: November 07, 2012, 03:44:00 PM »

Hi,

Yes, a larger pixel would have a small advantage in DR. The reason that this is often not the case is that the sensor chips with the highest resolution often use a more advanced chip technology, leading to lower readout noise.

And I think this is the point deeejjjaaa was making.

Quote
Back side illumination has many advantages, but I don't think it has been implemented in DSLRs yet.

Best regards
Erik

Not sure about DSLRs.  Certainly P&S cameras and cell phones.  Maybe some mirrorless type cameras, not sure.  But this gets back to the point about how different technologies can impact the final analysis.






EricV
« Reply #33 on: November 07, 2012, 04:10:25 PM »

Regarding the number of photons collected, the only factor that really matters is the number of photons collected. Smaller pixels would collect fewer photons, but there would be more pixels. It matters very little whether you collect 24 000 000 x 1000 photons or 6 000 000 x 4000 photons; you still end up with 24 000 000 000 photons. If you were to print the image at 8x10" at 360 PPI, you would end up with about 2300 photons/pixel in both cases.  Once a print scale is fixed, photons/pixel in the sensor is irrelevant.
I agree completely.  My point was that your article does not make this clear, since the discussion is almost all about photons per pixel.  Adding the content above to your article in progress would be a great improvement.
ErikKaffehr
« Reply #34 on: November 07, 2012, 04:14:02 PM »

Hi Eric,

Thanks a lot. Suggestions like yours are most helpful ;-)

Best regards
Erik

I agree completely.  My point was that your article does not make this clear, since the discussion is almost all about photons per pixel.  Adding the content above to your article in progress would be a great improvement.

ErikKaffehr
« Reply #35 on: November 07, 2012, 04:19:23 PM »

Hi!

I just added a comment from Doug Peterson from Digital Transitions here: http://echophoto.dnsalias.net/ekr/index.php/photoarticles/71-mf-digital-myths-or-facts?start=11

Doug is a long time member of LuLa with an endless number of good contributions.

Best regards
Erik

Hi,

I have published a small article named: MF Digital, myths or facts?

http://echophoto.dnsalias.net/ekr/index.php/photoarticles/71-mf-digital-myths-or-facts

It is intended to drill down into some of the issues; hopefully it is unbiased.

Best regards
Erik

ErikKaffehr
« Reply #36 on: November 07, 2012, 04:31:10 PM »

Hi,

I'd suggest that there are some advantages to back-illuminated CMOS, like better fill factor and less lens cast, but a change of technology would have little effect on the analysis. It would foremost give better low-light performance, and that is not covered in the article. I may add some info about these issues.

Thanks very much for your input!

Best regards
Erik



And I think this is the point deeejjjaaa was making.

Not sure about DSLRs.  Certainly P&S cameras and cell phones.  Maybe some mirrorless type cameras, not sure.  But this gets back to the point about how different technologies can impact the final analysis.








EricV
« Reply #37 on: November 07, 2012, 05:04:07 PM »

And don't you care about the well capacity and how many photons get in that well? This is what signal is after all. A pixel with a small signal is still a pixel with a small signal--what is around it does not change it into a pixel with more of a signal.

The pixel is a luminance/color data point. It has no spatial information beyond its position in the array. That luminance/color value is directly related to how many photon strikes it receives. The pixels around it are unrelated. S/N of the pixel is important.

A pixel is not just a luminance data point at a particular location, it also represents a certain subject size.  One way to think of this is by taking the angular coverage of the lens and dividing by the number of pixels covered.  Another way to think of this is by considering how many pixels are occupied by a physical object in the scene.  By representing a physical object (say a patch of uniform gray sky) with more pixels, I get inherently better signal/noise in the final printed image.  It is not just noise per pixel that matters.  

Under the assumptions that S/N per pixel is dominated by the number of photons collected, and that the number of photons collected per pixel is proportional to the pixel area, a sensor of a given size will have the same overall noise performance whether I divide the sensor area into small pixels or large pixels.  The smaller pixels will have worse S/N per pixel, but in the final image that will be precisely compensated for by the increased number of pixels.
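Under those assumptions (shot noise only, photons proportional to pixel area) the compensation is exact; a quick sketch with illustrative counts:

```python
import math

# One big pixel vs. four small pixels covering the same area: summing the
# small pixels recovers the same signal, and independent Poisson noises add
# in quadrature, so the summed SNR equals the big pixel's SNR.

def shot_snr(photons):
    return photons / math.sqrt(photons)   # = sqrt(photons)

big = shot_snr(16_000)                    # one pixel, N photons

k = 4
small_each = 16_000 // k                  # N/k photons per small pixel
summed_signal = k * small_each
summed_noise = math.sqrt(k * small_each)  # quadrature sum of k shot noises

print(big, summed_signal / summed_noise)  # identical
```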
« Last Edit: November 07, 2012, 05:21:02 PM by EricV »
PierreVandevenne
« Reply #38 on: November 07, 2012, 05:09:17 PM »

1) A pixel is just a number, like 1124. Does a number have noise?

When your pixel states 1124, it simply means that on a certain sensor you can expect the actual incoming data that made that pixel value to have been (for example) between 1114 and 1134. The neighbouring pixel, which could be measuring the same data (say a uniform background), could read 1113 +/- 10, and another one, slightly less sensitive, 983 +/- 11. You'd then see a "noisy" band of three pixels in place of the uniform one you were hoping for.

To state it another way, there is a margin of uncertainty on that single value, and that uncertain part of the signal is the noise in the signal.

Quote
2) I'm not stretching. If you crop the image it will be enlarged more. So each pixel in the print will "see" less photons, noise will increase.

If, in the simplest method, the pixel is doubled, that doubling doesn't change the SNR. You are simply increasing the area that represents the 1124 (+/- 10) measurement, neither changing its value nor the error margin that occurred when it was captured.
PierreVandevenne
« Reply #39 on: November 07, 2012, 05:36:54 PM »

area, a sensor of a given size will have the same overall noise performance whether I divide the sensor area into small pixels or large pixels.  The smaller pixels will have worse S/N per pixel, but in the final image that will be precisely compensated for by the increased number of pixels.

In practice, this works roughly for digital cameras (as demonstrated in Emil Martinec's paper).

Now, consider the following cameras that happen to have 1 unit of read noise.

Camera 1 has a sensel that collects 16 units of light. When read, it will be 15 - 16 - 17.
Camera 2 has 4 sensels that collect 4 units of light each. When read, you'll have 3-4-5 / 3-4-5 / 3-4-5 / 3-4-5 (worst case 12 - 20).

So you see that this is not "precisely compensated" although the distribution of your signal will indeed be centered on the same 15-16-17.

It doesn't matter too much with photography because you are almost always using a decent exposure and working with significantly larger numbers than the ones above. So you could simply say that you'll ignore the issue because it is negligible (do the math with 60000 units and 4 times 15000 units and a 10-unit read noise, for example), but redefining noise or demonstrating an equality that doesn't exist will not satisfy everyone.
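Doing that suggested math in a few lines (shot noise plus read noise added in quadrature; the 60000 / 4 x 15000 / 10-unit figures are the ones from the post):

```python
import math

def snr(photons, read_noise, n_pixels=1):
    """SNR of n_pixels summed, each contributing its read noise in quadrature."""
    noise = math.sqrt(photons + n_pixels * read_noise ** 2)
    return photons / noise

one_big  = snr(60_000, 10)               # 1 pixel collecting 60000 units
four_sum = snr(60_000, 10, n_pixels=4)   # 4 pixels of 15000, 4x the read noise

print(round(one_big, 1), round(four_sum, 1))
# the big pixel wins, but at this exposure the difference is well under 1%
```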

The noise (uncertainty on the captured signal) doesn't change after the capture if you can store the data reliably.

One last thing: the "scientific" definition of noise and the "photographic" perception of noise are two different things: shooting a deep dark sky background should be noisy simply because the sky background is not uniform (there's also the variability of light-unit arrival rates, Poisson, but let's not get into that), but a photographer will find the totally uniform black background produced by the Nikon blackbox less noisy.

(fixed typos)