Author Topic: Sensor and Sensibility  (Read 18070 times)
jani
Sr. Member
Posts: 1604
« Reply #60 on: August 23, 2006, 05:22:56 AM »

Quote
The $64,000 question is thus, 'How does the 6mp F30 image at ISO 800 compare with the 8mp 20D image at ISO 3200?'

Where are such comparisons? Why are reviewers not getting to the crux of the matter?
You can probably do a manual comparison by opening the review page for the 20D or 30D, and then the review page for the F30, on DPReview.

They already have a comparison with the Nikon D50.

Getting back to the reviews of the 30D and the F30, we can look at the SD graphs; all numbers are approximate readings off the graphs.

Luminance SD (black patch, gray patch):

30D @ ISO 1600: 2.1, 2.5
30D @ ISO 3200: 3.3, 3.7
F30 @ ISO 1600: 4, 2.8
F30 @ ISO 3200: 5.1, 3.8

RGB SD:

30D @ ISO 1600: 3.3
30D @ ISO 3200: 3.7
F30 @ ISO 1600: 3.2
F30 @ ISO 3200: 4.3

I wouldn't consider a +/- 0.2 difference as significant, since we can see similar differences between the 20D test and the 30D test.

Preliminary conclusion: the F30 is really good at gray luminance noise and RGB noise. Fujifilm's use of 6 Mpx plus 6 Mpx for highlights works well in the studio test, and judging from user reports, it works well in real life, too.

Now if they could only launch an F40 or something with anti-shake (or whatever they want to call it), raw support for us nerds, a wider lens, and improved automatic exposure, I'd buy one and carry it around with me "always".

Edit: Yes, I know I didn't answer your question about ISO 800; I thought ISO 1600 was more interesting, but you could always look for yourself.
« Last Edit: August 23, 2006, 05:23:53 AM by jani »

Jan
Ray
Sr. Member
Posts: 8900
« Reply #61 on: August 23, 2006, 07:00:18 AM »

Quote
You can probably do a manual comparison by opening the review page for the 20D

Jani,
You probably can, but it's odd that DPReview always mentions that the numbers should not be compared with those from other reviews. Why is that, I wonder? They can't change their methodology that often.

Quote
They already have a comparison with the Nikon D50.

This is precisely the sort of comparison I find inconclusive. Same ISO but nothing else the same. I mean, who takes a photo based on a choice of ISO rather than DoF and/or shutter speed? We have the F30 at f4.9 and 1/460th compared with the D50 at f9 and 1/160th, followed by a comment that the F30 had more in-camera sharpening applied and more DoF. Instead of using f9 with the D50 they could have used f12 or f13 and at least got the same DoF equivalent, or used f3.5 with the FinePix instead of f4.9.
EricV
Full Member
Posts: 132
« Reply #62 on: August 23, 2006, 06:34:44 PM »

Here is a stab at comparing sensors with different pixel sizes, keeping everything else the same.  

Suppose some manufacturer decides to make two cameras, using the same lens and sensor area, but different pixel sizes.  Camera A has large pixels, while camera B has small pixels.  Let's say the small pixels of camera B are made by subdividing each large pixel of camera A into four sub-pixels.

Clearly both cameras need the same lens and aperture and shutter speed to capture equivalent images.  What then are the differences between the resulting images?

The camera with smaller pixels will have an advantage in system resolution.  The magnitude of this effect depends on whether the system resolution is dominated by the lens or by the sensor.  If the lens can resolve 100 lp/mm and the sensor has 10 micron pixels, both resolutions are comparable, and pixel size has a significant effect on resolution.  If the lens is not this spectacular (or is stopped down below f/8), or if the sensor pixels are smaller, then smaller pixels provide less improvement in system resolution.
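The lens-versus-sensor comparison above can be sketched with a line of arithmetic (a rough model; the 100 lp/mm lens and 10 micron pitch are the figures from the paragraph, and this ignores Bayer interpolation and anti-alias filtering):

```python
# A sensor's Nyquist limit is one line pair per two pixels:
# 1000 microns per mm divided by twice the pixel pitch in microns.
pixel_pitch_um = 10                          # the 10 micron pixels above
nyquist_lp_mm = 1000 / (2 * pixel_pitch_um)  # 50 lp/mm, same order as the lens

# Halving the pitch doubles the sensor's limit, so pixel size matters here.
halved_pitch_lp_mm = 1000 / (2 * 5)          # 5 micron pixels -> 100 lp/mm
```

Once the sensor's Nyquist limit is well above what the lens delivers, further pixel shrinkage buys little system resolution, which is the point made above.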

The camera with larger pixels will have an advantage in system noise.  Since not everyone seems to agree with this, let's go through the argument slowly.

Let's agree to expose the image correctly in each camera, to just barely saturate the brightest pixels.  The large pixels capture 4 times as many photons as the small pixels, but the sensor charge capacity (full well depth) is correspondingly 4 times as great, so this requires the same exposure (aperture and shutter speed) in both cases.  Let's say this exposure produces digitized values of 256 in the saturated small pixels and 1024 in the saturated large pixels, at the same electronics gain.  (Of course camera A may subsequently multiply all values by 1/4 to provide a common intensity scale, but this does not affect signal/noise considerations.)

Now what about noise?  Sensor noise is generally dominated by the readout electronics.  Let's say the readout noise is a fixed number of electrons, corresponding to a digitized value of 1 count.  Since every pixel is subject separately to this readout noise, it will be the same for the small pixels and the large pixels, at the same electronics gain.  But remember the large pixels produce 4 times more signal per pixel, so the signal/noise ratio will be 4x better for the large pixels than for the small pixels.

The situation is not quite as bad as this, because the small pixels are bunched together closer in the final displayed image, so some averaging occurs.  This averaging may occur digitally, if the image is displayed small, as pixels are down-sampled to match the printer or screen resolution.  If the four small pixels are averaged back into one large pixel, the noise will improve by a factor of 2 by simple statistics.  This still leaves a factor of 2 worse noise than if the averaging was done in the sensor before digitization.

Photon statistics (shot noise) is another noise source.  This will be the dominant noise for bright pixels, where signal/noise is already good, but it will not be the dominant noise for dark pixels, where signal/noise is poor and hence important.  If sensor readout noise was somehow made negligible and photon statistics became the dominant noise source, then we would have a different situation.  The small pixels would have 1/4 the signal and hence 1/2 the noise of the large pixels, and by the averaging argument given above this factor of 2 would cancel out.  In this situation, the small pixels would be equivalent to the large pixels in signal/noise.
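The read-noise bookkeeping in the two paragraphs above can be checked numerically (a toy model using the post's own assumed numbers: read noise of 1 count per digitization, saturation at 256 and 1024 counts, shot noise ignored to isolate the read-noise effect):

```python
import math

read_noise = 1.0     # counts per digitization, same for both sensors
small_sat = 256.0    # small pixel at saturation
large_sat = 1024.0   # large pixel at saturation (4x the area, same gain)

# Per-pixel S/N against read noise alone: the large pixel is 4x better.
snr_small = small_sat / read_noise                 # 256
snr_large = large_sat / read_noise                 # 1024

# Average four small pixels back into one large one: the signals add to
# 1024 counts, but the four independent read-noise doses add in
# quadrature to sqrt(4) = 2 counts.
snr_averaged = (4 * small_sat) / (math.sqrt(4) * read_noise)   # 512
```

The averaged result is still a factor of 2 worse than the large pixel, matching the conclusion that averaging after digitization only recovers half the deficit.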
phila
Sr. Member
Posts: 263
« Reply #63 on: August 23, 2006, 07:31:22 PM »

Lots and lots of info here:

www.robgalbraith.com/public_files/Canon_Full-Frame_CMOS_White_Paper.pdf

bjanes
Sr. Member
Posts: 2822
« Reply #64 on: August 23, 2006, 09:25:55 PM »

Quote
Here is a stab at comparing sensors with different pixel sizes, keeping everything else the same. 

Suppose some manufacturer decides to make two cameras, using the same lens and sensor area, but different pixel sizes.  Camera A has large pixels, while camera B has small pixels.  Let's say the small pixels of camera B are made by subdividing each large pixel of camera A into four sub-pixels.

Photon statistics (shot noise) is another noise source.  This will be the dominant noise for bright pixels, where signal/noise is already good, but it will not be the dominant noise for dark pixels, where signal/noise is poor and hence important.  If sensor readout noise was somehow made negligible and photon statistics became the dominant noise source, then we would have a different situation.  The small pixels would have 1/4 the signal and hence 1/2 the noise of the large pixels, and by the averaging argument given above this factor of 2 would cancel out.  In this situation, the small pixels would be equivalent to the large pixels in signal/noise.

The above analysis of noise is not correct. Photon statistics follow a Poisson distribution, where the standard deviation of the noise is equal to the square root of the number of photons collected. If the small pixel collects 4 photons, the large pixel will collect 16 photons. The standard deviation of the noise will be 2 photons with the small pixel and 4 photons with the large pixel. The S/N will be 4/2  or 2 for the small pixel and 16/4 or 4 for the large pixel. The large pixel wins.
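The arithmetic above, spelled out (the 4- and 16-photon counts are purely illustrative):

```python
import math

small_photons = 4                  # photons collected by the small pixel
large_photons = 4 * small_photons  # same exposure, 4x the area -> 16

# Poisson shot noise: standard deviation equals sqrt(mean count).
snr_small = small_photons / math.sqrt(small_photons)   # 4/2 = 2
snr_large = large_photons / math.sqrt(large_photons)   # 16/4 = 4
```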
John Sheehy
Sr. Member
Posts: 838
« Reply #65 on: August 23, 2006, 09:36:09 PM »

Quote
and then the review page for the F30, on DPReview.

Comparisons like this are only good for "out of the camera" shots.  It is clear that the Fuji F30 has lots of noise reduction (high-ISO images look distorted); what's to say that the Canon compared on that page wouldn't clean up considerably with noise-removal software?  The F30 images have no room for any further NR.

Small sensors will always have lots of RAW noise at high ISOs; even with ideal readout, they will still have lots of shot noise unless they have extremely low resolution.
Ray
Sr. Member
Posts: 8900
« Reply #66 on: August 23, 2006, 10:41:17 PM »

I've been looking at some 'Imaging Resource' comparison images and notice that the F30 at ISO 800 in broad daylight, and sunshine, produces the plasticky effect sometimes seen in Neat Image noise reduction when noise reduction is over-done.

I've never noticed such an effect with the 20D at ISO 3200 with a full exposure to the right, so John has a very valid point. Any valid comparison between the F30 (at ISO 800) and the 20D (at ISO 3200, or more fairly, the 10D to keep the total pixel count the same) would have to include noise reduction applied to the 10D image.
jani
Sr. Member
Posts: 1604
« Reply #67 on: August 24, 2006, 04:45:00 AM »

Quote
Comparisons like this are only good for "out of the camera" shots.  It is clear that the Fuji f30 has lots of noise reduction (high ISO images look distorted);
Yes, and if I recall correctly, that is also noted in the review of the camera, although it's less heavy-handed with NR than most other compact cameras.

I didn't quite expect my post to be taken out of context of the original reviews, but I see now that was a mistake; I'll try to include the caveats next time.

Jan
EricV
Full Member
Posts: 132
« Reply #68 on: August 24, 2006, 11:48:28 AM »

Quote
The above analysis of noise is not correct. Photon statistics follow a Poisson distribution, where the standard deviation of the noise is equal to the square root of the number of photons collected. If the small pixel collects 4 photons, the large pixel will collect 16 photons. The standard deviation of the noise will be 2 photons with the small pixel and 4 photons with the large pixel. The S/N will be 4/2  or 2 for the small pixel and 16/4 or 4 for the large pixel. The large pixel wins.
This is exactly what I said ("the small pixels would have 1/4 the signal and hence 1/2 the noise of the large pixels"), but thank you for making it clearer with a concrete example.  Considering photon statistics alone, the larger pixel will have 2x better signal/noise per pixel.  

However, that is not the end of the story.  The smaller pixel camera can completely recover from this apparent extra noise simply by summing four pixels into one.  Using your example, the signal from the four small pixels will become 4+4+4+4=16, matching the large pixel, and the noise will become sqrt(4+4+4+4)=4, again matching the large pixel.  This averaging can be done explicitly, by resizing the digital file, or implicitly, by viewing the print from a distance where the four pixels are merged by the human eye.  In this situation, there is no noise penalty for breaking the large pixel into smaller pixels.
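The summing argument can be verified with the same numbers (shot noise only, read noise ignored):

```python
import math

per_pixel = 4                  # photons in each of the four small pixels
summed_signal = 4 * per_pixel  # 4+4+4+4 = 16, matching the large pixel

# Independent Poisson noises add in quadrature, i.e. their variances
# (equal to the means) simply add: sqrt(4+4+4+4) = 4, again matching
# the large pixel.
summed_noise = math.sqrt(4 * per_pixel)

snr_summed = summed_signal / summed_noise   # 4: no shot-noise penalty
```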
Ray
Sr. Member
Posts: 8900
« Reply #69 on: August 25, 2006, 12:28:16 AM »

Quote
Suppose some manufacturer decides to make two cameras, using the same lens and sensor area, but different pixel sizes. Camera A has large pixels, while camera B has small pixels. Let's say the small pixels of camera B are made by subdividing each large pixel of camera A into four sub-pixels.

Eric,
There's another issue that just recently came to my attention as a result of the announcement of the 10mp Canon 400D. I had always assumed that the gaps between microlenses were very small, basically insignificant.

It now seems, if one can believe Canon's white papers, that these gaps have in the past been quite large and that, with each successive model, Canon has been reducing the width of the gaps so that, in effect, the fill factor increases to offset (at least to some extent) the smaller light-gathering area of the smaller pixel pitch.

If a manufacturer decided to place 4 small pixels in the area of one large pixel, the total fill factor of the 4 individual microlenses would probably amount to a smaller area than the one big microlens. There'd be a greater total area taken up with interconnects.

Another issue that's not clear to me is read noise. Do we know for certain that read noise is proportional to the strength of the signal? I recall reading somewhere that one method of reducing read noise is 'binning' where a group of say 4 pixels is read as one pixel. Resolution is then reduced but S/N is increased. Perhaps this method only works well with CCD sensors.
« Last Edit: August 25, 2006, 12:30:30 AM by Ray »
EricV
Full Member
Posts: 132
« Reply #70 on: August 25, 2006, 11:18:03 AM »

Quote
Another issue that's not clear to me is read noise. Do we know for certain that read noise is proportional to the strength of the signal? I recall reading somewhere that one method of reducing read noise is 'binning' where a group of say 4 pixels is read as one pixel. Resolution is then reduced but S/N is increased.
Readout noise is constant, independent of the strength of the signal.  This is ultimately the reason why large pixels have better signal/noise than small pixels, as I described in my original post.  Photon statistics does not behave this way, but readout noise is more important than photon statistics for all current sensors (at least in dark areas of the image, which is where signal/noise gets low and noise becomes noticeable).  Since the readout noise penalty is paid per pixel, an image can be made less noisy by performing fewer digitizations, using a sensor with fewer pixels.  As you noted, some manufacturers recognize this and deliberately bin pixels together at their highest ISO settings, to improve noise at the cost of resolution.  The reason this trick works is that the signals are binned in the sensor before digitization, so the readout noise penalty is paid on fewer pixels.
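The binning argument can be sketched with hypothetical numbers (shot noise ignored to isolate the read-noise penalty; the 1-count read noise and 16-count signal are made up for illustration):

```python
import math

read_noise = 1.0         # counts added by each digitization (hypothetical)
signal_per_pixel = 16.0  # counts in each of the four pixels to combine

# Bin in the sensor first: one digitization, one dose of read noise.
snr_binned = (4 * signal_per_pixel) / read_noise                     # 64

# Digitize the four pixels, then average: four independent read-noise
# doses combine in quadrature to sqrt(4) = 2 counts.
snr_averaged = (4 * signal_per_pixel) / (math.sqrt(4) * read_noise)  # 32
```

Binning before digitization is a factor of 2 better here, which is why the trick costs resolution but buys noise.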
Ray
Sr. Member
Posts: 8900
« Reply #71 on: August 25, 2006, 11:40:45 AM »

Quote
Readout noise is constant, independent of the strength of the signal.

This is not what I've read, but it would take me some time to find the reference. Read noise is greater in larger pixels but not proportionally greater so there is some net reduction when binning pixels.
John Sheehy
Sr. Member
Posts: 838
« Reply #72 on: August 25, 2006, 01:25:10 PM »

Quote
This is not what I've read, but it would take me some time to find the reference. Read noise is greater in larger pixels but not proportionally greater so there is some net reduction when binning pixels.

It may be variable amongst cameras, but with the 20D, with a short exposure, noise (measured in electrons) at any RAW level is almost *exactly* (I'm talking like 1% error here) the square root of the sum of the electrons captured squared, and the number of electrons corresponding to blackframe noise at that ISO, squared.  IOW, shot noise and blackframe (readout) noise account for 99% of the noise.  That is ignoring any difference in amplification between odd and even lines, which is very small in my 20D (but significant in my 10D, and some 1Ds RAWs I have seen).
Ray
Sr. Member
Posts: 8900
« Reply #73 on: August 25, 2006, 08:39:21 PM »

Quote
It may be variable amongst cameras, but with the 20D, with a short exposure, noise (measured in electrons) at any RAW level is almost *exactly* (I'm talking like 1% error here) the square root of the sum of the electrons captured squared, and the number of electrons corresponding to blackframe noise at that ISO, squared.  IOW, shot noise and blackframe (readout) noise account for 99% of the noise.  That is ignoring any difference in amplification between odd and even lines, which is very small in my 20D (but significant in my 10D, and some 1Ds RAWs I have seen).

Sorry, John. You'll have to explain that more clearly. ".. the square root of the sum of the electrons captured squared.." to my simple mind means, 'the sum of the electrons captured'   .
John Sheehy
Sr. Member
Posts: 838
« Reply #74 on: August 25, 2006, 09:55:11 PM »

Quote
Sorry, John. You'll have to explain that more clearly. ".. the square root of the sum of the electrons captured squared.." to my simple mind means, 'the sum of the electrons captured'  .

The word "sum" means that at least two items will be mentioned ... not the first thing mentioned after the word "sum".

Like, "the length of the hypotenuse of a right triangle is equal to the square root of the sum of the vertical side squared,  and the horizontal side squared".

All in electrons:
Nt = total noise
Ns = shot noise  
Nb = blackframe noise

Nt = sqrt(Ns^2 + Nb^2), as an equation.

I compared this to about a dozen real-world samples (various ISOs and levels), and most were about 1% off.  The predicted value was generally lower than the measured one, but that may be because it is impossible to get a perfectly flat area in a RAW sample.
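The quadrature sum is easy to check numerically (the 100-electron signal is an arbitrary example, and the 7.4e- blackframe figure is borrowed from the read-noise value quoted later in this thread):

```python
import math

def total_noise(shot, blackframe):
    """Independent noise sources combine in quadrature (all in electrons)."""
    return math.sqrt(shot ** 2 + blackframe ** 2)

# Example: 100 electrons captured -> shot noise sqrt(100) = 10e-.
# With 7.4e- of blackframe noise the total is about 12.4e-, not 17.4e-:
# quadrature summing is gentler than straight addition.
nt = total_noise(math.sqrt(100), 7.4)
```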
« Last Edit: August 25, 2006, 09:56:39 PM by John Sheehy »
Olivier_G
Newbie
Posts: 34
« Reply #75 on: August 26, 2006, 08:03:30 AM »

Back with some comments:

Anon,
Quote
I read this, but I have no idea what you are talking about. Your "photon"-based hypothesis does not work. A wave-based explanation is far better.
I think there are still some misunderstandings, as my 'photon theory' goes against neither the 'wave theory' nor 'exposure'. I'll try to continue the detailed explanation of my previous post to clarify things... (time needed, though).

Ray,
Quote
Let's consider a situation where the exposure on the F828 at f4 and ISO 100 gives us the required shutter speed we need to freeze subject movement. In these circumstances, if I want the shot without blurring and with good DoF, I have no option but to underexpose the shot from the 20D by 2.5 stops, which is quite a significant underexposure. How would the two images compare? I'd say they'd be pretty much on a par.
Now we know those 'push processing' instructions really do serve a purpose. We bump up the ISO setting to ISO 600.
The F828 at ISO 100 will probably be not far from the 20D at ISO 100 underexposed by 2.5 stops, nor from the 20D at ISO 600. But keep in mind that the Sony 8MP 2/3" sensor really underperformed. Here is an interesting note about pushing the Leica D2 1 or 2 stops to get the best from it.
Quote
The $64,000 question is thus, 'How does the 6mp F30 image at ISO 800 compare with the 8mp 20D image at ISO 3200?'
There is actually a 3-stop difference due to sensor size, so let's compare the Fuji F30 6MP at ISO 400 with the Canon 20D 8MP at ISO 3200. What do we get? Not much!   They look different: less noise and detail on the Fuji, which could be due to noise reduction, settings, the sensor, etc... Note: noise measurement figures are near useless. You really need to look at similar pictures with details to get an idea.

EricV,
Quote
Let's say the readout noise is a fixed number of electrons, corresponding to a digitized value of 1 count.  Since every pixel is subject separately to this readout noise, it will be the same for the small pixels and the large pixels, at the same electronics gain.  But remember the large pixels produce 4 times more signal per pixel, so the signal/noise ratio will be 4x better for the large pixels than for the small pixels. If the four small pixels are averaged back into one large pixel, the noise will improve by a factor of 2 by simple statistics.  This still leaves a factor of 2 worse noise than if the averaging was done in the sensor before digitization.
Photon statistics (shot noise) is another noise source. The small pixels would have 1/4 the signal and hence 1/2 the noise of the large pixels, and by the averaging argument given above this factor of 2 would cancel out.  In this situation, the small pixels would be equivalent to the large pixels in signal/noise.
I agree with your explanation.

More comments:
- This should confirm (p3) that read noise is independent of signal strength.
- Read noise and dark current noise are lower in smaller photosites, for the same technology.
- Photodiode/photosite area (aperture) and microlens efficiency will not scale as well when photosites shrink (although tech advances improve this regularly).
- I read that Canon halved the microlens gap with the 400D vs the 350D... which also means that the remaining improvement we can expect here is of the same size (i.e. not huge).
- Some cameras (20D...) seem to be already shot-noise limited for high-quality work. This is based on this document: read noise = 7.4e-, which gives balanced read = shot noise at signal = 55e- for an SNR of 5 (very low quality). At high quality (e.g. SNR = 40), shot noise is already more than 5x higher than read noise...
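The crossover figure in the last point can be reproduced directly (the 7.4e- read noise is the value quoted from the linked document; the rest follows from Poisson statistics):

```python
import math

read_noise = 7.4   # electrons, the 20D figure quoted above

# Shot noise equals read noise when sqrt(S) = 7.4, i.e. S = 7.4^2 ~ 55 e-.
crossover_signal = read_noise ** 2

# SNR at that signal: S / sqrt(shot^2 + read^2) = S / (read * sqrt(2)),
# which works out to roughly 5.2, matching the "SNR of 5" above.
snr_at_crossover = crossover_signal / math.sqrt(2 * read_noise ** 2)
```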

Olivier