Author Topic: Foveon vs no on chip filters  (Read 12073 times)
joofa
Sr. Member
Posts: 486
« Reply #60 on: January 10, 2013, 03:27:56 PM »

Quote
Given that the (perhaps) main feature of "4:2:0" is that it lays the ground for 10:1 or 100:1 lossy compression in codecs that seem to have been tuned to the characteristics of YCbCr (BT 601 or 709) in the way that it trades visual errors for bandwidth reduction in a way that can only be reliably measured using largish panels of viewers, how would you go about to make a replacement, and confirm that it improves the end-to-end characteristics significantly more than the measurement uncertainty?

-h

For an objective criterion, quantities such as a distortion measure or the bitrate allocated per sample may be compared.
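As an illustration of the kind of objective comparison joofa mentions, here is a minimal sketch of a distortion measure (PSNR) applied to an idealized 4:2:0 round trip on a chroma patch. The `psnr` and `subsample_420` helpers and the random test patch are illustrative assumptions, not anything defined in the thread:

```python
import numpy as np

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-shaped arrays."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def subsample_420(chroma):
    """Simulate 4:2:0 chroma handling: average 2x2 blocks, replicate back up."""
    h, w = chroma.shape
    small = chroma.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)

rng = np.random.default_rng(0)
cb = rng.integers(0, 256, (8, 8)).astype(float)  # a made-up Cb patch
print(psnr(cb, subsample_420(cb)))  # distortion introduced by 4:2:0 here
```

A subjective study with viewer panels, as -h points out, would still be needed to relate such numbers to perceived quality.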
Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins

Jack Hogan
Full Member
Posts: 214
« Reply #61 on: January 10, 2013, 03:32:24 PM »

Thanks Bart, I am starting to see where you are coming from.  My question concerned the following quote:

Quote
'Some are suggesting that up to a stop can be gained by not filtering out 2/3rd of the spectrum at a given sampling position, which is not true because the 2/3rds are added through interpolation instead of being sampled directly.'

I framed it in terms of the same sensor with or without CFA because I understand those better and I do not know Foveon at all.  Let's see if I can rephrase it to include Foveon.

To make things simple I will assume APS-C size for both.  As far as I understand this should mean that (simplifying) 1 Foveon sensel will have about the same area as 3 RGB ones.  For a given Exposure, 384 photons of daylight spectrum will fall on the area of the Foveon sensel which (simplifying) will capture them all and sort them producing a raw count of (simplifying) (128,128,128) as you suggested.

For the same exposure a similar set of 384 photons will arrive over the 3 RGB simplified-bayer sensels, or 128 each.  However, the sensel under the Red filter will only see (simplifying) 42 photons because the others will have been rejected by the passband of the filter.  Similarly for the other two sensels under the respective idealized Blue and Green filters.  The result will be a raw count of (simplifying) (42,43,43).

Neighboring sensels in a uniform patch will produce the same results, so interpolation, demosaicing or super-resolution will yield the same average values, which are 1/3 of those of the Foveon (we know this is not true, but indulge me in this simplified example), with consequently lower IQ because 2/3 of the photons and their information were turned into a puff of (warm) smoke :-)

Is my question clearer now?
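Jack's simplified counts above can be sketched numerically. The figures are the hypothetical ones from the post (384 daylight photons per Foveon-pixel area), and the integer division is an approximation of his (42,43,43) split:

```python
# Hypothetical numbers from the thought experiment: 384 daylight photons
# over the area of one Foveon sensel (or three simplified Bayer sensels).
photons_total = 384

# Idealized Foveon: all photons captured and sorted into three channels.
foveon = (photons_total // 3,) * 3                 # (128, 128, 128)

# Simplified Bayer: 128 photons per sensel, roughly 1/3 pass each filter.
per_sensel = photons_total // 3                    # 128
bayer = tuple(per_sensel // 3 for _ in range(3))   # approx. (42, 42, 42)

print(foveon, bayer)
```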
BJL
Sr. Member
Posts: 5121
« Reply #62 on: January 10, 2013, 03:58:51 PM »

Quote
... As you can see, the interpolation guessed right regardless of the filter color (because a uniform patch is simple to interpolate) regardless of the amount of light that was recorded through the filter and the colors that were absorbed by the filter. The missing channel data and the level were interpolated/reconstructed from the surrounding pixels.
That works for what you acknowledge is the easiest possible case, where there is no color variation, so that all three color channels provide accurate luminosity information. Can you please at least acknowledge that real-world subjects often have significant color variation, making it somewhat harder for CFA interpolation to retain resolution while avoiding problems like moiré? At least so long as we do not have totally oversampled sensors.
BartvanderWolf
Sr. Member
Posts: 3464
« Reply #63 on: January 10, 2013, 04:12:42 PM »

Quote
Thanks Bart, I am starting to see where you are coming from.  My question concerned the following quote:

I framed it in terms of the same sensor with or without CFA because I understand those better and I do not know Foveon at all.  Let's see if I can rephrase it to include Foveon.

To make things simple I will assume APS-C size for both.  As far as I understand this should mean that (simplifying) 1 Foveon sensel will have about the same area as 3 RGB ones.  For a given Exposure, 384 photons of daylight spectrum will fall on the area of the Foveon sensel which (simplifying) will capture them all and sort them producing a raw count of (simplifying) (128,128,128) as you suggested.

For the same exposure a similar set of 384 photons will arrive over the 3 RGB simplified-bayer sensels, or 128 each.  However, the sensel under the Red filter will only see (simplifying) 42 photons because the others will have been rejected by the passband of the filter.  Similarly for the other two sensels under the respective idealized Blue and Green filters.  The result will be a raw count of (simplifying) (42,43,43).

Hi Jack,

But that's not how the Foveon sensor works, so allow me to interrupt the reasoning. Simplifying, the Foveon R+G+B sensels have the same surface area as the R, or G, or B, filtered Bayer CFA sensels. Over that same surface area, they receive the same number of photons. The Bayer CFA filters out 2/3rd of the spectrum (later to be reconstructed from surrounding sensels), the Foveon type filters out the 3 spectral bands (and keeps all 3/3rd) by using the absorption depth of visible light in silicon as detectors, let's say 1/3rd Blue depth, 1/3rd Green depth, and 1/3rd Red depth. So the corresponding spectral bands receive the same number of photons. If the Bayer CFA happens to have a Green filter, it will see the same number of photons as the Green recording depth of silicon, and likewise for the other colors. It's actually a whole lot more uncertain, but that is roughly the concept.
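Bart's depth-based separation can be illustrated with a Beer-Lambert absorption sketch. The absorption lengths and junction depths below are rough illustrative assumptions for the sketch, not Foveon datasheet values; the point it shows is that the separation by depth is far from clean (which is the "a whole lot more uncertain" part):

```python
import numpy as np

# Rough, illustrative absorption lengths of light in silicon (micrometers).
# Real values vary strongly with wavelength; these are assumptions.
absorption_length = {"blue": 0.4, "green": 1.5, "red": 3.5}

# Assumed boundaries (micrometers) between the three collection regions,
# loosely modeled on a Foveon-style stacked photodiode.
boundaries = [0.0, 0.2, 0.6, 3.0]

def absorbed_fraction(L, z0, z1):
    """Beer-Lambert: fraction of photons absorbed between depths z0 and z1."""
    return np.exp(-z0 / L) - np.exp(-z1 / L)

for color, L in absorption_length.items():
    fractions = [absorbed_fraction(L, boundaries[i], boundaries[i + 1])
                 for i in range(3)]
    print(color, [round(f, 2) for f in fractions])
```

Running it shows blue absorbed mostly near the surface while red spreads across all three layers, i.e. each layer sees a broad mixture of the spectrum rather than a clean band.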

Cheers,
Bart
BartvanderWolf
Sr. Member
Posts: 3464
« Reply #64 on: January 10, 2013, 04:30:12 PM »

Quote
That works for what you acknowledge is the easiest possible case, where there is no color variation, so that all three color channels provide accurate luminosity information. Can you please at least acknowledge that real-world subjects often have significant color variation, making it somewhat harder for CFA interpolation to retain resolution while avoiding problems like moiré? At least so long as we do not have totally oversampled sensors.

Hi,

I have no problem acknowledging that, in fact I am the one who even demonstrated it earlier with an extremely unlikely worst case scenario. Not only does that demonstration show that the Bayer CFA reconstruction starts to gradually fail as the fine detail approaches Nyquist (zoom in and you'll see false color artifacting), but it can even result in a loss of half of the resolution (in that unlikely worst case scenario).

I'm just trying to keep some explanations simple enough to follow. I thought that the concept of interpolation in a uniform area was easier to follow.

Cheers,
Bart
Fine_Art
Sr. Member
Posts: 1063
« Reply #65 on: January 10, 2013, 05:14:21 PM »

Conceptually it's like using the spot healing brush to remove dust spots all over the image. Obviously it works well, as the output of our cameras shows. For pixel-level changing detail it would be inferior to Foveon or full-sensor full color.

3CCD and 3CMOS designs are used in the video camera realm; it's a pity they haven't been brought to still cameras. A 3-chip camera would be a legitimate $4,000 camera to me.
joofa
Sr. Member
Posts: 486
« Reply #66 on: January 10, 2013, 07:02:07 PM »

Quote
If the Bayer CFA happens to have a Green filter, it will see the same number of photons as the Green recording depth of silicon, and likewise for the other colors. It's actually a whole lot more uncertain, but that is roughly the concept.

I think that there is some transmission loss in the Bayer CFA. So a "bare" Foveon type device without a color filter in front should possibly collect more of those photons that are otherwise absorbed in the filter layer.
Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins

Jack Hogan
Full Member
Posts: 214
« Reply #67 on: January 11, 2013, 03:14:04 AM »

Quote
Simplifying, the Foveon R+G+B sensels have the same surface area as the R, or G, or B, filtered Bayer CFA sensels. Over that same surface area, they receive the same number of photons. The Bayer CFA filters out 2/3rd of the spectrum (later to be reconstructed from surrounding sensels), the Foveon type filters out the 3 spectral bands (and keeps all 3/3rd) by using the absorption depth of visible light in silicon as detectors, let's say 1/3rd Blue depth, 1/3rd Green depth, and 1/3rd Red depth. So the corresponding spectral bands receive the same number of photons. If the Bayer CFA happens to have a Green filter, it will see the same number of photons as the Green recording depth of silicon, and likewise for the other colors. It's actually a whole lot more uncertain, but that is roughly the concept.

Ah, ok.  Therefore with that assumption 1 ideal Foveon 'pixel' is the same size as 1 Bayer 'sensel' (as opposed to the three I had imagined) in current DSLRs.  In that case the result of our simplified thought experiment would indeed be what you suggest, raw values (128,128,128) vs (128,0,0) respectively, and with a Bayer pattern in a uniform patch you could indeed take a guess at the missing information.

In this case too, however, the 2/3s of information thrown away would be apparent.  Let's say for instance that the pixel under examination was receiving light from an isolated star that produced a circle of confusion on the sensor with a diameter equal to the pixel pitch: for a given exposure 384 photons recorded by the ideal sensor; 128 by the ideal Bayer, 1/3 the SNR for the Bayer.  Of course if you took pictures of scenes with a lot less detail, say a foggy day, you could fill-in a lot of the missing information by interpolation.  But then you would not need a sensor with such a high resolution.  So the issue is still there, simply shifting from noise to resolution and back.

While I was thinking about your comment, I decided to compare for fun the Sigma SD15 (pixel pitch about 8um) to the Nikon D3200 (sensel pitch about 4 um): 1 SD15 contains almost exactly 1 D3200 RGBG quadruplet - now we are back to my example in the post above with SNR the issue instead of resolution.  But let's take it a step further: assume comparing the SD15 sensor to a Bayer with a 1um pixel pitch: now 16 Bayer quadruplets fit inside 1 foveon pixel.  You use the best demosaicing algorithm in town...  You see where this is going?

So at one extreme the missing information results in degraded resolution, at the other noise performance.  If we keep the resolution constant, it seems to me that removing the CFA would indeed improve noise performance by a factor equal to the absorption ratio of the filters.  Of course we would then lose all color information.  This is one reason why manufacturers are using weaker and weaker CFAs, relying ever more on the 'demosaicing' capabilities of their in-camera engines, a sort of horizontal (vs vertical) Foveon :-)

Jack
« Last Edit: January 11, 2013, 03:29:11 AM by Jack Hogan »

BartvanderWolf
Sr. Member
Posts: 3464
« Reply #68 on: January 11, 2013, 03:19:55 AM »

Quote
I think that there is some transmission loss in the Bayer CFA. So a "bare" Foveon type device without a color filter in front should possibly collect more of those photons that are otherwise absorbed in the filter layer.

Hi,

Yes, in an ideal world. But don't forget that the Foveon photosites require a significant number of transfer gates and possibly transistors, which take up a lot of real estate that's no longer available for letting in photons. In addition, I've seen diagrams (a long time ago) of a somewhat circular, ring-like doping structure used to sample at different silicon penetration depths.

On the other hand, we also do not know exactly how transparent the CFA filters are and how effective the doping is in transferring energy, and how effective the microlenses, if any, are in condensing light onto the photosensitive areas. The filters are not perfect band-pass filters, and they also have secondary transmission outside their target band.

And all filters are transparent to IR, so how strong is the IR filter layer really (i.e. how pure is the signal after subtracting the residual IR contribution from all color bands)? Silicon is also transparent for IR, but how much is still recorded due to scatter?

Therefore, for a thought experiment to explain the principle, let's assume they are equal. At least it won't complicate the discussion more than necessary.

But for a general discussion, I agree. I've heard quantum efficiency numbers for the current Foveon of some 25-30% and of Bayer CFA designs at 40-50%, so any reputable sources to support that would be welcome for the general discussion.

Cheers,
Bart
BartvanderWolf
Sr. Member
Posts: 3464
« Reply #69 on: January 11, 2013, 05:44:44 AM »

Quote
Ah, ok.  Therefore with that assumption 1 ideal Foveon 'pixel' is the same size as 1 Bayer 'sensel' (as opposed to the three I had imagined) in current DSLRs.  In that case the result of our simplified thought experiment would indeed be what you suggest, raw values (128,128,128) vs (128,0,0) respectively, and with a Bayer pattern in a uniform patch you could indeed take a guess at the missing information.

In this case too, however, the 2/3s of information thrown away would be apparent.  Let's say for instance that the pixel under examination was receiving light from an isolated star that produced a circle of confusion on the sensor with a diameter equal to the pixel pitch: for a given exposure 384 photons recorded by the ideal sensor; 128 by the ideal Bayer, 1/3 the SNR for the Bayer.

Hi Jack,

Yes, there is a difference between the 384 photons recorded, and the 128 photons recorded for a single isolated spike signal (that's why, in addition to lens blur, an OLPF makes sense). However, the photon shot noise is equal to the square root of the number of photons, so the difference would be 384/sqrt(384) = 19.6, versus 128/sqrt(128) = 11.3 . That doesn't account for the fact that the per channel noise adds in quadrature, and that interpolated channels (from a properly low-pass filtered image source) will probably have a lower than actual spatial noise frequency (interpolation usually gives some loss of modulation due to the weighted averaging), and that should be included in the total equation.
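Bart's shot-noise arithmetic can be checked directly; for a shot-noise-limited signal, SNR = N / sqrt(N) = sqrt(N):

```python
from math import sqrt

# Photon counts from the thought experiment: full capture vs one CFA channel.
for n in (384, 128):
    snr = n / sqrt(n)           # shot-noise-limited SNR, equals sqrt(n)
    print(n, round(snr, 1))
```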

For a better simulation of the S/N ratios due to 1 channel versus 3 channel sampling, one could add Poisson noise to a test image, and measure the differences before and after demosaicing. In fact, here is the result for a specific (VNG) demosaicing algorithm, only Poisson (shot) noise was added (no read noise):

A patch of uniform Gray level 128, Poisson Noise added, [R,G,B] Standard Deviation = [11.230, 11.355, 11.314]
[image]

Here is the ideal CFA version of that patch with zero contribution for 2/3rds of each pixel:
[image]

And here is the result after VNG demosaicing, [R,G,B] Standard Deviation = [10.301, 9.485, 10.274]
[image]
As you can see, the noise was blurred by averaging and by undersampling, and overall noise was not increased but slightly reduced. A simpler demosaicing algorithm, e.g. bilinear, would have blurred even more but would also have lost more real detail had it been present.
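Bart's experiment can be reproduced in outline. Since VNG is not reimplemented here, a simpler normalized bilinear interpolation stands in for it, so the exact standard deviations will differ from his figures, but the qualitative result (noise slightly reduced by the interpolation averaging) is the same:

```python
import numpy as np

rng = np.random.default_rng(42)
H = W = 128
mean = 128.0

# Uniform gray patch with only Poisson (shot) noise, three full channels.
full = rng.poisson(mean, size=(H, W, 3)).astype(float)
print("full RGB std:      ", full.std(axis=(0, 1)).round(2))

# Ideal RGGB Bayer mosaic: keep one channel per site, zero elsewhere.
mask = np.zeros((H, W, 3))
mask[0::2, 0::2, 0] = 1  # R
mask[0::2, 1::2, 1] = 1  # G
mask[1::2, 0::2, 1] = 1  # G
mask[1::2, 1::2, 2] = 1  # B
mosaic = full * mask

# Bilinear demosaic via normalized convolution with the usual 3x3 kernel.
kernel = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 4.0

def conv2(img):
    """3x3 correlation with zero padding (kept dependency-free on purpose)."""
    p = np.pad(img, 1)
    out = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

demosaiced = np.stack(
    [conv2(mosaic[:, :, c]) / np.maximum(conv2(mask[:, :, c]), 1e-9)
     for c in range(3)], axis=-1)
print("demosaiced RGB std:", demosaiced.std(axis=(0, 1)).round(2))
```

The per-channel standard deviation after demosaicing comes out below the sqrt(128) of the noisy source, matching the direction of Bart's VNG measurement.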
  
Quote
Of course if you took pictures of scenes with a lot less detail, say a foggy day, you could fill-in a lot of the missing information by interpolation.  But then you would not need a sensor with such a high resolution.  So the issue is still there, simply shifting from noise to resolution and back.  And people appear to like having clean images and/or more resolution these days of cameras that end in 'e' :-)

Correct, and there are several other issues, some of which have not been mentioned yet. One, which may partly be related to the relatively small charge-capacity wells of the Foveon design (it needs to store 3 channel charges in the same area that the CFA design can allocate to one channel), is how effective the color filtering by silicon penetration depth really is. When one inspects the Raw data of a Foveon capture, it looks almost like a monochrome image. There is hardly any difference between the color channels, which means that some serious heavy lifting needs to be done on that data to boost saturation, and with that comes color noise amplification. It's one of the reasons that Foveon sensors are relatively poor at higher ISO settings.

Quote
While I was thinking about your comment, I decided to compare for fun the Sigma SD15 (pixel pitch about 8um) to the Nikon D3200 (sensel pitch about 4 um): 1 SD15 contains almost exactly 1 D3200 RGBG quadruplet - now we are back to my example in the post above with SNR the issue instead of resolution.  But let's take it a step further: assume comparing the SD15 sensor to a Bayer with a 1um pixel pitch: now 16 Bayer quadruplets fit inside 1 foveon pixel.  You use the best demosaicing algorithm in town...  You see where this is going?

Yes, there is no such thing as Bayer quadruplets, unless one does binning which averages noise.

Quote
So at one extreme the missing information results in degraded resolution, at the other noise performance.  Some believe that truth lies in the middle: perhaps it's because that way six of one can ignore the half dozen of the other ;-)?  Correct me if I am wrong.

The technologies do not scale down with equal ease. Remember what I said about the well depth for 3 channels versus 1 channel on the same area of silicon real estate ... Being able to store 3x as many electrons for a color channel will reduce the relative noise to 58% (1/sqrt(3)).

And then there is the fact that Red, Green, and Blue do not contribute equally to Luminance ...

Cheers,
Bart
« Last Edit: January 11, 2013, 07:13:11 AM by BartvanderWolf »

Fine_Art
Sr. Member
Posts: 1063
« Reply #70 on: January 11, 2013, 03:24:16 PM »

Bart,

I'm not buying (maybe later) your explanation of Bayer having better full-well capacity based on Foveon being layered. The Foveon technology says nothing from blue will go deeper than they make their layer; the same for green. Being able to make a blue Bayer sensel deeper is not going to do anything.

I can accept the value of putting the electronics underneath so as not to interfere with the light path. Of course Foveon must put electronics in alongside their layers. Good micro-lenses would even out that advantage quite a bit.
BartvanderWolf
Sr. Member
Posts: 3464
« Reply #71 on: January 11, 2013, 07:38:39 PM »

Quote
Bart,

I'm not buying (maybe later) your explanation of Bayer having better full-well capacity based on Foveon being layered. The Foveon technology says nothing from blue will go deeper than they make their layer; the same for green. Being able to make a blue Bayer sensel deeper is not going to do anything.

I can accept the value of putting the electronics underneath so as not to interfere with the light path. Of course Foveon must put electronics in alongside their layers. Good micro-lenses would even out that advantage quite a bit.

Hi,

There is not that much usable info published to go on. The Foveon .X3F file format cannot be analysed with e.g. Rawdigger, so we'll have to make do with recent summaries like: http://people.rit.edu/hxc1311/ChenDetectorPaper.pdf

That document mentions the detection design, but not the actual charge storage model. How it really works, is anyone's guess. It also repeats some charts and formulas from older Foveon papers, such as the penetration depth dependency on angle of incidence (hence my assumption that large format sensors with more oblique incident light are not likely to happen), and the problematic color separation.

Maybe there is some info in the patent applications that's a bit more specific about the design 'improvements'.

Cheers,
Bart
Fine_Art
Sr. Member
Posts: 1063
« Reply #72 on: January 11, 2013, 09:33:31 PM »

Bart,

As I mentioned in the medium format discussion on CCD vs CMOS:

You know a lot more about the technology than me.

What I will say is that I have seen 3-chip HD video vs 1-chip HD video. The 3-chip systems look way better, maybe an order of magnitude better. Go to your local electronics store and compare the cameras for yourself. Panasonic makes a nice 3CMOS camcorder; compare it to any manufacturer using 1 chip of similar size. Not the Sony NEX, that is a much bigger chip.

Edit: by compare I mean shoot video in the store with each. Output it to a HDTV.

If the theory of Bayer is so close to delivering as much as non-Bayer, why do 3-chip HD camcorders seem to have such a huge advantage? I am not talking theory, I am talking about the units for sale in the stores. About a year and a half ago I was undecided on a new camera. When I was in the store they had the camcorders that take stills right beside the cameras. I spent some time in several stores comparing the models. There was one line of camcorders that seemed to have a massive advantage as output was viewed on an HDTV: the Panasonic 3-chip cameras. Anyone can go test for themselves in their local electronics store. I doubt much has changed in the last year.

Record some video in the store. Why wouldn't 3-chip, or non-Bayer via some other method, have the same advantage? My understanding is almost all pro video systems use 3 chips.
ErikKaffehr
Sr. Member
Posts: 7257
« Reply #73 on: January 11, 2013, 11:42:38 PM »

Hi,

My understanding is that pro video is mostly single chip. I'm pretty sure Arri Alexa is single chip and so is Red One.

I'm not sure that a three-chip design could be fit into a DSLR, as it needs a beam splitter and three sensors. The three sensors would need to be aligned to within perhaps 2 microns for all pixels if resolution is not to be diminished.

It seems that the Foveon works best at low ISOs, which probably means that it has a relatively low quantum efficiency at the system level, or that the signal processing needed to extract color information demands very low noise levels.

Best regards
Erik


Quote
Bart,

As I mentioned in the medium format discussion on CCD vs CMOS:

If the theory of Bayer is so close to delivering as much as non-Bayer, why do 3-chip HD camcorders seem to have such a huge advantage? I am not talking theory, I am talking about the units for sale in the stores. About a year and a half ago I was undecided on a new camera. When I was in the store they had the camcorders that take stills right beside the cameras. I spent some time in several stores comparing the models. There was one line of camcorders that seemed to have a massive advantage as output was viewed on an HDTV: the Panasonic 3-chip cameras. Anyone can go test for themselves in their local electronics store. I doubt much has changed in the last year.

Record some video in the store. Why wouldn't 3-chip, or non-Bayer via some other method, have the same advantage? My understanding is almost all pro video systems use 3 chips.
Fine_Art
Sr. Member
Posts: 1063
« Reply #74 on: January 12, 2013, 12:51:49 AM »

B&H's Pro Video Studio equipment page

JVC, Panasonic, Sony all 3 chip units

http://www.bhphotovideo.com/c/buy/Studio-EFP-Cameras/ci/16764/N/4256818816
ErikKaffehr
Sr. Member
Posts: 7257
« Reply #75 on: January 12, 2013, 01:01:27 AM »

Yes,

Small chip devices, 1/3". I thought you were thinking larger sensors. The equipment I mentioned is higher up the scale.

Best regards
Erik


Quote
B&H's Pro Video Studio equipment page

JVC, Panasonic, Sony all 3 chip units

http://www.bhphotovideo.com/c/buy/Studio-EFP-Cameras/ci/16764/N/4256818816
Fine_Art
Sr. Member
Posts: 1063
« Reply #76 on: January 12, 2013, 01:59:50 AM »

Perfectly reasonable. I think most pro video is the equipment used every day for HDTV broadcast.

You are talking about hot new movie studio cameras.
Jack Hogan
Full Member
Posts: 214
« Reply #77 on: January 12, 2013, 08:46:41 AM »

Hi Bart, I see we are starting to deviate from the initial 'ideal' thought experiment :-)

Quote
the photon shot noise is equal to the square root of the number of photons, so the difference would be 384/sqrt(384) = 19.6, versus 128/sqrt(128) = 11.3 . That doesn't account for the fact that the per channel noise adds in quadrature, and that interpolated channels (from a properly low-pass filtered image source) will probably have a lower than actual spatial noise frequency (interpolation usually gives some loss of modulation due to the weighted averaging), and that should be included in the total equation.

Ok, in this case we need to decide whether we are dealing with a uniform patch of tones or the single sensel (star) version.  I take it from your example that we are looking at a uniform patch, so the first image works and we are forgetting about the loss of detail.  For simplicity, let's assume that the standard daylight light source can be filtered perfectly by three equally sized bands (vertically by the Foveon sensor and horizontally by the Bayer) and that the area of one Foveon Pixel is the same as that of a single Bayer sensel (the Green one in your example) so that the sensors would output (128,128,128) and (0,128,0) to their respective R*G*B* raw data.

The Bayer sensor would therefore produce a matrix of repeating data such as (128,0,0) (0,128,0) (128,0,0)... in a 'Red' row followed by an offset repeating (0,0,128) (0,128,0) (0,0,128)... in the 'Blue' row and so on. On the other hand the Foveon would produce an equal number of raw data points of value (128,128,128).   As you say the value 128 above is the mean, while in fact it would vary from raw value to raw value according to Poisson statistics.  These variations would be restricted to the recorded values and not spill over from one sensel to the one next to it because they are inherent in the incoming photons which, if filtered, would simply not be there.

If the above is correct, then the second image appears too sparse and too noisy, but I assume you did not use it for demosaicing since the final one looks correct :-)  In theory any demosaicing noise improvement in the ideal Bayer will only result from a further reduction in the substantially lower detail available (the real differentiator in the 'uniform patch' case). 

On the other hand, in the case where sensor resolution and detail are part of the equation the answer is quite simple if we simplify things a bit: 384 daylight spectrum photons would yield an average SNR of 19.6.  From the ideal Foveon's raw data we could write SNR Foveon (in quadrature) = sqrt(128+128+128) = 19.6.  From the ideal Bayer's raw data, since we only have 1/3 of the photons the answer would be, still in quadrature,  sqrt(128) = 11.3 as you suggested.

Quote
Yes, there is no such thing as Bayer quadruplets, unless one does binning which averages noise.

Yes, in a uniform patch where detail is 'binned'.  On the other hand in the case of a twinkly little star... :-)

Jack
« Last Edit: January 12, 2013, 08:53:20 AM by Jack Hogan »

Jack Hogan
Full Member
Posts: 214
« Reply #78 on: January 12, 2013, 01:00:53 PM »

On further thought I may have misunderstood your comment below

Quote
Yes, there is no such thing as Bayer quadruplets, unless one does binning which averages noise.

in response to my earlier one

Quote
compare for fun the Sigma SD15 (pixel pitch about 8um) to the Nikon D3200 (sensel pitch about 4 um): 1 SD15 contains almost exactly 1 D3200 RGBG quadruplet - now we are back to my example in the post above with SNR the issue instead of resolution

To clarify, there is no need for binning in this thought experiment, just simple demosaicing.

Four ideal Bayer sensels (one red, one blue and two green) cover exactly the same real estate as one ideal Foveon pixel in the example above.  If, for a given exposure as before, 384 photons of D50 light arrive on such an area of the Foveon sensor, it will record (128,128,128) for this position in its R*G*B* raw file.  On the other hand the same 384 photons will arrive on the same area of a Bayer sensor - but each of the sensels, being 1/4 the area of the pixel, will only see 1/4 of them on its turf, that is 96.  And since each sensel is covered by a perfect passband filter that only lets through 1/3 of the arriving daylight photons, the sensor will record (32,0,0) (0,32,0) (0,0,32) (0,32,0) in its R*G*B*G* raw file.

For simplicity let's assume that no demosaicing is needed for the Foveon and a simple algorithm (say -h) is used to demosaic the Bayer data.  The result would be (32,32,32) with the green data cleaner than red and blue because the two greens were averaged.  And, ignoring different weights for simplicity,  the relative SNRs would be sqrt(128*3)=19.6 in favor of the Foveon vs sqrt(32*4)=11.3 for the Bayer as you mentioned earlier.  The difference being entirely due to the CFA which reduces the signal.
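Jack's quadruplet arithmetic, as a quick check (the numbers are the hypothetical ones from the post, and the quadrature SNRs ignore channel weighting as he does):

```python
from math import sqrt

photons = 384                      # D50 photons over one Foveon-pixel area
foveon = (photons // 3,) * 3       # (128, 128, 128)

per_quarter = photons // 4         # 96 photons per quarter-area Bayer sensel
per_sensel = per_quarter // 3      # 32 after the ideal 1/3-passband filter

snr_foveon = sqrt(128 * 3)         # three channels added in quadrature
snr_bayer = sqrt(32 * 4)           # four sensels, one channel each
print(round(snr_foveon, 1), round(snr_bayer, 1))
```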

Contrary to the others we dreamed up, imo this example is better at comparing apples to apples because the resolution from both ideal sensors should be similar, including the effects of a 4-dot beam splittin' antialiasing filter.  Therefore the advantage of not having a CFA is clearer, reflecting a fairer compromise between noise and detail in the two approaches.

Cheers,
Jack
« Last Edit: January 12, 2013, 01:05:43 PM by Jack Hogan »

Fine_Art
Sr. Member
Posts: 1063
« Reply #79 on: January 12, 2013, 03:01:14 PM »

Here are 2 100% crops out of a high-ISO shot. Notice the tendency to form red and green splotches? We have gotten so used to just turning on a bit of NR that wipes them out. Is it really noise or is it de-Bayering? The high ISO was used to crank up the gain, making the splotches visible. A low-ISO shot does not show them in any visible way. Are they there in a subtle way?
[images]