Pages: « 1 2 [3] 4 5 »
Author Topic: Kodak's new sensor  (Read 25298 times)
AJSJones (Sr. Member, Posts: 353)
« Reply #40 on: June 28, 2007, 11:24:09 PM »
Quote
Andy,
<snip>
The only problem I foresee that might occur is when people want pixel accuracy with regard to color. If the y pixel we're referring to, not in contact with any adjacent red pixel value, were in fact a small speck on a textured surface that was as small as, or even smaller than the pixel pitch, such a tiny speck could be either red, green or blue and there would be no way of determining which.

That's as true of that scenario as of the Bayer one, where the speck is assigned a colour based solely on which pixel it falls on, not which colour it is in real life. You get an R, G or B value for it but no way of knowing what the other two colour values are, so it's no more or less information. For that super-high-res scenario (colour detail smaller than the pixel pitch), the only way to get it right is the Foveon one - no CFA can succeed. However, this failure is unlikely to be noticeable in your scenario the vast majority of the time - does the AA filter restore some of the info in this case?
Ray (Sr. Member, Posts: 8939)
« Reply #41 on: June 29, 2007, 04:00:12 AM »
Quote
That's as true a statement for that scenario as in the Bayer one where the speck is assigned a color based solely on which pixel it falls on and not which color it is in real life. You get an RG or B value for it but no way of knowing what the other two colour values are, so it's no more or less info.   For that super-high res scenario, the only way to get it right (color detail smaller than pixel pitch) is the Foveon one - no CFA can succeed.  However, this failure is not likely to be noticeable for your scenario the vast majority of the time  - does the AA filter restore some of the info in this case?

Andy,
You might be right. I'm afraid it's too complicated for me; I don't know how these algorithms do their job. If a red blob overlaps a green and a blue pixel very slightly, one might think it would be easy to work out that the blob is red. But it's not as though we have a visual representation of red light overlapping green and blue light. The computer knows that the green and blue pixels are producing a voltage due to a certain intensity of green and blue light, because there's a filter blocking out the other two primaries. But there's no 'before and after', and there's no information about the true color of the panchromatic pixel other than the effect it has on neighbouring pixels. So I don't see how, in this situation in Pattern A, an algorithm could determine what frequency of light is illuminating the adjacent panchromatic pixel, except in a rather inaccurate way, by analysing lots of surrounding pixels and making a rough guess.
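For what it's worth, here is a toy sketch of the kind of neighbourhood guesswork Ray describes. The 1-D layout and all values are invented for illustration - this is not Kodak's actual algorithm: the colour assigned to a panchromatic site is built entirely from the nearest colour-filtered neighbours, rescaled to match the panchromatic reading.

```python
import numpy as np

# Toy 1-D "sensor row": each site records one value; 'P' sites are
# panchromatic (roughly R+G+B), the others are colour-filtered.
kinds  = ['R', 'P', 'G', 'P', 'B', 'P', 'R', 'P', 'G']
values = np.array([0.30, 0.95, 0.40, 0.90, 0.25, 0.92, 0.28, 0.96, 0.42])

def estimate_rgb(i):
    """Guess R, G, B at panchromatic site i from the nearest colour samples,
    then rescale so the guess sums to the measured panchromatic value."""
    est = {}
    for c in 'RGB':
        # distance to the nearest sample of channel c
        idx = min((j for j, k in enumerate(kinds) if k == c),
                  key=lambda j: abs(j - i))
        est[c] = values[idx]
    total = sum(est.values())
    scale = values[i] / total          # force R+G+B == panchromatic reading
    return {c: v * scale for c, v in est.items()}

print(estimate_rgb(1))                 # site 1 is panchromatic
```

The rescaling step is the crux of Ray's objection: the colour ratios come entirely from neighbours, so a speck smaller than the pixel pitch really is unrecoverable.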
jani (Sr. Member, Posts: 1604)
« Reply #42 on: June 29, 2007, 01:51:19 PM »
Quote
But there's no 'before and after' and there's no information about the true color of the panchromatic pixel other than the effect it has on neighbouring pixels, so I don't know how in this situation, in Pattern A, an algorithm could determine what frequency the light is that's illuminating the adjacent panchromatic pixel, except in a rather inaccurate way by analysing lots of surrounding pixels and making a rough guess.
I guess an astrophysicist could tell you quite a bit about how interferometry and aperture synthesis are used to get details that seem "impossible" to get, by combining several radio or optical telescopes.

It seems obvious to me that interferometry and aperture synthesis can be, and probably are, used in sensor array interpolation today.

Jan
Ray (Sr. Member, Posts: 8939)
« Reply #43 on: June 29, 2007, 06:19:42 PM »
Quote
I guess an astrophysicist could tell you quite a bit about how interferometry and aperture synthesis are used to get details that seem "impossible" to get, by combining several radio or optical telescopes.

It seems obvious to me that interferometry and aperture synthesis can be, and probably are, used in sensor array interpolation today.


Jani,
An interferometer is an expensive piece of scientific equipment that analyses the interference patterns produced when light is split, by mirrors for example, and recombined. You are not suggesting that the humble digital camera has become an interferometer, are you?

As far as I know, all the demosaicing and interpolation that takes place when converting a RAW image is done 'after the fact'. There's no real time analysis of incoming panchromatic signals. The real time analysis only takes place by virtue of the color filter. All pixels in the Bayer type array know what color they are. Panchromatic pixels haven't a clue, except from what their neighbours are doing. If one of their neighbours is too far away, as in 'Pattern A', it would seem likely to cause greater uncertainty and create more scope for color error.
Steve Kerman (Full Member, Posts: 142)
« Reply #44 on: June 30, 2007, 10:44:11 PM »
Quote
But there's no 'before and after' and there's no information about the true color of the panchromatic pixel other than the effect it has on neighbooring pixels, so I don't know how in this situation, in Pattern A, an algorithm could determine what frequency the light is that's illuminating the adjacent panchromatic pixel, except in a rather inaccurate way by analysing lots of surrounding pixels and making a rough guess.

Remember that Kodak is designing this array to optimize image quality in MPEG-encoded video. The concerns about color detail are very different in a highly compressed, moving video stream than for a still image that one intends to make 3x4-foot exhibition prints from. The eye doesn't have time to process small color details in a moving image, so it is acceptable to de-emphasize color resolution in exchange for better low-light performance. Also, I understand that this pattern is designed to encode efficiently under discrete-cosine-transform (DCT) encoding, which is completely irrelevant to "RAW files only, please" landscape photographers.

It is not entirely evident that this filter design would result in better landscape photos, because that is not at all what it is designed to do.
« Last Edit: June 30, 2007, 10:45:17 PM by Steve Kerman »
Ray (Sr. Member, Posts: 8939)
« Reply #45 on: July 01, 2007, 06:14:52 AM »
Quote
Remember that Kodak is designing this array to optimize image quality in MPEG-encoded video. The concerns about color detail are very different in a highly compressed, moving video stream than for a still image that one intends to make 3x4-foot exhibition prints from. The eye doesn't have time to process small color details in a moving image, so it is acceptable to de-emphasize color resolution in exchange for better low-light performance. Also, I understand that this pattern is designed to encode efficiently under discrete-cosine-transform (DCT) encoding, which is completely irrelevant to "RAW files only, please" landscape photographers.

It is not entirely evident that this filter design would result in better landscape photos, because that is not at all what it is designed to do.

Fair enough! But this has not been made clear in the press reports and interviews that I've read. One could expect the first implementation to be in video cameras and P&S cameras, but the general impression I get is that this is an optional replacement for the Bayer system for all cameras.

There's an analogy here with the Fujifilm SR system. It was initially implemented in P&S cameras but is now a feature of Fujifilm's flagship model, the S5 Pro. I'll be looking forward to a dpreview test of this camera.
jani (Sr. Member, Posts: 1604)
« Reply #46 on: July 01, 2007, 08:22:40 AM »
Quote
An interferometer is an expensive piece of scientific equipment which analyses the interference caused by the interaction of different wavelengths of light which have been split by mirrors. You are not suggesting that the humble digital camera has become an interferometer, are you?
Ray, an interferometer doesn't have to be more than the same telescope at different times of day.

Aperture synthesis interferometry doesn't require mirrors, but it does require mathematical calculations.

Jan
Ray (Sr. Member, Posts: 8939)
« Reply #47 on: July 01, 2007, 09:54:08 AM »
Quote
Ray, an interferometer doesn't have to be more than the same telescope at different times of day.

Aperture synthesis interferometry doesn't require mirrors, but it does require mathematical calculations.

Doesn't have to be more than the same telescope at different times of day??

How does that help a single shot with a still camera?

I just typed "aperture synthesis interferometry" into Google and got the following.

Please tell me in what ways this relates to a single shot with a digital camera.

http://www.atnf.csiro.au/people/mdahlem/pop/tech/synth.html
jani (Sr. Member, Posts: 1604)
« Reply #48 on: July 01, 2007, 04:57:07 PM »
Quote
Doesn't have to be more than the same telescope at different times of day??

How does that help a single shot with a still camera?

I just typed "aperture synthesis interferometry" into Google and got the following.
Why didn't you just follow my original Wikipedia links?

Quote
Please tell me in what ways this relates to a single shot with a digital camera   .
The Airy disks from diffraction may span more than a single pixel. This means that several pixels are, in effect, showing parts of the same picture. Interferometry may (theoretically) be used to recover some of the information.
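The recovery jani alludes to is essentially deconvolution. A toy 1-D sketch (invented numbers; no claim that any camera does this in its pipeline): a point source blurred by a 5-pixel-wide spot can be substantially restored by Wiener deconvolution when the blur kernel is known.

```python
import numpy as np

# A sharp 1-D "scene" blurred by a diffraction-like spot wider than one
# pixel, then recovered by Wiener deconvolution. Illustrative numbers only.
n = 64
scene = np.zeros(n)
scene[20] = 1.0                        # a point source
scene[40] = 0.6                        # a dimmer one

psf = np.array([0.1, 0.2, 0.4, 0.2, 0.1])   # blur spanning 5 pixels
kernel = np.zeros(n)
kernel[:5] = psf
kernel = np.roll(kernel, -2)           # centre the kernel at index 0

blurred = np.real(np.fft.ifft(np.fft.fft(scene) * np.fft.fft(kernel)))

# Wiener filter: divide by the kernel spectrum, damped by a noise term.
H = np.fft.fft(kernel)
snr = 1e3
wiener = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
recovered = np.real(np.fft.ifft(np.fft.fft(blurred) * wiener))

print(blurred[20], recovered[20])      # recovered peak is close to 1.0 again
```

This is textbook Wiener deconvolution, not a description of Kodak's or anyone's demosaicing; it only illustrates that detail spread across pixels by a known blur is partly recoverable by mathematics alone.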

Jan
Ray (Sr. Member, Posts: 8939)
« Reply #49 on: July 01, 2007, 07:35:04 PM »
Quote
Why didn't you just follow my original Wikipedia links?


I did and came across the following:

Quote
Aperture synthesis or synthesis imaging is a type of interferometry that mixes signals from a collection of telescopes to produce images having the same angular resolution as an instrument the size of the entire collection...

In order to produce a high quality image, a large number of different separations between different telescopes are required

Now, admittedly the article goes on to imply that the number of different data sets required for this process can be reduced through powerful, computationally expensive algorithms, and it is a reasonable assumption that developments in this area could be relevant to the demosaicing required for this new Kodak CFA. But aren't you in danger of demolishing your own argument here, Jani? It was you who initially raised objections about the feasibility of this new CFA because of the sacrifices in color accuracy that would have to be made.

I merely make the observation that Pattern A looks more problematic than the other 2 patterns, because not all 3 primaries have an edge or a corner in common with each panchromatic pixel.

I'm quite optimistic about this new sensor even though a supercomputer might be required to get the best results.
jani (Sr. Member, Posts: 1604)
« Reply #50 on: July 02, 2007, 06:31:35 AM »
Quote
I did and came across the following:

...

Quote
Now, admittedly the article goes on to imply that the number of different sets of data required for this process can be reduced through use of powerful and computationally expensive algorithms, and it would be a reasonable assumption to make that developments in this area could be relevant to the demosaicing required for this new Kodak CFA, but aren't you in danger of demolishing your own argument here, Jani? 
Not as I see it, no. Perhaps you think I'm arguing something I'm not?

Quote
It was you who raised the objections initially about the feasibility of this new CFA because of sacrifices in color accuracy that would have to be made.
I haven't claimed it isn't feasible. Could you perhaps go back and re-read what I've written?

Quote
I merely make the observation that it looks as though Pattern A will be more problematic than the other 2 patterns because all 3 primaries do not have an edge or a corner in common with each panchromatic pixel.
And I was merely pointing you in the direction of techniques that are used to calculate parts of the "missing" information, because you claimed you couldn't see how it could be done without "making a rough guess".

Jan
Ray (Sr. Member, Posts: 8939)
« Reply #51 on: July 02, 2007, 07:22:23 AM »
Quote
And I was merely pointing you in the direction of techniques that are used to calculate parts of the "missing" information, because you claimed you couldn't see how it could be done without "making a rough guess".

Well, that depends on how rough is a 'rough guess'. I suppose the fact that it takes almost twice the number of interpolated Bayer pixels to equal the resolution of non-interpolated Foveon type pixels would be a fair indication of the roughness of the guess.

I do not expect the new Kodak sensor to provide greater resolution (except in the shadows) or greater color accuracy than existing Bayer sensors.
Graeme Nattress (Sr. Member, Posts: 582)
« Reply #52 on: July 02, 2007, 08:41:39 AM »
Foveon pixels are rarely non-interpolated, though. Luma is derived from summing the 3 recorded channels - I hesitate to call them RGB, as they're pretty far from any RGB we'd recognise. Chroma is then made with a spatial-averaging noise reduction technique using surrounding pixels. You can see this clearly: as the camera goes up the ISO range, the chroma blurs out. If you take an image into Photoshop and convert to Lab, you can see that the resolution of the chroma is not that of the luma. (Foveon white papers mention the noise reduction and the separation of luma and chroma; they don't give full details, but enough to go on, and together with looking at images you can sort of see what's happening.)
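Graeme's described split can be sketched as follows. This is my reading of his description with invented numbers, not Foveon's actual pipeline: full-resolution luma from the channel sum, chroma as per-channel ratios smoothed over neighbours, then recombined.

```python
import numpy as np

# Toy 1-D strip of a flat grey-ish patch, recorded in three noisy channels.
rng = np.random.default_rng(0)
true_rgb = np.tile([[0.5, 0.3, 0.2]], (32, 1))
noisy = true_rgb + rng.normal(0, 0.05, true_rgb.shape)

luma = noisy.sum(axis=1)               # full-resolution luminance
chroma = noisy / luma[:, None]         # colour ratios, still noisy

# Spatial averaging of chroma only (5-sample box filter per channel).
kernel = np.ones(5) / 5
smooth = np.column_stack([np.convolve(chroma[:, c], kernel, mode='same')
                          for c in range(3)])

rebuilt = smooth * luma[:, None]       # sharp luma, blurred chroma

interior = slice(2, -2)                # ignore box-filter edge effects
print(np.std((noisy - true_rgb)[interior]),
      np.std((rebuilt - true_rgb)[interior]))
```

The trade Graeme describes falls straight out: chroma noise drops with the averaging, and chroma resolution drops with it, while luma keeps full resolution.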

That said, the apparent resolution of the Sigma cameras is more down to their lacking the necessary optical low-pass filtering than to the Foveon chip. You can distinctly see the nasty artifacts from luma aliasing in practically every in-focus Sigma photograph. Indeed, most of the resolution loss from using a Bayer-pattern sensor comes from the optical low-pass filtering needed to stop both luma and especially chroma aliasing. (Again, Foveon white papers do mention the necessity of an OLPF, but perhaps they never told Sigma - or perhaps Sigma decided that the lack of an OLPF was a selling point, where they can pretend it's extra resolution, whereas it's really just artifacts.)

So, the factor of 2 is more complex than that. Say you use that factor and have 5mp (x3) of Foveon pixels looking like the resolution of 10mp of OLPF-filtered Bayer pixels. If you count them as photodetectors, you've got 15mp of Foveon photodetectors to equal the resolution of 10mp of OLPF-filtered Bayer pixels, which seems to me a rather inefficient way to do things.

Graeme

www.nattress.com - Plugins for Final Cut Pro and Color
www.red.com - Digital Cinema Cameras
Ray (Sr. Member, Posts: 8939)
« Reply #53 on: July 02, 2007, 11:12:19 AM »
Quote
Foveon pixels are rarely non-interpolated though.... Luma is derived from summing the 3 recorded channels. I hesitate to call them RGB as they're pretty far from looking like any RGB we'd recognise. Then chroma is made from a spatial averaging noise reduction technique using surrounding pixels. You can see this clearly as the camera goes up the ISO range the chroma blurs out. If you take an image into Photoshop and go into LAB you can see the resolution of the chroma is not that of the luma. (Foveon white papers mention the noise reduction and separation of luma and chroma but don't give full details, but enough to go on to, and through looking at images, to sort of see what's happening.)

Interesting! I always assumed that luma is derived from summing the 3 channels, and I can understand that the silicon material that is sensitive to one narrow band of frequencies is not completely transparent to the other frequencies, so there is some unavoidable loss of light energy in the system, as there is in the Bayer system with its CFA. The fact that Foveon-based cameras do not seem to do particularly well at high ISO suggests that this loss of light energy is more serious than it is in the Bayer systems, but I really don't know how true this is.

But it seems reasonable that there'd be some reconstruction going on to compensate for such loss.

Quote
That said, the apparent resolution of the Sigma cameras is more down to their lacking the necessary optical low pass filtering than the Foveon chip. You can distinctly see the nasty artifacts from luma aliasing in practically every in-focus Sigma photograph. Indeed most of the resolution loss from using a Bayer pattern sensor is from the optical low pass filtering necessary to stop both luma and especially chroma aliasing. (Again, Foveon white papers do mention the necessity of an OLPF, but perhaps they never told Sigma, or, perhaps Sigma decided that lack of OLPF was a selling point where they can pretend it's extra resolution, whereas it's really just artifacts.)

This doesn't sound quite right to me. I've heard there can be false detail above the Nyquist limit. However, the Kodak 14n also lacked an AA filter and as a result provided marginally better resolution than the Canon 1Ds. According to your theory, and bearing in mind the 14n has 14mp compared to the 1Ds's 11mp, the 14n should have had about 1.5x the resolution of the 1Ds, which it doesn't.
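A quick sanity check on the pixel-count part of this arithmetic (the assumed AA-filter penalty is a separate factor): linear resolution scales with the square root of pixel count, so 14mp versus 11mp is on its own a small step.

```python
import math

# Linear resolution scales with the square root of pixel count,
# so the 14mp vs 11mp difference alone is only ~13% linear.
linear_gain = math.sqrt(14 / 11)
print(f"{linear_gain:.2f}x linear resolution from pixel count alone")
```

Any figure much above that has to come from the claimed OLPF factor, not from the extra megapixels.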

Quote
So, the factor 2 is more complex than that..... Indeed, say you use that factor and have a 5mp (*3) of Foveon pixels looking like the resolution of 10mp OLPF filtered Bayer pixels. Now if you count them as photodetectors, you've got 15mp of Foveon pixels to equal the resolution of 10mp OLPF filtered Bayer pixels, which seems to me a rather inefficient way to do things.

On the other hand, stacking photodetectors on top of each other could be considered a more efficient physical arrangement which allows for larger photodetectors on the same size sensor. The inefficiency seems to me to be largely due to the lack of transparency of the silicon material in letting other frequencies pass through.
Graeme Nattress (Sr. Member, Posts: 582)
« Reply #54 on: July 02, 2007, 11:25:36 AM »
Quote
But it seems reasonable that there'd be some reconstruction going on to compensate for such loss.
This doesn't sound quite right to me. I've heard there can be false detail above the Nyquist limit. However, the Kodak 14n also lacked an AA filter and as a result provided marginally better resolution than the Canon 1Ds. According to your theory, and bearing in mind the 14n has 14mp compared to the 1Ds's 11mp, the 14n should have had about 1.5x the resolution of the 1Ds, which it doesn't.

Sure, there's false detail, but false detail is just that - false. To me it looks like fine-grained noise or, on edges, jaggies, neither of which is desirable. One of the problems with aliasing is that once it's in a system it's very hard to remove: it's practically impossible to know whether the detail you see is real or not, and hence hard to remove the detail that isn't while keeping the detail that is. That's why most cameras use an OLPF as the lesser of two evils. I think this matters even more for moving images than for stills, as on stills, theoretically, if you've got the patience, you can go in and pixel-paint out some of the problems. I certainly can't be bothered to do that and prefer to use a camera with an OLPF.

Of course, there's more to resolution than OLPF or not - not least the fill factor of the pixels, the lens, and the Bayer reconstruction algorithm used.

Indeed, stacking photon detectors on top of each other is rather clever and allows for larger pixels. However, silicon is not the best colour filter, and it's the extreme matrixing needed to transform the layers into RGB that limits the noise performance of the Foveon.

www.nattress.com - Plugins for Final Cut Pro and Color
www.red.com - Digital Cinema Cameras
John Sheehy (Sr. Member, Posts: 838)
« Reply #55 on: July 02, 2007, 05:29:50 PM »
Quote
Sure, there's false detail, but false detail is just that - false. To me it looks like fine grained noise, or, if on edges, jaggies, neither of which are desirable.

What I find so obviously false about aliased imaging is that I can clearly see that the image is composed of pixels.  I don't want to see that an image was recorded in pixels when I look at it.

Quote
One of the problems with aliasing is that once it's in a system it's very hard to remove, as it's practically impossible to know whether detail you see is real or not, and hence it's hard to remove the detail that's not while keeping the detail that is. That's why most cameras use an OLPF as a lesser of two evils.

Finer pixel pitches are the solution - when you get fine enough, you don't need AA filters; everything the lens can do is there in all its glory, without aliasing. Reading out a 250MP sensor is not an easy chore, however, with current technology and storage media. Hopefully, one day, we will have that convenience. Cameras could then have settings to downsample the data as the user wishes, or even automatic MTF-detection systems that examine the image before writing to the card, check whether anything is worth recording at maximum resolution, pick the highest downsample resolution the recording warrants, and write only that lower resolution to the storage medium, as a linear DNG or something similar. There could even be zones with different resolutions, to save storage space on bokeh.

Quote
I think this is even more important on moving images than stills as on stills, theoretically, if you've got the patience, you can go in and pixel paint out some of the problems. I certainly can't be bothered to do that and prefer to use a camera with an OLPF.

Of course, there's more to resolution than OLPF or not. Not least the fill factor of the pixels, lens and bayer reconstruction algorithm used.

Indeed, stacking photon detectors on top of each other is rather clever and allows for larger pixels. However, silicon is not the best colour filter, and it's the extreme matrixing needed to transform the layers into RGB that limits the noise performance of the Foveon.

It is, however, an excellent way to do greyscale. Foveon RAW data, treated as two channels, "red" and "cyan" ("blue" + "green"), is fairly low in noise, as is their sum. The biggest problems are between the "blue" and "green" layers, where color must be extrapolated, boosting any noise, and there are blotches of a complementary effect, where the green channel is dark where the blue is light, and vice versa, especially in shadow areas:

http://www.pbase.com/jps_photo/image/77239784
Graeme Nattress (Sr. Member, Posts: 582)
« Reply #56 on: July 02, 2007, 06:54:02 PM »
Thanks for the demonstration about blue / green noise on Foveon. I think that shows very clearly what is going on.

Agreed, the best solution to aliasing is massive oversampling / a pitch fine enough that the lens, not the sampled array, is the limit to resolution.

Lossy RAW compression techniques are evolving and will be able to handle the data rate admirably. However, as pixels get smaller we do get noise / dynamic range issues, and they're tricky to deal with too. But smaller and better pixels are a good way to go, as long as we don't lose sight of all the image parameters, not just resolution.

Graeme

www.nattress.com - Plugins for Final Cut Pro and Color
www.red.com - Digital Cinema Cameras
John Sheehy (Sr. Member, Posts: 838)
« Reply #57 on: July 02, 2007, 08:54:47 PM »
Quote
Thanks for the demonstration about blue / green noise on Foveon. I think that shows very clearly what is going on.

This is before any attempt to create RGB color; those are literally the blue and green RAW channels, one inverted and with an offset.

Quote
Agreed the best solution to aliasing is massive oversampling / fine enough pitch to allow the lens to be the limit to the resolution, not the sampled array.

Lossy RAW compression techniques are evolving and they will be able to handle the data rate admirably. However, as pixels get smaller, we do get noise / dynamic range issues, and they're tricky to deal with also.

I'm not so sure that it is much of a problem; current 2-micron pitches in compacts are yielding very high photon collection per unit of area, and lower read noises when adjusted for pixel size. The real world of light is not nice, binned pixels. It is infinite resolution and infinite shot noise, until our retinas or our cameras bin the photons.

Would a list of the analog striking points of individual photons within a focal-plane rectangle be any noisier than six counts of photons for six huge bins? You can create the latter from the former, but not the former from the latter.
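John's last point can be made literal (the geometry and counts below are invented for illustration): a list of strike coordinates reduces to any binning you like, but no binning can be turned back into coordinates.

```python
import numpy as np

# Simulated analog photon strike points in a 6x6 focal-plane region.
rng = np.random.default_rng(42)
strikes = rng.uniform(0, 6, size=(10_000, 2))   # (x, y) coordinates

def bin_counts(strikes, nx, ny):
    """Reduce strike points to an ny x nx grid of photon counts."""
    ix = np.floor(strikes[:, 0] / 6 * nx).astype(int)
    iy = np.floor(strikes[:, 1] / 6 * ny).astype(int)
    counts = np.zeros((ny, nx), dtype=int)
    np.add.at(counts, (iy, ix), 1)     # histogram the strikes into pixels
    return counts

coarse = bin_counts(strikes, 3, 2)     # "six huge bins"
fine   = bin_counts(strikes, 6, 6)     # a finer 36-pixel grid

# fine -> coarse always works: sum 3x2 blocks of the fine grid.
rebinned = fine.reshape(2, 3, 3, 2).sum(axis=(1, 3))
print((rebinned == coarse).all())      # the reverse direction is impossible
```

The block sums reproduce the six-bin counts exactly, which is the asymmetry John is pointing at: information only ever flows from finer sampling to coarser.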

Quote
But smaller and better pixels are a good way to go, as long as we don't lose sight of all the image parameters, not just resolution.

The only dangers are losing some photons and gaining a small amount of shot noise, due to the need for space between photosites, and getting so much more read noise that the read-noise energy per unit of sensor area increases. But that trend does not exist in current cameras: area-based noise is actually lower in tiny-pixel cameras than in DSLRs of the same or even superior technology.

The area-based shot noise of a Panasonic FZ50 is 1/3 stop lower than a Canon 1Dmk2 at all common ISOs, and the area-based read noise is a stop lower at ISO 100.  The pixel-based read noise of the Panasonic is lower than the Nikon D2X at all ISOs, and the area-based read noise is 2 stops lower for the Panasonic!
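John's per-area bookkeeping works like this. The pitch and electron figures below are invented round numbers, not measurements of the cameras named above: per-pixel read noise adds in quadrature, so over a fixed area a sensor with N smaller pixels accumulates per-pixel noise times sqrt(N).

```python
import math

# Illustrative comparison of per-pixel vs per-area read noise.
# These pitch and read-noise figures are made up for the example.
cameras = {
    "small-pixel compact": {"pitch_um": 2.0, "read_noise_e": 3.0},
    "large-pixel DSLR":    {"pitch_um": 8.0, "read_noise_e": 16.0},
}

def area_read_noise(cam, area_mm2=1.0):
    """Total read noise (electrons) over a fixed sensor area:
    per-pixel noise adds in quadrature over area/pitch^2 pixels."""
    n_pixels = area_mm2 * 1e6 / cam["pitch_um"] ** 2
    return cam["read_noise_e"] * math.sqrt(n_pixels)

small = area_read_noise(cameras["small-pixel compact"])
large = area_read_noise(cameras["large-pixel DSLR"])
print(f"stops difference per area: {math.log2(large / small):.2f}")
```

With these hypothetical numbers the small pixels have lower per-pixel read noise and lower area-referred read noise, which is the shape of the comparison John is making; the real FZ50/D2X figures would need measurement.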

The scary "tiny pixel" stories are not coming true.  The scary "tiny sensor" stories are, and are being mistaken for "tiny pixel" issues.
Ray (Sr. Member, Posts: 8939)
« Reply #58 on: July 02, 2007, 08:59:10 PM »
The problem in comparing a Foveon sensor of a given pixel count with an equivalent Bayer-type sensor comes down to the value judgement one places on the strengths and weaknesses of the 2 systems. According to Popular Photography, a 10mp Bayer sensor will deliver higher resolution in B&W than the 4.7mp SD14, but the SD14 will deliver higher resolution in red and blue.

In other color combinations the 10mp Bayer system is as good as, and sometimes better than, the Foveon. Making a weighted assessment, one could justifiably say the SD14 is closer, on balance, to an 8mp Bayer-type camera.

Below is an interesting chart I found comparing the sensitivities of the 3 channels in the Foveon sensor with the cone sensitivity of the eye.

[attachment=2729:attachment]
Ray (Sr. Member, Posts: 8939)
« Reply #59 on: July 03, 2007, 07:52:34 PM »
Quote
The area-based shot noise of a Panasonic FZ50 is 1/3 stop lower than a Canon 1Dmk2 at all common ISOs, and the area-based read noise is a stop lower at ISO 100.  The pixel-based read noise of the Panasonic is lower than the Nikon D2X at all ISOs, and the area-based read noise is 2 stops lower for the Panasonic!

The scary "tiny pixel" stories are not coming true.  The scary "tiny sensor" stories are, and are being mistaken for "tiny pixel" issues.

John,
I'm surprised no-one has picked up on this: the area-based read noise of the P&S Panasonic FZ50 is 2 stops less than that of the Nikon D2X?

Such a statement implies that it would be possible, using current technology, to produce a 100mp APS-C sensor with noise performance at ISO 3200 at least as good as what's currently available in DSLRs, and with much higher resolution given good lenses - higher also because no AA filter would be needed.

I'm basing this calculation on the fact that a D2X sensor is approximately 10x the area of the FZ50's sensor.

The other implication is this: if one were to compare shots of the same scene from the FZ50 and the D2X, using the same physical aperture size in lenses of equivalent focal length, and the same exposure so that each pixel gets the same amount of light (which means something like f2.8 at ISO 100 for the FZ50, and f8 at ISO 800 for the D2X), then the FZ50 will produce the cleaner images. Right?
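Ray's f2.8/ISO 100 versus f8/ISO 800 pairing can be checked with rough crop factors (the sensor sizes are approximate assumptions: FZ50 around 1/1.8-type, D2X APS-C):

```python
# Rough equivalence arithmetic; both crop factors are approximations.
fz50_crop = 4.7      # approx full-frame crop factor of a 1/1.8" sensor
d2x_crop  = 1.5      # approx APS-C crop factor

ratio = fz50_crop / d2x_crop           # ~3.1x linear size difference

# Same field of view and same physical aperture diameter means the
# f-number scales with the ratio; matching the per-area exposure then
# scales ISO by ratio squared.
fz50_fstop, fz50_iso = 2.8, 100
d2x_fstop = fz50_fstop * ratio
d2x_iso   = fz50_iso * ratio ** 2

print(f"D2X equivalent: f/{d2x_fstop:.1f} at ISO {d2x_iso:.0f}")
```

With these assumed crop factors the D2X equivalent comes out near f/9 at roughly ISO 1000, so Ray's round figures of f8 and ISO 800 are in the right neighbourhood.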