Pages: « 1 [2] 3 4 5 »
Author Topic: Kodak's new sensor  (Read 23014 times)
Ray
Sr. Member

Posts: 8812


« Reply #20 on: June 24, 2007, 11:03:22 PM »

Quote
This new technology is mostly aimed at producing compact digital with better high iso image quality isn't it?

If anything, the trend in high end digital is more towards capturing MORE color information and not LESS like in the new Kodak technology.

Consumers don't really care about color accuracy, and noise is perceived as a much bigger issue. Pros see things the opposite way, and noise is in fact not so much of an issue for many applications.

Regards,
Bernard

Bernard,
That's a good point and you might be right. When Fuji announced its dual-pixel system, with one small pixel of low sensitivity for the highlights and one larger, normal pixel for the rest of the image, both pixels under the same micro-lens, the system was first introduced in P&S cameras and was much criticised for poor implementation of the design and only a marginal improvement in DR.

The same thing might happen with the new Kodak sensor. It'll probably take time for the new system to iron out its problems. But what better way than to experiment on P&S cameras with less critical consumers?   (Sorry if I sound elitist.)

Having just read 'What's New' on LL, there's a link to Mike Johnston's site and an interview: http://theonlinephotographer.typepad.com/the_online_photographer/2007/06/a_brief_intervi.html

Here are some relevant quotes from the interview.

Quote
T.O.P.: The human eye puts an emphasis on luminance information for the sake of image detail. Is the new sensor likely to increase the level of real detail in digital images?

John Hamilton: Not really. The panchromatic pixels function just like the green pixels of the Bayer pattern except that they are photographically faster. However, under low light conditions, the new patterns will outperform Bayer because of improved signal-to-noise.

T.O.P.: I appreciate that part of what will make this new array practical is that new interpolation algorithms will have to be devised for it, and some of that work is still in the future. But knowing what you know, do you anticipate that the likely problems or advantages will make the new array best suited for certain applications as opposed to others?

John Hamilton: The new filter patterns were designed with low-light conditions in mind, but it's too soon to say where they work best. Under well-controlled lighting conditions, such as in a studio, I would expect the new filters and a Bayer filter to be roughly equivalent.

T.O.P.: One last question—so how come the new array isn't named after its inventors, like the Bayer Array was named after its inventor, Kodak's Dr. Bryce Bayer, in 1976?

John Hamilton: We are just the tip of the iceberg. Many sensor and algorithm people are involved in bringing this technology forward.

As I understand it, you can't patent just an idea; it has to be accompanied by a practical implementation. The idea of a CFA like the one in this Kodak design must have been kicking around for years. Dr Bryce Bayer must have been aware of the option but decided against it, probably because other areas of technology were not sufficiently developed to make it work.

The fundamental principle here is that 2/3rds of the light impinging upon current Bayer type sensors is essentially wasted. There has to be a better way.

Technology is all about increasing the efficiency with which we use our resources, whether it's photons or oil.
jani
Sr. Member

Posts: 1603



« Reply #21 on: June 25, 2007, 05:12:29 PM »

Quote
The comparison of quantities of colored pixels in both designs is 16.6% red, blue and green for CFAv2 versus 25% red, blue and green for Bayer. (I've discounted the extra 25% of green because I believe this is for luminance purposes and I'm not sure how that contributes to over-all color accuracy).
As I understood it, the larger proportion of green comes from how the average human eye functions; green is simply more important.

Technically speaking, the proportion of blue should probably be lower than 25%.

Quote
If we now compare say a 20mp upgrade to the 400D, employing the new CFA, with the existing 10mp 400D, we could expect higher resolution from the 20MP camera without compromising dynamic range or high ISO performance. Agreed?
I'm not sure I can agree to that, because we don't yet know how this works, cf. what you mention about the problem of base ISO. But for the sake of the thought experiment, sure.

Quote
Lets compare color accuracy. The 10MP 400D has 2.5m red, blue and green pixels (plus 2.5m additional green for luminance purposes).

The new 20mp 400D has 3.3m red, blue and green pixels (plus 10m for luminance).

Comparing the final images, one has 2.5m items of red data and the other has 3.3m items of red data. Which is more accurate? Is color accuracy even going to be an issue with such high pixel density?
Colour accuracy is always an issue.

Quote
I know you could argue that a 20MP Bayer sensor would have 5m items of red data and that 5m is better than 3.3m, but that argument discounts the role of the new algorithms for the CFAv2.
I would argue that Kodak say that CFAv2 vs. Bayer is roughly at parity with normal images, but that it's unclear what pixel peepers would see.

Again, it's a bit like Foveon vs. Bayer.

Quote
The other issue which I think deserves more investigation is, "Just how much color information does the human eye require in order to get a realistic sense of accurate color in a scene?"
That clearly depends on how closely you inspect the image in question, as well as on interference effects.

Quote
I'll mention just two observations which make me think it is far less than you suppose.
That assumes that you know how much I suppose is necessary, but it also requires that you answer the question: "necessary for what?"

I agree that it's possible to compress information very well with a minimal loss of visual impact.

The evidence for that lies not only in JPEG vs. TIFF-RGB, but also in GIF.

However, this does not necessarily stand up to scrutiny in all cases.

Again, assuming that Kodak are right in their claims, this new pattern will -- with assumed future improvements in demosaicing algorithms -- be at parity with the Bayer pattern under well-lit conditions.

So how, exactly, is this "far less than I suppose"?

Jan
Ray
Sr. Member

Posts: 8812


« Reply #22 on: June 25, 2007, 06:33:56 PM »

Quote
Again, assuming that Kodak are right in their claims, this new pattern will -- with assumed future improvements in demosaicing algorithms -- be at parity with the Bayer pattern under well-lit conditions.

So how, exactly, is this "far less than I suppose"?

I was addressing your view that color accuracy would suffer as a result of a 'far' smaller proportion of the total pixels on the sensor providing color information.

Clearly the success of this design depends to a large degree on this factor. For my part, I don't require accurate color; only believable, pleasing and controllable color.

What I can achieve scanning faded color slides and negatives makes me an optimist on this issue.
Ray
Sr. Member

Posts: 8812


« Reply #23 on: June 25, 2007, 10:17:29 PM »

However, I admit there's some funny math going on here which is causing me to make some basic mistakes in my own reasoning. If half the pixels receive 3x as much light because they are panchromatic, and the other half with a color filter have the same quantum efficiency and a base ISO of 100, then the panchromatic pixels effectively have a base ISO of 300, not 800 as I previously suggested.

If half the pixels get 3x as much light, the whole sensor collects on average 2x as much light, which equates to just a 1 stop improvement.

Since Kodak are claiming a 1-2 stop improvement, there must be something else going on which they are not mentioning. Perhaps they are combining the new CFA with an improved sensor having greater quantum efficiency.

Or perhaps they have a design which selectively applies more analogue preamplification to the signal from the color pixels, prior to digitisation.

On the other hand, 3x the signal produces only about 1.7x (√3) the photonic shot noise, and read noise does not increase at all, so the improved signal-to-noise ratio might account for part of the claimed 1-2 stop improvement.
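A rough sketch in Python of the arithmetic above (the 3x figure and the 50/50 split are assumptions from this thread, not Kodak specifications):

```python
import math

# Assumed layout from the discussion: half the photosites are panchromatic
# and see ~3x the light of a colour-filtered photosite.
PAN_GAIN = 3.0
PAN_FRACTION = 0.5

# Average light collected by the whole sensor, relative to an all-filtered one.
avg_gain = PAN_FRACTION * PAN_GAIN + (1 - PAN_FRACTION) * 1.0

def snr_gain_stops(light_gain):
    """Stops of SNR improvement in the shot-noise-limited regime,
    where noise grows as the square root of the signal."""
    return math.log2(math.sqrt(light_gain))

print(avg_gain)                            # 2.0, i.e. exactly one stop more light
print(round(snr_gain_stops(PAN_GAIN), 2))  # ~0.79 extra stops of SNR at the pan pixels
```

The point of the second number: even where the extra light is fixed at 3x, shot-noise statistics alone only buy √3 in SNR, which is why a 1-2 stop claim suggests something more is going on.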
« Last Edit: June 25, 2007, 10:34:45 PM by Ray »
jani
Sr. Member

Posts: 1603



« Reply #24 on: June 26, 2007, 05:54:13 PM »

Quote
Since Kodak are claiming a 1-2 stop improvement, there must be something else going on which they are not mentioning. Perhaps they are combining the new CFA with an improved sensor having greater quantum efficiency.
This is also mentioned in the blog you first linked to:

Quote
JH: Clearly the color filter pattern and the software interpolation are different with this approach. What's more, the arrangement of the photoreceptors can be changed, but that's not a requirement.

So the answer is, apparently, that improved quantum efficiency in the sensor isn't a requirement.

Besides, improved quantum efficiency would also benefit Bayer or Foveon-alike arrays.

Jan
Ray
Sr. Member

Posts: 8812


« Reply #25 on: June 26, 2007, 06:58:07 PM »

Quote
This is also mentioned in the blog you first linked to:
So the answer is, apparently, that improved quantum efficiency in the sensor isn't a requirement.

Besides, improved quantum efficiency would also benefit Bayer or Foveon-alike arrays.

That's true. Improved quantum efficiency would always be welcome but that is not necessarily a feature of this new design.

Nevertheless, I can see no point in having a bunch of color pixels that never reach full well capacity at base ISO, so either those color pixels should be smaller than the panchromatic pixels or the voltage generated by the color pixels should be subject to more analogue gain prior to A/D conversion.

Improved quantum efficiency will of course benefit all designs, but it has nothing to do with the increased sensitivity of the panchromatic pixels, which is due entirely to the removal of the color filter, allowing more photons to reach the photoreceptor for any given exposure.

Your concern about color accuracy taking a backward step in this new design is a valid concern and I guess that is the major technological hurdle to be overcome here.
jani
Sr. Member

Posts: 1603



« Reply #26 on: June 26, 2007, 08:03:11 PM »

Quote
Nevertheless, I can see no point in having a bunch of color pixels that never reach full well capacity at base ISO, so either those color pixels should be smaller than the panchromatic pixels or the voltage generated by the color pixels should be subject to more analogue gain prior to A/D conversion.
Or perhaps they should be larger, in order to capture more light.

This is pretty interesting! Perhaps innovations in sensor design will make it possible to have differently sized sensor wells for different colours, in order to match normalized human colour perception better.

I'd also like to see what can be done about the strict matrix form of sensors; how about a honeycomb design instead?

Jan
Ray
Sr. Member

Posts: 8812


« Reply #27 on: June 26, 2007, 09:15:47 PM »

Quote
Or perhaps they should be larger, in order to capture more light.


Come to think of it, making them larger or smaller will not fix the discrepancy in sensitivity between the panchromatic and color pixels, assuming light-gathering potential is proportional to area. Big or small, the color pixel will receive fewer photons per unit area of sensor.

But what might make sense is to have the same pixel spacing, ie. same pixel pitch, same size microlenses for both types of pixels, but the color photodiode could be smaller because it will always receive a smaller amount of light compared with the panchromatic pixels.

This would allow more room under each color filter for additional processors on the CMOS sensor; better analog pre-amplifiers etc. to help compensate for the lower sensitivity of the color photoreceptors and thus ensure adequate color accuracy.

Voila! I've just solved the design problem.
BJL
Sr. Member

Posts: 5085


« Reply #28 on: June 27, 2007, 06:24:55 AM »

Quote
... Big or small, the color pixel will receive fewer photons per unit area of sensor.

But what might make sense is to have the same pixel spacing, ie. same pixel pitch, same size microlenses for both types of pixels, but the color photodiode could be smaller because it will always receive a smaller amount of light compared with the panchromatic pixels.
For optimal DR, wouldn't it be better to increase the highlight headroom of the "white" pixels, with something like Fuji's SR technology?

I am wondering about other patterns, since Kodak says others are possible. Like
WR
BG
for maximal colour information, or at least
WGWG
BWRW
WGWG
RWBW
which is half white, the other half a Bayer pattern rotated 45 degrees (to go on the "black" squares of an imaginary chess board), decreasing the linear density of each colour of photosite by a factor of 1.4 (√2).
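For what it's worth, the colour mix of that 4x4 tile can be tallied directly, just counting the letters as written above:

```python
from collections import Counter

# The proposed tile: half "white" (panchromatic), the other half a
# Bayer-like layout on the diagonal squares.
tile = ["WGWG",
        "BWRW",
        "WGWG",
        "RWBW"]

counts = Counter("".join(tile))
print(dict(counts))  # {'W': 8, 'G': 4, 'B': 2, 'R': 2} of 16 photosites
```

So of 16 photosites, 8 are panchromatic and the colour half keeps Bayer's 2G:1R:1B ratio, which is consistent with the √2 linear-density figure.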
John Sheehy
Sr. Member

Posts: 838


« Reply #29 on: June 27, 2007, 07:25:52 AM »

Quote
For optimal DR, wouldn't it be better to increase the highlight headroom of the "white" pixels, with something like Fuji's SR technology?

The problem Kodak claims to be addressing is low sensitivity, not a lack of highlight headroom.

Quote
I am wondering about other patterns, since Kodak says others are possible. Like
WR
BG
for maximal colour information, or at least
WGWG
BWRW
WGWG
RWBW
which is half white, the other half a Bayer pattern rotated 45 degrees (to go on the "black" squares of an imaginary chess board), decreasing the linear density of each colour of photosite by a factor of 1.4 (√2).

The pattern in the news blips looks like it is optimized for binning, as you can reduce 2x2 tiles into single pixels with one pan value and one colored value, with no translation error, as the new pixels are centered on the center of the original pan and colored pairs.
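A sketch of the binning John describes, under the layout he infers (this is an illustration, not Kodak's documented pipeline): each 2x2 tile holds two panchromatic samples and two samples of one colour on a checkerboard, so binning yields one pan value and one colour value whose centroids coincide at the tile centre, with no spatial offset.

```python
def bin_tile(tile):
    """tile: four (kind, value) pairs from a 2x2 block,
    kind 'P' for panchromatic or a colour letter ('R', 'G', 'B')."""
    pans = [v for k, v in tile if k == "P"]
    cols = [v for k, v in tile if k != "P"]
    # Average each population; the diagonal placement means both averages
    # are centred on the middle of the tile.
    return sum(pans) / len(pans), sum(cols) / len(cols)

# One pan/green checkerboard tile.
tile = [("P", 200), ("G", 60),
        ("G", 70),  ("P", 210)]
print(bin_tile(tile))  # (205.0, 65.0)
```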
Ray
Sr. Member

Posts: 8812


« Reply #30 on: June 27, 2007, 09:37:19 AM »

Quote
For optimal DR, wouldn't it be better to increase the highlight headroom of the "white" pixels, with something like Fuji's SR technology?

BJL,
I tend to agree with John here. By employing the Fuji concept you'd be taking one step forward by increasing the sensitivity of the panchromatic pixels, and one step backwards by reducing the sensitivity of the 'highlight' pixel under the same microlens. You'd be back to square one: any exposure sufficient to fill the well of the photodiode under a color filter would overfill the well of the normal panchromatic pixel.

Some of these new patterns do look as though they are designed for binning.

[attachment=2703:attachment]

In Pattern A, for example, there are panchromatic pixels that are not adjacent to any red pixel at any side or corner. Without binning one would suppose that color interpolation would be rather inaccurate with such a pattern.
jani
Sr. Member

Posts: 1603



« Reply #31 on: June 28, 2007, 05:58:19 AM »

Quote
Come to think of it, whether they are larger or smaller will not fix the discrepancy in sensitivity between the panchromatic and color pixels, assuming the light gathering potential is proportional to the area. Big or small, the color pixel will receive fewer photons per unit area of sensor.
While it was only a half-arsed idea, the idea was that larger sensor sites for the filtered colours would mean more light per colour site, hence reducing the discrepancy.

Jan
BJL
Sr. Member

Posts: 5085


« Reply #32 on: June 28, 2007, 07:56:17 AM »

To John: Kodak's current stated goal is sensitivity (probably for small digicam sensors), but surely it would not hurt in the future to improve both sensitivity and dynamic range, by matching the sensor's luminosity/colour resolution characteristics more closely to those of our eyes? Not to mention trying to better match the DR of our eyes.

To Ray:
Quote
By employing the Fuji concept you'd be taking one step forward by increasing sensitivity of the panchromatic pixels and one step backwards by reducing the sensitivity of the 'highlight' pixel under the same microlens ... Any exposure sufficient to fill the well of the photodiode under a color filter, would overfill the well of the normal panchromatic pixel.
I do not see why using the two-photodiode "SR" type photosites would cause problems with highlight photodiode sensitivity, or with performance compared to conventional single-photodiode photosites. Those highlight "S" photodiodes provide information that is only needed at very well-lit photosites (ones where the main "R" photodiode is blown out), so they can easily have enough sensitivity and a high S/N ratio while being very small, receiving a small fraction of all the light reaching the photosite, and interfering very little with the performance of the main photodiode.
Ray
Sr. Member

Posts: 8812


« Reply #33 on: June 28, 2007, 09:53:21 AM »

Quote
While it was only a half-arsed idea, the idea was that larger sensor sites for the filtered colours would mean more light per colour site, hence reducing the discrepancy.


More light for the colored pixels means disproportionately less light for the panchromatic pixels. Put simply, though not totally accurately I know, on average only one out of every 3 photons arriving at a color filter gets through; the other 2 are filtered out. The greater the area covered by panchromatic pixels, the better the low-light performance of the sensor; the greater the area covered by color-filtered pixels, the worse it is.
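That tradeoff can be put in one line, using the same rough 1-in-3 filter transmission from this thread (an assumed figure, not a measured one):

```python
# Average light collected per unit sensor area, relative to an unfiltered
# sensor, as a function of the fraction of area given to panchromatic pixels.
def avg_transmission(pan_fraction, filter_transmission=1/3):
    return pan_fraction * 1.0 + (1 - pan_fraction) * filter_transmission

print(round(avg_transmission(0.5), 3))  # 0.667 with half the area panchromatic
print(round(avg_transmission(0.0), 3))  # 0.333 for an all-filtered (Bayer-like) sensor
```

Shrinking the pan fraction to favour the colour pixels moves the average straight back toward the all-filtered case.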
Ray
Sr. Member

Posts: 8812


« Reply #34 on: June 28, 2007, 10:35:17 AM »

Quote
I do not see why using the two-photodiode "SR" type photosites would cause problems with highlight photodiode sensitivity, or with performance compared to conventional single-photodiode photosites. Those highlight "S" photodiodes provide information that is only needed at very well-lit photosites (ones where the main "R" photodiode is blown out), so they can easily have enough sensitivity and a high S/N ratio while being very small, receiving a small fraction of all the light reaching the photosite, and interfering very little with the performance of the main photodiode.

BJL,
I don't see any reason why your suggestion of an SR type design for the panchromatic pixels could not be an alternative design providing improved dynamic range at base ISO. Perhaps one step forward and one step backward is an exaggeration. Shall we say, 2 steps forward and one back.

The design goal of the new Kodak sensor (as I understand it) is to provide 1 to 2 stops less noise at high ISO by allowing the sensor to collect more photons with the same exposure. The implication is that base ISO in such a sensor would be more like ISO 300 than ISO 100 because 50% of the area of the sensor is processing 3x the number of photons, with the same exposure (on average).

If you were to place a normal, sensitive panchromatic pixel next to a smaller insensitive pixel, both under the same microlens, the photons directed at the less sensitive pixel would be unproductive at high ISO. Read noise and shot noise would overwhelm the signal, but presumably the signal would still be read and the noise added to the combined signal from both pixels at that site.

Furthermore, since the standard panchromatic pixels will, for any given exposure at high ISO, receive fewer photons (some having been directed to the unproductive highlight pixel), the standard pixels will also be noisier before the two signals are combined.

Just applying a bit of logical reasoning   .
BJL
Sr. Member

Posts: 5085


« Reply #35 on: June 28, 2007, 05:03:55 PM »

Ray, perhaps you misunderstand my proposal, which is to use an SR-style sensor with the new Kodak "partial colour filter array" over it, mostly for the benefit of the "panchromatic" pixels, to effectively bring their base ISO back down to about ISO 100 despite their greater illumination. In fact, at the colour-filtered pixels the SR effect may be unneeded, and the output of the two photodiodes could simply be binned.

As to the following claim, I thought I had already refuted it in my previous post:
Quote
Furthermore, because the standard panchromatic pixels, for any given exposure at high ISO, will receive fewer photons because some have been directed to the unproductive highlight pixel, the standard pixels will also have more noise, before the two signals are combined.
But as I indicate above, both theory and experiment (the high ISO performance of Fuji's SR sensors) suggest that this effect is probably not significant, even if it occurs to a small degree. The point is that only a very small fraction of the light needs to be sent to the "highlight" pixels, since they only need to work well at high illumination levels, so the percentage of light lost to the main "shadow" pixels can be very small.
EricV
Full Member

Posts: 122


« Reply #36 on: June 28, 2007, 07:48:07 PM »

Quote
Nevertheless, I can see no point in having a bunch of color pixels that never reach full well capacity at base ISO ....
The lower sensitivity of the color pixels could be used to extend the dynamic range of the sensor.  In a very high contrast scene, "expose to the right" could have a new meaning -- expose so that the panchromatic pixels are saturated, but the color pixels are not.  The color pixels would then reach nearly full well capacity.  The interpolation algorithm in the raw converter could work around the saturated panchromatic pixels in the highlights (with some loss of resolution).  Shadow noise would benefit from the increased exposure.
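Rough numbers for this suggestion, again taking the thread's assumed 3x pan-vs-colour sensitivity ratio: if the pan pixels saturate at one third the exposure of the colour pixels, then exposing for the colour pixels instead pushes log2(3) extra stops of light into the shadows.

```python
import math

sensitivity_ratio = 3.0  # pan vs. colour-filtered sensitivity, assumed in this thread

# Extra exposure (in stops) gained by exposing until the colour pixels,
# rather than the pan pixels, approach full well.
extra_stops = math.log2(sensitivity_ratio)
print(round(extra_stops, 2))  # ~1.58 stops of extra shadow exposure
```

The cost, as noted above, is that the pan pixels clip in the highlights and the converter must fall back on colour pixels there, trading some resolution for shadow noise.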
Ray
Sr. Member

Posts: 8812


« Reply #37 on: June 28, 2007, 09:24:09 PM »

Quote
But as I indicate above, both theory and experiment (the high ISO performance of Fuji's SR sensors) suggest that this effect is probably not significant, even if it occurs to a small degree. The point is that only a very small fraction of the light needs to be sent to the "highlight" pixels, since they only need to work well at high illumination levels, so the percentage of light lost to the main "shadow" pixels can be very small.

BJL,
I have no precise information on how Fujifilm's SR system works in practice; I'm aware of the principle only from schematic diagrams on sites such as dpreview. I recall reading some of the comments on Fuji's first P&S camera that employed the system: the improvements in DR were, as I recall, very marginal, and I lost interest. I haven't followed the implementation of this design in subsequent models.

I got the impression you were promoting this SR type arrangement as a way of equalising the sensitivities of the panchromatic and colour-filtered pixels, so that full 'exposure to the right' (in respect of the colour-filtered pixels) could be achieved without blowing out the panchromatic pixels.

Have you not underestimated the increase in sensitivity of the panchromatic pixels? They are three times more sensitive. Your idea of directing only a very small fraction of the light to the highlight pixels will not pass muster; we're still stuck with an inequality of sensitivities.
AJSJones
Sr. Member

Posts: 353



« Reply #38 on: June 28, 2007, 09:35:51 PM »

Quote
[attachment=2703:attachment]

In Pattern A, for example, there are panchromatic pixels that are not adjacent to any red pixel at any side or corner. Without binning one would suppose that color interpolation would be rather inaccurate with such a pattern.

Ray, I think there's a lot of opportunity for development in the new algorithms for decoding such an array. Top-leftish in that image is a pan pixel surrounded by 2G and 2B (of the type you are concerned about). The pan pixel's G and B are therefore probably "well guessed" by interpolation, right? And the pan (Y) value is the sum of R, G and B (in some way known to the array engineers), so the R value can be deduced quite well. The same goes for the pixels with no blue touching them: they're well informed by the 2R and 2G, so the B value is "well deduced". A bit like video encoding at 4:2:2 or 4:1:1. The maths is way beyond me, but the idea that there's not as much loss of colour resolution as meets the eye, so to speak, is reasonable.
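A toy version of the subtraction described above (the numbers are invented for illustration; a real pan pixel's spectral response is not an exact sum of R, G and B, so a real demosaicker would use calibrated weights):

```python
# If pan ~= R + G + B at a photosite, and G and B can be interpolated from
# the four colour neighbours, the missing R falls out by subtraction.
def deduce_red(pan, g_est, b_est):
    return pan - g_est - b_est

pan = 180               # measured panchromatic value at the site
g_est, b_est = 90, 40   # interpolated from the 2G and 2B neighbours
print(deduce_red(pan, g_est, b_est))  # 50
```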

Andy
Ray
Sr. Member

Posts: 8812


« Reply #39 on: June 28, 2007, 10:47:25 PM »

Andy,
I have always had the impression that color information does not need to be as accurate as luminance information for acceptable or pleasing results. It's possible nowadays, after all, to colorize B&W movies, and restoring faded color when scanning a slide is not difficult.

The only problem I foresee is when people want pixel-level accuracy with regard to color. If the pan pixel we're referring to, the one not in contact with any red pixel, were in fact capturing a small speck on a textured surface as small as, or smaller than, the pixel pitch, such a tiny speck could be either red, green or blue and there would be no way of determining which.
