Author Topic: Kodak's new sensor  (Read 25021 times)
John Sheehy
Sr. Member

Posts: 838


« Reply #60 on: July 04, 2007, 12:16:36 AM »

Quote
The problem comparing a Foveon sensor of a given pixel count with an equivalent Bayer type sensor is due to the value judgement one places on the strengths and weaknesses of the 2 systems. According to Popular Photography, a 10mp Bayer sensor will deliver higher resolution in B&W than the 4.7mp SD14, but the SD14 will deliver higher resolution in Red & Blue.

Then, what happens with green vs blue, or even closer with emerald vs turquoise?  The latter would be a disaster, I would think; the noise would be as strong as the real contrast.

There is also an issue of interpretation with B&W.  What is resolution?  Is it something that you can achieve sometimes and sometimes not, or is it something that must be "resolved" consistently, without any dependence on the luck of alignment?  A Sigma camera can "resolve" as many lines as the sensor has rows or columns of pixels, but shift the registration just 0.5 pixels and it sees nothing at all.  Vary from that resolution by a small amount, and you get patterns of alternating resolution and grey across the frame.  Is this what we really want to call "resolution"?  I think the Sigmas get to cheat on resolution tests, because the foundation for resolution tests is built upon film, where aliasing is impossible and resolution depends on hints of contrast rolling off at a taper.  Aliased Sigma images seem to have greater resolution than they actually do, because of a loophole in the way resolution is measured.  Those sharp edges in aliased images are sharp, but they're in the wrong place; they're distortion, not resolution, IMO.

Quote
Below is an interesting chart I found comparing the sensitivities of the 3 channels in the Foveon sensor with the cone sensitivity of the eye.
[attached chart: Foveon channel sensitivities vs. eye cone sensitivities]

The Bayer response is generally like the one for the eye, except that the green and red are as well-separated as the blue is from the green.  Compare the heights of the crossover points relative to the peaks in both systems.

The fact that the Foveon separates green from red better than the eye is not an overkill "plus" for the Foveon; the Foveon does not have the brain fabricating the illusion of noiseless color for it.

Also, the curves I see for the Foveon in your image look like the Foveon with the suggested filter for separating blue and green better, which Sigma does not use, as it increases cost and lowers overall sensitivity.
Jonathan Wienke
Sr. Member

Posts: 5759



« Reply #61 on: July 04, 2007, 04:14:48 AM »

Quote
Kodak have not addressed what this does to the risk of blown highlights, so we don't really know if this will give a higher dynamic range.

Of course it will. All of the pixels in the silicon have the same sensitivity (within manufacturing tolerances) before the color filter array is attached. Afterward, the filtered pixels are getting about 1/3 of the light the unfiltered pixels get. So the unfiltered pixels will clip about 1 1/2 stops before the filtered pixels will, and the unfiltered pixels will record usable detail 1 1/2 stops deeper into the shadow areas than the filtered pixels.

Think of it as two separate sensors on the same chip (one color, one monochrome) with a 1 1/2-stop ISO difference.
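The stop arithmetic behind this claim is easy to retrace; a quick sketch, using the post's own ~1/3 transmission assumption:

```python
import math

# If a color filter passes roughly 1/3 of the light (the post's
# assumption), the unfiltered pixels see ~3x as much light and
# therefore clip earlier by log2(3) stops.
light_ratio = 3.0
stop_difference = math.log2(light_ratio)
print(f"{stop_difference:.2f} stops")  # 1.58, i.e. roughly "1 1/2 stops"
```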
« Last Edit: July 04, 2007, 04:22:15 AM by Jonathan Wienke »

Ray
Sr. Member

Posts: 8907


« Reply #62 on: July 04, 2007, 08:06:08 AM »

Quote
Then, what happens with green vs blue, or even closer with emerald vs turquoise?  The latter would be a disaster, I would think; the noise would be as strong as the real contrast.

Don't know, John. The Pop Photography review tested concentric circles on a color chart pairing the following colors: Green/White (win for the Nikon D80); Magenta/Black (win for the D80); Yellow/Blue (win for the SD14); Green/White (win for the D80); Cyan/Red (win for the SD14).

The D80 had better performance in 3 out of the 5 tests, but also shows better performance in B&W, so that's 4 out of 6 in favour of the D80.

Quote
But black-and-white test targets for measuring resolution don't show as much detail as Foveon's 14.1MP count implies. Analysis of the IT-10 black-and-white resolution target we use in the Pop Photo Lab finds the Sigma SD14 on par with a good 8-9MP camera (in RAW mode), but not in the same class as 10MP models such as the Nikon D80.

Did you think in my previous post I was claiming the SD14 showed superior B&W resolution??

It seems that many Sigma fans are of the opinion that Pop Photography's review of the SD14 is too harsh and is biased in favour of the D80. I don't have any axe to grind here but would make the point that for most photographic purposes that are not strictly scientific, a bit of false detail is not necessarily objectionable. If it enhances the general appearance, that's fine by me. Don't most of us here spend a lot of time manipulating images in Photoshop to get a particular effect? Getting the most objectively accurate effect is not always the goal.

If my landscape shot includes a building with a balcony and balustrade, it wouldn't necessarily matter to me if my SD14 gave the impression there were 25 vertical struts in the balustrade, when in fact there are only 20   .

Here's the link to the review.  http://www.popphoto.com/cameras/4276/foveo...o-the-test.html
jani
Sr. Member

Posts: 1604



« Reply #63 on: July 04, 2007, 08:53:50 AM »

Quote
Quote
Kodak have not addressed what this does to the risk of blown highlights, so we don't really know if this will give a higher dynamic range.
Of course it will. All of the pixels in the silicon have the same sensitivity (within manufacturing tolerances) before the color filter array is attached. Afterward, the filtered pixels are getting about 1/3 of the light the unfiltered pixels get. So the unfiltered pixels will clip about 1 1/2 stops before the filtered pixels will, and the unfiltered pixels will record usable detail 1 1/2 stops deeper into the shadow areas than the filtered pixels.
That much is clear, but that doesn't necessarily translate into a wider dynamic range.

If all sensor wells clip at the same light levels, the unfiltered ones will be limiting the usable highlight range by 1.5 stops.
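A toy model of this trade-off (my own illustrative numbers, not Kodak's): each pixel type spans the same dynamic range in its own terms, and the unfiltered pixels just shift that window down the scene's exposure scale, so the total range only widens if the converter really uses both channels.

```python
import math

# Assumed: both pixel types share the same full-well and noise floor,
# so each spans the same per-pixel DR; the unfiltered window sits
# about log2(3) stops lower in scene exposure.
dr_per_pixel = 10.0                   # stops, illustrative
shift = math.log2(3)                  # ~1.58 stops

filtered_window = (0.0, dr_per_pixel)               # scene stops above noise floor
unfiltered_window = (-shift, dr_per_pixel - shift)

# The unfiltered (luminance) channel runs out of highlight headroom
# `shift` stops early, while its shadow reach extends by the same
# amount; the union of the two windows is what a combining converter
# could recover.
combined_span = filtered_window[1] - unfiltered_window[0]
print(f"combined: {combined_span:.2f} stops vs {dr_per_pixel:.0f} per pixel")
```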

Jan
John Sheehy


« Reply #64 on: July 04, 2007, 09:52:40 AM »

Quote
John,
I'm surprised no-one has picked this up; the area based read noise of the P&S Panasonic FZ50 is 2 stops less than the Nikon D2X?

That's because the common paradigm is bent on maligning small pixels because of the small sensors they are coincidentally found in.

Yes, the total blackframe read noise at the pixel level with the FZ50 is 2.7 ADU at ISO 100, and roughly scales with ISO (except that 200 is slightly better than scaled, and 1600 and especially 800 slightly worse than scaled), and is almost 4 ADU with the D2X at ISO 100, scaling to about 60 at ISO 1600.  The pixel pitch ratio is 5.5:1.97, or 2.79:1.  The binned read noise is therefore 1/2.79 as much for the FZ50 at the D2X pixel spacing, for 2.7/2.79 or 0.97 ADU, compared to almost 4 for the D2X, which is considered a camera with good image quality as long as you don't need deep shadow areas at all ISOs, or any shadow areas at high ISOs.

I don't mention binning because I think it is good to trade off resolution for low pixel-level noise.  I mention it because it shows that, at the very least, you can get this little noise.  I think it is far better, however, to filter/resample noise only at spatial frequencies above those at which you expect to see detail (or at which detail is available).
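For anyone retracing the binning arithmetic above, a quick sketch with the figures taken from the post:

```python
import math

# Read noises of independent pixels add in quadrature, so binning n
# small pixels multiplies read noise by sqrt(n) while the signal grows
# n-fold: a net gain of sqrt(n) relative to signal.
fz50_read_noise = 2.7      # ADU at ISO 100, per the post
pitch_ratio = 5.5 / 1.97   # D2X pitch / FZ50 pitch, ~2.79

n = pitch_ratio ** 2                      # FZ50 pixels per D2X-sized area (~7.8)
binned = fz50_read_noise * math.sqrt(n)   # absolute read noise of the binned pixel
relative = binned / n                     # ...relative to the n-fold signal
print(f"{relative:.2f}")                  # ~0.97, matching the 2.7/2.79 figure
```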

Quote
Such a statement implies that it would be possible using current technology to produce a 100mp APS-C sensor with noise performance at ISO 3200 at least as good as what's currently available in DSLRs, and resolution of course much higher with good lenses, and higher also due to the lack of a need for an AA filter.

I'm basing this calculation on the fact that a D2X sensor is approximately 10x the area of the FZ50's sensor.

The other implication is, if one were to compare shots of the same scene using the FZ50 and D2X and use the same physical aperture size in lenses of equivalent focal length, and use the same exposure so each pixel gets the same amount of light (which means something like f2.8 and ISO 100 for the FZ50, and f8 and ISO 800 for the D2X) then the FZ50 will produce cleaner images. Right?

Yes, unless your definition of "clean" is based on pixel-level cleanliness.  Lots of people seem to be fixated on that.
John Sheehy


« Reply #65 on: July 04, 2007, 10:02:47 AM »

Quote
Did you think in my previous post I was claiming the SD14 showed superior B&W resolution??

I thought you suggested it in previous posts.  Perhaps it was an unqualified use of the word "resolution".

Quote
I don't have any axe to grind here but would make the point that for most photographic purposes that are not strictly scientific, a bit of false detail is not necessarily objectionable. If it enhances the general appearance, that's fine by me.

The only time a little aliasing doesn't bother me is when I am looking at a tiny image, in which case I don't expect any image quality anyway, and the contrasty edges make my brain feel like it has achieved focus.  I would not want to have to make prints from an aliased image, though, as I can clearly see the pixel grid implied in the mislocated edges, and the emphasis on horizontal and vertical edges, especially ones that are known to be at slight angles, snapped to 0 degrees or 90 degrees.

Quote
Don't most of us here spend a lot of time manipulating images in Photoshop to get a particular effect? Getting the most objectively accurate effect is not always the goal.

I don't try to get contrasty edges in the wrong places; no.

Quote
If my landscape shot includes a building with a balcony and balustrade, it wouldn't necessarily matter to me if my SD14 gave the impression there were 25 vertical struts in the balustrade, when in fact there are only 20   .

It's not just the count that will be wrong, though, the location of the edges of the struts will be distinct and wrong.  That bothers my brain.
John Sheehy


« Reply #66 on: July 04, 2007, 10:24:38 AM »

Quote
Of course it will. All of the pixels in the silicon have the same sensitivity (within manufacturing tolerances) before the color filter array is attached. Afterward, the filtered pixels are getting about 1/3 of the light the unfiltered pixels get. So the unfiltered pixels will clip about 1 1/2 stops before the filtered pixels will, and the unfiltered pixels will record usable detail 1 1/2 stops deeper into the shadow areas than the filtered pixels.

Think of it as two separate sensors on the same chip (one color, one monochrome) with a 1 1/2-stop ISO difference.

I don't think that the green-filtered pixels are losing 2/3 of the light; it is probably more like half.  The green channel is the broadest and the most sensitive.  I would guess that an unfiltered pixel would be about a stop more sensitive than green, 1.5 stops more than blue, and 2 stops more than red, to white light.  The total light collected in 16 pixels would be 8*1 + 4*0.5 + 2*0.35 + 2*0.25 = 11.2 units of light with half the pixels unfiltered, and 8*0.5 + 4*0.35 + 4*0.25 = 6.4 units of light with all the pixels filtered; slightly less than 1 stop less sensitive overall to white light.

The question is how you use this data.  The arrangement is clearly not 100% beneficial; if all the unfiltered pixels clip, what you're left with is only half of the quantum efficiency you would have had with all filtered pixels, so exposure has to be lower, increasing the noise in the filtered pixels, requiring lots of filtering of chroma noise.
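The 16-pixel tally above can be reproduced directly (the per-channel sensitivities to white light, unfiltered 1, green 0.5, blue 0.35, red 0.25, are the post's guesses, not measured values):

```python
import math

# Half-panchromatic array: 8 unfiltered, 4 green, 2 blue, 2 red pixels.
half_pan = 8 * 1 + 4 * 0.5 + 2 * 0.35 + 2 * 0.25
# Standard Bayer layout: 8 green, 4 blue, 4 red pixels.
all_cfa = 8 * 0.5 + 4 * 0.35 + 4 * 0.25

print(f"{half_pan:.1f} vs {all_cfa:.1f} units")   # 11.2 vs 6.4
print(f"{math.log2(half_pan / all_cfa):.2f}")     # 0.81 stop, "slightly less than 1"
```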
Ray


« Reply #67 on: July 04, 2007, 09:41:37 PM »

This is a very confused situation, John. I don't pretend I'm not confused by the implications of this mixture of panchromatic and monochromatic pixels.

Until we get to the stage where all the processors of the signal are on the reverse side of the chip, allowing the photoreceptor size to be approximately equal to the pixel pitch, there's going to be some trade-off between the area allocated for the photoreceptor and the area allocated for all the on-board processors.

In this particular design from Kodak, I think it would be better to reduce the size of the pixels under a color filter to make more room for elaborate processing of the weaker color signal.

In other words, accept that the true ISO of the camera relates to the sensitivity of the panchromatic pixels (which means that the pixels under a color filter will never reach full well if they are the same size), reduce the size of those pixels under a color filter, but maintain the size of the micro-lens which directs the light to the color photoreceptors. This will ensure that at full exposure to the right, both the panchromatic pixels and the color-filtered pixels will reach full well capacity.

Clearly, the dynamic range and noise characteristics of the color pixels will suffer under this arrangement, but there's more space on the chip for on-board processing; better analogue preamplifiers etc.

ps. The design of the microlenses covering the color pixels would also have to be different in order to direct the light to the smaller area of the photoreceptor.
« Last Edit: July 05, 2007, 03:47:12 AM by Ray »
st326
Jr. Member

Posts: 62


« Reply #68 on: July 08, 2007, 03:30:49 PM »

Quote
Having had further thoughts about this new color filter array, (and I suppose it's not so much a new sensor as a new way of filtering the light), I'm wondering if the claimed 1-2 stop increase in sensitivity is an exaggeration. All the patterns I've seen consist of just half of the number of pixels, in total, having the color filter removed, ie. becoming panchromatic.

If one calculates on the basis that each filter covering each pixel in the Bayer type array filters out 2/3rds of the light, then removing color filters from half of the sensors should cause only 1/3rd of the light to be blocked, and that represents a one stop improvement in sensitivity. So how can we get up to two stops improvement? Is Kodak referring to the variability of scene content or sensor design, or both? For example, with the current Bayer type sensor, a scene that is predominantly green will be less noisy at high ISO than a scene that is predominantly red and blue.

If we take an average of the 1-2 stop claim and call it a 1.5 stop improvement in noise, then, if we were to remove the 'color filter array' entirely, we would get, on average, a 3 stop improvement in noise. We would have an extremely low noise B&W digital camera.

It seems to be a fact of life with modern technological products that one doesn't hear much about the deficiencies of a particular design until someone discovers a better way of doing things; then, in order to sell the new product, the deficiencies of the old product come to the fore and are widely publicised.

The new Kodak CFA has brought to my attention the possibilities of truly B&W digital photography, which have of course always existed irrespective of this new Kodak sensor. I'm already salivating after applying some simple maths to the situation.

We all know that Foveon type sensors produce higher resolution than Bayer type sensors, pixel for pixel, defining a pixel as a group of one red, blue and green element. This is due to the loss of resolution in the demosaicing and interpolation that takes place with the Bayer type sensor, as well as the presence of an AA filter. Without quibbling, 3.4m Foveon pixels are roughly equivalent to 6m Bayer pixels. This represents a 1.76 increase in resolution, pixel for pixel. Jonathan Wienke claims a 1.5x increase in resolution. Let's compromise on a 1.6x increase.

Now I'm going to propose something that I'm not 100% certain about, but which I think might quite probably be true. A cheap camera like the 10mp Canon 400D could deliver B&W images that could exceed the quality of B&W images from the 1Ds2, if its color filter array and AA filter were removed. In other words, without the demosaicing and interpolation, 10m panchromatic pixels would at least equal, in terms of resolution and luminance, 16m color pixels converted to B&W.

Furthermore, after taking into consideration the 3 stop advantage in noise and sensitivity that would result from removing the CFA and AA filter, a 10mp 400D might well wipe the floor with the 1Ds2 (for B&W images only, of course).

Consider the options that are available with a B&W-only 400D. Not only would we have the resolution of a 1Ds2 color image converted to B&W, but we'd have a usable ISO 25600, as noise-free as ISO 1600 on the current 400D.

Let's re-arrange the possibilities. Instead of going for maximum performance at unheard of ISOs, we could use that advantage to increase pixel count whilst still maintaining the same S/N on a pixel per pixel basis, compared with color filtered pixels. In other words, if we accept the current level of noise at ISO 1600 with a CFA sensor as being reasonable and useful, we can make smaller photodiodes with the same signal-to-noise performance, if they are panchromatic.

How about a 40mp 400D, B&W only, with the same low noise at ISO 1600? Could this be the highest resolving digital camera ever (apart from scanning backs)? Higher resolving than the P45, and all for a cost of $1,000-2,000?
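The sensitivity arithmetic in the quoted post is easy to verify (assuming, as it does, that a Bayer filter blocks about 2/3 of the light at each photosite):

```python
import math

# Removing the filters from half the pixels halves the blocked fraction.
blocked_bayer = 2 / 3
blocked_half_pan = 0.5 * blocked_bayer               # one third blocked

gain = (1 - blocked_half_pan) / (1 - blocked_bayer)  # 2x the light collected
print(f"{math.log2(gain):.1f} stop improvement")     # 1.0 stop, as the post says
```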

Sorry for the late reply, but yes, if you lose the antialiasing filter and the Bayer matrix, you *really do* get all of the benefits mentioned above -- a lot less noise, and hugely increased sharpness. I've been using a Megavision E4 (16 megapixel) monochrome back for about a year now, which actually uses a Kodak chip, so this is actually nothing new for them, though they are sold (including by Megavision) mostly for military/industrial/scientific purposes rather than to the likes of me. The base ISO is 100, which gives about 11 bits of usable signal, maybe a bit more. Sharpness is typically lens limited, to the extent that Megavision don't recommend (cough) Hasselblad because the Zeiss lenses aren't quite up to it. I use a Bronica -- most of their newer lenses are fine, including the zooms, though I have swapped out some of my older MC series lenses for the newer PE versions.

In terms of 'how sharp', it's difficult to describe without showing a print -- I personally think the results are only a hair off those of monochrome conversions from my 150 megapixel Better Light scan back, whilst being far easier to achieve in practice.
Ray


« Reply #69 on: July 08, 2007, 08:12:35 PM »

Quote
Sorry for the late reply, but yes, if you lose the antialiasing filter and the Bayer matrix, you *really do* get all of the benefits mentioned above -- a lot less noise, and hugely increased sharpness.

Interesting! Since I made that comment, whereby I was prepared to trade off the lower noise capability of the panchromatic pixels for a higher pixel count, on the assumption that smaller, more densely packed pixels would generate both more read noise and more shot noise per unit area of sensor, John Sheehy has weighed in with the observation that in general this is not true and that the reverse applies. Smaller, more densely packed pixels, as in P&S cameras such as the FZ50, tend to generate less noise per unit area of sensor.

On this basis, we could have a 40mp B&W APS-C DSLR, which would not only produce much higher resolution than any current Bayer type DSLR, and even better resolution than the latest MFDBs, but would also have the advantage of significantly less noise to boot, on same size prints.

Quote
In terms of 'how sharp', it's difficult to describe without showing a print -- I personally think the results are only a hair off those of monochrome conversions from my 150 megapixel Better Light scan back, whist being far easier to achieve in practice.

Perhaps you could show us 100% (or even 200%) small crops, at maximum jpeg quality, that are easily downloadable   .
John Sheehy


« Reply #70 on: July 08, 2007, 08:44:24 PM »

Quote
This is a very confused situation, John.

Well, there are so many variations possible in using these arrays.  I get the impression, though, that Kodak is just thinking about a direct CFA replacement, with homogeneous microlenses and photosites.  Any variation from that introduces a new set of compromises.

What I want to see is hi-tech microlenses that capture all of the light and somehow guide it into different pixels depending on wavelength; that would increase quantum efficiency without losing color resolution.
John Sheehy


« Reply #71 on: July 08, 2007, 08:53:40 PM »

Quote
John Sheehy has weighed in with the observation that in general this is not true and that the reverse applies. Smaller, more densely packed pixels, as in P&S cameras such as the FZ50, tend to generate less noise per unit area of sensor.

Well, that applies mainly to read noise, as far as maximum potential is concerned.  Shot noise per unit of area should theoretically be slightly lower with big pixels, as the photosites and microlenses can cover a higher percentage of sensor area, but it seems that in current designs, big pixel/sensor cameras are a little sloppy with collecting maximum electrons, while small-pixel/sensor cameras try to capture every photon they can.
st326


« Reply #72 on: July 10, 2007, 01:39:05 AM »

Quote
Interesting! Since I made that comment, whereby I was prepared to trade off the lower noise capability of the panchromatic pixels for a higher pixel count, on the assumption that smaller, more densely packed pixels would generate both more read noise and more shot noise per unit area of sensor, John Sheehy has weighed in with the observation that in general this is not true and that the reverse applies. Smaller, more densely packed pixels, as in P&S cameras such as the FZ50, tend to generate less noise per unit area of sensor.

On this basis, we could have a 40mp B&W APS-C DSLR, which would not only produce much higher resolution than any current Bayer type DSLR, and even better resolution than the latest MFDBs, but would also have the advantage of significantly less noise to boot, on same size prints.
Perhaps you could show us 100% (or even 200%) small crops, at maximum jpeg quality, that are easily downloadable   .

I've just moved house so I'm not in a position to be able to do that immediately (everything is still in boxes), but hopefully I should be able to put something online within a week or so.

As best I can tell, the Megavision's chip is basically the same as the one in the 16 megapixel Hasselblad back (the one on the anniversary edition 500 series), just without the Bayer matrix and the AA filter. Megavision do a colour version of the E4 too, which uses exactly that chip. It's exactly the same 9 micron pitch, 37mm square 4k x 4k format in both cases. I suspect the improved noise performance is for exactly the obvious reasons -- if I put a deep red filter in front of the lens, I get a loss in sensitivity, and a Bayer matrix is basically doing exactly that (for R, G and B) on chip. I quite like the Kodak idea, but I suspect that it will probably only work well for colour images or for panchromatic B&W conversions -- doing a 'red filter' conversion after the fact, for example, would probably look a bit weird, kind of like the usual Bayer 'not quite right' look at 1:1, but worse.

Actually, at 1:1, it's pretty tricky to get a capture that looks really sharp at the pixel level -- it can be done, but if you do *anything* wrong you can really see it. Forget hand-holding, or even using anything other than a really solid tripod. Not *at all* forgiving, more like using large format really, but worth it when it's right. One interesting difference is that the Megavision doesn't do any sharpening (obviously there's no interpolation either), so it's not really a like-for-like comparison with a typical colour RAW conversion. The images do sharpen very nicely, though, mostly I suspect because of the complete absence of interpolation artifacts.

Wearing a different hat (my PhD is in extreme environment electronics, so I've had to study the way things like CCDs behave), you get some interesting effects when you scale a semiconductor process. Generally, bigger transistors mean less noise, for the same reason that bigger transistors are more radiation hard (hence the use of lots of 386 processors on the space station). Nevertheless, as processes have scaled down, other optimisations have been made that have improved both power consumption and noise characteristics. It's not true to say that reducing the size of a transistor improves noise (it does the opposite, without question), but improving a semiconductor process (in various ways) can let you scale the features without affecting performance. If you scale them a bit less than you really need, this will give an improvement in performance, though it's not actually because of the scaling per se.

Photoreceptors on both CCD and CMOS sensors are honking great huge things in comparison with contemporary digital electronics; making finer-pitch sensors isn't really a problem at all, but the smaller the sensor, the fewer photons (and consequently the fewer electrons), so the job of managing noise becomes a lot harder. I suspect that we're seeing some kind of Moore's law working on sensors, but with a slower growth curve due to the difficulty of managing the noise problem and also the much slower rate of advancement in lens sharpness. Actually, my gut feeling is that sensors will hit a wall based on lens sharpness rather than anything else; arguably this has happened already, but I think there is still some hope for more improvement.
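The photon-counting side of that argument can be sketched with illustrative numbers (the full-well figure below is an assumption, not any camera's spec): photons collected scale with photosite area, and photon shot noise is Poisson, so per-pixel SNR at saturation scales with the square root of the area.

```python
import math

# Assumed full well for a 9-micron pitch pixel; purely illustrative.
FULL_WELL_9UM = 60000.0

for pitch_um in (9.0, 4.5, 2.0):
    electrons = FULL_WELL_9UM * (pitch_um / 9.0) ** 2  # photons ~ area
    snr = math.sqrt(electrons)                         # Poisson shot noise limit
    print(f"{pitch_um:>4} um: {electrons:>8.0f} e-, peak SNR {snr:6.1f}")
```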
BJL
Sr. Member

Posts: 5163


« Reply #73 on: July 14, 2007, 08:00:18 AM »

Quote
What I want to see is hi-tech microlenses that capture all of the light and somehow guides light into different pixels depending on wavelength; that would increase quantum efficiency without losing color resolution.
Meaning a trichroic prism over every photosite, a tiny version of the three-way beam splitters used in 3-CCD cameras? That would be a nice trick if anyone can do it!

A reference: http://en.wikipedia.org/wiki/Dichroic_prism

P.S. Perhaps it would already help to use a tiny standard prism at each photosite to partially segregate the light by color, and then have three long thin photodiodes at each site, like
RGB
RGB
RGB
in each site, with the prism splitting horizontally. After all, the current color distinction of CFAs is far from a clean division into three wavelength groups, but instead has considerable overlap in the color sensitivity curves.
« Last Edit: July 14, 2007, 08:05:41 AM by BJL »
Ray


« Reply #74 on: July 15, 2007, 07:48:38 AM »

Okay! Here's my recipe for the camera of the future   .

(1) Lenses made of artificial materials through nanotechnology; transparent materials with a negative refractive index which allow an image sharpness at f64 which is normally associated with f8.

(2) Microlenses designed as dichroic crystals which will precisely split the light into 3 components and precisely direct each component to a photodiode.

Such a camera would allow one to take noise-free shots in smoky nightclubs, without flash and with tremendous DoF. (And other situations, of course. I'm not obsessed with nightclubs and strip joints.)

You heard it first on LL. Get back to me in 10 years   .
« Last Edit: July 15, 2007, 08:42:54 AM by Ray »
BJL


« Reply #75 on: August 09, 2007, 09:19:57 AM »

Nikon has patented our idea on dichroic color splitters at each photosite:
http://patft.uspto.gov/netacgi/nph-Parser?...3&RS=PN/7138663
Ray


« Reply #76 on: August 09, 2007, 07:39:01 PM »

Quote
Nikon has patented our idea on dichroic color splitters at each photosite:
http://patft.uspto.gov/netacgi/nph-Parser?...3&RS=PN/7138663

BJL,
Good detective work, eh?  

I get the impression from the following extract from that patent application, that Nikon's idea would not lend itself well to inexpensive, high pixel count sensors. Is that your reading?

Quote
The first method is a three-color separation dichroic prism (three-CCD) method. In the three-CCD method, incident light having been color-separated by the color separation unit, which includes three prisms, an air layer, and a plurality of dichroic filters (for example, a red reflection filter and a blue reflection filter), is applied to the three CCDs. Japanese Unexamined Patent Application Publication No. Hei 5-168023, for example, discloses the three-CCD method (refer to FIG. 2 of the patent document).

In the second method, that is, a single-CCD method, a color separation filter of primary color or additive complementary color is disposed on each light receiving surface of a CCD. Japanese Unexamined Patent Application Publication No. Hei 6-141327, for example, discloses the single-CCD method (refer to page 2 of the patent document).

The three-CCD color separation unit is large and expensive due to the complex structure of an optical system. The single-CCD color separation unit, on the other hand, has the advantage that it is simple, small, and inexpensive. Thus, a video camera, a digital still camera and the like generally use the single-CCD color separation unit.

However, the single-CCD color separation unit has the following problems.

First, the color separation filters disposed in front of the CCD decrease photon utilization efficiency. Therefore, the sensitivity of the CCD decreases.

Second, the different color (red, green or blue) filter is disposed in front of each light receiving surface of the CCD. The color separation filters are arranged in, for example, well-known Bayer Array. Accordingly, the red, green, and blue light receiving surfaces are spatially separate from one another, so that data outputted from each light receiving element corresponding to each light receiving surface has to be interpolated to actualize color. Therefore, there is a problem that false color, which does not exist in reality, appears.
dilip
Jr. Member

Posts: 61


« Reply #77 on: August 09, 2007, 09:30:02 PM »

Quote
BJL,
Good detective work, eh?   

I get the impression from the following extract from that patent application, that Nikon's idea would not lend itself well to inexpensive, high pixel count sensors. Is that your reading?

That paragraph is in the background section of the patent and isn't describing the inventive sensor. Instead it's describing the kind of setup you get in 3-CCD video cameras (a prism splits the light to three different sensors, one each for R, G, and B). It's relatively rare for a patent to downplay the advance it has made.

From a technical point of view, this is an interesting design, but my guess is that since it was first filed in Japan five years ago, they're still dealing with either fabrication issues, likely related to not getting it tuned to give substantially better quality at the same resolution, or issues that are purely cost-based.

Remember, without a fab line to call their own, or a history of semiconductor manufacturing, Nikon is at the mercy of third parties to implement the design. The sad part is that a company like Sony would probably have no interest in manufacturing this sensor for Nikon unless they could use something similar in their own devices (assuming it really is a much better design). If Nikon isn't willing to share, and this scenario is true, it might take a while to see this thing come to market.

--dilip
Ray
Sr. Member

Posts: 8907


« Reply #78 on: August 10, 2007, 01:57:23 AM »

Quote
That paragraph is in the background section of the patent and isn't describing the inventive sensor. Instead it's describing the kind of setup you get in 3-CCD video cameras (a prism splits the light to three different sensors, one each for R, G, and B). It's relatively rare for a patent to downplay the advance it has made.

You're right. It is. However, the new patent seems just as complicated as the dichroic prism/3 CCD method.

With one method we have three-color separation employing a dichroic prism and three separate CCDs. With the new Nikon patent we have three dichroic mirrors and three 'light receiving surfaces' under a single microlens.

I don't see how this complicated patented Nikon system will lend itself to high-pixel-count, moderately inexpensive sensor production.

We need an expert here to advise us on the benefits and trade-offs of what now appear to be three different systems: the three-layered Foveon pixel, the dichroic prism with three CCDs, and the three dichroic mirrors with three light-receiving surfaces.
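One trade-off can at least be sketched numerically: dichroic elements (prism or mirror) reflect whatever they don't transmit, so the three channel responses can sum to roughly one and nearly every photon lands on some receiving surface, while an absorptive Bayer filter simply discards the out-of-band light at each photosite. Here is a toy spectral model of that idea; the idealized band edges (500 nm and 580 nm) and rectangular responses are my assumptions, not numbers from any of these systems.

```python
import numpy as np

# 1 nm steps across the visible band
wavelengths = np.linspace(400, 700, 301)

def band(lo, hi):
    """Idealized rectangular channel response between lo and hi nm."""
    return ((wavelengths >= lo) & (wavelengths < hi)).astype(float)

# Idealized dichroic split: a blue-reflecting filter peels off the short
# wavelengths, a red-reflecting filter peels off the long ones, and the
# remainder passes through as green.
blue  = band(400, 500)
green = band(500, 580)
red   = band(580, 701)

# Every photon ends up in exactly one channel: the responses sum to 1.
total_dichroic = blue + green + red

# Absorptive CFA: each photosite keeps only its own band and absorbs the
# rest, so averaged over the mosaic roughly a third of the light is used.
cfa_used = (blue.mean() + green.mean() + red.mean()) / 3
```

With these idealized responses, `total_dichroic` is 1.0 at every wavelength, while the Bayer-style arrangement uses only about a third of the incident photons, which matches the sensitivity complaint in the patent's background section. The Foveon-style stacked pixel also uses most of the light, but with heavily overlapping spectral responses, which is a different trade-off (noise in color separation rather than lost photons).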
« Last Edit: August 10, 2007, 01:58:17 AM by Ray »
dilip
Jr. Member

Posts: 61


« Reply #79 on: August 10, 2007, 11:52:29 AM »

Quote
You're right. It is. However, the new patent seems just as complicated as the dichroic prism/3-CCD method.

With one method we have three-color separation employing a dichroic prism and three separate CCDs. With the new Nikon patent we have three dichroic mirrors and three 'light receiving surfaces' under a single microlens.

I don't see how this complicated patented Nikon system will lend itself to high-pixel-count, moderately inexpensive sensor production.

We need an expert here to advise us on the benefits and trade-offs of what now appear to be three different systems: the three-layered Foveon pixel, the dichroic prism with three CCDs, and the three dichroic mirrors with three light-receiving surfaces.


I read the patent last night. I don't think they ever promised that it would result in high pixel counts or moderately inexpensive production. All they have stated is that it is novel. (I don't have a copy in front of me, so I might be wrong.)

Remember, they filed in 2003, so the patent is good until 2023. This sensor may never see the light of day, but one of the future designs someone comes up with might build on it.

--dilip