Author Topic: New panasonic sensor tech ... sounds intriguing  (Read 1725 times)
Wayne Fox
« on: February 04, 2013, 07:27:45 PM »


http://panasonic.co.jp/corp/news/official.data/data.dir/2013/02/en130204-6/en130204-6.html


langier
« Reply #1 on: February 05, 2013, 12:36:30 AM »

Intriguing.

How many years before we see the chip in a Panasonic camera (not just a P&S, but a m4/3 body) and can actually use the tech or will the brute force of CMOS and Bayer simply leave it an evolutionary dead end for the masses like Beta?

Larry Angier
ASMP, NAPP, ACT, and many more!

Webmaster, RANGE magazine
Editor emeritus, NorCal Quarterly

web--http://www.angier-fox.photoshelter.com
facebook--larry.angier
twitter--#larryangier
google+LarryAngier

Fine_Art
« Reply #2 on: February 05, 2013, 01:36:54 AM »

It is bound to win. Why filter when you can split? Other companies will cross-license this with their own tech.
BJL
« Reply #3 on: February 05, 2013, 11:38:50 AM »

It is intriguing: Bayer-like demosaicing is still needed, but with all the photons counted.

From the fragments I have read about the technical paper in Nature Photonics, the photosites have to be quite small, around one micron, because the splitting depends on having size scales small enough to produce strong diffraction effects. So I guess it will be deployed in camera-phones and such first ... which is probably where most of the revenues are for a sensor maker anyway. Maybe the upsizing to bigger formats like 4/3 will use on-chip binning, at least until read rates can handle the huge photosite counts.
Wayne Fox
« Reply #4 on: February 05, 2013, 02:58:19 PM »

Quote from: BJL
It is intriguing: Bayer-like demosaicing is still needed, but with all the photons counted.

Maybe. Perhaps it's more like binning, with no need to demosaic. With sites that small you could have as many as 20 or 30 sites in the same physical area that a current sensel occupies. So you treat all of those sites as one pixel, derive the color from those, and do not use any information from neighboring pixels to calculate the color?
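A rough sketch of that per-cluster idea in Python (everything here is made up for illustration, not from the paper: the block size, the per-site channel tags, and the random readings are all hypothetical):

```python
import numpy as np

# Treat each 6x6 block of tiny photosites as ONE output pixel. Each site is
# tagged with the colour channel it (roughly) measures; the pixel's colour is
# the per-channel mean over its own block, using no neighbouring blocks.

rng = np.random.default_rng(0)
H, W, B = 12, 12, 6                       # 12x12 sites, 6x6 sites per pixel
signals = rng.random((H, W))              # simulated raw site readings
channels = rng.integers(0, 3, (H, W))     # 0=R, 1=G, 2=B tag per site (made up)

out = np.zeros((H // B, W // B, 3))
for by in range(H // B):
    for bx in range(W // B):
        s = signals[by*B:(by+1)*B, bx*B:(bx+1)*B]
        c = channels[by*B:(by+1)*B, bx*B:(bx+1)*B]
        for ch in range(3):
            vals = s[c == ch]             # all sites of this channel in the block
            out[by, bx, ch] = vals.mean() if vals.size else 0.0

print(out.shape)  # → (2, 2, 3): one RGB triple per 6x6 cluster
```

The point of the sketch is only that no information crosses a cluster boundary, which is exactly the property BJL questions in the next reply.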

BJL
« Reply #5 on: February 05, 2013, 03:45:05 PM »

Quote from: Wayne Fox
Maybe. Perhaps it's more like binning, with no need to demosaic. With sites that small you could have as many as 20 or 30 sites in the same physical area that a current sensel occupies. So you treat all of those sites as one pixel, derive the color from those, and do not use any information from neighboring pixels to calculate the color?
Wayne,

   I suppose you are thinking about how best to use the inherently tiny photodetectors of this diffraction splitting technology in a larger, "SLR sized" sensor by using the output of a cluster of these tiny photodetectors to produce each larger "super-pixel".

Maybe traditional demosaicing could be avoided, but there would still need to be some reconstruction of additive primaries from the mixed signals given by this approach, which are roughly "white plus red", "red", "white plus blue", and "blue".

From what I have heard, once you have to decode the primary colors using data from several photodetectors, it is best not to draw arbitrary boundaries between clusters of detectors grouped into bigger pixels, which leads to not using data from some nearby photodetectors just because they are on the wrong side of the line separating one pixel from the next. Instead it seems best in practice to do demosaicing with data from all sufficiently nearby sites, even going beyond immediate neighbors, though with greatest weight on the data from the nearest sites.
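The reconstruction step can be sketched as a small linear inversion. One big assumption of mine, not stated in the thread or the paper: "white" here means R+G+B, so "white plus red" measures 2R+G+B, and so on. Under that reading, the four mixed signals overdetermine the three primaries:

```python
import numpy as np

# Each row of A maps (R, G, B) to one of the four measured signals BJL lists.
# Least squares inverts the overdetermined 4x3 system; with noise-free inputs
# it recovers the primaries exactly.

A = np.array([
    [2, 1, 1],   # "white plus red"  = (R+G+B) + R
    [1, 0, 0],   # "red"
    [1, 1, 2],   # "white plus blue" = (R+G+B) + B
    [0, 0, 1],   # "blue"
], dtype=float)

true_rgb = np.array([0.6, 0.3, 0.1])      # an arbitrary test colour
measured = A @ true_rgb                   # the four site signals
recovered, *_ = np.linalg.lstsq(A, measured, rcond=None)

print(np.round(recovered, 6))  # → [0.6 0.3 0.1]
```

With real, noisy data the weighting BJL describes would enter here: measurements from many nearby sites would all contribute rows, weighted by distance, rather than drawing a hard boundary around one cluster.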

As an aside, there is overall an excessive fear and disparagement of demosaicing and interpolation from people who do not know much about signal processing ... hence the excessive enthusiasm in some quarters for the Foveon X3 approach, with anti-interpolation extremists wielding their red-blue resolution test charts.
jonathanlung
« Reply #6 on: February 05, 2013, 05:03:26 PM »

Quote from: langier
Intriguing. How many years before we see the chip in a Panasonic camera (not just a P&S, but a m4/3 body) and can actually use the tech, or will the brute force of CMOS and Bayer simply leave it an evolutionary dead end for the masses, like Beta?

I'm still waiting for BSI to appear on a large(-ish) sensor. But this tech doesn't seem like it's at odds with CMOS; at some point, most people won't see a need for increased resolution (if they haven't already), and this technology can take advantage of further miniaturization by using the smaller photodetectors to improve colour/sensitivity. This looks like 3CCD, but cheaper and smaller.
Wayne Fox
« Reply #7 on: February 05, 2013, 08:40:53 PM »

Quote from: BJL on February 05, 2013, 03:45:05 PM (Reply #5, quoted in full above)
Thanks for the insight ... I've never really thought of it that way, and it makes sense. Sometimes simple "logic" isn't as simple as it seems. You are correct that many (like myself) see the Bayer pattern as a necessary evil, perhaps unfairly, because the end results rarely show any serious problems from it.

Perhaps the ultra-fine pixel density of such a sensor would alleviate the need for the AA filter, something most see as at least slightly detrimental?

Good thing I'm not an engineer :)

BJL
« Reply #8 on: February 06, 2013, 08:59:22 AM »

Quote from: Wayne Fox
Perhaps the ultra-fine pixel density of such a sensor would alleviate the need for the AA filter, something most see as at least slightly detrimental?
Yes, that "oversampling" approach has been a hope of mine for years. It seems at least likely that continuing progress in micro-electronics and strategies like on-chip column-parallel signal processing will increase the speed of read-out and processing and reduce read noise to the point that there is no significant downside to having a surfeit of photodetectors. Apart from eliminating the need for optical low-pass ("AA") filters, that would open up some conveniences like massive "digital zoom" on the fly, for video in particular.
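A minimal sketch of that "surfeit of photodetectors" idea (all sizes are illustrative, and the sensor data is simulated): the full-frame image is produced by binning blocks of sites, while "digital zoom" is simply a native-resolution centre crop, with no interpolation at all.

```python
import numpy as np

# Oversampled monochrome frame standing in for a dense photosite array.
sensor = np.random.default_rng(1).random((4000, 6000))

def bin_down(frame, b):
    """Average non-overlapping b x b blocks of photosites into one pixel."""
    h, w = frame.shape
    return frame[:h - h % b, :w - w % b].reshape(h // b, b, w // b, b).mean(axis=(1, 3))

full = bin_down(sensor, 4)   # 4x4 binning -> 1000x1500 full-frame output
h, w = sensor.shape
zoom = sensor[h//2 - 500:h//2 + 500, w//2 - 750:w//2 + 750]  # 4x "zoom" crop

print(full.shape, zoom.shape)  # → (1000, 1500) (1000, 1500)
```

Both outputs have identical pixel counts; the binned frame trades resolution for lower noise, and the crop trades field of view for native detail, which is what makes on-the-fly digital zoom attractive for video.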
Vladimirovich
« Reply #9 on: February 12, 2013, 03:04:48 PM »

The Panasonic paper has been posted in open access at http://www.readcube.com/articles/10.1038/nphoton.2012.345
