Author Topic: A true 6x7 CMOS low light sensor camera, can it exist?  (Read 11245 times)
Kolor-Pikker
Jr. Member

Posts: 57


« Reply #60 on: January 09, 2013, 03:57:02 AM »

Quote
Audio sampling and photon capture are not directly commensurable.  The best that audio has managed to do is approximately 21 bits (according to Dan Lavry) while advertising 24.  But they are using an electron stream, rather than converting photoelectrons.  And there is no parity in signal levels between these two phenomena.
Ah, well, I didn't know that; the extent of my knowledge on this subject is that a voltage generated from either a photosite or a microphone membrane gets digitized, and that's it, lol.

Quote
If your point is that A-D converters can convert 21 bits very well, that is true.  But in photographic applications, there are not that many electrons to go around.  And there is read error, and shot noise in addition.  Erik or Emil would know better, but it otherwise seems Red is claiming a sensor that uses or exceeds single electron ADUs!
Audio faces problems too: even assuming perfect equipment performance, dynamic range capture is ultimately limited by the room noise of even an extremely quiet studio.

Quote
But gain does not multiply out the amount of information.  And it introduces noise.  And you can't do HDR with gain, only pseudo-HDR.  

The pseudo-step wedge is suggestive, but not genuinely informative.  I'd like to see a frame from the Dragon that has that much DR.  I'd really like a detailed technical explanation.  Perhaps there is some innovation going on here, but it needs an explanation.  
Honestly, I'm not sure exactly how it works myself; I'm just trying to figure it out by deduction, since this is a technology previously limited to labs. If anything, here it is from the horse's mouth: http://www.arri.com/camera/digital_cameras/technology/arri_imaging_technology/alexas_sensor.html
Edit: It looks like I had the specifics backwards; the page says the exact opposite: the highlights are derived from the lower-gain signal and the shadows from the high-gain signal. Sorry about that, I'll change my previous post.

But as I've said before, it's only pseudo-HDR if the different gain levels are derived from one converter rather than from two converters calibrated to different gain levels. The native ISO of cinema cameras is around 800-1250, yet they still manage to capture such extreme amounts of DR, which suggests that DR is not tied directly to a camera's gain setting.
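
Roughly, I picture the dual-gain combination working something like the little Python sketch below. This is just my own illustration with made-up gain, read-noise, and ADC numbers, not Arri's or Red's actual pipeline; the point is only that the high-gain path supplies the shadows and the low-gain path supplies the highlights before the two are merged into one linear value.

import numpy as np

ADC_BITS = 14                      # assumed converter depth
ADC_MAX = 2**ADC_BITS - 1

def read_out(signal_e, gain_e_per_dn, read_noise_e):
    # Digitize an electron count through one amplifier/ADC path.
    noisy = signal_e + np.random.normal(0.0, read_noise_e, signal_e.shape)
    return np.clip(np.round(noisy / gain_e_per_dn), 0, ADC_MAX)

def combine(hi_dn, lo_dn, gain_hi, gain_lo, switch_dn=0.8 * ADC_MAX):
    # Shadows and midtones from the high-gain path (lower read noise),
    # highlights from the low-gain path (more headroom).
    return np.where(hi_dn < switch_dn, hi_dn * gain_hi, lo_dn * gain_lo)

signal = np.logspace(0, np.log10(30000.0), 10)                # deep shadows up to near clipping, in electrons
hi = read_out(signal, gain_e_per_dn=0.5, read_noise_e=2.0)    # high-gain path
lo = read_out(signal, gain_e_per_dn=4.0, read_noise_e=10.0)   # low-gain path
print(combine(hi, lo, 0.5, 4.0))                              # merged estimate, back in electrons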

As for HDRx, the Red team says that the Dragon makes it obsolete, and it likely won't be supported on that camera. Some members still want the feature kept in because it makes still extraction easier, though.
« Last Edit: January 09, 2013, 04:53:51 AM by Kolor-Pikker » Logged
LKaven
Sr. Member

Posts: 772


« Reply #61 on: January 09, 2013, 08:21:44 AM »

Hi Kolor-Pikker,

If you look at my exchange with Erik just before this, we figured out that there are two separate exposures being made to produce HDRx. And that solves the puzzle. The sensor doesn't deliver that many bits in a single exposure, but in a combination of two. And the added dynamic range comes from the highlight end, not the shadow end.

As I said, it's easier to expand dynamic range into the highlights by effectively expanding the full-well capacity of the sensor than it is to expand dynamic range in the shadows by increasing quantum efficiency and reducing read noise. Even in the Nikon D4, the physical full-well capacity is doubled over its predecessor in a single exposure, making for a base of ISO 100 and a wider dynamic range. Multiple exposures are another way of doing this.
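
The usual back-of-envelope way to see that, with assumed numbers rather than measured D4 figures, is that dynamic range in stops is roughly log2(full well / read noise), so doubling the full well buys exactly one stop, all of it at the top:

import math
# Engineering DR in stops ~ log2(full well / read noise); the 120k/240k and 3 e- figures are assumptions.
print(math.log2(120_000 / 3))   # ~15.3 stops
print(math.log2(240_000 / 3))   # ~16.3 stops: doubling the well adds one stop of highlight headroom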
Logged

Kolor-Pikker
Jr. Member

Posts: 57


« Reply #62 on: January 09, 2013, 10:28:54 AM »

And if you read the last line of my post, you'll see that the puzzle isn't solved, because the Dragon is not blending two exposures via HDRx, which is being dropped from the camera entirely as a feature. HDRx already exists on the Epic, but it has its own problems: since the shutter speed differs between the two exposures, it can create ghosting during motion. It was a neat workaround while it lasted.
This sensor is claimed to capture 20 stops natively, which I don't particularly dismiss, but the real question is how they're reading that data off the sensor. With a 16-bit ADC you're technically limited to 16 stops of dynamic range, so how are they getting another 4?
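
Just to spell out my arithmetic (my own speculation, not anything Red has stated): a single linear 16-bit readout spans 2^16 code values, so about 16 stops, and you'd need something like two readouts at gains a few stops apart, a nonlinear encoding, or a deeper converter to cover 20.

import math
print(math.log2(2**16))                   # 16.0 stops from one linear 16-bit readout
print(math.log2(2**16) + math.log2(16))   # ~20.0 stops if a second path sits 16x (4 stops) lower in gain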
« Last Edit: January 09, 2013, 10:31:33 AM by Kolor-Pikker » Logged
LKaven
Sr. Member

Posts: 772


« Reply #63 on: January 09, 2013, 03:03:34 PM »

Thanks for the correction.  And I apologize if you were referring to highlight DR in the first place rather than shadow DR.

I would still guess that any additional dynamic range is being added at the highlight end through an effective increase in well capacity.  With several sensors already yielding over 50% quantum efficiency, there isn't more than a theoretical stop to be gained at the low end, since going from 50% to a perfect 100% QE would only double the signal.  And with the noise floor as low as it is, we aren't /that/ far from counting photons singly.

But the additional headroom would still be great news for filmmakers.  As they say, like "film DR."  Lots of room at the top.
Logged

ErikKaffehr
Sr. Member

Posts: 6910


« Reply #64 on: January 09, 2013, 03:25:46 PM »

Hi,

They probably use on-chip converters, like Sony.

The main problem I see with 20 stops is that it would need very large pixels with a full-well capacity of about 1e6 electron charges. A normal camera sensor pixel is usually in the 30000-60000 range, so the pixels would need to be much larger than still-camera pixels, on the order of 20 microns. Would they fit on the chip?
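
A back-of-envelope check, assuming FWC scales roughly with pixel area and starting from a typical still-camera pixel (my own assumed numbers):

import math
ref_pitch_um, ref_fwc = 6.0, 50_000.0                     # assumed typical still-camera pixel
target_fwc = 1_000_000.0                                  # about 2^20 electrons for 20 stops at ~1 e- read noise
print(ref_pitch_um * math.sqrt(target_fwc / ref_fwc))     # ~27 micron pitch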

Or could they have extra pixels with ND filters?

Best regards
Erik


Quote
And if you read the last line of my post, you'll see that the puzzle isn't solved, because the Dragon is not blending two exposures via HDRx, which is being dropped from the camera entirely as a feature. HDRx already exists on the Epic, but it has its own problems: since the shutter speed differs between the two exposures, it can create ghosting during motion. It was a neat workaround while it lasted.
This sensor is claimed to capture 20 stops natively, which I don't particularly dismiss, but the real question is how they're reading that data off the sensor. With a 16-bit ADC you're technically limited to 16 stops of dynamic range, so how are they getting another 4?
Logged

LKaven
Sr. Member

Posts: 772


« Reply #65 on: January 09, 2013, 03:42:38 PM »

Quote
They probably use on-chip converters, like Sony.

The main problem I see with 20 stops is that it would need very large pixels with a full-well capacity of about 1e6 electron charges. A normal camera sensor pixel is usually in the 30000-60000 range, so the pixels would need to be much larger than still-camera pixels, on the order of 20 microns. Would they fit on the chip?

The D4 captures 120k photoelectrons at ISO 100, which gives one more stop of headroom.  But there might be other ways to increase "effective capacity." 

I'm interested to see if they use on-chip converters and how well that works.  These things run very hot.  Using live view on the D800 almost doubles the amount of thermal noise to my eye. 
Logged

Kolor-Pikker
Jr. Member

Posts: 57


« Reply #66 on: January 09, 2013, 04:05:06 PM »

Don't on-chip converters reduce pixel fill-factor?

The Aaton Delta uses full-frame CCDs for just this purpose and, as such, has massive highlight headroom.
Logged
ErikKaffehr
Sr. Member

Posts: 6910


« Reply #67 on: January 09, 2013, 10:50:09 PM »

Hi,

They say so. Here are real world SEM pictures of a pair of CMOS sensels.

The main problem with CCDs seems to be readout noise: to get 20 stops of DR you need a full-well capacity (FWC) of about 1,000,000 and a readout noise of 1 electron charge.

The CCDs used in MFDBs used to have readout noise of something like 12-17 electron charges.
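
That readout noise is what kills the DR; a quick sanity check with those numbers (my own arithmetic):

import math
print(math.log2(1_000_000 / 15))   # ~16 stops with a 1e6 e- well but 15 e- read noise
print(math.log2(1_000_000 / 1))    # ~20 stops only if read noise falls to about 1 e-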

I'm somewhat skeptical of the FWC figures given by "sensorgen", as they give different values for cameras using the same chip. The Sony Alpha and the Nikon D3X both use a very similar sensor by Sony. Sensorgen gives FWC = 48975 for the Nikon and FWC = 26843 for the Sony, but the chip geometry is the same. The Nikon D3X makes much better use of the Exmor sensor, but I'm pretty sure the FWC is the same on both.

Best regards
Erik


Quote
Don't on-chip converters reduce pixel fill-factor?

The Aaton Delta uses full-frame CCDs for just this purpose and, as such, has massive highlight headroom.
Logged

Kolor-Pikker
Jr. Member

Posts: 57


« Reply #68 on: January 10, 2013, 05:39:17 AM »

If I remember correctly, the CCDs used in many MFDBs are interline, which is why some backs use microlenses; if they had used full-frame photogates instead, microlenses would make no sense, as the fill factor would already be 100%.

In any case, I'm downloading some raw files from the Aaton to see how the claimed DR holds up on my own computer; at $90k for just the camera, it had better be good  Grin
Logged