Author Topic: A true 6x7 CMOS low light sensor camera, can it exist?  (Read 11474 times)
JV
Sr. Member
Posts: 520
« Reply #40 on: August 22, 2012, 08:50:43 PM »
Quote
It would make Eric happy.

56x56 would probably make Eric even happier Smiley
Steve Hendrix
Sr. Member
Posts: 1032
« Reply #41 on: August 22, 2012, 09:11:50 PM »
Quote
56x56 would probably make Eric even happier Smiley


Well - he wouldn't be alone there!


Steve Hendrix
Capture Integration

Steve Hendrix
Sales Manager, www.captureintegration.com (e-mail Me)
MFDB: Phase One/Leaf-Mamiya/Hasselblad/Leica/Sinar
TechCam: Alpa/Cambo/Arca Swiss/Sinar
Direct: 404.543.8475
torger
Sr. Member
Posts: 1287
« Reply #42 on: August 23, 2012, 05:08:52 AM »
I guess larger sensors would be all about a subtle difference in look for short depth of field photography.

Resolution- and photon-wise I don't think we need larger than we already have, maybe we can even go smaller as sensors get more and more sensitive and optical design/manufacturing improves.
hjulenissen
Sr. Member
Posts: 1617
« Reply #43 on: August 23, 2012, 06:46:48 AM »
Quote
While there's been a lot of focus on pricing in this thread, I've always seen the exercise of creating larger sensors as similar to CPU production. Why does the next generation processor only go up so much? It's become quite incremental. Where computing power has expanded is by adding cores, but for imaging sensors, this is not without challenges. So - in addition to price, I see the challenge as a technical one (not unrelated to price): successfully manufacturing a large sensor that produces the same level of quality (or higher would be nice, but it cannot be lower) as today's medium format CCD sensors.

Moore's law is about the number of transistors that can fit economically onto one chip: this growth has been a factor of 2x per 18-24 months since the 1960s.

Moore's law does not claim that physically larger CPUs will be less expensive (nor is this seen in practice, I believe).
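As a back-of-envelope check on that growth rate, here is a tiny Python sketch; the 21-month doubling period is the midpoint of the 18-24 month range above, and the starting transistor count is an arbitrary placeholder, not historical data:

```python
# Back-of-envelope Moore's-law growth. 21 months per doubling is the
# midpoint of the 18-24 month range; the starting count is a placeholder.
def transistors(years, start=2_300, months_per_doubling=21):
    doublings = years * 12 / months_per_doubling
    return start * 2 ** doublings

# Over one decade the count grows by 2^(120/21), roughly 52x.
decade_growth = transistors(10) / transistors(0)
```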

CPUs moving into multi-core was (I believe) largely because they hit a clock-wall (increasing the amount of work done per unit time for a single thread became increasingly difficult), and because applications and programming languages finally hit a point where they could actually exploit multi-core (there is a chicken-and-egg debate).

-h
PdF
Sr. Member
Posts: 286
« Reply #44 on: August 23, 2012, 10:36:26 AM »
Today, the question is skewed. There is virtually no camera above 6x4.5 cm. You can find the Mamiya RZ in a small forgotten corner of the Mamiya-Leaf sites. Is it still assembled? How long will the remaining stocks be available?

In addition, the lack of autofocus limits its use. But this is a great machine ...

The Hy6/AFi could certainly support a 6x6-sized back. But it too went to heaven (or hell, depending on your point of view).

Sinar, even in the Feuerthalen era, was limited to 645 with the "m" (also headed for the trash shortly, following the fall of Sinar AG).

One also remembers the large and heavy Fuji GX680, well forgotten today. But it was a very specific camera. It is greatly missed by many photographers who had the chance to use it at the end of the analog era.

Which manufacturer will invest in a larger format, when it would have to develop a whole range of cameras and autofocus lenses for a niche market?

The only possibility is to use a technical camera such as a Sinar, Arca, Linhof, etc. This is not a big deal!

But this is my personal choice: a good big Sinar camera with the best available back for 95% of my professional work. I would appreciate a bigger back format, but it remains a dream.

PdF


« Last Edit: August 23, 2012, 10:38:55 AM by PdF »
BJL
Sr. Member
Posts: 5087
« Reply #45 on: August 23, 2012, 10:40:23 AM »
Quote
Thanks for the info. The Spectral Instruments camera in my linked video seems to be a bit different though: it has an STA1600 CCD, 95x95mm with 9um pixels and 10560x10560 resolution.

http://www.sta-inc.net/update-of-sta1600-10560-x-10560-high-resolution-ccd/

It seems to me that it does not have any gaps in it. It is indeed designed for astronomy applications though.

Torger,

    Thanks for the link. Yes, as has been indicated already in this thread, the ultimate barrier is economic viability, not the technological impossibility of making large, high quality sensors. From what I know, wafer scale sensors like these are made by the "on-wafer stitching" process that I mentioned. In fact Teledyne-Dalsa also offers them in CMOS as well as CCDs, and mentions a bit about its stitching process at http://www.teledynedalsa.com/sensors/products/custom.aspx

These sizes require moving the stepper dozens of times and aligning it to sub-micron accuracy after each move, to etch the sensor in chunks each no bigger than the industry-wide stepper field size limit of 26x33mm. This makes processing each wafer slow and the rate of rejected wafers high, and so the cost extremely high. Given the multi-million dollar cost of the "cameras" (large telescopes) that use those big STA sensors, the price of each sensor could easily be many hundreds of thousands of dollars and still be economically viable.
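For a rough feel of what stitching implies, here is a small Python sketch counting stepper exposures ("stitch fields"); the 26x33mm field limit is the figure quoted above, and the helper name is mine:

```python
import math

# Sketch: how many stepper fields a sensor needs if each exposure is
# capped at the 26x33 mm stepper field limit mentioned above. Field
# orientation is a free choice, so try both and take the minimum.
def stitch_fields(sensor_w_mm, sensor_h_mm, field=(26, 33)):
    fw, fh = field
    a = math.ceil(sensor_w_mm / fw) * math.ceil(sensor_h_mm / fh)
    b = math.ceil(sensor_w_mm / fh) * math.ceil(sensor_h_mm / fw)
    return min(a, b)

# The 95x95 mm STA1600 needs ceil(95/26) * ceil(95/33) = 4 * 3 = 12 fields;
# even a 36x24 mm full-frame sensor exceeds one field and needs 2.
fields_sta1600 = stitch_fields(95, 95)
```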

In fact, the combination of
- the existence of such large sensors and of multiple companies that have been seeking customers for them for some years, and
- the failure of any company to produce a digital back with a 56x46mm ("6x6") or 70x46mm ("7x6") or larger sensor
points strongly to cost/demand/sales volume trade-offs as the barrier.

And one more time (not to torger in particular):
Moore's Law progress is irrelevant to improving the affordability of larger sensors, and indeed is driven largely by the ability to get the job done with ever smaller devices. In other words, the main trend that Moore's Law drives in photography is ever smaller sensors with ever higher resolution.
rcdurston
Full Member
Posts: 156
« Reply #46 on: January 05, 2013, 01:47:28 PM »
Well, it looks like something similar can be made; now the question is how long before it trickles down to affordable territory.
Here

Kolor-Pikker
Jr. Member
Posts: 57
« Reply #47 on: January 06, 2013, 01:11:34 PM »
Quote
Well, it looks like something similar can be made; now the question is how long before it trickles down to affordable territory.
Here

Digital cinema technology is surprisingly far ahead of the still camera world. Any camera worth its salt - the Alexa, the Aaton Delta, the Sony F65 (and presumably the new F55 as well) - has a rock-solid 14 stops of dynamic range, and the new Dragon sensor upgrade for Epics has 20 stops of DR, with a projected ~17 stops usable:

...on a 19MP sensor about 30x15mm in size. What the hell are stills camera manufacturers doing?

P.S. This is not just due to advances in CMOS technology; the Aaton Delta uses a CCD sensor. I've played around with a couple of files from the Delta and its highlights never end.
« Last Edit: January 06, 2013, 01:13:52 PM by Kolor-Pikker »
BJL
Sr. Member
Posts: 5087
« Reply #48 on: January 07, 2013, 01:27:46 PM »
Quote
Well, it looks like something similar can be made; now the question is how long before it trickles down to affordable territory.
Here
As I said in the post above yours, the barrier is entirely cost, not technical possibility. Cine cameras can be rented out for $3000 a day or more, and sell for as much as $200,000, so sensors costing $100,000 each could be economically viable in a high-end digital cine camera. And even then, it might only be about 8MP, which is what 4K is, so rather useless for high-end still photography. The relatively low output resolution needs also give a lot of latitude to increase dynamic range.
hjulenissen
Sr. Member
Posts: 1617
« Reply #49 on: January 08, 2013, 03:47:53 AM »
Quote
...The relatively low output resolution needs also give a lot of latitude to increase dynamic range.
On a per-sensel level, or on a per-image level? That sentence seems to support those who claim that "the images from my D800/5Dmk3/... are noisy and low-dynamic-range due to the small sensels. If only Canon/Nikon/... had chosen to stick to 8MP/16MP/32MP instead, my images would look a lot better."

I was under the impression that such claims were disputed by the experts: for a given sensor area, state-of-the-art still-image sensors could probably be only negligibly improved by increasing the sensel size (decreasing the resolution), even if a low (spatial) resolution output was sufficient?

If you are talking about quoted DR per output pixel for a given sensor area, then increasing the sensel size (or downscaling) should improve DR. But then a 24MP APS-C DSLR should be able to more or less match an equally sized 2k/4k digital movie camera by downscaling appropriately?

-h
« Last Edit: January 08, 2013, 03:52:05 AM by hjulenissen »
ErikKaffehr
Sr. Member
Posts: 6926
« Reply #50 on: January 08, 2013, 05:32:07 AM »
Hi,

On a video camera, resolution is given. So there is little to gain from smaller pixels. Larger pixels are advantageous for DR, much more than for shot noise.

Best regards
Erik

Quote
On a per-sensel level, or on a per-image level? That sentence seems to support those who claim that "the images from my D800/5Dmk3/... are noisy and low-dynamic-range due to the small sensels. If only Canon/Nikon/... had chosen to stick to 8MP/16MP/32MP instead, my images would look a lot better."

I was under the impression that such claims were disputed by the experts: for a given sensor area, state-of-the-art still-image sensors could probably be only negligibly improved by increasing the sensel size (decreasing the resolution), even if a low (spatial) resolution output was sufficient?

If you are talking about quoted DR per output pixel for a given sensor area, then increasing the sensel size (or downscaling) should improve DR. But then a 24MP APS-C DSLR should be able to more or less match an equally sized 2k/4k digital movie camera by downscaling appropriately?

-h

hjulenissen
Sr. Member
Posts: 1617
« Reply #51 on: January 08, 2013, 08:16:25 AM »
Quote
Hi,

On a video camera, resolution is given.
Or a choice between 480i/p, 576i/p, 720p, 1080i/p, 2k, 4k, 8k (?)
Quote
So there is little to gain from smaller pixels.
Well, you certainly have more control over your filter in a digital Lanczos-type downsampling than you do in a Bayer/OLPF sensor operating at the native output resolution.
Quote
Larger pixels are advantageous for DR, much more than for shot noise.
So your opinion is that my Canon 7D (18MP APS-C) would have had more DR if it had an 8MP native sensor (process technology being equal) than it does today, downscaled to that same 8 MP?

-h
Gel
Full Member
Posts: 148
Excuse me while I bust a cap
« Reply #52 on: January 08, 2013, 09:59:32 AM »
Quote
Digital cinema technology is surprisingly far ahead of the still camera world. Any camera worth its salt - the Alexa, the Aaton Delta, the Sony F65 (and presumably the new F55 as well) - has a rock-solid 14 stops of dynamic range, and the new Dragon sensor upgrade for Epics has 20 stops of DR, with a projected ~17 stops usable:

...on a 19MP sensor about 30x15mm in size. What the hell are stills camera manufacturers doing?

P.S. This is not just due to advances in CMOS technology; the Aaton Delta uses a CCD sensor. I've played around with a couple of files from the Delta and its highlights never end.

This.

I'd also like to add: as much as the 1DX cost me 5 grand, I would have paid 10 grand for it if the dynamic range was good enough. That's another 5k for the sensor alone. I'm sure I'm not alone.
« Last Edit: January 08, 2013, 10:02:15 AM by Gel »

LKaven
Sr. Member
Posts: 775
« Reply #53 on: January 08, 2013, 12:25:08 PM »
Somebody is going to have to do something besides wave their hands around the Dragon sensor to explain why it gives a supposed 18-20 stops of dynamic range.  All the cine websites are just quoting PR materials.  I haven't been able to find a credible explanation.  Doesn't this surpass the ideal sensor at 100% QE and 0% noise?

Kolor-Pikker
Jr. Member
Posts: 57
« Reply #54 on: January 08, 2013, 03:05:06 PM »
Quote
Somebody is going to have to do something besides wave their hands around the Dragon sensor to explain why it gives a supposed 18-20 stops of dynamic range. All the cine websites are just quoting PR materials. I haven't been able to find a credible explanation. Doesn't this surpass the ideal sensor at 100% QE and 0% noise?

Not sure what you mean? Audio, for example, is already far ahead of visual technology, given 24-bit AD converters that provide massive dynamic range for capturing and processing. But visual technology is only just reaching 16-bit quantization, and now we have a sensor whose analog properties exceed what can properly be digitized. We're not surpassing anything, only just reaching a new milestone in sensor and A/D design.

In order to actually capture all that DR, the Dragon most likely uses what has already been in use for scientific sensors and the Arri Alexa: dual-gain amps. The way it works is, you have two ADCs per readout channel, one running at a lower native gain level (say, ISO 100) for the highlights, and another running at a higher gain (say, ISO 400) for the shadows; the two signals are then summed like an HDR in-camera. This is not the same as making an HDR by changing the ISO level, because the AD component needs to operate at those levels natively rather than being pushed, and still get maximum DR; the aforementioned cine cameras all have native gain ranging from ISO 800 to 1600.
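A minimal numerical sketch of that merge logic (the gain ratio, clip level, and function name are illustrative assumptions, not the actual Alexa or Dragon pipeline):

```python
import numpy as np

# Sketch of dual-gain readout merging (illustrative numbers, not a real
# camera pipeline). Each sensel is digitized twice: once at low gain
# (preserves highlights) and once at high gain (cleaner shadows); the
# two samples are merged with a clip threshold, like an in-camera HDR.
def merge_dual_gain(low_gain_dn, high_gain_dn, gain_ratio=4.0,
                    high_clip=16383):
    # Scale the high-gain track down to the low-gain track's units.
    high_scaled = high_gain_dn / gain_ratio
    # Use the cleaner high-gain sample wherever it has not clipped.
    return np.where(high_gain_dn < high_clip, high_scaled, low_gain_dn)

low = np.array([100.0, 4000.0])      # low-gain samples (DN)
high = np.array([410.0, 16383.0])    # high-gain samples; second one clipped
merged = merge_dual_gain(low, high)  # -> [102.5, 4000.0]
```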
« Last Edit: January 09, 2013, 04:50:08 AM by Kolor-Pikker »
LKaven
Sr. Member
Posts: 775
« Reply #55 on: January 08, 2013, 05:40:27 PM »
I don't mean this personally towards you, but towards the marketing and PR materials that Red is putting out. 

Quote
Not sure what you mean? Audio, for example, is already far ahead of visual technology, given 24-bit AD converters that provide massive dynamic range for capturing and processing. But visual technology is only just reaching 16-bit quantization, and now we have a sensor whose analog properties exceed what can properly be digitized. We're not surpassing anything, only just reaching a new milestone in sensor and A/D design.

Audio sampling and photon capture are not directly commensurable. The best that audio has managed to do is approximately 21 bits (according to Dan Lavry) while advertising 24. But they are using an electron stream, rather than converting photoelectrons. And there is no parity in signal levels between these two phenomena.

If your point is that A-D converters can convert 21 bits very well, that is true. But in photographic applications, there are not that many electrons to go around. And there is read error, and shot noise in addition. Erik or Emil would know better, but it otherwise seems Red is claiming a sensor that uses or exceeds single-electron ADUs!
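To put rough numbers on the electron budget: single-exposure engineering DR is approximately log2(full-well capacity / read noise). A quick sketch, with illustrative values rather than measurements of any real sensor:

```python
import math

# Back-of-envelope engineering dynamic range for a single exposure:
# DR (stops) ~ log2(full-well capacity / read noise), both in electrons.
# The numbers below are illustrative, not measured values.
def dr_stops(full_well_e, read_noise_e):
    return math.log2(full_well_e / read_noise_e)

# Even a generous 90,000 e- well at 1 e- read noise gives only ~16.5 stops,
# which is why a claimed 18-20 stops from one exposure invites scrutiny.
dr = dr_stops(90_000, 1.0)
```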

Quote
In order to actually capture all that DR, the Dragon most likely uses what has already been in use for scientific sensors and the Arri Alexa: dual-gain amps. The way it works is, you have two ADCs per readout channel, one running at a lower native gain level (say, ISO 100) for the highlights, and another running at a higher gain (say, ISO 400) for the shadows; the two signals are then summed like an HDR in-camera. Since the higher-gain path has lower effective read noise, there is little penalty in deriving the shadows from it. This is not the same as making an HDR by changing the ISO level, because the AD component needs to operate at those levels natively rather than being pushed, and still get maximum DR; the aforementioned cine cameras all have native gain ranging from ISO 800 to 1600.

But gain does not multiply out the amount of information.  And it introduces noise.  And you can't do HDR with gain, only pseudo-HDR. 

The pseudo-step wedge is suggestive, but not genuinely informative.  I'd like to see a frame from the Dragon that has that much DR.  I'd really like a detailed technical explanation.  Perhaps there is some innovation going on here, but it needs an explanation. 

ErikKaffehr
Sr. Member
Posts: 6926
« Reply #56 on: January 08, 2013, 10:13:20 PM »
Hi,

I presumed 1080i/p or 2K.

I presume that DSLRs are not using Lanczos-type downsampling. DSLR video has much less resolution than high-end video; you can check out the Zacuto tests. As I recall they resolve around 760 lines vertically.

Regarding DR: yes, a 7D with 8 MP would have 0.5 EV more DR the way DxO measures it. The reason is that the signal would be doubled (twice the full-well capacity) but the readout noise would be the same. Would you give up 8 MP for 0.5 EV of dynamic range?

If you subsample in software, readout noise will increase when you add samples together.

This is quite visible in DxOMark's plots of DR and tonal range for the P65+. The P65+ switches to Sensor+ at high ISO, which bins four pixels in hardware. You see that DR is bumped up in the plots in screen mode while tonal range is virtually unaffected.
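A sketch of that difference, with illustrative electron counts (not P65+ measurements): hardware binning reads the summed charge once and pays one dose of read noise, while software summing pays sqrt(N) doses because the per-sample read noises add in quadrature.

```python
import math

# Hardware binning vs software summing of n sensels (illustrative values).
# Hardware binning: one readout of the combined charge -> one read noise.
# Software summing: n readouts -> read noise grows by sqrt(n) in quadrature.
def dr_stops(signal_e, read_noise_e):
    return math.log2(signal_e / read_noise_e)

full_well, read_noise, n = 50_000, 10.0, 4

single = dr_stops(full_well, read_noise)                     # per sensel
hw_bin = dr_stops(n * full_well, read_noise)                 # +2 stops
sw_sum = dr_stops(n * full_well, read_noise * math.sqrt(n))  # +1 stop
```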

Best regards
Erik





Quote
Or a choice between 480i/p, 576i/p, 720p, 1080i/p, 2k, 4k, 8k (?) Well, you certainly have more control over your filter in a digital Lanczos-type downsampling than you do in a Bayer/OLPF sensor operating at the native output resolution. So your opinion is that my Canon 7D (18MP APS-C) would have had more DR if it had an 8MP native sensor (process technology being equal) than it does today, downscaled to that same 8 MP?

-h

ErikKaffehr
Sr. Member
Posts: 6926
« Reply #57 on: January 08, 2013, 10:32:15 PM »
Hi Luke,

Thanks for sharing info.

Arri uses a CMOS sensor in the Alexa and, as far as I recall, it had superior DR in the Zacuto tests in 2011.

I read about RED using some kind of HDR and found this on the wiki: "RED EPIC-X can capture HDRx images with a user-selectable 1-3 stops of additional highlight latitude in the 'x' channel."

Here is RED's explanation:
"HOW DOES HDRX WORK?

In a single camera, HDRx simultaneously shoots two image tracks of whatever resolution and frame rate you have chosen. The primary track (A-track) is your normal exposure. The secondary track (X-track) is a "highlight protection" exposure that you determine in the menu settings. You select the amount of highlight protection you need in stops… 2,3,4,5, or 6. Each stop represents a stop less exposure in shutter speed. Example… if you select 2 and your primary exposure is 1/48th sec, the X-track will be two stops less exposure at 1/192 sec. The ISO and aperture remain the same for both exposures.

During recording, the two tracks are "motion-conjoined", meaning there is no gap in time between the two separate exposures. If they were two alternating standard exposures, there would be a time gap between the two tracks that would show up as an undesirable motion artifact. Both tracks (A & X) are stored in a single R3D. Since there are two exposures, the camera is recording double the amount of frames. For example, if you are shooting 24fps, the camera is recording 2-24fps tracks, the data equivalent of 48fps.* After combining the two tracks for playback you see only one 24fps motion stream."

So they say: two frames combined using different exposures.
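The arithmetic RED describes above is simple enough to sketch; the function name is mine, not RED's:

```python
# Sketch of the HDRx arithmetic quoted above: the X-track shutter is the
# A-track shutter shortened by the selected number of protection stops.
def x_track_shutter(a_track_shutter_s, protection_stops):
    return a_track_shutter_s / (2 ** protection_stops)

# RED's own example: 1/48 s primary with 2 stops of protection -> 1/192 s
x = x_track_shutter(1 / 48, 2)
```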

Best regards
Erik
« Last Edit: January 08, 2013, 10:34:41 PM by ErikKaffehr »

LKaven
Sr. Member
Posts: 775
« Reply #58 on: January 09, 2013, 12:09:31 AM »
Hey Erik, this makes a lot more sense.  Kolor-Pikker did not mention dual exposures, and I definitely did not see how gain settings were going to make the difference. 

We've anticipated things like HDR-x for some time now, but thought that it would first be implemented on a still camera platform.  But I also see that to make it work, you need a wicked fast global shutter and readout. 

I think the highlight protection is what makes a big difference for the film industry.  Film highlights have a long shoulder.  So yes, now you can get that level of assurance in a digital capture using more than one exposure. 

As for the dynamic range improvements in a single exposure, I think that there will be room for expansion at the top, but comparatively little at the bottom in the shadows.  But the business of expanding effective well capacity is very different from the business of lowering read noise and raising quantum efficiency. 

hjulenissen
Sr. Member
Posts: 1617
« Reply #59 on: January 09, 2013, 03:42:54 AM »
Quote
Hi,

I presumed 1080i/p or 2K.
Sony is talking about 4k consumer video cameras. Just like with still-image cameras, it seems that spatial resolution is deemed beneficial by customers.
Quote
I presume that DSLRs are not using Lanczos-type downsampling. DSLR video has much less resolution than high end video, you can check out the Zacuto Tests. As I recall they resolve around 760 lines vertically.
Previous-generation DSLRs (such as my 7D and the 5Dmk2) used line-skipping - indeed a very poor downsampling method. I thought that the 5Dmk3 improved upon this by actually reading all sensels in video mode (presumably using some digital downsampling to convert to 1080), and that Panasonic m4/3 cameras did something similar? My point was that if you want a 1080p (or 720p or 4k) stream with the best possible spatial response, you buy a lot of freedom in shaping that response if your sensor samples at a higher resolution than the output format - say, a 4k (Bayer) sensor for a 2k output, using whatever downsampling is deemed optimal.

A "physical downsampling" (using a relatively low-resolution sensor) means integrating light over some effective area, defined by the sensel's active sensing area, micro-lenses, OLPF and the CFA. "Negative light" is impossible, so the effective filtering kernel contains only positive contributions, on top of being highly inflexible. If you use a higher-resolution sensor, the digital processing is only a question of choosing any imaginable filter kernel and implementing it at sufficiently low cost (of course, you would have to find a high-res sensor/analog stage that satisfies cost/quality expectations).
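To illustrate the point about positive-only kernels: a Lanczos-2 kernel takes negative values between its first and second zero crossings, which no non-negative physical aperture can mimic. A small sketch:

```python
import math

# Lanczos-a kernel: sinc(x) * sinc(x/a) for |x| < a, else 0.
# Unlike a physical "integrate light over an area" (box-like) kernel,
# it has negative lobes - impossible with non-negative light integration.
def lanczos(x, a=2):
    if x == 0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

# At x = 1.5 (inside the second lobe) the weight is negative.
w = lanczos(1.5)
```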

I am guessing that the surprisingly high live-broadcast quality I am seeing in SD (576@50i) from my national broadcaster is partly due to "overspecifying" every part of the signal chain until the 576i sampling is the main cause of image degradation (they encode H.264 at something like 2x the rate of the crappy commercial low-end channels).

-h
« Last Edit: January 09, 2013, 03:49:46 AM by hjulenissen »