Author Topic: Son of Why Do Digital Cameras Need Shutters  (Read 3668 times)
John Camp
Sr. Member
****
Posts: 1258
« on: September 21, 2005, 04:43:56 PM »

Suppose you wanted to take a picture of a skier on a snowy slope with a background of evergreen trees, and you want to preserve the tree-needle detail, the snow detail, the clouds in the sky, and the detail in the moving skier. In other words, you want an astonishing amount of DR. If you take this photo at a speed slow enough to pull detail out of the dark pine trees, you blow all the highlights. But the detail in those highlights once existed as information passing through the sensor. If you could count the photons that hit each pixel (which is what the software already does), and if you could assume, as I think you could, that the rate of photon hits on each pixel is constant over the time span of the shot (say 1/125), why couldn't you apply a little math to each of those individual pixels, reduce everything by, say, four stops of exposure, and recover all the blown highlights?

Why would you do this, you ask? To get amazingly wide DR with a single shot at a reasonably high, action-stopping speed would be my answer.

If this can be done, tell me, and let me share the wealth.

JC
Logged
BJL
Sr. Member
Posts: 5141
« Reply #1 on: September 21, 2005, 05:09:52 PM »
I think it can be done, because something a bit like that is already being done in special sensors for surveillance video cameras.

It goes something like this: about every 1/10,000 of a second during an exposure, each photosite checks its electron well for fullness. If a well is close to full, it is read out early, and the duration of the exposure up to that time is noted. Combining the electron count read out with the time taken to reach it allows extrapolation to the count that would have been reached over the full exposure time.

Actually the A/D conversion is then done immediately, at each photosite on chip, so that the time adjustment can also be done right there. Read-out from the sensor is then purely digital, avoiding some noise sources.

This probably requires a fair bit of circuitry per photosite, so it may need relatively large photosites. Sacrificing some well capacity is not so important, since the high DR is obtained by other means, and reduced well capacity does not diminish shadow-handling ability.
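A minimal sketch of the early-read-out scheme described above (all the constants here are made-up assumptions for illustration, not real sensor specifications):

```python
FULL_WELL = 50_000           # electrons; assumed well capacity
EXPOSURE = 1 / 125           # seconds; full exposure time
CHECK_INTERVAL = 1 / 10_000  # seconds between fullness checks

def extrapolated_count(photon_rate):
    """One photosite with early read-out.

    photon_rate: electrons generated per second (assumed constant
    over the exposure, as in the original post). Returns the electron
    count extrapolated to the full exposure time, which can exceed
    FULL_WELL without the well ever actually clipping.
    """
    t = 0.0
    while t < EXPOSURE:
        t = min(t + CHECK_INTERVAL, EXPOSURE)
        electrons = photon_rate * t
        # Well close to full: read out early, note the elapsed time,
        # and scale up to what a full exposure would have collected.
        if electrons >= 0.9 * FULL_WELL:
            return electrons * (EXPOSURE / t)
    return electrons  # dim pixel: normal full-time read-out

# A highlight 16x brighter than the clipping rate is recovered:
bright = extrapolated_count(16 * FULL_WELL / EXPOSURE)
```

The key point the sketch shows: the bright pixel's value is reconstructed from a rate measurement rather than a saturated count.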


How about this for a more mundane approach to "high DR with motion": with frame rates high enough and a tripod, a camera could have a fast "extreme bracketing" mode: one frame at high shutter speed, followed as fast as possible by an exposure many times longer, for the shadows. This might allow decent results from blending. In fact, I can imagine the blending being done on RAW sensor data in-camera some day.
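The blending step for such an "extreme bracketing" mode could look something like this sketch on idealized linear data (the clip threshold and the arrays are arbitrary assumptions):

```python
import numpy as np

def blend_brackets(short_exp, long_exp, time_ratio, clip=0.95):
    """Blend a fast (highlight) and a slow (shadow) exposure.

    short_exp, long_exp: linear sensor values scaled to [0, 1].
    time_ratio: long exposure time divided by short exposure time.
    Where the long exposure has clipped, substitute the short
    exposure scaled up to the same brightness scale.
    """
    short_scaled = short_exp * time_ratio
    return np.where(long_exp >= clip, short_scaled, long_exp)

# Shadows come from the long exposure, highlights from the short one:
short = np.array([0.001, 0.2])   # dark pixel noisy, bright pixel fine
long_ = np.array([0.008, 1.0])   # dark pixel clean, bright pixel clipped
blended = blend_brackets(short, long_, time_ratio=8)
```

The result is a linear array whose range exceeds 1.0, i.e. more DR than either frame alone, ready for tone mapping later.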
Logged
howard smith
Sr. Member
****
Posts: 1237
« Reply #2 on: September 21, 2005, 05:24:27 PM »

I'm not a computer engineer or even close, but this seems like a lot to do in 100 nanoseconds.  Divide that by 10 megapixels, and the time required to sample a pixel, decide what to do, and turn it off or leave it on doesn't leave much time for the other pixels.  I suppose you could use a bunch of processors, but there goes cost, size, battery life and commercial interest.

Commercial interest.  There are plenty of really cool things that never make it to WalMart because not very many people want one.  Why are grand pianos so expensive?  Not much demand to spread the cost over.  They would have to be pretty cheap before I considered one to take up half my living room so I could answer "yes" if someone asked if I owned a grand piano.  Sorta the same reason there aren't many wrist watches with whistles.  Not hard to make, just no demand.
Logged
Anon E. Mouse
Full Member
***
Posts: 197
« Reply #3 on: September 21, 2005, 06:34:59 PM »

Even if this is possible, the resulting image would look very very flat. You cannot gain any more levels than the bit depth will give you and so you need to distribute the luminance values within that limit. Maybe you can gain "information," but at the cost of pleasing, "natural" looking images.
Logged
AJSJones
Sr. Member
****
Posts: 353
« Reply #4 on: September 21, 2005, 06:32:34 PM »

Howard, it's actually 100 microseconds, but it still isn't much time. The problem is getting the data off the chip quickly - like emptying a football stadium, the more exits you have, the more quickly you can a) get the fans out and b) get the fans for the rock concert in. You need more databuses, or pipes, to be able to take rapid-succession shots and get the whole set captured before skier movement messes them up. Recording "time to full" data sets rather than "how full after a specific time" is a nice way to extend the data, but unless you have either extremely fast processors or relatively few pixels, it will be limited by how well you can measure time. For a 1/30 sec exposure (e.g. for video) you would have 0-333 discrete amounts of time if each pixel checked every 1/10,000, and that needs fast communication to whatever is keeping track. Still, that's theoretically at least 8 more bits of brightness information, but only for a small stadium. Who knows, the military may be going down this path.
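The "time to full" arithmetic above checks out as a quick back-of-envelope calculation, using the same numbers:

```python
import math

exposure = 1 / 30        # video-rate exposure, in seconds
check = 1 / 10_000       # interval between well-fullness checks

steps = int(exposure / check)    # distinct "time to full" values: 333
extra_bits = math.log2(steps)    # extra brightness resolution, a bit over 8
```

So a photosite that fills early can report one of 333 distinguishable fill times, which is where the "at least 8 more bits" figure comes from.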

An idea I like is similar to a Canon patent (an LCD element above each pixel is made darker as the electron count in that photosite goes up) - it's a rapid-response photochromic layer above the sensor, like those self-darkening sunglasses, only much faster. Open the shutter for e.g. 1/2 sec to darken the highlight areas of the layer/image, empty the wells, and immediately take the exposure before the layer gets light again. It's an in-place (almost live) contrast mask. (It needs to get light quickly too, to take the next shot, but that's likely if it gets dark quickly.)
If the response of the layer is fast enough, the highlights would be attenuating themselves while the shadow areas were still counting photons productively - this is the self-darkening photosensor concept: it results in a gamma compression of sorts, and you'd expand it back out during "raw" conversion. The shutter speed is the one that would correspond to exposing the shadows correctly, so both methods of getting extended DR suffer from possible motion artifacts.
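A toy simulation of the self-darkening idea; the response law and all constants here are invented purely to illustrate the compressive behavior, not taken from any patent:

```python
def self_darkening_signal(photon_rate, t_total=0.5, k=1e-4, dt=1e-3):
    """Signal accumulated under a layer whose transmission drops
    as the signal that has passed through it grows (hypothetical
    response law: transmission = 1 / (1 + k * signal)).
    """
    signal = 0.0
    for _ in range(int(t_total / dt)):
        transmission = 1.0 / (1.0 + k * signal)  # darkens with exposure
        signal += photon_rate * transmission * dt
    return signal

# Compressive response: doubling the light less than doubles the
# recorded signal, so highlights attenuate themselves while the
# shadows keep counting photons at nearly full transmission.
lo = self_darkening_signal(10_000)
hi = self_darkening_signal(20_000)
```

This is the "gamma compression of sorts" in action; raw conversion would invert the known response curve to expand the tones back out.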

Who knows what actually goes on in the sensor development departments at Canon and DARPA, but it's a fun exercise to consider the possibilities.

Andy
Logged
Jonathan Wienke
Sr. Member
****
Posts: 5759
« Reply #5 on: September 21, 2005, 07:15:56 PM »

Quote
Even if this is possible, the resulting image would look very very flat. You cannot gain any more levels than the bit depth will give you and so you need to distribute the luminance values within that limit. Maybe you can gain "information," but at the cost of pleasing, "natural" looking images.
That's ridiculous, and not even close to accurate. Given the linearity of current sensors, the shadow areas are allocated very few discrete values compared to the highlights. If something like this could be implemented, the camera RAW data could allocate a fairly fixed number of levels per stop, say 400 or so, for 10 stops of DR in a 12-bit output. That's certainly better than the current situation wasting more than 2000 levels on the brightest stop of the highlights. Capturing additional dynamic range is always a good thing. You can easily throw away unneeded DR with a simple levels adjustment and/or local contrast enhancement, but not capturing it in the first place means you've got insoluble problems in post. You can see an example here.
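The linear-encoding point can be made concrete with a quick count, assuming a 12-bit RAW file (the ~400-levels-per-stop allocation is the hypothetical from the post above):

```python
BITS = 12
TOTAL = 2 ** BITS                        # 4096 discrete RAW values
# Linear encoding: each stop down from clipping gets half as many values.
linear = [TOTAL // 2 ** n for n in range(1, 11)]
# -> [2048, 1024, 512, 256, 128, 64, 32, 16, 8, 4]
# A fixed allocation of ~400 levels per stop would cover 10 stops
# in the same 12 bits, instead of starving the shadows:
fixed = [400] * 10
assert sum(fixed) <= TOTAL
```

Half of all available values land in the single brightest stop, while the tenth stop down gets only four, which is exactly the imbalance being complained about.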
Logged

BJL
Sr. Member
****
Posts: 5141
« Reply #6 on: September 21, 2005, 07:26:02 PM »

Quote
Even if this is possible, the resulting image would look very very flat. You cannot gain any more levels than the bit depth will give you and so you need to distribute the luminance values within that limit. Maybe you can gain "information," but at the cost of pleasing, "natural" looking images.
Firstly, the skepticism about whether this is possible is strange: IT IS ALREADY BEING DONE. The original ideas come from Stanford University by the way.

Secondly, I do not understand the idea about bit depth at all. The well-lit pixels that get read out early are read when the well is full, which gives the optimal S/N that the electron wells are capable of at those pixels. The less well-lit pixels get the normal full exposure, so they are the same as with a normal exposure. The bit depth comes later, after A/D conversion; what counts on the sensor is dynamic range and local S/N levels in various parts of the image.

Local S/N and DR should be distinguished. For good low visible noise levels, the local S/N ratio at individual pixels only needs to be about 40:1. The dynamic range is the far larger ratio between the highest recordable signal level at the brightest highlights and the floor noise level in the darkest parts of the scene: that can be far higher, about 4000:1 in the sensors of MF backs.
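Putting those two example ratios in stops makes the gap between local S/N and dynamic range obvious:

```python
import math

local_snr = 40      # adequate per-pixel signal-to-noise ratio
dyn_range = 4000    # brightest recordable signal vs. noise floor (MF back)

snr_stops = math.log2(local_snr)    # roughly 5.3 stops
dr_stops = math.log2(dyn_range)     # roughly 12 stops
```

A sensor can thus span about 12 stops of scene brightness while each individual pixel only ever needs a modest 40:1 cleanliness.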
Logged
Anon E. Mouse
Full Member
***
Posts: 197
« Reply #7 on: September 21, 2005, 08:20:13 PM »

Quote
Quote
Even if this is possible, the resulting image would look very very flat. You cannot gain any more levels than the bit depth will give you and so you need to distribute the luminance values within that limit. Maybe you can gain "information," but at the cost of pleasing, "natural" looking images.
Firstly, the skepticism about whether this is possible is strange: IT IS ALREADY BEING DONE. The original ideas come from Stanford University by the way.

Secondly, I do not understand the idea about bit depth at all. The well-lit pixels that get read out early are read when the well is full, which gives the optimal S/N that the electron wells are capable of at those pixels. The less well-lit pixels get the normal full exposure, so they are the same as with a normal exposure. The bit depth comes later, after A/D conversion; what counts on the sensor is dynamic range and local S/N levels in various parts of the image.

Local S/N and DR should be distinguished. For good low visible noise levels, the local S/N ratio at individual pixels only needs to be about 40:1. The dynamic range is the far larger ratio between the highest recordable signal level at the brightest highlights and the floor noise level in the darkest parts of the scene: that can be far higher, about 4000:1 in the sensors of MF backs.
I am talking about the final reproduction. Taking all that luminance information and compressing it into reproducible tones will produce a very low contrast image. The greater the contrast range (I am assuming high contrast scenes), the worse this gets. Digital cameras are designed to get an image that will look satisfactory when captured. They are not designed to capture the most image detail possible and then leave it up to the photographer to try to figure out a way to process that data.

As far as being possible, the original post did not specify limits to the contrast range to be reproduced. As far as I know, there are limits to any method. Or are you claiming an infinite dynamic range? Is it possible to capture a scene showing the shading of the photosphere of the sun as well as a black cat in deep shade under a pine tree? And what would the reproduction of that look like? There is a lot of technical research into imaging, but it does not always find its way into consumer products for one reason or another.
Logged
John Camp
Sr. Member
****
Posts: 1258
« Reply #8 on: September 21, 2005, 08:48:05 PM »

Some of you are imagining physical ways to do this, and my question was much simpler, and possibly, much stupider, but I just don't know. What I'm saying is that for each photosite, well, pixel, whatever, *there already exists some kind of a photon counter.* That's where we get the picture. It might not be as simple as I imagine, but why couldn't a dedicated calculator chip just apply some statistics to the count -- to each individual well -- and produce a picture that would recapture overexposed portions of the photo? I'm talking about a relatively simple calculation here, not another measurement or sequential exposure; perhaps even as part of a completely different system, like Photoshop? I'm thinking that the reason may be that when the well becomes full (say, 255 on what we see as, in effect, a photon counter) it simply stops counting, and the information gets thrown away and is not retrievable. Is that what happens? The information is actually gone? The well overflows, and the actual count is no longer taken?

JC
Logged
Anon E. Mouse
Full Member
***
Posts: 197
« Reply #9 on: September 21, 2005, 09:00:50 PM »

John, currently there is no way to do what you want to do. The information is simply gone. The only way would be to take two images to record the shadow and highlight regions and combine them in Photoshop.
Logged
AJSJones
Sr. Member
****
Posts: 353
« Reply #10 on: September 21, 2005, 09:08:36 PM »

John, yes, the sites are effectively photon counters.  Once they've filled up to their limit, they cannot tell us anything about what they didn't collect after that.  That's the problem we're trying to come up with ways around.
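In code terms, the saturation problem John asks about looks like this (a toy 8-bit "well", purely illustrative):

```python
FULL_WELL = 255  # toy photon counter that saturates at 8 bits

def read_out(photons_arrived):
    # Once the well is full, the excess simply overflows and is lost;
    # the read-out cannot exceed the well capacity.
    return min(photons_arrived, FULL_WELL)

# Two very different scene brightnesses give the identical reading,
# so no amount of after-the-fact math can tell them apart:
a = read_out(300)
b = read_out(5000)
```

That is why the earlier schemes all work *before* or *during* saturation (early read-out, time-to-full, bracketing): once `min()` has been applied, the original count is unrecoverable.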
Logged
Jonathan Wienke
Sr. Member
****
Posts: 5759
« Reply #11 on: September 22, 2005, 01:03:08 AM »

Quote
I am talking about the final reproduction. Taking all that luminance information and compressing it into reproducible tones will produce a very low contrast image. The greater the contrast range (I am assuming high contrast scenes), the worse this gets.
This is only true if you're completely incompetent at post-processing. There are many ways to increase print contrast; setting a rational white and black point via a levels adjustment, followed by well-executed local contrast enhancement, works very well in the vast majority of cases, even when the original image is a bracketed blend capturing an extremely wide dynamic range. Contrast is easy to increase and enhance; clipping is impossible to completely fix. You're always better off capturing more dynamic range than less; you can always throw it away in post if you decide you don't really need it.
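The "throw away unneeded DR in post" step is essentially a levels remap. A minimal sketch with NumPy (the black and white points here are arbitrary example values):

```python
import numpy as np

def levels(img, black, white):
    """Map the range [black, white] to [0, 1], clipping outside it.

    Discarding everything below `black` and above `white` *increases*
    contrast in the kept range; DR outside it is thrown away on purpose.
    """
    return np.clip((img - black) / (white - black), 0.0, 1.0)

img = np.array([0.05, 0.3, 0.7, 0.98])   # flat-looking wide-DR data
out = levels(img, black=0.1, white=0.9)  # punchier remapped tones
```

This is the easy direction: compressing captured range into print contrast. The reverse, reconstructing values that were clipped at capture, has no such one-liner.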
Logged

BJL
Sr. Member
****
Posts: 5141
« Reply #12 on: September 27, 2005, 07:31:09 PM »

Quote
I am talking about the final reproduction. Taking all that luminace information and compressing into reproducible tones will produce a very low contrast image.
That has been the dilemma for a long time with B&W film in particular; it can record a subject brightness range far greater than the 100:1 or less of B&W printing paper, so contrast and brightness have to be manipulated in various ways, like low contrast papers or more selective manipulation with dodging and burning in. Use of ND grad. filters does similar compression at the time the photo is taken.

The results can be surprisingly good. Perhaps such luminosity compression mimics what happens when the eye scans over a high SBR scene.
Logged