Author Topic: Uwe Steinmuller of DOP on dynamic range and HDR  (Read 26355 times)
RFPhotography
Guest
« Reply #100 on: January 06, 2011, 12:11:49 PM »

Fill Light is pretty cool.
Recovery may leave room for improvement.

Peter

--

Really nice technique, Peter.  Like it!  Thanks.
hjulenissen
Sr. Member

Posts: 1666


« Reply #101 on: January 06, 2011, 12:54:59 PM »

Supposing we display that HDR image on a monitor which has a DR capability of 16 stops. What would be the purpose if the eye can only encompass a DR of something between 6 1/2 and 10 F stops? Get my point?
The eye
http://en.wikipedia.org/wiki/Adaptation_(eye)
Quote
The human eye can function from very dark to very bright levels of light; its sensing capabilities reach across nine orders of magnitude. This means that the brightest and the darkest light signal that the eye can sense are a factor of roughly one billion apart. However, in any given moment of time, the eye can only sense a contrast ratio of one thousand.[citation needed] What enables the wider reach is that the eye adapts its definition of what is black. The light level that is interpreted as "black" can be shifted across six orders of magnitude—a factor of one million.

The eye takes approximately 20–30 minutes to fully adapt from bright sunlight to complete darkness and become ten thousand to one million times more sensitive than at full daylight. In this process, the eye's perception of color changes as well. However, it takes approximately five minutes for the eye to adapt to bright sunlight from darkness. This is due to cones obtaining more sensitivity when first entering the dark for the first five minutes but the rods take over after five or more minutes.[1]

I was simply striving for the simple goal of reproducing reality. If real scenes can have a large dynamic range, I would like all of that to be perfectly reproduced end-to-end. If we ever get there, we will see if it is worth it. I am certain that some scenes contain a large DR that I cannot reproduce using current non-HDR capture and display, but that I can make sense of when "being there". This suggests to me the potential of a high-DR reproduction system.

-h
Ray
Sr. Member

Posts: 8847


« Reply #102 on: January 07, 2011, 07:09:57 PM »

I was simply striving for the simple goal of reproducing reality. If real scenes can have a large dynamic range, I would like all of that to be perfectly reproduced end-to-end. If we ever get there, we will see if it is worth it. I am certain that some scenes contain a large DR that I cannot reproduce using current non-HDR capture and display, but that I can make sense of when "being there". This suggests to me the potential of a high-DR reproduction system.

To reproduce reality you would need a 3-D monitor or 3-D print for a start. However, the problem of insufficient dynamic range in the reproduction chain has already been solved for static subjects, using exposure bracketing.

Having captured the scene with its full dynamic range, the problem is not the lack of a monitor which can display that full dynamic range. It is the lack of skill and technique in image processing needed to compress that captured dynamic range to something that matches the compressed 'field of view' of the print or monitor, and the compressed dynamic range of the eye, which is reduced to a more or less fixed gaze when viewing that reproduction.

If one compresses the field of view in the reproduction, as any monitor must do when displaying any scene taken with only a moderately wide lens, it is appropriate in the interests of realism to compress the dynamic range, because the eye, when viewing the reproduction, does not have the opportunity to dilate and contract to the same degree as it did when viewing the original scene.
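The compression Ray describes can be sketched in a few lines. The operator below is Reinhard's global x/(1+x) curve, chosen by me purely for illustration; it is not a technique named anywhere in this thread:

```python
def tone_map(luminance):
    """Compress scene luminance (0..inf) into the display range [0, 1)."""
    return luminance / (1.0 + luminance)

scene = [0.01, 1.0, 100.0, 10000.0]        # roughly 20 stops of scene luminance
display = [tone_map(x) for x in scene]

assert all(0.0 <= v < 1.0 for v in display)               # everything fits the display
assert display[0] < display[1] < display[2] < display[3]  # tonal order preserved
```

Any such global curve trades local contrast for range; the more sophisticated tools the thread discusses (Fill Light, Recovery, HDR tone mapping) do this adaptively per region.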

The 5,000,000:1 contrast ratio of a modern plasma screen should be sufficient, even allowing for a little marketing hyperbole  Grin .
Dave Millier
Full Member

Posts: 118


« Reply #103 on: January 08, 2011, 01:26:09 PM »

The answer to my question is probably "You know nothing about how sensors work" but on the slight chance that this is wrong, may I ask this:

Instead of trying to lower noise and other tricks to increase DR, why can't photosites be emptied when they reach saturation and then refilled during a single exposure? As long as you keep count of how many times the reset is done (say, by incrementing a counter), you can calculate the exposure each photosite receives by adding the residual value at readout to the number of resets multiplied by the well capacity. This ought to be capable of dealing effectively with any subject brightness range. And we wouldn't have to worry about shadow noise, because the sensor could easily handle 5 stops of overexposure!

Go on, tell me why this is impossible.
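Dave's reset-and-count scheme is easy to simulate in an idealised form (no noise, no readout timing; the full-well figure below is an arbitrary assumption of mine):

```python
FULL_WELL = 1000  # assumed full-well capacity, in electrons

def expose(photon_rate, duration, full_well=FULL_WELL):
    """Idealised photosite: empty the well on saturation, count the resets.

    Returns (residual_charge, reset_count) at the end of the exposure.
    """
    total = photon_rate * duration              # electrons generated in total
    resets, residual = divmod(total, full_well)
    return residual, resets

def reconstruct(residual, resets, full_well=FULL_WELL):
    """Recover the total exposure: resets * capacity + final reading."""
    return resets * full_well + residual

# A photosite that overflowed six times still reads back correctly:
residual, resets = expose(photon_rate=750, duration=8)   # 6000 e- in total
assert (residual, resets) == (0, 6)
assert reconstruct(residual, resets) == 6000
```

The replies below explain why the real difficulty is not this arithmetic but the per-pixel circuitry and the extra read noise each reset adds.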

My website and photo galleries: http://www.whisperingcat.co.uk/
LKaven
Sr. Member

Posts: 784


« Reply #104 on: January 09, 2011, 12:16:29 AM »

The answer to my question is probably "You know nothing about how sensors work" but on the slight chance that this is wrong, may I ask this:

Instead of trying to lower noise and other tricks to increase DR, why can't photosites be emptied when they reach saturation and then refilled during a single exposure? As long as you keep count of how many times the reset is done (say, by incrementing a counter), you can calculate the exposure each photosite receives by adding the residual value at readout to the number of resets multiplied by the well capacity. This ought to be capable of dealing effectively with any subject brightness range. And we wouldn't have to worry about shadow noise, because the sensor could easily handle 5 stops of overexposure!

Go on, tell me why this is impossible.
I really feel that something /like this/ is coming, and that there is a whole class of dynamic capture methods that could be deployed, including things such as this.

Back-side illuminated sensors afford possibilities for stacking electronics on each photosite without compromising the light-gathering ability of the sensor.  I see the future in more sophisticated local processing on the sensor.

ErikKaffehr
Sr. Member

Posts: 7235


« Reply #105 on: January 09, 2011, 01:05:39 AM »

Comments below,

Erik

However CS5 has offered an impressive solution in HDR-2, with its 'Remove Ghosts' feature. This feature must be very useful for Psychics and Spiritualist Mediums who wish they could stop seeing ghosts.  Grin

:-)

Here's a scene of the living room of a friend I'm visiting over the Christmas/New Year break, and crops of the processed HDR images, with and without ghost removal.

Ah, you don't have snow?!

Now I ask you, are these images surrealistic? Untidy, maybe! But surrealistic?... no!

I'm very surprised and very impressed with the ghost removal result in this particular example. In order to reduce the possibility of movement as much as possible, I used ISO 1600 for these shots. Exposures varied from 1/3000th to 1/10th, and the shadows are still noisy. At the base ISO of the D700, the maximum exposure would have been a full second, improving SNR in the dark parts significantly but probably causing too much blur for the 'remove ghosts' feature to handle.


Ray
Sr. Member

Posts: 8847


« Reply #106 on: January 09, 2011, 05:00:49 AM »

Comments below,

Ah, you don't have snow?!

Erik


No, but we have rain. Lots and lots of it. I thank the Lord I don't have to suffer the cold winters of Europe.  Grin
BJL
Sr. Member

Posts: 5120


« Reply #107 on: January 09, 2011, 10:22:04 PM »

Instead of trying to lower noise and other tricks to increase DR, why can't photosites be emptied when they reach saturation and then refilled during a single exposure. ...
I think it could happen. A few ideas similar to this are being tried, but I have only heard of them being used in some security camera sensors.

One method I know of involves checking each well at various times during the exposure (say after 1/2000s, 1/1000s, 1/500s ...) and reading out just the ones that are close to full, using A/D conversion done at each photosite. The output of each photosite is then adjusted for its exposure time (the time sequence above, doubling at each step, means that the adjustment is just a bit shift.) The downside of that is needing an ADC at each photosite, probably limiting sensors to relatively few, large photosites.

Maybe a variant of the old "frame transfer" global shutter CCD approach could be used, to need fewer ADCs:
- each photosite has a light-masked storage (capacitor) next to the light-sensitive area.
- when one of those intermediate scans detects that a well is at least half full, its charge is moved to the masked storage, the time is noted, and maybe a "drain" is opened on the light-receiving well to stop further accumulation.
- at the end of the exposure, the signal in each light-masked storage is read, A/D converted, and the level scaled to allow for the different exposure times.
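BJL's doubling-interval readout can be sketched as follows. The full-well size and checkpoint times are invented for illustration, and the model ignores noise and pixels that saturate before the first check; the point is that with doubling intervals the exposure-time correction is a pure bit shift:

```python
FULL_WELL = 4096
CHECK_TIMES_US = [500, 1000, 2000, 4000]   # doubling checkpoints, in microseconds

def staged_readout(rate_per_us):
    """Read a well early once it is at least half full; scale by a bit shift."""
    for i, t in enumerate(CHECK_TIMES_US):
        charge = rate_per_us * t
        if charge >= FULL_WELL // 2 or t == CHECK_TIMES_US[-1]:
            shift = len(CHECK_TIMES_US) - 1 - i   # doublings of exposure remaining
            return min(charge, FULL_WELL) << shift

# A bright pixel read at the first check and a dim pixel read at the end
# both come out scaled to the full exposure time:
assert staged_readout(5) == 5 * 4000   # read at 500 us, shifted left by 3
assert staged_readout(1) == 1 * 4000   # read at 4000 us, no shift needed
```

In effect each pixel gets its own shutter speed, with the cost pushed into per-pixel comparators and storage rather than per-pixel ADCs.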
hjulenissen
Sr. Member

Posts: 1666


« Reply #108 on: January 10, 2011, 01:26:52 AM »

To reproduce reality you would need a 3-D monitor or 3-D print for a start.
There may be several aspects of reality reproduction. I don't see why the lack of stereoscopy should be an argument against striving for realistic dynamic range.
Quote
However, the problems of insufficient dynamic range in the reproduction chain has already been solved for static subjects, using exposure bracketing.
Bracketing only solves the capture problem, not the entire reproduction chain.
Quote
Having captured the scene with its full dynamic range, the problem is not the lack of a monitor which can display that full dynamic range, but the lack of skill and technique of image processing in order to compress that captured dynamic range to something that matches the compressed 'field of view' of the print or monitor, and the compressed dynamic range of the eye which is reduced to a 'more or less' fixed gaze when viewing that reproduction.
You are assuming that the monitor covers only a small field of view of the viewer. I don't think that your assumption is generally true. I went to the movies yesterday, and the big screen covered a substantial part of my FOV.
Quote
If one compresses the field of view in the reproduction, as any monitor must do when displaying any scene taken with only a moderately wide lens, it is appropriate in the interests of realism to compress the dynamic range, because the eye, when viewing the reproduction, does not have the opportunity to dilate and contract to the same degree as it did when viewing the original scene.
If that function is needed, it should be applied automatically, in the screen (as that is often the only component that has any idea of how large the viewer's FOV is). Large displays, projectors, or people sitting with their nose up against the monitor/paper should be able to cover close to 180 degrees of their field of view (with some artifacts).
Quote
The 5,000,000:1 contrast ratio of a modern plasma screen should be sufficient, even allowing for a little marketing hyperbole  Grin .
I am sceptical of all marketing.

Plasmas are usually limited to 2 megapixels. That may be an issue for critical applications if the image is to be seen very large.

The black point may be affected by incident light. In other words, your room may have to be painted black to come near the quoted DR. Further, I believe that the maximum brightness of plasmas is not all that high, giving further problems with other light sources, and possibly issues if the absolute brightness of a scene has perceptual relevance.

I have been told that plasmas can produce very black blacks, but that there is a "hole" in the tonal range between the blackest level and the next blackest. Supposedly this is connected to plasmas inherently being PWM devices with limited switching speed: turning a pixel "off" is easy, but turning it "nearly off" means having one bright cycle and many dark cycles, which causes flickering. If they cannot produce a perceptually uniform gray scale from black to white, then all the DR in the world may not make them good for this application.

-h
Ray
Sr. Member

Posts: 8847


« Reply #109 on: January 16, 2011, 09:02:32 AM »

There may be several aspects of reality reproduction. I don't see why the lack of stereoscopy should be an argument against striving for realistic dynamic range.

I've been away for a few days due to slight flooding problems.

I would never argue that because one aspect of reality is lacking, one should not strive to get other aspects right. My point was this: if reproduction of reality is your goal, rather than creating an image to your taste (which, although strongly based on the real scene because it's a photograph, is probably at least slightly fictitious in the manner of its post-processing), then a 3-D image may go further towards creating that sense of reality, of being there, than an extra couple of stops of DR.

Quote
Bracketing only solves the capture problem, not the entire reproduction chain.

Post processing is required for all images whether they are bracketed or not, unless one allows the camera to do the job. And of course, bracketing doesn't always solve the capture problem if there is movement either in the scene or of the camera.

Quote
You are assuming that the monitor covers only a small field of view of the viewer. I don't think that your assumption is generally true. I went to the movies yesterday, and the big screen covered a substantial part of my FOV.

Yes. It's generally true if one is referring to monitors for image processing. Big screens in the cinema, or big projections on the wall, could hardly be described as monitors for image processing. If the big screen in the cinema were to display the full dynamic range of the real scene in order to reproduce reality, you'd no longer be sitting in a darkened room. It would be like sitting in one's lounge at home looking out of a large window onto the prairie, with cowboys and Indians galloping by. Your lounge room would inevitably be very well lit with such a large window.

Quote
If that function is needed, it should be applied automatically, in the screen (as that is often the only component that has any idea of how large the viewer's FOV is). Large displays, projectors, or people sitting with their nose up against the monitor/paper should be able to cover close to 180 degrees of their field of view (with some artifacts).

The monitor can have no idea of the field of view from the viewer's perspective, which is dependent upon the distance between the viewer and the monitor as well as the FoV of the original scene. At the actual scene of a landscape shot, taken with a moderately wide-angle lens, it's necessary to turn one's head to some degree, either to the left and to the right, or up to the sky and down to the foreground, in order to focus clearly on each part of the scene.

When viewing that captured scene on a 24" monitor, or even a 65" TV from an appropriate viewing distance, a slight movement of the eyeballs is all that's required to encompass the entire FoV of the displayed picture. The further you are from the monitor, the less the movement of the eyeballs required.

Quote
Plasmas are usually limited to 2 megapixels. That may be an issue for critical applications if the image is to be seen very large.

Few monitors for image processing boast a higher resolution than 2mp, although I'm thinking of getting a 30" NEC model that claims a resolution of 2560x1600.

The larger the screen, the further away one can view it. The monitor I'm using to write this is a small 17" model which I'm viewing from a distance of around 2 ft. If I were using my 65" Plasma HDTV as a computer monitor, I'd be viewing it from a distance of 2-3 metres. A 2mp image on a small monitor viewed from a close distance will provide no more detail than the same 2mp image viewed on a larger monitor from an appropriately greater distance.
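Ray's equal-detail point can be put in numbers: if viewing distance scales with screen size, each pixel subtends the same angle at the eye. The sizes, distances and 16:9 shape below are illustrative assumptions of mine, not measurements of his actual screens:

```python
import math

def pixel_arcmin(diag_inches, distance_m, h_pixels=1920, aspect=16 / 9):
    """Angle subtended by one pixel, in arc-minutes, for a 16:9 screen."""
    width_m = diag_inches * 0.0254 * aspect / math.hypot(aspect, 1)
    return math.degrees(math.atan2(width_m / h_pixels, distance_m)) * 60

small = pixel_arcmin(17, 0.60)             # 17" screen at about 2 ft
large = pixel_arcmin(65, 0.60 * 65 / 17)   # 65" screen, distance scaled to match

# Distances in the same ratio as the diagonals give the same angular detail:
assert math.isclose(small, large, rel_tol=1e-9)
```

So at proportional viewing distances the two displays deliver the same detail per pixel, exactly as the paragraph above argues.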

Quote
The black point may be affected by incident light. In other words, your room may have to be painted black to come near the quoted DR.

Absolutely correct! I made this point earlier in the thread. The lower the DR of the monitor or display, the darker the room needs to be in order to produce even a semblance of reality. It's why cinemas are darkened rooms, and it's why you need to darken your living room when using a video projector in place of a TV.

Quote
Further, I believe that the maximum brightness of plasmas is not all that high, giving further problems with other light sources, and possibly issues if the absolute brightness of a scene has perceptual relevance.

In my opinion, the maximum brightness of the plasma screen is totally sufficient. If it were any brighter it would cause eye strain. However, the blacks are more detailed on the plasma screen, provided that the viewing conditions are reasonably suitable.

Still images that have been processed in Photoshop, converted to sRGB, downsized then saved as maximum quality jpegs, look remarkably sharp and vibrant on a 65" plasma screen from a distance of about 10ft or 3 metres. There's no sense of any loss of DR, shadow detail or any loss of detail at all. In fact there's an increased sense of realism compared with a much higher resolution print of the same scene, of the same size, viewed from the same distance.

Quote
I have been told that plasmas can produce very black blacks, but that there is a "hole" in the tonal range between the blackest level and the next blackest. Supposedly this is connected to plasmas inherently being PWM devices with limited switching speed: turning a pixel "off" is easy, but turning it "nearly off" means having one bright cycle and many dark cycles, which causes flickering. If they cannot produce a perceptually uniform gray scale from black to white, then all the DR in the world may not make them good for this application.

Who cares if there's a hole between the blackest black and the next blackest when you have a contrast ratio of 2 million to one (and now 5 million to one in the latest models)? Also, the refresh rate of these new Panasonic plasmas is 600Hz, which is the smallest number divisible by all the main video and movie frame rates, such as 24Hz, 25Hz, 30Hz, 50Hz and 60Hz. I've never noticed any flicker in any part of the display of any still image. Not even in the deepest shadows.

I see no problem here.
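As an aside, Ray's 600 Hz arithmetic is easy to verify: 600 is indeed the least common multiple of the common video and movie frame rates, so each of them divides the subfield rate evenly.

```python
from math import lcm  # lcm accepts multiple arguments from Python 3.9 onwards

rates = [24, 25, 30, 50, 60]
assert lcm(*rates) == 600                    # 600 is the LCM of the whole set
assert all(600 % r == 0 for r in rates)      # every rate divides 600 evenly
```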

PierreVandevenne
Sr. Member

Posts: 510


« Reply #110 on: January 16, 2011, 07:17:29 PM »

Instead of trying to lower noise and other tricks to increase DR, why can't photosites be emptied when they reach saturation and then refilled during a single exposure? Go on, tell me why this is impossible.

Not impossible, but there are a lot of issues. Having one ADC per pixel plus additional circuitry is one constraint. There are others at the manufacturing level, the circuitry level, blooming control, non-homogeneity of the individual pixel ADCs, knowing what to do with the photons that arrive while the pixel is being read, etc.

Just keep in mind that DR is basically a signal-to-noise ratio issue: increasing signal or decreasing noise is not a "trick". In fact, what you suggest is increasing signal by increasing well capacity by way of multiple exposures at the pixel level. One of the issues that multiple exposures introduce is multiple read noise. Therefore, the goal of lowering noise remains at least as important as in the single-exposure scenario. But yes, having a virtually higher well capacity on the high side, where read noise matters less, could be beneficial in practice because it would be transparent to the photographer (but so would automatic transparent bracketing).
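Pierre's read-noise point can be made concrete with the standard noise model (the figures below are assumed, not measured from any sensor): independent reads add in quadrature, so N reads multiply the read noise by sqrt(N), while on a bright pixel Poisson shot noise dwarfs both.

```python
import math

READ_NOISE = 3.0   # electrons RMS per read -- an assumed figure

def multi_read_noise(n_reads, read_noise=READ_NOISE):
    """Independent read-noise contributions add in quadrature."""
    return read_noise * math.sqrt(n_reads)

signal = 16_000                   # electrons on a bright, multi-reset pixel
shot_noise = math.sqrt(signal)    # Poisson shot noise, about 126 e-

assert math.isclose(multi_read_noise(4), 6.0)   # 4 reads double the read noise
assert shot_noise > multi_read_noise(16)        # highlights: shot noise dominates
```

This is why extra resets are cheap in the highlights (where the scheme would actually fire) but would be costly if they happened in the shadows.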

They are doing interesting stuff with sensors though.

From light to light on that one for example.
http://www.freepatentsonline.com/y2010/0091128.html
