Author Topic: Uwe Steinmuller of DOP on dynamic range and HDR  (Read 30322 times)
RFPhotography
« Reply #80 on: January 03, 2011, 06:56:52 AM »

What Luke is talking about goes to the origins of HDR, which are in the CG world.  Oloneo has a feature in their software called HDR Relight that, apparently (I haven't tried it yet myself), allows the user to control individual light sources within an image via multiple blended exposures.  It's not a 32-bit function (yet) but it's an interesting first step.  I put together a 'wish list' for Adobe on my blog, and one of the things I wished for was the ability to selectively tonemap different areas of an image (without tonemapping multiple times and blending the different tonemapped versions after the fact), which would then really start to take us toward the ability to relight a scene.  For HDR to show its full potential we need, at least, monitors that can display the entire brightness range, so we can get a feel for what our true starting point is and where we want to take it from there.
Ray
« Reply #81 on: January 03, 2011, 08:49:40 PM »

I put together a 'wish list' for Adobe on my blog and one of the things I wished for was the ability to selectively tonemap different areas of an image (without tonemapping multiple times and blending different tonemap versions after the fact) which would then really start to take us to the ability to relight a scene. 

Just a couple of points here, Bob. Photoshop already gives us the facility to selectively tonemap different areas of the image. Just use the lasso tool to select an area, feather it significantly, say 100 or even 200 pixels depending on the size of the selection, then make whatever adjustments to brightness, contrast, color etc. you think appropriate.

Part of the skill is in the choice of a suitable degree of feathering so the transition in tonality between the inside and outside of the selection does not appear unnatural.

Quote
For HDR to show its full potential we need, at least, monitors that can display the entire brightness range so we can get a feel for what our true starting point is and where we want to take it from there.

Wouldn't this present enormous problems for proofing? Uwe mentioned in his Part I article that the human eye has a dynamic range of about 10 stops, which seems similar to that of a modern DSLR. The eye is said to have a maximum DR of around 24 stops only when taking into consideration the full range of aperture changes that the eye's pupil is capable of.

Such extreme changes in aperture would be caused, for example, when shifting one's gaze from a bright part of a sky where the sun is partially visible as it peeks through the clouds, to the scene of a black cat sitting in the shade of dense undergrowth in the near foreground.

To capture such a scene with autobracketing, not even a Nikon would be sufficient, with its 9 exposures at 1 EV intervals providing an additional 8 stops of DR.

Of course, with 9 exposures which might vary between 1/3000th and 1/10 of a second, movement in the scene can be an insurmountable problem.
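As a sanity check on the bracketing arithmetic above (a sketch; the function names are my own):

```python
import math

# A bracket of n frames at a fixed EV interval spans (n - 1) * interval
# stops beyond what a single frame covers.
def bracket_extra_stops(frames, ev_step=1.0):
    """Extra stops of scene DR covered beyond one exposure."""
    return (frames - 1) * ev_step

# The shutter speeds at the bracket extremes imply the same span:
def stops_between(t_short, t_long):
    """Stops separating two shutter speeds (in seconds)."""
    return math.log2(t_long / t_short)

print(bracket_extra_stops(9))                  # 9 frames at 1 EV -> 8.0 extra stops
print(round(stops_between(1/3000, 1/10), 1))   # 1/3000s to 1/10s -> ~8.2 stops
```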

However CS5 has offered an impressive solution in HDR-2, with its 'Remove Ghosts' feature. This feature must be very useful for Psychics and Spiritualist Mediums who wish they could stop seeing ghosts.  Grin

Here's a scene of the living room of a friend I'm visiting over the Christmas/New Year break, and crops of the processed HDR images, with and without ghost removal.

Now I ask you, are these images surrealistic? Untidy, maybe! But surrealistic?... no!

I'm very surprised and very impressed with the ghost removal result in this particular example. In order to reduce the possibility of movement as much as possible, I used ISO 1600 for these shots. Exposures varied from 1/3000th to 1/10th, and the shadows are still noisy. At the base ISO of the D700, the maximum exposure would have been a full second, improving SNR in the dark parts significantly but probably causing too much blur for the 'remove ghosts' feature to handle.
Guillermo Luijk
« Reply #82 on: January 04, 2011, 08:03:32 AM »

Ghost removal is nothing to be surprised at, IMO, Ray. In fact what surprises me is that HDR software didn't implement it long ago.

The cause of ghosting is not actually the moving parts in the scene themselves. The cause is the HDR software building the output image from more than one source file in an area where some element was moving. So to avoid ghosting we just need to tell the software: "hey, in this area always take information from a single source file", and the ghosting will be gone. If the software is clever, it will analyse the affected area, choose the most exposed non-clipped source file, and simply take all the information for that part of the scene from it.

Eliminating ghosting doesn't come for free: the de-ghosted areas will usually be noisier. But that price is always lower than ending up with a man whose leg is cut off:

No anti ghosting:



Manual anti ghosting:





One of the advantages of sensors with increasing DR will be that, if we can capture the entire DR of a scene in a single shot, ghosting will be history. Bracketing is a patch for sensors with insufficient DR, and ghost removal is a patch for non-static scenes captured with such sensors.
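Guillermo's rule, pick the most exposed source file that is not clipped in the affected area and use only it there, can be sketched in a few lines. This is a toy illustration of the idea, not any shipping product's algorithm; all names and the clipping threshold are my own:

```python
import numpy as np

def pick_source(stack, mask, clip=0.98):
    """Return the index of the most exposed frame whose raw values inside
    the ghosted region (mask) stay below the clipping threshold.
    stack: list of float arrays in [0, 1], sorted darkest to brightest."""
    for i in reversed(range(len(stack))):       # try brightest frames first
        if stack[i][mask].max() < clip:
            return i
    return 0                                    # fall back to the darkest frame

# Toy stack: three "frames" of a 2x2 scene, one EV apart.
dark   = np.array([[0.10, 0.10], [0.05, 0.20]])
mid    = np.array([[0.20, 0.20], [0.10, 0.40]])
bright = np.array([[0.40, 0.40], [0.20, 0.99]])   # clipped at pixel (1,1)
mask   = np.array([[False, False], [False, True]])  # ghosted area covers that pixel

print(pick_source([dark, mid, bright], mask))   # -> 1: 'bright' clips there, so use 'mid'
```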

Regards
« Last Edit: January 04, 2011, 08:08:41 AM by Guillermo Luijk » Logged

RFPhotography
« Reply #83 on: January 04, 2011, 08:37:22 AM »

Yes, Ray, of course you can use some of the regular editing tools in PS on HDR images via selections.  The term for that is 'soft tonemapping'.  What I'm referring to, however, is the ability to use the HDR Toning tool on a 32-bit image (or 16-bit, for that matter) with a selection active.  That can't be done currently.

Yes, the eye has an 'on the fly' variable aperture that allows it to have a large dynamic range.  I'm not going to get into whether the static dynamic range is 10 stops or 8 stops or 15 stops.  But if a monitor could display something like 15 or 16 stops of brightness, that would be far better than what we have now and would, I'd think, cover a (large) majority of the HDR images being created.  If a camera sensor can reasonably capture 8 stops of useable brightness (talking real-world conditions, not lab/test-bench conditions), which I think is a decent assumption, and you've got a +/-4 bracket, that's 16 stops.  In my own experience, it's pretty rare that I need to go beyond that to capture the full range of a scene.  I think that wouldn't be difficult for the eye/brain combination to process.  When I'm editing, I don't stare at the middle of the screen; I move my eyes around, so the variable aperture of the eye would come into effect and allow the photographer to see the range that the monitor presents.

Beyond that, printers/paper would then need to be upgraded significantly to handle all that brightness range.  Ultimately, I think monitors will get there.  I don't think printers/paper will. 

GL, that's what the deghosting process in CS5 HDR Pro attempts to do.  It lets you choose which exposure you want to use as the base for deghosting.  Other software with 'selective' or 'semi-manual' deghosting does similar things.  I do have to say, though, that there must be some pretty smart people doing the programming for these HDR applications, so if they're having difficulty getting deghosting to work well, maybe it's a little more difficult than you make it out to be.  If it's not, then perhaps you could create a software app that's useable by people and solve everyone's problems.  And make yourself wealthy in the process.   Roll Eyes
Guillermo Luijk
« Reply #84 on: January 04, 2011, 08:57:12 AM »

that you'd have to think there are some pretty smart people doing the programming for these HDR applications so if they're having difficulty getting deghosting processes to work well, maybe it's a little more difficult than you want to make it out to be.  If it's not, then perhaps you could create a software app. that's useable by people and solve everyone's problems.  And make yourself wealthy in the process.   Roll Eyes

I do think that; it's precisely why I'm surprised we needed to wait until 2010 to start seeing any real anti-ghosting features in commercial apps (the anti-ghosting checkbox in Photomatix is just a joke; it only reduces the progressiveness of the blending, which is insufficient and moreover affects the entire image). The only explanation I can find is not that they were having any difficulty, as you say, but simply that they didn't focus on this matter.

Achieving effective anti-ghosting is not difficult at all; I have already done it in my own blending app, and I don't consider myself smarter than anyone. In fact the example above comes from it: that B&W image you see is the automatically generated blending map. The user just needs to detect the conflict area (something which can also be automated by correlating the source images, though IMO it's not worth it) and brush the blending map with the brightest gray participating in the area, as I did above. This forces the most exposed non-clipped source file to be the only one used in that area, and the problem is solved.
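The gray-level blending map described above amounts to indexing the exposure stack per pixel: brushing a region with the brightest participating gray forces the most exposed frame there. A minimal sketch under that reading (my own toy version with hard per-pixel selection; the real map presumably also supports gradual blends):

```python
import numpy as np

def blend_from_map(stack, blend_map):
    """Build the output by indexing the exposure stack with a map.
    stack: list of k aligned frames (float arrays), darkest to brightest.
    blend_map: integer array; value g means 'take this pixel from frame g'.
    Brushing a region with the brightest participating gray therefore
    forces a single (most exposed) source file there."""
    stack = np.stack(stack)                          # shape (k, H, W)
    h, w = blend_map.shape
    rows = np.arange(h)[:, None]                     # broadcastable row indices
    cols = np.arange(w)[None, :]                     # broadcastable column indices
    return stack[blend_map, rows, cols]

dark = np.array([[0.1, 0.1], [0.1, 0.1]])
brt  = np.array([[0.9, 0.9], [0.9, 0.9]])
bmap = np.array([[0, 0], [1, 1]])                    # bottom row "brushed" to frame 1
out  = blend_from_map([dark, brt], bmap)
print(out)   # top row taken from 'dark', bottom row from 'brt'
```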

Regards

PS: BTW, the latest Sony sensor, used in the Pentax K5 and Nikon D7000, can capture in a single shot 11 stops of DR with acceptable noise (SNR=12dB criterion), and this technology translated to an FF sensor would mean no less than 12 effective stops. So your 8-stop figure is out of date with today's technology. The following plot from DxO Mark is very illuminating about the big step Sony has taken in DR with its new sensor (just look at the trend and the relative DR figures; the absolute DR figures are too high for us photographers since they were obtained with the SNR=0dB criterion):

« Last Edit: January 04, 2011, 09:42:56 AM by Guillermo Luijk » Logged

RFPhotography
« Reply #85 on: January 04, 2011, 09:35:12 AM »


PS: BTW, the last Sony sensor used in the Pentax K5 and Nikon D7000 can capture in a single shot 11 stops of DR with acceptable noise (SNR=12dB criteria), and this technology translated to a FF sensor would mean no less than 12 effective stops. So your 8 stops figure is out of date with today's technology.
 

As I said, I'm talking real-world shooting conditions, not a bench test in a lab.  When those types of images start to be available for evaluation and comparison, and when comparisons of those 'real' images to other cameras are done, then I'll start to believe the hype.  Until then..... And that's two cameras.  Others still don't make it that far.
Guillermo Luijk
« Reply #86 on: January 04, 2011, 09:57:35 AM »

As I said, I'm talking real-world shooting conditions, not a bench test in a lab.  When those types of images start to be available for evaluation and comparison, and when comparisons of those 'real' images to other cameras are done, then I'll start to believe the hype.  Until then..... And that's two cameras.  Others still don't make it that far.
Sensor performance is the same in the real world as in the lab, basically because labs are located in the real world. I have used my Canon 350D extensively for shooting interiors, and it never performed worse than when I measured its DR in the lab (if my room at home can be considered a lab). The 350D was an APS-C camera launched at the beginning of 2005, with an effective DR of 8 stops.

I have measured the DR of the Pentax K5 sensor myself, and I firmly believe in what I did and what I got, which is consistent with other measurements.

This real-world capture was 6 stops underexposed (i.e. the first 6 stops of the RAW histogram are empty; the JPEG displays nearly pure black) on a Pentax K5, and produced the following image with a still-acceptable level of noise: click here.

Find here a Nikon D7000 vs Fuji S5, D90 and D700 DR evaluation: Nikon D7000. Comparing DR in a scene. The D7000 performed the same as the Fuji S5 with its Super CCD sensor.

Regards
« Last Edit: January 04, 2011, 10:08:11 AM by Guillermo Luijk » Logged

RFPhotography
« Reply #87 on: January 04, 2011, 10:05:29 AM »

I'm not going to get into a pissing contest with you, GL. 
Guillermo Luijk
« Reply #88 on: January 04, 2011, 10:10:13 AM »

I'm not going to get into a pissing contest with you, GL. 
Wise of you. Next time you decide to be ironic with someone, make sure you have the resources to back it up.

BJL
« Reply #89 on: January 04, 2011, 11:25:12 AM »

BTW, the last Sony sensor used in the Pentax K5 and Nikon D7000 can capture in a single shot 11 stops of DR with acceptable noise (SNR=12dB criteria)
Guillermo,
how do you get that figure of 11 stops? From what I have read, that sensor has a full well capacity of about 30,000e-, so 11 stops down is a signal of about 16e-, and then shot noise is 4e- RMS, limiting SNR to 4:1. Is that figure of 12dB (16:1) computed only with respect to dark noise and read noise, not photon shot noise?

To put it another way, that target of 12dB or 16:1 SNR (which seems reasonable to me for tolerable shadow noise) requires a signal of at least 2^8=256 photons detected even if the noise generated within the camera is negligible, and to have that photon count 11 stops below maximum signal requires the ability to count up to 2^8*2^11=2^19 photons, a bit over 500,000. With a well capacity of about 32K or 2^15, the limit is seven stops above that 12dB threshold.


P. S. [Added later] It just occurred to me that you might be using the strange "power referred" use of dB, so a factor of two in SNR is 6dB. Then the numbers are consistent, with 12dB meaning 4:1 SNR.  But I am not sure how good a local SNR as low as 4:1 can look even in very dark parts of the displayed image.
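The two dB conventions in play can be checked in a couple of lines; this just restates the arithmetic of the posts above (the 16:1 reading and the ~30,000 e- well figure are BJL's own):

```python
import math

snr_linear = 4.0                             # signal four times the noise

amplitude_db = 20 * math.log10(snr_linear)   # 6 dB per doubling -> 12.0 dB
power_db = 10 * math.log10(snr_linear)       # 3 dB per doubling ->  6.0 dB
print(round(amplitude_db, 1), round(power_db, 1))

# Under the 16:1 reading of "12 dB", shot noise alone dictates the well size:
photons_needed = 16 ** 2                     # SNR 16:1 at the shot-noise limit needs 256 e-
well_needed = photons_needed * 2 ** 11       # 11 stops above that: 524,288 e-
print(photons_needed, well_needed)           # far beyond a ~30,000 e- well
```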
« Last Edit: January 04, 2011, 11:48:27 AM by BJL » Logged
RFPhotography
« Reply #90 on: January 04, 2011, 12:33:44 PM »

You do well. Next time you decide to be ironic to someone, make sure you have the needed resources.


It's not a matter of resources.  It's a matter that there's no point in trying to have a reasoned discussion with the hardheaded.
« Last Edit: January 04, 2011, 01:46:38 PM by BobFisher » Logged
hjulenissen
« Reply #91 on: January 04, 2011, 01:29:10 PM »

Regarding "anti-ghosting".

Motion compensation should have more potential than simply skewing the blending in favor of a single source image for some regions.

The problem of "leaves moving slightly in the wind" should be quite different from "camera movement", which again is different from "subject rotating, exposing a different side at different times".

-h
Ray
« Reply #92 on: January 04, 2011, 06:09:39 PM »

But if a monitor could display something like 15 or 16 stops of brightness, that would be far better than what we have now and would, I'd think, cover a (large) majority of the HDR images being created.  create a software app. that's useable by people and solve everyone's problems.  And make yourself wealthy in the process.   Roll Eyes

Bob,
I'm having trouble getting my mind around this. It would seem to me that such a monitor, capable of displaying 15 or 16 stops of DR, would have to be so bright in order to display the brightest parts of an HDR capture, it would dazzle and hurt the eyes, unless the monitor were the size of a wall so that the eye could exclude the brighter parts as it focussed attention on the darker parts.

However, if the monitor were the size of a wall, the room would be so brightly lit that the shadows would appear like midtones.

My Panasonic plasma HDTV claims to have a contrast ratio of 2 million to 1. How many stops of DR is that? About 21?

I see the latest Panasonic models claim a CR of 5,000,000:1, 10/12 bit color depth, and 6,144 steps of gradation. Can anyone decipher these figures for me?
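For what it's worth, a contrast ratio converts to stops with a base-2 logarithm, so (taking the marketing numbers at face value) the claimed figures work out as follows; this is just my arithmetic, not a statement about what the panel actually delivers:

```python
import math

def stops(contrast_ratio):
    """Express a contrast ratio in photographic stops (powers of two)."""
    return math.log2(contrast_ratio)

print(round(stops(2_000_000), 1))  # 2,000,000:1 -> ~20.9 stops, so "about 21" is right
print(round(stops(5_000_000), 1))  # 5,000,000:1 -> ~22.3 stops
print(round(math.log2(6144), 1))   # 6,144 gradation steps -> ~12.6 bits, in line with "10/12 bit"
```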

I think maybe in order to appreciate the maximum dynamic range of one's display, the viewing room needs to be essentially a 'black box', ie all walls, floor and ceiling painted non-reflective matte black.

I just came across a Wikipedia article in which it is claimed the retina has a static contrast ratio of only 6 1/2 stops. Here's the relevant extract.

Quote
The retina has a static contrast ratio of around 100:1 (about 6 f-stops). As soon as the eye moves (saccades) it re-adjusts its exposure both chemically and geometrically by adjusting the iris which regulates the size of the pupil. Initial dark adaptation takes place in approximately four seconds of profound, uninterrupted darkness; full adaptation through adjustments in retinal chemistry (the Purkinje effect) are mostly complete in thirty minutes. Hence, a dynamic contrast ratio of about 1,000,000:1 (about 20 f-stops) is possible. The process is nonlinear and multifaceted, so an interruption by light merely starts the adaptation process over again.


« Last Edit: January 04, 2011, 06:26:57 PM by Ray » Logged
hjulenissen
« Reply #93 on: January 05, 2011, 03:27:57 AM »

Bob,
I'm having trouble getting my mind around this. It would seem to me that such a monitor, capable of displaying 15 or 16 stops of DR, would have to be so bright in order to display the brightest parts of an HDR capture, it would dazzle and hurt the eyes, unless the monitor were the size of a wall so that the eye could exclude the brighter parts as it focussed attention on the darker parts.
If the camera-monitor reproduction chain were able to reproduce the original dynamic range of the scene, and the size of and distance to the monitor were similar to the angle you would have observed at the scene (or using binoculars mimicking the tele lens used, if any), then the stimuli in the room should be similar to "being there". Of course, some real-life scenes have a DR that makes them visually hurtful or not pretty.

If the monitor fills only a part of your field of view, and the rest is filled with windows or walls of a very different brightness, there might be perceptual issues.
Quote
My Panasonic plasma HDTV claims to have a contrast ratio of 2 million to 1. How many stops of DR is that? About 21?

I see the latest Panasonic models claim a CR of 5,000,000:1, 10/12 bit color depth, and 6,144 steps of gradation. Can anyone decipher these figures for me?
I think that plasmas have a fantastic DR due to being able to turn a pixel fully 'off'. I also think that they cannot reproduce very dark grays just above that 'black', because they are pulse-modulated and a brightness slightly above 'off' would be perceived as flickering.
Quote

I think maybe in order to appreciate the maximum dynamic range of one's display, the viewing room needs to be essentially a 'black box', ie all walls, floor and ceiling painted non-reflective matte black.

Would not a display technology that did not reflect anything from the room be enough?

-k
thierrylegros396
« Reply #94 on: January 05, 2011, 05:32:39 AM »

Bob,
I'm having trouble getting my mind around this. It would seem to me that such a monitor, capable of displaying 15 or 16 stops of DR, would have to be so bright in order to display the brightest parts of an HDR capture, it would dazzle and hurt the eyes, unless the monitor were the size of a wall so that the eye could exclude the brighter parts as it focussed attention on the darker parts.

However, if the monitor were the size of a wall, the room would be so brightly lit that the shadows would appear like midtones.

My Panasonic plasma HDTV claims to have a contrast ratio of 2 million to 1. How many stops of DR is that? About 21?

I see the latest Panasonic models claim a CR of 5,000,000:1, 10/12 bit color depth, and 6,144 steps of gradation. Can anyone decipher these figures for me?


Marketing figures, just like the 178° viewing angle!!!  Cheesy Cheesy Wink

Thierry

Peter_DL
« Reply #95 on: January 05, 2011, 04:27:24 PM »


PS: BTW, the last Sony sensor used in the Pentax K5 and Nikon D7000 can capture in a single shot 11 stops of DR with acceptable noise (SNR=12dB criteria), and this technology translated to a FF sensor would mean no less than 12 effective stops. So your 8 stops figure is out of date with today's technology...

Interesting plot, and a lesson about the evolution of capture DR.
So I'd expect that we are going to see more and more sliders for ('HDR') tone mapping in the Raw converter.
Some of it will be truly appreciated.

Peter

..
ErikKaffehr
« Reply #96 on: January 06, 2011, 12:18:52 AM »

Hi,

Lightroom can do a pretty decent job.


Interesting plot, and a lesson about the evolution of capture DR.
So I'd expect that we are going to see more and more sliders for ('HDR') tone mapping in the Raw converter.
Some of it will be truly appreciated.

This is a non-HDR image developed in Lightroom: http://echophoto.smugmug.com/Special-methods/HDR/HDR/13306153_DcZHj#1002864735_dkeci

and this is an HDR image using Merge to HDR in PS CS5:

http://echophoto.smugmug.com/Special-methods/HDR/HDR/13306153_DcZHj#966794997_wt4h6

Best regards
Erik



« Last Edit: January 06, 2011, 12:20:58 AM by ErikKaffehr » Logged

Ray
« Reply #97 on: January 06, 2011, 08:28:14 AM »

If the camera-monitor reproduction chain was able to reproduce the original dynamic range of the scene, and the size/distance to the monitor was similar to the angle you would have observed if you were at the scene (or using binoculars mimicing the tele-lens used, if any), then the stimuli in the room should be similar to "being there". Of course, some real-life scenes have a DR that make them visually hurtfull or not pretty.

That doesn't quite make sense to me in light of the information I have gleaned from the internet regarding the eye's dynamic range. With a fixed gaze, Wikipedia claims 6 1/2 stops; Uwe Steinmuller claims 10 stops. Perhaps the difference in these estimates could be attributed to involuntary, near-microsaccadic eye movements causing slight shifts in pupil aperture and hence variability in dynamic range, or perhaps simply to the general variability of human eyesight.

You must have noticed how your own eye behaves when focussing precisely on a specific part of a monitor, photograph, printed page or newspaper. The angle of view for precise focus and maximum clarity is surprisingly narrow, of the order of 2 degrees I believe. So even when viewing an 8x10" portrait hanging on the wall, from a distance which is not particularly close, say 1 metre, it's impossible to focus precisely on both eyes in the portrait simultaneously without shifting one's own eyes slightly from left to right, just as it's not possible to focus on the entire page of even a small book. One has to move the eyes as one works down the page.

To give you a more graphic example, imagine a shot with a telephoto lens of a bird sitting on a branch, silhouetted against the enlarged, fiery ball of the setting sun.

The brightness of the sun would cause the eye's pupil to contract. The bird, with its extreme backlighting, would appear very dark. The color of its plumage would be undetectable, because the eye cannot simultaneously have a wide and a narrow aperture. Even when the eye's focus is precisely on the bird, through the camera's viewfinder, the brightness of the background sun will ensure the pupil's aperture remains small.

Suppose we decided to bracket exposure so we could see the full color of the bird's plumage in all its detail, say 9 exposures at a 1 EV interval giving us an additional 8 stops of DR, so that after merging to HDR the dynamic range in the image is a good 16 stops.

Suppose we display that HDR image on a monitor which has a DR capability of 16 stops. What would be the purpose if the eye can only encompass a DR of somewhere between 6 1/2 and 10 stops? Get my point?

On reflection, perhaps that's the point you were making all along. There's no point in having a monitor with a greater dynamic range than the eye can encompass within a certain angle of view that 'more or less' takes in the whole monitor, even though precise focus will involve a small amount of eye movement.

Quote
Would not a display technology that did not reflect anything from the room be enough?

Not sure. Here's what Wikipedia has to say on the advantages of glossy screens, assuming the glossy screens have some degree of anti-glare coating.

Quote
In controlled environments, such as darkened rooms, or rooms where all light sources are diffused, glossy displays create more saturated colors, deeper blacks, brighter whites, and are sharper than matte displays. This is why supporters of glossy screens consider these types of displays more appropriate for viewing photographs and watching films. Also, in extremely bright conditions where no direct light is facing the screen, such as outdoors, glossy displays can become more readable than matte displays because they don't disperse the light around the screen (which would render a matte screen washed out).

« Last Edit: January 06, 2011, 08:53:02 AM by Ray » Logged
Peter_DL
« Reply #98 on: January 06, 2011, 11:35:44 AM »

Lightroom can do pretty decent job.

Fill Light is pretty cool.
Recovery may leave room for improvement.

Peter

--
Guillermo Luijk
« Reply #99 on: January 06, 2011, 11:41:55 AM »

Guillermo,
how do you get that figure of 11 stops? From what I have read, that sensor has full well capacity of about 30,000e-, so 11 stops down is a signal of about 16e-, and then shot noise is 4e- RMS, limiting SNR to 4:1. Is that figure of 12dB (16:1) computed only with respect dark noise and read noise, not photon shot noise?

To put it another way, that target of 12dB or 16:1 SNR (which seems reasonable to me for tolerable shadow noise) requires a signal of at least 2^8=256 photons detected even if the noise generated within the camera is negligible, and to have that photon count 11 stops below maximum signal requires the ability to count up to 2^8*2^11=2^19 photons, a bit over 500,000. With a well capacity of about 32K or 2^15, the limit is seven stops above that 12dB threshold.

P. S. [Added later] It just occurred to me that you might be using the strange "power referred" use of dB, so a factor of two in SNR is 6dB. Then the numbers are consistent, with 12dB meaning 4:1 SNR.  But I am not sure how good a local SNR as low as 4:1 can look even in very dark parts of the displayed image.

EDIT: yes, 12dB means linear SNR=4 in my calculations (dB = 20*log(linear)). I created these synthetic noisy images to find out how much noise 12dB and 0dB represent, and found 12dB to be the maximum acceptable. The images were created from noise-free images in a linear colour space; I added Gaussian noise to get the desired StdDev, then converted to non-linear sRGB and applied a contrast curve, emulating what we do when processing our images.

The 11 stops figure is strictly based on measured SNR, including all kinds of noise since it comes from real captures of a ColorChecker card (read noise, photon noise, PRNU, ...). It is not a per-pixel DR, though, but normalised to 12.7Mpx (Canon 5D) of output resolution through simple statistics, for fair comparison with the other 3 cameras.

These are the per-pixel SNR measurements (DR here would be 10.75EV):


And these after normalisation (DR becomes 11.2EV):


No idea how this matches the electronic parameters of the sensor, but these were the SNR measurements I did; have a look at them here.
At Sensorgen.info they calculated the full well capacity of the Pentax K5 to be 47,159 e-, with a read noise of 3.3 e- at base ISO:

At -11EV:
S = 47159 / 2^11 = 23.0 e-
read noise = 3.3 e-
photon noise = 23.0^0.5 = 4.8 e-
total noise = (3.3^2 + 4.8^2)^0.5 = 5.8 e-
SNR = 20*log10(23.0/5.8) = 11.9 dB
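Redoing that arithmetic in code (same inputs as the post; Sensorgen's well-capacity figure taken at face value):

```python
import math

fwc, read_noise, stops_down = 47159.0, 3.3, 11

signal = fwc / 2 ** stops_down                   # ~23.0 e- eleven stops below full well
shot = math.sqrt(signal)                         # photon (shot) noise, ~4.8 e-
total = math.sqrt(read_noise ** 2 + shot ** 2)   # noises add in quadrature, ~5.8 e-
snr_db = 20 * math.log10(signal / total)         # ~11.9 dB, i.e. linear SNR ~4:1

print(round(signal, 1), round(total, 1), round(snr_db, 1))
```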

Interesting plot, and a lesson about the evolution of capture DR.
So I'd expect that we are going to see more and more sliders for ('HDR') tone mapping in the Raw converter.
Some of it will be truly appreciated.

I agree with your comment on RAW developer abilities. In fact I would suggest that an enhanced approach could be desirable in the software: in the same way the RAW developer is aware of the lens's characteristics in order to correct distortion and CA, it could be aware of each particular sensor, estimate the usable captured DR, and calculate optimum settings without so much user intervention. A RAW file from a Pentax K5 at ISO 80 shot of a high-DR scene has much more usable information than the same RAW file from an Olympus camera at ISO 1600. I find the present highlight and shadow recovery slider approach a bit primitive.

Another DR plot to think about: which of the two top-selling brands has taken more care of DR in its cameras over the years? (The plotted lines represent each brand's highest-DR APS-C camera at every point in time.)



Maybe Canon prioritized Mpx over DR too much.

Regards
« Last Edit: January 06, 2011, 12:50:01 PM by Guillermo Luijk » Logged
