Pages: « 1 2 [3] 4 »
Author Topic: The Never Ending Holy Quest for Dynamic Range  (Read 11566 times)
hjulenissen (Sr. Member, Posts: 1683)
« Reply #40 on: October 08, 2011, 04:09:24 PM »
I think that the problem of dynamic range is inherent to the medium. We are trying to put a high-dynamic-range scene into a low-dynamic-range medium (either paper or screen). It reminds me of the '80s videogames with 16 or 256 colours: they were trying to put a Caribbean sunset in the background but, hey, it was 16 colours, how could it look even close? The solution to dynamic range problems will of course be HDR capture/output devices. That's why I agree with the opinion that the first photo looks more natural. That curtain should be bright; it cannot be a medium grey. If film were used, would there be detail in the curtains? Maybe, but any detail would be heavily compressed towards the whites, while with digital it is linearly distributed, so the values fall towards the midtones. We cannot forget that our vision is not linear.
I like to see this as an ideal record producer would (don't get me started on the loudness wars, etc.): you have this magnificent recording of musicians in a quiet studio with, say, 80 dB of dynamic range. You want it to shine on a radio channel with 25 dB of dynamic range. So what do you do? You try to reduce the dynamic range in a way that compromises quality as little as possible.
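The radio analogy can be made concrete with a toy downward compressor. The 80 dB and 25 dB figures come from the post; everything else here (the purely linear dB remapping, the 0 dBFS ceiling) is a simplifying assumption — real broadcast compressors add attack/release dynamics and multiband processing:

```python
# Sketch: squeezing an 80 dB source range into a 25 dB broadcast range
# with a static, linear-in-dB downward compression. Illustrative only.

def compress_db(level_db, src_range=80.0, dst_range=25.0, ceiling_db=0.0):
    """Rescale a level (in dB relative to the ceiling) into the target range."""
    ratio = src_range / dst_range          # here 3.2:1 compression ratio
    return ceiling_db - (ceiling_db - level_db) / ratio

# A pianissimo passage at -70 dBFS is lifted to roughly -21.9 dBFS,
# while material at the ceiling stays untouched:
print(compress_db(-70.0), compress_db(0.0))
```

The quiet material is raised towards the loud material, which is exactly the trade-off the post describes: audibility on a noisy channel in exchange for dynamic contrast.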

Capturing the scene with an LDR camera does not avoid the problem, it simply removes options. Instead of being able to tweak curves and local/global adjustments from the raw file, one is applying a kind of LDR tonemapping in-camera (clipping highlights and adding a noise floor).

The core of the issue is: if a scene contains a lot of DR, how do we accurately (and subjectively pleasingly) recreate it on limited-DR paper or a display?
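As a minimal illustration of that scene-to-display problem, here is one common global tonemapping curve, the Reinhard operator L/(1+L). This is just one illustrative choice of curve, not anything proposed in the thread; the 0.18 mid-grey value is a conventional assumption:

```python
# Sketch of global tone mapping: unbounded scene-referred linear
# luminance is squeezed into [0, 1) display range without hard clipping.

def reinhard(luminance):
    return luminance / (1.0 + luminance)

# A highlight 16x brighter than mid-grey still lands below display white,
# compressed rather than clipped:
mid = reinhard(0.18)        # ~0.153
hi  = reinhard(0.18 * 16)   # ~0.742
print(mid, hi)
```

Every positive input maps to a distinct output below 1.0, which is the "no hard clip" property being discussed; the cost is reduced highlight contrast.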

-h
BartvanderWolf (Sr. Member, Posts: 3759)
« Reply #41 on: October 08, 2011, 07:07:23 PM »
The core of the issue is: if a scene contains a lot of DR, how do we accurately (and subjectively pleasingly) recreate it on limited-DR paper or a display?

By capturing it at high resolution, and subsequently by clever (perceptually pleasing) mapping to the output DR.

Cheers,
Bart
hjulenissen (Sr. Member, Posts: 1683)
« Reply #42 on: October 09, 2011, 04:19:02 AM »
By capturing it at high resolution, and subsequently by clever (perceptually pleasing) mapping to the output DR.
Given that today's capture technologies do not allow very accurate and very large DR at the same time, we have to make compromises anyway.

Using any digital camera in single-shot mode will give you a somewhat limited-DR capture, but also the possibility to represent colors and the tone curve fairly "accurately" within those limits, since the behaviour is well defined and can be "corrected" digitally.

Using a digital camera with several exposures, you can capture a larger scene DR (limited by the optics' DR), but it only works well for static scenes.
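A toy sketch of that multi-exposure idea, assuming a perfectly linear sensor and a static scene. The normalised clip level and the simple discard-clipped-samples weighting are simplifying assumptions; real HDR assembly uses smoother weighting, noise models and alignment:

```python
# Single-pixel sketch of multi-exposure radiance assembly: scale each
# bracketed sample back to a common linear scale and average, ignoring
# clipped samples (which carry no radiance information).

CLIP = 1.0  # sensor full scale in normalised linear units (assumption)

def merge_pixel(samples):
    """samples: list of (recorded_value, relative_exposure) pairs."""
    num = den = 0.0
    for value, exposure in samples:
        if value >= CLIP:              # clipped: skip this sample
            continue
        num += value / exposure        # estimate of scene radiance
        den += 1
    return num / den if den else CLIP  # all clipped: best guess is full scale

# A pixel clipped in the long exposure, but valid in the shorter ones:
print(merge_pixel([(1.0, 4.0), (0.9, 2.0), (0.45, 1.0)]))  # ~0.45
```

The two unclipped samples agree on the scene radiance once exposure is divided out, which is why the merged file can exceed any single frame's DR — but only if the scene held still between frames.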

My impression is that film offers one representation of large scene DR -> low print DR without hard clipping. But the user has limited freedom in choosing that transform, poor predictability, and limited accuracy (please don't shoot me if you disagree).

It is difficult to say that any one of these approaches is the "end-all" practical solution to the problem. Multi-shot digital clearly gives more information about (static) scenes, which can in principle be used in later algorithms to simulate the other two. From a DSP perspective that is perhaps "optimal", but limiting oneself to static scenes is a pretty harsh limitation.

-h
ixania2 (Newbie, Posts: 42)
« Reply #43 on: October 09, 2011, 07:54:13 AM »
Given that today's capture technologies do not allow very accurate and very large DR at the same time, we have to make compromises anyway.

Using any digital camera in single-shot mode will give you a somewhat limited-DR capture...


Nevertheless, I like Cartier-Bresson's unsharp pictures with white skies better than today's shot with a D3s.
Why, oh why?
mediumcool (Sr. Member, Posts: 676)
« Reply #44 on: October 09, 2011, 07:57:58 AM »
And of course film has always had the H&D-style compression at both ends of the tonal spectrum to contain the vast dynamic range of real-world scenes within the much narrower range of printing papers, manipulated to great effect by Zone System practitioners such as Ansel Adams and Fred Picker. Some digital cameras mimic a filmic slow rolloff, often selectable; others clip far more readily with extra exposure. I do not expose to the right, especially with my Aptus, because mid-tones and shadows can be lifted, but I have never found the fabled latitude in highlights that is so often touted.
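The difference between a filmic shoulder and a hard digital clip can be sketched numerically. The tanh curve below is an arbitrary stand-in for a real H&D characteristic, chosen purely for illustration, not a model of any particular emulsion:

```python
import math

# Both functions map linear exposure to [0, 1] output. The hard clip
# discards everything above full scale; the smooth shoulder keeps
# compressed but distinct highlight values.

def hard_clip(x):
    return min(x, 1.0)

def film_shoulder(x):
    # Smooth rolloff approaching 1.0 asymptotically (tanh shoulder).
    return math.tanh(x)

# Two highlights one stop apart, both above the clip point:
a, b = 1.5, 3.0
print(hard_clip(a), hard_clip(b))          # identical: detail gone
print(film_shoulder(a), film_shoulder(b))  # still distinct values
```

This is the "slow rolloff" behaviour in a nutshell: scene values past the nominal white point still produce discernibly different densities instead of a solid block of paper white.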

The thing to remember is that tonal compression and even posterisation can occur when a huge range is squeezed non-linearly into a smaller one; we have to be clever about how we do it—I often use judicious dodging to open up a particular area and maintain/increase local contrast rather than push the whole histogram. Particularly noticeable effects can result from curves adjustments that push the curve vertically; I prefer Levels to Curves for this very reason.


FaceBook facebook.com/ian.goss.39   www.mlkshk.com/user/mediumcool
mediumcool (Sr. Member, Posts: 676)
« Reply #45 on: October 09, 2011, 08:16:36 AM »
I really meant "horizontally" there in the second para; a more vertical curve indicates a very rapid tonal transition. Mea culpa—late at night.
hjulenissen (Sr. Member, Posts: 1683)
« Reply #46 on: October 09, 2011, 09:03:10 AM »
And of course film has always had the H&D-style compression at both ends of the tonal spectrum to contain the vast dynamic range of real-world scenes within the much narrower range of printing papers
I'm trying my best not to turn this into a "my technology is better than yours" debate, but rather to put all capture/processing technologies into a common framework.

I tried to include that in my list as just one of several options for "mapping" real scene intensity to paper, with pros and cons. The pro is that a large range of scene intensities maps to discernible media values ("high dynamic range"). The con is that the mapping is complicated and hard to predict/invert, meaning that for a given film/process you are somewhat "locked" into one response type.

When doing exposure synthesis (HDR), I believe that the assembled file can represent the physical scene with greater accuracy (if the scene is static), that pretty much any tone-curve look (including film) can be emulated from physical measurements/models instead of manual trial and error, and that this non-linear tonemapping space seems to contain a few possibilities that look "better" to some people, for some scenes, than the characteristics typically found with film.
Quote
The thing to remember is that tonal compression and even posterisation can occur when a huge range is squeezed non-linearly into a smaller one; we have to be clever about how we do it—I often use judicious dodging to open up a particular area and maintain/increase local contrast rather than push the whole histogram. Particularly noticeable effects can result from curves adjustments that push the curve vertically; I prefer Levels to Curves for this very reason.
I don't see how posterization can result from _compressing_ the input; it should only happen when _expanding_ the input. If a narrow range of input values is spread across a large range of output values (expansion), any steps in the source will be made more visible. If a large range of input values is squeezed into a small range of output values (compression), the output tonality should have ample information (within the range in question) to be smooth.
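A quick numeric sketch of this expansion-versus-compression point, using hypothetical 8-bit values (nothing from the thread):

```python
# Expanding a narrow quantised range opens gaps between output codes
# (banding); compressing a wide range merges codes but leaves no gaps.

def stretch(levels, lo, hi):
    """Map the input range [lo, hi] onto [0, 255] (expansion when hi-lo < 255)."""
    return [round((v - lo) * 255 / (hi - lo)) for v in levels]

def squeeze(levels, out_levels=32):
    """Compress the 0-255 range into out_levels output codes."""
    return [round(v * (out_levels - 1) / 255) for v in levels]

shadows = list(range(10, 20))       # 10 adjacent 8-bit shadow values
print(stretch(shadows, 10, 19))     # steps of ~28 codes -> visible banding
print(squeeze(shadows))             # many-to-one, no missing codes
```

After the stretch, adjacent input values land roughly 28 output codes apart, which is exactly the "steps made more visible" effect; the squeeze only merges neighbours, so the output stays smooth within its reduced range.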

-h
mediumcool (Sr. Member, Posts: 676)
« Reply #47 on: October 09, 2011, 09:44:51 AM »
Without going through the previous post point-by-point, I’ll offer this:

Tonal compression is [almost] always required, and it has long been the case that compression at the tonal extremes is considered more desirable than in the mid-tones (though how much this stance has been influenced by the inherent sensitivity characteristics of silver halide films and papers is open to debate). I recall reading many years ago (as an amateur since the early ’60s, a student for three years in a very technical photographic course, in commercial photography since the mid-’70s, and more recently a user of computers for imaging for over twenty years, I do have some form) that compression of highlights and shadows is visually preferable to that of mid-tones. It’s all a compromise, of course.

On posterisation, if I understand your point correctly, I was [perhaps not effectively] trying to say that compensation in one part of the tonal range can lead to problems in another area. Particularly with curves; when I taught graphics as part of a multimedia course, every year I advised students to use Levels rather than Curves in Photoshop, for the reason that it is harder to cock up Levels (and the histogram is very useful too, of course).

Talk of HDR is moot if the tonal range thus produced is still wider than the medium it will be delivered on. I find most so-called “HDR” to be ugly, heavy and over-saturated in the darker areas. Sometimes you need a cloudy-bright day to get a pleasing tonal range where detail and texture are maintained at the extremes. Or make an aesthetic decision to lose some detail to reinforce a mood; a good example of H/L blowout here.
fredjeang (Guest)
« Reply #48 on: October 09, 2011, 11:13:48 AM »

It's not surprising that several shots at different exposures mixed by automatic processing produce ugly results. If we don't work with zones, I don't see how this merging of images could give controlled results.

This is something I've never quite understood in still photography, especially when digital PP is 60% of the work. Would you go into a Da Vinci suite and press the "auto-grade" button?

I've recently been working with some Alexa files, and those have the widest DR I have ever seen from any digital capture device (but I'm not pretending I've seen it all). When using keyers in Nuke or Autodesk, you can see very well on the scopes what is happening between material from the Alexa and material from the 5D2, for example. When I mask and work on a certain zone, I first work on the luminance key, and then, and only then, on the colors. The fact is that the 5D2 abruptly degrades the possibilities of precise control over all the available DR (it's 4:2:0), which results in compromises within the alpha, and it is indeed tricky to correct the way, and where, you want to. Less abstractly: it lacks transitions (and that's not especially a bad thing), but there is also not enough information recorded (it simply doesn't exist) to be able to correct exactly where you need to and keep it clean. When things are going well the 5D2 is a breeze, but when things are tricky you see the limitations, especially on the DR side. On the practical side, the Alexa is much more forgiving and also gives a wider choice in grading. You hardly ever reach its limit, while with the 5D2 you reach it very fast. A bad green screen, for example, is workable with the Alexa, while with the Canon you can throw the footage in the garbage; if you fail it, there is little to do.

The irony is that the Alexa, a much more expensive camera aimed squarely at the pro market, could be used by an unskilled operator because there is so much more room in post, while the 5D2, which targets the prosumer market, is far more critical about getting the files right for creative post. Really, the widest possible DR at capture avoids a lot of hassles and also allows many more creative decisions, IMHO.
« Last Edit: October 09, 2011, 11:17:28 AM by fredjeang »
mediumcool (Sr. Member, Posts: 676)
« Reply #49 on: October 09, 2011, 11:30:08 AM »

The irony is that the Alexa, a much more expensive camera aimed squarely at the pro market, could be used by an unskilled operator because there is so much more room in post, while the 5D2, which targets the prosumer market, is far more critical about getting the files right for creative post. Really, the widest possible DR at capture avoids a lot of hassles and also allows many more creative decisions, IMHO.

It’s long been like this: a more expensive piece of equipment can produce good work more easily. Thinking back to flimsy enlargers that were cheaper than the high-end Dursts (for example): used carefully with a good lens they could produce good results, but always with more effort.
Pantoned (Jr. Member, Posts: 98)
« Reply #50 on: October 09, 2011, 02:56:02 PM »

I've recently been working with some Alexa files, and those have the widest DR I have ever seen from any digital capture device

I totally agree with you. Last July I was shooting a making-of with the 1Ds Mark III. I didn't even know about the Alexa back then, but when I saw the preview on the screen I couldn't stop myself asking about the camera. I couldn't compare on stage because I only had my own camera's screen preview, but just seeing my clipping and histograms made me wonder about the difference. The Alexa looks so good even without post-production. That night when I came home I had a look at the Alexa specs; they claim "14 stops for all sensitivity settings from EI 160 to EI 3200, as measured with the ARRI Dynamic Range Test Chart (DRTC)", but I would like to see numbers from independent tests.

Yesterday I read the article "3 Years Later – DSLR Video", and I ventured to try the Technicolor profile on the 1Ds Mark III. Even though I rarely shoot JPEG, it is amazing the amount of shadow detail it displays. The profile pushes the shadows up so the artifacts that normally appear with compression have less effect. I will definitely leave the profile in the camera for the future; even if it's made for video, I can find uses for it in photography, for the same reasons it was created.
www.arnauanglada.com

fredjeang (Guest)
« Reply #51 on: October 09, 2011, 03:37:43 PM »

I totally agree with you. Last July I was shooting a making-of with the 1Ds Mark III. I didn't even know about the Alexa back then, but when I saw the preview on the screen I couldn't stop myself asking about the camera. I couldn't compare on stage because I only had my own camera's screen preview, but just seeing my clipping and histograms made me wonder about the difference. The Alexa looks so good even without post-production. That night when I came home I had a look at the Alexa specs; they claim "14 stops for all sensitivity settings from EI 160 to EI 3200, as measured with the ARRI Dynamic Range Test Chart (DRTC)", but I would like to see numbers from independent tests.

Yesterday I read the article "3 Years Later – DSLR Video", and I ventured to try the Technicolor profile on the 1Ds Mark III. Even though I rarely shoot JPEG, it is amazing the amount of shadow detail it displays. The profile pushes the shadows up so the artifacts that normally appear with compression have less effect. I will definitely leave the profile in the camera for the future; even if it's made for video, I can find uses for it in photography, for the same reasons it was created.

It's true, the Technicolor profile helps the 5D2 a lot. In fact they created this profile with parameters already used by video gurus, with the post-production in mind from the beginning.

About what video can bring to stills and vice versa: it is indeed the case. Stills can bring video a simplified pipeline workflow, and video can bring stills the power of its technology, especially in PP, LUTs, etc.

But already... I find (personally) that Nuke gives me much more power and fine tuning than Photoshop does. (I mean for still imagery too; Nuke is resolution-independent.)

I suggest that still photographers have a look at After Effects. It's way cheaper than Nuke but does the same things (the harder way, as mediumcool pointed out). Most of you already have AE installed with the Adobe suite and have probably ignored it, thinking it's just for motion. Even if you do not work with motion, it could bring a different perspective to your workflow for certain applications.

http://vimeo.com/12758392

This video is interesting because it shows a process in real time, even with some issues, so it is very representative, IMO, of what you can expect.
« Last Edit: October 09, 2011, 03:55:27 PM by fredjeang »
KLaban (Sr. Member, Posts: 1678)
« Reply #52 on: October 09, 2011, 05:18:56 PM »

The Never Ending Holy Quest for Dynamic Range

Or alternatively, let the buggers blow.


fredjeang (Guest)
« Reply #53 on: October 09, 2011, 05:58:09 PM »

As always, great work. Beautiful.

Indeed, looking at that pic, and at most great imagery, I never find myself asking about a blown highlight or whatever as a viewer (not the viewer of the camera; I mean me as a spectator). If the image catches me, I never notice the technique or the lack of it.

I especially noticed some young guys in Russia with cheap cameras and basic tech making imagery that is truly interesting, with very little technique and many optical "issues" (not Holgas but DSLRs).


I wonder what it is about those abandoned places that is so attractive. There is obviously fine technique here, and experience. In fact what I probably notice most in Keith's work is that there is a quest for beauty and refinement in subjects where I have mostly seen the exact opposite: as it's abandoned, let's work on the dirty side. As with a painter, the colour is mastered.
It makes me think of why I like a woman in her 50s more than a young, plastic-perfect and boring model. Time makes things far more attractive before it destroys them. There is a point, quite short in time, where the beauty is at its peak, just before the final decadence. And a rebirth through the lens of someone who is able to see it. This is indeed very similar to treasure hunting.
« Last Edit: October 09, 2011, 06:04:02 PM by fredjeang »
KLaban (Sr. Member, Posts: 1678)
« Reply #54 on: October 10, 2011, 04:25:52 AM »

Fred, many thanks for your kind comments, much appreciated.

Sadly letting the windows/light source blow like this often doesn't work. I think - or rather, hope - it works here because there is light spill around the window frame, distorting as it goes, and also on the recess. The white areas read as light rather than two perfect solid white squares. I've made a number of shots where this hasn't happened and the result is less than desirable. My problem is that more often than not the subject matter or detail through the windows is superfluous and unwanted.

I believe there is a beauty in decay, but as you say it's often fleeting. The truth is these ruins are not places where anyone in their right mind would choose to spend quality time. I've come across sheep, goats, dogs, cats, rats, snakes, scorpions, mosquitoes, horse flies...dead and alive. There’s excreta everywhere and not all of it from animals. There's earthquake damage, rotting floorboards and falling masonry. Regardless, the ruins draw me in. Sometimes I just long for the clean minimalist lines of a Barrett home - not to be confused with a Barratt Home which in the UK is a different kettle of fish entirely.

I can spend days, weeks even, searching before finding anything worthwhile, but hey, it's the thrill of the chase. The people I meet on these travels and their interest, kindness and generosity are often reward enough.

BTW, I too find image 01 to be the most pleasing.
 
« Last Edit: October 10, 2011, 04:29:05 AM by KLaban »

ChristopherBarrett (Guest)
« Reply #55 on: October 10, 2011, 08:38:12 AM »

I should have clarified at the beginning... I'm not really interested in the merits of the look with the blown-out window, though that can work quite well in a residential image. The fact is that it's simply not an option for most of what I do. The images below are a better example of much of my commercial work. The blown look is less appropriate for this commercial space, and the view of the lake is actually a big part of the design story. I spent the better part of a day blending a seven-exposure bracket to tame the contrast. Had I submitted the unaltered image, the client would quickly have rejected it.

So... I'm just exploring better ways of achieving this end, ways that compress the required dynamic range without feeling too artificial.

Cheers,

CB

KLaban (Sr. Member, Posts: 1678)
« Reply #56 on: October 10, 2011, 09:52:09 AM »

Yup, I wasn't really suggesting that the blown look is an option for the majority of commercial work. Clients pay a lot for their views.

I take it you dropped the sea in?
arashm (Full Member, Posts: 142)
« Reply #57 on: October 10, 2011, 10:12:21 AM »

Chris,
I've had a few discussions about this with my retouchers as well.
Unfortunately it seems that, at the moment, layers/masks and quality time with your Wacom pen are the only real way to go.
I know this doesn't sound very positive, especially since it's such a time-consuming process.
All the HDR software I've tried just doesn't have the sophistication needed.
The only other thing I can add is that I try to educate clients on this style and explain the billing process!
am
Slobodan Blagojevic (Sr. Member, Posts: 6042)
« Reply #58 on: October 10, 2011, 10:20:38 AM »

... I take it you dropped the sea in?

It is most likely Chicago, with Lake Michigan.
ChristopherBarrett (Guest)
« Reply #59 on: October 10, 2011, 10:31:25 AM »

Correct!  Lake Michigan it was.  Here was the process...

Shot a bracket from N to N-6 in 1 stop increments.
Processed the full bracket in PhotoMatix using Exposure Fusion.

Opened the fused HDR image and the Normal exposure in Photoshop.
Dropped the HDR image on top of the Normal image.
Used the luminosity of the base Normal image as a mask on the HDR image layer.
Hand blended from there.

Pain in the ass.
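For what it's worth, the luminosity-mask step of that recipe can be sketched in a few lines. The single-channel pixel values below are hypothetical toy numbers, and the real Photoshop workflow adds hand-painted refinement on top of this automatic starting point:

```python
# Sketch of a luminosity-mask blend: the normal exposure's own
# brightness acts as the mask, so bright areas (window/lake) take their
# values from the exposure-fused layer and shadows keep the normal frame.
# Values are single-channel, normalised to [0, 1].

def luminosity_blend(normal, fused):
    """Per pixel: out = mask*fused + (1-mask)*normal, with mask = normal value."""
    return [n * (1 - n) + f * n for n, f in zip(normal, fused)]

normal = [0.2, 0.5, 0.95]   # shadows, mids, near-blown window
fused  = [0.25, 0.5, 0.7]   # exposure-fused: window detail recovered
print(luminosity_blend(normal, fused))
```

The shadow pixel stays close to the normal exposure while the near-blown pixel is pulled most of the way towards the fused value, which is the intended behaviour of the mask — and also why the result usually still needs hand blending at the transitions.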