Author Topic: Curves over sliders  (Read 8507 times)
jrsforums
« Reply #20 on: November 09, 2012, 10:23:55 AM »

Quote
Hi John,

I think one has to look further ahead. With the (until recently) slowly developing HDR tonemapping functionality (32-bit/channel TIFFs), things like iCAM (an image Color Appearance Model) are becoming more important than strictly colorimetric color. Such a CAM (e.g. CIECAM02) allows one to, for example, avoid oversaturated shadow colors in a perceptually believable way, or to tune saturation clipping and make perceptually correct brightness changes.

That may also be one aspect of the lack of response to Dan Margulis' supposed question; he tried to push L*a*b* processing, which is far from neutral even when switching (back and forth) to an RGB colorspace.

Cheers,
Bart

As Ricky said to Lucy, "You got some 'splaining to do."

Can you put that in horsey, horsey, duckie, duckie animal book terms... that is, a bit simpler or more detailed, so some, such as I, can better understand?

John
digitaldog
« Reply #21 on: November 09, 2012, 10:26:18 AM »

Quote
There is a post on here from a couple of days ago in which the Digital Dog retells a story about Dan Margulis asking Thomas Knoll to implement a luminosity curve in ACR; Knoll declined, much to Dan's chagrin. This was a while back.

Actually that was Jeff.
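Since the thread keeps circling back to what a luminosity curve would have meant, here is a minimal numpy sketch (my own illustration, not anyone's actual implementation): apply the curve to a luminance estimate only, then scale all three channels by the same ratio, so brightness changes without the hue/saturation shifts a per-channel RGB curve causes.

```python
import numpy as np

REC709 = np.array([0.2126, 0.7152, 0.0722])  # luma weights

def luminosity_curve(rgb, curve, eps=1e-6):
    """Apply `curve` (monotonic on [0, 1]) to luminance only.

    rgb: float array in [0, 1] with shape (..., 3).
    """
    luma = rgb @ REC709
    ratio = curve(luma) / np.maximum(luma, eps)
    # One shared multiplier per pixel: brightness changes,
    # channel ratios (hue/saturation) do not.
    return np.clip(rgb * ratio[..., None], 0.0, 1.0)

# Example: a simple midtone-lifting curve
out = luminosity_curve(np.array([[0.2, 0.1, 0.05]]), np.sqrt)
```

Note how the R/G ratio of the pixel is preserved exactly after the lift, which is the whole point of curving luminosity rather than RGB.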

Andrew Rodney
Author “Color Management for Photographers”
http://digitaldog.net/
AndrewMcCormick
« Reply #22 on: November 09, 2012, 10:45:17 AM »

Quote
Charlie Cramer has a tutorial posted on the site expressly about this subject. It was about 3 months ago. Very informative.

Thanks. Perfect article for what I was looking for.
BartvanderWolf
« Reply #23 on: November 09, 2012, 10:45:41 AM »

Quote
As Ricky said to Lucy, "You got some 'splaining to do."

Can you put that in horsey, horsey, duckie, duckie animal book terms... that is, a bit simpler or more detailed, so some, such as I, can better understand?

Hi John,

It's complex material, but perhaps a visual demonstration helps in grasping the importance of perceptual color and brightness adjustments. I found a nice example here. Look at the examples under the "The surround effect" heading for the effect of local/surrounding brightness on perceived color/brightness.

Here is another approach, but specifically addressing similar saturation issues.

Cliff Rames made a Photoshop plugin (32-bit Windows version only) which lets you explore the wonders of perceptual transforms on your own images.
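To make the earlier point about oversaturated shadow colors concrete, here is a toy numpy sketch (my own illustration, nothing like a real CIECAM02 transform) of luminance-dependent desaturation: chroma is scaled down as luminance falls, so deep shadows drift toward neutral the way perception expects.

```python
import numpy as np

REC709 = np.array([0.2126, 0.7152, 0.0722])

def desaturate_shadows(rgb, strength=0.6):
    """Pull dark colors toward gray; leave bright colors alone.

    rgb: float array in [0, 1], shape (..., 3).
    strength: 0 = no effect, 1 = fully neutral at black.
    """
    luma = rgb @ REC709
    # Saturation multiplier: ~1.0 in highlights, (1 - strength) at black
    sat = 1.0 - strength * (1.0 - luma) ** 2
    return luma[..., None] + (rgb - luma[..., None]) * sat[..., None]

dark = desaturate_shadows(np.array([[0.2, 0.0, 0.0]]))    # loses chroma
bright = desaturate_shadows(np.array([[1.0, 0.8, 0.8]]))  # barely changes
```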

Cheers,
Bart
Tim Lookingbill
« Reply #24 on: November 09, 2012, 12:34:55 PM »

Why is it that when scientists get into the act of controlling and defining a creative endeavor such as photography, they seem to create some of the blandest images implementing their complicated methods?...

http://www.cis.rit.edu/fairchild/HDRPS/Scenes/CemeteryTree(2).html

http://www.cis.rit.edu/research/mcsl2/icam/hdr/ (Are those image samples meant to be a scene referred starting point or are they finished renderings?)

Thanks to this thread and Bart's links I now know what scene-referred is supposed to look like. However, I don't agree with the definition, and I don't see or understand how it was established and defined. Simon Tindeman's definition of equalizing color and removing the characterization of the camera doesn't make a lot of sense from a creative standpoint.

How would one know if the character of a camera has been removed, and who defines how it is done and what it looks like? I remember reading the points he was making about the behavior of saturation tied to contrast/luminance edits (demoed in his "Tonability" image samples linked here) to Thomas Knoll years ago over at the Adobe Photoshop forums. I never knew he was going for a scene-referred starting point. Interesting that he finally came up with an app to address it, but I still see it as way too much work for what it's worth.

Color equalization of the entire image makes a lot of sense from a workflow-efficiency POV, but it doesn't address what it does to the creative editing process, which appears to make it look less efficient.

I use both curves and sliders to make an image "pop" or, in more scientific terms, to "accurately" reproduce the spectral reflectance characteristics provided by any given illuminant on the elements in a scene, in order to give it a 3D look, as well as to expand the dynamics in an attempt to counter the gamut/dynamic-range-crushing look of a print (offset press) on output. Saturation (a spectral reflectance characteristic) requires a luminance boost in order to pull this off perceptually, NOT ACCURATELY.

This is what I refer to as Vibrance, whereas Richness (saturation in darker tones) shouldn't have as much of a luminance increase, comparatively.
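That Vibrance/Richness split can be sketched in a toy numpy interpretation (my own sketch, not Adobe's actual Vibrance): boost chroma everywhere, but tie the accompanying luminance lift to how bright the pixel already is, so dark saturated tones keep their "richness".

```python
import numpy as np

REC709 = np.array([0.2126, 0.7152, 0.0722])

def vibrance(rgb, sat_boost=0.3, luma_boost=0.1):
    """Saturation boost plus a luminance lift weighted by brightness."""
    luma = rgb @ REC709
    # Boost chroma for every pixel...
    out = luma[..., None] + (rgb - luma[..., None]) * (1.0 + sat_boost)
    # ...but lift luminance in proportion to existing brightness, so
    # dark saturated tones stay "rich" instead of washing out.
    out *= (1.0 + luma_boost * luma)[..., None]
    return np.clip(out, 0.0, 1.0)

deep_red = vibrance(np.array([[0.3, 0.05, 0.05]]))
```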

I now find the parametric curve indispensable for broadening the effects of the "Clarity" slider without introducing the familiar goofy-looking solarized effect. I find small increases of the Fill slider, combined with gradual back-and-forth increases in the Contrast slider, add a lot of definition in the shadows, more so than a curve tweak. However, the Parametric Shadow curve maxed to the left, set to 100, with the two left triangles slid far to the left, often adds extra depth, rolling off into the black point without posterization better than the Black Point slider does.

The flat-looking shadows shown in those RIT.EDU-linked HDR images are not what I'm after.
« Last Edit: November 09, 2012, 12:46:29 PM by tlooknbill »
RFPhotography
« Reply #25 on: November 09, 2012, 02:00:21 PM »

Quote
Simon Tindeman's definition of equalizing color and removing the characterization of the camera doesn't make a lot of sense from a creative standpoint.

But that's the point of Tindeman's workflow, isn't it?  Using a 'scene referred' intermediary image as the starting point and then doing the creative processing after that.  As far as how you would know if the particular colour character of the camera were removed, that would be the work of a custom camera profile, no?

I think it makes some sense, particularly if you're not a fan of what Adobe gives you as a starting point with its built-in render curve. It goes even further than that in PV2012 with the image-adaptive highlight protection. I know how to back out the render curve using a custom camera profile made with the curve set to linear, but I'm not sure how to back out the new highlight protection.
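Backing out a known render curve can also be done numerically, which may help visualize what a linear-curve profile accomplishes. A hedged sketch (a toy gamma curve stands in for the real render curve, which Adobe doesn't publish): sample the curve, then invert it by swapping axes in interpolation.

```python
import numpy as np

def invert_curve(curve, n=1024):
    """Approximate the inverse of a monotonically increasing curve on [0, 1]."""
    xs = np.linspace(0.0, 1.0, n)
    ys = curve(xs)
    # Swap the axes: interpolating x as a function of y inverts the curve
    return lambda v: np.interp(v, ys, xs)

render = lambda x: x ** (1 / 2.2)   # stand-in "render curve"
linearize = invert_curve(render)    # "backs out" the curve
```

Round-tripping a value through `render` and then `linearize` should land back on the original to within the sampling error.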

FWIW, I'd probably disagree with Tindeman on one point, and that's whether highlight recovery should be part of the workflow to generate the scene-referred image or part of the creative side of the workflow.
Tim Lookingbill
« Reply #26 on: November 09, 2012, 08:18:20 PM »

Quote
As far as how you would know if the particular colour character of the camera were removed, that would be the work of a custom camera profile, no?

That sounds like it would work, but in practice, with regard to improving the efficiency of a workflow intended to produce pleasing-looking images, it doesn't work consistently. It helps a bit as long as the image isn't drastically changed from how the viewer remembers the scene. The flat, desaturated sample image Simon uses is not a starting point I'm interested in.

It's also disappointing to see he still uses sample images that don't represent the typical shooting scenarios of most street/landscape photographers to demonstrate the advantages of scene-referred processing/editing. I'd like to see him start with one of those HDR images like the "Cemetery Tree", only using a single properly exposed shot (I've done a few that way shooting Raw, with better results), and see if he gets consistent results compared to the less contrasty scene he used.

Why take four steps back in order to flatline an image as a way to establish some theoretical "scene referred" starting point? It just looks like more work than it's worth.

I actually applied a similar methodology with an Epson flatbed scanner in order to see all the available tonal differences that made up usable detail, from shadows to highlights, when scanning negatives that the gamma-curve-driven underbelly of Epson's driver kept plugging up and blowing out.

I couldn't use a profiling target/software package at the time, so instead I used a colorful, properly exposed test image, made generally flat and desaturated with its histogram end points pulled well back, in order to give toe and shoulder room for the inevitable squash-and-stretch clipping caused by tone mapping and restoring normal contrast. On some images this method allowed a simple sigmoid or umbrella-shaped curve to make it all look right, but it still needed a lot of local contrast to bring out clarity.
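That "simple sigmoid" step can be sketched as a logistic S-curve around middle gray, normalized so the end points stay put (a generic curve of my own, not Epson's or anyone's actual tone curve):

```python
import numpy as np

def sigmoid_contrast(x, slope=6.0, mid=0.5):
    """Logistic S-curve on [0, 1], rescaled so 0 -> 0 and 1 -> 1.

    Steepens midtones (restoring contrast to a deliberately flat
    scan) while rolling off gently into shadows and highlights.
    """
    raw = 1.0 / (1.0 + np.exp(-slope * (np.asarray(x, float) - mid)))
    lo = 1.0 / (1.0 + np.exp(slope * mid))           # value at x = 0
    hi = 1.0 / (1.0 + np.exp(-slope * (1.0 - mid)))  # value at x = 1
    return (raw - lo) / (hi - lo)
```

With `mid=0.5` the curve is symmetric: black and white points are pinned while midtone separation increases.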

It worked for some images, but it became a big PITA with others that had more or less contrast and brightness caused by the various lighting situations captured from scene to scene.

In the end it just became more work across a wide range of images than it was worth. The part of the equation that's being left out of "scene referred" processing, with regard to efficient workflows, is that reality has to be characterized as well, not just the camera's response, to get consistent results.

If the defaults give contrasty results that hide detail, it's a lot easier to choose a Linear tone curve and/or set the Contrast slider to zero than to turn everything off as a new one-size-fits-all default that makes one image workable and the rest a PITA to edit.
« Last Edit: November 09, 2012, 08:21:37 PM by tlooknbill »
Tim Lookingbill
« Reply #27 on: November 09, 2012, 08:26:43 PM »
ReplyReply

I tried to do a search on 'image adaptive highlight protection', but I couldn't find the answer I was looking for.

Does the adaptive part of that term refer to the way our eyes view (adapt to) a given level of scene contrast, or does it refer to adapting a look to an image with regard to rendering? IOW, is it an editing term or a human visual system term?
RFPhotography
« Reply #28 on: November 09, 2012, 10:26:42 PM »

I think what you're talking about is just a different approach to a workflow.  We each develop an approach that works for us.

As I understand it, the adaptive part of the render curve in PV2012, as well as of the sliders in the Basic panel, is based on the actual data in the individual image. So the highlight protection will be different for different images. WRT the sliders, a 10-point change on one image will give a different result from a 10-point change on another image.
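A toy numpy illustration of what "image adaptive" could mean in practice (my own guess at the general idea, not Adobe's actual algorithm): place the compression knee at the image's own 99th-percentile value, so the same slider move compresses different images by different absolute amounts.

```python
import numpy as np

def adaptive_highlights(img, slider):
    """Toy adaptive highlight control; slider in [-100, 0].

    The knee adapts to the image's own data, so identical slider
    values give different results on different images.
    """
    knee = np.percentile(img, 99)
    amount = -slider / 100.0
    out = img.copy()
    mask = img > knee
    # Compress values above the image-specific knee toward the knee
    out[mask] = knee + (img[mask] - knee) * (1.0 - amount)
    return np.clip(out, 0.0, 1.0)

bright = np.linspace(0.0, 1.0, 101)   # highlights reach 1.0
dim = np.linspace(0.0, 0.5, 101)      # highlights reach only 0.5
```

Running both images through the same `-50` move compresses each relative to its own knee, which is the "a 10-point change gives different results" behavior in miniature.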
Tim Lookingbill
« Reply #29 on: November 09, 2012, 10:42:25 PM »

Thanks for the explanation, Bob.

I finally found Jeff Schewe's outline of it on an Adobe forum thread, which ends up going quite in depth...

Quote
The individual controls in PV2012 are image adaptive, meaning the adjustment and range of the controls adapt to your image. By default, PV2012 retains much more highlight and shadow detail than PV2010. Fill Light and Clarity were the two image-adaptive controls in PV2010 (not sure if Highlight Recovery was). But now all the tone mapping in PV2012 adjusts the controls to the image. So they are much more flexible, which is why it's so hard to try to convert from PV2010's non-adaptive controls to PV2012's adaptive controls. There's some additional info on the Lightroom Journal in this article: Magic or Local Laplacian Filters (note, it's pretty deep stuff).

From this thread...

http://forums.adobe.com/message/4313870

...and the gear head take on what's going on...

http://people.csail.mit.edu/sparis/publi/2011/siggraph/

Again, I'm not too thrilled with the results seen in their sample images. I don't know what part of the science they're studying influences those kinds of results.
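The local Laplacian filter itself is pyramid-based and well beyond a forum snippet, but the core intuition (amplify each pixel's deviation from its local average) can be shown in a crude one-scale, one-dimensional numpy sketch. Note this is essentially unsharp masking, which is far less halo-resistant than what the Paris/Hasinoff/Kautz paper actually describes:

```python
import numpy as np

def local_contrast(signal, radius=4, gain=1.5):
    """Crude single-scale local contrast boost on a 1-D signal."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    base = np.convolve(signal, kernel, mode="same")  # local average
    detail = signal - base                           # local deviation
    return np.clip(base + gain * detail, 0.0, 1.0)

# A soft step: local contrast exaggerates the transition
step = np.concatenate([np.full(20, 0.4), np.full(20, 0.6)])
out = local_contrast(step)
```

The overshoot/undershoot around the step is exactly the halo artifact the multi-scale approach in the paper is designed to suppress.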
Tim Lookingbill
« Reply #30 on: November 09, 2012, 11:00:12 PM »

Ok, a correction.

This science document indicates I was too quick to judge the initial samples...

http://people.csail.mit.edu/sparis/publi/2011/siggraph/additional_results/tone_mapping_large/cadik-desk02.html

Click on any of the hi-rez images and see how clean and lacking in noise the results they achieved are. Amazing!
RFPhotography
« Reply #31 on: November 10, 2012, 08:08:12 AM »

But the images should be low in noise, right?  That's the goal of using HDR, to maximise SNR and thereby reduce noise.
BartvanderWolf
« Reply #32 on: November 10, 2012, 08:26:01 AM »

Quote
But the images should be low in noise, right?  That's the goal of using HDR, to maximise SNR and thereby reduce noise.

Exactly. That's what allows one to do accurate technical measurements on the images for research purposes and, more importantly for us photographers, to do some heavy lifting in tonemapping.
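That √N noise reduction is easy to check numerically; a quick simulation with synthetic Gaussian noise (a stand-in for real sensor noise, which is more complicated):

```python
import numpy as np

rng = np.random.default_rng(42)
scene = np.full(10_000, 0.5)   # flat mid-gray "scene"

# Sixteen "exposures" of the same scene, each with independent noise
frames = [scene + rng.normal(0.0, 0.05, scene.shape) for _ in range(16)]

single_noise = np.std(frames[0] - scene)
stacked_noise = np.std(np.mean(frames, axis=0) - scene)
ratio = single_noise / stacked_noise   # expect roughly sqrt(16) = 4
```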

Here's another challenging image, although it's obviously a technical demonstration, because I would have tried to a) create a better EXR source image (reduce chromatic aberrations and perhaps use a more glare-resistant lens), and b) do a more pleasing color rendering (more along the lines of the attached conversion, tonemapped in SNS-HDR and downsampled in LR).

Cheers,
Bart
« Last Edit: November 10, 2012, 10:16:56 AM by BartvanderWolf »
Peter_DL
« Reply #33 on: November 10, 2012, 09:26:43 AM »

Quote
FWIW I'd probably disagree with Tindeman on one point and that's whether highlight recovery should be a part of the workflow to generate the scene referred image or whether that's a part of the creative side of the workflow.

Highlight recovery for scene reconstruction could refer simply to the recovery of clipped channels, that is, when only one or two channels are clipped while the third one is intact - as opposed to the highlight recovery often needed to undo the damage from a 'creative' tone curve such as the famous S-curve. Tindeman's approach leaves the door open for spatially non-uniform tone mapping, for example furnishing the tone curve with an inverted luminosity mask in Photoshop, thus overcoming the need to re-insert information from the scene-referred rendition.
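That first kind of recovery can be sketched in a toy way (nothing like any real raw converter's algorithm): estimate the clipped channel from the unclipped ones, using the channel ratio observed in unclipped pixels.

```python
import numpy as np

def reconstruct_clipped(rgb, ch=1, clip=1.0):
    """Estimate channel `ch` where it is clipped, from the other two.

    Uses one global ratio learned from unclipped pixels; real
    converters estimate this locally and far more carefully.
    """
    clipped = rgb[:, ch] >= clip
    others = [c for c in range(3) if c != ch]
    ok = ~clipped
    # Average ratio of the target channel to the mean of the other two
    ratio = np.mean(rgb[ok, ch] / rgb[ok][:, others].mean(axis=1))
    out = rgb.astype(float).copy()
    est = ratio * rgb[clipped][:, others].mean(axis=1)
    # Only ever push the estimate above the clip point, never below it
    out[clipped, ch] = np.maximum(est, clip)
    return out

pixels = np.array([[0.50, 0.60, 0.50],   # unclipped: G = 1.2 * mean(R, B)
                   [0.25, 0.30, 0.25],
                   [1.00, 1.00, 1.00]])  # G clipped at 1.0
```

This only works when at least one channel survives; if all three are blown, there is genuinely nothing left to estimate from.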

Peter

RFPhotography
« Reply #34 on: November 10, 2012, 11:23:29 AM »

It was really a rhetorical question, Bart.  I knew the answer. ;-)

Except you can't repair or recover a completely clipped channel in LR/ACR, Peter. At least I'm not aware of how it can be done. If a channel is clipped, it's gone, and no amount of playing with sliders is going to bring it back.
Tim Lookingbill
« Reply #35 on: November 10, 2012, 12:15:25 PM »

Bart, that looks perfect.

Just wondering if all that scientific coding demonstrated in the provided links is engineered into current versions of Lightroom/ACR.

What is SNS-HDR? Is it a stand alone app or Photoshop plug-in?
Schewe
« Reply #36 on: November 10, 2012, 12:36:06 PM »

Quote
Just wondering if all that scientific coding demonstrated in the provided links is engineered into current versions of Lightroom/ACR.

The research in the paper is the basis of Process Version 2012, yes. Note that the original paper discussed HDR, which was what the original research was directed at; the engineers adapted the algorithms to work with standard raw image data. While 32-bit (HDR) processing wasn't included in the original LR 4.0 and ACR 7.0 releases, it was added in LR 4.x and ACR 7.x.
Peter_DL
« Reply #37 on: November 10, 2012, 12:44:11 PM »

Quote
... you can't repair or recover a completely clipped channel in LR/ACR, Peter.  At least I'm not aware of how it can be done.  If a channel is clipped, it's gone and no amount of playing with sliders is going to bring it back.

DCRAW can.
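For anyone wanting to try it: dcraw's highlight handling lives on its -H switch (0 clips, 2 blends, and higher values attempt to rebuild clipped highlights). A sketch of a typical invocation, built here as a Python argument list so the pieces are labeled; the filename is a placeholder, and dcraw itself must be installed to actually run it:

```python
# dcraw flags (see the dcraw man page):
#   -H 5 : attempt to rebuild clipped highlights
#   -4   : linear 16-bit output
#   -T   : write a TIFF instead of a PPM
cmd = ["dcraw", "-H", "5", "-4", "-T", "photo.NEF"]
```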

BartvanderWolf
« Reply #38 on: November 10, 2012, 02:04:13 PM »

Quote
Bart, that looks perfect.

Well, there are a lot of different ways of tonemapping possible. As long as the basic image data is of decent quality (S/N ratio), creativity can take over without noise becoming a distracting issue.

Quote
Just wondering if all that scientific coding demonstrated in the provided links is engineered into current versions of Lightroom/ACR.

Yes, in some form or another. The Farbman et al. paper was mentioned at LR introduction time, if I'm not mistaken. EDIT: It looks like I remembered correctly (it was referenced in the paper mentioned in the article).

Quote
What is SNS-HDR? Is it a stand alone app or Photoshop plug-in?

It's IMHO currently the best stand-alone HDR tonemapping application for more natural-looking results (Windows only, although it reportedly runs fine under Parallels and the like on a Mac). For those interested, here is the website (select your preferred language at the top right).

Cheers,
Bart
« Last Edit: November 10, 2012, 06:30:01 PM by BartvanderWolf »
RFPhotography
« Reply #39 on: November 10, 2012, 06:01:03 PM »


Are you talking about the highlight recovery section of GL's article? If so, then no, it's not truly recovering the blown channel(s). It's recreating or replacing data from other information in the image. This can be done in PS as well with saturation masks. The fact remains that you cannot truly recover data from a blown channel because there is no data to recover. And recreating or replacing isn't the same.
