Author Topic: Dan Margulis Sharpening Action  (Read 106546 times)
bjanes
Sr. Member
****
Offline

Posts: 2756

« on: September 27, 2007, 11:53:44 AM »

In the Nikon forum I came across an interesting post regarding a method devised by Dan Margulis for sharpening RGB images. It uses an artificial black channel, derived from a CMYK conversion, to make a sharpening mask which restricts the sharpening to the darker and less colorful areas of the image. The method is implemented as a Photoshop action.

Dan Margulis Action

Documentation is sparse, but it is a novel method. One can control the degree of sharpening via the usual unsharp mask controls and through the center slider of the levels control of the sharpening layer.

For those accustomed to the Bruce Fraser sharpening workflow, I do not know if this should be regarded as capture sharpening to be followed by a round of output sharpening according to the size of the final image and output device, or merely as a one step process.

Bill
Logged
Jonathan Wienke
Sr. Member
****
Offline

Posts: 5759

« Reply #1 on: September 27, 2007, 02:09:12 PM »

Doesn't sound very impressive. If you're going to do tonal masking, you want to mask off the shadows to avoid/reduce halos and noise amplification (dark tones have the most noise), and mask off the highlights to avoid clipping. Focusing sharpening on darker areas makes no sense at all.
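Jonathan's alternative (protect both ends of the tonal scale, sharpen the midtones) can be sketched as a simple luminance-weighted mask. The thresholds here are purely illustrative, not anyone's shipping recipe:

```python
import numpy as np

def midtone_mask(luma, lo=0.15, hi=0.85):
    """Mask that is 1 in the midtones and ramps to 0 in deep shadows
    (to avoid amplifying noise) and bright highlights (to avoid clipping).
    `luma` is a float array in [0, 1]; lo/hi are illustrative thresholds."""
    m = np.ones_like(luma)
    m = np.where(luma < lo, luma / lo, m)                   # ramp up out of the shadows
    m = np.where(luma > hi, (1.0 - luma) / (1.0 - hi), m)   # ramp down into the highlights
    return np.clip(m, 0.0, 1.0)
```

Multiplying the sharpening layer's opacity by such a mask confines halos and noise gain to the tonal range where they are least objectionable.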
Logged

digitaldog
Sr. Member
****
Offline

Posts: 8628

« Reply #2 on: September 27, 2007, 02:50:09 PM »

Quote
Documentation is sparse, but it is a novel method.

What makes it novel?

Is it capture or output sharpening? If the latter, how does one know the parameters for each device?

If it's based on visual sharpening, how does one operate this, considering we're working on low-resolution output devices like a display?

What about converting to CMYK (and to what space) to generate a black channel? Seems like a good way to toss away a lot of useful data and color gamut (not that Dan believes either is an issue).

Quote
For those accustomed to the Bruce Fraser sharpening workflow, I do not know if this should be regarded as capture sharpening to be followed by a round of output sharpening according to the size of the final image and output device, or merely as a one step process.

Sounds like that needs to be defined (among other things). So again, it's novel why?
Logged

Andrew Rodney
Author “Color Management for Photographers”
http://digitaldog.net/
Mark D Segal
Contributor
Sr. Member
*
Offline

Posts: 6821

« Reply #3 on: September 27, 2007, 04:12:36 PM »

Quote
In the Nikon forum I came across an interesting post regarding a method devised by Dan Margulis for sharpening RGB images. It uses an artificial black channel, derived from a CMYK conversion, to make a sharpening mask which restricts the sharpening to the darker and less colorful areas of the image. The method is implemented as a Photoshop action.

Dan Margulis Action

Documentation is sparse, but it is a novel method. One can control the degree of sharpening via the usual unsharp mask controls and through the center slider of the levels control of the sharpening layer.

For those accustomed to the Bruce Fraser sharpening workflow, I do not know if this should be regarded as capture sharpening to be followed by a round of output sharpening according to the size of the final image and output device, or merely as a one step process.

Bill

Bill,

First responding to a comment of Andrew's on the method per se, I believe the idea is to make a copy of the image, convert the copy to CMYK, extract the K channel and use that for the sharpening Action in the original RGB file. This way one does not lose the advantage of the wider gamut RGB colour space.

With that out of the way, let us turn to the fundamentals involved here. I preface my remarks with a comment to Jonathan that one should not dismiss a proposal like this out-of-hand simply because aspects of it sound counter-intuitive. It may well be quite usable on a number of image types. Dan Margulis would not issue this procedure without having tested it somehow; but we do not know the details of the testing, therefore we do not know its optimality relative to other sharpening solutions already available - of which there is a plethora.

Of all those solutions, "the bar" on the subject of image sharpening was raised decisively and definitively with the publication of Bruce Fraser's book "Real World Image Sharpening...". It is now necessary for anyone developing or evaluating sharpening algorithms to read the first three chapters of that book and understand thoroughly what the technical issues with sharpening are, and their implications. Bruce was clearly of the view, based on his extensive analysis of these issues, that high quality sharpening requires a two or three stage process - hence its own "workflow"; furthermore, within each of those stages, the character of the sharpening needs to be customized by image source, image content (frequency of detail) and image use (each use by resolution).

Based on these principles, PixelGenius LLC developed PK Sharpener (Pro), which, according to Jeff Schewe in other posts in this Discussion Forum, required months upon months of research to develop and refine the settings appropriate to each imaging situation (source, content and use). I have laid out all of these logical permutations and combinations on a spreadsheet, and calculated that PK Sharpener Pro caters to a total of 2,368 unique imaging conditions, each of which is easily user-selectable in the PKS user interface.

Hence, testing Dan's one-pass tool for optimality relative to the full panoply of PKS options would indeed be a formidable task. And let us recall, again for reasons both Bruce and Jeff have explained, that unlike soft-proofing, it is not really straightforward to view the results of sharpening on a display - it is necessary to make a print and examine it. A comparative test covering both the test condition and the counterfactual therefore requires twice the number of prints. At the extreme, that would involve the production of 4,736 prints.

Faced with a total population that size, of course one would revert to a representative sampling technique that covers the territory while substantially reducing the workload. Therefore, in the same spreadsheet, and with reference to PKS, I scaled back to what I thought would be the minimum replications within each condition-set, and ended up nonetheless with a requirement to test 364 unique conditions, requiring 728 prints.

If anyone reading this has the time on their hands to undertake this extent of testing (no, I don't), it would inform the community about where Dan's technique stands relative to simply using PKS. Of course Dan's Action is downloadable for free (I have done so), and PKS costs money - but as we see, there is a real reason for that. The effort of developing and testing a comprehensive technical solution to a very complex set of issues is not handed out for free.

I should close with an observation that at the recent Canon digital imaging event here in Toronto, Sebastian Stefano of Adobe demonstrated the creative use of the new Smart Sharpen features of Lightroom, which allowed him to control the extent of sharpening between the light and dark contours, and lower and higher frequency image data quite nicely. It isn't PKS, but it is clearly progress relative to USM, such that Dan's new approach deserves to be compared with that as well.

Cheers,

Mark
Logged

Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....." http://www.luminous-landscape.com/reviews/film/scanning_workflows_with_silverfast_8.shtml
bjanes
Sr. Member
****
Offline

Posts: 2756

« Reply #4 on: September 27, 2007, 04:14:15 PM »

Quote
Doesn't sound very impressive. If you're going to do tonal masking, you want to mask off the shadows to avoid/reduce halos and noise amplification (dark tones have the most noise), and mask off the highlights to avoid clipping. Focusing sharpening on darker areas makes no sense at all.

Jonathan,

I surmise that you are feeling better now and have regained your usual spunk?    

It sounds like you are critiquing the method without having referred to the link. If this is not the case, please accept my apologies, but if so, you should review the referenced link. Darker is relative and does not necessarily refer to shadows. Dan is controversial, but his retouching credentials probably compare favorably to yours. Iliah Borg is very knowledgeable and has written a well regarded raw converter for Nikon cameras.

Bill
Logged
bjanes
Sr. Member
****
Offline

Posts: 2756

« Reply #5 on: September 27, 2007, 04:35:38 PM »

Quote
What makes it novel?

It's novel to me, but perhaps not to you. If it is old hat to you, please supply references showing prior use of Dan's methods so that we can understand the methodology better.

Quote
Is it capture or output sharpening? If the latter, how does one know the parameters for each device?
It would appear to me to be closer to capture sharpening, and perhaps the new method could be incorporated into a multi-pass workflow such as the one Bruce pioneered. Not everyone uses a multi-pass technique, even though I think that Bruce has made a good case for it in his sharpening book and other writings. Alternatively, one could adjust the unsharp mask parameters in Dan's action. Have you looked at the link, or is your response a reflexive "not invented here" one?

Quote
What about converting to CMYK (and to what space) to generate a black channel? Seems like a good way to toss away a lot of useful data and color gamut (not that Dan believes either is an issue).
Sounds like that needs to be defined (among other things). So again, it's novel why?

It appears that you are spouting off without having referred to the link or studied the method. The CMYK conversion is used only to make a mask; the CMY information is discarded.
Logged
digitaldog
Sr. Member
****
Offline

Posts: 8628

« Reply #6 on: September 27, 2007, 05:20:09 PM »

Quote
It's novel to me, but perhaps not to you. If it is old hat to you, please supply references showing prior use of Dan's methods so that we can understand the methodology better.

Oh I see. Since it's new to you, it's novel. I was just trying to figure out what made it novel; I think I get it.

As for a methodology, I've yet to hear that Dan has one. Hence the questions.
Logged

Andrew Rodney
Author “Color Management for Photographers”
http://digitaldog.net/
bjanes
Sr. Member
****
Offline

Posts: 2756

« Reply #7 on: September 27, 2007, 06:12:48 PM »

Quote
Oh I see. Since it's new to you, it's novel. I was just trying to figure out what made it novel; I think I get it.

As for a methodology, I've yet to hear that Dan has one. Hence the questions.

Thanks for the gracious and informative reply. Dan has no methodology, but he made it into the Photoshop Hall of Fame far before you. I see he was inducted along with Thomas Knoll at the first ceremony in 2001. Not bad for a dunce. I am still waiting for your references if you have any.
Logged
BernardLanguillier
Sr. Member
****
Offline

Posts: 7776

« Reply #8 on: September 27, 2007, 06:52:03 PM »

I had seen that thread by chance on DP yesterday and browsed through it quickly.

If I am not mistaken, they were discussing the value of taking color saturation/purity in the original image into account as one of the inputs for determining the amount of sharpening required.

I am not sure whether this is implemented in the action that Dan made available, but it appeared to be a question worth considering.

I don't think that this is incompatible with the work done by Bruce and Jeff in terms of process, but - if valid - color saturation might be one parameter to take into account when creating the masks used to control the amount of sharpening to be applied. My understanding is that tone is the main input today.

There would of course be a need to determine to what extent this should be taken into account in the three steps of Bruce's process.
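In numpy terms, a mask that folds saturation in alongside tone might look like the sketch below. The weighting is a guess for illustration only, not Dan's action or PK Sharpener's actual recipe:

```python
import numpy as np

def sharpen_mask(rgb):
    """Illustrative mask in the spirit of the discussion: favor darker,
    less saturated pixels. `rgb` is float in [0, 1], shape (H, W, 3).
    The product weighting is an assumption, not any published recipe."""
    mx = rgb.max(axis=-1)
    mn = rgb.min(axis=-1)
    darkness = 1.0 - mx        # high where the pixel is dark
    saturation = mx - mn       # crude chroma estimate
    # Reveal sharpening where the image is dark AND not very colorful.
    return np.clip(darkness * (1.0 - saturation), 0.0, 1.0)
```

A dark neutral pixel gets a high mask value, while a fully saturated primary is protected, which is the behavior attributed to the GCR-black approach.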

Just my 2 cents,

Regards,
Bernard
Logged

A few images online here!
Marco Ugolini
Jr. Member
**
Offline

Posts: 53

« Reply #9 on: September 27, 2007, 07:00:13 PM »

Quote
Thanks for the gracious and informative reply. Dan has no methodology, but he made it into the Photoshop Hall of Fame far before you. I see he was inducted along with Thomas Knoll at the first ceremony in 2001. Not bad for a dunce. I am still waiting for your references if you have any.
Hey guys...

This is not that other forum, ya know. Here people play fair and try to be nice.

So what if Dan was inducted into the Photoshop Hall of Fame before Andrew? That is silly and a deliberately mean thing to say.

And indeed, there *is* a difference between "new to me" and "novel": "novel" is what appears new or different in the field. The simple fact that something is new to you doesn't automatically mean that it's objectively new, hence "novel". You will agree to that -- yes?

Marco
« Last Edit: September 27, 2007, 07:01:02 PM by Marco Ugolini » Logged

Marco Ugolini
digitaldog
Sr. Member
****
Offline

Posts: 8628

« Reply #10 on: September 27, 2007, 07:04:49 PM »

Quote
Thanks for the gracious and informative reply. Dan has no methodology, but he made it into the Photoshop Hall of Fame far before you. I see he was inducted along with Thomas Knoll at the first ceremony in 2001. Not bad for a dunce. I am still waiting for your references if you have any.

Ah, then the date of NAPP introduction is key to a methodology. Just wanted to be sure I understand the mindset here. Let's forget his take on high bit editing (technically wrong), wide gamut spaces and the evils of Camera Raw and Lightroom. That he was inducted at a fixed time makes this all moot, despite the fact that some of us (unlike Dan) have actually produced and provided files and instructions which prove, for anyone who wishes to do the testing, that his positions on the above ideas are flat earth, religious thinking.

Let's look at the technique that's so novel to you, and at Dan's original post about it. This is a direct quote from Dan's list:

Quote
My suggestion is a mask that caters to both--that allows more sharpening where the image is darker but also restricts it where the image is colorful. While it is possible to make a convoluted Action that generates such a mask by a series of blends of the RGB channels, there's a faster way--make a false separation, and use an inverted Heavy GCR black as the mask for the RGB sharpening.

OK, a mask to apply sharpening. Nothing really new here at all; it's been described and done for years: use the image to build a mask to protect areas. What might be novel is the idea of using a CMYK black channel to do this. Let's look at the technique and see whether anything in it, or in how it's explained, is novel or defines a methodology:

Quote
Here's the procedure, which of course should be reduced to an Action to save
having to do it over and over.

1) Copy the RGB image.

2) With the copy, Convert to Profile>Custom CMYK.

3) Fill in: Heavy GCR, 70% black ink limit, 340% total ink. Dot gain is basically not relevant as you can always lighten or darken the mask after applying it, but I just use the default 20%. (AR: basically not relevant? It is or it isn't).

4) Click OK twice to generate the false separation. (AR: There's no such thing, Dan likes to make up terms).

5) Command-4 to expose the black channel, and Mode: Grayscale to discard the CMY channels.

6) Invert the channel with Command-I, yielding a negative image.

7) Auto Levels.

8) Gaussian Blur, radius 2.0 pixels, to eliminate noise and make for a softer sharpen.

9) Return to the RGB image and create a duplicate layer. Sharpen conventionally with a very heavy hand--500%, 1.2 pixel Radius, 3 Threshold might be a good starting point for most images. (AR: Dan likes to use such terms like "most images" or "Usually this works" etc. Good starting point? That leads me to believe YMMV).

10) Add a layer mask. To it, load the artificial black channel that was made in steps 1-8. This should confine the sharpening to the desired areas. (AR: Dan again protecting himself in case this doesn't work. It should confine the sharpening to the desired areas. Heck, the entire methodology rests in 'it should').

11) If you feel the image is not sharp enough, apply a curve to the mask to lighten its midpoint. If you find the image to be too sharp, darken the mask in the same way.  (AR: Dan once again protecting his novel idea. If you feel.... Feel based on what? You output the file and its not sharp enough? It doesn't appear visually sharp enough? This is a novel new idea about sharpening?)
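Stripped of the Photoshop UI, the quoted steps read roughly as follows in numpy terms. This is a sketch only: the simple K = 1 - max(R, G, B) below is a crude stand-in for a real Heavy GCR separation (which depends on the CMYK profile), and the Threshold setting is omitted:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def margulis_style_sharpen(rgb, amount=5.0, radius=1.2):
    """Sketch of the masked-sharpen procedure quoted above.
    rgb: float array in [0, 1], shape (H, W, 3)."""
    # Steps 2-5: fake a black channel. 1 - max(R, G, B) is a crude
    # approximation that, like a heavy GCR black, is large in dark,
    # relatively neutral areas.
    k = 1.0 - rgb.max(axis=-1)
    # Step 6: in Photoshop the black channel displays high ink as dark
    # pixels, so inverting turns high-ink areas white; as a layer mask
    # white reveals the sharpening. Here k is already oriented that way.
    mask = k
    # Step 7: Auto Levels ~ stretch the mask to the full range.
    span = mask.max() - mask.min()
    if span > 0:
        mask = (mask - mask.min()) / span
    # Step 8: blur the mask for softer transitions.
    mask = gaussian_filter(mask, sigma=2.0)
    # Step 9: heavy unsharp mask on a copy of the image.
    blurred = gaussian_filter(rgb, sigma=(radius, radius, 0))
    sharpened = np.clip(rgb + amount * (rgb - blurred), 0.0, 1.0)
    # Step 10: blend the sharpened copy through the mask.
    return rgb * (1.0 - mask[..., None]) + sharpened * mask[..., None]
```

Step 11 corresponds to remapping `mask` with a curve before the final blend.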

We have a technique which has no methodology unlike what Bruce described in his original article on Creative Pro and greatly expanded in his ground breaking book on sharpening. Yes, Dan's built a mask using a black channel. He's not provided any kind of methodology about when or why to use this technique, nor has he discussed when or why to alter any of the setting based on any size, capture device, previous possible sharpening to the document or the output size and output device.

Here's my new novel technique. Open an RGB document. Select USM. For Amount, pick 157; for Radius, pick 1.7; for Threshold, pick 5.

Now what have I done? The image IS sharper appearing. This is akin to someone saying "I have an award winning recipe for brownies. It's easy. Mix the ingredients and bake for one hour at 350 degrees." Well, wasn't that useful. It's possible the USM values I just made up above have never been exactly specified by anyone anywhere, so it's new and novel. So what?

Now lets see what someone like Bruce did? At the very least, there's this article, the genesis of the sharpening workflow:

http://www.creativepro.com/story/feature/20357.html

Back to Dan. He likes to be controversial to draw attention to himself. He likes to make complex multiple stage routines that in effect polish turds. When you see the before and after, the turd looks better. Dan doesn't, as others here go out of their way to do, teach people NOT to create turds in the first place. But his livelihood is based on taking just awful originals and fixing them in Photoshop. If we all produce pretty good data from the get-go, in a Raw converter which he dismisses as unfit for pro use, he's got nothing to write about. I find this mindset dangerous and polluting to those who don't know any better. The idea that one should set a Raw converter's settings to null, then fix the image in Photoshop, is bad enough. But to then say the converter is unfit for professional use, without a drop of proof, is worse than unprofessional.

His technique above has no methodology other than that the image appears sharper. As Jonathan points out, the ideas seem odd based on the masking methodology, if we can be kind enough to use that term here. The technique is abundantly vague. He even says he wants to sharpen the dark areas, the areas we all know are full of noise!

I'm not trying to pick a fight with you, bjanes, just trying to understand what Dan brings to the party other than a pretty high B.S. factor regarding digital imaging he's expressed over the years. Does his induction into the NAPP Hall of Fame in any way validate or invalidate this? Does the fact that I and others have provided files that PROVE him wrong about wide gamut editing spaces and high bit editing change anything? He refuses to explain his religious beliefs using science, and he tells his list he has produced the exact math used in Camera Raw showing it to be both sloppy and destructive (but refuses to share this math), thus making the product unfit for professional use (his words). And we should take him at all seriously when it comes to sharpening?

Now we have a NEW sharpening technique. OK Dan, explain, as Bruce did, how it works and when to use it. Nothing from Dan. But we do see others in the imaging community posting about it, as if there's something novel, let alone useful, here. I await the methodology from Dan or anyone else who can provide any reason why we should pay attention to this guy.

In the past, Dan DID have some useful ideas about image processing. But his message has been soiled by flat earth theories that simply don't pan out. He's targeting photographers now, since the prepress folks have either died off or stopped listening to him. When he starts polluting the minds of photographers, I take notice and start demanding he be accountable for what he says. It's one thing to go around panning Adobe as an axis of evil or even to say their products are unfit for pro use. But trying to persuade the photo/imaging community of this nonsense without providing a shred of evidence, and worse, calling those who do provide evidence shills, is going too far.
« Last Edit: September 27, 2007, 07:08:26 PM by digitaldog » Logged

Andrew Rodney
Author “Color Management for Photographers”
http://digitaldog.net/
Mark D Segal
Contributor
Sr. Member
*
Offline

Posts: 6821

« Reply #11 on: September 27, 2007, 08:14:05 PM »

Quote
Thanks for the gracious and informative reply. Dan has no methodology, but he made it into the Photoshop Hall of Fame far before you. I see he was inducted along with Thomas Knoll at the first ceremony in 2001. Not bad for a dunce. I am still waiting for your references if you have any.

Bill, Dan is no dunce, but the Hall of Fame has nothing to do with the subject under discussion. What's needed is a proof of concept, and that isn't forthcoming, nor will it be easy because the bar is high. Even if the number of  image frequency conditions were reduced from 4 to 2 in the sampling approach I discussed before, the number of prints to be made is cut in half, but still several hundred. Furthermore, I forgot to mention that comparative workflow efficiency would also need to be evaluated, say between Dan's proposal and PK Sharpener Pro. I don't think it makes much sense to argue about whether or not the approach is superior to anything else unless the people making those arguments do the hard work. I agree with Andrew that just to say it sharpens some images - even nicely - isn't the whole story by a long-shot.
Logged

Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....." http://www.luminous-landscape.com/reviews/film/scanning_workflows_with_silverfast_8.shtml
digitaldog
Sr. Member
****
Offline

Posts: 8628

« Reply #12 on: September 27, 2007, 08:47:04 PM »

Quote
What's needed is a proof of concept, and that isn't forthcoming, nor will it be easy because the bar is high.

Exactly. If you look at Dan's MO, proof of concept is the weak link (weak? it's non-existent).

He's always right because he says so. Introduce enough fudge factor, which lately he's doing more and more, and you can't pin him down on anything. Then the discussion goes nowhere and we're left with nothing concrete. But when you're trying to create a buzz about yourself, this works surprisingly well. It's a shame so many are attracted to this flame. When you examine it up close, you find there's little substance.

His list is a vehicle for self promotion. That's fine, I guess, until what he says gets posted to the outside world and people get the idea he's got something useful to say. Lately, that hasn't been true, in fact the opposite, and this can be proven. Case in point is Mark's article on curves here on LL. Mark did good science, and anyone who wishes can take the time not only to read the piece but to do the testing on their end to convince themselves of the validity of each side of the argument. Dan doesn't provide any files or proof of concept. You either accept it outright or you're wrong in his eyes. This isn't good science; in fact it's not even bad science. It's religion. It's pointless to argue religion. It's not pointless to use scientific reasoning in attempting to come to some conclusion.

None of us are born with an intimate knowledge of Photoshop, imaging or even photography. We learn by reading, testing and sharing ideas. We have such a community. I suspect we are all wrong at times. Just recently, there was a superb and well behaved discussion about metamerism on the ColorSync list. No egos, no doggie posturing. Many of us, myself included, learned that we had been using the term incorrectly. I was actually very happy to fully understand the finer points of something I thought I fully understood. I had been talking and writing about this phenomenon incorrectly for years. Now that I understand the subtleties of this technical issue, even though I was wrong, I'm pleased to have increased my knowledge of the subject. That's how we grow. That's how we learn. Then we hope to share such ideas with others. The problem with Dan is, he's never wrong, even when you have empirical and well defined evidence that he is. He can't grow or learn; he's always right. If someone does indeed show the error of his ways, he either uses censorship or diversion to ignore the finer points of the discussion, or he tells his list that the topic is now closed and there will be no further discussion of it. That makes many of us even more determined to prove him wrong outside the list he controls. This emperor has no clothes. Shame. He's a nice guy in person, and has (had) a lot to contribute. But based on the last few years of his behavior, he's simply not worth listening to. Worst of all, he's a hypocrite. This is easy to prove; all you need to do is examine a few posts he's made to his list. Funny, he never ventures outside the list, since he has no control over the process anywhere else.

If he wants to post nonsense to his private list and pollute the minds of his minions, fine. But when the topics find their way outside that list, he's nothing more than a bad influence.

Getting back to sharpening. In Dan's mind, we have to discuss and prove his technique is screwy, but in science, that's not how things work. The person proposing a theory is supposed to prove his points using sound scientific processes. That's never how Dan has worked, be it his ideas on high bit processing, wide gamut spaces or Raw processing. He's right; you have to prove him wrong (he never has to prove his ideas sound). This isn't how Bruce operated. Now it's up to Dan to prove to you and me that he has a methodology in this new, radical sharpening technique (which I'll add isn't at all radical). You're not going to see it. Yes, the image appears sharper than before his process, so it's viable? Well, I don't buy that. Should you?
Logged

Andrew Rodney
Author “Color Management for Photographers”
http://digitaldog.net/
Mark D Segal
Contributor
Sr. Member
*
Offline

Posts: 6821

« Reply #13 on: September 27, 2007, 10:42:55 PM »

Quote
Getting back to sharpening. ............
The person proposing a theory is supposed to prove his points using sound scientific processes. ........................ Now its up to Dan to prove to you and I that he has a methodology in this new, radical sharpening technique ...................... Well I don't buy that. Should you?

Andrew, "focusing on sharpening" (no pun intended), let us look at where we stand on the specific issue: No theory was proposed. A procedure was proposed. Operating the procedure presumably would be the methodology. In this case, it would appear that simple, but of course not ideal for satisfactory or easy evaluation. There should be a theory, in the sense of at least describing the principles underlying the procedure in some detail. Then there should be a fairly extensive description of how the procedure is implemented to optimize a wide range of images. That would establish the methodology. Had both of these things been done - and well tested - most of this discussion would be moot. For example, it would have covered off Jonathan's point about the role of black.

Bernard raised a question about the relationship between saturation and sharpening. Bernard, as this kind of sharpening would appear to be about acutance, one needs edges in order to improve their contrast. To my understanding, excessive saturation of a rendered image can push colours out of gamut and obliterate those edges, in essence undercutting the "raw material" on which the acutance would be improved. In other words, one needs "sharpenable" material to begin with. I believe this is one reason why it's so important to optimize white balance, luminosity and saturation in the raw converter, thereby rendering the highest quality image the raw converter can deliver; then there would normally be enough material for sharpening with tools and techniques more nuanced than those in the raw converter (though, that said, they are starting to catch up).
Logged

Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....." http://www.luminous-landscape.com/reviews/film/scanning_workflows_with_silverfast_8.shtml
bjanes
Sr. Member
****
Offline

Posts: 2756

« Reply #14 on: September 28, 2007, 07:42:07 AM »

Quote
I should close with an observation that at the recent Canon digital imaging event here in Toronto, Sebastian Stefano of Adobe demonstrated the creative use of the new Smart Sharpen features of Lightroom, which allowed him to control the extent of sharpening between the light and dark contours, and lower and higher frequency image data quite nicely. It isn't PKS, but it is clearly progress relative to USM, such that Dan's new approach deserves to be compared with that as well.

Mark,

Thanks for the extremely well reasoned reply. First of all, I would like to stress that I am not attempting to denigrate Bruce's work. I do have his sharpening book and PK Sharpener. However, there are some images that do not respond well to this workflow, and one must look to other methods. High ISO shots under incandescent light often have high noise, especially in the blue channel, and PK can wreck these. With such images, edge masking does not work well: the edges are not well delineated by the edge mask, and the mask outlines noise as well as the edges. With such images I use Noise Ninja or, more recently, Noiseware. Use of PK brings back the noise. In his sharpening book, Bruce did warn that some noise reduction methods produce images which cannot be sharpened. In these cases, I simply use the sharpening built into the NR application.
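The edge-masking failure mode described here is easy to see in a sketch: a gradient-based edge mask responds to per-pixel noise exactly as it does to real edges, so on a noisy file the mask outlines the noise too. Below is a minimal gradient-magnitude edge mask, Fraser-style in spirit only, not PK Sharpener's actual code:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def edge_mask(luma, blur=1.0):
    """Gradient-magnitude edge mask, normalized to [0, 1].
    `luma` is a float 2-D array. On a noisy image the per-pixel
    gradients fire everywhere, so the mask traces noise along
    with the real edges -- the failure mode discussed above."""
    gx = sobel(luma, axis=1)           # horizontal gradient
    gy = sobel(luma, axis=0)           # vertical gradient
    mag = np.hypot(gx, gy)             # edge strength
    mag = gaussian_filter(mag, sigma=blur)  # soften the mask
    peak = mag.max()
    return mag / peak if peak > 0 else mag
```

Running noise reduction first flattens the spurious gradients, which is why sharpening through such a mask after NR tends to bring the noise back.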

As you point out, Bruce did considerable research in the development of PK and Jeff talks about the "magic numbers". Of the hundreds of permutations you list, what proportion involve capture sharpening and what proportion are applied to output sharpening? It would seem to me that the majority of these permutations would involve output sharpening.

Sharpening for the capture device has well defined parameters determined by the resolution of the sensor and the strength of the anti-aliasing filter. Sharpening for image content involves selecting the frequency to be emphasized: high, medium or low. I bring up this point, since one could perform capture sharpening by some other method, perhaps smart sharpen or some other deconvolution method, and then use PK for output sharpening. Perhaps even Dan's method could be used for capture sharpening and PK for output.

Bruce's capture sharpening uses the unsharp mask as its main tool, modifying its effects with the Blend If sliders and the edge mask. The unsharp mask is pretty old technology, dating back to the film era and analogue technology. It is entirely possible that smart sharpen or some other algorithm could be substituted with better results, while maintaining the essential features of Bruce's workflow. In the sharpening book, Bruce was not a fan of smart sharpen, but as the algorithm undergoes improvements and more experience is gained with it, his opinion could have changed.
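As a minimal illustration of what the unsharp mask actually does, here is a toy NumPy/SciPy sketch (not Photoshop's implementation; the `radius` and `amount` parameters merely stand in for the USM sliders, and Threshold is omitted):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, radius=1.0, amount=1.0):
    """Sharpen by adding back 'amount' times the high-pass residual
    (original minus Gaussian blur), then clipping to the valid range."""
    blurred = gaussian_filter(img, sigma=radius)
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)

# A 1-D step edge gains the characteristic overshoot on both sides.
step = np.array([0.2] * 10 + [0.8] * 10)
sharpened = unsharp_mask(step, radius=2.0, amount=1.5)
```

Run on a step edge, the output dips below 0.2 on the dark side and rises above 0.8 on the light side of the transition - the familiar halo that acutance sharpening trades on.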

I am glad you brought up smart sharpen, since these deconvolution methods actually restore image detail, whereas the unsharp mask merely creates the impression of sharpness. The trouble with these methods is the selection of what point spread function (PSF) to use in the deconvolution. For example, if you blur an image with the Gaussian blur filter, and then use smart sharpen with the Gaussian blur PSF, it should do a better job at restoring sharpness than the unsharp mask. I'm not sure what the assumptions underlying the lens blur PSF of smart sharpen are. Lenses can have multiple aberrations, but if the main aberration is known, the restoration will be more successful. For example, the Hubble Space Telescope suffered from spherical aberration prior to its repair, and NASA was able to use the Lucy-Richardson deconvolution algorithm (http://en.wikipedia.org/wiki/Richardson-Lucy_deconvolution) to good effect. If the PSF is not known, expectation-maximization algorithms can be used. As the reference shows, the theory of these algorithms is beyond the comprehension of most of us, but one does not need to understand the theory to make use of the algorithm.

In a post on his web site, Roger Clark showed how he made use of the Adaptive Richardson-Lucy Iteration to approximately double the print size of an image while maintaining image detail. Newer algorithms do not negate the value of Bruce's work, but can extend it. As Isaac Newton, not a particularly modest man, said, "If I have seen further than others, it is by standing upon the shoulders of giants."
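For the curious, the basic Richardson-Lucy iteration is short enough to sketch in 1-D with a known PSF. This is a toy version for illustration only; real tools such as skimage.restoration.richardson_lucy work in 2-D and cope with noise and edge effects:

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=50):
    """Basic 1-D Richardson-Lucy deconvolution with a known PSF."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1]
    # Start from a flat, positive estimate.
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iterations):
        reblurred = np.convolve(estimate, psf, mode='same')
        ratio = observed / np.maximum(reblurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_mirror, mode='same')
    return estimate

# Blur two point sources with a Gaussian PSF, then recover them.
x = np.zeros(64)
x[20], x[40] = 1.0, 0.6
t = np.arange(-6, 7)
psf = np.exp(-t**2 / 4.0)
blurred = np.convolve(x, psf / psf.sum(), mode='same')
restored = richardson_lucy(blurred, psf, iterations=50)
```

After a few dozen iterations the smeared peaks pull back toward the original point sources - genuinely restoring detail, which USM cannot do.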

Bill
Logged
Mark D Segal
Contributor
Sr. Member
*
Offline Offline

Posts: 6821


WWW
« Reply #15 on: September 28, 2007, 09:14:13 AM »
ReplyReply

Quote
...............I do have his sharpening book and PK sharpener. However, there are some images that do not respond well to this workflow and one must look to other methods. High ISO shots under incandescent light often have high noise, especially in the blue channel and PK can wreck these. ...............I use Noise Ninja or, more recently, Noiseware. Use of PK brings back the noise.

As you point out, Bruce did considerable research in the development of PK and Jeff talks about the "magic numbers". Of the hundreds of permutations you list, what proportion involve capture sharpening and what proportion are applied to output sharpening? It would seem to me that the majority of these permutations would involve output sharpening.

Sharpening for the capture device has well defined parameters determined by the resolution of the sensor and the strength of the anti-aliasing filter. Sharpening for image content involves selecting the frequency to be emphasized: high, medium or low. I bring up this point, since one could perform capture sharpening by some other method, perhaps smart sharpen or some other deconvolution method, and then use PK for output sharpening. Perhaps even Dan's method could be used for capture sharpening and PK for output.

It is entirely possible that the smart sharpen or some other algorithm could be substituted with better results, while maintaining the essential features of Bruce's workflow. In the sharpening book, Bruce was not a fan of smart sharpen, but as the algorithm undergoes improvements and more experience is gained with it, his opinion could have changed.

I am glad you brought up smart sharpen, since these deconvolution methods actually restore image detail, whereas the unsharp mask merely creates the impression of sharpness.
Bill

Bill,

I agree - noisy images need to be treated with noise reduction before sharpening. I also find that the scope for capture sharpening after noise reduction depends to some extent on the quality and strength of the noise reduction. Noise reduction is always a trade-off between fudging noise and fudging acutance, so the better one can aim the noise reduction program at separating the unwanted detail from the wanted detail, the smaller the sharpening problem. PKS can over-sharpen unless one is careful - but that's the point: being careful, I find one can obtain a satisfactory balance. I normally put the noise reduction on a separate image layer; then, by playing with the opacities of the noise-reduction layer and the sharpening layers which PK provides - to the extent one is accustomed to judging impacts on a display - the outcomes are OK. PKS also has some tools for dealing with noise, but I have not used them much.

OK, here's the data on the options:

In the version of PKS I'm using (1.2.4, I believe), there are:

16 image types
× 4 frequency levels
= 64 input image conditions.

For image output conditions, there are 4 image types, each of which has several output-type categories and resolution settings, such that the number of unique conditions for each is as follows:

Half-tone 10
Error Diffusion Dither (Inkjet) 10
Contone 6
Web 10

Hence, when you multiply the 64 image input conditions by the 36 output conditions (which is legitimate because each is independent of the other), you get the total number of custom conditions PK treats through its UI: 64 × 36 = 2,304. But that's nowhere near the end of it, because it's all done on adjustment layers - usually three of them: Light Contour, Dark Contour and the Composite - so if you don't like the default effect from the optimal setting in the UI, you can adjust the opacities of these layers, or localize effects by painting in the image with the appropriate layer mask active.

Then added to that, which I did not mention in my previous posts, there are of course the Stage Two options, which add another 52 levers of adjustment that can be inserted between the Input and Output Sharpeners. These 52 options consist of:

Smoothing 18
Sharpening Brushes 15 (a lot of localized control)
Sharpening Effects 19.

Anyone who can't find a way to optimally sharpen an image with all this... there's either something truly fatal about the image, or... OK, point made.
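For anyone checking the arithmetic, the counts above multiply out as follows (the figures come from this post, not from PKS documentation):

```python
# PKS 1.2.4 option counts as listed above.
input_conditions = 16 * 4             # image types x frequency levels
output_conditions = 10 + 10 + 6 + 10  # half-tone, inkjet dither, contone, web
combinations = input_conditions * output_conditions
stage_two = 18 + 15 + 19              # smoothing, brushes, effects
print(input_conditions, output_conditions, combinations, stage_two)
# 64 36 2304 52
```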

Now, interesting you mention deconvolution and restoration of image detail. Here's where the definition of terms really matters, and you've got it right. When I responded to Andrew's post by "focusing on sharpening" it was a bit tongue-in-cheek, because over at the Applied ColorTheory List where Dan first introduced the technique he said "we all know" that USM improves focus. Well, of course "we all know" it does nothing of the sort. Planes of an image are either in focus or out of focus (this is lenses - circles of confusion as a function of aperture and distance) and no amount of acutance sharpening will fundamentally change that. (I got taken to the cleaners by several of their more defensive members for making this clarification, but it's non-trivial because working on focus and working on acutance do take us down different paths.)

If we're talking about restoring focus, we are indeed into deconvolution methods, an example of which is "Focus Magic", a tool which Ctein recommends in his book "Digital Restoration from Start to Finish" (by the way, a book I think is really first-class for both content and presentation). I downloaded Focus Magic and gave it a whirl. Indeed, to a considerable extent, it does what it advertises - quite an amazing piece of software; that said, it can be fairly harsh unless used VERY carefully, and it is nowhere near as "fine-tunable" as PKS. But something like this has its place if you really need to reconstruct detail - apparently this technology stems from forensic imaging.

Now, whether deconvolution can be successfully integrated with an acutance workflow of the PKS variety is conceptually an intriguing question. I see scope for heaps of research on this, and yet newer and better tools to come...

Cheers,

Mark
« Last Edit: September 28, 2007, 09:17:16 AM by MarkDS » Logged

digitaldog
Sr. Member
****
Offline Offline

Posts: 8628



WWW
« Reply #16 on: September 28, 2007, 10:31:40 AM »
ReplyReply

Quote
When I responded to Andrew's post by "focusing on sharpening" it was a bit tongue-in-cheek, because over at the Applied ColorTheory List where Dan first introduced the technique he said "we all know" that USM improves focus. Well, of course "we all know" it does nothing of the sort. Planes of an image are either in-focus or out of focus (this is lenses - circles of confusion as a function of aperture and distance) and no amount of acutance sharpening will fundamentally change that. (I got taken to the cleaners by several of their more defensive members for making this clarification, but it's non-trivial because working on focus and working on acutance does take us down different paths.)

It's really not trivial when you see such ideas expressed as fact by readers of his list. After Mark's post to correct Dan, there was yet another typical hissy fit on the list, with more censorship and doggie posturing by Dan and minions. Oh, that's also a typical MO there: "everyone knows" this or that. IOW, if you don't know, you should. If you disagree, you're wrong. If you try to clarify improper use of technical language, you're out of line. But using terms like False Profile or Master Curve etc. is fine, as long as you're Dan. Sorry if I keep bringing this up; it's frustrating to listen to this nonsense month after month, year after year, and then hear outsiders incorrectly use the terms invented by Dan as if what he's saying has an ounce of accuracy or credibility. He might have the most awesome sharpening routine known to man, but it's getting progressively harder to take him seriously. Hence the questions about methodology once again. Other than having an image that appears sharper after using this technique, where's the beef?
Logged

Andrew Rodney
Author “Color Management for Photographers”
http://digitaldog.net/
Schewe
Sr. Member
****
Offline Offline

Posts: 5425


WWW
« Reply #17 on: September 28, 2007, 04:53:06 PM »
ReplyReply

Quote
Perhaps even Dan's method could be used for capture sharpening and PK for output.

Not even close...

I've tested Dan's "recipe" (for that is _ALL_ it is) and find it introduces undesirable results...

First off, he doesn't mention the obvious (that I can tell) that the sharpening layer wants to be set to Luminosity only in the blend mode. This is so obvious that it makes me wonder why he left it off.

Second, the sharpening is directed to the wrong area/areas of the image (particularly for digital capture). Part of PhotoKit Capture Sharpener's emphasis is to REDUCE sharpening in shadow areas where there is already more noise and in general less edges (which generally occur between light/dark contours). Dan's recipe concentrates sharpening in the WRONG areas and in the wrong ways.
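To make the contrast concrete, here is a rough NumPy sketch of the two opposing masking strategies (illustrative only - neither Dan's actual action nor PKS's masks, whose recipes are more involved):

```python
import numpy as np

def darkness_mask(luma):
    """Weight sharpening toward darker pixels (the K-channel-style idea)."""
    return 1.0 - luma

def shadow_protect_mask(luma, toe=0.25):
    """Fade sharpening to zero below a shadow threshold (the PKS-style idea)."""
    return np.clip(luma / toe, 0.0, 1.0)

luma = np.linspace(0.0, 1.0, 5)  # black ... white
# The two masks pull in opposite directions in the shadows:
# darkness_mask(luma)       -> [1.0, 0.75, 0.5, 0.25, 0.0]
# shadow_protect_mask(luma) -> [0.0, 1.0,  1.0, 1.0,  1.0]
```

A darkness-weighted mask puts the strongest sharpening exactly where a shadow-protecting mask deliberately fades it out - which is the objection being made here.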

There is simply no way that Dan's recipe is useful for any sort of Capture Sharpening, period.

His recipe is simply another "sharpening for effect" attempt that many people try to pawn off as useful. And it may well be "useful for effect" (Creative Sharpening), but so far, on the images I've tested, it really doesn't offer anything useful for effect (unless you like over-sharpened shadows).

As far as Output Sharpening, forget about it...

This is a subject I know a lot about (having learned from Bruce as well as having taught him a thing or two) and I'm here to tell you that Dan's recipe is much ado about nothing...the K generation and transformation into a mask is mildly interesting, but useless for a sharpening workflow. The sharpening is misdirected to the wrong portions of an image (unless you want your shadow noise to blossom). It's a curiosity at best...and a bad thing for most images if people care what their images look like.

But feel free to ruin your images if you like...I won't be doing this on my images.

And Bill, from one Photoshop Hall of Famer to somebody who will prolly never get there, if you want to trade creds, you might want to have some creds to trade with. Dan demos well (which is one reason he's in the Hall of Fame) and he HAD made useful contributions to the industry in the past. Now? Not so much...I'll let Andrew's characterizations stand (since they mimic pretty much what I think) and not pile on the Dan bashing, other than to say that this is as wacky and particularly unuseful a sharpening recipe as I've seen in a long time.

Other than that, have fun wasting your time (and potentially ruining your images in the bargain).
« Last Edit: September 28, 2007, 04:54:12 PM by Schewe » Logged
Mark D Segal
Contributor
Sr. Member
*
Offline Offline

Posts: 6821


WWW
« Reply #18 on: September 28, 2007, 05:32:48 PM »
ReplyReply

Adding to what Jeff just said, if you check out that same sharpening thread on DPReview, as I did this afternoon, you will see one example of a comparative test performed by one contributor, pretty much supporting Jeff's observations of the procedure's impact, at least on that image. Simple USM worked better - superior acutance without making the image look as artificial as Dan's method rendered it. Of course, we are talking JPEGs viewed on a display - but it is nonetheless revealing.

Reverting to a point further back which Andrew made about the absence of a theory supporting this approach, it may even be grounded in some misperception about basic sharpening principles. I refer to Dan's statement:  "I've realized a corollary--in addition to lightness, a strong color is an argument against sharpening. We don't like to oversharpen skies, the petals of flowers, and human skin, for example."

It's really hard to understand why a strong colour would be an argument against sharpening. As a generalization it doesn't work. What if the strong colour also contained important image detail that we wanted to be sharp - the kind of detail Dan argues elsewhere we need to protect, and the reason he proclaims Camera Raw's curves are not suitable for professional use? And take his examples of bright colours not to sharpen, sky and skin: the reason we don't sharpen sky and skin is that they are low-frequency areas that aesthetically people don't like to see sharpened. Do you really want a granular sky and every pimple? It has nothing to do with bright colours.
Logged

bjanes
Sr. Member
****
Offline Offline

Posts: 2756



« Reply #19 on: September 28, 2007, 08:27:31 PM »
ReplyReply

Quote
I've tested Dan's "recipe" (for that is _ALL_ it is) and find it introduces undesirable results...

There is simply no way that Dan's recipe is useful for any sort of Capture Sharpening, period.

This is a subject I know a lot about (having learned from Bruce as well as having taught him a thing or two) and I'm here to tell you that Dan's recipe is much ado about nothing...the K generation and transformation into a mask is mildly interesting, but useless for a sharpening workflow. The sharpening is misdirected to the wrong portions of an image (unless you want your shadow noise to blossom). It's a curiosity at best...and a bad thing for most images if people care what their images look like.

Jeff,

Thanks for your opinion on Dan's proposed method. Since you are a recognized authority on the subject and have actually evaluated the method, I give your opinion a lot of weight.

Quote
Other than that, have fun wasting your time (and potentially ruining your images in the bargan).

I don't really have the time to spend on something that appears a bit oddball. Let Dan and his fans develop the method and we will see if they get anywhere. I thought it was worth bringing up for discussion. In the meantime, I will continue to use PK Sharpener. However, I am still interested in deconvolution methods. Is Pixel Genius doing anything in that area?

Bill
Logged