Pages: « 1 2 [3] 4 5 »
Author Topic: sharpening-Lightroom vs PhotoKit in Photoshop or both?  (Read 23974 times)
JimGoshorn
« Reply #40 on: February 15, 2010, 07:30:30 AM »

Jeff,

I have a question for you on where to capture sharpen.

A while back you had written an article (as I recall) giving a method to resize a file (up to 200%) where you suggested that you should capture sharpen after you had done the resizing, followed by PK Super Sharpen, PK Super Grain and PK Output Sharpen.

Is that workflow still suggested, or is it OK to capture sharpen in LR and then do the rest later on with PK Sharpener?

Thanks!

Jim
Schewe
« Reply #41 on: February 15, 2010, 11:55:24 AM »

Quote from: JimGoshorn
Is that workflow still suggested, or is it OK to capture sharpen in LR and then do the rest later on with PK Sharpener?

Depends...as time has moved forward and the file sizes my cameras produce get bigger (1Ds MIII, P-65+), I find myself actually fighting with too much resolution at times.

But if I needed something upsampled 2x, I would prolly just set the capture sharpening in ACR/LR and either upsample in ACR or export an upsampled image from LR. Then yes, I would do the intermediate stage work of Super Sharpener, Photo Noise and whatever else helped the look of the upsampled image in Photoshop before taking the image back to Lightroom for output sharpening and printing.

The reason I really gravitate towards printing from Lightroom instead of Photoshop is the superior usability of Lightroom's Print module.

Heck, I was printing out from Photoshop yesterday for a friend...2 prints in a row I screwed up the settings either in the Photoshop dialog or the driver...pissed me off so much I imported the guy's image into Lightroom just so I could print...

:~)
PeterAit
« Reply #42 on: February 15, 2010, 12:26:24 PM »

Quote from: Schewe
In the end, it's all about the print...

You can dither and argue about what an image is supposed to look like on a computer display but unless the display is the final output, it don't mean shit...

The real arbiter of what is good and bad is the general user (and the general public) and how they evaluate a print. The important technology is that which actually has a practical impact. Theoretical research is just that...theoretical. No reason not to do it...big reason not to fall in love with what isn't there yet...

Again I have to caution all of you to take what is _NOW_ with a grain of salt...the elves at Adobe (whether you think they are good or evil) don't stand still. I realize the vast majority of people don't get the chance to interact and see the brilliance of Thomas or Eric...in that regard I consider myself fortunate. On the other hand, just because somebody has done a web site citing a bunch of research doesn't make it the Holy Grail.

I don't discount pure research...pure research is the work of genius in search of a reason...we need that, no question. But we also need practical tools that actually friggin' work...that's ultimately what I'm most concerned about...how to make _MY_ work better. If you don't have something to offer, kindly get the F%&CK out of the way...

Really, if you don't have anything substantial to offer, shut the F%&CK up...

I've got to agree with Jeff. Theory has its uses, but all too often these discussions amount to little more than mental masturbation of the "I know more about this obscure formula than you do" variety. To coin a phrase, where's the beef? This is photography, after all, and photography is a visual art. When something helps me make my art better (or easier), I'll stand up and applaud. One envisions oil painters standing around discussing the physics of brush bristles and the quantum physics of linseed oil!

Peter
"Photographic technique is a means to an end, never the end itself."
View my photos at http://www.peteraitken.com
ErikKaffehr
« Reply #43 on: February 15, 2010, 01:46:46 PM »

Hi,

Another way to see it....

Without the curiosity, the need to know, to invent and to find out, we would not have digital photography, autofocus, or even silver halide photography. Nor would we have very long life spans, and the earth would only be able to feed a population of a few million stone-age people.

Best regards
Erik



Quote from: PeterAit
I've got to agree with Jeff. Theory has its uses, but all too often these discussions amount to little more than mental masturbation of the "I know more about this obscure formula than you do" variety. To coin a phrase, where's the beef? This is photography, after all, and photography is a visual art. When something helps me make my art better (or easier), I'll stand up and applaud. One envisions oil painters standing around discussing the physics of brush bristles and the quantum physics of linseed oil!

dsp
« Reply #44 on: February 15, 2010, 02:08:24 PM »

Quote from: PeterAit
One envisions oil painters standing around discussing the physics of brush bristles and the quantum physics of linseed oil!

Just so you know...

TED fellow using nanoparticle paint : http://www.boingboing.net/2010/02/11/ted-f...g+(Boing+Boing)
Just because people don't understand the technology doesn't mean they shouldn't (or don't) use it.  They might not use the same jargon, but that is a different story.

I do high-resolution scientific imaging for a living, and use high- and low-tech image enhancements, including things like Richardson-Lucy (RL) deconvolution and maximum-likelihood estimation to get rid of the instrument's response function. I can use the same code on my photographs, but rarely do. I usually just do selective unsharp masking, and it is fine. I'd be very happy if the more "modern" enhancement methods made it into new software (CSxx or LRxx or whatever), because it would be nice to have the option to do it, if desired. But I think it depends on your needs.

When I need the absolute highest-res image, I'll characterize the spatially (and depth) dependent PSF for a given lens and deconvolve. It's great, and you can see big improvements using well-defined metrics, but the images aren't necessarily more aesthetically pleasing. I have to say, for my own pictures, and most of the ones I see posted, it isn't the lack of pixel sharpness that kills them, it is the lack of sharpness in execution...
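The RL deconvolution Darcy mentions can be sketched in a few lines (a minimal illustration, not his actual code; the Gaussian PSF, its size and sigma, and the iteration count are all assumptions for demonstration, not a measured instrument response):

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_psf(size=9, sigma=1.2):
    """Simple Gaussian PSF as a stand-in for a characterized instrument response."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def richardson_lucy(observed, psf, iterations=30, eps=1e-12):
    """Basic Richardson-Lucy deconvolution (multiplicative update rule)."""
    estimate = np.full_like(observed, observed.mean())
    psf_flipped = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = convolve(estimate, psf, mode="reflect")
        ratio = observed / (blurred + eps)          # data / model prediction
        estimate *= convolve(ratio, psf_flipped, mode="reflect")
    return estimate
```

Given a PSF and a blurred image, a few dozen iterations visibly restore edge contrast; in practice one stops early, since RL amplifies noise as it converges.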

best regards, Darcy
Schewe
« Reply #45 on: February 15, 2010, 02:30:46 PM »

Quote from: dsp
I have to say, for my own pictures, and most of the ones I see posted, it isn't the lack of pixel sharpness that kills them, it is the lack of sharpness in execution...


Or, to quote Ansel Adams, "There is nothing worse than a sharp image of a fuzzy concept."

:~)
BartvanderWolf
« Reply #46 on: February 16, 2010, 09:20:09 AM »

Quote from: Schewe
Depends...as time has moved forward and the file sizes my cameras produce get bigger (1Ds MIII, P-65+), I find myself actually fighting with too much resolution at times.

That may have to do with a wrong premise that the ACR/Lightroom workflow 'forces' one to follow. The premise of always doing Capture sharpening as a first step is, IMHO, poor practice because it increases the risk of creating aliasing artifacts when downsampling (say, for web publishing) is required.

Here is an example; all images are based on the original TIFF that was used to produce this full-size (warning: 21 MB!) JPEG sample image from a 1Ds3:
http://www.xs4all.nl/~bvdwolf/temp/7640_CO40_FM1-175pct_sRGB.jpg
The image is nothing fancy, just a demonstration that sharp images can be made with a camera with an AA-filter, and that deconvolution sharpening can restore the sharpness that was reduced by the optical system (lens + aperture + AA-filter + Bayer CFA + sensel aperture). Also, low-contrast features like the thatched roof and the grass and branches against the sky are restored to what's possible given the pixel limitations of our displays, without halos and stairstepped edges.

One image without prior Capture sharpening (just a straight non-sharpened Raw conversion in Capture One), and the same image but Capture sharpened, both downsampled to 800 pixels high using the Photoshop recommended Bicubic sharper method:
[attachment=20294:7640_NoS...CSharper.jpg] [attachment=20295:7640_Cap...CSharper.jpg]
The brick walls all produce aliasing artifacts, due to the poor quality of the Bicubic Sharper algorithm (it's worse than simple bicubic), and the Capture sharpened image shows more prominent aliasing! This demonstrates that Capture sharpening is, despite what's suggested by some, best postponed to the final processing-for-output stage, or skipped altogether.

And here is an example of what proper downsampling (and without capture sharpening) looks like (and it even compresses better with the same quality settings):
[attachment=20296:7640_NoS..._Lanczos.jpg]
Halos are prevented, ringing is present but not noticeable at the intended size, and the brick structure still looks like a brick structure without glaring artifacts. Just as basic Digital Signal Processing (DSP) predicts, one needs to reduce/eliminate the high spatial frequency content before downsampling, instead of boosting its amplitude.
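The DSP point above, low-pass first, then decimate, can be demonstrated in a few lines of Python (a sketch only: the Gaussian is a cheap stand-in for a proper windowed-sinc anti-aliasing filter, and the sigma choice is an assumption):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def downsample_naive(img, factor):
    """Keep every factor-th pixel: no anti-aliasing, so fine detail folds
    back into the result as aliasing (the brick-wall artifacts above)."""
    return img[::factor, ::factor]

def downsample_filtered(img, factor):
    """Low-pass before decimation, so frequencies above the new Nyquist
    limit are attenuated instead of aliasing."""
    blurred = gaussian_filter(img, sigma=factor / 2.0, mode="reflect")
    return blurred[::factor, ::factor]
```

On a synthetic fine-textured image (stripes finer than the output grid can represent), the naive version produces a strong spurious pattern while the filtered version comes out nearly uniform, which is the correct rendering of unresolvable texture.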

One can only hope that the future will bring better Photoshop/Lightroom tools, but as long as they are based on the wrong premises, I'm not going to hold my breath.

Cheers,
Bart
Samotano
« Reply #47 on: February 16, 2010, 09:41:51 AM »

Quote from: BartvanderWolf
That may have to do with a wrong premise that the ACR/Lightroom workflow 'forces' one to follow. The premise of always doing Capture sharpening as a first step is, IMHO, poor practice because it increases the risk of creating aliasing artifacts when downsampling (say, for web publishing) is required.

Here is an example; all images are based on the original TIFF that was used to produce this full-size (warning: 21 MB!) JPEG sample image from a 1Ds3:
http://www.xs4all.nl/~bvdwolf/temp/7640_CO40_FM1-175pct_sRGB.jpg
[...]
Cheers,
Bart
Isn't this more a problem of the downsampling algorithm rather than the sharpening workflow? I had the same problem, which I resolved by using Lanczos in a different software to downsample. I don't recall capture sharpening having much to do with it.
Mark D Segal
« Reply #48 on: February 16, 2010, 09:53:14 AM »

Quote from: Samotano
Isn't this more a problem of the downsampling algorithm rather than the sharpening workflow? I had the same problem, which I resolved by using Lanczos in a different software to downsample. I don't recall capture sharpening having much to do with it.

I agree, there is a co-mingling of issues here. One should analyze one thing at a time for correct scientific procedure. I am doing some very elementary stuff with the images which Bart so kindly shared with us and I'll report back if I have anything useful to contribute as a result of that.

Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....." http://www.luminous-landscape.com/reviews/film/scanning_workflows_with_silverfast_8.shtml
joofa
« Reply #49 on: February 16, 2010, 11:01:43 AM »

Quote from: BartvanderWolf
The brick walls all produce aliasing artifacts, due to the poor quality of the bicubic sharper algorithm (it's worse than simple bicubic), and the Capture sharpened image shows more prominent aliasing!

Hi Bart, I would tend to believe that aliasing is happening because BiCubic is normally an interpolation mechanism (upsampling) and not suited for downsampling in general, because it does not include an anti-aliasing filter. I'm not privy to Photoshop's version of BiCubic (viz., Bicubic Sharper, etc.) but would like to believe that it is more or less a variation on the regular BiCubic interpolation stuff. Perhaps that is why people have such heuristics as blurring an image with, say, a Gaussian before BiCubic downsampling, which is a non-ideal way of throwing in an anti-aliasing filter in the form of a blurring Gaussian. If one downsamples with Lanczos, then it includes an anti-aliasing mechanism matched to the downsampling factor.

In general, straight interpolation-derived downsampling mechanisms are not designed for the information loss inherent in a downsampling operation.
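The point about Lanczos bundling the anti-aliasing with the resampling comes down to stretching the windowed-sinc kernel by the downsampling factor, which lowers its cutoff frequency to match the new Nyquist limit. A 1-D sketch (illustrative only; the a=3 lobe count and the sample-position convention are assumptions):

```python
import numpy as np

def lanczos_kernel(x, a=3):
    """Lanczos windowed sinc, support [-a, a]."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)

def lanczos_downsample_1d(signal, factor, a=3):
    """Downsample a 1-D signal by `factor`. Dividing the sample offsets by
    `factor` stretches the kernel, so it doubles as the matched
    anti-aliasing filter."""
    n_out = len(signal) // factor
    out = np.empty(n_out)
    for i in range(n_out):
        center = i * factor + (factor - 1) / 2.0      # output tap position in input coords
        left = int(np.floor(center - a * factor))
        right = int(np.ceil(center + a * factor))
        taps = np.arange(left, right + 1)
        idx = np.clip(taps, 0, len(signal) - 1)       # clamp at the edges
        weights = lanczos_kernel((taps - center) / factor, a)
        out[i] = np.dot(signal[idx], weights) / weights.sum()
    return out
```

Fed a sine well below the output Nyquist frequency, this passes it essentially unchanged; fed one far above it, the stretched kernel suppresses it rather than letting it fold back as a false low-frequency pattern.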

Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins
BartvanderWolf
« Reply #50 on: February 16, 2010, 11:25:11 AM »

Quote from: joofa
Hi Bart, I would tend to believe that aliasing is happening because BiCubic is normally an interpolation mechanism (upsampling) and not suited for downsampling in general because it does not include an anti-aliasing filter. I'm not privy to Photoshop's version of BiCubic (viz., BiCubic sharpner, etc.) but would like to believe that it is more or less a variation on the regular BiCubic interpolation stuff.

That's correct, Bicubic Sharper is a variation on BiCubic, but it is Adobe's recommended heuristic for downsampling, and I think people should be warned against the use of it. Adding sharpening before downsampling only complicates the process.

Quote
Perhaps that is why people have such heuristics as to blur an image with, say, a Gaussian, before BiCubic downsampling, which is a non-ideal way of throwing in an anti-aliasing filter in the form a blurring Gaussian. If one downsamples with Lanczos then it includes the anti-aliasing mechanism matched to the downsampling factor.

Absolutely agree. Pre-blurring the image and then using straight BiCubic (although Photoshop's isn't straight either) already gives better-behaved results, but not quite as good as, e.g., a Lanczos windowed sinc does.

Quote
In general straight interpolation-derived downsampling mechanisms are not streamlined for information loss inherent in a downsampling operation.

They are not optimal indeed, but for reasons of speed(?) companies will cut corners, and the result is as shown. What's worse, users of Photoshop are not even offered an option to do it better, without resorting to homebrew 'solutions'.

Cheers,
Bart
Samotano
« Reply #51 on: February 16, 2010, 11:57:23 AM »

Going OT here for a moment... But I just don't understand why Adobe does not put something as simple as a Lanczos downsampling algorithm in the Image Size window (something like: "Lanczos (Better for Reduction)"). So many images are destined for the web that downsampling is quite a common routine. I don't think speed is an issue these days.
Mark D Segal
« Reply #52 on: February 16, 2010, 11:59:10 AM »

Quote from: Mark D Segal
I agree, there is a co-mingling of issues here. One should analyze one thing at a time for correct scientific procedure. I am doing some very elementary stuff with the images which Bart so kindly shared with us and I'll report back if I have anything useful to contribute as a result of that.

I'm back (ugh! some may say - too bad   )

First, I like Bart's basic image, because it has the kind of micro-detail and frequency which lends itself to the testing one needs for drilling down on the issues discussed here.

Second, there are five things going on at the same time relative to the raw file: (a) the file format has been converted from CR2 to JPEG, which destroys a huge amount of data right from the get-go; (b) the bit depth has been shrunk from 16 to 8, which destroys a lot more data - so we're starting the examination from a compromised file; (c) there are two kinds of downsampling; and (d) there is sharpening versus no sharpening. (e) In this particular file, there is a strange underlying phenomenon with the brickwork on the windmill (only) which shows under some conditions and not others: a pattern of concentric bands across the brickwork. These got created and embedded somewhere along the process between capture and processing, and, as I say, they show under some conditions but not others. I would expect they are completely absent from the original raw capture, but got introduced with one of the above-mentioned manipulations.

Starting with Bart's 25MP file, the first thing I did was to examine what shows on the display (LaCie 321, resolution 1600*1200) as a result of resampling. The first resampling I did was only to change the resolution from 300 PPI to 72 PPI (quite a dramatic change, but what one would do to make high-res images usable on the internet) without touching the linear dimensions. This downsampling, viewed at 100%, produced no obvious impairment of the image. Then I added to the change of resolution a downsizing of linear dimensions from the (roughly) 12*18 Bart sent to the roughly 7.4*11.1 which he used in his other renditions. The combination of the change in linear dimensions with the change in resolution brought out factor (e) above.

Next, I printed (on my Epson 3800) all three of Bart's low-res JPEGs on a large sheet of Epson Exhibition Fiber paper at the same resolution as provided (72 PPI) and did absolutely no adjustments, period, so as not to introduce yet more variables into the stew. Of course, the resulting print is somewhat pixellated, as expected at 72 PPI. Therefore, to examine for other impacts, one needs to kind of look through the pixellation, but in a way perhaps this is not so bad a thing, because it does allow one to TRULY PIXEL PEEP - and this IS what we're into here: sheer pixel-peeping. So, first I examined the two unsharpened low-res JPEGs (i.e. the one downsampled by what Bart calls "the proper way" and the one using Bicubic Sharper). Even with a 5x loupe, the printed images show no difference of quality, except for factor (e) above - the Bicubic Sharper image showed it quite mildly and the "proper" one didn't. That doesn't necessarily mean the "proper" one is "better" - it may in fact be less accurate, but that's unclear.

Then I compared the two images downsampled with Bicubic Sharper - one capture sharpened and the other not. Here again, even with extreme pixel-peeping, the effect of Capture Sharpening is virtually undetectable on paper, and that is how it should be. In a multi-staged sharpening workflow, Capture sharpening is built upon to get the final effect; it is not an end in itself.

Now, to escape from the confines of pixellation, I reverted to Bart's 25MP 300 PPI JPEG, which gives us a starting print size of about 12*18, and I made a copy of it. I left the original intact, and for the copy I simply added PK Capture Sharpen Hi-res Digital Superfine (because this is a very high frequency image). I printed the two images on Epson Exhibition Fiber paper (BTW, in all sets of prints I used Epson's highest quality settings of Super Photo and low speed, with Photoshop Manages Color and Printer Color Management OFF - took a while to do). This time we are comparing for sharpening impact only - and here again, even examined under the loupe, there is nothing negative, no halos; in fact these images are pretty well indistinguishable on paper, and that is very much what I expected also.

I've now exhausted pretty much what I could do with the materials at hand, and I've come to a provisional landing that much of this discussion is fun intellectual self-gratification, but it has very little, if any, real-world significance.
Schewe
« Reply #53 on: February 16, 2010, 12:21:47 PM »

Quote from: BartvanderWolf
One image without prior Capture sharpening (just a straight non-sharpened Raw conversion in Capture One), and the same image but Capture sharpened, both downsampled to 800 pixels high using the Photoshop recommended Bicubic sharper method:


Uh huh...capture sharpened how? With what settings?

I think what this proves is NOT that capture sharpening is bad, but that a single bicubic sharper downsample of a large file is bad...

In Photoshop I do multiple bicubic downsample passes until I get close to the final needed size, then do a final Bicubic Sharper pass for the last resample...or I use Camera Raw or Lightroom to get the image close to the final needed size with a last touch of resample (often bicubic) to get the exact size...unfortunately, at this time Camera Raw's size and resolution controls suck, so I would suggest Lightroom for a comparison since there you can be precise with pixel dimensions...

If you want to upload the raw file I'll compare...I have no interest flailing around with a jpeg.
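The stair-step approach described above can be sketched with scipy's cubic-spline zoom standing in for Photoshop's bicubic (an illustration of the repeated-halving idea only; the 0.5 step size and the spline kernel are assumptions, not Photoshop's actual resampler):

```python
import numpy as np
from scipy.ndimage import zoom

def stairstep_downsample(img, target_shape):
    """Repeatedly halve with cubic interpolation until within 2x of the
    target, then one final cubic resize to the exact size. Each small step
    keeps the kernel's fixed footprint adequate for the modest reduction,
    which is the rationale for stair-stepping."""
    out = img.astype(float)
    while out.shape[0] / 2 >= target_shape[0] and out.shape[1] / 2 >= target_shape[1]:
        out = zoom(out, 0.5, order=3)
    final = (target_shape[0] / out.shape[0], target_shape[1] / out.shape[1])
    return zoom(out, final, order=3)
```

As joofa notes in the next post, a single properly sized filter (e.g. Lanczos) achieves the same effect in one step; stair-stepping is a workaround for a kernel whose reach is fixed.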
joofa
« Reply #54 on: February 16, 2010, 12:33:54 PM »

Quote from: Schewe
a single bicubic sharper downsample of a large file is bad... in Photoshop I do multiple bicubic downsample passes until I get close to the final needed size, then do a final Bicubic Sharper pass for the last resample...

It would appear to me that multiple cascaded BiCubic operations will do the trick in a convoluted way, since now in effect you are increasing the "reach" of the BiCubic operation to a larger number of pixels, as opposed to some fixed number in a single downsampling operation using BiCubic. (I'm assuming that Photoshop's window of pixels to work with in their version of BiCubic is fixed for a single operation.) However, it is just easier to use a single "proper" downsampling filter, say Lanczos, applied in a single step instead of the above-said cascaded BiCubic operation.
Schewe
« Reply #55 on: February 16, 2010, 02:13:27 PM »

Quote from: joofa
(I'm assuming that Photoshop's window of pixels to work with in their version of BiCubic is fixed for a single operation.)


For Photoshop, yes...for Lightroom I'm pretty sure Eric's implementation of adaptive resampling is different than Photoshop's. Not sure I can say (nor know EXACTLY) what the resampling in Lightroom is doing relative to Photoshop's 3 flavors of Bicubic but I do know it's different than Photoshop's downsampling options and specifically designed to avoid ringing and interference when downsampling...
Mark D Segal
« Reply #56 on: February 16, 2010, 02:19:55 PM »

Quote from: Schewe
For Photoshop, yes...for Lightroom I'm pretty sure Eric's implementation of adaptive resampling is different than Photoshop's. Not sure I can say (nor know EXACTLY) what the resampling in Lightroom is doing relative to Photoshop's 3 flavors of Bicubic but I do know it's different than Photoshop's downsampling options and specifically designed to avoid ringing and interference when downsampling...

Jeff, by "ringing", do you mean the phenomenon I mentioned as point (e) in my post 53 above?
Schewe
« Reply #57 on: February 16, 2010, 02:56:38 PM »

Quote from: Mark D Segal
Jeff, by "ringing", do you mean the phenomenon I mentioned as point (e) in my post 53 above?


No, I don't think so...

Camera Raw (and therefore Lightroom) used to use a Lanczos variant in the resampling algorithm, starting with ACR 1 through ACR 5.2, when it was changed to the hybrid bicubic variant found in ACR/LR now.

The Lanczos algorithm can introduce a ringing effect (kind of a dark line or edge interference) in some images when downsampling and it was pretty much less good (sucked) for upsampling...

The impact of this was visible not only in downsampled exports and the Web module but in the Print module of Lightroom, where a really large image printed small had a tendency to break up on strong-contrast circles and diagonals and certain frequencies of texture...at the time we thought it might have been caused by the Lightroom output sharpening, but it was more a result of the resample code. But once that got fixed, Eric also went in and fine-tuned the output sharpening even further, to the point where I think it's optimal...

Which is yet another reason I really like outputting from Lightroom vs Photoshop...

Also note that great strides have been made in IQ with the Lightroom 3 beta...not only has the demosaicing been enhanced but the color noise reduction and the capture sharpening have been optimized...one can also presume (although I can't say definitively due to NDA) that all the attention paid to IQ improvements will also be seen in Lightroom 3's luminance noise reduction, which is yet to be implemented...

Mark D Segal
« Reply #58 on: February 16, 2010, 03:06:34 PM »

Quote from: Schewe
No, I don't think so...

Camera Raw (and therefore Lightroom) used to use a Lanczos variant in the resampling algorithm, starting with ACR 1 through ACR 5.2, when it was changed to the hybrid bicubic variant found in ACR/LR now.

The Lanczos algorithm can introduce a ringing effect (kind of a dark line or edge interference) in some images when downsampling and it was pretty much less good (sucked) for upsampling...

.................

You're right - this isn't the same thing that I described above.

bjanes
« Reply #59 on: February 16, 2010, 03:45:49 PM »

Quote from: BartvanderWolf
That may have to do with a wrong premise that the ACR/Lightroom workflow 'forces' one to follow. The premise of always doing Capture sharpening as a first step is, IMHO, poor practice because it increases the risk of creating aliasing artifacts when downsampling (say, for web publishing) is required.

Here is an example; all images are based on the original TIFF that was used to produce this full-size (warning: 21 MB!) JPEG sample image from a 1Ds3: http://www.xs4all.nl/~bvdwolf/temp/7640_CO40_FM1-175pct_sRGB.jpg
The image is nothing fancy, just a demonstration that sharp images can be made with a camera with an AA-filter, and that deconvolution sharpening can restore the sharpness that was reduced by the optical system (lens + aperture + AA-filter + Bayer CFA + sensel aperture). Also, low-contrast features like the thatched roof and the grass and branches against the sky are restored to what's possible given the pixel limitations of our displays, without halos and stairstepped edges.

One image without prior Capture sharpening (just a straight non-sharpened Raw conversion in Capture One), and the same image but Capture sharpened, both downsampled to 800 pixels high using the Photoshop recommended Bicubic sharper method:
[attachment=20294:7640_NoS...CSharper.jpg] [attachment=20295:7640_Cap...CSharper.jpg]
The brick walls all produce aliasing artifacts, due to the poor quality of the Bicubic Sharper algorithm (it's worse than simple bicubic), and the Capture sharpened image shows more prominent aliasing! This demonstrates that Capture sharpening is, despite what's suggested by some, best postponed to the final processing-for-output stage, or skipped altogether.

One can only hope that the future will bring better Photoshop/Lightroom tools, but as long as they are based on the wrong premises, I'm not going to hold my breath.

Cheers,
Bart
Bart,

If you look at Bruce Fraser's original capture sharpening workflow, he sharpened for the source with the unsharp mask filter at a small radius, set according to the megapixel count of the camera, and an amount set according to the strength of the blur filter. Your deconvolution restoration also does this, and more, and would be considered by many to be a substitute for capture sharpening. After this first phase of capture sharpening, Bruce sharpened for image content according to the predominant frequencies of the image.

A priori, performing the restoration at an early stage in processing would make sense, since one is merely putting pixels back to where they should have been in the absence of blur filter or demosaicing artifacts and the process would not introduce sharpening halos which could cause problems in downsizing. It is not clear to me why content sharpening is needed at an early stage in the work flow.

I would be interested in the deconvolution algorithm and implementation you use and how you determine the PSF, as I would like to try deconvolution in my own work. Would the deconvolution be best applied to scene-referred data (linear tone curve with no gamma), or could it be applied to a TIFF with a normal tone curve? One advantage of parametric editing in ACR and Lightroom is that bulky intermediate files are not needed, but if one uses ACR or LR and deconvolves in a standalone program, a bulky intermediate TIFF file is needed.

Regards,

Bill
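On the linear-versus-gamma question above: since optical blur happens in linear light, deconvolution is generally better behaved on scene-referred (linear) data. A gamma-encoded TIFF can be linearized first and re-encoded afterward; a sketch using the sRGB transfer curve (an assumption for illustration — an actual TIFF might carry a different tone curve, and values are taken as floats in [0, 1]):

```python
import numpy as np

def srgb_to_linear(v):
    """Undo the sRGB transfer curve before deconvolving, so the math
    operates on (approximately) linear light."""
    v = np.asarray(v, dtype=float)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(v):
    """Re-apply the sRGB transfer curve after deconvolution."""
    v = np.asarray(v, dtype=float)
    return np.where(v <= 0.0031308, v * 12.92, 1.055 * v ** (1 / 2.4) - 0.055)
```

A deconvolution routine would then be applied between the two calls: `linear_to_srgb(deconvolve(srgb_to_linear(tiff_data), psf))`, where `deconvolve` and `psf` are whatever implementation and measured point spread function one settles on.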