Author Topic: sharpening-Lightroom vs PhotoKit in Photoshop or both?  (Read 22233 times)
hcubell (Sr. Member, Posts: 729)
« Reply #60 on: February 16, 2010, 08:39:28 PM »

Bart, I had similar experiences with my images that I downsized as jpegs for the web at 800 ppi. Images that I thought looked properly capture sharpened became quite oversharpened downsizing with Bicubic Sharper. Not much better with "plain" Bicubic. I then read about a PS plugin called Resize Magic that I downloaded and used. Much cleaner results.  http://www.fsoft.it/imaging/en/default.htm I have no idea what it does differently under the hood. Trial versions are available for both Mac and Windows.

Mark D Segal (Contributor / Sr. Member, Posts: 6818)
« Reply #61 on: February 16, 2010, 08:53:35 PM »

Quote from: hcubell
Bart, I had similar experiences with my images that I downsized as jpegs for the web at 800 ppi. Images that I thought looked properly capture sharpened became quite oversharpened downsizing with Bicubic Sharper. Not much better with "plain" Bicubic. I then read about a PS plugin called Resize Magic that I downloaded and used. Much cleaner results.  http://www.fsoft.it/imaging/en/default.htm I have no idea what it does differently under the hood. Trial versions are available for both Mac and Windows.

Howard - something is unclear here, in the phrase "downsized as jpegs for the web at 800 ppi". Did you start or end at 800 PPI (because for the web you wouldn't want more than about 800 linear pixels total, or say something south of 100 PPI with an 8-inch wide dimension)? Then there is the procedure. Normally one wouldn't "downsize as jpegs" - one would do as much as possible in 16-bit ProPhoto (downsizing, conversion of colour space and conversion to 8-bit) before saving as JPEG, so that you have the maximum amount of data for handling all the adjustments for web before they need to be "JPEGed". Conversion to JPEG would be the last step before any sharpening for web, if needed. Is this your procedure?

Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....." http://www.luminous-landscape.com/reviews/film/scanning_workflows_with_silverfast_8.shtml
joofa (Sr. Member, Posts: 485)
« Reply #62 on: February 16, 2010, 09:08:00 PM »

Quote from: hcubell
I had similar experiences with my images that I downsized as jpegs for the web at 800 ppi. Images that I thought looked properly capture sharpened became quite oversharpened downsizing with Bicubic Sharper. Not much better with "plain" Bicubic.

I think the "oversharpened" look may actually be due to aliasing, since no anti-aliasing is applied before the bicubic downsampling - a limitation of interpolation-based schemes used directly for downsampling. However, even such schemes can provide the right kernel for downsampling if handled properly. For example, I took the interpolation-based Keys bicubic convolution operation and derived the matching anti-aliasing filter coupled to it for downsampling by a factor of 2. The calculations were done by hand quickly, so there is a chance of error; however, this is the shape I got:
[attached image: frequency response of the derived downsampling filter]
In deriving the above filter I set up the optimization problem using the same criterion inherent in Shannon's sampling theorem: that the difference between the original signal and the signal reconstructed from the downsampled version lies in the null space of the sampling operator. Note that the response of the filter lies close to [-0.25, 0.25] x [-0.25, 0.25] for downsampling by a factor of 2.
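The general idea - couple a low-pass stage to the interpolator so that little energy survives beyond 0.25 cycles/pixel before a factor-2 decimation - can be sketched in a few lines. This is not joofa's derived kernel (the derivation isn't reproduced in the post); a generic 9-tap windowed-sinc stands in for it:

```python
import numpy as np
from scipy import signal, ndimage

def antialiased_halve(img):
    """Low-pass toward the output Nyquist, then decimate by 2.

    The 9-tap FIR is a generic windowed-sinc stand-in for the kernel
    derived in the post; cutoff 0.5 (in units of the input Nyquist)
    corresponds to the [-0.25, 0.25] cycles/pixel passband mentioned
    for factor-2 downsampling.
    """
    img = np.asarray(img, dtype=float)
    taps = signal.firwin(9, 0.5)            # separable 1-D low-pass
    lp = ndimage.convolve1d(img, taps, axis=0)
    lp = ndimage.convolve1d(lp, taps, axis=1)
    return lp[::2, ::2]                     # factor-2 decimation
```

Feeding it a pattern at the input Nyquist shows the point: naive `img[::2, ::2]` decimation aliases that pattern to a full-strength false low frequency, while the prefiltered version nearly removes it.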
« Last Edit: February 16, 2010, 09:52:50 PM by joofa »

Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins
hcubell (Sr. Member, Posts: 729)
« Reply #63 on: February 16, 2010, 09:58:13 PM »

Quote from: Mark D Segal
Howard - something is unclear here, in the phrase "downsized as jpegs for the web at 800 ppi". Did you start or end at 800ppi, (because for the web you wouldn't want more than about 800 linear pixels total, or say something south of 100PPI with an 8 inch wide dimension)? Then there is the procedure. Normally one wouldn't "downsize as jpegs" - one would do as much as possible in 16-bit ProPhoto (downsizing, conversion of colour space and converting to 8-bit) before saving as JPEG, so that you have the maximum amount of data for handling all the adjustments for web before they need to be "JPEGed". Conversion to JPEG would be the last step before any sharpening for web, if needed. Is this your procedure?

I start with very large 16 bit scans or digital captures. Using PS CS4, I would flatten the layers, convert to 8 bits, convert to sRGB, output sharpen, then go to Save for Web and Devices and size the file at 800ppi wide and Medium Quality. With Resize Magic, I would flatten, convert to 8 bits, convert to sRGB, output sharpen, save as a JPEG, open the JPEG and downsize with Resize Magic to 800 ppi wide.

Schewe (Sr. Member, Posts: 5415)
« Reply #64 on: February 16, 2010, 10:19:10 PM »

Quote from: hcubell
...I would flatten the layers, convert to 8 bits, convert to sRGB, output sharpen, then go to Save for Web and Devices and size the file at 800ppi wide and Medium Quality...


Well, a couple of problems there...first, are you SURE you convert to 8 bit and THEN convert to sRGB? If so, you are wasting your 16-bit data by not converting to sRGB first and THEN down to 8 bits/channel. Second, if you are resizing a large image in Save For Web, you might as well quit right there...if you take a large high-quality image down to a small web image in a single resize, you have pretty much given up a whole bunch of image quality right there...

You would be better off recording a web action that takes the 16-bit ProPhoto RGB file, converts to sRGB, then converts to 8 bit, then does several 50% bicubic reductions followed by a Bicubic Sharper pass for the last resample to 800 pixels...
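Schewe's stepped reduction can be sketched outside Photoshop with Pillow - an assumption-laden stand-in, since Pillow has no "Bicubic Sharper" (plain bicubic is used for the final step here, with sharpening left as a separate operation):

```python
from PIL import Image

def stepped_downsample(im, target_long_edge=800):
    """Halve with bicubic until within 2x of the target long edge,
    then do one final resample to the exact size (the step Photoshop
    users would run with Bicubic Sharper)."""
    while max(im.size) > 2 * target_long_edge:
        im = im.resize((im.width // 2, im.height // 2), Image.BICUBIC)
    scale = target_long_edge / max(im.size)
    final = (round(im.width * scale), round(im.height * scale))
    return im.resize(final, Image.BICUBIC)
```

In Photoshop itself this would be recorded once as an action and run in batch, as Schewe describes; the loop above is just the same cascade expressed as code.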
hcubell (Sr. Member, Posts: 729)
« Reply #65 on: February 17, 2010, 07:13:04 AM »

Quote from: Schewe
Well, couple of problems there...first, are you SURE you convert to 8 bit and THEN convert to sRGB? If you are you are wating your 16 bit images by not converting to sRGB and THEN down to 8 bits/channel. Second, if you are sizing a large image in Save For Web, you might as well quit right there...if you are taking a large high quality image into a small  web image in a single sizing then you pretty much have given up a whole bunch of image quality right there...

You would be better off recording a web action that takes the ProPhoto RGB 16 bit, converts to sRGB, then converts to 8 bit then does several 50% bicubic followed with a bicubic sharper for the last resample to 800 pixels...

Thanks, I will try that approach of downsampling by 50% in several bicubic steps and see how it compares to Resize Magic. I have been converting into sRGB from 16-bit files in DCam 4 (a wide-gamut space designed by Joe Holmes) before going to 8 bit and using Save for Web. You are absolutely right about the results with Save for Web. That's why I switched to Resize Magic. (My principal "problem" with color in preparing images for my website is that it is a Flash site and the architecture is not color managed, with the result that images tend to look oversaturated on wide-gamut monitors.)
« Last Edit: February 17, 2010, 07:16:48 AM by hcubell »

Mark D Segal (Contributor / Sr. Member, Posts: 6818)
« Reply #66 on: February 17, 2010, 07:42:57 AM »

Quote from: Schewe
Well, couple of problems there...first, are you SURE you convert to 8 bit and THEN convert to sRGB? If you are you are wating your 16 bit images by not converting to sRGB and THEN down to 8 bits/channel. Second, if you are sizing a large image in Save For Web, you might as well quit right there...if you are taking a large high quality image into a small  web image in a single sizing then you pretty much have given up a whole bunch of image quality right there...

You would be better off recording a web action that takes the ProPhoto RGB 16 bit, converts to sRGB, then converts to 8 bit then does several 50% bicubic followed with a bicubic sharper for the last resample to 800 pixels...

Generally I agree - this is a good workflow. However, I successfully use a slightly different procedure and have found it unnecessary to downsample in stages. ("Success" here means the image emerges without looking crunchy, banded or over-sharpened.) The procedure I'm using, starting from a Canon 1Ds3 16-bit ProPhoto layered PSD or TIFF, is as follows:

1. Flatten Image
2. Resize and resample to 800 pixels maximum dimension, resolution 96 PPI and BiCubic Sharper in one operation
3. Convert to Profile: sRGB, ACE, RelCol or Perceptual RI to taste, BPC selected (critical)
4. Convert Mode to 8 bits per channel
5. Use PK Output Sharpener for Web and Multimedia 800 pixels, select frequency to suit, adjust the opacity of the Pass-through layer to taste (usually low opacities work best)
6. Flatten image
7. SAVE AS JPEG with ICC profile, Quality 8.

A conservative application of step 5 is what requires the most care in the whole procedure.

This procedure can be automated up to and including Step 4. There is no check-stop possible in a PS Action for step 5, which must be adjusted manually to taste for each image.

Better ideas always welcome, provided they actually show real, tangible superiority - not interested in abstract theory and math unsupported by obvious superior rendition of real-world photographs.

bjanes (Sr. Member, Posts: 2756)
« Reply #67 on: February 17, 2010, 07:53:55 AM »

Quote from: Mark D Segal
Generally I agree - this is a good workflow. However I successfully use a slightly different procedure and have found it unecessary to downsample in stages. ("Success" here means the image emerges without looking crunchy, banded or over-sharpened). The procedure I'm using starting from a Canon 1Ds3 16-bit Pro-Photo layered PSD or TIFF is as follows:

1. Flatten Image
2. Resize and resample to 800 pixels maximum dimension, resolution 96 PPI and BiCubic Sharper in one operation
3. Convert to Profile: sRGB, ACE, RelCol or Perceptual RI to taste, BPC selected (critical)
4. Convert Mode to 8 bits per channel
5. Use PK Output Sharpener for Web and Multimedia 800 pixels, select frequency to suit, adjust the opacity of the Pass-through layer to taste (usually low opacities work best)
6. Flatten image
7. SAVE AS JPEG with ICC profile, Quality 8.

A  conservative application of step 5 is what requires the most care in the whole procedure.

This procedure can be automated up to and including Step 4. There is no check-stop possible in a PS Action for step 5, which must be adjusted manually to taste for each image.
Mark,

That workflow seems reasonable, but in step 3 the Perceptual rendering intent is not available with matrix-based profiles such as ProPhoto RGB and sRGB. Photoshop allows Perceptual rendering to be selected without warning, but the rendering intent actually applied is always colorimetric. There are newer ICC sRGB profiles that do have lookup tables for perceptual rendering.

Bill
Mark D Segal (Contributor / Sr. Member, Posts: 6818)
« Reply #68 on: February 17, 2010, 08:27:38 AM »

Thanks Bill. I generally leave it at RelCol anyhow. I wasn't aware I really had no choice!

Mark

BartvanderWolf (Sr. Member, Posts: 3415)
« Reply #69 on: February 17, 2010, 08:51:14 AM »

Quote from: Samotano
Isn't this more a problem of the downsampling algorithm rather than the sharpening workflow? I had the same problem, which I resolved by using Lanczos in different software to downsample. I don't recall capture sharpening having much to do with it.

Correct, a good downsampling algorithm will also catch some of the improperly boosted high-spatial-frequency detail, but as demonstrated, not all downsampling algorithms are good. Capture sharpening prior to downsampling doesn't make much sense, and it increases the risk of introducing artifacts because there are no perfect filters.

Cheers,
Bart
jbrembat (Full Member, Posts: 177)
« Reply #70 on: February 17, 2010, 09:23:15 AM »

Quote
The premise to always do Capture sharpening as a first step, IMHO, is poor practice because it increases the risk of creating aliasing artifacts when downsampling (say web publishing) is required.
Bart, capture sharpening is essentially recovering the sharpness the image had before the AA filter.
The AA filter is a low-pass filter that cuts frequencies incompatible with the periodicity of the photo receptors.
Now, for sharpening you can:
1- try to rebuild the original frequencies
2- increase the contrast locally to make the image crisper

1- is difficult, as the problem is ill-posed and conditions have to be imposed to transform it into a well-posed problem.
There are studies that try to solve it, but the results are sometimes good, sometimes not.
2- is more practicable, but no frequencies (details) are restored, so no aliasing will be generated by this contrast increase.

Now, what happens on downsizing?
You know it: the sampling theorem.
Before resampling you have to apply an AA (low-pass) filter, otherwise you get aliasing artifacts.

So, if you know that your image has to be downsized, it doesn't make sense to try to recover frequencies that will in any case be cut.
If the image is to be used at its original size (or upsampled for printing), you can try to restore the high frequencies.
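Jacopo's filter-before-decimation point can be demonstrated in a few lines. This is a sketch, not any product's filter; the Gaussian with sigma ≈ factor/2 is just a common rule of thumb for an anti-aliasing pre-blur:

```python
import numpy as np
from scipy import ndimage

def decimate(img, factor, prefilter=True):
    """Gaussian low-pass before taking every factor-th sample, per
    the sampling theorem; prefilter=False gives the naive,
    alias-prone reduction for comparison."""
    img = np.asarray(img, dtype=float)
    if prefilter:
        img = ndimage.gaussian_filter(img, sigma=factor / 2.0)
    return img[::factor, ::factor]
```

On a pattern at the input Nyquist, the naive path aliases it to a solid false tone, while the prefiltered path suppresses it almost entirely.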

Jacopo


BartvanderWolf (Sr. Member, Posts: 3415)
« Reply #71 on: February 17, 2010, 09:43:55 AM »

Quote from: Schewe
Uh huh...capture sharpened how? With what settings?

If I want to apply capture sharpening, I usually use the FocusMagic plug-in in Photoshop because it integrates nicely with my workflow. The settings I used were (cryptically) mentioned in the file name: FocusMagic = FM, with Radius = 1 and Amount = 175. The settings were chosen visually from the small preview, and they can vary with the lens/aperture used and with the focus quality (I used Live View to focus on the name board of the windmill). No difficult determination of PSFs was required; it was just done by eye.

Quote
I think what this proves is NOT that capture sharpening is bad, but that a single bicubic sharper downsample of a large file is bad...


Well, it shows that the Adobe recommended downsampling method doesn't play well with sharpening prior to downsampling. I agree that by jumping through some hoops better results can be obtained, but why do we need to?

Quote
in Photoshop I do multiple bicubic downsample processes till I get close to the final needed size, then do a final bicubic sharper for the last resample...or, I use Camera Raw or Lightroom to get the image close to the final needed size with a last touch of resample (often bicubic) to get the exact size...unfortunately, at this time Camera Raw's size and resolution controls suck, so I would suggest Lightroom for a comparison since there you can be precise with pixel dimensions...

Sure that will work, but it's quite a convoluted workflow to do something as common as repurposing an image for web display or thumbnail generation.

Quote
If you want to upload the raw file I'll compare...I have no interest flailing around with a jpeg.

I don't share Raw files; besides, there is not much wrong with a highest-quality JPEG when every 49 pixels get squeezed into a single pixel at output, especially when the mode is first changed to 16 bpc before downsampling. I also want to avoid an apples-and-oranges comparison between different Raw converters; such a comparison is nice for a different thread. I think the best method to compare sharpening is to base it on the same source data.

Nevertheless, for the people with adequate download bandwidth, I'll make available 2 crops from the original unsharpened (no noise reduction either) Capture One 4.0.0 conversion. They are 16bpc TIFFs converted to AdobeRGB, and each crop consists of the original unsharpened background layer, and a Focusmagic sharpened copy of that layer as an example of deconvolution sharpening. The sharpening is as is, neither masks nor blendings were used, just a simple FM filter was applied with the same settings as mentioned above. The result can of course be improved by adding an edge mask, but masking skills are not the subject of investigation here.

Here they are (ZIP compressed TIFFs, but that shouldn't be a problem for Photoshop users):
http://www.xs4all.nl/~bvdwolf/temp/7640_CO40_Crop1.tif  (filesize approx. 20MB)
http://www.xs4all.nl/~bvdwolf/temp/7640_CO40_Crop2.tif  (filesize approx. 30MB)

To reproduce the effects I showed earlier, just increase the canvas size to 3744 px wide by 5616 px high, and (after optional capture sharpening) resample down to 533x800 pixels to get the same dimensions as the small JPEGs above.
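For readers without Photoshop, Bart's reproduction recipe (enlarge the canvas to the full frame, optionally sharpen, resample down) looks roughly like this with Pillow. The paste position and resampling filter are assumptions for illustration, not his exact steps:

```python
from PIL import Image

def reproduce_downsample(crop, full=(3744, 5616), out=(533, 800)):
    """Place the crop on a full-frame-sized canvas, then resample to
    the small dimensions quoted above. Paste position is arbitrary
    here; Photoshop's Canvas Size would anchor it as you choose."""
    canvas = Image.new(crop.mode, full)
    canvas.paste(crop, (0, 0))
    return canvas.resize(out, Image.BICUBIC)
```

The same function can be run once on the unsharpened layer and once on the capture-sharpened layer to compare the downsampled results.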

Happy flailing,
Bart
joofa (Sr. Member, Posts: 485)
« Reply #72 on: February 17, 2010, 09:51:38 AM »

Separate from the discussion of where in the processing chain capture or output sharpening should be done, I think it is abundantly clear that Photoshop has an inherent flaw in its downsampling operations: namely, using an interpolation procedure directly for downsampling without an anti-aliasing filter. It may not be terribly difficult to figure out the right downsampling kernel even for many interpolation-based approaches used for downsampling, as I mentioned in post #63.

Such problematic downsampling in Photoshop gives rise to heuristics - e.g., using multiple bicubic downsampling operations to "simulate" a real downsampling, or throwing in a Gaussian blur before the bicubic downsample - and other contrived workflows. These may produce an intermediate aliased image which can sometimes appear artificially "sharpened". However, the problems in that intermediate image might be "concealed", as it may get smoothed during any number of operations done to it on the way to output, which in effect can "hide" the inherent flaw in Photoshop.


BartvanderWolf (Sr. Member, Posts: 3415)
« Reply #73 on: February 17, 2010, 10:59:21 AM »

Quote from: bjanes
A priori, performing the restoration at an early stage in processing would make sense, since one is merely putting pixels back to where they should have been in the absence of blur filter or demosaicing artifacts and the process would not introduce sharpening halos which could cause problems in downsizing. It is not clear to me why content sharpening is needed at an early stage in the work flow.

Hi Bill,

Me neither. Especially when one is likely to downsample, it is IMHO not good practice to boost high spatial frequency detail, and boosting lower frequency detail is potentially even more dangerous. Another discussion would be whether to capture sharpen before interpolation/magnification of the image size, or postpone it till after reaching the final output size.

Quote
I would be interested in the deconvolution algorithm and implementation you use and how you determine the PSF, as I would like to try deconvolution in my own work.

I use a couple of different ones, but FocusMagic works fine on a 32-bit hardware platform. Unfortunately it is not ready for 64-bit hardware/software, and I'm not sure whether they will update it. A friend of mine does have FocusMagic running on his 64-bit Windows 7 PC, but only when running the 32-bit version of Photoshop CS4. On my Vista Ultimate 64-bit system it won't install. I also use the 64-bit version of ImagesPlus, which allows one to specify the PSF (up to a 9x9 kernel) for use with e.g. the adaptive Richardson-Lucy restoration or a few others. A free implementation of the RL algorithm can be found as an alternative sharpening tool in the RawTherapee converter, but it is not possible to define one's own PSF, so I assume RT uses a Gaussian as PSF (through its radius control). RL is also implemented in MATLAB.

PSF determination apparently can be done reasonably well with a Gaussian-like PSF, or by eyeballing with a preview. The imaging chain consists of several different PSFs ((de-)focus + lens + aperture + AA filter + sensel aperture), usually followed by a demosaicing operation. Convolving with so many different PSFs typically leads to something Gaussian-ish. I can characterize the PSF of my camera/lens/aperture/Raw converter combination pretty accurately with a proprietary method of mine. Of course, for the most accurate restoration an exact model (preferably spatially variant) would be preferred.
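For anyone wanting to experiment, Richardson-Lucy deconvolution with an assumed Gaussian PSF is only a few lines. This is a bare-bones sketch (no adaptive damping, no masking), not the ImagesPlus or RawTherapee implementation; the 9x9 kernel size matches the ImagesPlus limit mentioned above, and sigma = 1.2 is an illustrative value:

```python
import numpy as np
from scipy import ndimage

def gaussian_psf(size=9, sigma=1.2):
    """Normalized size x size Gaussian kernel as a stand-in PSF."""
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def richardson_lucy(img, psf, iters=20, eps=1e-7):
    """Minimal RL iteration: est <- est * conv(img / conv(est, psf),
    flipped psf). See skimage.restoration.richardson_lucy for a
    maintained version."""
    est = np.full_like(img, img.mean(), dtype=float)
    psf_flip = psf[::-1, ::-1]
    for _ in range(iters):
        blurred = ndimage.convolve(est, psf)
        ratio = img / (blurred + eps)
        est *= ndimage.convolve(ratio, psf_flip)
    return est
```

Blurring a test image with the same PSF and then deconvolving recovers most of the lost edge contrast, which is the behaviour the plug-ins above exploit.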

Quote
Would the deconvolution be best applied to scene referred data (linear tone curve with no gamma) or could it be applied to a TIFF with a normal tone curve? One advantage of parametric editing in ACR and Lightroom is that bulky intermediate files are not needed, but if one uses ACR or LR and deconvolutes in a standalone program, a bulky intermediate TIFF file is needed.

In principle one would be better off deconvolving linear-gamma data, but as FocusMagic shows, it can also be used on gamma-adjusted data (maybe they linearize the data during the calculations, I don't know). My experience with RL in ImagesPlus shows that applying it to gamma-adjusted data already produces good results. I don't know how much of a difference it would make compared to linear-gamma data (I want to keep things somewhat practical in my workflow, so I would like to avoid intermediate files as well).

Cheers,
Bart
BartvanderWolf (Sr. Member, Posts: 3415)
« Reply #74 on: February 17, 2010, 11:11:12 AM »

Quote from: hcubell
Bart, I had similar experiences with my images that I downsized as jpegs for the web at 800 ppi. Images that I thought looked properly capture sharpened became quite oversharpened downsizing with Bicubic Sharper. Not much better with "plain" Bicubic. I then read about a PS plugin called Resize Magic that I downloaded and used. Much cleaner results.  http://www.fsoft.it/imaging/en/default.htm I have no idea what it does differently under the hood. Trial versions are available for both Mac and Windows.

Yes, perhaps it uses Lanczos windowed Sinc behind one of its options, but I don't know what goes on under the hood.

Cheers,
Bart
Schewe (Sr. Member, Posts: 5415)
« Reply #75 on: February 17, 2010, 03:20:53 PM »

Quote from: BartvanderWolf
Well, it shows that the Adobe recommended downsampling method doesn't play well with sharpening prior to downsampling. I agree that by jumping through some hoops better results can be obtained, but why do we need to?

When you say "Adobe" you realize you are referring to a company of many, many individuals, right? There is no such thing as "An Adobe Way"...what you are referring to is really just the "Photoshop Way". What you are ignoring when lumping all of Adobe into a single entity is the Camera Raw pipeline (Camera Raw and Lightroom) as well as the Fireworks (if dealing with images for the web).

Thomas used to use a Lanczos variant but decided bicubic variants did better and thus changed them. Thomas (and Eric) look at all sorts of new tech developments for possible inclusion into Camera Raw...on the other hand, Photoshop at nearly the age of 20 (as of Fri) is a bit slower to take on new tech to replace old tech. They currently have their hands completely full doing the Mac Carbon to Cocoa API conversion...they wouldn't be interested in changing Image Size any time in the near future...

As far as jumping through hoops...you really only have to do that one time, for the purposes of creating an action to run in batch mode. I can see no reason to do image-by-image reductions manually just for the web...or simply use Lightroom, which uses a more modern resampling.

As for the raw file, I completely understand, but that then eliminates the possibility of testing various capture sharpening options such as Capture One, DPP and Lightroom 3 at a raw stage...
Mark D Segal (Contributor / Sr. Member, Posts: 6818)
« Reply #76 on: February 17, 2010, 03:51:36 PM »

Quote from: BartvanderWolf
Here they are (ZIP compressed TIFFs, but that shouldn't be a problem for Photoshop users):
http://www.xs4all.nl/~bvdwolf/temp/7640_CO40_Crop1.tif  (filesize approx. 20MB)
http://www.xs4all.nl/~bvdwolf/temp/7640_CO40_Crop2.tif  (filesize approx. 30MB)



Happy flailing,
Bart

Bart, I did download your raw sections, thanks, duplicated them and on the duplicate set sharpened with PK Capture and Output sharpening, then reduced the opacity of the Pass-Through Output Sharpen layer to taste. Seen at 50% and 100% screen magnification to avoid aliasing impacts, I think your Focus Magic results are fine and so are those from PK Sharpener. As I don't experience (visibly on paper) the kind of problems being reported on this thread with resampling and downsizing using Bicubic sharper, it doesn't make sense for me to intervene further on that one.
« Last Edit: February 17, 2010, 03:52:15 PM by Mark D Segal »

BartvanderWolf (Sr. Member, Posts: 3415)
« Reply #77 on: February 17, 2010, 04:55:24 PM »

Quote from: Schewe
When you say "Adobe" you realize you are referring to a company of many, many individuals, right? There is no such thing as "An Adobe Way"...what you are referring to is really just the "Photoshop Way". What you are ignoring when lumping all of Adobe into a single entity is the Camera Raw pipeline (Camera Raw and Lightroom) as well as the Fireworks (if dealing with images for the web).

You're right, the Photoshop recommended way it is then.

Quote
Thomas used to use a Lanczos variant but decided bicubic variants did better and thus changed them. Thomas (and Eric) look at all sorts of new tech developments for possible inclusion into Camera Raw...on the other hand, Photoshop at nearly the age of 20 (as of Fri) is a bit slower to take on new tech to replace old tech. They currently have their hands completely full doing the Mac Carbon to Cocoa API conversion...they wouldn't be interested in changing Image Size any time in the near future...

I understand the priorities, but I do wonder how on earth he was led to believe that the bicubic variants were better. Certainly not better for quality downsampling, as many have demonstrated/experienced, and as predicted by my simple zone plate test. It is also an easy method for testing the effectiveness of cascaded downsampling. Maybe he didn't want different algorithms for downsampling and interpolation/upsampling (which is common practice in the industry)?
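A zone plate test like the one Bart mentions is easy to reproduce. This generator follows the standard cos(alpha * r^2) chart (an assumption about its construction, not his exact file); downsample it with the resampler under test, and a good filter leaves smooth grey where frequencies exceed the new Nyquist, while a poor one draws spurious rings:

```python
import numpy as np

def zone_plate(n=512):
    """cos(alpha * r^2) chart: the instantaneous radial frequency
    grows linearly with radius and reaches Nyquist (pi rad/px) at
    the edge midpoints, so any aliasing in a resampler shows up as
    extra ring patterns in the reduced image."""
    y, x = np.mgrid[:n, :n]
    r2 = (x - n / 2) ** 2 + (y - n / 2) ** 2
    return np.cos(np.pi / n * r2)

# naive 4x decimation keeps full-amplitude rings that shouldn't exist:
aliased = zone_plate()[::4, ::4]
```

The same chart fed through a cascaded (stepped 50%) reduction makes the effectiveness comparison Bart describes a purely visual check.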

Quote
As far as jumping through hoops...you really only have to do that one time for the purposes of creating an action to run in batch mode. I can see no reason to do image by image reductions manual just for the web...or simply use Lightroom that is using a more modern reasmpling.

Things are easier when one only needs a fixed downsampling ratio, e.g. camera file format to 800 pixels max dimension, but when one has to deal with different size source materials then the solution quickly becomes sub-optimal for e.g. batch (droplet) processing without manual intervention.

Quote
As for the raw file, I completely understand, but that then eliminates the possibility of testing various capture sharpening options such as Capture One, DPP and Lightroom 3 at a raw stage...

Maybe something for a new thread, after a non-beta version of LR3 is released, although I'd resent being 'forced' into buying/upgrading 2 different products from the same company for a single purpose (and I already use a good Raw converter). That's why I use the free ImageMagick routines when I need quality downsampling; its choice of interpolation/upsampling algorithms is also nice.

Cheers,
Bart
Schewe (Sr. Member, Posts: 5415)
« Reply #78 on: February 17, 2010, 06:18:41 PM »

Quote from: BartvanderWolf
I understand the priorities, but I do wonder how on earth he was led to believe that the bicubic variants were better. Certainly not better for quality downsampling, as many have demonstrated/experienced, and as predicted by my simple zone plate test. It is also an easy method to test the effectivity of cascaded downsampling. Maybe he didn't want different algorithms for downsampling and interpolation/upsampling (which is common practice in the industry)?


Thomas (and really Eric Chan in the case of the ACR pipeline resampling) only implements new things after extensive testing...they don't do things by accident nor simply because it's either easier or cheaper. I was somewhat involved in testing the resampling because the output sharpening in LR 2 and ACR 5 was impacted by the resample code.

Unless you've done extensive tests using either Lightroom 2.3 or Camera Raw 5.3 then you are not judging what I consider to be optimal resampling of raw files (although again, I don't like the crude size and resolution controls in Camera Raw).
jbrembat (Full Member, Posts: 177)
« Reply #79 on: February 18, 2010, 03:35:11 AM »

Quote from: BartvanderWolf
You're right, the Photoshop recommended way it is then.



I understand the priorities, but I do wonder how on earth he was led to believe that the bicubic variants were better. Certainly not better for quality downsampling, as many have demonstrated/experienced, and as predicted by my simple zone plate test. It is also an easy method to test the effectivity of cascaded downsampling. Maybe he didn't want different algorithms for downsampling and interpolation/upsampling (which is common practice in the industry)?



Things are easier when one only needs a fixed downsampling ratio, e.g. camera file format to 800 pixels max dimension, but when one has to deal with different size source materials then the solution quickly becomes sub-optimal for e.g. batch (droplet) processing without manual intervention.



Maybe something for a new thread, after a non-beta version of LR3 is released, although I'd resent being 'forced' to buying/upgrading 2 different products from the same company for a single purpose (and I already use a good Raw converter). That's why I use the free ImageMagick routines when I need quality downsampling, and its choice of interpolation/upsampling algorithms is also nice.

Cheers,
Bart
Bart, you are on the wrong track.
The problem with downsizing is not the interpolation quality:
to avoid aliasing, the image has to be filtered before the size reduction.

In any case, it's true that Adobe is not very smart about resampling.

Jacopo

