Author Topic: sharpening-Lightroom vs PhotoKit in Photoshop or both?  (Read 21166 times)
BartvanderWolf
Sr. Member

Posts: 3012


« Reply #20 on: February 12, 2010, 07:33:22 PM »

Quote from: Mark D Segal
Perhaps you understand the LR workflow - not sure - but just to remind - it doesn't matter in LR what adjustments you make in which order, because when you "render" the image the program processes them all in correct sequence "under the hood". So you can noise-reduce and capture sharpen or capture sharpen and noise reduce - doesn't matter - the program will handle them correctly so as not to sharpen the noise. It's a "smart program".

Hi Mark,

Yes, I understand the LR sequence of events, and I also like the parametric approach. Some adjustments may cancel out at certain luminosity levels, in which case one should avoid introducing rounding inaccuracies early in the processing chain. Noise control should be an integral part of sharpening (which could, under some circumstances, mean that noise reduction is restricted exclusively to the sharpened areas, but it could also have an overall effect).
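As an illustration of how noise control can be built into the sharpening step itself, here is a minimal edge-masked sharpening sketch in Python (NumPy/SciPy). The function name and the radius/amount/threshold values are my own illustrative choices, not any particular product's implementation: detail is boosted only where the gradient indicates an edge, so flat, noise-dominated areas pass through untouched.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def masked_sharpen(img, radius=1.0, amount=1.0, threshold=0.05):
    """High-pass sharpening gated by an edge mask (illustrative only)."""
    highpass = img - gaussian_filter(img, sigma=radius)    # detail layer
    grad = np.hypot(sobel(img, axis=0), sobel(img, axis=1))
    mask = (grad > threshold).astype(float)                # 1 near edges, 0 in flats
    return img + amount * highpass * mask

# A flat patch passes through unchanged; a step edge gets boosted.
flat = np.zeros((16, 16))
edge = np.zeros((16, 16)); edge[:, 8:] = 1.0
out_flat = masked_sharpen(flat)
out_edge = masked_sharpen(edge)
```

Lightroom's masking slider works in this spirit (restricting sharpening to detected edges), though its actual algorithm is not public.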

Nevertheless, edge-contrast/acutance boosting shouldn't be confused with real sharpening, even when it produces better visual results.

Cheers,
Bart
« Last Edit: February 12, 2010, 07:37:53 PM by BartvanderWolf »
Mark D Segal
Contributor
Sr. Member

Posts: 6767


« Reply #21 on: February 12, 2010, 09:35:21 PM »

Quote from: BartvanderWolf
Hi Mark,

Yes, I understand the LR sequence of events, and I also like the parametric approach. Some adjustments may cancel out at certain luminosity levels, in which case one should avoid introducing rounding inaccuracies early in the processing chain. Noise control should be an integral part of sharpening (which could, under some circumstances, mean that noise reduction is restricted exclusively to the sharpened areas, but it could also have an overall effect).

Nevertheless, edge-contrast/acutance boosting shouldn't be confused with real sharpening, even when it produces better visual results.

Cheers,
Bart

"......when it produces better visual results" is exactly the problem. I've yet to see it, so I'm not convinced.

You say "noise control should be an integral part of sharpening...". Where does this idea come from? Please explain and mention any authoritative references, because the way I'm understanding it - unless I'm misunderstanding it - this is totally contrary to everything I've ever been told or experienced about the relationships between measures to reduce noise and those to increase acutance. The standard approach and established advice is to NOT sharpen noise; hence, if working outside of LR, reduce noise, then sharpen. The main relationship between them that I'm familiar with is that, depending on the strength of the noise reduction, some additional acutance sharpening may be helpful to counteract excessive smoothing.

Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....." http://www.luminous-landscape.com/reviews/film/scanning_workflows_with_silverfast_8.shtml
BartvanderWolf
Sr. Member

Posts: 3012


« Reply #22 on: February 12, 2010, 11:33:07 PM »

Quote from: Mark D Segal
"......when it produces better visual results" is exactly the problem. I've yet to see it, so I'm not convinced.

You say "noise control should be an integral part of sharpening..............". Where does this idea come from? Please explain and mention any authoritative references, because the way I'm understanding it - unless I'm misunderstanding it - this is totally contrary to everything I've ever been told or experienced about the relationships between measures to reduce noise and those to increase acutance.

Hi Mark,

Perhaps there is a misunderstanding. Sharpening noise is counterproductive; one will probably have to reduce it afterwards. So by "noise control" I mean sharpening the image content while keeping the noise component of the image under control. One common method is the adaptive Richardson-Lucy restoration method, but there are others by now.

Quote
The standard approach and established advice is to NOT sharpen noise, hence if working outside of LR, reduce noise, then sharpen. The main relationship between them that I'm familiar with is that depending on the strength of the noise reduction, some additional acutance sharpening may be helpful to counteract excessive smoothing.

Ah, but then we're already compensating for things lost. I'm talking about preventing things from getting lost.

Cheers,
Bart
Schewe
Sr. Member

Posts: 5255


« Reply #23 on: February 13, 2010, 12:00:40 AM »

Quote from: BartvanderWolf
On a related note, anyone who doesn't use a type of deconvolution restoration for capture sharpening is dwelling in the same dark confines ...


Yeah, well I would consider deconvolution sharpening a sharpening for effect...

You tell me what the PSF for a Canon 24-105mm lens is at 48mm and I might sit up and listen....

The problem with the advocates of the deconvolution kernel type of sharpening is that in practice, they CAN'T provide a point spread function worth a crap.

Yes, if you know the EXACT method of blurring and can program the EXACT opposite effect in sharpening then you can indeed turn fuzzy crap out of the Hubble telescope into usable images.

In reality all the theoretical hocus-pocus regarding deconvolution is just that...theoretical applications of algorithms that don't get too far off the ground in real life photographic applications.

Countering the effects of AA filters and lens-induced softness ain't gonna happen with arbitrary PSFs (point spread functions).

When you get the key to the universe, let me know (I'll bet some SOB will change the lock while you ain't looking).
« Last Edit: February 13, 2010, 12:03:30 AM by Schewe »
ErikKaffehr
Sr. Member

Posts: 6921


« Reply #24 on: February 13, 2010, 12:09:29 AM »

Hi,

Bart is an image processing guru, as far as I understand. I'm pretty sure that he has a point. On the other hand the people behind LR/ACR/PS and PKS are no beginners either. I got the impression that the late Bruce Fraser did look at deconvolution based sharpening and did not find any real advantage.

In a sense it's similar to 'uprezzing': it is well accepted that methods like Lanczos introduce fewer artifacts than other upscaling methods. Eric "Madman" Chan and his colleagues tested around 30 different methods but still came up with the present one in LR as the best compromise.

I used Focus Magic myself, and liked it a lot. I have two issues with it:

It doesn't work on Intel Macs
It doesn't fit the parametric workflow in ACR

An additional problem is that development on Focus Magic seems to have stopped.

A few other comments. Deconvolution uses something called Point Spread Function (PSF). I presume that Focus Magic assumes a PSF typical of slight defocus. Optimally, the PSF would include both defocus and the effects of the AA-filter. It may be that "Focus Fixer" uses both. In my view the capture sharpening in LR works very well, it is not really obvious to me that other methods like Focus Fixer, Focus Magic or PS Advanced Sharpening would give better results.

Also, capture sharpening is never what we actually see, except at actual pixels on the computer screen. Normally the images are downsized for viewing on screen or dithered for printing. I guess that the small differences we see in capture sharpening may be lost in the process.

A final observation is that your mileage may vary. Folks working with image analysis are probably much more artifact-aware than normal photographers, who are more concerned with the perception of sharpness. Eyesight may matter.

Best regards
Erik

Quote from: Mark D Segal
Perhaps you understand the LR workflow - not sure - but just to remind - it doesn't matter in LR what adjustments you make in which order, because when you "render" the image the program processes them all in correct sequence "under the hood". So you can noise-reduce and capture sharpen or capture sharpen and noise reduce - doesn't matter - the program will handle them correctly so as not to sharpen the noise. It's a "smart program".

Schewe
Sr. Member

Posts: 5255


« Reply #25 on: February 13, 2010, 12:37:23 AM »

Quote from: ErikKaffehr
Bart is an image processing guru, as far as I understand. I'm pretty sure that he has a point. On the other hand the people behind LR/ACR/PS and PKS are no beginners either. I got the impression that the late Bruce Fraser did look at deconvolution based sharpening and did not find any real advantage.


No doubt Bart is a smart guy...but the odds are I've prolly made a whole poop-load more captures and prints than he has, since shooting and making prints is/was my livelihood. So, I'm all about practical applications of technology and less interested in theoretical stuff (although I did get the chance to hang out at MIT and meet a bunch of really bright people doing really interesting things last year, thanks to Eric).

To be clear about what Bruce did or didn't think about deconvolution sharpening: Bruce didn't think much of the initial implementation of Smart Sharpen and its rather flimsy implementation of motion de-blurring. In fact, he was less than impressed with ANY commercially available deconvolution back when he did PhotoKit Sharpener in 2003.

Nothing about that has really changed from my point of view. Other than one of the options in RAW Developer, nobody has produced a viable commercial sharpening approach that does anything SPECIFIC with regard to deconvolution.

RAW Developer DOES have a Richardson-Lucy Deconvolution alternative sharpening method. But, I've been able to essentially match the IQ of RAW Developer in a "beta" raw processor currently under development (a hint might be Lightroom 3).

So, the geeks might all point to the mystical "Deconvolution" approach and bandy about stuff like the Richardson-Lucy algorithm, but until somebody can send me a link to a download for a REAL (and current) Photoshop plug-in, I'm not too interested...
« Last Edit: February 13, 2010, 12:39:36 AM by Schewe »
BartvanderWolf
Sr. Member

Posts: 3012


« Reply #26 on: February 13, 2010, 01:15:16 AM »

Quote from: Schewe
Yeah, well I would consider deconvolution sharpening a sharpening for effect...

You tell me what the PSF is for a Canon 24-105mm lens is at 48mm and I might sit up an listen....

The problem with the advocates of the deconvolution kernel type of sharpening is that in practice, they CAN'T provide a point spread function worth a crap.

Hi Jeff, I've come to know you as a person who likes to play hardball, so allow me to respond likewise.

Too bad, but apparently you've not been paying attention to developments in scientific research (admittedly not everyone's cup of tea). Many moons ago - several years, actually - there were successful attempts to characterize the Point Spread Function (PSF) across the image (AKA spatially variant PSFs) automatically.

Quote
Yes, if you know the EXACT method of blurring and can program the EXACT opposite effect in sharpening then you can indeed turn fuzzy crap out of the Hubble telescope into usable images.

No, it doesn't require knowing the EXACT PSF a priori. As the simple FocusMagic approximation of several years ago already proved, the PSF can be determined automatically and, by trial and error, improved upon for specific regions in the image plane.
I do appreciate that you've gone through the effort of looking up what RL restoration is about (Hubble is one area), but that's before the adaptive version, and way before the improved versions were even mentioned in research.

BTW, a kind suggestion: please do gently kick the behind of people like Chris Cox if he's still operating somewhere in Adobe's code-optimization arena and has any influence on future developments. I exchanged some Usenet suggestions with him when he still resisted HDR tonemapping in Photoshop; he seemed quite attached to the status quo, not innovative.

Quote
In reality all the theoretical hocus-pocus regarding deconvolution is just that...theoretical applications of algorithms that don't get too far off the ground in real life photographic applications.

Au contraire, mon ami. As an example of (by now ancient) technology, please look at http://www.mathworks.com/access/helpdesk_r13/help/toolbox/images/deblurr8.html - not theoretical at all (even Photoshop has reluctantly adopted something vaguely similar, but less effective, in its Smart Sharpen filter). There are many more examples, but I'm not sure if people are open-minded enough to digest them. Theoretical? I do such restorations all the time; it's even been part of a free Raw converter called RawTherapee for some time. I would also welcome some proactive movement from the established industry, instead of dragging behind.

Quote
Countering the effects of AA filters and lens induced softness ain't gonna happen with arbitrary PFS (point spread functions).

The AA filter, even when combined with the theoretical diffraction (of a perfectly round aperture) used to take the image, is one of the simplest effects to restore. It's the spatial variation across the image plane that's going to challenge some of the coding dinosaurs.
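To make the "simplest effects" point concrete, a combined blur model can be sketched by convolving simple component PSFs: a small Gaussian standing in for the AA filter and a uniform disk for slight defocus. All shapes and sizes below are illustrative assumptions, not measured camera data:

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size=9, sigma=1.0):
    """Gaussian kernel, unit sum (stand-in for the AA-filter blur)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return g / g.sum()

def disk_psf(size=9, radius=2.0):
    """Uniform disk kernel, unit sum (stand-in for slight defocus)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    d = (np.hypot(xx, yy) <= radius).astype(float)
    return d / d.sum()

# The combined blur of independent stages is the convolution of their PSFs.
combined = fftconvolve(gaussian_psf(sigma=0.7), disk_psf(radius=1.5), mode="full")
combined /= combined.sum()
```

A spatially variant PSF - the hard part Bart points to - would replace the single `combined` kernel with a different one per image region.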

Quote
When you get the key to the universe, let me know (I'll bet some SOB will change the lock while you ain't looking).

Nice try, but no cigar. I'm not a person to get intimidated by reputation alone. On the contrary, I'm much easier to get along with without (attempted) intimidation or ridicule ... There is no need for a key to the universe; just getting up to speed with it will do ...

Don't get me wrong, I do appreciate what Bruce and you have achieved, but let's not stop there; let's collectively move on to the next level, please. It's been overdue for some time.

Cheers,
Bart
BartvanderWolf
Sr. Member

Posts: 3012


« Reply #27 on: February 13, 2010, 01:39:16 AM »

Quote from: ErikKaffehr
Hi,

Bart is an image processing guru, as far as I understand.

Not really. Although I've been involved in (amongst other things) scientific imaging for more than 30 years now, I'm also a professional photographer by education.

Quote
I'm pretty sure that he has a point. On the other hand the people behind LR/ACR/PS and PKS are no beginners either. I got the impression that the late Bruce Fraser did look at deconvolution based sharpening and did not find any real advantage.

That may be part of the problem. While I respect Bruce's achievements, I'm not that pleased with the slowdown in progress of commercially available solutions for photographers who need to make a living, or even for advanced amateurs.

Quote
In a sense it's similar to 'uprezzing' it is well accepted that methods like Lanzsos introduce less artifacts than other upscaling methods. Eric "Madman" Chen and his colleagues tested around 30 different methods but still came up with the present one in LR as the best compromise.

There is, unfortunately, no single best method (image content may dictate a different method). Lanczos-windowed sinc is very good at downsampling, although sinc variants can be successful at upsampling as well.
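For reference, the Lanczos kernel itself is only a couple of lines: a sinc windowed by a wider sinc, zero outside |x| < a. The naive 1-D resampler below is my own illustrative sketch (edge-clamped, weights renormalized), not how any shipping product implements it:

```python
import numpy as np

def lanczos(x, a=3):
    """Lanczos-windowed sinc kernel: sinc(x) * sinc(x/a) for |x| < a, else 0."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)

def downsample_1d(signal, factor=2, a=3):
    """Naive Lanczos downsampling of a 1-D signal by an integer factor."""
    n = len(signal) // factor
    out = np.empty(n)
    for i in range(n):
        center = i * factor + (factor - 1) / 2.0     # output sample position
        idx = np.arange(int(center) - a * factor, int(center) + a * factor + 1)
        w = lanczos((idx - center) / factor)         # kernel stretched by factor
        w /= w.sum()                                 # renormalize taps
        out[i] = np.dot(w, signal[np.clip(idx, 0, len(signal) - 1)])
    return out

halved = downsample_1d(np.ones(64), factor=2)        # a constant stays constant
```

Stretching the kernel by the downsampling factor is what band-limits the result; the negative lobes of the windowed sinc are what preserve apparent sharpness better than a plain box or bilinear filter.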

Quote
I used Focus Magic myself, and liked it a lot. I have two issues with it:

It doesn't work on Intel Macs
It doesn't fit the parametric workflow in ACR

An additional problem with Focus Magic is it seems that development has stopped on it.

It seems to have stopped. Maybe something will happen in the future, but I'm not holding my breath.

Quote
A few other comments. Deconvolution uses something called Point Spread Function (PSF). I presume that Focus Magic assumes a PSF typical of slight defocus. Optimally, the PSF would include both defocus and the effects of the AA-filter. It may be that "Focus Fixer" uses both. In my view the capture sharpening in LR works very well, it is not really obvious to me that other methods like Focus Fixer, Focus Magic or PS Advanced Sharpening would give better results.

Also, capture sharpening is never the one we see, except at actual pixels on the computer screen. Normally the images are downsized for viewing on screen or dithered for printing. I guess that the small differences we see in capture sharpening may be lost in the process.

Good points, but what about upsampling, or downsampling without aliasing artifacts (unlike PS Bicubic 'Sharper')? Why does a program like Qimage produce better output quality than Photoshop?

Quote
A final observation is that your mileage may vary. Folks working with image analysis are probably much more artifact aware than normal photographers more concerned with perception of sharpness. Eyesight may matter.

True, although (after a considerable time) the establishment cannot keep denying the progress that's been made despite them ...

Cheers,
Bart
« Last Edit: February 13, 2010, 01:45:20 AM by BartvanderWolf »
Schewe
Sr. Member

Posts: 5255


« Reply #28 on: February 13, 2010, 01:45:22 AM »

Quote from: BartvanderWolf
The AA-filter, even when combined with the theoretical diffraction (of a perfectly round aperture) used to take the image, is one of the simplest effects to restore.

Then why has nobody actually coded something useful yet?

If it's "simple" to restore AA filter and lens diffraction, why hasn't some bright-boy already done that?

Oh, maybe because it's not quite so "simple"?

Quote
Too bad, but apparently you've not been paying attention to developments in scientific research (admittedly not everyone's cup of tea). Many moons ago - several years, actually - there were successful attempts to characterize the Point Spread Function (PSF) across the image (AKA spatially variant PSFs) automatically.

Uh huh...no, I don't spend a lot of time reading scientific journals. But tell me EXACTLY where all this "new" research has been taking place regarding PSFs and deconvolution image restoration and point me to a real product that is using it...

RawTherapee doesn't count because, well, I don't do Windows (nor Linux). RAW Developer is the only Mac raw processor I know of offering Richardson-Lucy deconvolution as an optional sharpening method.

Yeah, I've wandered around and looked at the various web sites touting all manner of computational image processing to restore image sharpness. The problem is, it seems nobody is willing to actually create a commercial product and let the industry decide. It's all about theory and math (something I'm not inclined to get too far into, since I'm no math genius) and so far nobody has actually PRODUCED anything...(that I know of).

As far as dealing with Chris Cox, I've known Chris since way before he joined Adobe...he can be a bit of a pain if you fail to PROVE your point to him successfully. As far as HDR tone mapping goes, you might be pleasantly surprised at the changes made in the "next" version of Photoshop.

With regards to image sharpening, I'm ALWAYS willing to listen and learn. But the ultimate arbiter of what ends up in Camera Raw (and the ACR pipeline for Lightroom) is Thomas Knoll. If you can't move him, you can't move forward...ironically, we have somebody who does a wonderful job working with Thomas here in our ranks, our own Eric Chan (MadManChan).

If you can convince Eric of something, he does have Thomas' ear (of course, I tend to be able to move both Thomas and Eric myself sometimes).

All the theoretical deconvolution stuff has never panned out in practical applications that I'm aware of. Sorry, it simply doesn't move me to any useful degree.

Again, if you can point to anything current that WORKS, let me know...
hcubell
Sr. Member

Posts: 727


« Reply #29 on: February 13, 2010, 11:15:47 AM »

I recently ran some medium format digital files through Raw Developer using the Richardson-Lucy deconvolution sharpening tool. I then ran the same files through PK Capture Sharpen, CS4 Smart Sharpen and Hasselblad Phocus. I was blown away by the Raw Developer sharpening: the combination of exceptional sharpness with a natural look. I could not get close with the other tools. Does anyone offer a CS4 plug-in using the Richardson-Lucy deconvolution technology?

joofa
Sr. Member

Posts: 485



« Reply #30 on: February 13, 2010, 01:26:48 PM »

Quote from: BartvanderWolf
There is, unfortunately, no single best method (image content may dictate a different method). Lanczos-windowed sinc is very good at downsampling, although sinc variants can be successful at upsampling as well.

It would appear to me that a key factor is not considered in many typical sharpening operations: how to maintain the "best" sharpening when an image is downsampled with some filter and later upsampled, with some method, for a particular output device. However, if the output upsampling, when required, is known, there is hope.

Lanczos is a good filter for downsampling, if we just stop at downsampling. However, if we are going to further stretch an image by upsampling in an output process, what is the situation then? If the output reconstruction process is known, one can do a better job of keeping an image sharpened through the downsampling-plus-later-upsampling chain. For example, say I downsample an input image to half size using Lanczos and then later upsample by a factor of 2 using linear interpolation: how does the sharpening fare in that operation, compared with an "optimized" filter (in minimum mean-square-error terms) designed for that same downsample-by-2 plus linear-upsample-by-2 chain?

I derived such an "optimal" filter for downsampling by a factor of 2 followed by linear interpolation by a factor of 2; the frequency response of this operation is shown below:



Note that the response rises near Nyquist before falling, which causes sharpening; the amount of sharpening was calculated to give the best response under later linear interpolation. Interestingly, while this "optimal" response includes an overall low-pass operation, as the theory of downsampling requires, it also has a sharpening effect, since it starts rising before Nyquist.
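I can't reproduce joofa's derived filter here, but the attenuation his prefilter compensates for is easy to inspect. Linear interpolation by 2 acts like the FIR triangle kernel [0.5, 1, 0.5] (whose continuous-domain response is sinc-squared), so its magnitude response rolls off toward Nyquist; an "optimal" prefilter can pre-boost near Nyquist to offset exactly this roll-off. A small sketch, illustrative only:

```python
import numpy as np
from scipy.signal import freqz

# Linear interpolation by 2 = insert zeros, then filter with this triangle.
interp = np.array([0.5, 1.0, 0.5])
w, H = freqz(interp, worN=512)     # frequencies w in [0, pi)

dc_gain = abs(H[0])                # = sum of taps = 2 (factor-of-2 upsample)
near_nyquist = abs(H[-1])          # heavily attenuated near Nyquist
```

The gap between `dc_gain` and `near_nyquist` is the high-frequency loss that a sharpening prefilter of the kind joofa describes would be shaped to counteract.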

« Last Edit: February 17, 2010, 10:16:46 AM by joofa »

Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins
bjanes
Sr. Member

Posts: 2714



« Reply #31 on: February 14, 2010, 08:15:12 AM »

Quote from: Schewe
Yeah, well I would consider deconvolution sharpening a sharpening for effect...
Actually, deconvolution is image restoration, not sharpening. The difference may seem academic, but it is essential: deconvolution actually puts misplaced light back where it belongs, truly increasing detail, while sharpening with the unsharp mask and similar techniques merely creates the illusion of sharpness by increasing edge contrast.
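The contrast is easy to see in code. Unsharp masking just adds back a scaled high-pass residual, which produces the characteristic overshoot ("halos") at edges rather than recovering displaced detail. A minimal sketch; the radius/amount values are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, radius=2.0, amount=1.0):
    """sharpened = image + amount * (image - blur(image))"""
    return image + amount * (image - gaussian_filter(image, sigma=radius))

# A perfect step edge gains over/undershoot instead of recovered detail.
edge = np.zeros((32, 32)); edge[:, 16:] = 1.0
sharpened = unsharp_mask(edge, radius=1.5, amount=0.8)
```

The output exceeds the original [0, 1] range on both sides of the edge: that halo is the increased edge contrast, not restored resolution.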

As all photographers know, it is best to obtain a good image in the first place, so that restoration is not needed and one merely has to apply output sharpening to get the best reproduction. If you are doing landscapes from a sturdy tripod with a Phase One P65+, using mirror lockup and a high-quality prime, deconvolution might not be needed; but in much scientific imaging under difficult conditions, such methods are widely used. Their use in astronomy has been mentioned, and they are also used in microscopy.

Quote from: Schewe
You tell me what the PSF for a Canon 24-105mm lens is at 48mm and I might sit up and listen....

The problem with the advocates of the deconvolution kernel type of sharpening is that in practice, they CAN'T provide a point spread function worth a crap.

Yes, if you know the EXACT method of blurring and can program the EXACT opposite effect in sharpening then you can indeed turn fuzzy crap out of the Hubble telescope into usable images.

In reality all the theoretical hocus-pocus regarding deconvolution is just that...theoretical applications of algorithms that don't get too far off the ground in real life photographic applications.
As Bart pointed out, one can often obtain acceptable results with some trial and error using a few basic shapes for the PSF. Examples are here and here. Roger Clark's use of the adaptive Richardson-Lucy iteration for an image of a fox is a real-life photographic application. However, R-L is not for everyone: Roger's example involved trying several PSFs by trial and error, and the final computation took an hour and a half. Roger did use a Canon lens, but I don't know if it was a 24-105 at 48 mm. However, to dismiss such techniques is akin to putting one's head in the sand, and NIH (not invented here).
Mark D Segal
Contributor
Sr. Member

Posts: 6767


« Reply #32 on: February 14, 2010, 11:16:37 AM »

Bill,

Some very useful insights and distinctions here. I agree - deconvolution is more for restoration than for "sharpening". And I mostly agree that the "sharpening" we do isn't focusing (that must be done in the original capture), but rather creating an illusion of sharpness by increasing edge contrast; however, I think an important additional consideration is that this increase in edge contrast is done to counteract the edge softening which happens as a result of both the low-pass filter in front of the sensor and the inevitable effects of the digitization process. So in some sense it is also a "corrective", but for a different purpose than deconvolution.

It is also good that you raise the example of a Phase One P65+ not needing deconvolution. That is for sure correct. Not only that, those images need only MINIMAL sharpening of the more usual variety (be it PK or others), and one needs to be very careful to apply sharpening conservatively to them. I have recently processed a slew of them from my participation in the Phase One Death Valley MF workshop. Those images come straight out of the box quite clear and "sharp" - which makes sense considering that there is no low-pass filter in front of that sensor, plus a bunch of other factors which make a Phase One back what it is.

I would also recommend that readers reflect carefully upon the veracity of comparison reports of the kind Howard Cubell mentioned. Much is in the eye of the beholder (subjective); it may or may not be "objectively correct", because we don't have an ISO standard defining how sharp is "too sharp" in a processed photograph. What I find over-sharpened or crunchy, Howard or others may think is great stuff - not that he's wrong and I'm right, but simply because our taste and perception of what constitutes sharpness may not be the same. This came to mind as I perused a number of the images on his website (BTW, some striking images there, Howard) - to the extent one can tell anything reliably from relatively low-res JPEGs posted on the internet and viewed on a display.

Quite some time ago, in the context of another such discussion on this Forum, I compared actual prints of images made using Focus Magic with those made using PK Sharpener, trying the Focus Magic settings over a range around the recommended ones, and quite frankly I couldn't produce on paper as pleasing a sharpening effect with FM as I could with PK. No doubt, others will have an opposite take on this. That's fine. All of which is to say there are no absolute truths here, but I remain quite persuaded that the mainstream approach to sharpening is still mainstream for good reason.

Reflecting further on the direction some of this discussion has taken, I don't perceive any of what you call the "NIH" syndrome. Speaking of the relevant actors in the mainstream industry, you're conversing with forward-looking, objective folks who don't mind examining both the strengths and the areas for improvement of what they've done before. I also place little credence in corporate head-in-the-sand theories of the type Bart mentioned, because that too fails closer examination.

Bart seems to think the "industry" is asleep, and by implication perhaps that he knows better than Adobe what should come next in the evolution of sharpening. The problem with this kind of statement is its implicit assumptions: that the engineers and mathematicians at (or consulting to) Adobe have some kind of vested interest against technical progress relative to their own previous work, or that they lack the competence and interest to explore alternative approaches; and that, if they did indeed suffer from these conditions, Adobe's management would have so little knowledge of, and interest in, maximizing shareholder value that they would simply cave in to, and indefinitely tolerate, the retrograde prejudices or technical limitations of their employees. Not believable in today's competitive, performance-oriented environment - especially as one sees how aggressively Adobe goes out to buy up expertise it doesn't have in-house and perceives as real value added. But anyhow, what matters is results, not theories. Much as the discussion is interesting, we're veering off the OP's questions perhaps! (So what else is new here!)


Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....." http://www.luminous-landscape.com/reviews/film/scanning_workflows_with_silverfast_8.shtml
Schewe
Sr. Member

Posts: 5255


« Reply #33 on: February 14, 2010, 02:45:55 PM »

Quote from: Mark D Segal
Bart seems to think the "industry" is asleep and by implication perhaps he knows better than Adobe what should come next in the evolution of sharpening. The problem with this kind of statement is an implicit assumption that the engineers and mathematicians at or consulting to Adobe have some kind of vested interest against technical progress relative to their own previous work, or that they lack the competence and interest to explore alternative approaches, and that if they did indeed suffer from these conditions, Adobe's management has so little knowledge and interest to maximize shareholder value that they would simply cave-in to and indefinitely tolerate the retrograde prejudices or technical limitations of their employees.


Actually, it's laughable....

Adobe has its fingers in a lot of the most progressive computational image-processing studies, from Stanford to MIT to BYU. The proof is some of the more interesting tech to go into CS3 and CS4, most recently Content-Aware Scaling in CS4 (originally called seam carving, from a graduate student at MIT). Eric Chan is also an MIT product, and there are several areas of research Adobe is sponsoring at MIT. Adobe even went to the effort of building new offices on the outskirts of Boston.

To say "NIH", or that there's some corporate agenda against new developments and substantial image-quality improvements, is to ignore the rather important and substantial contributions to digital imaging by a fellow named Thomas Knoll. Not only did he and his brother write Photoshop 20 years ago, he wrote Camera Raw (one of the first to break the logjam of proprietary raw file formats) and continues improving the raw-processing image quality in the Lightroom pipeline.

I know Thomas pretty well...there is no way he would leave ANY stone unturned in the pursuit of improving image quality...the same can now be said for his understudy, Eric Chan. If you can prove to them that something needs to be changed or improved, they do it. Yes, there are constraints on timing and development. Stuff doesn't happen overnight (well, sometimes it does).

I'm pretty sure the ACR 6 and Lightroom 3 releases will impress the heck out of people regarding image quality. Seriously...I think they will achieve "best of breed" status even over C1 and the manufacturers' software. As I indicated, even Raw Developer's implementation of a Richardson-Lucy deconvolution will not surpass the IQ of the next round of improvements in ACR & LR.

If somebody could prove that a Richardson-Lucy deconvolution algorithm inside of Camera Raw would be better than the current (ok, the next gen) sharpening, Thomas would not be above adding it (given adequate timing).

Yes, for some image imperfections, you can make some improvements in the image if you can find a PSF to match the image defect...
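For anyone curious what PSF-based deconvolution actually involves, here is a minimal Richardson-Lucy sketch in Python (numpy/scipy). The Gaussian PSF and the synthetic edge are illustrative stand-ins of my own, not anything from Adobe's or Raw Developer's code:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=50):
    """Minimal Richardson-Lucy deconvolution.

    observed -- blurred image (2-D float array)
    psf      -- point spread function (normalized here to sum to 1)
    """
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]              # flipped PSF for the correction step
    estimate = np.full_like(observed, 0.5)    # flat initial estimate
    eps = 1e-12                               # avoid division by zero
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode="same")
        estimate = estimate * fftconvolve(observed / (reblurred + eps),
                                          psf_mirror, mode="same")
        estimate = np.clip(estimate, 0.0, None)   # guard against FFT round-off
    return estimate

def gaussian_psf(size=9, sigma=1.5):
    """Illustrative Gaussian blur kernel, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return kernel / kernel.sum()

# Blur a synthetic vertical edge, then try to restore it.
sharp = np.zeros((64, 64))
sharp[:, 32:] = 1.0
psf = gaussian_psf()
blurred = fftconvolve(sharp, psf, mode="same")
restored = richardson_lucy(blurred, psf, iterations=50)
```

The multiplicative update keeps the estimate non-negative, which is one reason R-L is popular in astronomy and microscopy, where the data are photon counts. The hard part in general photography is that you rarely know the PSF this precisely.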

I've seen some pretty amazing image reconstructions, including a guy at MIT who is working on multi-directional PSFs–the kind of camera blur where the resulting motion blur goes in multiple directions. And yes, if you were trying to read the license plate off of a fuzzy recon camera, de-blurring a terrorist's face for facial recognition, or trying to remove the motion blur of a star field, all of those highly technical computational image enhancements might benefit from a more exotic de-blur algorithm regardless of where it came from...
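As a toy illustration of the kind of kernel involved, a straight-line motion-blur PSF can be built in a few lines of Python (a single direction only; the multi-directional kernels mentioned above generalize this along a curved path, and the function name and defaults here are made up for illustration):

```python
import numpy as np

def linear_motion_psf(length=9, angle_deg=30.0, size=21):
    """Straight-line motion-blur kernel of a given length and direction,
    normalized to sum to 1 (a crude, single-direction stand-in for the
    multi-directional PSFs discussed above)."""
    psf = np.zeros((size, size))
    c = size // 2
    theta = np.deg2rad(angle_deg)
    # Sample the line densely and accumulate hits into the pixel grid.
    for t in np.linspace(-(length - 1) / 2, (length - 1) / 2, length * 8):
        y = int(round(c + t * np.sin(theta)))
        x = int(round(c + t * np.cos(theta)))
        psf[y, x] += 1.0
    return psf / psf.sum()
```

Convolving an image with such a kernel simulates the blur; deconvolving with the same kernel (e.g. via Richardson-Lucy) attempts to undo it.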

The fact is, I'm simply not aware of anybody doing anything really useful as it relates to general photography....the kind of photography I do (which is really all I care about).

Seriously, correct me if I'm wrong...it ain't like I'm sticking my head in the sand. I'm actively looking at any and all methods of improving image quality whether it be in the shooting or the processing or printing of images...show me something interesting and useful.

(BTW, I'll bet Eric is secretly watching this thread even if he won't/shouldn't post on it–so seriously, if anybody has any useful links, post them)

:~)
« Last Edit: February 14, 2010, 02:47:24 PM by Schewe » Logged
joofa
Sr. Member
****
Offline Offline

Posts: 485



« Reply #34 on: February 14, 2010, 03:58:53 PM »
ReplyReply

Quote from: Mark D Segal
Bart seems to think the "industry" is asleep and by implication perhaps he knows better than Adobe what should come next in the evolution of sharpening. The problem with this kind of statement is an implicit assumption that the engineers and mathematicians at or consulting to Adobe have some kind of vested interest against technical progress relative to their own previous work, or that they lack the competence and interest to explore alternative approaches, and that if they did indeed suffer from these conditions, Adobe's management has so little knowledge and interest to maximize shareholder value that they would simply cave in to and indefinitely tolerate the retrograde prejudices or technical limitations of their employees. Not believable in today's competitive, performance-oriented environment - especially as one sees how aggressively Adobe goes out to buy up expertise they don't have in-house and perceive as real value-added. But anyhow, what matters is results, not theories.

I think Bart suggested an option that is indeed viable technically. Separate from Bart's suggestion, if Adobe is not acting upon certain research out there, then that is their prerogative. Similarly, appealing to higher authorities such as MIT, Stanford, etc. does not help here. Academic research and commercializing academic research are quite different things. That is a whole new topic in itself, as US universities in engineering and computer science are getting increasingly distant from industrial needs and applications. But let's not get into that topic.
« Last Edit: February 14, 2010, 04:00:43 PM by joofa » Logged

Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins
bjanes
Sr. Member
****
Offline Offline

Posts: 2714



« Reply #35 on: February 14, 2010, 04:04:42 PM »
ReplyReply

Quote from: Mark D Segal
... I too have compared actual prints of images made using Focus Magic with those made using PK Sharpener, reiterating the Focus Magic settings using a range around the recommended ones, and quite frankly I couldn't produce on paper as pleasing a sharpening effect with FM as I could with PK. No doubt, others will have an opposite take on this. That's fine. So much to say there are no absolute truths here, but I remain quite persuaded that the mainstream approach to sharpening is still mainstream for good reason.
Mark,
It is always rewarding to exchange views with someone as knowledgeable and civil as yourself. FWIW, I've also played around with FocusMagic for capture sharpening, but have found that it offers no advantage over what I can do in ACR with much less effort. I have not migrated to Lightroom and still use ACR and PS, with output sharpening by PK Sharpener. Likely, the market for image restoration would be limited. Microscopists and astronomers have their own specialized software, and the pro who needs the highest image quality would likely go to MFDBs. Rank amateurs are satisfied with their JPEGs. A few of us enthusiasts would like to extend what we already have and cannot afford or justify a P65+. If I could double the size of my prints with my Nikon D3 using deconvolution, as Roger Clark claimed for his 8 MP 1DMII, I would gladly pay $200 for ImagesPlus or something similar. I won't hold my breath for Ver 2 of PK. And, as Roger pointed out, there is no reason the mainstream methods for sharpening can't be applied after image restoration.

Quote from: Mark D Segal
... I don't perceive any of what you call the "NIH" syndrome. Speaking of the relevant actors in the mainstream industry, you're conversing with forward-looking objective folks who don't mind looking at both the strengths and areas for improvement of what they've done before. I also place little credence in the corporate head-in-the-sand theories ...
I was not referring to Adobe or Eric Chan, but to the forum Rottweiler. I don't know Eric personally, but I have exchanged a few posts with him and find him to be a gentleman, very knowledgeable, and helpful. Adobe did introduce deconvolution in Photoshop some time ago, and further improvements have been hinted at for the next release. I have used Photoshop since Ver 3, have upgraded to every version since, and have been pleased with the enhancements in each one.




Logged
hcubell
Sr. Member
****
Offline Offline

Posts: 727


WWW
« Reply #36 on: February 14, 2010, 09:40:33 PM »
ReplyReply

Quote from: Mark D Segal
I would also recommend that readers reflect carefully upon the veracity of the comparison reports of the kind Howard Cubell mentioned. Much is in the eye of the beholder (subjective); it may or may not be "objectively correct", because we don't have an ISO standard defining how sharp is "too sharp" in a processed photograph. What I find over-sharpened or crunchy, Howard or others may think is great stuff - not that he's wrong and I'm right, but simply because our taste and perception of what constitutes sharpness may not be the same. This came to mind as I perused a number of the images on his website (BTW, some striking images there, Howard) - to the extent one can tell anything reliably from relatively low-res JPEGs posted on the internet and viewed on a display. Quite some time ago, in the context of another such discussion on this Forum, I too have compared actual prints of images made using Focus Magic with those made using PK Sharpener, reiterating the Focus Magic settings using a range around the recommended ones, and quite frankly I couldn't produce on paper as pleasing a sharpening effect with FM as I could with PK. No doubt, others will have an opposite take on this. That's fine. So much to say there are no absolute truths here, but I remain quite persuaded that the mainstream approach to sharpening is still mainstream for good reason.

Mark, FWIW, most of the photographs on my website are from scans of 6x7 chromes that were output sharpened with...PK! I actually found that PK does oversharpen files using the output setting for the web. I consistently had to lower the opacity of the sharpening layer returned by PK. As for your point about my observations about the R-L deconvolution sharpening tool in RD being anecdotal, you are completely right. I am hardly a scientific expert on sharpening theory, but I know what looks good to me, and I found the R-L tool to produce a nicely sharpened file that still had a very natural look. I have only compared a relatively small number of 39mp files from an H3D-39 using the deconvolution sharpening tool in Raw Developer v. other capture sharpening tools like Smart Sharpen in CS4 and the sharpening tools in Phocus and PK. Anyone can download a demo copy of Raw Developer to try the R-L deconvolution tool for himself.
Logged

Mark D Segal
Contributor
Sr. Member
*
Offline Offline

Posts: 6767


WWW
« Reply #37 on: February 14, 2010, 09:55:26 PM »
ReplyReply

Quote from: hcubell
Mark, FWIW, most of the photographs on my website are from scans of 6x7 chromes that were output sharpened with...PK! I actually found that PK does oversharpen files using the output setting for the web. I consistently had to lower the opacity of the sharpening layer returned by PK. As for your point about my observations about the R-L deconvolution sharpening tool in RD being anecdotal, you are completely right. I am hardly a scientific expert on sharpening theory, but I know what looks good to me, and I found the R-L tool to produce a nicely sharpened file that still had a very natural look. I have only compared a relatively small number of 39mp files from an H3D-39 using the deconvolution sharpening tool in Raw Developer v. other capture sharpening tools like Smart Sharpen in CS4 and the sharpening tools in Phocus and PK. Anyone can download a demo copy of Raw Developer to try the R-L deconvolution tool for himself.

Howard, as Raw Developer is Mac-only, not *anyone* can download it and try it. I have no experience with it - I'm on Windows, so I can't comment with any first-hand experience in this respect. But I know from reading reviews that the program has a good following.

Most interesting to hear what you tell us about your web images. Sharpening scans is a VERY delicate business. I've been through the same issues with both PK and other techniques in Photoshop alone. It starts upstream of web output. I have found that care is needed at the Capture Sharpen stage - to select the appropriate capture sharpener, examine the results and possibly lower the opacity, then likewise with sharpening for web. I have also found for scanned media that I could often dispense with output sharpening for web after appropriate capture sharpening and downsizing for the internet.

I agree that evaluation of results depends largely on what we see - irrespective of theory, and if print is the final destination, print comparisons are all that matter.
Logged

Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....." http://www.luminous-landscape.com/reviews/film/scanning_workflows_with_silverfast_8.shtml
ErikKaffehr
Sr. Member
****
Offline Offline

Posts: 6921


WWW
« Reply #38 on: February 15, 2010, 12:26:29 AM »
ReplyReply

Hi,

I guess that capture sharpening is not only about algorithms but also about how the algorithms are applied. PKS and LR use several steps, like gradient masking and halo suppression, in order to protect the image. It may be that some algorithm, like RL, works better than unsharp mask, but what really counts is what comes out of the printer. Getting all that right takes experimentation and experience, and with PKS and LR both have been done.

I'm pretty impressed with the sharpening in LR. I'm using Landscape as my standard preset but sometimes increase Amount to 95, decrease Radius to 0.4 and add some Masking, leaving Detail around 50.

In my view there is a tremendous amount of experience embodied in the sharpening methods in LR, ACR and PKS. It is very possible that it's "old thinking", but it works nevertheless.

It is quite possible, or even probable, that better reconstruction of detail can be done using more advanced deconvolution methods based on known or estimated PSFs. This is probably the case with blurred images, but capture sharpening is not about doing the photographer's work after the fact.
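The masked, halo-conscious sharpening Erik describes can be sketched roughly in Python. This is a toy unsharp mask gated by an edge mask, loosely in the spirit of LR's Amount/Radius/Masking controls and not Adobe's actual pipeline; the function name and the threshold value are made up for illustration:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def masked_unsharp(image, amount=0.95, radius=0.4, mask_threshold=0.05):
    """Unsharp mask restricted to edges.

    image values are assumed to be floats in [0, 1]. amount/radius echo
    the LR-style settings mentioned above; mask_threshold plays the role
    of a Masking control by suppressing sharpening in flat areas.
    """
    blurred = gaussian_filter(image, sigma=radius)
    high_pass = image - blurred                 # detail to be boosted
    # Edge mask: strong gradients get full sharpening, flat areas none.
    grad = np.hypot(sobel(image, axis=0), sobel(image, axis=1))
    mask = np.clip(grad / (mask_threshold + 1e-12), 0.0, 1.0)
    return np.clip(image + amount * mask * high_pass, 0.0, 1.0)
```

On a synthetic edge this boosts local contrast at the transition while leaving smooth areas (where the gradient, and hence the mask, is zero) untouched, which is exactly the noise-protection idea behind gradient masking.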

Best regards
Erik
Logged

Schewe
Sr. Member
****
Offline Offline

Posts: 5255


WWW
« Reply #39 on: February 15, 2010, 12:43:12 AM »
ReplyReply

Quote from: ErikKaffehr
...It may be that some algorithm, like RL, works better than unsharp mask, but what really counts is what comes out of the printer. Getting all that right takes experimentation and experience, and with PKS and LR both have been done.


In the end, it's all about the print...

You can dither and argue about what an image is supposed to look like on a computer display but unless the display is the final output, it don't mean shit...

The real arbiter of what is good and bad is the general user (and the general public) and how they evaluate a print. The important technology is that which actually has a practical impact. Theoretical research is just that...theoretical. No reason not to do it...big reason not to fall in love with what isn't there yet...

Again I have to caution all of you to take what is _NOW_ with a grain of salt...the elves at Adobe (whether you think they are good or evil) don't stand still. I realize the vast majority of people don't get the chance to interact with and see the brilliance of Thomas or Eric...in that regard I consider myself fortunate. On the other hand, just because somebody has done a web site citing a bunch of research doesn't make it the Holy Grail.

I don't discount pure research...pure research is the work of genius in search of a reason...we need that, no question. But we also need practical tools that actually friggin' work...that's ultimately what I'm most concerned about...how to make _MY_ work better. If you don't have something to offer, kindly get the F%&CK out of the way...

Really, if you don't have anything substantial to offer, shut the F%&CK up...
« Last Edit: February 15, 2010, 12:45:17 AM by Schewe » Logged