Author Topic: Lens correction in CS5 - why not in RAW converter?  (Read 16460 times)
Schewe
« Reply #20 on: April 20, 2010, 12:33:58 AM »

Quote from: ErikKaffehr
Thanks for your comment. I see your point.

Note I didn't say it was impossible...just that doing it parametrically is a much more difficult proposition than when working with pixels.

And it would be useful if people didn't presume that the reason Adobe hasn't already done it is some feeble attempt at keeping people from getting what they want in order to preserve Photoshop's position in the marketplace–which is ignorant on the face of it, because if that were a concern they would have killed Lightroom a long time ago...
Jeremy Payne
« Reply #21 on: April 20, 2010, 06:25:37 AM »

Quote from: Schewe
And it would be useful if people didn't presume that the reason Adobe hasn't already done it is some feeble attempt at keeping people from getting what they want in order to preserve Photoshop's position in the marketplace–which is ignorant on the face of it, because if that were a concern they would have killed Lightroom a long time ago...

Just because they haven't "killed it" doesn't mean they can't and don't differentiate consciously to segment the market.

Your position is not logically sound ... in fact, it is quite ignorant and naive.


opgr
« Reply #22 on: April 20, 2010, 06:42:04 AM »

Quote from: Schewe
Note I didn't say it was impossible...just that doing it parametrically is a much more difficult proposition than when working with pixels.

I would like to strongly disagree.

A parametric implementation is simply a deferred pixel implementation. But ultimately we users are just looking at the pixel result. LR has already sacrificed some of its real-time preview capability in order to implement the desired functionality, but even so, we are still basing our input on the pixel-based preview.

So, there is no such thing as "doing it parametrically". The parameters are always translated to pixel-based corrections. It is just a matter of whether that can be done:
1) in near real time,
and for the sake of this discussion:
2) whether there are easy inverse transforms for user input, if applicable.

A simple example is the fact that we are usually looking at a preview proxy which is reduced in size. If we then apply sharpening parameters, how should the result be applied/displayed?
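The proxy-preview question is easy to make concrete. Below is a toy sketch (illustrative only; not how ACR/LR actually render previews, and `unsharp` is a hypothetical helper) of why a sharpening radius chosen against the full-resolution image must be rescaled before it is applied to a reduced proxy:

```python
import numpy as np

def gaussian_kernel(sigma):
    # Discrete 1-D Gaussian, truncated at 3 sigma.
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1, dtype=np.float64)
    k = np.exp(-x * x / (2 * sigma * sigma))
    return k / k.sum()

def unsharp(img, amount, radius):
    # Separable Gaussian blur, then a classic unsharp mask.
    k = gaussian_kernel(radius)
    blurred = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, img)
    blurred = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, blurred)
    return img + amount * (img - blurred)

# The same user parameter applied to a half-size proxy must have its
# radius halved, or the preview looks sharper than the final render.
full = np.random.rand(64, 64)
proxy = full[::2, ::2]
preview = unsharp(proxy, amount=1.0, radius=2.0 * 0.5)  # radius scaled by proxy factor
final = unsharp(full, amount=1.0, radius=2.0)
```

Without the scale factor the proxy preview over-sharpens relative to the final render; this is one of the pixel-translation details a parametric pipeline has to get right.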



Having said that, here is why I am really opposed to the parametric argument against lens corrections:

Lens corrections are not the same as perspective corrections. The latter is not really a correction, and most users can easily live with perspective corrections being relinquished to later Photoshop editing, as it is usually a specialized task.

Lens corrections should, and this is very, very important, be done BEFORE any other processing, including and most importantly before DEBAYERING and before COLOR MANAGEMENT! Otherwise chromatic aberrations will have been obfuscated by cross-channel processing, which degrades debayer performance and makes lens corrections nearly impossible in later stages (in the same way that white-point corrections are near impossible in non-linear, gamma-corrected data).

It seems however that LR and related products (and yes, I am making a presumption here) do these corrections further down the pipeline, well after debayering and a lot of the other processing. It is ONLY IN THIS case that you run into the inverse-transform problems I have seen mentioned. I therefore make this presumption about processing order, and would also like to propose that the entire problem is not one of parametric vs. pixels, but one of processing order.


And I also believe this should be discussed pretty damn heavily and transparently at Adobe and elsewhere, because as it currently stands, the DNG "standard" is in a developmental stage, and it seems a lot of the definitions are being based on the Adobe processing paradigm. I have already mentioned that the color-management implementation of DNG is in serious need of reconsideration, and now the lens-correction definitions may be headed the same way, which in my not-so-humble opinion is more southward bound than is absolutely necessary.
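The claim above, that CA correction belongs before debayering, can be illustrated with a toy model that treats lateral CA as a per-channel magnification error. This is a sketch on already-separated channel planes with nearest-neighbour sampling, and the scale factors are made-up numbers; a real pipeline would resample the mosaic itself with proper subpixel filtering:

```python
import numpy as np

def radial_rescale(plane, scale):
    """Resample a channel plane with a pure magnification about the
    image centre (nearest-neighbour for brevity). scale > 1 samples
    farther out, shrinking content that was imaged too large."""
    h, w = plane.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)
    sy = np.clip(np.rint(cy + (yy - cy) * scale).astype(int), 0, h - 1)
    sx = np.clip(np.rint(cx + (xx - cx) * scale).astype(int), 0, w - 1)
    return plane[sy, sx]

# Hypothetical per-lens numbers: red imaged 1% larger, blue 0.5% smaller.
red_fix = lambda r: radial_rescale(r, 1.010)
blue_fix = lambda b: radial_rescale(b, 0.995)
```

Doing this before demosaicing means the color-difference statistics the demosaicer relies on are computed from spatially aligned channels.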


Regards,
Oscar Rysdyk
theimagingfactory
Schewe
« Reply #23 on: April 20, 2010, 10:34:39 AM »

Quote from: Jeremy Payne
Just because they haven't "killed it" doesn't mean they can't and don't differentiate consciously to segment the market.

Your position is not logically sound ... in fact, it is quite ignorant and naive.


No, actually, my "opinion" is based upon knowledge of both the engineering and product marketing teams on Lightroom...I've been involved in the original marketing studies and further use-case studies involving Lightroom. The fact is most photographers have an inflated view of their importance in the marketplace. Photographers make up almost ALL of the Lightroom user base but under 10% of the Photoshop user base, and an even lower percentage of the whole Creative Suite user base.

Really, Adobe isn't consciously segmenting the market so much as it is marketing to different markets. But presuming that Adobe is NOT putting something in Lightroom because they want to preserve it for Photoshop is what is ignorant and naive...
Schewe
« Reply #24 on: April 20, 2010, 10:49:17 AM »

Quote from: opgr
A parametric implementation is simply a deferred pixel implementation. But ultimately we users are just looking at the pixel result. LR has already sacrificed some of the real-time preview capabilities in order to implement the desired functionality, but even so, we are still basing our input on the pixel based preview.

So, there is no such thing as "doing it parametrically". The parameters are always translated to pixelbased corrections. It is just a matter of whether that can be done:
1) in near real time,
and for the sake of this discussion:
2) whether there are easy inverse transforms for user input if applicable.


Guess you didn't read the thread with Mark Hamburg's explanation of why a parametric approach to lens corrections is a problem, huh? See, auto lens correction is already built into Camera Raw 5.3/Lightroom 2.3 and beyond. Yep...for certain lens/camera combos, the camera raw pipeline is already doing the corrections for distortion, CA and vignetting. So, it's obvious that doing lens corrections parametrically is possible.

The problem comes when you allow the user some control over the lens correction settings (the parametric settings mind you).

When the auto corrections are applied, they are applied before any other pipeline processing so all other settings are coming after the correction. So spot healing and local corrections don't need to be adjusted for changes in the lens corrections because they are in the later stages of the pipeline.

So, if people didn't want any controls over lens corrections, the way it already works in Camera Raw could be expanded pretty easily. Well, "easily" is a matter of debate: I've made lens correction profiles for use in Photoshop CS5 (there will be a free utility on Labs.Adobe.com for making your own profiles) and it's tedious to do.

But, what if you wanted manual control over the lens distortion corrections or CA or wanted to adjust perspective or scale? Again, what is the expected behavior if the user has already put spot healing and local corrections down? Would Camera Raw distort the spot healing circles? Ideally, yes...would Camera Raw adjust the shape and coordinates of the local adjustment brush? Ideally, yes...

The problem is, you don't really grok the difficulty of putting lens corrections in a parametric processing pipeline...the problem is indeed far more complex than simply warping pixels and yes, it WOULD be optimal to do the corrections in the raw processing stage–including perspective corrections...
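The brush-coordinate problem described above can be made concrete with a hypothetical one-term radial model (real lens profiles use more terms plus tangential components, and nothing here is Adobe's actual math). The point is that every stored parametric edit position has to ride through the same transform as the pixels:

```python
import numpy as np

def distort(pt, k1, center=(0.5, 0.5)):
    """One-term radial model on normalized coordinates:
    p' = c + (p - c) * (1 + k1 * r^2).  Toy model only."""
    c = np.asarray(center, dtype=np.float64)
    p = np.asarray(pt, dtype=np.float64) - c
    r2 = float(p @ p)
    return tuple(c + p * (1.0 + k1 * r2))

# A spot-heal circle parked at normalized (0.8, 0.3) must be repositioned
# when the user dials in barrel correction, or it no longer covers the
# same image detail it was painted over.
spot = (0.8, 0.3)
moved = distort(spot, k1=-0.1)
```

With many local edits, each with a shape as well as a position, the pipeline has to re-map all of them every time a geometry parameter changes, which is part of why this is harder than simply warping pixels once.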
opgr
« Reply #25 on: April 20, 2010, 12:18:31 PM »

Quote from: Schewe
But, what if you wanted manual control over the lens distortion corrections or CA or wanted to adjust perspective or scale? Again, what is the expected behavior if the user has already put spot healing and local corrections down? Would Camera Raw distort the spot healing circles? Ideally, yes...would Camera Raw adjust the shape and coordinates of the local adjustment brush? Ideally, yes...

The problem is, you don't really grok the difficulty of putting lens corrections in a parametric processing pipeline...the problem is indeed far more complex than simply warping pixels and yes, it WOULD be optimal to do the corrections in the raw processing stage–including perspective corrections...


It obviously is a matter of definition: is this problem inherent to parametric brushes, or to geometric distortions?

But the problem is more intricate than that: I'm sure you have noticed that it can be impossible to create a decent black-and-white image when chromatic aberration residue is present? So it's not just a matter of geometric vs. parametric.

It is necessary to rigorously define the steps required for a decent raw conversion, then present the user these steps in a meaningful way. Something like tabs. Every tab/step contains the parameters that will influence the subsequent tabs/steps. Just as white-point and saturation settings affect each other, or contrast settings and USM. Black point and noise reduction, and god knows what else...

Once you reach the "creative tab" you can go ahead and do all kinds of parametric evil, but one should be aware that the actual raw conversion has been defined in previous steps. And it is those steps that are relevant to DNG.

Or to put it differently: the further down the pipeline, the less relevant for DNG...!?

Regards,
Oscar Rysdyk
theimagingfactory
Schewe
« Reply #26 on: April 20, 2010, 12:28:37 PM »

Quote from: GBPhoto
I don't understand why parametric control of lens distortion is desirable beyond on/off?  Assuming an accurate preset is available, I'd say the same goes for CA.

To a certain extent you might be right. However, the current Lens Correction in Photoshop CS5 DOES allow the use of custom lens correction profiles that users can build on their own...if you go through the matrix of potential lens profiles needed per camera, I think you will see that expecting Adobe to make such an extensive range of primary and 3rd-party lens profiles ain't gonna happen...hence the lens profile application that Adobe will be giving away, much like the DNG Profile Editor was.

Lens makers themselves can also provide lens data for the creation of profiles. DP Review reports Sigma has already provided lens profile data for almost ALL of their 3rd party lenses...

But don't overlook the fact that lens corrections are really the tip of the iceberg. While not a lens correction per se, perspective adjustments are also important for accurate rendering of final images. And, if the Camera Raw team is gonna go through the hassle of doing lens corrections they might as well solve the perspective correction issue at the same time.

Which comes back to the basic issue: if the team is gonna do it, they want to do it well and not half-assed...and doing it well isn't easy. Which has ZERO to do with market segmentation or preserving Photoshop's market superiority or any of the other "reasons" we haven't seen this stuff in ACR/LR yet...
Schewe
« Reply #27 on: April 20, 2010, 12:36:09 PM »

Quote from: opgr
It is necessary to rigorously define the steps required for a decent raw conversion, then present the user these steps in a meaningful way. Something like tabs. Every tab/step contains the parameters that will influence the subsequent tabs/steps. Just as white-point and saturation settings affect each other, or contrast settings and USM. Black point and noise reduction, and god knows what else...


No thanks...that breaks the parametric paradigm. I don't want to be constrained by what I've already done...while the pipeline should be free to do whatever it needs to do in whatever order it wants to do it in, I don't want that constraint.

I want to be able to do my spotting or local corrections at ANY time and not be forced to do something in a particular order...that's way too much like a pixel-editing approach, where what comes after is completely dependent on what was done before. No thanks, I've got Photoshop for that...
Slobodan Blagojevic
« Reply #28 on: April 20, 2010, 12:48:56 PM »

Quote from: Schewe
... Photographers make up almost ALL of the Lightroom user base while under 10% of the Photoshop user base...
Thank you Mr. Schewe for this wonderful piece of information! It finally removes the nagging sense of guilt I have because I cannot see much (if any) reason to upgrade from CS4 (i.e., the "improvements" in CS5 appeared to be aimed at the other 90%).
Schewe
« Reply #29 on: April 20, 2010, 01:34:59 PM »

Quote from: Slobodan Blagojevic
Thank you Mr. Schewe for this wonderful piece of information! It finally removes the nagging sense of guilt I have because I cannot see much (if any) reason to upgrade from CS4 (i.e., the "improvements" in CS5 appeared to be aimed at the other 90%).


Actually, you would be wrong. Aside from Camera Raw 6, Photoshop CS5 will have a lot of additional functionality for photographers, in spite of the fact that photographers are a minority of the user base...you upgrade or don't upgrade based on the upgrade feature set (like Content-Aware Fill, HDR Pro and things like auto lens correction), not because of guilt...(or at least I don't).

Look, do what you want, really, no skin off my nose...really. You're only shortchanging yourself.
ejmartin
« Reply #30 on: April 20, 2010, 08:05:52 PM »

Quote from: opgr
Having said that, here is why I am really opposed to the parametric argument against lenscorrections:

Lenscorrections are not the same as perspective corrections. The latter is not really a correction, and most users can easily live with perspective corrections being relinquished to later photoshop editing, as it usually is a specialized task.

Lenscorrections should, and this is very very important, should be done BEFORE any other processing, including and most importantly before DEBAYERING and before COLORMANAGEMENT! Otherwise chromatic aberrations will have been obfuscated by crosschannel processing which degrades debayer performance and makes lenscorrections nearly impossible in later stages. (in the same way as white-point corrections are near impossible in non-linear gammacorrected data).

It seems however that LR and related products (and yes, I am making a presumption here) do these corrections further down the pipeline. Well after Debayering and a lot of the other processing. It is ONLY IN THIS case that you run into inverse-transform problems that I have seen mentioned. I therefore make this presumption of processing order, and would therefore also like to propose that the entire problem is not one of parametric vs pixels, but one of processing order.

As I see it, the only lens correction that is advantageously done before demosaic is CA. Many good demosaic algorithms are based on color differences, and CA feeds the demosaic errant color differences, leading to a poorer demosaic.

Geometric corrections such as barrel distortion, etc., are better left until after demosaic, where the full resolution of the demosaiced image can best be used to interpolate the geometric corrections. Trying to do that before demosaic is hampered by the fact that the CFA data is of reduced resolution, with the additional resolution being encoded in color correlations that the demosaicing process is designed to decode (if done well). Doing geometric corrections before demosaic risks destroying the color correlations that the demosaic needs, if the algorithm is designed to use them.
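The post-demosaic approach argued for here can be sketched as an inverse warp: for every output pixel, sample the demosaiced plane at the radially displaced source position with bilinear interpolation. This is purely illustrative (single channel, a one-term model with made-up normalization), not any product's actual resampler:

```python
import numpy as np

def bilinear(img, y, x):
    # Sample a 2-D image at fractional coordinates with bilinear weights.
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, img.shape[0] - 1), min(x0 + 1, img.shape[1] - 1)
    fy, fx = y - y0, x - x0
    top = img[y0, x0] * (1 - fx) + img[y0, x1] * fx
    bot = img[y1, x0] * (1 - fx) + img[y1, x1] * fx
    return top * (1 - fy) + bot * fy

def undistort(img, k1):
    """Inverse-warp a full-resolution (demosaiced) plane through a
    one-term radial model; interpolation quality benefits from having
    all the resolution available."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            r2 = ((y - cy) / cy) ** 2 + ((x - cx) / cx) ** 2
            sy = cy + (y - cy) * (1 + k1 * r2)
            sx = cx + (x - cx) * (1 + k1 * r2)
            out[y, x] = bilinear(img, np.clip(sy, 0, h - 1), np.clip(sx, 0, w - 1))
    return out
```

Running the same warp on subsampled CFA data would have to interpolate across missing photosites, which is exactly the objection raised above.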

emil
ErikKaffehr
« Reply #31 on: April 20, 2010, 11:46:50 PM »

Hi,

I have the impression that Photoshop is used for many things, from touching up images to creating new ones. There are also several editions, with the more expensive versions adding capabilities which may not be needed by photographers.

As I see it Photoshop tries to have all the tools needed. It's often lagging a bit compared to the competition. I'd probably use Autopano Pro instead of the Photomerge option in PS CS3 (what I have). I have the impression that the tools get refined for each release. On the other hand I feel that HDR in PS CS3 is quite good and prefer to use it compared with other tools I have. The main reason I use PS is that it is sort of defining the industry. So if you are told to use JPEGs with Quality 10 you know what that means. Not that I'm a HDR enthusiast, anyway.

I'll upgrade from CS3 to CS5, definitely.

Best regards
Erik


Quote from: Slobodan Blagojevic
Thank you Mr. Schewe for this wonderful piece of information! It finally removes the nagging sense of guilt I have because I cannot see much (if any) reason to upgrade from CS4 (i.e., the "improvements" in CS5 appeared to be aimed at the other 90%).

Schewe
« Reply #32 on: April 21, 2010, 02:53:07 AM »

Quote from: ejmartin
As I see it, the only lens correction that is advantageously done before demosaic is CA.

I don't disagree...however, I do think there are some real advantages to applying lens distortion correction, as well as vignetting correction, inside the raw processing pipeline and combining those corrections with perspective correction in one fell swoop...

The more you touch the data with various and multiple algorithms, the more eventual degradation of that data...

While it ain't easy, don't be at all surprised to see an ACR/LR lens correction/perspective correction solution sooner rather than later. The fact that it's incredibly complicated and difficult to do parametrically doesn't mean Thomas and team couldn't figure it out.

So, bottom line, quit your bitching...you'll get what you want (and it'll be better than you expected) if you exhibit a degree of patience. The odds are that all the "agitation" over lens corrections and perspective crop not being apparent in Lightroom 3 (for whatever reason) will make the people complaining the loudest sound, well, kinda petty...

Hint, hint...say no more–wink's as good as a nod!

jjj
« Reply #33 on: April 21, 2010, 03:16:38 AM »

Quote from: Slobodan Blagojevic
Thank you Mr. Schewe for this wonderful piece of information! It finally removes the nagging sense of guilt I have because I cannot see much (if any) reason to upgrade from CS4 (i.e., the "improvements" in CS5 appeared to be aimed at the other 90%).
Not the case at all. Apart from the JDIs, which ease workflow, things like Refine Edge will make a lot of photographers very happy. See the Hair cut out Tutorial to see what this can do. Then there is Content-Aware Fill, which is kind of handy. And if you are on a Mac you get 64-bit power, and it's faster generally. ACR is improved significantly and can make a big difference to high-ISO/poor-lighting images.

Tradition is the Backbone of the Spineless.   Futt Futt Futt Photography
opgr
« Reply #34 on: April 21, 2010, 03:29:14 AM »

Quote from: ejmartin
As I see it, the only lens correction that is advantageously done before demosaic is CA.

Possibly, but:

1. scaling the red and blue channels for CA means that the data is already compromised, and resampling those channels twice is likely worse than the gain from not resampling the green channel(s) prior to debayering,

2. at an early stage, resampling is probably close to transitive in the sense that
DEBAYER-sample(DEBARREL-sample) == DEBARREL-sample(DEBAYER-sample),

3. resampling the raw-data influences moiré, and thus influences moiré corrections that may be present in the debayer stage,


Having said that, one might discuss the exact details of the internal algorithms, but the question remains how this should be represented in a DNG file. And also how this should be represented in the GUI.

DNG file:
If you think of a DNG file as the RAW data plus parameters for interpretation, then it becomes very apparent that creative edits are not initially relevant for RAW conversion. Neither is interpolation between two color profiles.

GUI:
What should actually be exposed to the user, and more importantly HOW? We now have a hefty 9 sliders in the Detail tab of LR, but is there a reasonable method, a set of steps, that will get me to the optimal result most efficiently? Is there a single optimal setting?

If I have found an optimal setting in the Detail section, will adjustments in the tonal section adversely affect the detail settings? The detail settings seem very sensitive to small changes; I find it very difficult to arrive at a reasonable setting.

By reasonable method I mean something like:

1. set all sliders to default,
2. start by adjusting slider A until you see this,
3. then adjust slider B until you see this,
4. if you see this, then adjust slider C, otherwise leave it untouched,
5. etc...


Similarly, there are some noise reductions that may be useful prior to debayering (and lens corrections), like dead-pixel elimination. If these corrections are defined properly, they translate easily to a DNG definition, as well as to a proper GUI and processing method. If you see a smeared pixel, first try the dead-pixel elimination button/slider in the debayer section. If that doesn't change it, use whatever other section the raw converter producer thinks appropriate. Etc...
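Dead-pixel elimination before debayering is straightforward to sketch, because on a Bayer mosaic the comparable neighbours of a photosite sit two positions away in each axis. A toy version (interior pixels only, hard threshold; both simplifications are my own, not any converter's actual method):

```python
import numpy as np

def kill_dead_pixels(mosaic, thresh=0.5):
    """Replace pixels that differ wildly from the median of their four
    same-colour Bayer neighbours (two photosites away in each axis)."""
    out = mosaic.copy()
    m = mosaic
    # Same-colour neighbours on a Bayer grid sit at +/-2 in each axis.
    nb = np.stack([m[:-4, 2:-2], m[4:, 2:-2], m[2:-2, :-4], m[2:-2, 4:]])
    med = np.median(nb, axis=0)
    core = m[2:-2, 2:-2]
    bad = np.abs(core - med) > thresh
    out[2:-2, 2:-2] = np.where(bad, med, core)
    return out
```

Because the test only ever compares same-colour photosites, it works on the mosaic directly and never smears an outlier across channels, which is the point of doing it before debayering.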

Positional problems with a parametric brush don't seem to me to be a very good argument for hampering proper implementation of RAW-conversion features and standards. Lens corrections seem more intrinsic to raw conversion than parametric brushes...

Regards,
Oscar Rysdyk
theimagingfactory
ejmartin
« Reply #35 on: April 21, 2010, 07:48:17 AM »

Quote from: Schewe
I don't disagree...however, I do think there are some real advantages to applying lens distortion correction, as well as vignetting correction, inside the raw processing pipeline and combining those corrections with perspective correction in one fell swoop...

The more you touch the data with various and multiple algorithms, the more eventual degradation of that data...

While it ain't easy, don't be at all surprised to see an ACR/LR lens correction/perspective correction solution sooner rather than later. The fact that it's incredibly complicated and difficult to do parametrically doesn't mean Thomas and team couldn't figure it out.

I agree that touching the data with multiple manipulations degrades it.  However, once the data is demosaiced it doesn't seem to me that it should matter all that much whether further manipulations are carried out in the converter or in the editor, if the editor and converter teams are talking to one another and use a common data representation.  Degradation would come from converting from ACR's internal representation to an output color space (and gamma), and then having to convert back to the internal representation (assuming the editor uses the same internal representation as the converter, I don't know if that's the case) in order to do geometric corrections.  In such a case, it would be helpful if the converter output the internal representation directly for the editor to pick up if the correction was to be done in the editor.

Quote
So, bottom line, quit your bitching...you'll get what you want (and it'll be better than you expected) if you exhibit a degree of patience. The odds are that all the "agitation" over lens corrections and perspective crop not being apparent in Lightroom 3 (for whatever reason) will make the people complaining the loudest sound, well, kinda petty...

Hint, hint...say no more–wink's as good as a nod!

Not sure who you're replying to here; the post you're replying to is my first in this thread, in which I did not moan about any current or impending software releases by Adobe. Nor am I agitated...

emil
NikoJorj
« Reply #36 on: April 21, 2010, 09:12:41 AM »

Quote from: GBPhoto
I don't understand why parametric control of lens distortion is desirable beyond on/off?  Assuming an accurate preset is available, I'd say the same goes for CA.
For CA at least (and I'd think the same for distortion?) the error is distance-dependent: the correction could be made using the focus distance, but first, I'm not sure it's available as documented or known EXIF data in all cameras on the market, and second, I'm not sure it accurately describes the error affecting every object in the image.
That's a pity, because I really do agree that one ideally shouldn't have to fiddle with such settings: there is one right setting, the one that cures the error, period.

I completely agree too that once distortion is under control, it just calls for perspective control (which I personally need much more than distortion correction), which OTOH does need fine-tuning and a choice between the Charybdis of vanishing roofs and the Scylla of the too-straight-not-to-fall-on-my-face square facade.
Conceptually, it's just a matter of deforming the reference space with the image - a transform which should be reversible (and therefore applicable to an edit's coordinates) as long as the image content is recognizable, shouldn't it?
And I'd say that transforming this rather simple conceptual view into actual working code is what I give Adobe hard-earned money for... So I'll just bow to Jeff Schewe's last remark and keep my fingers crossed (that may provide a good excuse for typos).
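The reversibility appealed to above does hold numerically for the usual radial models: the forward map has no convenient closed-form inverse, but a fixed-point iteration recovers the original radius to machine precision for realistic distortion strengths. A sketch with an assumed one-term model (the coefficient value is made up):

```python
def distort_r(r, k1):
    # Forward one-term radial model applied to the radius alone.
    return r * (1.0 + k1 * r * r)

def undistort_r(r_d, k1, iters=25):
    """Invert r_d = r * (1 + k1 * r^2) by fixed-point iteration;
    converges for the modest k1 magnitudes typical of stills lenses."""
    r = r_d
    for _ in range(iters):
        r = r_d / (1.0 + k1 * r * r)
    return r

# Round-trip an edit coordinate's radius through the model and back.
round_trip = undistort_r(distort_r(0.7, -0.15), -0.15)
```

This is the machinery that would let a converter carry edit coordinates both ways across a geometry change, as long as the transform stays well-behaved.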

Nicolas from Grenoble
A small gallery
loonsailor
« Reply #37 on: April 21, 2010, 09:46:33 AM »

Quote from: Schewe
See, auto lens correction is already built into Camera Raw 5.3/Lightroom 2.3 and beyond. Yep...for certain lens/camera combos, the camera raw pipeline is already doing the corrections for distortion, CA and vignetting. So, it's obvious that doing lens corrections parametrically is possible.

Just to be clear, are you saying that ACR 5.3/LR 2.3 and later are doing DxO-type automatic correction of barrel, pincushion, etc. for some lens/body combos? This is the first I've heard of that. How would one know it's happening, and for which lens/body combos? Seems like something Adobe would be bragging about, or at least disclosing somehow (and maybe allowing to be enabled/disabled).
madmanchan
« Reply #38 on: April 21, 2010, 11:09:53 AM »

To address some questions/issues raised in this thread:

There are pros/cons to the placement of a given "lens correction" setup in a processing pipeline. For example, it is conceivable that one might want to do vignette compensation as early as possible, e.g., as soon as the mosaic image is in a linear light space. It would have the advantage of processing only 1 image plane (instead of 3 or 4, for color images, if done later), so it would take less time. On the other hand, it would be preferable to keep the processed image with a data type that supports overrange values (i.e., outside the nominal [0,1] space). Otherwise if you blow out highlights by applying vignette compensation and clip the data, you can't get it back later. One of the many tradeoffs involved.
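The overrange point in the paragraph above is easy to demonstrate: apply a vignette gain in floating point and corner values can exceed 1.0 yet remain available to later stages, whereas clipping at this step would discard those highlights for good. A toy radial gain follows (not Adobe's actual falloff model; the gain curve and numbers are invented for illustration):

```python
import numpy as np

def devignette(plane, gain_edge):
    """Multiply by a radial gain rising to gain_edge at the corners.
    Working in float32 keeps values above 1.0 recoverable; clipping
    to [0, 1] here would permanently lose them."""
    h, w = plane.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    r2 = ((yy - cy) / cy) ** 2 + ((xx - cx) / cx) ** 2
    r2 = r2 / r2.max()  # normalized: 1.0 at the farthest corner
    gain = 1.0 + (gain_edge - 1.0) * r2
    return (plane.astype(np.float32) * gain).astype(np.float32)

corner_bright = devignette(np.full((9, 9), 0.9, dtype=np.float32), 2.0)
# Corner values now exceed 1.0 but survive for later tone mapping.
```

Doing this on the single mosaic plane instead of three demosaiced planes is the cost saving mentioned above; the data-type tradeoff is the same either way.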

The question of DNG and its interaction with (optional) processing: there are three broad levels at which one can view DNG.

The simplest is as the basic image container; just the image data and some required metadata that describes it (e.g., CFA plane order, clipping level); in that case it is very much like existing TIFF or TIFF-EP. Without these basics, you simply could not get a useful result (like trying to read a TIFF without knowing the image dimensions).

Next step up is additional (optional) metadata that may be useful for rendering, such as extra color profiles and/or processing instructions (e.g., to do per-column scaling calibration). The format and capabilities of these instructions were determined largely by feedback provided by the camera makers (familiar names in Japan and Germany). These are all documented in the DNG spec and implemented in the DNG SDK; they are technically independent of the raw converter, in the sense that any raw converter that wishes to support these additional goodies is welcome to do so (obviously they are not required to, and many choose not to); no legal/political/patent stuff involved.

Finally, there is optional processing metadata that is raw converter-specific, e.g., XMP metadata from Camera Raw. These are obviously proprietary and only make sense to the raw converter in question (e.g., there are a gazillion ways to define "Saturation" and there's no reason ACR's definition should be the same as the one used by another converter like Capture One or Aperture).

Order of operations in CR/LR and the preservation of the parametric workflow are both taken seriously by Adobe, because for many (not all, but many) users they are integral parts of the overall editing experience. It is true that rendering is essentially deferred. As much as possible, we want to preserve the concept of being able to tweak the sliders in whatever order you feel comfortable with, without having to undo/redo one type of edit just to be able to use another. We do not consider it an acceptable user experience, for example, to require a user to redo local corrections to apply (or remove), say, a distortion adjustment. There are lots of existing images out there in the real world that have local adjustments, spot healing, etc. already applied, and we respect that. In general, we do not consider X a shippable feature until X works appropriately with all other editing features. At least, that's the standard we try to hold. Admittedly we don't always meet that standard, but that's what we strive for.

Getting back to the original question, I hope to provide more information about the situation very soon.
« Last Edit: April 21, 2010, 11:11:35 AM by madmanchan »

Schewe
« Reply #39 on: April 21, 2010, 11:40:43 AM »

Quote from: ejmartin
Not sure who you're replying to here; the post you're replying to is my first in this thread, in which I did not moan about any current or impending software releases by Adobe. Nor am I agitated...


You are right...I'm sorry...that part of my post was NOT directed at you...it was directed at some of the other posters in this thread whose hand wringing is getting a bit tiresome. I should have directed it to them and away from you...