Author Topic: If Thomas designed a new Photoshop for photographers now...  (Read 69816 times)
Mark D Segal
« Reply #200 on: May 13, 2013, 09:30:43 AM »
But it has never been Adobe's way.

They always wanted to keep Photoshop essential for some tasks.

That's why LR doesn't have some interesting tools.

But perhaps the initiative of Jeff (and the reaction of a lot of people to the "cloud only version") will push Adobe to propose a special Photoshop for Photographers.

That's what I hope!

Thierry

I don't believe that conspiracy theories constitute an accurate guide to explaining any of this. There are numerous other, more convincing reasons - design intent, technical and practical factors - that differentiate one application from the other. For the purposes of this thread it will be more productive to focus on the latter.

Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....." http://www.luminous-landscape.com/reviews/film/scanning_workflows_with_silverfast_8.shtml
Jim Kasson
« Reply #201 on: May 13, 2013, 09:32:35 AM »
Jeff, I don't see the pixel processing paradigm as limited to using a particular precision for intermediate calculations, a point I alluded to in Reply #167. Therefore, I don't believe the pixel paradigm has to produce less optimal results.

Jeff, let me work through an example to make sure that what I'm saying is clear. More than occasionally, but not often, I find that it's not practical to do what I want to do in Ps. For that image processing, I write Matlab code. I start out with images in 16-bit TIFF files, so when they come into Matlab, they are 16-bit gamma-compressed unsigned integers. Because I'm lazy and I want to get the most out of my time spent programming, I immediately convert them to 64-bit linear floating point representation. That way I don't have to worry about overflow or underflow, or the loss in precision that can occur when, for example, subtracting one large number from another to yield a small number.
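A minimal Octave/MATLAB sketch of that first step - a plain 2.2 gamma and a made-up file name are assumed here; real code would use the file's actual transfer curve:

  img16 = imread('photo.tif');                 % hypothetical 16-bit, gamma-compressed TIFF
  img   = double(img16) / 65535;               % 64-bit floats, scaled to 0..1
  lin   = img .^ 2.2;                          % rough gamma decode to linear light (plain 2.2, not a real profile)
  % ... all intermediate arithmetic happens in linear double precision ...
  out   = min(max(lin, 0), 1) .^ (1/2.2);      % clip and re-encode
  imwrite(uint16(round(out * 65535)), 'photo_out.tif');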

I use objects, sometimes one with several methods, sometimes many. I think of the objects as analogous to layers: they take an image, group of images, or part of an image in, do something to it, and leave something for the next object. The methods are parameterized, so I don't have to start all over to tweak the algorithms, but that can't be dispositive, because with smart objects I can tweak layer settings in Ps. The order of operations is rigidly defined by the flow of the programming.
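A stripped-down sketch of that object-as-layer idea, again in Octave/MATLAB, with toy operations and invented constants:

  ops = {};
  ops{end+1} = @(x) min(x * 1.25, 1);                                  % "exposure" stage: gain, clipped at 1
  ops{end+1} = @(x) x .^ 0.9;                                          % "curves" stage: a mild tone curve
  ops{end+1} = @(x) cat(3, x(:,:,1)*1.02, x(:,:,2), x(:,:,3)*0.98);    % crude white-balance stage
  result = lin;                                                        % 'lin' from the sketch above
  for k = 1:numel(ops)
      result = ops{k}(result);                                         % each stage consumes the previous stage's output
  end

Re-running the loop after changing one of the handles or its constants is the command-line analogue of tweaking a layer setting.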

I think of what I'm doing as pixel processing.

Am I wrong?

Jim

walter.sk
« Reply #202 on: May 13, 2013, 09:33:11 AM »
This is a joke, right? You are going to define what a PHOTOGRAPHER is for the rest of us? Who appointed you guru? Ever seen a straight print of Ansel Adams' "Moonrise"? One of the most famous photographs in history - by your definition he's not a photographer because he manipulated the crap out of it. Ever heard of Jerry Uelsmann, a very important figure in the history of PHOTOGRAPHY?
+1
hjulenissen
« Reply #203 on: May 13, 2013, 09:56:58 AM »
Quote from: Jim Kasson in Reply #201, quoted in full above.
MATLAB is a perfect example of an expressive scripting language capable of expressing any imaginable image processing operation. Since the native datatype in MATLAB is the double-precision float, that is the obvious choice for processing in MATLAB (other datatypes are possible, but less neat).

So in principle, both Lightroom and Photoshop could (I guess) be reduced to fancy, snappy, interactive GUI front-ends (something that MATLAB blows at) whose output is a set of MATLAB (or MATLAB-like) instructions that can be interpreted by MATLAB (or the open-source Octave) to transform an image. Chances are good that (many of) the Adobe R&D image processing people use MATLAB to prototype new algorithms. I guess that most algorithms are fundamentally discrete approximations to continuous ideal behaviour, although table lookups and the like can also be done.

So why is this a bad idea for a product? It would probably be painfully slow: doing generic vector/matrix calls to a double-precision library is never going to be as fast as (potentially) hand-coded SSE/AVX vectorized intrinsics/assembler/Intel libraries for 8/16-bit integer datatypes, where the implementer knows just how much error can be tolerated.
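To make the "front-end emits instructions" idea concrete, here is a sketch of what such an emitted recipe and its interpreter might look like in Octave/MATLAB - the field names and values are invented, not anything Adobe actually produces:

  recipe = struct('exposure', 0.5, 'gamma', 2.2, 'wb', [1.05 1.00 0.95]);   % a hypothetical emitted "recipe"

  x = (double(imread('photo.tif')) / 65535) .^ recipe.gamma;    % decode to linear double
  x = x * 2 ^ recipe.exposure;                                  % exposure, in stops
  for c = 1:3
      x(:,:,c) = x(:,:,c) * recipe.wb(c);                       % per-channel white balance
  end
  x = min(max(x, 0), 1) .^ (1 / recipe.gamma);                  % clip and re-encode
  imwrite(uint16(round(x * 65535)), 'photo_out.tif');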



-h
keithrsmith
« Reply #204 on: May 13, 2013, 10:16:23 AM »
One question that needs to be thought about is: "What would Adobe do if a new Photoshop appeared?"

I think they would quickly rush out an "Enhanced Elements" with the important missing things added (see this thread for suggestions) - 16-bit, all adjustment layers, all colour spaces, etc. - and sell it at an attractive price, which would effectively kill off the competition.

I believe that the main mistake Adobe has made in this whole Cloud issue is not having a standalone Photoshop. This is the one app out of the whole suite that seems to be causing the most issues - the main one that I can see being the fear of not being able to revisit PSD files created by the latest, greatest version once your subscription has lapsed.
It is also the app that many part-time and amateur users have, and for which there is no easy alternative. For almost all of the other apps - video, audio, ... - there are viable alternatives, and the market is predominantly professional; plus it is IMO much less likely that old projects will be revisited in the way that old PSDs may be.

Let's hope Adobe sees sense and reinstates a perpetual-licence Photoshop - an enhanced Elements will do.

Keith
Jim Kasson
« Reply #205 on: May 13, 2013, 10:19:05 AM »
So why is this a bad idea for a product? It would probably be painfully slow, doing generic vector/matrix calls to a double-precision library is never going to be as fast as (potentially) hand-coded SSE/AVX vectorized intrinsics/assembler/Intel libraries for integer 8/16-bit datatypes where the implementer knows just how much error can be tolerated.

I was using the example as a way to crystallize the discussion, not as a concrete product proposal, but thanks for bringing practicality into the picture.

I don't think doing intermediate calcs in FP (maybe not DP FP, but FP) is necessarily impractical. We are seeing a proliferation of DSP-derived processors on graphics adapters. Many of those processors support FP, and there is a trend to make the results of calculations available to programs running in the main processors. Indeed, you can buy add-in cards that do DSP-like processing that have no connection to a display; they're expensive and power hogs, but that should change. Image processing is relatively easily parallelized.

Doubling or quadrupling the precision of representation will cause image processing programs to want more memory, but that's getting cheaper all the time. (I somewhat sheepishly admit to buying a machine with 256 GB of RAM for Matlab image processing.)
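A rough back-of-the-envelope check, assuming a 36-megapixel RGB image purely for illustration:

  mp = 36e6;                                   % pixels in a hypothetical 36-megapixel image
  mib_uint16 = mp * 3 * 2 / 2^20               % ~206 MiB for one RGB image at 16 bits per channel
  mib_double = mp * 3 * 8 / 2^20               % ~824 MiB at 64-bit double - four times as much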

Another thing that could ameliorate the inherently slower speed of this intermediate image processing relative to custom-tweaked code: a lot can be done in the background. For a program to feel crisp to the user, all that's necessary is to update the screen fast. The number of pixels on the screen is in general fewer than the number in the file, so there's less processing to keep the screen up to date than to render the whole file. Just in case the user decides to zoom in, the complete image should be computed in the background. This also avoids an explicit rendering step, which could be an annoyance for the user.
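A crude sketch of that preview/background split, reusing 'lin' and 'ops' from the sketches above (decimation stands in for proper resampling):

  preview = lin(1:4:end, 1:4:end, :);                     % screen-sized proxy
  for k = 1:numel(ops), preview = ops{k}(preview); end    % cheap enough to redo on every slider move
  % meanwhile, in the background (e.g. on a worker via parfeval, if the Parallel Computing Toolbox is available):
  full = lin;
  for k = 1:numel(ops), full = ops{k}(full); end          % full-resolution render, ready if the user zooms in or exports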

All this background/foreground stuff makes life harder for the programmers. On the other hand, think of the time they'll save not tweaking code.

Blue sky, right?

Jim

Jim Kasson
« Reply #206 on: May 13, 2013, 10:24:48 AM »
It would probably be painfully slow, doing generic vector/matrix calls to a double-precision library is never going to be as fast as (potentially) hand-coded SSE/AVX vectorized intrinsics/assembler/Intel libraries for integer 8/16-bit datatypes where the implementer knows just how much error can be tolerated.

The thing that Jeff said that started me on this way of thinking was -- paraphrasing -- you don't want to do in a pixel processor what you can do in a parametric processor partially because of limited precision in the pixel processor causing potential damage to the image.  That implies that the implementer doesn't always know just how much error can be tolerated.
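A toy Octave/MATLAB experiment illustrating the kind of damage being described - round-tripping a mild curve fifty times in 16-bit versus double precision:

  x16 = uint16(0:65535);  x = double(x16) / 65535;
  y16 = x16;  y = x;
  for k = 1:50
      y16 = uint16(round((double(y16) / 65535) .^ 1.1     * 65535));   % mild curve, quantized back to 16 bits
      y16 = uint16(round((double(y16) / 65535) .^ (1/1.1) * 65535));   % its inverse, quantized again
      y   = (y .^ 1.1) .^ (1/1.1);                                     % the same pair of operations in double
  end
  max(abs(double(y16) - double(x16)))   % accumulated 16-bit rounding error, in counts
  max(abs(y - x))                       % double-precision drift: on the order of 1e-15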

Jim

Ronald N. Tan
« Reply #207 on: May 13, 2013, 11:29:24 AM »
As a portraitist specializing in men's fashion and beauty photography, I need Liquify and Puppet Warp. I use Calculations and Apply Image to build luminosity masks and use them creatively to address tonality and shape in my photographs of men. I need Gaussian Blur and the High Pass filter. I could live without the Custom filter and "deconvolution sharpening." Come to think of it, I use the tools and commands in Photoshop depending on what kind of image I am working on. The Content-Aware tools in CS6 have saved me time in texture and background repairs on a few occasions.
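For reference, a rough Octave/MATLAB sketch of the luminosity-mask technique - Rec. 709 weights and a made-up file name are assumed, and Photoshop's Calculations/Apply Image actually operates on the channels of the working space, so this is only an approximation:

  img    = double(imread('portrait.tif')) / 65535;                      % hypothetical 16-bit source
  lum    = 0.2126*img(:,:,1) + 0.7152*img(:,:,2) + 0.0722*img(:,:,3);   % luminosity mask (Rec. 709 weights)
  lights = lum .* lum;            % multiplying the mask by itself narrows it toward the highlights
  darks  = (1 - lum) .^ 2;        % the squared complement selects the shadows
  % 'lights' or 'darks' can then weight a local adjustment, much as an opacity mask would in Ps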

Get rid of the video and 3D and 8-Bit filters (I cannot use them anyway).

It is OK if this version of Photoshop does not come bundled with ACR. I don't use ACR. For RAW processing, I am using PhaseONE CaptureONE PRO 7.1.1.

opgr
« Reply #208 on: May 13, 2013, 11:34:41 AM »
The problem is not so much the precision of the rendering pipeline; the problem is stacking.

Especially if one of the steps in the stack involves blur in one way or another (think USM, local contrast enhancement, etc.). In an interpreted pipeline this would not only increase sampling requirements disproportionately, it would also disrupt the parallelism of the graphics card's internals. Additionally, some of the newer sharpening techniques rely on iteration. If you want to implement those types of functions, it becomes progressively more problematic if the entire pipeline is interpreted.

Secondly, you also want to determine what effect the user expects to see when they change some previous step.
If they use a parametric brush on some particular location in the image and then decide to turn on lens corrections, or apply a perspective correction, what should happen to the position (and shape) of the brush stroke? And what if you stack images for panorama stitching and the user does the same?

Note how simple misunderstandings can occur:
If I ask you to "blend" images A and B, do you interpret that as:
1. start with A and blend B on top (not commutative),

or do you interpret that as:
2. create a mix of A and B (commutative).

What about if A and/or B have masks?
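The two readings, spelled out on toy data in Octave/MATLAB:

  A = rand(4, 6, 3);  B = rand(4, 6, 3);           % two toy images
  maskA = repmat(linspace(0, 1, 6), [4 1 3]);      % a left-to-right gradient mask attached to A
  blended_1 = maskA .* A + (1 - maskA) .* B;       % reading 1: A on top of B through its mask (not commutative)
  blended_2 = 0.5 * A + 0.5 * B;                   % reading 2: an even mix of A and B (commutative)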


And finally, the size requirements for a Photoshop image are usually significantly different from what our graphics cards are currently designed around. Even if hardware improves and becomes cheaper, you should still expect a ten-year time frame, if it happens at all, because graphics cards are built around certain output requirements for gaming, video, and medical imaging. Well, I suppose password cracking could be added, but I'm not sure how that will affect the imaging capabilities of graphics cards.

But, any workflow that allows one to go back to previous steps could be called "parametric", and as such, as long as the expectations of the user are reasonable when deciding to redo a previous step, the application could be entirely "parametric". And a final result could be rendered based on recomputing the entire chain.






Regards,
Oscar Rysdyk
theimagingfactory
Ralph Eisenberg
« Reply #209 on: May 13, 2013, 12:32:40 PM »
Until the current version of ACR, my primary Raw converter had been Capture One (although I have owned all versions of Lightroom, which I sometimes use for printing). With the release of ACR 7, this has changed, so that I now generally make my conversions via Bridge, with the image opening as a smart object in PS CS6. (As an aside, I followed the upgrade cycles without skipping.) I then have done secondary editing in PS, appreciating the ability to return to ACR to tweak my image when necessary.

I would follow most of the suggestions made above for an image editor, but as is clear, I would hope for some kind of capability which did not rely on Lightroom, unless it would be possible to view images without the need to import them into a Lightroom catalogue. I make use of adjustment layers (and some blending modes) and the ability to make local corrections painting on masks.

I'm very pleased with the sharpening and noise reduction tools in ACR, and with ACR in general, although I do miss some features of Capture One Pro for viewing and selecting Raw images. I certainly appreciate the fact that the Curves tool in ACR works just as it does in PS. The Healing Brush tool, Spot Healing Brush, and content-aware capabilities are very useful to me. For portrait retouching the Liquify filter has been a help. Of course, having the printing and soft-proofing capabilities of Lightroom in this image editor would be a plus, but I have most often gotten by with doing this in PS.
Thanks to Jeff Schewe for starting this thread, and naturally to Michael Reichmann (whose health I hope is improving) for the web site and much more that make this possible.

Ralph
Robert55
« Reply #210 on: May 13, 2013, 12:39:23 PM »
I don't know if I add much to the discussion by saying this, but I'd add another vote for just adding a few things to LR - mainly compositing (panorama*, HDR, focus stacking, and maybe element removal à la "Statistics").
These are the only reasons I fired up PS in the past year, I think. I personally don't do much pixel editing, partly because I do it worse than parametric editing, partly due to the file size and time penalty involved.

* for panorama stitching, please at least add a module to interactively choose perspective and projection before actual stitching to the Photomerge routines! A tool to add control points, as in more full-featured stitchers such as Hugin or PTGui, would be nice, but is less necessary.

For me, these are the only things I go to PS for nowadays. I'd also like something I'll call 'color stacking', for situations where part of your image has a warm colour temperature and another part a cool one [like a mountain valley partially in shadow].
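A sketch of that 'color stacking' idea in Octave/MATLAB, with toy data and made-up gains - a real tool would expose the two white-balance settings and the mask as parameters:

  lin  = rand(600, 900, 3);                                       % stand-in for a linear rendering of the frame
  warm = cat(3, lin(:,:,1)*1.10, lin(:,:,2), lin(:,:,3)*0.90);    % warmer white balance for the shaded part
  cool = cat(3, lin(:,:,1)*0.92, lin(:,:,2), lin(:,:,3)*1.08);    % cooler white balance for the sunlit part
  [h, w, ~] = size(lin);
  m = repmat(linspace(0, 1, w), [h 1 3]);                         % gradient mask running across the valley
  blended = m .* cool + (1 - m) .* warm;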
rasterdogs
« Reply #211 on: May 13, 2013, 12:49:08 PM »
Quote from: Jim Kasson in Reply #205, quoted in full above.

Does this mean I'd need more powerful hardware?
D Fosse
« Reply #212 on: May 13, 2013, 12:52:13 PM »
Just chiming in to say I'd buy this thing unseen within 30 minutes of announcement.

I agree with everything said so far... Grin
kirkt
« Reply #213 on: May 13, 2013, 12:56:03 PM »
I would like to see a "new" version of "photoshop" substantially change the workflow paradigm, whatever the resulting toolset is.  Specifically, I think we, as image processing folks, tend to work on an image sequentially - whatever that sequence is.  Open raw image > make adjustments > send to Photoshop > apply adjustment layers with masks > reduce image size > output sharpen, etc.

Whatever.  The idea is, there is a sequence to the workflow and, often, portions of that sequence require revisiting, revision, branching into a new variation, etc.

I think a node-based workflow, where one can piece together these operations in a logical flow, and revisit, rearrange, preview and create variations, with a real-time preview of any and all node outputs, would be a nice paradigm shift.  I would have no problem working on a "smart preview" version of an image, from raw conversion, all the way to output sharpening at final resolution, with the ability to render portions of it all along the node chain to see a 100% res sample to check my work.  Once my node chain is set up and I like the preview of the resulting changes, I could render a full-res version.  This is pretty standard for many render/modeling applications and video/compositing.  There is no reason why 2d image workflow has to be any different.

I think that 2D image workflow could benefit from this approach as well because it would promote variation - just create a branch off the workflow and develop it separately. It would ease automation - you can visualize your process and simply add an input node, a directory of images, in front of your established chain of nodes to batch process images. It could leverage the nascent "Smart Preview" raw technology that appears to be developing for the Cloud sync and smart-device editing workflow. This node-based workflow fully preserves the "non-destructive" aspect of editing - the node-based edits are "parametric" until you finally commit to rendering them as full-res output, and the original image is untouched even if you choose to make pixel-based changes; that could be a node where a rendered proxy is part of the workflow. You could add output nodes along the way to render draft images of the stages of the edits, instead of having to save sequential PSDs to potentially revisit and revise. The entire creative process is archived and editable - you could have template node structures for commonly used tasks, or commonly shot lighting conditions, looks, etc. You could even save that entire node chain as ... you guessed it, a node, for use in other, more complex chains - this would be like an action, but more flexible.

Of course, I would hope people could write their own nodes and third-party developers could write all sorts of "plug-ins" (nodes) or adapt currently existing products into a node-based form.  I see Lightroom as a node in this paradigm.
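A toy Octave/MATLAB sketch of such a node chain - node names and parameters are invented, and this is nothing like a production implementation, but it shows how little machinery the core idea needs:

  node  = @(name, fn, p) struct('name', name, 'fn', fn, 'params', p);    % a node: an operation plus its parameters
  chain = { node('exposure', @(x, p) min(x * 2^p.stops, 1), struct('stops', 0.5)), ...
            node('tone',     @(x, p) x .^ p.gamma,          struct('gamma', 0.9)) };

  img = rand(400, 600, 3);                          % stand-in for a raw-converted image
  out = img;
  for k = 1:numel(chain)
      out = chain{k}.fn(out, chain{k}.params);      % every intermediate output could be cached and previewed
  end
  % a variation is just a second chain sharing a prefix with this one, and saving
  % 'chain' itself behind a single node gives the action-like reuse described above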

I apologize if this has already been mentioned in this thread; I know I am not inventing anything new here. However, if there is to be a new Photoshop, or a yet-to-be-named image editor, I think a new workflow approach is in order and would save huge amounts of time and effort in the image processing workflow.

best - thanks Jeff for starting this thread - I appreciate the chance to participate.

kirk
gerryrobinson
« Reply #214 on: May 13, 2013, 01:02:02 PM »
Jeff
Great thread!
For me I round trip from LR to Photoshop for the following:
compositing
merge to pano
focus stacking
cloning /healing (content aware)
actions
adjustments via layer masks
sculpting
progressive sharpening

Would love to see stuff like this worked into LR's workflow as seamlessly as possible.
A lot of the cameras out there (especially the ones newer than my 20D) shoot video.
I think video support like CS6 has would be welcome.
If I could just open up LR, work on an image and never notice I'd round-tripped anywhere,
that would be my ideal workflow.
Gerry

s4e
« Reply #215 on: May 13, 2013, 02:10:13 PM »
Quote from: kirkt in Reply #213, quoted in full above.
Very interesting ideas, Kirk!

I too very much support the idea of keeping the parametric model and combining it with the use of "smart previews" to make performance acceptable.
MarkM
« Reply #216 on: May 13, 2013, 02:23:16 PM »
I think a node-based workflow, where one can piece together these operations in a logical flow, and revisit, rearrange, preview and create variations, with a real-time preview of any and all node outputs, would be a nice paradigm shift.

Yes, me too! It would be really interesting to see what would happen if somebody like The Foundry (http://www.thefoundry.co.uk) decided to compete in this space. It is one of the few companies that could enter with a product that everyone (including Adobe) would have to take seriously. Considering the node-based workflow in high-end products like Nuke that they have already developed, I would think an image editor would be a pretty natural fit. Having said that, I could imagine that their management may not be particularly interested in selling a product to the mid and lower end of the industry where customer support becomes death by a million cuts and prices have to be considerably lower.

Jim Kasson
« Reply #217 on: May 13, 2013, 03:13:00 PM »
Does this mean I'd need more powerful hardware?

One of the delightful givens of the entire history of electronic computation has been exponential growth of absolute processing power, and exponential growth of processing power per inflation-adjusted dollar. Trees don't grow to the sky, and I suppose that this can't continue forever. In fact, there has been some mild slowing over time. The doubling period initially cited by Gordon Moore in his Electronics article was a year, then amended to 18 months, and now thought by some to be two years.

However, although clock rates have stopped increasing because of power dissipation considerations (I remember a conference presenter in 1968 saying, "Contrary to popular opinion, the computer of the future will not be the size of a room; it will be the size of a light bulb -- and it will glow just as brightly." He was assuming advances in materials science that haven't come to pass yet.), transistor counts just keep right on climbing as the number of processors on a chip multiply.

The VP of Manufacturing at Convergent Technologies, a 1980s company that drowned in the wake of the introduction of the IBM PC (they were selling an incompatible 8086-based computer at the time), used to have a motto on his wall: "Believe in miracles? We count on them." I feel the same way about increasing processing power.

Jim

Jim Kasson
« Reply #218 on: May 13, 2013, 03:18:06 PM »
I think a node-based workflow, where one can piece together these operations in a logical flow, and revisit, rearrange, preview and create variations, with a real-time preview of any and all node outputs, would be a nice paradigm shift.  

Nicely said, Kirk.

Jim

LKaven
« Reply #219 on: May 13, 2013, 03:34:19 PM »
Quote from: kirkt in Reply #213, quoted in full above.

Yes, though I wrote this earlier in the thread, it's nice to see someone else pick up on this and elaborate.  It would be the key advance in architecture and workflow that this tool needs.  Using a dataflow architecture, you can implement most any request made here.  Not only that, but you can also provide different top-level user interfaces to suit the needs of different users.  It'd be a win all around.  Eric Chan, reminder to PM me.
