Author Topic: Adobe diverging Creative Cloud and Standard versions  (Read 82512 times)
Rhossydd
Sr. Member
Posts: 1993
« Reply #740 on: May 17, 2013, 03:49:38 PM »

because somehow I would think if this were so much better, those guys at Adobe would have been on top of it long ago.
Sure? There are a lot of reasons why making radical changes to the way Photoshop works could be regarded as a very bad idea.
Part of its success has been the slow, steady progress and the ease of moving to new versions without having to learn a lot of new things.
Logged
LKaven
Sr. Member
Posts: 841
« Reply #741 on: May 17, 2013, 04:06:14 PM »

Nuke is a compositing application for the film industry. Not clear to me whether you're comparing apples with apples. Is it truly appropriate to cherry-pick technologies developed for different purposes and then dump on Photoshop for not using them? A whole architecture and structure of a multi-purpose application is at play, so I wonder about that - I have no reason to say you are incorrect, but I wonder.

Consider also the GEGL library being developed for GIMP.  It uses the same N-dimensional dataflow architecture.

Quote
Are you so sure an "N-dimensional data workflow" would work in Photoshop? I'd like to hear from the professional digital imaging engineers on that one, because somehow I would think if this were so much better, those guys at Adobe would have been on top of it long ago.

All comers are welcome.  In my view, Adobe, as a business decision, did not want to spend $200M to make a product that had a future so long as they felt they had a cash cow.  Technically, though, photoshop was always vulnerable.  I consulted on a competing design ten years ago, and undertook a study back then.

Quote
I also don't understand why you say that anything needs to be "baked-in" when compositing with Photoshop. Everything can be done with layers and adjustment layers, and people who know what they are about in that application can reverse anything they do. Have you ever seen how Bert Monroy, perhaps one of the great masters of all compositors, uses Photoshop? There is quite an education there about intricate, reversible workflows.

I didn't say (quantitatively) that "everything" needs to be baked in; I did say that at some point one needs to bake in intermediate results.  Try, for instance, to use a single source file for multiple purposes within a single layer stack, without duplicating it or importing it from another stack.

For example: use a single source file as a layer mask, a multi-frequency sharpening mask, and a "hard light" layer within a single stack.  Then decide that you need to change the source file just slightly.  Can you reverse and re-do all of those derived images in a single click?

The ability of people to push the bounds of the existing photoshop does not speak to the extent to which those same people could push another architecture and with less effort.
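To make that concrete, here is a rough, hypothetical sketch (plain Python, invented names, not Adobe's or GEGL's actual API) of a dataflow graph in which one source feeds a mask, a sharpening layer, and a blend, and a single edit to the source re-renders every derived result:

Code:
class Node:
    def __init__(self, fn, *inputs):
        self.fn, self.inputs, self._cache = fn, inputs, None
        self.dependents = []
        for i in inputs:
            i.dependents.append(self)

    def invalidate(self):                     # dirty-flag propagation downstream
        self._cache = None
        for d in self.dependents:
            d.invalidate()

    def value(self):                          # lazy, cached evaluation
        if self._cache is None:
            self._cache = self.fn(*(i.value() for i in self.inputs))
        return self._cache

class Source(Node):
    def __init__(self, pixels):
        super().__init__(lambda: pixels)
    def edit(self, pixels):                   # "change the source file just slightly"
        self.fn = lambda: pixels
        self.invalidate()

# toy "image" = a list of samples; real nodes would hold pixel buffers
src        = Source([0.2, 0.5, 0.9])
luma_mask  = Node(lambda im: [min(1.0, p * 1.2) for p in im], src)
sharpen    = Node(lambda im: [p * 1.5 - 0.25 for p in im], src)
hard_light = Node(lambda im, m: [p * w for p, w in zip(im, m)], src, luma_mask)

print(sharpen.value(), hard_light.value())    # all derived from the single source
src.edit([0.3, 0.5, 0.8])                     # revise the source once...
print(sharpen.value(), hard_light.value())    # ...and every derived image re-renders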
Logged

LKaven
Sr. Member
Posts: 841
« Reply #742 on: May 17, 2013, 04:07:48 PM »

Sure? There are a lot of reasons why making radical changes to the way Photoshop works could be regarded as a very bad idea.
Part of its success has been the slow, steady progress and the ease of moving to new versions without having to learn a lot of new things.

A compatibility module would be a trivial exercise -- for those who want it.  Meanwhile, newer and more ambitious projects could be undertaken with greater ease.  Competition works that way.
Logged

jjj
Sr. Member
Posts: 3649
« Reply #743 on: May 17, 2013, 04:17:52 PM »


- In photoshop, it is necessary to "bake in" intermediate results in order to align them to a one-dimensional flow.  There is no going back to revise parts of your composited image, all of which might have required extensive independent treatments, as well as a level of /coordination/................

........ In photoshop, several hacks have been devised in order to accommodate different needs, such as "Apply Image..." which is completely unnecessary in an N-dimensional dataflow architecture. 

And that's just a start.  Thanks for the question though.  I'm thinking of teaching a course on this over the summer.  It's a timely topic.
Maybe you should learn to use Photoshop a little bit better first if you think there is no going back to revise composited images. I've used a non-destructive workflow in PS for years; it's not as simple as the parametric workflow in LR, but it can still be non-destructive when using smart objects, layer masks, adjustment layers, etc.
Logged

Tradition is the Backbone of the Spineless.   Futt Futt Futt Photography
Mark D Segal
Contributor
Sr. Member
Posts: 7059
« Reply #744 on: May 17, 2013, 04:22:03 PM »

Consider also the GEGL library being developed for GIMP.  It uses the same N-dimensional dataflow architecture.

All comers are welcome.  In my view, Adobe, as a business decision, did not want to spend $200M to make a product that had a future so long as they felt they had a cash cow.  Technically, though, photoshop was always vulnerable.  I consulted on a competing design ten years ago, and undertook a study back then.

I didn't say (quantitatively) that "everything" needs to be baked in; I did say that at some point one needs to bake in intermediate results.  Try, for instance, to use a single source file for multiple purposes within a single layer stack, without duplicating it or importing it from another stack.

For example: use a single source file as a layer mask, a multi-frequency sharpening mask, and a "hard light" layer within a single stack.  Then decide that you need to change the source file just slightly.  Can you reverse and re-do all of those derived images in a single click?

The ability of people to push the bounds of the existing photoshop does not speak to the extent to which those same people could push another architecture and with less effort.

I'd still like to know why this new technology hasn't found its way into Photoshop or into alternative competing products, especially if it has been worked on for a decade or more. People can and do adjust to new ways of doing things. We need only look back at the rapid transition from film to digital.
Logged

Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....." http://www.luminous-landscape.com/reviews/film/scanning_workflows_with_silverfast_8.shtml
LKaven
Sr. Member
Posts: 841
« Reply #745 on: May 17, 2013, 04:39:43 PM »

Maybe you should learn to use Photoshop a little bit better first if you think there is no going back to revise composited images. I've used a non-destructive workflow in PS for years; it's not as simple as the parametric workflow in LR, but it can still be non-destructive when using smart objects, layer masks, adjustment layers, etc.

Imagine that you want to use a single source file for three purposes, for example (1) a layer mask, (2) a sharpening layer, and (3) an "overlay" blend (blending with itself for purposes of local contrast enhancement).  How would you do that without duplicating?  Remember, loading a layer mask is an implied duplication.  I don't know of any way to both derive and load a layer mask with a smart object.  Think about it.  You'll see it.  In an N-dimensional dataflow you could even have layers with controlled feedback, something that is patently impossible in photoshop.  There is an entire world beyond photoshop.

There are many things you can do with photoshop with considerable work.  With an N-dimensional dataflow, there are ways to do the same things with a trivial amount of work.
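As a toy illustration of the "controlled feedback" point -- a made-up sketch, not any shipping editor's API -- feeding a node's output back into its own input for a fixed number of passes is just a loop in the graph:

Code:
def feedback(image, op, passes):
    """Run op repeatedly, feeding each result back in as the next input."""
    for _ in range(passes):
        image = op(image)
    return image

# toy op: a 3-tap blur over a list of samples (stand-in for a real filter kernel)
def soft_blur(im):
    padded = [im[0]] + im + [im[-1]]
    return [(padded[i] + padded[i + 1] + padded[i + 2]) / 3.0
            for i in range(len(im))]

src = [0.0, 0.0, 1.0, 0.0, 0.0]
print(feedback(src, soft_blur, passes=3))   # progressively diffused "glow"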
Logged

LKaven
Sr. Member
Posts: 841
« Reply #746 on: May 17, 2013, 05:01:00 PM »

I'd still like to know why this new technology hasn't found its way into Photoshop or into alternative competing products, especially if it has been worked on for a decade or more. People can and do adjust to new ways of doing things. We need only look back at the rapid transition from film to digital.

If I had to guess, I'd say there are two reasons:

1) Adobe felt they had a cash cow with photoshop, and felt no reason to innovate while they were deeply entrenched.  User expectations were matched to the (limited) possibilities as marketed by Adobe.

2) Competing interests did not want to invest $200M or so to try to go up against Adobe's business machine. 
Logged

opgr
Sr. Member
Posts: 1125
« Reply #747 on: May 17, 2013, 05:34:26 PM »

If I had to guess, I'd say there are two reasons:

No, there is one simple reason:

Photoshop is not a Compositing tool.

It started as a digital image editor, and grew up during a time when such processing on the desktop was simply unique, and nodal processing was totally and utterly impossible.

It is fine that you want to propose doing stuff differently now, and starting from scratch, but you should realise that what you are proposing is a compositing application (say, an InDesign aimed at imaging), which allows you to edit separate components in the composition with a flexibility that makes the complexity of photoshop pale in comparison.

I have mentioned it before: I am not a great proponent of flexibility for its own sake. Just because you can come up with infinite possibilities, that doesn't mean it is in any way efficient or effective for its purpose. Define your workflow, define the average workflow of the target audience, then build a relatively flexible product to suit. You apparently expect the application to help you make compositions, while the majority of photographers maybe just want to edit a single file.
(I don't know that, I am just stating this as an example.)

So, how are those users helped by giving them an infinitely powerful compositor?
(Again, I am not dismissing your point about a totally new approach to image editing and compositing, just wondering where and how you think it will fit a workflow in both a professional production environment and a single-user case.)

Why would a company build a product that does all things for everyone, and then charge a mere 200usd for it and hope to survive?

Logged

Regards,
Oscar Rysdyk
theimagingfactory
DeanChriss
Sr. Member
Posts: 299
« Reply #748 on: May 17, 2013, 05:45:50 PM »

If I had to guess, I'd say there are two reasons:

1) Adobe felt they had a cash cow with photoshop, and felt no reason to innovate while they were deeply entrenched.  User expectations were matched to the (limited) possibilities as marketed by Adobe.

2) Competing interests did not want to invest $200M or so to try to go up against Adobe's business machine.  

If I had to guess I'd say the potential gain in sales wasn't worth the investment. Producing the same results more easily is perceived as a minor improvement even if it takes a huge investment in technology to accomplish it, unless the original way of doing things was enormously difficult. Unless it lets users do something they couldn't do before, and want to do, innovation "under the hood" doesn't really matter much.
Logged

- Dean
LKaven
Sr. Member
Posts: 841
« Reply #749 on: May 18, 2013, 06:39:07 AM »

No, there is one simple reason:

Photoshop is not a Compositing tool.

It started as a digital image editor, and grew up during a time when such processing on the desktop was simply unique, and nodal processing was totally and utterly impossible.

It is fine that you want to propose doing stuff differently now, and starting from scratch, but you should realise that what you are proposing is a compositing application (say, an InDesign aimed at imaging), which allows you to edit separate components in the composition with a flexibility that makes the complexity of photoshop pale in comparison.

I have mentioned it before: I am not a great proponent of flexibility for its own sake. Just because you can come up with infinite possibilities, that doesn't mean it is in any way efficient or effective for its purpose. Define your workflow, define the average workflow of the target audience, then build a relatively flexible product to suit. You apparently expect the application to help you make compositions, while the majority of photographers maybe just want to edit a single file.
(I don't know that, I am just stating this as an example.)

So, how are those users helped by giving them an infinitely powerful compositor?
(Again, I am not dismissing your point about a totally new approach to image editing and compositing, just wondering where and how you think it will fit a workflow in both a professional production environment and a single-user case.)

Why would a company build a product that does all things for everyone, and then charge a mere 200usd for it and hope to survive?

1) Compositing is just a common task for which to compare two architectures.

2) GEGL, being developed for GIMP 3, uses N-dimensional dataflow.

3) N-dimensional dataflow is efficient for tasks that aren't computing intensive.  It also exploits inherent parallelism for efficient use of multiple cores, threads, or networked processors.

4) Pixel editing can be done using a choice of methods.  You can do it just as before.  But N-dimensional dataflow allows for journaling for infinite undo, or baking in just as in photoshop.
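On point 4, a minimal sketch of what "journaling for infinite undo" could look like (purely hypothetical, and certainly not how Photoshop's history or GEGL is actually implemented): every edit is appended to a log, the current image is just a replay of the log, and undo is truncation:

Code:
class Journal:
    def __init__(self, source):
        self.source = source
        self.ops = []          # the journal: an append-only list of operations

    def apply(self, op):
        self.ops.append(op)

    def undo(self, steps=1):
        del self.ops[-steps:]  # nothing was baked in, so undo is just truncation

    def render(self):
        image = list(self.source)
        for op in self.ops:
            image = op(image)
        return image

j = Journal([0.1, 0.4, 0.7])
j.apply(lambda im: [p + 0.1 for p in im])            # exposure bump
j.apply(lambda im: [min(1.0, p * 1.3) for p in im])  # contrast-ish tweak
print(j.render())
j.undo()                                             # step back as far as you like
print(j.render())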
Logged

Mark D Segal
Contributor
Sr. Member
Posts: 7059
« Reply #750 on: May 18, 2013, 07:04:37 AM »

But N-dimensional dataflow allows for journaling for infinite undo, or baking in just as in photoshop.

To repeat what I and others have told you: if you know your way around Photoshop you can set your processing to allow undoing anything you've done. Maybe it's more work than would be required by the solution you are proposing, but let us not draw false dichotomies.

And to repeat again, before I would agree that Photoshop's technology is "obsolete" I would like to see a qualified digital imaging engineer who knows the architecture and capabilities of Photoshop come into this thread and tell us in fact whether what Luke proposes would indeed work better for the many purposes of Photoshop, and whether it would cost a mint to implement it. We already have one relevant and perceptive view from Oscar, who comes at this with professional experience writing high-quality Photoshop plugins that many of us used for years. I would like to see additional expertise on Photoshop itself advise further. Unless it is very clear, which it isn't yet, I don't buy this black-and-white line of argument that one thing is "obsolete" while the other isn't. More often than not (driven, inter alia, by differences of scope and purpose) there are shades of gray, nuances, and details that tell the real story.

Logged

Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....." http://www.luminous-landscape.com/reviews/film/scanning_workflows_with_silverfast_8.shtml
LKaven
Sr. Member
Posts: 841
« Reply #751 on: May 18, 2013, 07:20:57 AM »

Since work of just this sort is being done for GIMP 3, it might be worthwhile to look in on that discussion. 

http://blog.mmiworks.net/2012/01/gimp-full-gegl-ahead.html
Logged

opgr
Sr. Member
Posts: 1125
« Reply #752 on: May 18, 2013, 09:34:33 AM »

1) Compositing is just a common task for which to compare two architectures.

Agreed, and photoshop today can certainly be considered a compositing tool.

However, in the graphics industry we used to make a distinction between pixel graphics and vector graphics, because images would require about 300 pixels per inch for quality printed output, but vector graphics, or text, would generally require 2400 pixels per inch.

So, compositing images and text in a single composition for output would immediately run into this particular split.
A split that PostScript and PDF have both encountered in one way or another.

PDF can be considered a compositing language. I can store commands to start with a clean white sheet of a particular size, paste an image on top, and add some text on top of the image.

Great, but on output it has to make a decision about the resolution requirements of these elements. Not to mention that the output device resolution is the determining factor. But then PDF introduced transparency, and now you suddenly have to decide how to render between elements prior to output (e.g. blending decisions between image and vectors).

I'm sure we'll make great strides with GPU-based processing, but will it muster 2400 ppi rendition in real time? Not that something like that is always necessary during preview, but it does serve as an example of production-centric requirements.
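For a sense of scale, a back-of-envelope calculation (my own assumptions: an A4 page and 4 bytes per RGBA pixel, figures not taken from this thread):

Code:
width_in, height_in, ppi = 8.27, 11.69, 2400
pixels = (width_in * ppi) * (height_in * ppi)
print(f"{pixels / 1e6:.0f} megapixels")                    # ~557 megapixels
print(f"{pixels * 4 / 2**30:.1f} GiB per full-page buffer") # ~2.1 GiB, before any compositing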

3) N-dimensional dataflow is efficient for tasks that aren't computing intensive. 

That would immediately disqualify it as viable for image processing.

It also exploits inherent parallelism for efficient use of multiple cores, threads, or networked processors.

On the contrary: the strongest argument in favour of nodal editing, imo, is the fact that you can easily create aliases.
Make a composition with aliases, and as soon as you edit the original, all the aliases will automatically reflect the change.

But the entire point of that (for photographers) is stacking of aliases to add creative effects. For example:

1. Open an image,

2. Duplicate an alias on top of this image,

3. Apply a large-radius blur to the alias,

4. Apply a mask to the alias.

So, now you have a simplified soft-focus effect.
If you then add a small colour correction to the original image, it will automatically transfer to the alias representation.
Exactly what you would want. The entire stack, including the colour correction, remains a script; it can be saved for future use etc., so those are certainly desirable traits.
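Written out as a re-runnable "script" (invented function names, just to show the shape of the stack, not any real application's API), the recipe is roughly:

Code:
def render_soft_focus(source, colour_correct, blur, mask, opacity=0.5):
    base    = colour_correct(source)     # 1. the original, with its correction
    alias   = blur(base)                 # 2-3. alias of the same data, blurred
    blended = [b * (1 - opacity * m) + a * (opacity * m)   # 4. masked blend
               for b, a, m in zip(base, alias, mask)]
    return blended

source = [0.2, 0.8, 0.4]
mask   = [1.0, 1.0, 0.0]                 # soft focus only where the mask is 1
cc     = lambda im: [min(1.0, p * 1.1) for p in im]   # small colour correction
blur   = lambda im: [sum(im) / len(im)] * len(im)     # crude stand-in blur

print(render_soft_focus(source, cc, blur, mask))
# Change cc and re-run: the correction carries into the alias automatically --
# note the serial dependency: the correction must land before the blur can run.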

But… the application of a colour correction to the original, which then has to feed the alias layer that requires the blur, is a serial sequence that is killing for parallelism and for the kind of processing GPUs do. GPUs are primarily quick under very specific circumstances, one of which is "resident, read-only" source data. So if you want read/write caching, the advantages of GPU processing start to crumble very quickly. (Look up any of the pitfalls of "concurrent" or multi-threaded processing.)

Okay, enough of the geekspeak already. This is all solvable by a bunch of bright programmers, but I thought it might be illustrative of both the useful capabilities and the complexities involved.


4) Pixel editing can be done using a choice of methods.  You can do it just as before.  But N-dimensional dataflow allows for journaling for infinite undo, or baking in just as in photoshop.

Certainly, and I believe this is what most people really mean when they mention "parametric editing". They simply want to be able to revisit earlier edits and re-adjust. They understand the disadvantage of stacking several Geometry corrections, versus re-editing a single Geometry correction. (The latter only requires a single re-sampling operation).
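As a sketch of that last point (pure-Python 2x3 affine matrices, no particular imaging library implied): two stacked geometry corrections can be folded into one matrix, so the pixels only get resampled once:

Code:
def compose(a, b):
    """Return the affine transform 'apply a, then b' as one 2x3 matrix."""
    (a00, a01, a02), (a10, a11, a12) = a
    (b00, b01, b02), (b10, b11, b12) = b
    return [[b00*a00 + b01*a10, b00*a01 + b01*a11, b00*a02 + b01*a12 + b02],
            [b10*a00 + b11*a10, b10*a01 + b11*a11, b10*a02 + b11*a12 + b12]]

rotate_slightly = [[0.998, -0.052, 0.0], [0.052, 0.998, 0.0]]   # ~3 degrees
crop_shift      = [[1.0, 0.0, -12.0], [0.0, 1.0, -8.0]]

single_pass = compose(rotate_slightly, crop_shift)
print(single_pass)   # one matrix, one resampling pass, instead of two in a row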





Logged

Regards,
Oscar Rysdyk
theimagingfactory
LKaven
Sr. Member
Posts: 841
« Reply #753 on: May 18, 2013, 10:56:15 AM »

BTW, when I say, for example, that "N-dimensional dataflow is efficient for tasks that aren't computing intensive", I don't mean that N-d dataflow is less efficient for tasks that /are/ computing intensive.  I mean that N-d dataflow is no less efficient than 1-d dataflow for the tasks where 1-d dataflow is typically used.  N-d dataflow is more efficient for more compute-intensive tasks by virtue of the fact that the problems at hand are decomposed into inherently parallel task flows.  While your machine might have only one GPU, we often have 4-8 CPUs, as well as many other CPUs on locally-networked machines.

I may have been a bit unclear in trying to be brief.
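A generic illustration of that parallelism point (a plain concurrent.futures sketch, not any editor's actual scheduler): branches of a graph with no edges between them can be dispatched to separate worker processes:

Code:
from concurrent.futures import ProcessPoolExecutor

def sharpen(im):   return [min(1.0, p * 1.4) for p in im]
def make_mask(im): return [1.0 if p > 0.5 else 0.0 for p in im]
def denoise(im):   return [round(p, 2) for p in im]

SOURCE = [0.2, 0.61, 0.8, 0.35]

def run_branch(fn):          # top-level so it can be pickled for worker processes
    return fn(SOURCE)

if __name__ == "__main__":
    branches = [sharpen, make_mask, denoise]      # no edges between these nodes
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_branch, branches))
    print(results)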
Logged

jjj
Sr. Member
Posts: 3649
« Reply #754 on: May 18, 2013, 08:51:14 PM »

Imagine that you want to use a single source file for three purposes, for example (1) a layer mask, (2) a sharpening layer, and (3) an "overlay" blend (blending with itself for purposes of local contrast enhancement).  How would you do that without duplicating?  Remember, loading a layer mask is an implied duplication.  I don't know of any way to both derive and load a layer mask with a smart object.  Think about it.  You'll see it.  In an N-dimensional dataflow you could even have layers with controlled feedback, something that is patently impossible in photoshop.  There is an entire world beyond photoshop.
So you arbitrarily say I cannot do something in PS that is in fact quick and simple to do, as justification for a new workflow. Not a good way to make your point, is it?
Besides, why would you even sharpen in PS when it's better/easier to do that on the raw files before even getting into PS? Sharpening in PS seems so old hat now.  Tongue

Quote
There are many things you can do with photoshop with considerable work.  With an N-dimensional dataflow, there are ways to do the same things with a trivial amount of work.
Not saying there are not ways in which you can do things better, but I find doing what you say is 'considerable work' in fact quite trivial to do. Ctrl/Cmd+J is not exactly a challenge, is it?
Logged

Tradition is the Backbone of the Spineless.   Futt Futt Futt Photography
jjj
Sr. Member
Posts: 3649
« Reply #755 on: May 18, 2013, 08:59:36 PM »

Why would a company build a product that does all things for everyone, and then charge a mere 200usd for it and hope to survive?
Because they could wipe out the competition would be my first thought.
And if you already have products that would be damaged by launching such software, surely it's better that you release it before someone else does.
LR effectively did that to Photoshop, when you look at PS users who are photographers.
Logged

Tradition is the Backbone of the Spineless.   Futt Futt Futt Photography
jjj
Sr. Member
Posts: 3649
« Reply #756 on: May 18, 2013, 09:29:40 PM »

Since work of just this sort is being done for GIMP 3, it might be worthwhile to look in on that discussion. 

http://blog.mmiworks.net/2012/01/gimp-full-gegl-ahead.html

I did just that and thought this was of note.

Quote
However, at the moment, you quite often see the following: ‘if you want this feature, you’ll have to use it on its own, extra layer.’ This is layer abuse. I get misquoted on this so let me clarify: users never abuse layers, developers do. Here are some examples of layer abuse:

the only way to do a non‐destructive operation is via an adjustment layer
only one vector shape per vector layer;
only one block of text on a text layer;
the output of a filter plugin is always put on a new layer;
the result of using a toolbox tool is always put on a new layer.
The problem is with ‘only,’ ‘always’ and ever more layers, whether users want them or not.

Reformation
The abuse listed above is straightforward to fix. Quite a bit of it has to do with enabling users to redo or revisit the image manipulation. That is solved by the operations dialog.


Furthermore, there can be as many vector shapes and text blocks on a layer as one likes. Just show them—and stack ’em—as sub‑layer elements in the layers dialog. And when then one of these sub‑layer elements is allowed to be actual pixels, then it is clear that the whole notion of special vector/text layer can disappear:

Layer abuse has to stop. Developers should never force users to use another layer. Only users decide how many layers they want to use, purely as their own personal way to organise their work.

Two things struck me about this part of the article:
1 - Keeping these things discrete is not actually a bad thing and can be 'solved' by using layer groups in PS.
2 - The alternative looks no simpler and seems to replicate layer groups - which, did I mention, are already in PS.

Now I've seen nodal workflows like this many years back in video applications and thought them interesting, but they always struck me as something that would confuse the heck out of many people.
Not to mention the ridiculous amount of real estate they take up.

Logged

Tradition is the Backbone of the Spineless.   Futt Futt Futt Photography
jjj
Sr. Member
Posts: 3649
« Reply #757 on: May 18, 2013, 09:34:07 PM »

1) Adobe felt they had a cash cow with photoshop, and felt no reason to innovate while they were deeply entrenched.  User expectations were matched to the (limited) possibilities as marketed by Adobe.
Funny, as each version of PS I've ever used has been much better than the previous one, and I've used it since PS 3.0.
Logged

Tradition is the Backbone of the Spineless.   Futt Futt Futt Photography
LKaven
Sr. Member
Posts: 841
« Reply #758 on: May 18, 2013, 11:49:30 PM »

Jeremy,

Keep in mind that an N-dimensional dataflow also includes all cases of N = 1 dimension.  If there is any case that you think you can do most efficiently with one dimension, then you have that option.  

While N-dimensional workflows can be arbitrarily complex, I think you'd agree that the maxim should be that a workflow should be no more complex than the task being undertaken requires.  If you can do it more efficiently, then of course you should.

I don't believe all N-dimensional workflows can be -- in principle -- recast more efficiently into equivalent 1-dimensional workflows.  The operative terms are "more efficiently" and "in principle".
« Last Edit: May 19, 2013, 11:36:09 AM by LKaven »
Logged

walter.sk
Sr. Member
Posts: 1334
« Reply #759 on: May 24, 2013, 01:02:19 PM »

After reading all the threads dealing with the Creative Cloud, and Adobe's clarifications about it, my belief was that if you subscribed to the cloud and then left it at some point, you would have to go back to your last "perpetually licensed" version, in my case Photoshop CS6, and would lose all of the CC features.  My wife, who has CS5 on her computer, wanted to upgrade to CS6 to have on hand, so that when we do subscribe to the Creative Cloud she would have the last "real" version to use if we quit the cloud later.

She called Adobe to order the upgrade to CS6, and was told she did not have to get the upgrade.  She asked me to get on the phone, and we both heard the Adobe guy say that if you subscribe to the cloud, say for a year or more, and then stop subscribing, you would be able to keep the then-current version of Photoshop on your computer and use it as long as you want.  In addition, it would retain all of the new features added up to that point, and all you would lose would be future upgrades.

This seems to be 1) a contradiction of what I understood, and 2) too good to be true, if it is true.

Were we given the correct info?
Logged