Author Topic: LR4 speed totally unacceptable  (Read 41281 times)
Schewe
Sr. Member
Posts: 5426
« Reply #120 on: May 31, 2012, 10:29:48 PM »

Where we're being let down by Adobe is that they haven't written the code/processes to take advantage of the modern GPU, SSDs, RAM, etc., nearly as much as they could have. They should. And when they do, they'll probably make a huge deal of it and double the price.

You might wanna step away from this attitude...do you know CUDA? Are you aware of the problems of doing GPU for ANYTHING not related to 3D polygons and gamers' systems?

GPU acceleration for graphics processing is one of the biggest pieces of bullshyte in the computing industry...yes, if you are willing to turn yourself inside out and write for a constantly changing and non-standard OpenGL spec, you might (just might) save some CPU cycles on some things, but not everything. Photoshop tried to do GPU with CS4...it sucked rocks. Now, by CS6, it's "ok"...but trying to port the raw processing algorithms in the ACR/LR pipeline to the GPU ain't a great story. Eric Chan used to work for a graphics card company before he came to Adobe. He knows this stuff inside out...trying to speed up raw processing by using the GPU is very, VERY difficult because of the way the GPU expects data. It all has to fit in the GPU correctly, which means you have to tailor EVERYTHING to sending the data the way the GPU wants it, and screw you if you don't have a qualifying video card with the correct (not always the most recent) drivers...

Bitch all you want about LR speed and RAM/CPU usage, but leave the GPU out of the equation until graphics card vendors get their shyte together cross-platform, 'cause Adobe will only ever support platform-agnostic solutions...Windows-only shit need not apply.
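
For what it's worth, the "qualifying video card with the correct drivers" part isn't hand-waving: the application has to enumerate whatever the installed driver exposes and decide whether any of it is usable at all. A minimal sketch of that check, in Python with the pyopencl package purely for illustration (this is not anything Adobe actually ships):

Code:
import pyopencl as cl  # assumed package, used only to illustrate the point

# Enumerate what the installed driver(s) actually expose. Whether a machine
# "qualifies" comes down to what shows up here, and at which driver version.
for platform in cl.get_platforms():
    for dev in platform.get_devices():
        print(f"{platform.name} | {dev.name} | driver {dev.driver_version} | "
              f"{dev.global_mem_size // (1024 * 1024)} MB global memory")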
« Last Edit: May 31, 2012, 10:31:38 PM by Schewe »

dreed
Sr. Member
Posts: 1213
« Reply #121 on: May 31, 2012, 10:37:14 PM »

You might wanna step away from this attitude...do you know CUDA? [...] leave the GPU out of the equation until graphics card vendors get their shyte together cross-platform, 'cause Adobe will only ever support platform-agnostic solutions.

I wasn't going to say what you've said above but yes, you're completely right.

I think that people are being misled into believing that GPUs are more capable than they really are by the various press articles where some researcher uses the GPU to do some massive calculation and, as a result, does it faster than if they were just using the CPU. What people don't realise is that there are limitations to this, and that the data has to be presented in a very specific manner in order to be worked on by the GPU. Only when the data set is large enough is the expense of the data massaging, both before and after, outweighed by the savings in compute time from using the GPU.
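
To put rough numbers on that crossover, here is a back-of-envelope model. Every figure in it is an illustrative assumption, not a measurement of Lightroom or of any particular GPU; the point is only that a fixed setup cost plus the transfer time has to be paid before any compute saving shows up.

Code:
# Back-of-envelope: when does shipping the work to a GPU pay off?
# All constants below are illustrative assumptions, not measurements.
PCIE_BYTES_PER_SEC = 6e9      # assumed effective host<->GPU transfer rate
CPU_PIXELS_PER_SEC = 2e8      # assumed CPU pipeline throughput
GPU_PIXELS_PER_SEC = 2e9      # assumed GPU kernel throughput (10x the CPU)
BYTES_PER_PIXEL    = 8        # assumed working data per pixel, each direction
FIXED_OVERHEAD_S   = 0.005    # assumed per-job driver/launch overhead

def cpu_ms(pixels):
    return 1000 * pixels / CPU_PIXELS_PER_SEC

def gpu_ms(pixels):
    transfer = 2 * pixels * BYTES_PER_PIXEL / PCIE_BYTES_PER_SEC  # up + down
    compute = pixels / GPU_PIXELS_PER_SEC
    return 1000 * (FIXED_OVERHEAD_S + transfer + compute)

for mp in (1, 10, 36, 100):
    n = mp * 1_000_000
    print(f"{mp:>3} MP: CPU {cpu_ms(n):6.1f} ms   GPU {gpu_ms(n):6.1f} ms (incl. transfer)")

With these made-up numbers a 1 MP crop loses on the GPU while a 36 MP frame wins, yet most of the GPU time is still data movement rather than arithmetic.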
Rhossydd
Sr. Member
Posts: 1889
« Reply #122 on: June 01, 2012, 12:08:36 AM »

Remember that it is in the "Loading..." stage of Develop that most people see a noticeable delay, and that this therefore is where performance needs to be improved.
Maybe you should re-read the first post in this thread.
It's not waiting for files to be loaded that made 4.0 unusably slow, but the response of the controls in the Develop module. Two completely different issues.
Waiting for a file to load before it can be worked on is mildly annoying; waiting many seconds for a control slider's effect to appear on screen was just unacceptable.

The bottom line is that Adobe's poor original coding in 4.0 was the problem: 4.1 is very much better in this respect and adds extra features to the Develop module as well.
hjulenissen
Sr. Member
Posts: 1666
« Reply #123 on: June 01, 2012, 03:41:15 AM »

I think that people are being misled into believing that GPUs are more capable than they really are by the various press articles [...] Only when the data set is large enough is the expense of the data massaging outweighed by the savings in compute time.
Nvidia and AMD/ATI want to sell their products, just like everyone else.

For some very specific tasks, GPU compute shows tremendous speedups: typically tasks that can be split into a large number of threads with little inter-dependency in data sets and code flow, and where each thread consists of floating-point number-crunching and little else (no if-tests, few integer operations, etc.). Things like geological processing and certain economic calculations seem to fit well.

There are platform-agnostic GPU APIs, notably OpenCL, but it seems you still have to invest substantial amounts of time optimizing the code for different platforms compared to straight CPU code. How much speedup could the relevant bottlenecks gain if Adobe went this route? How much would the GPU's capabilities limit the development of new algorithms? How difficult is it to find developers who can do both advanced image demosaicing development _and_ advanced GPU implementation at the same time? I guess only Adobe can answer that.
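
For anyone curious what "fits the GPU well" looks like in practice, here is a minimal OpenCL sketch (written in Python with the pyopencl package purely for illustration; this is not Adobe's pipeline and the kernel is deliberately trivial). Note how much of it is staging data into device buffers and copying results back, which is exactly the tailoring problem described earlier in the thread.

Code:
import numpy as np
import pyopencl as cl  # assumed package, illustration only

# A per-pixel, branch-free operation: one work-item per pixel, no if-tests.
KERNEL = """
__kernel void apply_gain(__global const float *src,
                         __global float *dst,
                         const float gain)
{
    int i = get_global_id(0);
    dst[i] = src[i] * gain;
}
"""

ctx = cl.create_some_context()             # uses whatever driver/device exists
queue = cl.CommandQueue(ctx)
program = cl.Program(ctx, KERNEL).build()  # compiled at run time by the driver

pixels = np.random.rand(4096 * 4096).astype(np.float32)  # stand-in image plane
mf = cl.mem_flags
src_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=pixels)
dst_buf = cl.Buffer(ctx, mf.WRITE_ONLY, pixels.nbytes)

program.apply_gain(queue, pixels.shape, None, src_buf, dst_buf, np.float32(1.5))

result = np.empty_like(pixels)
cl.enqueue_copy(queue, result, dst_buf)    # copy the answer back to host memory
queue.finish()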

-h
stevenskl
Newbie
Posts: 5
« Reply #124 on: June 01, 2012, 11:22:57 AM »

I wouldn't call GPU in PS CS6 just "OK". IMO it rocks in the areas where it is implemented: screen redraw is much faster, and things like Liquify or the new blur filters are way faster than running them on the CPU only.
I don't know whether it's hard to program. I only know that Capture One uses OpenCL and runs much more smoothly with it than without it, or than Lightroom.
I guess we will see much higher-resolution monitors, much higher-resolution cameras and more complicated raw processing algorithms by the time LR5 comes out. Will CPUs be so much faster then? I doubt it. Maybe Eric can give us his point of view on the GPU question.
nemophoto
Sr. Member
Posts: 481
« Reply #125 on: June 01, 2012, 01:09:04 PM »

LR 4.1's speed is making me want to blow my brains out! Not only is the program slow to respond and show changes when using the brush tool (now count slowly: 1, 2, 3, 4...), but the interface is a slug. I move a slider and it's the same 3-5 second count -- and usually I've moved it too far by then. I've taken to entering values manually -- faster and more precise. I love the quality of the output (I also own DxO 7.5 and Capture One 6, but I keep going back to LR), but this makes me think about trying out Corel's version of Bibble (which I also own). There are times I feel I waste far too much time waiting for the program to catch up with any move I make. My system isn't slow: AMD Phenom II X6 1100T (six cores, obviously), 16GB of RAM, an SSD for scratch disk and cache, a fast video card, two 24" NEC monitors, tons of fast external drives. If my system, which is no slug by minimum standards (or even by more advanced ones), can't keep up, they've got some major programming issue. I just pray PS CS6 doesn't have the same snail's pace.

As for the comments regarding use of the GPU/OpenGL, I too feel it's overrated. I have OnOne's Perfect Suite. Most of the plugins, while good, are slugs because they use OpenGL. I used to use Genuine Fractals; I've basically stopped because of the redraw speed issue. I now use Alien Skin's Blowup 3: faster previews and actually better enlargements.
Simon Garrett
Sr. Member
Posts: 344
« Reply #126 on: June 02, 2012, 04:03:30 PM »

LR4.1 speed perhaps depends on the installation.

On my machine (i7-930, 12 GB RAM, Windows 7 64-bit) LR4.1 is about the same speed as LR3.6. Both are still on my machine, so I've done a few simple comparisons. LR4.0 was noticeably slower at some functions, but LR4.1 is back to LR3.6 speed.
nemophoto
Sr. Member
Posts: 481
« Reply #127 on: June 05, 2012, 11:55:30 AM »

LR4.1 speed perhaps depends on the installation. 

... but LR4.1 is back to LR3.6 speed. 

That's certainly not my experience. When I use the healing tool, I count 1, 2, 3, 4 and then maybe something happens. It's even worse when I try to choose the sample point myself instead of letting the program decide; then add about another 5-10 seconds. Trying to move the sliders is a real exercise in futility -- better to just enter numbers. To be honest, Lightroom has not been terribly speedy since version 2, and I had a lesser computer back then.
Preeb
Newbie
Posts: 3
« Reply #128 on: June 05, 2012, 01:31:35 PM »

That's certainly not my experience. When I use the healing tool, I count 1, 2, 3, 4 and then maybe something happens. [...]

You have to have something else going on to make it that unresponsive. I have half the machine you have (and on a laptop, to boot) and yet I get instant response with 4.1. Mine is a quad-core i7 with 6 GB of RAM and 1 GB of video RAM. All of the sliders work without hesitation; loading and zooming raw files takes the most time of anything I do. If yours doesn't run it properly, then there has to be more to it than just the software.
john beardsworth
Sr. Member
Posts: 2670
« Reply #129 on: June 05, 2012, 01:40:26 PM »

That's certainly not my experience. When I use the healing tool, I count 1, 2, 3, 4 and then maybe something happens. [...]
See if there's any difference if you set Clarity to 0.
nemophoto
Sr. Member
Posts: 481
« Reply #130 on: June 05, 2012, 03:51:49 PM »

No real speed difference with Clarity set to zero. Good suggestion, though I'm not sure why that would affect things. I worked on an image without setting Clarity and found no obvious speed differential.
john beardsworth
Sr. Member
Posts: 2670
« Reply #131 on: June 05, 2012, 05:11:26 PM »

In a perfect world it shouldn't affect speed, but it's something I've heard.
budjames
Sr. Member
Posts: 690
« Reply #132 on: June 09, 2012, 07:52:16 PM »

LR4 runs great on my MacBook Air i7 with only 4 GB of RAM.

Cheers.
Bud
Bud James
North Wales, PA
www.budjamesphotography.com

Ovid
Newbie
Posts: 1
« Reply #133 on: June 12, 2012, 04:43:51 AM »

I also had speed/response problems with LR4 in spite of an OK spec on my PC.
I solved the problem by creating a new catalog and importing my LR3 image archive over again into this new catalog, rendering new previews and all.
This took care of all my problems, and my Lightroom is now faster and more responsive than ever. Grin

dreed
Sr. Member
Posts: 1213
« Reply #134 on: June 12, 2012, 07:06:39 AM »

I solved the problem by creating a new catalog and importing my LR3 image archive over again into this new catalog, rendering new previews and all. [...]

I wonder if this speed improvement can be achieved in other ways, for example:
- delete all of your previews and force them to be rebuilt
- optimize the catalog

It isn't clear what "optimize the catalog" does, but if it is anything like traditional database operations that do "self-optimisation", then it isn't really going all the way: a full text dump of the catalog contents, restored into a clean catalog, is really the best way to optimise it and get rid of all the dead wood.
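
Assuming the catalog is an ordinary SQLite file, a dump-and-restore of a copy would look roughly like the Python below. This is only a sketch of the generic database operation, not anything Lightroom documents or supports; run it against copies, never the live catalog, and there's no guarantee Lightroom would accept the rebuilt file.

Code:
import sqlite3

SRC = "catalog-copy.lrcat"      # hypothetical copy of the existing catalog
DST = "catalog-rebuilt.lrcat"   # hypothetical fresh file to restore into

src = sqlite3.connect(SRC)
dst = sqlite3.connect(DST)

# iterdump() yields a full text dump (schema + data); replaying it into an
# empty file leaves none of the old database's dead pages behind.
dst.executescript("\n".join(src.iterdump()))

src.close()
dst.close()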
nemophoto
Sr. Member
Posts: 481
« Reply #135 on: June 12, 2012, 11:15:32 AM »

I also had speed/response problems with LR4 in spite of an OK spec on my PC. [...] my Lightroom is now faster and more responsive than ever.

With the exception of my portfolio catalog and fine art catalog, EVERY catalog is a new catalog, and LR 4.1 is still a dog when it comes to speed of doing anything ...woof, woof. Smiley

Nemo
john beardsworth
Sr. Member
Posts: 2670
« Reply #136 on: June 12, 2012, 01:54:00 PM »

It isn't clear what "optimize the catalog" does.
Mainly the SQL Vacuum command and a few integrity checks.
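
For a plain SQLite file, those two operations amount to roughly the following (sketch only, and again something to try on a copy, never on the catalog Lightroom has open):

Code:
import sqlite3

con = sqlite3.connect("catalog-copy.lrcat")   # hypothetical copy of the catalog
print(con.execute("PRAGMA integrity_check").fetchone()[0])  # prints 'ok' if healthy
con.execute("VACUUM")   # rewrites the database file, reclaiming free pages
con.close()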
jdyrek
Newbie
Posts: 3
« Reply #137 on: October 09, 2012, 06:46:51 PM »

I too have been experiencing LR 4's slow performance, particularly in the Develop module, and it was not helped by upgrading to 4.2. I have found something that worked for me: I now have no lag when I move the sliders in the Develop module.

This fix is a variation on different discussions and suggestions I had found on the Adobe forums and SportsShooter.com. The problem is with the preference files: LR was, apparently, being slowed by preference files left over from earlier versions. I use a Mac; not a clue how this would be implemented on a Windows OS.

I went to (User Name)/Library/Preferences and located the files com.adobe.lightroomX.plist. I had one of these files for every version of LR, even though I no longer had the older applications installed. I deleted the "com" and replaced it with "old" so I could find the file again. I did nothing to the file for the current version of LR.

And that was all there was to it; now LR 4.2 runs in real time.
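
For anyone who would rather script the rename than do it by hand, a sketch in Python (macOS paths as above; the name of the current version's file is an assumption -- check what is actually in the folder before renaming anything):

Code:
from pathlib import Path

prefs = Path.home() / "Library" / "Preferences"
CURRENT = "com.adobe.Lightroom4.plist"    # assumed name of the in-use pref file

for plist in prefs.glob("com.adobe.*ightroom*.plist"):
    if plist.name == CURRENT:
        continue                          # leave the current version's prefs alone
    renamed = plist.with_name("old." + plist.name[len("com."):])
    print(f"{plist.name} -> {renamed.name}")
    plist.rename(renamed)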
simonsaith
Newbie
Posts: 1
« Reply #138 on: October 10, 2012, 11:30:17 AM »

Hi jdyrek,

We'd like to investigate this a bit further. Could you send me all versions of your Lightroom pref files, including the 4.2 one that you are currently using? Please send them privately to sichen at adobe dot com. Also, when you got your speed improvement, did you also rename your old LR 4.2 pref file to "old" (that is, did you start LR 4.2 with no pref file)? We want to take a look at all versions of your pref file. Thanks!

Simon
Adobe
Rhossydd
Sr. Member
Posts: 1889
« Reply #139 on: October 10, 2012, 11:57:58 AM »

not a clue how this would be implemented on a Windows OS.
The preference files are easy enough to find and rename,
e.g. on Win 7: <root>/users/<user name>/AppData/Roaming/Adobe/Lightroom/Preferences/
Deleting old preference files makes no difference here.

LR4 wasn't good, 4.1 was a bit better, and 4.2 seems very slightly better, but it's still painfully unresponsive at times.

Now using an i7 @ 3.5/3.9 GHz, 32 GB RAM, and SATA III SSDs for the OS & scratch disks, driving a GTX 470-powered 3840x1440 desktop. It really ought to fly with that hardware.