Author Topic: LR4 speed totally unacceptable  (Read 43092 times)
kaelaria (Sr. Member, Posts: 2226)
« Reply #200 on: October 24, 2012, 12:47:56 PM »

Funny how no one listens when they are given the right answer because they just don't want to believe it :) At least some of you find out the hard way and get it solved :)

JRSmit (Sr. Member, Posts: 361)
« Reply #201 on: October 25, 2012, 12:37:32 PM »

Quote from: kaelaria on October 24, 2012, 12:47:56 PM
Funny how no one listens when they are given the right answer because they just don't want to believe it :) At least some of you find out the hard way and get it solved :)
I don't know what is really meant by this statement.
I have an Ivy Bridge computer like the one described in a previous post, and still, with a lot of local corrections and lens corrections applied, even such a platform becomes slow. I regularly exit and restart Lightroom to regain some performance. This is when editing hundreds of images in a row.
So yes, it would be nice if the LR development team came up with some serious performance improvements.

Fine art photography: www.janrsmit.com
Courses and workshops: www.centrumbeeldbeleving.nl

Jan R. Smit
dmerger (Sr. Member, Posts: 686)
« Reply #202 on: October 29, 2012, 11:36:53 AM »

If you need to really speed up LR, and you have the room and budget, you might consider a Titan computer. ;)

http://www.pcworld.com/article/2013228/titan-supercomputer-hits-20-petaflops-of-processing-power.html

Dean Erger
acktdi (Jr. Member, Posts: 64)
« Reply #203 on: November 19, 2012, 09:55:13 AM »

I too have been suffering from painfully slow LR4 speed. I had been using a 5-year-old Intel Q6600 2.4GHz, 8GB, SSD system and it was fine for LR3.
I just upgraded to a home-built i7-3770K 3.5GHz, 16GB RAM, SSD and LR4 is blazing fast. There is no wait switching between modules. Preview render times are under 5 seconds, compared to 30 seconds before.

So I agree: if LR4 seems slow, you need a new processor.

If anyone is interested, here are the specs on my new system:

Intel i7-3770K CPU
Asus P8Z77-V Pro motherboard
Cooler Master 212+ CPU cooler
G.Skill Ripjaws 2x8GB RAM
Samsung 830 256GB SSD
Cost: $957

ATI Radeon 4870 video card
Cooler Master Cosmos 1000 case
Corsair TX850W PSU
Cost: about $300; these were reused from my previous system
kaelaria (Sr. Member, Posts: 2226)
« Reply #204 on: November 20, 2012, 07:25:26 AM »

Another happy user sees the truth :)

NigelC (Sr. Member, Posts: 515)
« Reply #205 on: November 20, 2012, 07:59:30 AM »

I don't wish to participate in the ongoing ping-pong game, but I have a Vista 64 Q6600 2.4 overclocked to 3GHz and 8GB RAM. I very recently loaded LR4.1 and it's a bit slow. I am close to pensioning the machine off (Vista Home Premium limitations as much as anything else), and as I can't currently stretch to a new computer, I plan to run my ThinkPad W520 with an external monitor/keyboard. It has a more modern Core i7 2.2 and a better graphics card, but I'm guessing things would run faster upgrading the RAM from 8 to 16GB, or even maxing out at 32GB?
acktdi (Jr. Member, Posts: 64)
« Reply #206 on: November 20, 2012, 09:07:57 AM »

I was watching Resource Monitor while LR ran; it's mostly the CPU getting pegged at 100%. The memory wasn't as important. Memory is cheap these days, but I don't think adding more will improve performance very much.
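
If anyone wants to reproduce this check outside Resource Monitor, here's a rough Python sketch using the psutil package (an extra install; the process name "Lightroom.exe" is my guess, adjust it for your setup). Note that the per-process CPU figure can read above 100% when more than one core is busy.

Code:
# Poll Lightroom's CPU and memory use once a second to see which one is pegged.
# Requires: pip install psutil
import psutil

def find_proc(name="Lightroom.exe"):
    # Scan running processes for the first one matching the given name.
    for p in psutil.process_iter(["name"]):
        if p.info["name"] == name:
            return p
    return None

proc = find_proc()
while proc is not None and proc.is_running():
    cpu = proc.cpu_percent(interval=1.0)   # % of one core; >100 means multiple cores in use
    mem = proc.memory_info().rss / 2**20   # resident set size in MiB
    print(f"CPU {cpu:6.1f}%   RAM {mem:8.1f} MiB")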
JRSmit (Sr. Member, Posts: 361)
« Reply #207 on: November 20, 2012, 09:45:42 AM »

Quote from: acktdi on November 20, 2012, 09:07:57 AM
I was watching Resource Monitor while LR ran; it's mostly the CPU getting pegged at 100%. The memory wasn't as important. Memory is cheap these days, but I don't think adding more will improve performance very much.
Same observation here; memory speed can make some difference. Same for drive speed, but not a lot.
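
A crude way to sanity-check drive speed is to time a big sequential read. A minimal Python sketch, assuming you point it at any large existing file on the drive in question (the path below is a placeholder, and the OS cache will flatter a second run):

Code:
# Time a sequential read in 4 MiB chunks and report throughput.
import time

PATH = "testfile.bin"  # placeholder: any large file on the drive to test
CHUNK = 4 * 2**20

start = time.perf_counter()
total = 0
with open(PATH, "rb") as f:
    while chunk := f.read(CHUNK):
        total += len(chunk)
elapsed = time.perf_counter() - start
print(f"{total / 2**20:.0f} MiB in {elapsed:.2f}s -> {total / 2**20 / elapsed:.0f} MiB/s")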

Fine art photography: www.janrsmit.com
Courses and workshops: www.centrumbeeldbeleving.nl

Jan R. Smit
hjulenissen (Sr. Member, Posts: 1677)
« Reply #208 on: November 20, 2012, 03:03:37 PM »

Quote from: Bart
Puzzling indeed. LR being an application that does parametric updates to the final image display (on screen or at the ultimate output size), it should be obvious that size matters (as do things like lens corrections). The only thing that could reduce (not eliminate) the dependency on the display (monitor) size is that more than just the zoomed-in area of the image needs to be rendered.

Cheers,
Bart
I don't know how the Lightroom pipeline is set up, but I can easily see that it might work at a fixed (camera-native) resolution all of the way until the final rendering/scaling.

If that is the case, the difference between a 4MP display and a 1MP display should be a 4x increase in scaler complexity and output buffer size. Doesn't sound too expensive to me?
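
To put rough numbers on that, a quick sketch; the 3-channel, 16-bit-per-channel working buffer is my assumption, since Lightroom's actual internal format is not public:

Code:
# Back-of-the-envelope output buffer sizes for a 1 MP vs 4 MP display,
# assuming an RGB buffer at 16 bits per channel (an assumption, not
# Lightroom's documented internal format).
BYTES_PER_PIXEL = 3 * 2  # 3 channels x 2 bytes

for name, pixels in [("1 MP display", 1_000_000), ("4 MP display", 4_000_000)]:
    mib = pixels * BYTES_PER_PIXEL / 2**20
    print(f"{name}: {mib:.1f} MiB per buffer")
# 1 MP display: 5.7 MiB per buffer
# 4 MP display: 22.9 MiB per buffer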

-h
hjulenissen (Sr. Member, Posts: 1677)
« Reply #209 on: November 20, 2012, 03:07:47 PM »

Quote
There does seem to have been some improvement with the recent updates.
I doubt we'll see anything really dramatic until (if?) someone at Adobe manages to utilise the currently unused power of GPUs for processing, as they did so successfully in Premiere Pro CS5.
GPU processing tends to give a "two orders of magnitude" speedup for carefully selected floating-point, infinitely threadable scientific tasks where the scenario is unfairly rigged by Nvidia/AMD. Think physical modelling, Monte Carlo simulations and such.

For more realistic tasks, the speedup tends to be far more modest, while the increased complexity, potential for nasty bugs, test matrix, difficulty of hiring developers, etc. can be significant.

If the pipeline is fixed-point and hard to thread into hundreds of highly separate tasks, the speedup may be negligible.

Did anyone try running Lightroom on a 16-core x86 system? It is significantly easier to thread most applications for that hardware. If Adobe did not bother, or was not able, to exploit moderately large numbers of x86 cores, I don't see them doing anything useful on GPUs.
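
To illustrate why core count alone is no guarantee, a quick Amdahl's-law sketch; the parallel fractions below are illustrative assumptions, not measurements of Lightroom:

Code:
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the fraction of
# the work that parallelises and n is the number of cores/threads.
def amdahl(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.50, 0.90, 0.99):          # assumed parallel fractions
    for n in (4, 16, 1000):           # 1000 ~ a GPU's worth of threads
        print(f"p={p:.2f} n={n:5d} -> speedup {amdahl(p, n):6.1f}x")
# With p=0.50, even 1000 threads gives barely 2x; only near p=0.99 do
# GPU-sized thread counts start to pay off.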

-h
« Last Edit: November 20, 2012, 03:09:34 PM by hjulenissen »