Pages: « 1 ... 4 5 [6] 7 8 ... 11 »
Author Topic: LR4 speed totally unacceptable  (Read 43344 times)
John R Smith
Still crazy, after all these years
« Reply #100 on: May 29, 2012, 02:34:51 AM »

Not here. It ran smoothly straight from release.
I never had any serious problems with previous versions since beta 1.

You must be about the only one, then. Note that you can download every version of LR 3x from the Adobe site, except 3.0.

John

Hasselblad 500 C/M, SWC and CFV-39 DB
and a case full of (very old) lenses and other bits
Keith Reeder
« Reply #101 on: May 29, 2012, 02:39:34 AM »

You must be about the only one, then. Note

No - I've had no speed/performance problems with Lr 3 or with Lr 4.

Keith Reeder
Blyth, NE England
John R Smith
« Reply #102 on: May 29, 2012, 06:07:41 AM »

Well, good for you guys then. It's a pity we can't magically go back to the pages of Adobe Forums Lightroom at the time LR 3.0 was released. There were a whole load of pretty traumatised users on there who had been happily using 2.6 and 2.7 and who could barely use their shiny new upgrade.

The point I was trying to make was that when I upgraded to LR 3.0 from 2.7 on my quite basic Win 7 laptop, I had terrible trouble. It crashed at unpredictable times. If I used perspective correction I could barely do anything to the file afterwards, and it was generally as slow as paint drying. Local edits with the brush were virtually unusable. This was with my quite big 39 MP Hasselblad 3FR RAW files.

I did absolutely nothing to upgrade my PC or alter the cache sizes or anything else whatsoever. On the same laptop, with the same files, by the 3.5 release everything was fine. Smooth, quick, no crashes, perspective correction no problem. Now it seems to me somebody must have altered something, and it was most likely the clever folks at Adobe. Hang in there, ye of little faith, and wait for 4.5  Wink

John

john beardsworth
« Reply #103 on: May 29, 2012, 06:09:54 AM »

I suspect you're letting your personal experience colour your memory. More people have had more problems with 4.0 than with 3.0 - with 3.0 there was no need to rush out a 3.1.
« Last Edit: May 29, 2012, 06:26:39 AM by johnbeardy »

Alan Goldhammer
« Reply #104 on: May 29, 2012, 12:43:17 PM »

I'm finally putting LR4 through its paces, having come back from two weeks in Italy with about 800 images.  The only thing I've found a little slower is the Library module as I go through and assign stars/flags to images for further work, but it's only a tad slower.  All other functions seem to be fine.

John R Smith
« Reply #105 on: May 29, 2012, 01:22:56 PM »

I suspect you're letting your personal experience colour your memory. More people have more problems with 4.0 - there was no need to rush out a 3.1.

Well, I don't want to get into a silly argument over this. Of course my comments were based on my personal experience; that's the only thing I can give you a first-hand account of. But there were a lot of other folks in the same boat - allow me to cite the thread "Why is Lightroom 3 So Slow?" on the Adobe forum, which got 124,595 views and 1,198 replies before being locked.

http://forums.adobe.com/thread/656635

There never was an LR 3.1; it went straight to 3.2. There was also a horrible bug in release 3.4, so there was a hurried issue of 3.4.1 which sorted things out. By 3.5 the software actually started to work rather well. If LR 3.0 was so wonderful, why can't you download it from the Adobe site?

http://www.adobe.com/support/downloads/product.jsp?product=113&platform=Windows

John

john beardsworth
« Reply #106 on: May 29, 2012, 01:48:37 PM »

There wasn't a 3.1 because they wanted to sync the numbering scheme with Camera Raw, and 3.2 wasn't rushed out - it came out on the normal schedule.
« Last Edit: May 29, 2012, 01:57:41 PM by johnbeardy »

John R Smith
« Reply #107 on: May 29, 2012, 02:23:30 PM »

There wasn't a 3.1 because they wanted to sync the numbering scheme with Camera Raw, and 3.2 wasn't rushed out - it came out on the normal schedule.

With all due respect, John, I didn't say anything about 3.2 being rushed out. And I am aware of why it was 3.2 rather than 3.1; there's nothing sinister there. In fact, rather than a rush, it seemed an age to me waiting for an upgrade to 3.0. I was having so much pain that I thought seriously about dumping 3.0 and going back to 2.7.

But like I said, by 3.4.1 and on to 3.5 things were running really well. As ever, your mileage may vary.

John
« Last Edit: May 29, 2012, 02:25:26 PM by John R Smith »

john beardsworth
« Reply #108 on: May 29, 2012, 02:54:54 PM »

As I said, your bad experience must be colouring your impression of 3.0's performance - that's why I pointed to the significance of 4.1 being rushed out and there not being any need to do so last time round.

Isaac
« Reply #109 on: May 29, 2012, 09:50:02 PM »

I understand that I don't have a "flag ship" machine.  I keep my "working" catalog small, usually less than 500 images.  I also don't expect it to POP.  I think I have realistic expectations; I had hoped that someone might have found an issue I could deal with.

Perhaps someone already mentioned ReadyBoost, if not ...

8-16 GB of USB 2.0 flash memory is now very cheap, and can be used as a fast cache for disk access -- which, depending on the particulars of your hard drive, has the potential to speed up most programs you use.
Rhossydd
« Reply #110 on: May 30, 2012, 02:44:04 PM »

so 4.1 looks to have resolved most of 4's problems from what I've seen so far.

All this talk of esoteric system optimisation is really unnecessary; the real problem was down to the coding.
Steve Weldon
« Reply #111 on: May 31, 2012, 01:58:33 AM »

Steve - By no meaningful improvement, I mean no measurable difference using a stopwatch. In other words, it made no difference whatsoever. As I suspected, when a GPU monitor shows zero percent usage of the GPU, a faster graphics card isn't going to help.

I think the main issue is still screen size.
In the develop tab my primary screen is driving 70% more pixels in the rendered image area, that's what accounts for any differences.

1.  Thanks for the clarification.  Out of curiosity, as you change from screen to screen in any program - just changing screens - do you see "any" difference in how fast the screens render with different cards?  I don't think the difference will be great enough for you to time with a stopwatch, but you should be able to see it with your eyes.  This is the difference you normally get in screen rendering from a faster video card which isn't tasking the GPU.  It will be there, but some don't notice it until they look for it.

2.  It is certainly possible the primary screen renders differently than the secondary screen, for certain it renders after the secondary screen.  We can watch that happen with our eyes.  For this particular area of LR we're discussing, you believe the time it takes for the "loading..." to complete, is mostly a function of the CPU?  I've wanted a PA271w.. tempted to see this testing thing through that far..

----------------------------------------------
http://www.BangkokImages.com
Steve Weldon
« Reply #112 on: May 31, 2012, 02:09:37 AM »

Interesting - I read your post and thought I'd try that with the exposure slider.  To give a bit of background, I've never actually measured the rendering time on my system as I have no complaints with the performance.  I opened an image that I haven't previously had in the develop module, although upon import I do have a base set of adjustments applied (including sharpening and noise reduction).  Loading the image in develop took something less than, but very close to, a full second, with the second screen actually showing the image instantly.  Once loaded I "yanked" the exposure slider from -5.0 to 5.0 several times.  Too quick to measure the time of the "yank", but probably 0.10 seconds or less?  The effect on the screen was almost as immediate, maybe another 0.10 seconds at most.  My second screen was slightly behind the first, but the whole process with both screens is far less than a second and very smooth.

Okay, for my system the largest time bottleneck seems to be when moving from image to image, and more so if the image has not been previously loaded in the develop module.  It's not a large one but it's there.  So what hardware resource is being used for that particular function?  I opened the resource monitor to take a look while moving around.  The only truly significant action I see happening is processor usage.  It jumps from a baseline of approximately 4% usage to a range somewhere between 45-55% usage as I move from one image to another.  You also see a bit of disk read, but frankly it is just a blip and never registers as a percentage in the monitor.  Same with memory.
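The resource-monitor check described above can be reproduced in a few lines of code: if a process's CPU time keeps pace with wall-clock time while an operation runs, the operation is CPU-bound; if wall time is much larger, the process is mostly waiting on something else, such as disk. A minimal Python sketch - the two workloads below are stand-ins, not Lightroom calls:

```python
import time

def cpu_fraction(workload):
    """Run a callable and return process CPU time / wall-clock time.

    A ratio near 1.0 (or above, on multi-threaded work) means CPU-bound,
    i.e. what the resource monitor shows as high processor usage; a
    ratio near 0 means the time went to waiting (disk, network, sleep).
    """
    wall_start = time.perf_counter()
    cpu_start = time.process_time()
    workload()
    wall = time.perf_counter() - wall_start
    cpu = time.process_time() - cpu_start
    return cpu / wall if wall > 0 else 0.0

# CPU-bound stand-in for image rendering: a tight arithmetic loop
busy = lambda: sum(i * i for i in range(2_000_000))

# I/O-style wait: burns wall-clock time but almost no CPU
idle = lambda: time.sleep(0.2)
```

Running `cpu_fraction(busy)` gives a ratio close to 1, while `cpu_fraction(idle)` stays near 0 - the same signature the resource monitor shows for Lightroom's develop-module rendering versus disk-bound work.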

This leads me to believe what has been written multiple times: that Lr is very processor intensive, which makes a tremendous amount of sense to me and always has.  We know that the develop module stores a basic preview of an image in the cache after the first use, with basically nothing but the de-mosaicing performed.  All other adjustments applied to the image have to be rendered each time the image is accessed.  That takes processing power.  The image appears on screen instantly, but the "loading" message hangs around till the processor has completed this rendering task.

For me personally, it would appear that the only way I would see faster performance is by increasing processor performance.  If I did so, I might eventually see a bottleneck elsewhere, but from the basic monitoring I've done thus far I have a pretty big window before that happens.


What about the people that are having significant performance issues now?  Especially those that are reportedly running more powerful processors than mine (i7-930 overclocked to 4 GHz)?  Most of the complaints I've read have been about rendering time and/or smooth operation of adjustment sliders.  For the most part the complaints are from those that were running Lr3 just fine.  We know there were significant changes to the process versions, some of them very complicated, between the versions of Lr.  If we believe these differences and issues to be processor-path related, then perhaps processor/memory transfer speed or memory speed itself comes into play?  Although I haven't done the math, that seems unlikely to me, but who knows.  FWIW I'm running 12 GB of DDR3-1300 memory overclocked to ~1550 MHz.

Since my disk system is all based on 7200 rpm drives, I'm having a hard time thinking that these issues have anything to do with disk access.  I'm set up with the operating system (Win 7 64), the Lr software, the catalog, and the previews on one disk.  The Lr RAW cache is on another single disk.  The images themselves are on a 4-disk array in RAID 10.

I see the same basic performance whether I'm working on 12 MP 5D files, 21 MP 5DII files, or scanned TIFFs at over 200 MB in size.  Someone pointed out to me that my huge film scans aren't the same thing as a RAW file of the same size, primarily due to the de-mosaicing needed with the digital capture.  I understand how that can make a difference on the initial rendering in develop, but once the cache preview is generated (which includes de-mosaicing for the digital file) the differences between the two should be minimal, I would think.  All other Lr adjustments have to be performed each time for each type of image, to my knowledge.


Also, I ran the above tests on my system as I normally use it, meaning I have a browser open with 7 or so tabs, TOPO software, Outlook, Excel, and Quicken financial software running (but idle) at the same time. 

 

Nice post, well articulated.

1.  Your system is setup mostly like mine and seems to behave the same as well.

2.  Yes, it is CPU intensive, and this blame can only be laid on Adobe's doorstep.  Most imaging professionals and even amateurs have more powerful GPUs and SSDs in their systems - and if they didn't, they would if it significantly helped performance.  It's past time Adobe started supporting modern computers and stopped depending mostly on CPU performance.

3.  No, not the "only" way, but the primary way.  Small performance increases can be realized through the use of more powerful GPUs and SSDs, but only at very specific points in the workflow.  If you have a workflow that hits those points a lot, it 'might' pay to increase your resources in these areas.

4.  I'd agree, the more common issues we hear complaints about would not benefit (much) from faster I/O times.  However, there are areas in the workflow where they would.

Steve Weldon
« Reply #113 on: May 31, 2012, 02:27:57 AM »

If that is the case then Adobe are at fault for not delivering a package that works as well as expected for people using "out of the box" computers from Dell, HP, Apple, etc.  Similarly, fault would lie with hardware manufacturers for not delivering systems optimally tuned.  If hardware vendors delivered systems to the public in such a poorly configured state then you can be sure that websites would review them badly and, slowly but surely, reputation and sales would suffer.

Whilst you may enjoy tweaking and measuring the performance of everything on your system, the average person isn't going to - and nor should they need to.  Nor should they need to engage someone to help them tune their system - if they want to, fine, but it shouldn't be necessary, as it should just work out of the box and work well.

Thus I would hope that the testing Adobe does uses "off the shelf" computers from various companies and evaluates the performance of their applications in that way, rather than using custom builds that do not reflect what the average consumer uses.

Now it may be that in their testing, Adobe have made various assumptions in their application environment that actually match up very well with what people here are doing or that some settings (such as the automatic update of XMP files) are set the wrong way. I don't know, so I'm just guessing. Similarly, there may be changes in the application that for one reason or another slow it down in ways that they didn't expect.

But what is very clear from various messages in this thread is that Adobe have some homework to do because they've created performance issues that faster hardware is only going to hide for a short period of time.

1.  Yes and no.  Obviously they can always do better, and it appears in this case particularly.  But in general, and I've used this analogy before, image processing is a very hardware intensive task.  And so is gaming, video editing, and CAD/CAM, but these three areas have long benefited from custom builds and builders with specialized knowledge.  Where before, most image editors processed one task on one image at a time, LR is now doing much more.  From this perspective it doesn't surprise me at all that image editing at the bleeding edge now needs like attention and expertise.

2.  Again, I'll invoke the gaming analogy.  With games you don't "need" a powerful system, you don't need to hire a builder, etc., but we've long accepted that performance is directly linked to your system.  Lightroom image editing has now reached this stage.  Where we're being let down by Adobe is that they haven't written the code/processes to take advantage of the modern GPU, SSDs, RAM, etc., nearly as much as they could have.  They should.  And when they do, they'll probably make a huge deal of it and double the price..

3.  I think we would benefit either way.  I'd like to see them test/benchmark using "levels" of systems.  Say, a 3770, 16 GB of 2133, a 300 MB/s R/W SSD for the system, a 100 MB/s R/W data drive, hybrid catalog, and a 6800/560-series GPU at one level; a dual-core i5/4 GB laptop at another; and so on.  I'm sure we could easily agree on the necessary levels and where to place them.  Then someone would know what to expect for a given "class" of machine vs. a specific model.  And I'd bet the farm we'd have scores of independent bloggers, builders, etc. verifying their benchmarks one way or the other and/or coming up with their version of an upgrade and what difference it would make.


We need to grow used to the fact that image editing requires considerable hardware, and that increased performance is directly tied to more powerful hardware.  Image files are getting bigger, more of us are taking on video, and improved software in general - in the sense that we're offered functions not previously offered - ALL takes more hardware.  More, it requires more hardware working together, which makes the "build" and "tweak" all that much more vital.

It's been an interesting educational thread..

Steve Weldon
« Reply #114 on: May 31, 2012, 02:29:31 AM »

There do seem to be a lot of short memories around here. LR 3.0 was an absolute dog when it was first released. It was slow, it crashed, it did all sorts of horrible things on my very basic laptop Win 7 PC. Each dot release got a bit better, and now vers 3.6 is running on the same machine as sweet as a nut with my big 3FR Hass files. The same thing will happen with LR 4 in due course, mark my words. Don't sweat it upgrading anything, just hang on for 4.5  Wink

John
+1... I remember it well.  And with more people than ever using, supporting, and writing about LR, it 'seems' a bigger problem.  I'm not convinced it is.

dreed
« Reply #115 on: May 31, 2012, 06:30:53 AM »

1.  Yes and no.  Obviously they can always do better, and it appears in this case particularly.  But in general, and I've used this analogy before, image processing is a very hardware intensive task.  And so is gaming, video editing, and CAD/CAM, but these three areas have long benefited from custom builds and builders with specialized knowledge.

I really don't know why you keep flogging this "specialized knowledge." Every teenage kid that's into gaming knows what's important and what's required to build a good gaming system. For games and video, it is all about the video card that you can and do put in the system. To a certain extent, CAD/CAM is as well but not as much as it used to be.
D Fosse
« Reply #116 on: May 31, 2012, 09:10:16 AM »

Quote
From this perspective it doesn't surprise me at all that image editing at the bleeding edge now needs like attention and expertise

This is just nonsense. Software that doesn't work properly on an off-the-shelf system is badly written, because that's what 95% of the customers will run it on, no matter what people with "specialized knowledge" would like to think.

Take Photoshop as an example. It's also hardware-intensive (although in different ways than Lr), and clearly benefits from a high-performance system. But it basically works and is perfectly snappy on any cheap laptop you pick up at your local dealer.

There's a difference between basic functionality and optimized performance. What people have been complaining about in Lightroom is the "basic functionality" part.

Anyway, let's see what 4.1 has to offer. So far it looks good here.
CatOne
« Reply #117 on: May 31, 2012, 08:13:30 PM »


Take Photoshop as an example. It's also hardware-intensive (although in different ways than Lr), and clearly benefits from a high-performance system. But it basically works and is perfectly snappy on any cheap laptop you pick up at your local dealer.


Note though that comparing Lightroom and Photoshop performance isn't fair.  They are doing totally different things.

Parametric editors like Lightroom (or Aperture) start with the rendering of the original RAW file, and then apply ALL the effects dynamically over the image.  This is a TREMENDOUS amount of math and work once you add a number of complicated adjustments to the image.  Photoshop actually modifies the pixels, so it's just displaying the image.  It is WAY less work for the computer to do.
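The distinction can be sketched in a few lines of Python. The adjustment functions below are illustrative stand-ins, not either application's actual pipeline; the point is only where the cost lands - a parametric editor re-runs the whole edit stack on every render, while a destructive editor pays once at edit time and then merely displays pixels.

```python
# Each edit is stored as a function over pixel values (0.0-1.0),
# not baked into the pixels. Names are illustrative, not Lightroom's API.
def exposure(ev):
    return lambda px: [v * (2.0 ** ev) for v in px]

def contrast(amount):
    return lambda px: [(v - 0.5) * amount + 0.5 for v in px]

class ParametricImage:
    """Lightroom-style: every render re-applies the whole edit stack."""
    def __init__(self, base):
        self.base = list(base)
        self.edits = []

    def add_edit(self, edit):
        self.edits.append(edit)            # cheap: just records an instruction

    def render(self):
        px = list(self.base)               # the expensive part: redo all the
        for edit in self.edits:            # math on the base pixels every
            px = edit(px)                  # time the image is displayed
        return [min(1.0, max(0.0, v)) for v in px]

class BakedImage:
    """Photoshop-style: pixels are modified once; display is a lookup."""
    def __init__(self, base):
        self.pixels = list(base)

    def apply(self, edit):
        self.pixels = edit(self.pixels)    # cost paid once, at edit time

    def render(self):
        return self.pixels                 # showing the image is nearly free
```

Both produce the same picture, but `ParametricImage.render()` gets more expensive with every adjustment added, which matches the complaint that sliders bog down as edits accumulate.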

Tony Jay
« Reply #118 on: May 31, 2012, 08:15:06 PM »

Interesting point!

Regards

Tony Jay
dreed
« Reply #119 on: May 31, 2012, 10:06:01 PM »

...
2.  Yes, it is CPU intensive and this blame can only be laid on Adobes doorstep.  Most imaging professionals and even amateurs have more powerful GPU's and SSD's in their systems, and if they didn't they would if it significantly helped performance.  It's past time Adobe starts supporting modern computers and stops depending mostly on CPU performance.
...

I'm not entirely sure that this is possible.

For example, unless you know the specifics of how Lightroom demosaics an image, I can't see how you can make the above claim. Remember that it is in the "Loading..." stage of develop that most people see noticeable delay, and that this therefore is where performance needs to be improved. At least in my doodles, there is no way that I can see the GPU being able to assist Lightroom in the demosaic process. If you have detailed knowledge of the demosaic algorithm used and how it can be assisted by the GPU, I'd be very interested to see it.
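For readers wondering what a demosaic even does, here is the crudest possible version: a "superpixel" sketch that collapses each 2x2 RGGB Bayer block into one RGB pixel. This is emphatically not Lightroom's algorithm (which, as noted, is not public); production converters use far more elaborate, data-dependent interpolation, and whether that interpolation maps well onto a GPU is exactly the open question in the post above.

```python
def demosaic_superpixel(raw):
    """Collapse each 2x2 RGGB Bayer block into one RGB pixel.

    `raw` is a list of rows of sensor values laid out as
        R G R G ...
        G B G B ...
    Returns a half-resolution list of rows of (r, g, b) tuples.
    """
    out = []
    for y in range(0, len(raw) - 1, 2):
        row = []
        for x in range(0, len(raw[y]) - 1, 2):
            r = raw[y][x]                             # top-left: red
            g = (raw[y][x + 1] + raw[y + 1][x]) / 2.0  # average of two greens
            b = raw[y + 1][x + 1]                      # bottom-right: blue
            row.append((r, g, b))
        out.append(row)
    return out
```

Even this toy version touches every sensor value once; real algorithms interpolate each missing colour from a neighbourhood of pixels, which multiplies the per-image arithmetic considerably.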