Author Topic: LR4 speed totally unacceptable  (Read 49626 times)
Steve Weldon
« Reply #80 on: May 21, 2012, 09:12:44 PM »

Quote from: Sheldon N: I don't know if the "total real estate" calculation is the best way to look at it. After all, we know that the CPU is involved in rendering the image to be displayed in the Develop tab, and it doesn't have to render the image twice just because it's being simultaneously displayed on both screens. Rendering a larger image once is likely going to take longer than rendering a smaller image and then displaying it twice on multiple screens.

Anyhow, since I'm trying to get my computer build dialed in to my satisfaction before sitting tight and using it for the next 3-4 years, I wanted to explore this graphics card issue further. I found a good deal on an AMD Radeon HD 6950 video card, which is a huge step up in GPU performance from the AMD Firepro v4900. It will arrive later this week and I'll get a chance to try the two cards back to back. Then I'll know if the GPU really makes much difference for LR 4. If it does, then I'll have to decide if I want the better performance or if I want to be able to use the 10-bit display capabilities of the Firepro + U2711 combo in Photoshop.

1.  I'm not sure either, but since it was brought up as a variable I mentioned they were approximately the same. I do think the rendering of screen 1 is separate from the rendering of screen 2; even though the same image might be displayed on both of them, often it isn't. It depends on your workflow.

2.  I'm really curious as to your results. "Much difference" is an opinion; actual times between renderings are more factual, and whether those times will be worth it to you would be another opinion. When I tested my different cards I rebooted twice between each card install. Hope you find it worth it.

Will post here once I've had a chance to try out the new card.

----------------------------------------------
http://www.BangkokImages.com
Phil Indeblanc
« Reply #81 on: May 21, 2012, 09:36:47 PM »

Looking forward to it.

Oddly enough, my #2 screen gets priority display; the main screen with the controls follows a second after.

It would be a great option if you organize (Library) with both screens, which I do. When switching to the Develop window, the second screen should be able to close out. Having this option would be very helpful and would speed up editing. And if the second screen is needed, you can always bring it up. For now I do it manually, and I often forget; it's irritating to deal with the lag for the first few images and then realize, oops, I have both screens up... hence the lag.

If you buy a camera, you're a photographer...
Sheldon N
« Reply #82 on: May 25, 2012, 03:59:07 PM »

So I had a chance to try out a Radeon HD 6950 graphics card with Lightroom 4.1RC. I saw no meaningful improvement in how fast I could move from image to image in the Library or Develop tabs, and no change in how smoothly the sliders worked or how fast the screen redrew as you moved a slider. This was in comparison to an AMD Firepro v4900, a workstation-class card, but only about as powerful as a Radeon HD 6670 in terms of pure GPU power. For reference's sake, the difference between a 6670 and a 6950 is about 300% in GPU processing power, i.e., the 6950 is 3 times faster.

I'd say that once you've selected a decent video card for Lightroom, there's no benefit to getting a super high-end card. This is consistent with my observation from the GPU usage monitor: LR isn't using the GPU for rendering or calculation. The "horsepower" required to simply redraw the screen is very minimal compared to what even a mid-grade modern video card is capable of, so there's no reason to assume that a faster video card is going to make it better.

Overall, LR4 runs great on my computer (i7 3770K @ 4.4 GHz, 16 GB RAM, dual SSDs). Just wanted to see if I'd get any bump by improving the video card. Doesn't look like that's the case, so I'll be returning the 6950.
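A back-to-back card comparison like this is easiest to trust when every run is timed the same way. A minimal, hypothetical Python sketch of a repeat-timing helper (the function name, repeat count, and dummy workload are my own; nothing here is a real Lightroom hook):

```python
import time
import statistics

def time_operation(fn, repeats=5):
    """Run fn() several times; return per-run seconds and the mean.

    fn stands in for any repeatable action you can script or trigger;
    here it's just a placeholder workload.
    """
    runs = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn()
        runs.append(time.perf_counter() - t0)
    return runs, statistics.mean(runs)

# Example with a dummy workload:
runs, mean = time_operation(lambda: sum(range(100_000)), repeats=3)
print(f"{len(runs)} runs, mean {mean:.4f}s")
```

Reporting the spread alongside the mean makes it obvious whether a "no difference" result is real or just noise between runs.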

leuallen
« Reply #83 on: May 25, 2012, 04:10:08 PM »

FWIW, on a fairly high-spec machine, performance was horrible with 4.2RC until I turned off 'Automatically write changes to XMP'. Then things sailed along with about the same performance as 3.6.
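A toy cost model of why that setting can drag: with auto-write on, every adjustment queues a sidecar write, versus one write when you save manually. The per-write millisecond figure below is an assumed number for illustration, not a measured Lightroom cost:

```python
# Assumed per-file XMP sidecar write cost, in milliseconds (illustrative only).
WRITE_COST_MS = 15

def xmp_overhead_ms(edits, auto_write):
    """Total sidecar-write time for a session of `edits` adjustments."""
    writes = edits if auto_write else 1
    return writes * WRITE_COST_MS

print(xmp_overhead_ms(200, auto_write=True))   # 3000 ms spread across the session
print(xmp_overhead_ms(200, auto_write=False))  # 15 ms, once at the end
```

Even a small per-write cost multiplies quickly when it fires on every slider tweak, which matches the "sailed along" difference once the option is off.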

Larry
Steve Weldon
« Reply #84 on: May 25, 2012, 09:09:56 PM »

Quote from: Sheldon N (Reply #82): So I had a chance to try out a Radeon HD 6950 graphics card with Lightroom 4.1RC. I saw no meaningful improvement [...] so I'll be returning the 6950.

1.  Curious how you're defining "no meaningful improvement"? Any improvement to that 1.5-second time?

2.  In my testing, the 5970 (the highest-end dual-GPU card a few years back) only made a difference over a mid-range card in how fast my second screen showed the changes from moving a slider.

3.  Out of curiosity, have you tried running your CPU and RAM at default clock speeds? It would be odd, but timing could be part of this.

It would be very interesting to put our machines side by side and see first if our observations are being described in the same way, and if so, then find out why you're showing slower performance in the area discussed. There are many variables involved, so we'd have to go through them one by one.

For the record, my XMP file write is off.
John Cothron
« Reply #85 on: May 25, 2012, 09:33:23 PM »

Quote from: Sheldon N (Reply #82): So I had a chance to try out a Radeon HD 6950 graphics card with Lightroom 4.1RC. I saw no meaningful improvement [...] so I'll be returning the 6950.

Makes sense to me. I'm running a GeForce 9800 bridged to a GTS 250 (not the latest and greatest by any means) and I'm not seeing any performance issues to complain about running two 24" widescreens.

Rest of the system: i7 930 at 4 GHz, 12 GB memory, no SSDs.

FWIW, I'm not auto-writing XMP either.

dreed
« Reply #86 on: May 25, 2012, 10:40:47 PM »

Quote from: Sheldon N (Reply #82): So I had a chance to try out a Radeon HD 6950 graphics card with Lightroom 4.1RC [...] there's no reason to assume that a faster video card is going to make it better.

Great report! Makes complete sense.

Quote from: leuallen (Reply #83): performance was horrible with 4.2RC UNTIL I turned off 'Automatically write changes to XMP' [...]

Where does that setting hide? I'll have to go looking for it because that will have an impact on things.
« Last Edit: May 25, 2012, 10:44:47 PM by dreed »
Steve Weldon
« Reply #87 on: May 25, 2012, 11:05:15 PM »

Quote from: dreed (Reply #86): Great report! Makes complete sense.

We need to be careful not to accept what we want to believe without a careful and complete evaluation of the testing procedures and of the additional variables that could be factoring into the results. A "great" report would have addressed as many variables as reasonable. In other words, if you assume the system is operating optimally, then the report is fine; but if you leave open the possibility that it isn't, then it leaves room for improvement. Whether or not he wants to explore these possibilities will be key. But it is more than possible his system's bottleneck hasn't reached the video stream yet.

From the information given we know several things:

My system, which is lower powered, is going from image to image faster than his system, and without issues with the sliders.

My system is using two monitors powered by DVI connections and making use of the monitors' internal LUTs. I don't know whether this would make a performance difference or not, but I'd investigate the possibility.

His system is overclocked. Mine isn't. Timing conflicts would be unusual but not unheard of.

His system is using two SSDs; both should be faster than my same-brand, last-generation SSD, but because he has the lower-performing 128 GB versions that might be negated. What's more important is how these SSDs are set up: what controller is driving them, what driver he's using, whether they're set up as AHCI in the BIOS, and of course how full they are and what numbers are being achieved in the more popular utilities to confirm they're operating at their best. And concerning setup, where the caches are, etc.

And of course the all-important setup of caches, LR, and possible conflicting CPU/RAM tasking.

It's time consuming, but if I were him I would not stop until I figured out why he's having the performance issue with his sliders. There are too many others running lesser systems without this issue. Once this is determined, then we have a new bottleneck, or set of bottlenecks. Whether or not his system, when running optimally, could benefit from new hardware can only be determined at that point. I'm more than curious.
Steve Weldon
« Reply #88 on: May 25, 2012, 11:07:01 PM »

Quote from: dreed (Reply #86): Where does that setting hide? I'll have to go looking for it because that will have an impact on things.

Catalog Settings, under the Metadata tab.
dreed
« Reply #89 on: May 26, 2012, 06:30:58 AM »

Quote from: Steve Weldon (Reply #87): We need to be careful not to accept that which we want to believe without a careful and complete evaluation of the testing procedures [...] Whether or not his system, when running optimally, could benefit from new hardware, can only be determined once at that point. I'm more than curious.

It's really not that hard to understand.

Quote from: Sheldon N: I saw no meaningful improvement in how fast I could move from image to image in the library or develop tabs, no change in how smoothly the sliders worked or how fast the screen redrew as you moved a slider.

Let's see, which of those tasks would benefit from a faster graphics card? Well, not moving from image to image in Library or Develop, as both of those tasks are dominated by the time it takes to load in and prepare the new image. The amount of work that needs to be done to prepare the data to be displayed far outweighs what the graphics card itself needs to do, which is quite possibly nothing with modern products.

As for the sliders and screen redraw, once they're "fast enough", how are you going to notice any improvement? Once the motion is smooth, any increase in the speed of the graphics card is not going to make the "smooth" motion "more smooth" in a manner that is perceptible.
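That "fast enough" threshold can be sketched as simple arithmetic: once a redraw fits within one display refresh, a faster card cannot make the motion look smoother. The frame budget and redraw times below are illustrative assumptions, not measurements:

```python
# One frame at a 60 Hz refresh rate, in milliseconds (assumed display).
FRAME_BUDGET_MS = 16.7

def upgrade_is_perceptible(old_redraw_ms, new_redraw_ms):
    """True only if the old card missed the frame budget and the new card makes it.

    If both cards already redraw within one frame, the upgrade is invisible.
    """
    return old_redraw_ms > FRAME_BUDGET_MS >= new_redraw_ms

print(upgrade_is_perceptible(40.0, 8.0))  # True: jerky becomes smooth
print(upgrade_is_perceptible(8.0, 2.0))   # False: smooth stays smooth
```

This is the same ceiling effect as the internet-connection analogy later in the thread: improvements below the perception threshold don't register.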
Steve Weldon
« Reply #90 on: May 26, 2012, 02:42:38 PM »

Quote from: dreed (Reply #89): It's really not that hard to understand. [...] any increase in the speed of the graphics card is not going to make the "smooth" motion "more smooth" in a manner that is perceptible.
1.  Perhaps not for some, but I think few people understand the inner workings of a computer well enough to set a machine up to its optimum level. I suspect this is the cause of a large number of the performance complaints we're seeing with LR. Image processing is a resource-intensive task; it only follows that the resources perform best when working together at their best. With the great number of variations we have in hardware and even software, it becomes all the more difficult for any developer to automatically accommodate them all.

Besides, tracking down problems like this is time consuming. It might not be worth it to many for what they see as small differences.

2.  Everything you listed could be at issue. As you mention, "once the motion is smooth." This assumes there are enough CPU/RAM or other non-video-card resources to make this possible, which means they'll probably need to be working at an optimum level to do so. If the bottleneck is something other than the video card, then the video card won't make any difference until that bottleneck is eliminated. You'd expect that. In this case, that the sliders are not working smoothly suggests to me it's something other than a video card issue. In my limited experience the video card affects speed and delay, not the lack of smoothness described as "jerky." And these speed/delay differences are subtle, as I've always said, so the computer will need to be working at its best to see them.

3.  This is where his more powerful system, once the bottleneck is dealt with, might make the benefits from the video card that I'm seeing not noticeable on his system.

Still, the term "no meaningful improvement" suggests there was improvement. Even if all the video card is doing is rendering screens, which is probably the case in LR, a faster card will render the screen faster. How much faster is where what's "meaningful" for one person might not be meaningful at all for another. I've had clients who tell me they're pleased as punch with a 1 Mb internet connection; they surf the net and to them it's fast enough. Show them a 100 Mb connection and you get two common responses: (a) "Wow, I never realized there could be so much difference; all the things I've always wanted to do but couldn't are now possible" (streaming video, etc.), or (b) "It makes no difference to me," which tells me their workflow and personal habits are such that 1 Mb maxes them out. LR is much this way, and to a smaller extent so is the speed at which a screen renders, though his jump in performance from a Firepro v4900 to a 6900-series Radeon would be more akin to a 1 Mb to 5 Mb internet speed difference, so even if he could see a difference it would be less likely to be meaningful. To him.

With all that said, I do think the issues with his sliders could be completely eliminated with his existing hardware. Far too many people have no such issues with lesser systems, including myself. I'd find the cause of this first, and if he hasn't returned the video card yet, then try it.
Sheldon N
« Reply #91 on: May 26, 2012, 03:00:19 PM »

Steve - By no meaningful improvement, I mean no measurable difference using a stopwatch. In other words, it made no difference whatsoever. As I suspected, when a GPU monitor shows zero percent usage of the GPU, a faster graphics card isn't going to help.

I think the main issue is still screen size. In the Develop tab my primary screen is driving 70% more pixels in the rendered image area; that's what accounts for any differences.
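The pixel-count argument is easy to sanity-check with back-of-envelope arithmetic. The resolutions below are my assumptions (a 2560x1440 primary such as a U2711 and a 1920x1200 secondary; the thread doesn't state the second screen's mode, and the actual ratio depends on the rendered image areas, not the full screens):

```python
# Hypothetical full-screen pixel counts for the two monitors.
primary = 2560 * 1440      # 3,686,400 px
secondary = 1920 * 1200    # 2,304,000 px

extra = primary / secondary - 1
print(f"primary drives {extra:.0%} more pixels")  # 60% with these assumed modes
```

With a 1920x1080 secondary instead, the same arithmetic gives roughly 78% more pixels, so a figure around 70% is entirely plausible once panel sizes and crop areas are accounted for.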

Sheldon N
« Reply #92 on: May 26, 2012, 03:07:44 PM »

Quote from: Steve Weldon (Reply #90): With all that said.. I do think the issues with his sliders could be completely eliminated with his existing hardware [...] if he hasn't returned the video card yet.. then try it.

To further clarify, I'm not having issues with the sliders. What I was describing is that if I grab the slider and yank it as fast as I can from -4 exposure compensation to +4, I can start to see individual screen redraws as the image moves from underexposed to overexposed. I'm not getting a perfect 30 fps "video fade transition" as it moves through the process, but instead a rapid flicker of redraws. I wouldn't expect any computer to be able to render the RAW data that fast at this resolution.

In normal usage, the sliders work just fine.

John Cothron
« Reply #93 on: May 26, 2012, 06:49:59 PM »

Quote from: Sheldon N (Reply #92): To further clarify, I'm not having issues with the sliders. [...] In normal usage, the sliders work just fine.

Interesting, I read your post and thought I'd try that with the exposure slider. To give a bit of background, I've never actually measured the rendering time on my system, as I have no complaints with the performance. I opened an image that I hadn't previously had in the Develop module, although upon import I do have a base set of adjustments applied (including sharpening and noise reduction). Loading the image in Develop took something less than, but very close to, a full second, with the second screen actually showing the image instantly. Once loaded, I "yanked" the exposure slider from -5.0 to 5.0 several times. Too quick to measure the time of the "yank", but probably 0.10 seconds or less? The effect on the screen was almost as immediate, maybe another 0.10 seconds at most. My second screen was slightly behind the first, but the whole process with both screens is far less than a second and very smooth.

Okay, for my system the largest time bottleneck seems to be moving from image to image, and more so if the image has not previously been loaded in the Develop module. It's not a large one, but it's there. So what hardware resource is being used for that particular function? I opened the resource monitor to take a look while moving around. The only truly significant action I see is processor usage: it jumps from a baseline of approximately 4% to somewhere between 45-55% as I move from one image to another. You also see a bit of disk read, but frankly it is just a blip and never registers as a percentage in the monitor. Same with memory.

This leads me to believe what has been written multiple times: that Lr is very processor intensive, which makes a tremendous amount of sense to me and always has. We know that the Develop module stores a basic preview of an image in the cache after the first use, with basically nothing but the de-mosaicing performed. All other adjustments applied to the image have to be rendered each time the image is accessed. That takes processing power. The image appears on screen instantly, but the "loading" message hangs around till the processor has completed this rendering task.
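The caching behaviour described above can be illustrated with a toy model: the expensive base render (a stand-in for de-mosaicing) is cached per image, while the adjustment pass runs on every visit. The function names and sleep costs are invented for illustration, not anything from Lightroom:

```python
import time
from functools import lru_cache

@lru_cache(maxsize=None)
def base_render(image_id):
    """Stand-in for the slow de-mosaic step; cached after the first call."""
    time.sleep(0.05)
    return f"base-{image_id}"

def develop_view(image_id, adjustments):
    """Re-applies the adjustment stack on every visit, on top of the cached base."""
    base = base_render(image_id)
    return base + "+" + "+".join(adjustments)

t0 = time.perf_counter()
develop_view("IMG_0001", ["exposure", "sharpen"])
first = time.perf_counter() - t0

t0 = time.perf_counter()
develop_view("IMG_0001", ["exposure", "sharpen"])
second = time.perf_counter() - t0
print(first > second)  # the first visit pays the de-mosaic cost
```

This matches the observation that the first load of an image in Develop is the slow step, while revisits only pay for re-applying the adjustments.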

For me personally, it would appear that the only way I would see faster performance is by increasing processor performance. If I did so, I might eventually see a bottleneck elsewhere, but from the basic monitoring I've done thus far I have a pretty big window before that happens.


What about the people that are having significant performance issues now? Especially those reportedly running more powerful processors than mine (an i7-930 overclocked to 4 GHz)? Most of the complaints I've read have been about rendering time and/or smooth operation of adjustment sliders. For the most part the complaints are from those that were running Lr3 just fine. We know there were significant changes to the process versions, some of them very complicated, between the versions of Lr. If we believe these differences and issues to be processor-path related, then perhaps processor/memory transfer speed, or memory speed itself, comes into play? Although I haven't done the math, that seems unlikely to me, but who knows. FWIW I'm running 12 GB of DDR3 1300 MHz memory overclocked to ~1550 MHz.

Since my disk system is all based on 7200 rpm drives, I'm having a hard time thinking that these issues have anything to do with disk access. I'm set up with the operating system (Win 7 64), the Lr software, the catalog, and the previews on one disk. The Lr RAW cache is on another single disk. The images themselves are on a 4-disk array in RAID 10.

I see the same basic performance whether I'm working on 12 MP 5D files, 21 MP 5DII files, or scanned TIFFs over 200 MB in size. Someone pointed out to me that my huge film scans aren't the same thing as a RAW file of the same size, primarily due to the de-mosaicing needed with the digital capture. I understand how that can make a difference on the initial rendering in Develop, but once the cache preview is generated (which includes de-mosaicing for the digital file), the differences between the two should be minimal, I would think. All other Lr adjustments have to be performed each time for each type of image, to my knowledge.


Also, I ran the above tests on my system as I normally use it, meaning I had a browser open with 7 or so tabs, plus TOPO software, Outlook, Excel, and Quicken financial software running (but idle) at the same time.

dreed
« Reply #94 on: May 27, 2012, 12:04:28 PM »

Quote from: John Cothron (Reply #93):
...
For me personally, it would appear that the only way I would see faster performance is by increasing processor performance.  If I did so, I might eventually see a bottleneck elsewhere but from the basic monitoring I've done thus far I have a pretty big window before that happens.
...
FWIW I'm running 12 GB DDR3 1300 MHz memory overclocked to ~1550 MHz.

To get faster you'll probably need to wait until DDR4 systems are available.
dreed
« Reply #95 on: May 27, 2012, 12:38:44 PM »

Quote from: Steve Weldon (Reply #90): 1. Perhaps not for some, but I think setting up a machine to its optimum level is something few understand the inner workings of a computer well enough to do well. I suspect this is the cause of a large number of the performance complaints we're seeing with LR.

If that is the case then Adobe are at fault for not delivering a package that works as well as expected for people using "out of the box" computers from Dell, HP, Apple, etc. Similarly, fault would lie with hardware manufacturers for not delivering systems optimally tuned. If hardware vendors delivered systems to the public in such a poorly configured state, then you can be sure that websites would review them badly and, slowly but surely, reputation and sales would suffer.

Whilst you may enjoy tweaking and measuring the performance of everything on your system, the average person isn't going to, and nor should they need to. Nor should they need to engage someone to help them tune their system; if they want to, fine, but it shouldn't be necessary. It should just work out of the box, and work well.

Thus I would hope that the testing Adobe does uses "off the shelf" computers from various companies and evaluates the performance of their applications in that way, rather than using custom builds that do not reflect what the average consumer uses.

Now it may be that in their testing, Adobe have made various assumptions in their application environment that actually match up very well with what people here are doing or that some settings (such as the automatic update of XMP files) are set the wrong way. I don't know, so I'm just guessing. Similarly, there may be changes in the application that for one reason or another slow it down in ways that they didn't expect.

But what is very clear from various messages in this thread is that Adobe have some homework to do because they've created performance issues that faster hardware is only going to hide for a short period of time.
John R Smith
« Reply #96 on: May 27, 2012, 03:16:21 PM »

There do seem to be a lot of short memories around here. LR 3.0 was an absolute dog when it was first released. It was slow, it crashed, it did all sorts of horrible things on my very basic Win 7 laptop. Each dot release got a bit better, and now version 3.6 is running on the same machine as sweet as a nut with my big 3FR Hasselblad files. The same thing will happen with LR 4 in due course, mark my words. Don't sweat upgrading anything; just hang on for 4.5.  Wink

John

Hasselblad 500 C/M, SWC and CFV-39 DB
and a case full of (very old) lenses and other bits
Keith Reeder
« Reply #97 on: May 27, 2012, 04:06:00 PM »

I was thinking the same thing, John - Groundhog Day.

Keith Reeder
Blyth, NE England
Preeb
« Reply #98 on: May 27, 2012, 11:14:17 PM »

As a counterpoint to so many here, LR4 (4.1RC; I heard some things I didn't like about RC2, so I didn't update) has been quite good since I installed the first release candidate. I even tried that exposure slider trick and the changes were immediate. It does take about 3 seconds for a fresh .CR2 file to open in the Develop module, but for me that isn't an issue. I did have some trouble with the Detail sliders until I learned the trick of closing the filmstrip while using them. Now the response there is quick too. I am quite happy with the way LR 4.1 runs on this machine.

Specs:

Sony 17" laptop with 22" Samsung LED external monitor connected by HDMI
i7 quad-core Q740 @ 1.73 GHz
Win 7 64-bit
6 GB RAM
1 GB nVidia GT425M
640 GB HDD with 400 GB free, plus a 2 TB external

Nothing special, no $4000 hot rod. Just a decent laptop with enough cojones to get the editing done that I need.
« Last Edit: May 27, 2012, 11:16:10 PM by Preeb »
Rhossydd
« Reply #99 on: May 29, 2012, 01:30:53 AM »

Quote from: John R Smith: LR 3.0 was an absolute dog when it was first released.
Not here. It ran smoothly straight from release.
I never had any serious problems with previous versions since beta 1.