Author Topic: Lightroom Performance on a Quad-Core  (Read 13328 times)
photobadger
Newbie

Posts: 19


« on: October 27, 2008, 09:29:50 PM »

I recently built a new computer primarily for photography. Deciding on a processor was an interesting experience as I found little to help with the evaluation. I ended up going with a quad-core, and have written up a note on what I found about performance. Bottom line is that I'm happy with the choice. You can read the report here:

http://aphotogeek.wordpress.com/2008/10/04...htroom-on-quad/

NikosR
Sr. Member

Posts: 622


« Reply #1 on: October 28, 2008, 01:52:53 PM »

Quote from: photobadger
I recently built a new computer primarily for photography. [...]

Just a comment on your findings. You think you are driving your 4 cores well but your screenshots say otherwise. As long as your CPU utilization is quite less than 25% you could do with just one core. Less than 50% with 2. The fact that all 4 cores are shown as being used somewhat does not mean a thing. A single task, when interrupted by I/O, can end up being dispatched next on another processor.

My own findings on Windows XP (32-bit) show that LR can on average fairly well utilize a 2 processor system but not a 4 processor one.
« Last Edit: October 28, 2008, 01:55:12 PM by NikosR »

Nikos
James DeMoss
Newbie

Posts: 43


« Reply #2 on: October 30, 2008, 04:48:32 AM »

Quote from: photobadger
I recently built a new computer primarily for photography. [...]


I too run a quad-core system, Vista 32-bit. One thing I believe you may really want to consider is the very best graphics card and a disk RAID system. Since the final release of LR 2.1 the sense of speed is subjective; however, I find that LR is much more fluid and plays well with Photoshop. The size of the on-board cache on the CPU seems to matter more for performance than the amount of installed RAM.

Just my 2 jelly beans

_ James
photobadger
Newbie

Posts: 19


« Reply #3 on: November 02, 2008, 11:21:51 AM »

Quote from: NikosR
Just a comment on your findings. You think you are driving your 4 cores well but your screenshots say otherwise. As long as your CPU utilization is quite less than 25% you could do with just one core. Less than 50% with 2. The fact that all 4 cores are shown as being used somewhat does not mean a thing. A single task, when interrupted by I/O, can end up being dispatched next on another processor.

My own findings on Windows XP (32-bit) show that LR can on average fairly well utilize a 2 processor system but not a 4 processor one.

I agree that there is little benefit to having 4 cores if the utilization on each core is "quite less than 25%." I don't see that as the case for the screenshots in the article.

Looking at them one by one:

1) Ingesting images: Minimum of 40% on all cores, with an initial peak of 100% on all cores. Assuming the same clock speed, a plus vs. dual core.
2) Rendering previews: Average of 60% on all cores. Assuming the same clock speed, a plus vs. dual core.
3) Develop module: Occasional peaks of 40% on all cores, but given how little of the time this represents, I'd agree it is not particularly useful.
4) Auto Mask brush: 3 cores at 40% and one at 20%. Assuming the same clock speed, likely a wash vs. dual core.
5) Printing 8 pictures: All four cores driven at about 70%. Assuming the same clock speed, a plus vs. dual core.
6) Printing 1 picture: Two peaks where all 4 cores are driven, the first at 80% each, the second at 40% each. Given that these peaks represent a small part of the overall time, this is a wash vs. a dual core at the same clock speed.

In the end, it looks to me like for any multi-picture processing the quad-core is going to be faster than a dual-core. For single-picture activities, any benefit will be minimal and likely outweighed by the fact that you can get more clock speed per dollar in the dual-core line.

For me that still means a significant advantage for the quad-core with Lightroom. After all, if I were only doing single-picture activities, I'd use Photoshop. The whole reason I find Lightroom useful is that it lets me run through a lot of shots quickly and process them efficiently.
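To make the clock-per-dollar tradeoff concrete, here is a quick back-of-the-envelope in Python. The clock speeds and utilization figures are made-up illustrations, not benchmarks:

```python
# Crude 'useful work' proxy: cores x clock x fraction of time busy.
# All numbers below are hypothetical, chosen only to illustrate the tradeoff.
def throughput(cores, clock_ghz, utilization):
    return cores * clock_ghz * utilization

quad = throughput(4, 2.4, 0.60)   # quad-core during preview rendering
dual = throughput(2, 3.0, 0.90)   # faster-clocked dual-core, nearly saturated
print(round(quad, 2), round(dual, 2))  # 5.76 5.4 -- the quad edges ahead
```

On this crude model the quad wins whenever its per-core utilization stays high enough to offset the dual's clock advantage, which matches the multi-picture results above.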

YMMV,

Joe
« Last Edit: November 02, 2008, 11:22:15 AM by photobadger »
NikosR
Sr. Member

Posts: 622


« Reply #4 on: November 02, 2008, 11:52:58 PM »

Quote from: photobadger
I agree that there is little benefit to having 4 cores if the utilization on each core is "quite less than 25%." I don't see that as the case for the screenshots in the article.

Joe

Still can't see what you're seeing in the screenshots provided. The per-core CPU usage history graph is pretty useless for telling you the CPU utilisation at any moment. The only meaningful number is the CPU Usage figure, and that is for the total system, not for each core.
« Last Edit: November 02, 2008, 11:57:11 PM by NikosR »

Nikos
photobadger
Newbie

Posts: 19


« Reply #5 on: November 03, 2008, 08:10:03 PM »

Quote from: NikosR
Still can't see what you're seeing in the screenshots provided. The per-core CPU usage history graph is pretty useless for telling you the CPU utilisation at any moment. The only meaningful number is the CPU Usage figure, and that is for the total system, not for each core.


Agree that the CPU Usage number is for the total system. As such, it is mathematically trivial to go from this number to a figure of merit for cores used -- you just multiply it by the number of cores in the system. It is also 'instantaneous', representing the average of the endpoints of the 4 individual graphs, so it is only useful if you are staring at it the entire time. It cannot be any more useful than the 4 individual graphs, since it is just the mathematical average of the endpoints of those same graphs, and those endpoints are being traced over time.

So yes, I'm deriving my conclusions from looking at the graphs, which have gridlines at 20% utilization increments.
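The multiplication step can be sketched in a couple of lines of Python (assuming, as above, that the total CPU Usage figure is the mean of the per-core figures):

```python
def cores_busy(total_usage_pct, n_cores):
    # Total CPU Usage is the mean of the per-core figures, so scaling
    # by the core count gives an equivalent number of fully-busy cores.
    return total_usage_pct / 100 * n_cores

print(cores_busy(60, 4))  # 2.4 -- 60% total on a quad is ~2.4 cores' worth of work
print(cores_busy(25, 4))  # 1.0 -- 25% total is only one core's worth
```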
NikosR
Sr. Member

Posts: 622


« Reply #6 on: November 04, 2008, 12:49:52 AM »

Quote from: photobadger
Agree that the CPU Usage number is for the total system. As such, it is mathematically trivial to go from this number to a figure of merit for cores used -- you just multiply it by the number of cores in the system. It is also 'instantaneous', representing the average of the endpoints of the 4 individual graphs, so it is only useful if you are staring at it the entire time. It cannot be any more useful than the 4 individual graphs, since it is just the mathematical average of the endpoints of those same graphs, and those endpoints are being traced over time.

So yes, I'm deriving my conclusions from looking at the graphs, which have gridlines at 20% utilization increments.

I'm not sure what you're talking about when you say 'to a figure of merit for cores used -- you just multiply...'. If CPU usage for 4 cores is 25%, for example, then for 2 it would be 50%, and for 1 it would have been 100%.

Looking at the graph does not tell you much, because you can't extrapolate the total average CPU utilisation for the system over any time period. You could do that if the lines were relatively flat for a significant time interval, which is not the way SOME of these curves look, because the graph is not detailed enough on the x-axis. You have to look at the graph for the total system (I think there is an option to do that) to get any meaningful picture of total average CPU utilization. If you did that and could deduce that total average CPU utilization exceeded 50% for any significant period (i.e. with 2 cores it would have reached 100% over the same period), then you could say that the 4 cores make a difference.
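The 50% rule of thumb here is easy to check in code; this is just the scaling argument from the paragraph above, with illustrative numbers:

```python
def would_saturate(total_util_pct, cores_now, cores_fewer):
    # Scale the measured total utilization to a smaller core count;
    # 100% or more means the smaller machine would be pegged.
    return total_util_pct * cores_now / cores_fewer >= 100

print(would_saturate(60, 4, 2))  # True: 60% on 4 cores is 120% of 2 cores
print(would_saturate(40, 4, 2))  # False: 40% on 4 cores is only 80% of 2 cores
```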
« Last Edit: November 04, 2008, 12:52:28 AM by NikosR »

Nikos
photobadger
Newbie

Posts: 19


« Reply #7 on: November 05, 2008, 12:16:34 PM »

Quote from: NikosR
I'm not sure what you're talking about when you say 'to a figure of merit for cores used -- you just multiply...'. If CPU usage for 4 cores is 25%, for example, then for 2 it would be 50%, and for 1 it would have been 100%.

Looking at the graph does not tell you much, because you can't extrapolate the total average CPU utilisation for the system over any time period. You could do that if the lines were relatively flat for a significant time interval, which is not the way SOME of these curves look, because the graph is not detailed enough on the x-axis. You have to look at the graph for the total system (I think there is an option to do that) to get any meaningful picture of total average CPU utilization. If you did that and could deduce that total average CPU utilization exceeded 50% for any significant period (i.e. with 2 cores it would have reached 100% over the same period), then you could say that the 4 cores make a difference.

I watched those charts you say you can't tell anything from as they ran; utilization was running between 60 and 80 percent while doing the multi-picture processing.

Though I appreciate the comments, given that we now find ourselves staring at the same color square with me saying it's green and you saying it's blue, I can't see much point continuing the dialog.


NikosR
Sr. Member

Posts: 622


« Reply #8 on: November 05, 2008, 12:41:23 PM »

Quote from: photobadger
I watched those charts you say you can't tell anything from as they ran; utilization was running between 60 and 80 percent while doing the multi-picture processing.

Though I appreciate the comments, given that we now find ourselves staring at the same color square with me saying it's green and you saying it's blue, I can't see much point continuing the dialog.


You have to appreciate the fact that I have been commenting on what you have shown and not on what you may have seen.

Nikos
BruceHouston
Sr. Member

Posts: 308



« Reply #9 on: November 05, 2008, 01:02:48 PM »

I agree completely with badger.

Processing of a pipeline screeches to a halt for an instant when the corresponding processor peaks momentarily at 100%.  Even the graph, which is the finer-granularity measurement, may not show every occurrence of such instantaneous activity due to the coarseness of the sampling intervals.  A processor needs to average WELL below 50% to ensure that these very short interruptions do not happen.

I am soon building a dual-quad system to address these very issues.

Bruce
NikosR
Sr. Member

Posts: 622


« Reply #10 on: November 05, 2008, 02:07:18 PM »

Quote from: BruceHouston
I agree completely with badger.

Processing of a pipeline screeches to a halt for an instant when the corresponding processor peaks momentarily at 100%.  Even the graph, which is the finer-granularity measurement, may not show every occurrence of such instantaneous activity due to the coarseness of the sampling intervals.  A processor needs to average WELL below 50% to ensure that these very short interruptions do not happen.

I am soon building a dual-quad system to address these very issues.

Bruce

While I only see a remote relevance of your comment to what we are discussing (which is whether the graphs presented show good utilization of the processor resources or not) let's discuss what you are saying.

I agree with your second sentence. However, I cannot see how processing screeches to a halt when the processor peaks to 100%. Would you care to elaborate? To my knowledge, when a processor peaks to 100% it just shows that the workload is CPU-bound and could use more CPU resources if they were available; the task that is being executed keeps happily executing (until it is interrupted for I/O or for a system service task).

A system asked to serve a batch type of workload (i.e. not being asked to respond swiftly to user input) should utilize close to 100% of the CPU resources if we are to say that processor resources are not wasted (less than 100% to allow for instantaneous peaks as you correctly mention but certainly much higher than 50%, maybe at least 90% depending on the measurement interval the utilization measurement is averaged on).

 A system asked to also serve an interactive workload (contrary to a purely batch workload) certainly needs to be utilized well below 100% at any time to ensure responsiveness to user input  (how much below 100% depends on the OS capabilities for efficient task preemption and honouring task priorities) but this is not what you're saying. Or is it?

This used to be Operating Systems 101 in my days decades ago, but since these are fairly basic concepts I don't expect that too much has changed since the days I was professionally involved in such things. I will be happy to be educated further though.

Still, I can't see the relevance of all this to the original discussion.

PS. A further thing you might find of interest while planning an 8-processor system. Imagine a single-processor system of processor 'speed' x and adequate memory resources of infinitely high access speed, and a simplified theoretical workload where a single task is being executed (CPU-bound, no need for I/O, no system tasks). This processor will execute the task at 100% utilization until it finishes; the time taken will be y. If you increase the speed of the processor by a factor a, so its 'speed' is a*x, the processor will again be 100% utilized while executing our sample task; the difference is that the time taken will be y/a. This is true for any value of a. High CPU utilization is generally a good thing.
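The PS condenses into a toy model; the 100 work units and the speed factors below are arbitrary illustrations:

```python
def elapsed(work_units, speed):
    # A purely CPU-bound task with no I/O runs at 100% utilization
    # regardless of clock speed; only the elapsed time changes.
    return work_units / speed

print(elapsed(100, 1.0))  # 100.0 -- time y at 'speed' x
print(elapsed(100, 2.0))  # 50.0  -- doubling the speed (a = 2) gives y/a
```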
« Last Edit: November 05, 2008, 02:57:08 PM by NikosR »

Nikos
BruceHouston
Sr. Member

Posts: 308



« Reply #11 on: November 14, 2008, 10:37:48 PM »

Quote from: NikosR
While I only see a remote relevance of your comment to what we are discussing (which is whether the graphs presented show good utilization of the processor resources or not) let's discuss what you are saying. [...]

Sorry for the delayed reply, Nikos; I have been away from the forums lately.

"A system asked to also serve an interactive workload (contrary to a purely batch workload) certainly needs to be utilized well below 100% at any time to ensure responsiveness to user input  (how much below 100% depends on the OS capabilities for efficient task preemption and honouring task priorities) but this is not what you're saying. Or is it?"

That is what I was trying to say, but expressed it poorly.  Thanks for pointing out the technical inaccuracy.

What I was thinking (and hoping that the forum reader was also a mind-reader, I suppose) is that the more processors available, the less likely a multi-threaded application will appear to the user to momentarily stall (and the quicker a batch process will finish). That is because the thread queues will be shorter, on average, than with fewer processors, and shorter thread queues mean the process moves through the overall system pipeline in less time. System responsiveness may take a hit when a gazillion threads are queued up at fewer processors. With modern systems running multiple applications simultaneously, it is often the case that the user interacts with the system while executing a batch process such as stitching. It is a royal pain when the system APPEARS TO THE USER to have come to a screeching halt.

The number of processors will continue to multiply (probably geometrically) to meet the demands of evolving applications like image processing, speech and pattern recognition, etc.  With the possible exception of the early IBM mainframes, hardware processing power and other hardware resources have generally lagged behind the burgeoning application software and OS monsters, in my experience since the early '70s.

It is hard to imagine having too much processing power if the buyer of a new system wishes to build in any future-proofing at all. I would go with a 16-core system if I could; however, 8-core Xeon-based motherboards appear to be the current limit.

Best,
Bruce


NikosR
Sr. Member

Posts: 622


« Reply #12 on: November 24, 2008, 10:54:50 AM »

You seem to be thinking in terms of 'more CPUs = more processing power', while this is highly dependent on the ability of the OS and application to support parallelism. In the worst-case scenario of running a single single-threaded application, the benefit you will get from having even a second CPU will be marginal.

Since you seem to come from an IBM mainframe background, you must be able to understand how the law of diminishing returns (the so-called multiprocessor effect) applies to SMP systems, even ones that are much more sophisticated in their multitasking efficiency and whose workloads lend themselves to parallelism far better than a PC running LR.
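The multiprocessor effect is usually summarized by Amdahl's law. A short sketch, where the 0.7 parallel fraction is purely illustrative rather than a measured figure for LR:

```python
def amdahl_speedup(parallel_fraction, n_cores):
    # Amdahl's law: the serial fraction of the work caps overall speedup,
    # so each doubling of cores buys less than the one before.
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cores)

for n in (1, 2, 4, 8, 16):
    print(n, round(amdahl_speedup(0.7, n), 2))
# With 70% parallel work, speedup can never exceed 1/0.3 (about 3.3x),
# however many cores you add.
```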

Coming back to the subject, my own findings with LR (and PS) are that they can decently utilize a 2-CPU (core) system but not a 4-core one (not to mention 8 or 16 cores...). That is on Windows; I don't know about the Mac. DxO Pro (during its output processing) is one photo-related application I know of that seems to utilise 4 cores fairly well (I have not tried it with more), although it is a hog to start with...

If I were in your shoes I would look to spend my money on the fastest dual-core or quad-core I could buy and put any budget left into memory and a fast disk system. Otherwise you will end up hugely underutilising the CPU resources you have paid for. Unless of course you are planning on actively using more than one processor-intensive application concurrently.

Nikos
Per Zangenberg
Newbie

Posts: 46


« Reply #13 on: May 03, 2010, 08:28:04 AM »

I just jumped from a Core 2 Duo E8400 to a quad-core Q9650. I made a video about the impact on Lightroom performance when running both at the same speed (3.8GHz).

When rendering previews the quad core is ~35% faster, and 40% faster if you run two threads. It also seems faster at loading an image in Develop. Loading is equally fast whether 1:1 previews have been rendered or not.

http://vimeo.com/11378426
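For what it's worth, that 2-to-4-core figure can be fed back through Amdahl's law to estimate how much of the preview-rendering work is parallel. The inversion below is my own arithmetic, not anything measured in the video:

```python
def implied_parallel_fraction(speedup, n_from, n_to):
    # Invert Amdahl's law: find the parallel fraction p for which going
    # from n_from to n_to cores yields the observed speedup s, where
    # relative runtime is (1 - p) + p/n.
    s = speedup
    return (s - 1) / (1 / n_from - s / n_to - 1 + s)

print(round(implied_parallel_fraction(1.35, 2, 4), 2))  # 0.68
# i.e. a ~1.35x gain from doubling cores suggests roughly two thirds
# of the rendering work runs in parallel.
```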