Author Topic: New Powerfull Workstation  (Read 16531 times)
Christopher
« Reply #20 on: November 06, 2009, 01:10:23 PM »

Quote from: Justan
I wanted to add some notes to this thread.

Regarding RAM, you can’t have too much with a 64-bit machine. In an ideal world your computer would have enough ram so that it almost never needs to access the hard drive.

Investigate using a RAM disk as a paging file/scratch disk. This is not to be confused with a SSD drive.

Anti-Virus/Anti-Spyware software: Configure these programs so that they exclude the directory structure where PS/LR are installed, where your data is stored, and especially the drive where the paging file/scratch disk(s) are maintained. A/V software checks most every file you load and save. It adds a lot of processing time.

Page file/scratch disk. For best performance put these on their own drive or as mentioned above, a RAM drive. A SSD drive is also a good choice.

I'm not sure, but wasn't there a test a while back which showed that PS can't use a software-created (not physical) RAM disk very well?

John.Murray
« Reply #21 on: November 06, 2009, 04:03:29 PM »

Chris: on X58 chipset boards you do not want to populate all 4 (or 8) slots - this incurs a performance hit. Populate either 3 or 6, and leave the 4th (8th) slot empty.

You also may want to consider going with a platform that supports ECC memory, like the Xeon 5500 or 3500.  Memory errors are an increasing problem:

http://blogs.zdnet.com/storage/?p=653&tag=col1;post-653

I'm personally evaluating an Intel Workstation board with a Xeon 3500 Quad Core CPU - the only downside is that it's limited to 16GB (make that 12) of memory.

http://www.intel.com/products/workstation/...bp-overview.htm

hth - John

John.Murray
« Reply #22 on: November 06, 2009, 04:10:32 PM »

In regard to a RAM disk on a 64-bit platform: maybe someone familiar with PS memory access can correct me, but PS will address *all* of the memory presented to it. Unless there is some artificial boundary built in, it makes no sense to use any memory to create a RAM disk for scratch purposes. Why add the artificial overhead of a filesystem to memory access?

Justan
« Reply #23 on: November 06, 2009, 05:08:57 PM »

> I'm not sure, but wasn't there a test a while back which showed that PS can't use a software-created (not physical) RAM disk very well?

I don’t know of this test.

> …it makes no sense to use any memory to create a RAM disk for scratch purposes. Why add the artificial overhead of a filesystem to memory access?

For CS3 it is worthwhile; for CS4 (x64) it is not. The overhead of a RAM disk is trivial. Here is a specification of how CS3 uses memory:

When you run Photoshop CS3 on a computer with a 64-bit processor (such as an Intel Xeon processor with EM64T, or an AMD Athlon 64 or Opteron processor) running a 64-bit version of the operating system (Windows XP Professional x64 Edition or Windows Vista 64-bit) and with 4 GB or more of RAM, Photoshop will use 3 GB for its image data. You can see the actual amount of RAM Photoshop can use in the Let Photoshop Use number when you set the Let Photoshop Use slider in the Performance preference to 100%. The RAM above the 100% used by Photoshop, which is from approximately 3 GB to 3.7 GB, can be used directly by Photoshop plug-ins (some plug-ins need large chunks of contiguous RAM), filters, or actions. If you have more than 4 GB (to 6 GB), then the RAM above 4 GB is used by the operating system as a cache for the Photoshop scratch disk data. Data that previously was written directly to the hard disk by Photoshop is now cached in this high RAM before being written to the hard disk by the operating system. If you are working with files large enough to take advantage of these extra 2 GB of RAM, the RAM cache can speed performance of Photoshop. Additionally, in Windows Vista 64-bit, processing very large images is much faster if your computer has large amounts of RAM (6-8 GB).

http://kb2.adobe.com/cps/401/kb401088.html
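The carve-up described in that KB note can be sketched numerically. This is just an illustration of the approximate figures quoted above (3 GB image data, ~3-3.7 GB for plug-ins, everything over 4 GB as OS scratch cache), not Adobe code:

```python
def cs3_memory_layout(total_ram_gb):
    """Rough sketch of how 32-bit Photoshop CS3 uses RAM on a 64-bit OS,
    per the approximate Adobe KB figures quoted above."""
    image_data = min(total_ram_gb, 3.0)                       # PS image data, capped near 3 GB
    plugins = round(max(0.0, min(total_ram_gb, 3.7) - 3.0), 1)  # ~3-3.7 GB: plug-ins/filters
    scratch_cache = max(0.0, total_ram_gb - 4.0)              # above 4 GB: OS cache for scratch
    return {"image_data": image_data,
            "plugins": plugins,
            "scratch_cache": scratch_cache}

print(cs3_memory_layout(6))  # {'image_data': 3.0, 'plugins': 0.7, 'scratch_cache': 2.0}
```

With 6 GB installed, roughly 2 GB ends up as the scratch cache the article describes, which is why the extra RAM only helps on files big enough to hit scratch.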


Accordingly, CS4 x64 will use all RAM available to the computer:

http://kb2.adobe.com/cps/404/kb404439.html




Sheldon N
« Reply #24 on: November 06, 2009, 05:12:40 PM »

I think you may be crossing into that 10% bleeding-edge category with your drive choices. It might be faster to run that many SSDs, but the real-world price/performance just isn't worth it IMHO, even on an unlimited budget.

I'd keep the system drive smaller/leaner. It's probably better to run a simple OS/programs setup than trying to build a big SSD RAID 0 array with lots of programs. You might have a faster system with a 3 drive RAID 0 setup, but if you load it up with so many programs then you'll probably run slower than if you ran smaller/leaner. There's not a huge need for fast write speeds or sustained reads, mainly this drive would be doing a lot of small random access seeks for the OS.

I'd recommend a pair of Intel X25M 80GB drives in RAID 0 as your system drive. You shouldn't need more than 160GB for an OS drive. Going to 3 drives for the OS just introduces another layer of possible failure and uses up ports on your motherboard.

For scratch disk, I would stick with conventional HD's. The ones that are the leaders right now are the WD Caviar Black or RE3 drives. Look at the 640GB and 1TB sizes, they have the highest platter densities and will give you the best results. 4 640GB drives in a RAID 0 would give a very nice scratch disk and would cost just $300.

I think you could skip the extra 500GB drive for miscellaneous stuff; you'd have 2 TB of spare storage on the RAID 0 array after partitioning off the scratch space.

The 6-drive RAID 5 or RAID 10 array sounds good for storage. It should also be fast enough not to be a significant bottleneck when loading a file on open. I don't know whether it would be faster to keep working files on the OS drive, on the extra space on the RAID 0 scratch array, or on the storage RAID array. I'd imagine there is a huge IO demand to load a 10 GB file into system memory, but I don't know which one would have the fastest throughput. It would be close between the SSD RAID 0 array and the 4-drive scratch array. Either of them might have conflicting uses: the OS accessing the drive at the same time the file was loading, or the scratch drive trying to read the file and write into scratch if the physical RAM ran out.

Another thing to think about is the speed of your RAID controller. 12 drives in three RAID arrays might be more than a normal controller could handle, this could end up being your bottleneck.
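As a sanity check on the sizing above, RAID 0 capacity and best-case sequential throughput both scale roughly with the drive count, until the controller becomes the bottleneck mentioned here. A back-of-envelope sketch; the 100 MB/s per-drive figure is an assumed placeholder, not a benchmark:

```python
def raid0_estimate(n_drives, drive_gb, drive_mbps):
    """Back-of-envelope RAID 0 figures: capacity and best-case sequential
    throughput scale with the number of drives (a real controller may cap this)."""
    return {"capacity_gb": n_drives * drive_gb,
            "seq_mbps": n_drives * drive_mbps}

# Four 640GB drives, assuming ~100 MB/s sustained each (hypothetical figure):
print(raid0_estimate(4, 640, 100))  # {'capacity_gb': 2560, 'seq_mbps': 400}
```

So the suggested 4 x 640GB scratch array would offer roughly 2.5 TB raw and, optimistically, several hundred MB/s of sequential throughput.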

Christopher
« Reply #25 on: November 07, 2009, 12:07:29 AM »

Thanks for all the input. Yes, I need an extra RAID controller. Most boards have just 6 Intel SATA ports, and some have 4 more on top of that. So I would need at least a SATA RAID card with 4 ports.

 
Quote
Chris: on x58 chipset boards you do not want to populate all 4 (or 8) slots - this incurs a performance hit, populate either 3 or 6, leave the 4th (8th) slot empty.

Do you mind expanding on that? It is the first time I have heard of it, and I read a lot on computer sites. The X58 has a triple-channel RAM interface, so one needs to use at least 3 slots (for example 3 x 1GB). However, to achieve 24GB of RAM one has to fill all 6 slots with 4GB modules. So far I haven't heard any negative comments. The one reason some people have problems filling them is that they are using standard dual-channel memory, which isn't really the best choice for a triple-channel board. (Yes, dual-channel RAM can work, but it doesn't have to.)

Here are my thoughts on why I'd use three larger SSDs instead of smaller ones.

First, it is speed. To get the best read and write speeds one needs to use at least 128GB per SSD. (As I posted above, below that you lose speed, especially on writes.) I really don't worry about a drive failure. I have all my presets and settings stored away and could set up a new system from scratch in around 1-2 hours.

For the scratch disk, I will probably go with HDs; around $1000 for SSDs is just too much for scratch. I can always change that in the future if the price drops a bit. (Still thinking on it.)


In regard to CPUs, the one main thing that puts me off the Xeon route is that the current generation isn't really new and is still really expensive if one wants the same speed.
I'm pretty sure one can't buy a dual-Xeon system at 2.26GHz and expect it to hold up against a 3.6GHz i7 system. It just won't, even if you have twice the cores, especially in Lightroom and PS.
So to get a real advantage from a Xeon workstation, those CPUs have to be around 2.7GHz, and there it gets really expensive.

alain
« Reply #26 on: November 07, 2009, 04:58:25 AM »

Quote from: Sheldon N
I think you may be crossing into that 10% bleeding-edge category with your drive choices. It might be faster to run that many SSDs, but the real-world price/performance just isn't worth it IMHO, even on an unlimited budget.

....

For scratch disk, I would stick with conventional HD's. The ones that are the leaders right now are the WD Caviar Black or RE3 drives. Look at the 640GB and 1TB sizes, they have the highest platter densities and will give you the best results. 4 640GB drives in a RAID 0 would give a very nice scratch disk and would cost just $300.

...

Another thing to think about is the speed of your RAID controller. 12 drives in three RAID arrays might be more than a normal controller could handle, this could end up being your bottleneck.

The Samsung F3 drives have a platter size of 500GB, which for big writes/reads is about 30% faster than 334GB platters.

I would take a good look at good caching RAID controllers. I suspect big gains there.
John.Murray
« Reply #27 on: November 07, 2009, 02:47:35 PM »

Sure!  Populating the 4th memory slot will disable triple-channel (interleaved) memory access. It is only there for backward compatibility - you are correct in populating by 3s.

Here's a link including a block diagram of the x58 chipset:

http://www.intel.com/Assets/PDF/prodbrief/...oduct-brief.pdf
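To put a rough number on what interleaving buys, peak theoretical bandwidth scales with the number of active channels. A minimal sketch; DDR3-1066 (8 bytes per channel per transfer) is an assumed example, and real X58 figures depend on the installed memory:

```python
def peak_mem_bandwidth_gbps(channels, mt_per_s=1066, bus_bytes=8):
    """Peak theoretical DDR3 bandwidth: megatransfers/s x bytes per transfer,
    multiplied by the number of active channels. DDR3-1066 assumed."""
    return channels * mt_per_s * bus_bytes / 1000  # GB/s

print(peak_mem_bandwidth_gbps(3))  # 25.584 - all three channels interleaved
print(peak_mem_bandwidth_gbps(2))  # 17.056 - what losing a channel falls back toward
```

The point of populating by threes is keeping that third channel in play.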







Christopher
« Reply #28 on: November 07, 2009, 07:38:54 PM »

Quote from: John.Murray
Sure!  Populating the 4th memory slot will disable triple memory (interleaved) access.  It is only there for backward support - you are correct in populating by 3's....

Here's a link including a block diagram of the x58 chipset:

http://www.intel.com/Assets/PDF/prodbrief/...oduct-brief.pdf


All clear. My mistake, I was reading your original statement wrong. At the time I had in mind that 8 slots was the total, and that you were saying one could not fill all the slots. However, that wasn't what you said ;-)


alain
« Reply #29 on: November 12, 2009, 05:35:11 AM »

Hi Christopher

For a really fast SSD: OCZ Z-Drive m84 PCI-Express SSD

SSD info page

Sustained write : 600 MB/s

But price is about $1000
Christopher
« Reply #30 on: November 12, 2009, 03:16:48 PM »

Quote from: alain
Hi Christopher

For a really fast SSD :  OCZ Z-Drive m84 PCI-Express SSD

SSD info page

Sustained write : 600 MB/s

But price is about $1000


Thanks for the info. I was thinking about using something like that; however, the only real benefit I can see is that one saves some SATA II ports. When it comes to speed, I think combining 3 x 128GB SSDs gets you there cheaper. I'm still considering this option for my scratch disk. If it sold for around 600-700 I would go for it, but 1000 is a lot ^^

alain
« Reply #31 on: November 12, 2009, 04:09:18 PM »

Quote from: Christopher
Thanks for the info. I was thinking about using something like that; however, the only real benefit I can see is that one saves some SATA II ports. When it comes to speed, I think combining 3 x 128GB SSDs gets you there cheaper. I'm still considering this option for my scratch disk. If it sold for around 600-700 I would go for it, but 1000 is a lot ^^

Well, it's way above my budget. I did compare it with the other OCZ specs; against those you need 6 in RAID-0. I just looked at the AnandTech tests, and there it compares to 3 in RAID-0.
If the motherboard SATA II controller can drive those at full speed, it will be a lot cheaper to use 3x SSD.
Christopher
« Reply #32 on: November 12, 2009, 04:31:39 PM »

Quote from: alain
Well, it's way above my budget. I did compare it with the other OCZ specs; against those you need 6 in RAID-0. I just looked at the AnandTech tests, and there it compares to 3 in RAID-0.
If the motherboard SATA II controller can drive those at full speed, it will be a lot cheaper to use 3x SSD.

Yes; however, there is one point in favor of the PCI card. A good X58 board has 6 SATA II ports (Intel) and 4 additional SATA II ones (third party), so I have ten in total. However, for my setup I need around 13 ports, which means I have to get an extra SATA RAID controller. These start around $200, and a top-end one can cost around $400, which put together with the 3 x SSDs isn't any cheaper than the other solution.

alain
« Reply #33 on: November 13, 2009, 05:15:17 AM »

Quote from: Christopher
Yes; however, there is one point in favor of the PCI card. A good X58 board has 6 SATA II ports (Intel) and 4 additional SATA II ones (third party), so I have ten in total. However, for my setup I need around 13 ports, which means I have to get an extra SATA RAID controller. These start around $200, and a top-end one can cost around $400, which put together with the 3 x SSDs isn't any cheaper than the other solution.

Well, even if there are enough connectors, it's doubtful that the controller will be able to push them all to their limits; not many desktop users run 3x SSD in RAID-0.

On the other hand, some dedicated SATA RAID controllers have 512 MB of memory. That could make the write speed of the storage devices much less important, maybe not for you but for most users. A 400-500MB write buffer with 2x SSD may be comparable to a bufferless setup with 3x SSD.
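That buffer argument can be made concrete with a toy model: a write burst that fits in the controller's cache is absorbed immediately, and only the spill is limited by the array's raw write speed. The MB/s figures below are illustrative assumptions, not measurements of any particular hardware:

```python
def flush_time_s(burst_mb, cache_mb, array_write_mbps):
    """Toy model: the part of a write burst that fits in the controller's
    write-back cache is absorbed at (near) bus speed; only the spill is
    limited by the array's raw write speed. Ignores later cache flushing."""
    spill_mb = max(0, burst_mb - cache_mb)
    return spill_mb / array_write_mbps

# 500MB burst: 2 SSDs (~300 MB/s combined, assumed) behind a 512MB cache,
# versus 3 SSDs (~450 MB/s combined, assumed) with no cache:
print(flush_time_s(500, 512, 300))  # 0.0 - fully absorbed by the cache
print(flush_time_s(500, 0, 450))    # ~1.1s to push onto the bare array
```

Under these assumptions the cached 2-SSD setup hides the burst entirely, which is the sense in which it can rival a faster bufferless array.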
Christopher
« Reply #34 on: November 13, 2009, 05:26:18 AM »

Quote from: alain
Well, even if there are enough connectors, it's doubtful that the controller will be able to push them all to their limits; not many desktop users run 3x SSD in RAID-0.

On the other hand, some dedicated SATA RAID controllers have 512 MB of memory. That could make the write speed of the storage devices much less important, maybe not for you but for most users. A 400-500MB write buffer with 2x SSD may be comparable to a bufferless setup with 3x SSD.

Both internal RAID controllers are connected through the same lines as any PCI port. In total they can handle more than 3GB/s

alain
« Reply #35 on: November 13, 2009, 01:39:14 PM »

Quote from: Christopher
Both internal RAID controllers are connected through the same lines as any PCI port. In total they can handle more than 3GB/s

Christopher

I have my doubts about the controller itself, but I could be completely wrong.

It may be worthwhile to wait for SATA 3 motherboards and the first SATA 3 SSDs. I've seen some info on the first boards; Asus has something special that connects through one of the PCIe x16 lanes.

I expect that the SSDs will quickly reach much better read speeds.

   
Gemmtech
« Reply #36 on: November 16, 2009, 11:17:04 PM »

I have never read so much BS in my entire life. Build the fastest system you can afford, and in 6 months you will be able to have a system 50% faster for 50% less. IOW, you are spending more time researching, trying to gain less than 10%, for what? 2010 Q1 will see newer, faster, and higher-capacity SSDs, which are still unproven technology. Set a budget, stick with it, and upgrade a little every 6 months. Why spend $10,000.00 today when $5,000.00 tomorrow will be twice as fast? Be smart! Hint: You are NOT using any programs that need "State of the Art" computing power.
Josh-H
« Reply #37 on: November 17, 2009, 03:45:05 AM »

Quote
Hint: You are NOT using any programs that need "State of the Art" computing power.

Perhaps not... but he is working with P65+ files, which can easily exceed 2 gig as layered files in PS. P65+ files NEED a fair amount of computational power - end of story.

I use an 8-core Mac Pro with 32 gigs of RAM and fast RAID 10 HDs, and I am processing and working with 1DS MK3 files. I can easily make my 'puter chug with these files. Christopher needs a high-performance machine for the files he is dealing with - period.

Christopher
« Reply #38 on: November 17, 2009, 07:19:59 AM »

Well, that is the main reason why I am weighing different solutions. I'm not planning on buying the most expensive stuff out there; I will get the things which will actually boost my performance. For example, going with an i7 system I would get the i7 920/930, which is quite a low-end model but can easily be clocked to the same speed as the high-end models. And your argument is just old: with electronics you can always wait and get something faster, but I need something which comes close to the 100% that is possible now, not in 6 months. (Besides, your argument is flawed: you just can't get something 6 months later for half the money that is twice as fast - only if you bought ONLY top-end products, which would be stupid.)

Some might be able to work with larger files on a mediocre system and get coffee between each mouse click, but I certainly can't.

Gemmtech
« Reply #39 on: November 17, 2009, 08:41:55 AM »

Quote from: Josh-H
Perhaps not... but he is working with P65+ files, which can easily exceed 2 gig as layered files in PS. P65+ files NEED a fair amount of computational power - end of story.

I use an 8-core Mac Pro with 32 gigs of RAM and fast RAID 10 HDs, and I am processing and working with 1DS MK3 files. I can easily make my 'puter chug with these files. Christopher needs a high-performance machine for the files he is dealing with - period.


Lots of RAM and a couple of fast HDs does NOT equal state of the art. Your 1st problem is you are using a Mac, so you are overpaying (by a factor of 2) for a machine and receiving no benefits. If this were a discussion about 3D animation or even 3D CAD programs, then we could talk, but static images in Photoshop might take some RAM, and that's about it. It's a total waste of cash to overpay today for negligible returns. What's high performance? Period? Have you ever tested various machines with various configurations to see what works best? I doubt it; if you had, then you wouldn't be using a Mac. If this is a discussion about best bang for the buck, or even highest-performance computing, then a Mac doesn't even enter the discussion.
