Author Topic: optimum hard drive setup for photoshop  (Read 11532 times)
Plekto (Sr. Member, Posts: 551)
« Reply #20 on: April 03, 2009, 03:40:04 PM »
I'd like to weigh in on this.  I've been a computer consultant and technician for almost 20 years.

- What type and speed of drive you have generally isn't the issue.  All are fast enough for this sort of work.  What does matter is reliability, because failures are a major problem.  You can do backups and the like, but you really should be running two drives in RAID1 (redundancy).  That way if one dies, you can either add another drive and rebuild, or run the good remaining drive by itself.  This also gives you two chances to recover data, which is good.  The real failure rate for drives across the industry is roughly 1/500 per day.  That figure includes big server farms as well as individuals, and data corruption issues as well as hardware ones, but 1.5-2 years between major incidents seems common now.

Running RAID1 (or a larger array, RAID 5 or similar) drops the odds that two drives brick at the exact same time to roughly 1/250,000.
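The arithmetic behind those odds can be sketched out. This assumes the 1/500-per-day figure above and, importantly, that the two failures are statistically independent - an assumption questioned later in the thread:

```python
# Odds sketch for the RAID1 claim above. Assumes the post's 1/500-per-day
# failure estimate and that the two drives fail independently (correlated
# failures, discussed later in the thread, make the real odds worse).
from fractions import Fraction

p_single = Fraction(1, 500)      # one drive failing on a given day
p_both = p_single ** 2           # both halves of the mirror on the same day

print(p_both)                    # 1/250000
```

Note that this is the chance of both drives dying on the *same day*; over a multi-day rebuild window the exposure is correspondingly larger.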

- Data recovery generally costs $300-$500 plus media charges of $5 per GB (total drive capacity, not recovered data!).  A 300GB drive can easily run upwards of $1500-$2000 to recover.  A second drive is $60-$80 by comparison.  Even tape backups aren't as inexpensive as a second drive these days.  Smaller is better here.
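Using the post's own figures (the base fee and per-GB rate are quoted ranges, so treat them as illustrative), the recovery bill for a 300GB drive works out like this:

```python
# Recovery-cost estimate from the figures above: a flat diagnostic/labor
# fee plus a media charge on the drive's TOTAL capacity, not the data
# actually recovered. Defaults are the low end of the quoted ranges.
def recovery_cost(capacity_gb, base_fee=300, per_gb=5):
    return base_fee + per_gb * capacity_gb

print(recovery_cost(300))                 # 1800
print(recovery_cost(300, base_fee=500))   # 2000
```

Which is why a $60-$80 second drive in a mirror is the cheaper insurance.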

- You only need to run redundancy on your boot drive.  99% of the time, a large data drive is easily fixable, since its file allocation table isn't getting hammered and the OS doesn't reside on it.  Usually you lose a single directory or have one affected file.  I usually recommend a pair of small drives in RAID 1 for the boot, then a huge data drive.  The boot pair holds the OS, the applications you can reinstall easily (programs folder), and temporary data.

- Raptor drives are a bad choice, as they run hot and fast.  They are made specifically for RAID/server environments, and most people don't know that RAID drives and single-user drives have different specs and different firmware on the controller board.  You can run RAID drives as single drives in a pinch - many people have no problem - but not the other way around.  Single-user drives tend to have problems in a RAID array due to the poor-quality on-board controllers most people use (a good RAID card is typically $300-$500).  The least expensive Raptor also costs almost double what comparable 7200rpm drives do.

- Heat is a killer.  You want a computer case with an intake fan blowing air over the drives.  You want slower drives, within reason, or possibly laptop drives, since they generate far less noise and heat.  The reason Maxtor drives failed, and most external ones do as well, is that they run hot.  If you can't keep your hand on a drive while it's running because it's too hot, it needs to be cooled down.

- If you are running a system where speed or large data files matter, the bottleneck isn't the computer but the swap files.  You then want a ram disk to hold the swap/virtual memory file, or a third small drive just for swapping.  If you are running a 64 bit OS, that's easy - just add another couple of sticks of memory, or an SSD or something similar for the swap space.  Use it, abuse it, and expect it to die every 6-12 months.

That said, ram disks are fastest, followed by SSDs, and then hard drives.  Running a ram disk as a small swap drive is astounding, because ordinary memory is now hundreds of times faster than any drive and is truly random access, so no queueing or spinning up or other issues, either.  But they are pricey, running about $250 for a dedicated box (ACard) or, at the least, $50-$100 more for a typical motherboard with 8 ram slots (which requires a 64 bit OS).  Plus the cost of the ram, of course.  An SSD is almost as good but much cheaper.  Even just moving the swap to its own drive is nearly a 100% speedup in most cases.
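The speed ordering (ram disk, then SSD, then hard drive) comes down to random-access latency. The figures below are ballpark orders of magnitude from general knowledge of era hardware, not measurements from this thread:

```python
# Rough random-access latencies illustrating the ram disk > SSD > HDD
# ordering above. Ballpark, era-typical orders of magnitude only.
LATENCY_NS = {
    "DRAM (ram disk)": 100,          # ~100 ns
    "SSD":             100_000,      # ~0.1 ms
    "7200rpm HDD":     10_000_000,   # ~10 ms (seek + rotational delay)
}

hdd = LATENCY_NS["7200rpm HDD"]
for medium, ns in LATENCY_NS.items():
    print(f"{medium:16} ~{ns:>12,} ns  ({hdd // ns:,}x faster than the HDD)")
```

Swap traffic is dominated by small random reads and writes, which is exactly where those latency gaps bite hardest.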

http://forums.techpowerup.com/showthread.php?p=1192476
The OS really isn't doing much when Photoshop or similar programs are grinding away.  It's either swapping files or running the program and its associated files.  Disk access is minimal otherwise.  As he noted, putting CS3 and the swap on the thing was a huge speed boost.  Because it is backed up, you could install CS3 on it along with the swap file (which doesn't have to be on a second drive).


whew - sorry it is so long...    
My recommendation:
2 Western Digital RE2 160GB drives for just the OS and secondary apps (email and other stuff).  Fast, less heat, two-platter design.  Plenty for most normal installs.  Also, under $1000 to do data recovery on in case of a disaster.
http://www.wdc.com/en/products/products.asp?driveid=403   5 year warranty.

http://www.newegg.com/Product/Product.aspx...N82E16822136200 - 2 of them are only $120.  The 320GB drives are $80 each, but not worth it.  My apps, a few games, and the OS only use 60-80GB anyway.  I wish they made 80GB single-platter models.

1 ANS-9010 ($369) or 9010B ($239) with 16-32GB on it.  Photoshop, the Windows swap file, and Photoshop's scratch space all go on this.  The stuff you want and need to speed up.  16GB using cheap 2GB modules is a good inexpensive way to fill it up.  No need for a 64 bit OS either, which is possibly some cost savings.  You can run a 64 bit OS with it, of course; it's just not required.

http://www.newegg.com/Product/Product.aspx...N82E16820227291 - $90 for 8GB of high-quality RAM.  4GB modules are almost $60 each by comparison.

Optional:
1 large data drive if you need it for longer-term storage.  Anything will do here.  1-2TB drives are cheap enough.
« Last Edit: April 03, 2009, 03:48:19 PM by Plekto »
joedecker (Full Member, Posts: 142)
« Reply #21 on: April 09, 2009, 03:53:42 PM »

Quote from: Plekto
The real failure rate for drives across the industry is roughly 1/500 per day.
 ....
Running RAID1 or a large RAID array(raid 5 or similar) makes this number jump to 1/250,000 that two drives brick at the exact same time.

Excellent post, the only nit I wanna pick is in the quote above.

In my (somewhat more limited) experience, the failures aren't quite independent.  In the RAID 1 mirrors I've seen, both disks often go only days, or at most a few weeks, apart when they were the first, matched pair of disks in the array.  I haven't done the science, but it makes intuitive sense (same manufacturing run, same environment, similar or identical access patterns).

None of this affects your conclusions, I just use it as a reminder to get started replacing/rebuilding a mirror the moment it fails.  

Again, thanks for the great post.

--j

Joe Decker
Rock Slide Photography
http://www.rockslidephoto.com/
DarkPenguin (Guest)
« Reply #22 on: April 09, 2009, 05:03:09 PM »

When you lose a disk in a RAID array the rest of the disks have to work much harder.  They have to service read/write requests (potentially recreating the data from parity) and rebuild a new disk.  If they are already near end of life (as evidenced by the dead disk) that is often enough to push them over the edge.
« Last Edit: April 09, 2009, 05:04:04 PM by DarkPenguin »
Plekto (Sr. Member, Posts: 551)
« Reply #23 on: April 09, 2009, 08:12:40 PM »

Quote from: DarkPenguin
When you lose a disk in a RAID array the rest of the disks have to work much harder.  They have to service read/write requests (potentially recreating the data from parity) and rebuild a new disk.  If they are already near end of life (as evidenced by the dead disk) that is often enough to push them over the edge.

Yes, the common response is to replace both disks ASAP and do nothing at all on the drive(s) while the array is rebuilding.  It shouldn't stress them too much, since RAID1 isn't physical striping of sectors but really just keeping the file tables and data aligned - so rebuilding is more like running a defrag utility.  But replacing the drives is a good idea in any case.

RAID 5 or typical arrays with multiple drives - shoot, they seem to toss a drive every few months in my experience, once the first one goes.  You're absolutely right about this type of cascade effect.

This is also worse with large drives, since any data or sector error while the array is rebuilding can drop a drive and toast the whole array.  This is common during rebuilds, or right afterwards, if the array is using 1+TB drives.  I forget the exact science behind it, but it's something like the odds of a random unrecoverable sector error being on the same order as the amount of data you have to read off a 1TB drive.  Normally the drives deal with bad sectors on the fly, but during a rebuild they have no tolerance or ability to remap bad sectors.
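The half-remembered science here is likely the unrecoverable read error (URE) spec. A sketch, assuming the common consumer datasheet rating of one URE per 10^14 bits read (a figure from drive datasheets of the era, not from this thread) and that a rebuild must read every surviving drive end to end:

```python
import math

# Chance of at least one unrecoverable read error while reading a drive
# (or set of drives) end to end, as a rebuild must. Assumes the common
# consumer spec of 1 URE per 1e14 bits and independent per-bit errors.
def p_rebuild_ure(capacity_tb, ure_rate=1e-14):
    bits = capacity_tb * 1e12 * 8                      # decimal TB -> bits
    return -math.expm1(bits * math.log1p(-ure_rate))   # 1 - (1-rate)**bits

print(f"{p_rebuild_ure(1.0):.1%}")       # one 1TB drive: ~7.7%
print(f"{p_rebuild_ure(3 * 1.0):.1%}")   # three 1TB survivors in RAID 5: ~21%
```

Which matches the observation: once the drives pass 1TB, the odds of a clean rebuild stop being comfortable.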

Small is therefore best for an OS drive.  I have a couple of huge drives as D and F on my system, plus an external one.  The ones meant for long-term storage I'll replace with SSDs in a year or two, since they only fail on writing, not reading.  For long-term or archival storage, SSDs or even CF or a USB drive are perfect this way.

Oh - for reference, my rebuild was due to a power failure that corrupted one of the file tables.  One drive would hang on boot, so I unplugged it and tried the other.  It booted.  (Try both drives - one will almost always boot up.)  Boot into safe mode, log in, and then shut down.  Plug in the new drive (or the corrupted one, if it's not a physical issue) and reboot.  The controller software will see that the "good" drive has a later timestamp on its log files and use that as the master.  Fairly seamless, really.  160GB took just over an hour to rebuild and re-sync - while Windows was running, even.  I went to the grocery store and came back...

But I am saving for that ACard.  Being able to restore from a CF image in a minute or two is amazing, really.
« Last Edit: April 09, 2009, 08:15:45 PM by Plekto »
Jack Flesher (Sr. Member, Posts: 2595)
« Reply #24 on: April 10, 2009, 11:53:01 AM »

Well, different strokes.  Personally, I don't see the economy in using 160GB drives at $50 each when you can buy 640s at $70, and the 640s are significantly (like 30%) faster on I/O, especially once you pass the 50% point of the 160's total space (rim speed).  Even the 320s don't perform as well as the 640s on I/O, as they're still dual platter, and platter density clearly does affect performance.
« Last Edit: April 10, 2009, 11:55:10 AM by Jack Flesher »

Plekto (Sr. Member, Posts: 551)
« Reply #25 on: April 10, 2009, 03:06:45 PM »

That can work, of course, but the 640GB drives use more power and generate more heat as well.  The ones I recommended use the newer perpendicular recording technology, too.  Speed isn't a huge issue in any case, since it's RAID 1 and not 0 or 5 or something else.
GiorgioNiro (Guest)
« Reply #26 on: April 12, 2009, 12:24:33 PM »

Quote from: Plekto
That can work, of course, but the 640GB drives use more power and generate more heat as well.  The ones I recommended use the newer perpendicular recording technology, too.  Speed isn't a huge issue in any case, since it's RAID 1 and not 0 or 5 or something else.

There is a tremendous amount of great information in this thread; as someone said, "different strokes for ..."
Just one more link for the Mac users, http://macperformanceguide.com/index.html .

I would like to thank all that contributed here, especially Plekto as you have reminded me that I need to be more concerned with keeping my data secure at this moment when I too will be configuring a new workstation.

Thanks to everyone.
« Last Edit: April 12, 2009, 12:43:39 PM by GiorgioNiro »
Jens_Langen (Newbie, Posts: 30)
« Reply #27 on: April 14, 2009, 08:29:33 PM »

Question for Jack (or anyone),
How do you configure your scratch partition to use the faster (outer) part of the hard drives?  I have an OWC Mercury Rack Pro with 4 Hitachi Deskstar 500GB hard drives and plan to use it in RAID 0 or RAID 5, connecting with FireWire 800.

Jack Flesher (Sr. Member, Posts: 2595)
« Reply #28 on: April 14, 2009, 08:37:31 PM »

Quote from: Jens_Langen
Question for Jack (or anyone),
How do you configure your scratch partition to use the faster (outer) part of the hard drives?  I have an OWC Mercury Rack Pro with 4 Hitachi Deskstar 500GB hard drives and plan to use it in RAID 0 or RAID 5, connecting with FireWire 800.

The partitions on hard drives are created from the outer rim in, so the first partition includes the outer rim and is the fastest...
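The physics behind that: the platter spins at a fixed RPM, so the linear speed under the head - and with it sequential throughput, since zoned recording keeps the bits-per-mm roughly constant - scales with track radius. A sketch with illustrative (not measured) 3.5-inch platter radii:

```python
import math

# Why the first (outermost) partition is fastest: at constant RPM the
# linear track speed scales with radius. Radii are illustrative guesses
# for a 3.5" platter, not measured values.
RPM = 7200
OUTER_R_MM = 46.0    # outermost data track
INNER_R_MM = 20.0    # innermost data track

def track_speed_m_per_s(radius_mm, rpm=RPM):
    return 2 * math.pi * (radius_mm / 1000.0) * rpm / 60.0

outer = track_speed_m_per_s(OUTER_R_MM)
inner = track_speed_m_per_s(INNER_R_MM)
# ratio is simply 46/20 = 2.30
print(f"outer: {outer:.1f} m/s, inner: {inner:.1f} m/s, ratio {outer/inner:.2f}")
```

So a scratch partition confined to the start of the disk sees more than twice the sequential rate of the innermost tracks.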

Jens_Langen (Newbie, Posts: 30)
« Reply #29 on: April 14, 2009, 08:41:52 PM »

Quote from: Jack Flesher
The partitions on hard drives are created from the outer rim in, so the first partition includes the outer rim and is the fastest...

Thanks Jack...
Jack Flesher (Sr. Member, Posts: 2595)
« Reply #30 on: May 07, 2009, 11:15:32 AM »

Quote from: Plekto
That can work, of course, but the 640GB drives use more power and generate more heat as well.  The ones I recommended use the newer perpendicular recording technology, too.  Speed isn't a huge issue in any case, since it's RAID 1 and not 0 or 5 or something else.

Just saw this, so I'm responding late, but I want to clarify that my purposes are different from yours; as I said, different strokes...

I *want* performance on my boot pair, so I do have them in RAID-0, and the WD 640GB drives have 2x the platter density of the WD 320s -- the 640s and 320s are both dual platter, the 320 using the older platter design, perpendicular or not.  The 160 is a single-platter version of the 320, so I agree the speed difference will be negligible there out to around 50 gigs of data or so.  For *performance* you want to keep things nearer the outer rim, so spec-ing a drive twice as large as your predicted needs will provide greater performance.

Price -- if you actually look at current prices, you will see the 640s are only slightly more expensive than the 320s, on the order of $25 per drive, so even a pair of them should not break anybody's bank.

Heat -- my 6 WD 640s run 24/7/365 at an average of 37 C, which is not hot enough to kick my fans up off their lowest idle speed...

Best,
« Last Edit: May 07, 2009, 11:19:20 AM by Jack Flesher »
