Author Topic: Hard Drives / SSD / Redundancy.. Time for a Review...  (Read 2796 times)
Josh-H
« on: June 11, 2012, 11:58:09 PM »

I'm running a late 2008 Mac Pro with 4x 1 TB drives on a Mac Pro RAID card in RAID 0+1, giving me 2 TB of usable space. The machine is fast, with 32 GB of RAM and 8 cores, so no need for an upgrade in that area.

I have a Drobo Pro hooked up via FireWire 800, which receives a SuperDuper! clone of the Mac Pro every night at 1 am as a backup (kind of overkill, but it makes me feel safe at night). I also keep a SuperDuper! backup drive off-site, which gets refreshed every other week or so. Another external 1 TB drive, hooked up via USB 2.0, is used for Time Machine.

I am running out of space in the Mac Pro, as my image library is approaching 1.6 TB (usable space is currently 2 TB).

The plan is to do a SuperDuper! backup of the Mac Pro to the Drobo, shut down the Mac Pro and pull the four 1 TB drives, install four 3 TB drives, boot from the CD, build a new 0+1 array giving me 6 TB of usable space, then restore from the Drobo Pro to get my data back onto the Mac's new RAID array.
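For what it's worth, a sketch of the rebuild step in Terminal terms. Big caveat: the Mac Pro RAID card is actually managed through Apple's RAID Utility (or its boot-CD equivalent), not the command line, so the commands below only illustrate the equivalent 0+1 layout built with Apple *software* RAID via `diskutil appleRAID`. The disk identifiers and set names are hypothetical.

```shell
# Illustrative runbook only: assumes Apple software RAID, NOT the Mac Pro
# RAID card (which is configured through RAID Utility instead).
# Disk identifiers (disk1..disk6) are hypothetical; check yours first:
diskutil list

# Build two 2-drive stripes from the new 3 TB disks...
diskutil appleRAID create stripe StripeA JHFS+ disk1 disk2
diskutil appleRAID create stripe StripeB JHFS+ disk3 disk4

# ...then mirror the two stripe sets for the 0+1 layout (6 TB usable).
# Each stripe set appears as a new virtual disk; again, identifiers vary.
diskutil appleRAID create mirror Media JHFS+ disk5 disk6
```

Either way, labelling the pulled drives (as you plan to) is the cheap insurance here.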

I figure if anything goes wrong I can always carefully label the drives before I pull them and put them back in if required. Plus, of course, the data is backed up to the Drobo, the off-site drive, and the Time Machine drive.

Question: any suggestions on improving the above?

And: is it worth putting in a dedicated SSD for Photoshop scratch? I see OWC makes a cage for two SSDs that fits my Mac Pro's spare optical drive bay. In theory I could use one for scratch and one for the OS and apps, although I'm not convinced it's worth it for apps, since I rarely quit apps and reboot even less often. The machine runs 24/7.

Any thoughts appreciated.

Justan
« Reply #1 on: June 12, 2012, 08:32:13 AM »


This is about the same approach I use when replacing drives in RAID arrays.

Re the use of an SSD for a paging ("scratch" in Photoshop speak) drive: the thing to check first is the size of your current scratch files. While you will get notably faster performance using an SSD for almost anything, with 32 GB of RAM installed you might not have a lot of content that ever makes it to the scratch drive. That said, I'd use the SSD for anything related to temp files.
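One quick way to check that: Photoshop writes its scratch data to files named "Photoshop Temp" plus digits at the top level of each scratch volume, so summing those while Photoshop is working tells you how much actually spills to disk. A sketch (the function name is mine, and the example volume path is hypothetical):

```shell
# Rough check of how much Photoshop scratch data actually hits disk.
# Photoshop writes "Photoshop Temp<digits>" files at the top level of
# each scratch volume; point this at yours while Photoshop is busy.
scratch_usage() {
  du -ch "$1"/Photoshop\ Temp* 2>/dev/null | tail -n 1
}

# e.g. scratch_usage /Volumes/Scratch    (hypothetical volume name)
```

If the total stays near zero during your normal editing, the 32 GB of RAM is absorbing it and a dedicated scratch SSD won't buy much.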

As the price of SSDs drops, spindle drives will become functionally obsolete, eventually to be placed on shelves and never connected to power or data cables again.

phila
« Reply #2 on: June 13, 2012, 03:52:12 AM »

The best thing I did with my 2008 Mac Pro was add an SSD via that OWC optical bay housing. Use it as the boot drive - very speedy!

However with OWC now offering this:

http://eshop.macsales.com/shop/SSD/PCIe/OWC/Mercury_Accelsior/RAID

it would be even better!

Josh-H
« Reply #3 on: June 14, 2012, 05:54:21 PM »

Quote from: phila on June 13, 2012
The best thing I did with my 2008 Mac Pro was add an SSD via that OWC optical bay housing. Use it as the boot drive - very speedy!

However with OWC now offering this:

http://eshop.macsales.com/shop/SSD/PCIe/OWC/Mercury_Accelsior/RAID

it would be even better!

Why is that even better? It's just a really expensive way to add an SSD, isn't it?

tived
« Reply #4 on: June 15, 2012, 04:29:29 AM »

I have added SSDs to several photographers' Mac Pros, usually a 256 GB SSD for boot and a 120/128 GB SSD for scratch, plus two WD RE4 2 TB drives in RAID 1, and those photographers have been very pleased with the improvement in speed it has provided.

It's obviously nothing compared to when I add multiple RAIDs with 8x SSDs (eight-drive RAID 0) across several controllers on a PC, where we are pushing 1.5-2 GB/sec.

I think the OWC solution is really great for the Mac, and one I would take up if I were running a Mac Pro; I definitely see it as an option to improve on the existing setup.

Note that it has been mentioned here over the past several years that the biggest bottleneck, apart from the one sitting between the chair and the keyboard, is the disk system, on both PCs and Macs. The sooner you improve it, the sooner you will see better performance on any Mac or PC.

All the best

Henrik
jonathan.lipkin
« Reply #5 on: June 15, 2012, 03:23:57 PM »

I'm not sure if SuperDuper! verifies the copy, but it's a good idea. A few years ago I transferred files from RAID to RAID and lost about 17,000 files in the copying. I simply dragged and dropped the files, and somehow about 20% of the 100k got corrupted. Luckily I am pretty conscientious about backing up and was able to restore them.

From then on, any copying of a large number of files has been done with ChronoSync with Verify Copy checked.
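For anyone doing the same by hand, the verify step can be approximated in Terminal. A minimal sketch (the function name is mine) using POSIX `cksum`, which behaves the same on a Mac as anywhere else:

```shell
# Compare per-file checksums of a source and destination tree.
# Prints OK when every file's CRC and size match, MISMATCH otherwise.
verify_copy() {
  src_sums=$(cd "$1" && find . -type f -exec cksum {} + | sort)
  dst_sums=$(cd "$2" && find . -type f -exec cksum {} + | sort)
  if [ "$src_sums" = "$dst_sums" ]; then echo "OK"; else echo "MISMATCH"; fi
}
```

It re-reads every byte on both sides, so it is slow on terabytes, but that is exactly what catches the silent corruption a plain drag-and-drop misses.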
tived
« Reply #6 on: June 16, 2012, 09:24:22 PM »

Ouch, Jonathan!

To the non-observant, it would be a bit of a shock to the system to come back later and find that 20% of your files are missing. However, I think this is more the exception than the rule.

When I set up systems for clients, there is a checklist to perform, which includes comparing the size and number of files at the source with the destination, just to make sure that everything has come over. We do Copy and Move; then, once confirmed, the files on the source can be deleted.

I take it this happened on a Mac. Not that it can't happen on Windoze, but I have experienced something like this before, and it turned out the OS had locked some or many of the files without alerting us. I never found a good explanation, other than that from then onwards we would check and correct all permissions first, then do the copy and verify. I should also mention that the files had to be verified visually: once it was established that the number of files and the total size matched, run through the directory structure to see that it matches, and randomly check a file to see if it opens.
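The count-and-size part of that checklist is easy to script. A sketch (the function name is mine; `find`, `wc`, and `du -sk` behave the same on macOS and Linux):

```shell
# Print file count and total size (KB) for two trees side by side,
# so source and destination can be eyeballed before anything is deleted.
compare_trees() {
  for d in "$1" "$2"; do
    printf '%s: %s files, %s KB\n' "$d" \
      "$(find "$d" -type f | wc -l | tr -d ' ')" \
      "$(du -sk "$d" | cut -f1)"
  done
}
```

It is only a sanity check, not a substitute for the checksum verify, but it catches the "forgot half the folders" class of mistake in seconds.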

This may be a lot of mucking around, but it has saved my client several thousand dollars. She relies on staff to check this and do these tasks, and it has happened that someone forgot and something went missing, which is why I got involved ;-)

Another reason for all this: when you finish a job and move it off, making multiple backup copies, you want to be sure it is all OK before investing the time to do so.

Henrik
