Author Topic: Scratch Disk size and speed  (Read 15878 times)
Jack Flesher
« Reply #20 on: March 11, 2010, 11:01:16 AM »

Quote from: Etienne Cassar
I have just built a new PC with 2 VelociRaptor HDs in RAID 0 for the OS and 4 1TB WD Caviar Blacks which I was planning to set up in RAID 5 for the RAW files.  I chose RAID 5 and not RAID 0 for the added advantage that you can recover data in case of a single drive failure. I always keep a backup of my RAW files, but don't always back up my xmp sidecar or psd/tiff files from Photoshop.
I have partitioned the 4-drive array into 3 separate volumes.  The first 1gb volume I will dedicate to Photoshop scratch, with the remainder divided into 2 equal volumes, one for RAW files and the other for edited files, such as psd or tiff files.
I was wondering if I would gain any benefit if I instead made a RAID 0 volume of 1gb out of the 4-drive array for the scratch disk, and then set up another RAID 5 volume from the remaining space, which I would later partition into 2 drives in Windows 7.
Any ideas please.

Thanks

Etienne

1 GB is not enough for scratch.  Think more in terms of 64 or 128 GB for scratch -- with 4x1TB drives, you have more than enough space to partition off 16GB or 32GB off the top of each drive for scratch.  And yes, I'd RAID-0 that first partition, since there is no concern if a RAID-0 array holding only scratch fails...
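The arithmetic behind that suggestion, as a quick sketch (the drive count and per-drive slice are from the post above; everything else is illustrative):

```python
# Striped scratch sizing: RAID-0 capacity is the sum of the member slices.
drives = 4                 # 4 x 1TB WD Caviar Black, per the post
per_drive_gb = 32          # 16-32 GB carved off the fast outer rim of each
scratch_total_gb = drives * per_drive_gb
print(scratch_total_gb)    # 128 GB of scratch, the upper figure suggested
```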
« Last Edit: March 11, 2010, 11:01:39 AM by Jack Flesher »

wcwest
« Reply #21 on: March 12, 2010, 01:04:15 PM »

I'm finishing up a new build for Lightroom, Photoshop and Premiere. After doing much research, it seems the best (cost/performance) solution for a scratch disk is 2 x 7,200 rpm HDDs in RAID 0. I just purchased 2 WD Caviar Black 160 GB drives at $43 each. I partitioned off 30GB for scratch and will use the rest for photos being processed, then move them off to a single hard disk for archive. If you lose the RAID, no loss. If you insist on going with SSDs, an expert said SLCs would be better suited to the task than MLCs. He explained why, but it was beyond my pay grade.

My final drive configuration will be an Intel X-25 v2 80GB for OS & apps, 2 WD 160GB drives in RAID 0 for scratch and working files, and a WD VelociRaptor 300GB for current projects. Total cost was $580.
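As a rough cost-per-gigabyte sketch of why the striped-HDD scratch wins on price (the HDD figure is from the post; the SSD price is an assumed 2010-era figure, for comparison only):

```python
# Dollars per GB: two ways to buy scratch space, circa 2010.
hdd_cost, hdd_gb = 43.0, 160     # WD Caviar Black, per the post
ssd_cost, ssd_gb = 225.0, 80     # assumed price for an 80 GB MLC SSD
hdd_per_gb = hdd_cost / hdd_gb
ssd_per_gb = ssd_cost / ssd_gb
print(round(hdd_per_gb, 2), round(ssd_per_gb, 2))   # ~0.27 vs ~2.81 $/GB
```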
John.Murray
« Reply #22 on: March 12, 2010, 01:27:14 PM »

Putting your working files and scratch partitions on the same spindle(s) is a mistake.

Jack Flesher
« Reply #23 on: March 12, 2010, 01:30:09 PM »

Quote from: John.Murray
Putting your working files and scratch partitions on the same spindle(s) is a mistake.

True, but really only an issue if you're reading/writing your image file array and hitting your scratch array at the same time.

Farmer
« Reply #24 on: March 12, 2010, 04:18:45 PM »

Quote from: Jack Flesher
True, but really only an issue if you're reading/writing your image file array and hitting your scratch array at the same time.

Like when you save an image you've been working on or load in other images to do a composite or perhaps a stitch or maybe an HDR, or when your OS or AV decides to hit one while you use the other?

It's always a bad idea :-)

Jack Flesher
« Reply #25 on: March 12, 2010, 05:23:21 PM »

Quote from: Farmer
Like when you save an image you've been working on or load in other images to do a composite or perhaps a stitch or maybe an HDR, or when your OS or AV decides to hit one while you use the other?

It's always a bad idea :-)

Uh, once again, you have to be saving or reading one file from your image array WHILE you're hitting scratch because you're in the middle of some operation on some other (probably big) file in CS --- most folks don't do that with regularity, so it isn't ALWAYS a bad idea as you suggest...  Moreover, a 4-drive RAID-0 array is pretty dang fast to begin with, so even if you do occasionally run both operations concurrently, it is still a lot faster than any single-drive scratch solution.

Cheers,


David Saffir
« Reply #26 on: March 12, 2010, 06:05:02 PM »

my desktop setup:

10,000 rpm boot drive, 150GB - noticeable improvement in program loading and execution
8GB of RAM - probably not enough (!!) since many of my multi-layered files exceed 1GB.
scratch disk is two 7,200 rpm drives joined in a striped RAID set - they are used for nothing else; I got inexpensive ones

most external drives are handicapped for scratch use by their 5,400 rpm spindles or by their interface.

one exception is eSATA, which is about as fast (I think) as an internal drive running at equivalent RPM

David

Farmer
« Reply #27 on: March 12, 2010, 06:53:55 PM »

Quote from: Jack Flesher
Uh, once again, you have to be saving or reading one file from your image array WHILE you're hitting scratch because you're in the middle of some operation on some other (probably big) file in CS --- most folks don't do that with regularity, so it isn't ALWAYS a bad idea as you suggest...  Moreover, a 4-drive RAID-0 array is pretty dang fast to begin with, so even if you do occasionally run both operations concurrently, it is still a lot faster than any single-drive scratch solution.

Cheers,

If you're opening numerous large files to stitch or merge or HDR etc., then you can easily be reading from the image portion of the array whilst accessing the scratch portion.  You only need some background operation (OS or AV, for example) to hit the image array whilst you're making use of the scratch, and the problem appears.  If you have a long operation running in the background and choose to move to some other app, you can end up causing a conflict, too.

A 4-drive array is fast, but it loses buckets of speed when it has to move the heads to another part of the disk to read or write data; it ties up your channel bandwidth with more than one operation, and if you're using mainboard RAID as most people are (and not a dedicated controller card), you're using more resources to handle two RAID operations at once (certainly less of an issue with current processors and memory access, but still a factor).

Hence I would say it's always a bad idea, because there are too many occasions on which you can have a conflict and reduce performance, let alone the increased risk of disk failure due to increased usage (which might be small, but is still real).  For the price of disks these days, and with most mainboards providing capacity for multiple arrays (or spend a reasonable amount and get a real controller), it's a much better idea to keep the images and the scratch apart.
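A toy model of the seek-contention argument: a spindle streaming sequentially pays a full seek every time it hops between the image and scratch partitions. All numbers here are illustrative assumptions, not measurements:

```python
# Effective throughput when every 1 MB chunk is preceded by a partition hop.
stream_mb_s = 100.0    # assumed sequential rate for one spindle
seek_s = 0.015         # assumed ~15 ms stroke between distant partitions
chunk_mb = 1.0         # data transferred between hops

t_chunk = chunk_mb / stream_mb_s       # 10 ms of pure streaming
t_contended = t_chunk + seek_s         # plus a seek before each chunk
effective_mb_s = chunk_mb / t_contended
print(round(effective_mb_s, 1))        # collapses to 40.0 MB/s
```

Smaller chunks or longer seeks make the collapse worse, which is the heart of the "keep them apart" position.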

feppe
« Reply #28 on: March 12, 2010, 07:22:51 PM »

Quote from: Farmer
If you're opening numerous large files to stitch or merge or HDR etc., then you can easily be reading from the image portion of the array whilst accessing the scratch portion.  You only need some background operation (OS or AV, for example) to hit the image array whilst you're making use of the scratch, and the problem appears.  If you have a long operation running in the background and choose to move to some other app, you can end up causing a conflict, too.

A 4-drive array is fast, but it loses buckets of speed when it has to move the heads to another part of the disk to read or write data; it ties up your channel bandwidth with more than one operation, and if you're using mainboard RAID as most people are (and not a dedicated controller card), you're using more resources to handle two RAID operations at once (certainly less of an issue with current processors and memory access, but still a factor).

Hence I would say it's always a bad idea, because there are too many occasions on which you can have a conflict and reduce performance, let alone the increased risk of disk failure due to increased usage (which might be small, but is still real).  For the price of disks these days, and with most mainboards providing capacity for multiple arrays (or spend a reasonable amount and get a real controller), it's a much better idea to keep the images and the scratch apart.

Have you done testing on the impact? Throughput and seek times of any decent 4-disk RAID 0 array are so good that I can't imagine how such conflicts would cause noticeable degradation of performance, even when using HDDs.

And the risk of disk failure due to increased usage is academic and infinitesimal.

Farmer
« Reply #29 on: March 12, 2010, 07:48:27 PM »

Quote from: feppe
Have you done testing on the impact? Throughput and seek times of any decent 4-disk RAID 0 array are so good that I can't imagine how such conflicts would cause noticeable degradation of performance, even when using HDDs.

And the risk of disk failure due to increased usage is academic and infinitesimal.

No - why would I deliberately slow my system down?  As soon as I need to access two streams of data down the same pipe, I'll lose performance.  Seek times are good because the load is distributed over 4 devices; if you make the array do twice as many seeks, you'll clearly reduce performance.  There's just no need to set up in this manner.

Regarding disk failure, with a 4 disk RAID 0 you have quite a reasonable risk of losing a drive (particularly if you don't have a UPS to protect you) and if you have images on it, then you're heading to your backups.  Increased usage will lead to an increased failure rate.

The bottom line is, there's really no need to do it this way.  Use two arrays.

John.Murray
« Reply #30 on: March 12, 2010, 08:42:55 PM »

I think there is a lot of misreading of existing information out there.

RAID became commonplace on bus-mastered controller technology.  This includes SCSI, SAS and FC.  Having multiple disks and multiple file systems on a given bus was not an issue, because the controller itself would direct traffic on the channel.

SATA, on the other hand, supports (and is optimized for) one disk per channel only.  We're seeing amazing performance because the operating system's file subsystem, device driver, and on-disk cache have become tightly integrated.  Splitting a disk into partitions means we now have 2 filesystems to contend with.  Any switch between the 2 will involve cache invalidation and rebuild - a very expensive operation.  Whether this is important to you is of course your own decision, but it is definitely not considered a best practice.

I'm all for RAID 0 on a pair of short-stroked, high-RPM drives for swap only.  I wouldn't even consider using the slack space for any reason.
« Last Edit: March 12, 2010, 08:50:16 PM by John.Murray »

Schewe
« Reply #31 on: March 12, 2010, 11:18:28 PM »

Quote from: John.Murray
I'm all for RAID 0 on a pair of short-stroked, high-RPM drives for swap only.  I wouldn't even consider using the slack space for any reason.

While I agree in general with regard to most systems, there is a school of thought holding that it's the actual disk access - the speed of the reads/writes - that changes based on the function being used...

See: Moving Your Users Directory - OSX Leopard

This is taking advantage of the fact that most OS calls tend to be small transactions rather than large transactions often associated with scratch files for Photoshop (or the massive disk I/O required for opening an image).

We'll find out in a few weeks...I'm putting in a new MacPro with the Apple RAID card and 4 SAS 15K drives - a pair partitioned according to the above partitioning scheme, and the other 2 drives striped for Photoshop scratch space. This puts the base OS, the Applications folder and the PS scratch disk all on separate physical volumes...

There's another aspect of PS scratch that you should keep in mind...the base level of scratch in Photoshop is a function of the physical RAM allocated to Photoshop AND the size of your images...Photoshop will, by default, create a default scratch file based on the physical RAM allocated to Photoshop - and that's gonna change with 64-bit computing...in the past we've generally gotten by with 5-10X scratch-to-RAM...but if you have a 64-bit OS (such as Win 7 64-bit or Snow Leopard, and hopefully the next version of Photoshop), your scratch file sizes may grow...possibly a lot if you have a ton of RAM on the motherboard. My MacPro will have 32 gigs, so the amount of scratch I'm allocating is a pair of 600GB SAS drives - so, over one TB of scratch...
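The 5-10X rule of thumb works out like this (the RAM figure is the MacPro described in the post; the multipliers are the quoted rule of thumb):

```python
# Scratch budget as a multiple of the RAM allocated to Photoshop.
ram_gb = 32                          # the MacPro described above
low, high = 5 * ram_gb, 10 * ram_gb
print(low, high)                     # 160-320 GB; a 2 x 600GB stripe covers it
```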

And normally I would say hard drives are cheap - and they are, but not SAS drives...so the fact that I'm throwing a TB of super-fast scratch disk at Photoshop should tell ya something...

I'll report back when my system is in...
« Last Edit: March 13, 2010, 01:17:36 AM by Schewe »
Christopher
« Reply #32 on: March 13, 2010, 02:28:07 AM »

Well, I think it is all a question of money. What I have now is quite nice.

- 120GB SSD for OS and programs
- 220GB SSD for LR files
- 4x500GB RAID 0 as scratch for PS and PTGui
- 6x2000GB RAID 5 for image storage

The two SSDs are on the mainboard controller, the RAID 0 on an additional controller, and the RAID 5 on yet another RAID controller.

Well, I can say I have never worked so quickly. It is amazing how fast everything gets with such a setup.

Transfer rates around 400-600 MB/s between the RAID arrays, instant program loading times - I really love it. I have never opened 20 x P65+ files in PTGui so fast, or opened a 20GB file in PS.

Is it the best? Certainly not. If I had the money, I would get 2 x 120GB SLC SSDs with a SATA3/SAS 6Gb/s interface as system drives, and the same for the LR drive.
You could go even further and use 4 SLC SSDs in RAID 0 for a scratch disk, which would be faster than any HD regardless (SATA/SAS).
If one had the money, one could do that with the whole image storage too, but hell, I don't even wanna start, because for that kind of money one can buy a new computer every 6 months, or a car.


In the end, there are two main mistakes made today: people who think they can use MLC-based SSDs in a RAID setup, and people who put all their RAID arrays on one controller.

feppe
« Reply #33 on: March 13, 2010, 08:10:08 AM »

Quote from: Christopher
Well, I think it is all a question of money. What I have now is quite nice.

- 120GB SSD for OS and programs
- 220GB SSD for LR files
- 4x500GB RAID 0 as scratch for PS and PTGui
- 6x2000GB RAID 5 for image storage

The two SSDs are on the mainboard controller, the RAID 0 on an additional controller, and the RAID 5 on yet another RAID controller.

Well, I can say I have never worked so quickly. It is amazing how fast everything gets with such a setup.

Transfer rates around 400-600 MB/s between the RAID arrays, instant program loading times - I really love it. I have never opened 20 x P65+ files in PTGui so fast, or opened a 20GB file in PS.

People on the OCZ forum post benchmark results which give an idea of the monstrously fast speeds you get with SSDs. Unraided you get 200+ MB/s sequential read; a 2-disk RAID gives 400+. Somebody has 8 in RAID going past 1.3 GB per second. Not shabby for consumer setups at a fraction of the price of enterprise solutions.
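The scaling described is roughly linear in stripe width, at least until the controller or bus saturates. A sketch, with the per-drive figure from the post and idealized linear scaling as the stated assumption:

```python
# Idealized RAID-0 sequential-read scaling for SSDs.
per_drive_mb_s = 200
results = {n: n * per_drive_mb_s for n in (1, 2, 8)}
print(results)   # {1: 200, 2: 400, 8: 1600}; the post saw ~1300 for 8 drives
```

The shortfall at 8 drives (~1300 MB/s observed vs 1600 ideal) is the controller/bus saturation showing up.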

As an amateur setup, I have a reasonably priced computer which I recently upgraded with a single OCZ Vertex for OS and scratch (the horror) and 8 GB of RAM with 64-bit Win7. I just tested it by opening a 2.4 GB stitched, bracketed, multi-layered panorama; it took 105 seconds. On my older system with 4 GB and HDDs it took 10+ minutes - no kidding. I think that was mainly because the OS ran out of memory and swapped everything to the HDD.

An SSD is the most cost-effective upgrade one can make to a computer lacking one, with more memory coming a close second.
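For what it's worth, the speedup implied by those numbers (taking "10+ minutes" as a conservative 600 seconds):

```python
# Old HDD/4 GB system vs new SSD/8 GB system, opening the same 2.4 GB panorama.
old_s, new_s = 600, 105
speedup = old_s / new_s
print(round(speedup, 1))   # ~5.7x, and more if "10+" means longer
```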

Jack Flesher
« Reply #34 on: March 13, 2010, 12:10:47 PM »

While SSDs are great, they're not the best choice for operations that do a lot of random writes and reads, as scratch does.  This is due to SSDs' performance degradation issues.  Spinners in RAID-0 do not suffer this limitation.

Re the issue of scratch and data on the same spindles, I do agree with Farmer in theory -- however, in my experience with my machine I essentially never run into the issue of both operations tagging the spindle set concurrently.  First, my machine set-up is an 8-core MacPro 3.2 with 24G RAM.  I have an SSD for OS and apps, and then 4 SATA2 spinners in RAID-0 with a thin outer rim partition dedicated to scratch and the rest to image storage.  On my machine, even when I'm doing a big pano, I typically process all the raws in C1, then read those tiffs into AutoPano.  I have 24 gigs of ram in my machine, so those files all load, never tagging scratch.  I wait for the pano to render, then bring the single large file into CS to perform my regular localized edits.  About the only time I run into a double tag issue is when I'm batch processing a bunch of raws from a shoot in C1, then start working on a massive file in CS -- which is rare -- but even then I do not 'notice' the performance hit on my machine as the C1 batch is usually finished before I get very far along with my CS edits. CS5 will change this too, at least hopefully, if it utilizes onboard RAM more effectively.  But as always, I respect that YMMV.

PS for Jeff: I would reconsider the SAS drive array.  SATA3 is already here, and in a very short while you'll be able to load your box up with 4 @ 2TB SATA3 drives for probably less than the 4 smaller SSDs are going to cost, and it will likely be a lot faster. Just sayin...

Cheers,
« Last Edit: March 13, 2010, 12:13:25 PM by Jack Flesher »

feppe
« Reply #35 on: March 13, 2010, 12:32:26 PM »

Quote from: Jack Flesher
While SSDs are great, they're not the best choice for operations that do a lot of random writes and reads, as scratch does.  This is due to SSDs' performance degradation issues.  Spinners in RAID-0 do not suffer this limitation.

This is only true, to a certain extent, for SSDs without TRIM. Granted, very few SSDs in operation have TRIM or even garbage collection, but all new OCZ Vertexes ship with TRIM, and their latest firmware adds TRIM support for all Vertex drives. I don't know about other manufacturers.

Even without TRIM they will whip HDDs in random reads/writes since they have no moving parts, and I doubt they would perform worse than the fastest HDD even with prolonged use.

If you're referring to the limited write cycles of SSDs, that's FUD in normal use - you'll be upgrading your computer way before it becomes an issue.

The biggest caveat with SSDs is that there's no TRIM for RAID setups in the foreseeable future.

Jack Flesher
« Reply #36 on: March 13, 2010, 01:12:03 PM »

Didn't realize OCZ was shipping with TRIM already -- awesome!

feppe
« Reply #37 on: March 13, 2010, 01:21:59 PM »

Quote from: Jack Flesher
Didn't realize OCZ was shipping with TRIM already -- awesome!

Yeah, it's pretty recent, and that's why I finally took the plunge and bought one and am quite happy with it. Will certainly add another drive in the future for scratch. I think I'll avoid RAID setups since TRIM doesn't work with them - but again I'm confident it would still be faster than HDD RAID.

SSDs still have some minor shortcomings. OCZ's official setup guide says I should let the drive idle at the Windows log-in screen for hours each week so garbage collection can clean up the pieces missed by TRIM. I'm highly skeptical about the necessity and utility of this, and I'm not going to do it. I've taken benchmarks and will check occasionally for performance degradation.

Christopher
« Reply #38 on: March 13, 2010, 01:33:08 PM »

Quote from: feppe
Yeah, it's pretty recent, and that's why I finally took the plunge and bought one and am quite happy with it. Will certainly add another drive in the future for scratch. I think I'll avoid RAID setups since TRIM doesn't work with them - but again I'm confident it would still be faster than HDD RAID.


OK, just to clarify once again: only consumer-grade SSDs (MLC) need the TRIM command, and these are the drives which lose a lot of performance in a RAID (RAID = no TRIM).

SLC-based SSDs, by comparison, don't need it and can be used in RAID; the only drawback is that they are even more expensive.

It is also still true that an MLC-based RAID system WILL lose a lot of performance over time. At some point it will be slower than any normal HD RAID, so it only makes sense for a temporary volume (scratch), which can be reformatted and deleted from time to time - or one should use professional SSDs.

In the end, I don't think it makes sense yet to use SSDs for storage. If I take myself as an example, I would need around 4TB of live storage, which with RAID 5 or 10 would mean 3-4 x 2TB drives. With SSDs one would need around 20 drives to get there.

So SSD storage only makes sense for someone who needs only his current images on a fast RAID array.
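The drive-count arithmetic behind that conclusion, as a sketch (the 4TB target and 2TB HDDs are from the post; the 200 GB SSD capacity is an assumed 2010-era figure):

```python
import math

# Drives needed for ~4 TB of usable space, HDD vs SSD.
need_gb = 4000
hdd_gb, ssd_gb = 2000, 200
hdds = math.ceil(need_gb / hdd_gb) + 1   # one extra drive for RAID 5 parity
ssds = math.ceil(need_gb / ssd_gb)       # before any parity overhead at all
print(hdds, ssds)                        # 3 HDDs vs 20 SSDs
```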
« Last Edit: March 13, 2010, 01:38:30 PM by Christopher »

feppe
« Reply #39 on: March 13, 2010, 02:30:44 PM »

Quote from: Christopher
OK, just to clarify once again: only consumer-grade SSDs (MLC) need the TRIM command, and these are the drives which lose a lot of performance in a RAID (RAID = no TRIM).

SLC-based SSDs, by comparison, don't need it and can be used in RAID; the only drawback is that they are even more expensive.

It is also still true that an MLC-based RAID system WILL lose a lot of performance over time. At some point it will be slower than any normal HD RAID, so it only makes sense for a temporary volume (scratch), which can be reformatted and deleted from time to time - or one should use professional SSDs.

In the end, I don't think it makes sense yet to use SSDs for storage. If I take myself as an example, I would need around 4TB of live storage, which with RAID 5 or 10 would mean 3-4 x 2TB drives. With SSDs one would need around 20 drives to get there.

So SSD storage only makes sense for someone who needs only his current images on a fast RAID array.

I wasn't referring to SSDs in a storage sense, but in an OS and scratch-disk sense. But I guess some people can afford to use SSDs for storage...
