Author Topic: How much free space do I need on SSD as "system and program" drive?  (Read 3960 times)
Rand47
« on: October 18, 2012, 09:50:00 AM »

Hi All . . .

Forgive my ignorance on this one, but I've searched on the net and found all kinds of conflicting information and advice.

I have a new PC build.  256 gig SSD C: drive, a second 128 gig SSD dedicated to scratch disk / ACR cache, and other regular HDD drives for data storage.

With all my programs, drivers, etc. loaded on the C: drive SSD it shows 143 gigs "free" of 256 gigs on the drive, or roughly 55% free space.  
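(A quick sanity check of that percentage, using Python as a calculator; the figures are the ones quoted above:)

```python
# Sanity-checking the free-space percentage quoted above.
free_gb, total_gb = 143, 256
pct_free = 100 * free_gb / total_gb
print(f"{pct_free:.0f}% free")  # about 56%, i.e. "roughly 55%" as stated
```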

My question is: is this sufficient free space for the system drive to work optimally? I've read everything from only needing 10% of the space free to needing a minimum of 65% free, and I have no idea what to believe.

Thanks in advance for any insight from you Gurus of bits and bytes.
Rand

Ellis Vener
« Reply #1 on: October 18, 2012, 10:15:43 AM »

More free space is better, especially with spinning disks. While he primarily deals with Macs, Lloyd Chambers has a terrific guide to choosing and using SSDs at http://www.macperformanceguide.com/SSD-RealWorld.html that should be OS-independent.

Ellis Vener
http://www.ellisvener.com
Creating photographs for advertising, corporate and industrial clients since 1984.
Rhossydd
« Reply #2 on: October 18, 2012, 11:40:15 AM »

Quote from Ellis Vener: "Lloyd Chambers has a terrific guide to choosing and using SSD drives"
That's over two years old now. I think a lot has changed with SSDs in that time.

jonathanlung
« Reply #3 on: October 18, 2012, 01:51:52 PM »

It depends.

I am not an expert, but the performance degradation from SSDs comes from the fact that writing data to the disk may involve having to read the previous contents, perform an erase, and then write the new data. There are situations where the read and erase are not necessary, but as the disk fills up, those situations become rarer.

Whether you will experience performance degradation depends on the SSD's controller, the amount of over-provisioning (some SSDs report less capacity to the operating system than they physically have, to keep a higher percentage of real space free), and the pattern of writes to the disk. More expensive SSDs tend to have more over-provisioning and better controller firmware; they can be filled further before performance starts to drop.
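To make the over-provisioning point concrete, here is a rough sketch. The 256/240 GB split and the function name are illustrative, not from any specific drive's datasheet, and it assumes the OS TRIMs freed space so the controller can actually treat it as spare:

```python
def effective_spare_pct(physical_gb: float, advertised_gb: float, free_gb: float) -> float:
    """Spare area the controller can draw on, as a percent of physical flash.

    Factory over-provisioning (physical minus advertised capacity) plus any
    TRIMmed user free space both act as spare area for wear leveling and
    garbage collection.
    """
    spare_gb = (physical_gb - advertised_gb) + free_gb
    return 100.0 * spare_gb / physical_gb

# A drive with 256 GB of flash sold as 240 GB, with 100 GB left free:
print(round(effective_spare_pct(256, 240, 100), 1))  # 45.3 (% of the flash is spare)
```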

Rand47
« Reply #4 on: October 18, 2012, 09:16:12 PM »

Thanks to all for the feedback.  I'm assuming, then, that 55% of the drive being "free space" is sufficient, all the other intangibles notwithstanding.

Rand

Sareesh Sudhakaran
« Reply #5 on: October 19, 2012, 04:09:00 AM »

Technology changes every few months, but the general rule of thumb is to have at least 30% free as overhead. I personally try to keep it at 50%.

Get the Free Comprehensive Guide to Rigging ANY Camera - one guide to rig them all - DSLRs to the Arri Alexa.
Justan
« Reply #6 on: October 19, 2012, 10:57:48 AM »

I’m not sure that a case can be made where an SSD needs more free space than is required for regular use.

With spindle drives, the reason to leave free space is so the read/write heads can write any file into a contiguous region. That way the heads don’t have to seek out the fragments of a file scattered across the drive, which takes a lot of time. When the heads do have to seek a lot, it dramatically slows down both writing to the drive and reading from it. The issue is the mechanical overhead required to find the data on the storage media.

In addition, if one defragments a drive, there needs to be enough free space on the drive so that the largest file on the drive can be written in a contiguous space. The largest file is often the system page file, or the file(s) used by a virtual machine, if one uses VMs.

With SSDs, very little time is spent seeking the location of data, because there is no mechanical platter to search; estimates of SSD seek times are about 1 ms or less. From my reading, no SSD manufacturer recommends defragmenting their drives, as it is a pointless exercise on an SSD.

So… leave enough space to write the largest file you are going to use, and don’t worry about it otherwise.

jonathanlung
« Reply #7 on: October 19, 2012, 03:50:48 PM »

Fragmentation causes write amplification when writing to an SSD (an issue that doesn't exist with HDDs); access times shouldn't change for small transfers. On the other hand, if data is highly fragmented, the controller might saturate its internal bandwidth reading from more blocks than it otherwise would, theoretically reducing measured throughput; I don't know whether that's a real bottleneck in practice, though.
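A tiny illustration of write amplification. The 4 KB page and 256 KB erase-block sizes are typical-but-hypothetical figures, not from any specific drive:

```python
def write_amplification(host_kb_written: float, nand_kb_written: float) -> float:
    """NAND bytes physically written per byte the host asked to write."""
    return nand_kb_written / host_kb_written

# Worst case: updating one 4 KB page in place forces the controller to
# read, erase, and rewrite the entire 256 KB block that contains it.
print(write_amplification(4, 256))  # 64.0 -- 64x more flash wear than the host requested
```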

John.Murray
« Reply #8 on: October 19, 2012, 07:18:40 PM »

Anand found that halving overprovisioned space had less than a 3% effect on performance under heavy utilization:

http://www.anandtech.com/show/3690/the-impact-of-spare-area-on-sandforce-more-capacity-at-no-performance-loss/2

Leaving free space on the drive adds to this overprovisioning, but given his numbers, I wouldn't worry about it.

« Last Edit: October 19, 2012, 07:49:47 PM by John.Murray »

Rand47
« Reply #9 on: October 22, 2012, 09:38:17 PM »

Again...

Thanks everyone, for the information.  Very helpful.

Rand

francois
« Reply #10 on: October 23, 2012, 04:53:35 AM »

Quote from John.Murray:
Anand found that halving overprovisioned space had less than a 3% effect on performance under heavy utilization:
http://www.anandtech.com/show/3690/the-impact-of-spare-area-on-sandforce-more-capacity-at-no-performance-loss/2
Leaving free space on the drive adds to this overprovisioning, but given his numbers, I wouldn't worry about it.

Thanks for sharing. I missed that article…

Francois
Eak.in
« Reply #11 on: October 25, 2012, 07:37:34 PM »

Having over 100GB free on your main drive is good.

Couple of other things to keep in mind regarding your main disk...

1) You will slowly run out of space over time, due to OS updates, downloads, your web browser cache, files you create, etc.

2) Your swap file is typically located on your main drive.  The general rule of thumb is to reserve 1 to 1.5 times the amount of RAM your computer has for "swap" (the scratch pad the OS uses when it needs to temporarily free up RAM for another program).
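The rule-of-thumb arithmetic from point 2, sketched out (the 16 GB RAM figure is just an example):

```python
def swap_reserve_gb(ram_gb: float) -> tuple[float, float]:
    """Rule-of-thumb page-file reservation: 1x to 1.5x installed RAM."""
    return (1.0 * ram_gb, 1.5 * ram_gb)

low, high = swap_reserve_gb(16)
print(f"With 16 GB of RAM, plan for roughly {low:.0f}-{high:.0f} GB of swap")
```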

As long as you keep your browser cache in check, keep your data files on the data drive, and don't drastically increase your RAM (which would grow the swap file), you should be good.

Regular housekeeping should give you a fast, responsive system.  Enjoy your fast computer!

Just make sure you perform regular backups!  Remember the rule of thumb: three backups, on two different media, with one copy off-site.

Rand47
« Reply #12 on: November 03, 2012, 11:25:33 AM »

Once again, many thanks for the insight and information.

My system is fully up, and running well.  With all of my programs, utilities, plug-ins & OS I'm still at just over 50% free space.

Scratch, ACR cache, LR catalog & previews on second 128 gig SSD.

Data on 2TB fast conventional HDD & archives on USB 3.0 external HDDs.

System works great w/ LR 4.2

I've successfully made two clone back-up copies of the c: drive and am happily working away in this new environment.

Thanks again.  Much appreciate the broad knowledge found here. 

Rand

kingscurate
« Reply #13 on: November 05, 2012, 06:44:05 AM »

I've been trying to load a new SSD and have struggled so far, but thanks to great customer service I'm being provided a new one. I think I know where I went wrong.
Question: have you noticed a big speed difference? Apart from Lightroom, is there a performance boost? What has improved in Lightroom? Develop and import, I assume.

I aint a pro
Rand47
« Reply #14 on: November 05, 2012, 08:02:07 AM »

Quote from kingscurate:
I've been trying to load a new SSD and have struggled so far, but thanks to great customer service I'm being provided a new one. I think I know where I went wrong.
Question: have you noticed a big speed difference? Apart from Lightroom, is there a performance boost? What has improved in Lightroom? Develop and import, I assume.

Yes, in the Dev module sliders are instantaneous.  Previews pop quickly to full res.  Moving from module to module is very quick, etc.  Import is faster as well, probably due at least in part to USB 3.0 card reader.

BUT, I must admit to not knowing how much to attribute to the SSDs alone since there was a concomitant significant increase in CPU power, video card capability and RAM.  I can say that my system boots in about 10 seconds!  I used to push the power button, then go get my coffee.  My machine is also used for my consulting business w/ lots of word processing, spreadsheets and graphics.  While my other machine was plenty up to those tasks, this one is blazing fast on everything.  MS Outlook, for instance, now just pops up on the monitor, rather than taking 5-10 seconds to load.  Same w/ the other MS Office 2010 programs.   

Best of luck w/ your new install.
« Last Edit: November 05, 2012, 08:11:11 AM by Rand47 »
kaelaria
« Reply #15 on: November 05, 2012, 12:57:43 PM »

It also depends greatly on the exact model of drive, whether free space plays a part.  Since you said 256 GB rather than 240, I assume you might have a Vertex 4 like I do?  If so, you have a new controller that benefits from a performance mode when utilization stays below 50%, unlike the SandForce controllers everything else uses.  There's some good and some horribly wrong advice in the previous posts; in short, look to current articles and especially benchmarks from sites such as AnandTech and Tom's Hardware for the real deal.