SSD Write Endurance...for those of you who have doubts...

Hello!
Many people fear SSDs. I know it, I read about it, and I hear people talking about it at work. They are afraid that they'll "wear out". There's a sliver of truth behind this: NAND flash cells do have a "write threshold", a limited number of times they can be written to. This number is quite high, but not infinite. Even the bearings in traditional spinning HDDs will eventually wear out and fail, so really, nothing lasts forever. But people have been more fearful of this for SSDs than for HDDs.
Over the years, SSD makers have employed many strategies and techniques for making these flash cells last longer, so the fear of SSDs wearing out is mostly unfounded now. But I still hear it, even amongst very smart people.
So, in an effort to reduce ignorance on this subject, here's some interesting info.
There is a site called The TechReport, which I have found to be a fabulous resource over the years.
In late August 2013, the editors began a write-endurance test of six SSD drives. The objective was to see JUST HOW LONG they would last. Here is the introductory article on the test.
Since the beginning of the test, 4 of the 6 drives have failed. The test continues to run 24X7 and as a result, these SSDs have been subjected to FAR MORE WRITES than you would ever do on your own PC or Mac systems. Even the first ones that failed exhibited very good resilience and reliability!
Anyway, there have been numerous updates in this series of articles, but here's the latest one. If you want to read them all, I'll leave you to your own inquisitiveness.
SSDs are much faster for booting, launching programs, and loading content into DAZ Studio, Poser, Carrara, and other graphic art products, as well as audio composition applications too. I hope that this information helps some of us to reconsider our fear of SSDs for future rendering computer upgrades, rebuilds, and replacements.
Cheers!
Comments
...I consider them fine for a notebook, which gets jostled around, and maybe as just a boot drive for a desktop, but the cost is still somewhat prohibitive for mass storage. All the speed advantage is also lost if your programmes still have to access data on a larger conventional HDD (like, say, your content library/runtimes).
I can get a 1TB 7200 RPM HDD for under $50; the best price I can find for a 1TB SSD is about $430.
2 TB is the largest SSD currently available and they take up a PCI slot as well as being prohibitively expensive for most of us here.
So at least from my perspective, it isn't the "durability" factor as it is the economic factor that scares me away.
When I (finally!) got a new Mac a few months ago, I went completely crazy on all upgrades...except for SSD.
I gave it a lot of thought. It was very tempting. Very.
But as Kyoto Kid points out, price versus power was a bit of a consideration.
The thing that really caused me to drop it from list of upgrades, though, was Apple's "Fusion"...a first generation technology that silently shuffles things between the SSD and conventional storage based on some frequency-of-use algorithm.
I'm not a huge fan of first-gen anything, especially if that anything is making silent decisions on my behalf about where things ought to be on my set of mounted volumes. So I gave it a pass and went with a 3 TB conventional platter instead.
-- Dan
Good stuff, thanks for sharing! SSD is up soon for me in upgrades, and I hadn't done much reading into the issue of them failing. The scare seemed weird to me since every hard drive I've ever owned has died at one point or another. It's not a huge number of them or anything, but it made me wonder if people worried about it have had many drive failures. I'm also sold on RAM drives, I'm realllly tempted to go with 64 gigs in my next build if I can spare it. Speedy goodness while gaming, and plenty of room for big renders, win-win!
Personally, I would wait for the tech/prices to level out before buying a large SSD like that 1TB. It would be nice to have that much data on a faster drive, but I can't justify that price. I can stick it out putting the OS, choice apps, and content on a 250GB.
@dhtapp, many people bought smaller ones and put the important stuff on those; it really takes the strain off the OS-level stuff.
(quick OT question, does daz studio NEED the installer files it downloads in the library or are those safe to remove?)
Yes, I did that to begin with. I actually started by putting my content on one (this was before I got back into DAZ, so it was music content, not DAZ content, but it's the same concept). Eventually, I bought one for the OS and applications. That was a GREAT decision. It's blazingly fast.
But I still have some things (like download folders and such) on an HDD and I also write backups to high capacity physical spinning hard drives, mostly for capacity reasons: I need a 4TB drive in order to hold multiple versions of backups. Actually, I have multiple 4TB drives for each computer, and I rotate them. Another way of lowering one's risk.
This is a good discussion. Thank you all for bearing through my reasoning. :coolsmile:
Sorry, I meant 64 gigs of RAM, I just missed a word! A ramdrive is where you mount a virtual drive in RAM, where read/write speeds are hugely faster than an HDD or SSD. Plus they are temporary; you can load/unload them whenever you need: speeding up load times in games, or as a scratch disk in Photoshop, for example.
Thanks for the answer about the archives. It occurred to me that if I put my daz library on a SSD, I'd rather those be put somewhere else, but wouldn't want to set up symlinks for everything in case their presence was required.
Yeah, I move the zips to an archive location after installing (and you can change the download location, so you don't need to keep it on the C: drive).
I have an SSD and an HDD internal in my computer. All my programs sit on the SSD; all my files, including my runtime and my plugins, sit on the HDD, and the speed at which something loads is not noticeably different than if they all sat on the same drive. There is a few seconds of lag the first time a program on the SSD accesses the HDD after I turn on my computer, but after that I can see no difference in speed.
Great articles -- thanks for the links.
I'm following this thread with interest. While I would likely avoid SSDs for the moment for cost reasons, I'm also concerned about the limited number of write cycles. Honestly, though, I have no clue how often my disk is written to, since presumably most of the writes happen invisibly in the background where I as a user cannot see them.
Just to throw a few interesting bits of info out there: (fingers crossed) I've never actually had a hard drive failure. Those hard drives with moving parts have outlasted multiple graphics cards, a whole string of motherboards, and even a stick of RAM, by many years. I've also never had a fan fail.

While I'm at it, and unrelated to computers: I also purchased a bunch of mechanical timers and some electronic timers. The electronic ones both failed within a year or two, and probably 15 years later every mechanical timer is still running flawlessly. I have also had a huge pile of failed electronic ballasts for fluorescent lights (before switching to a different brand), many of which burned out fairly quickly; electrolytic capacitors leaked on most of them, once even rather spectacularly, leading to a hole burned clear through the circuit board. Capacitors also led to the failure of a motherboard, a power supply, and a monitor (all repaired by replacing the capacitors).

I know the "moving parts" thing is supposed to indicate a higher risk of failure, but in reality the non-moving bits fail quite easily, quickly, and often, and in my experience that has happened FAR more often than mechanical failures, for whatever reason. I would suggest not using "no moving parts" as an argument for a decreased likelihood of failure.
This tells me that you haven't read the articles on TechReport. Please go back to my original post for the links, and don't skim the articles; read for comprehension. This should alleviate your fears of limited write cycles. Seriously, it's a non-issue with modern SSDs.
+1
I'm seeing some speculation that spinning disks may fall out of the consumer market, replaced by SSDs, in five years or so. I'm also seeing some experimental high-end storage arrays going all-SSD for those with deep pockets. Really deep pockets.
I've done some rough calculations with a few SWAGs, and I think the 256 GB SSD in my current laptop should outlast this laptop - and its replacement.
I'm still waiting for magnetic bubbles or holographic chips! SSDs will never catch on. I also predicted in the 60s that computer games would never catch on, we'd never see a computer generated movie, a computer would never beat a man at chess, audio gear would always have vacuum tubes, and it would be wise to invest in magnetic tape companies.
But seriously, in my short stint as a home-computer fix-it guy over the last couple of years, I would tell my customers that computers had only two moving parts: the fans and the hard drives. After 4 or 5 years both were living on borrowed time and it was time to start thinking about replacing their computer. I was usually right. But many times it was the pound of inhaled dust kittens, dog fur, and cigarette-smoke goo that did a system in. 8-o
I'm at the point where I'm considering a new computer and have pretty much decided on an SSD for primary fast storage, and one or two 2TB internal disks for mass storage with continued use of several external disks for backup and archival storage.
I can't get behind the idea of 4TB hard drives. That's too many eggs in one basket. Of course, I said that about 1TB drives a few years ago. It used to be that the acceptable error rate for data storage was 1 in a hundred billion bits, which could be handled by error-correction schemes, but disks holding 1 trillion bytes seem to be pushing your luck. Agreed, technology has improved in the last 25 years, but hey, that's still a lot of eggs in one basket.
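For what it's worth, that error-rate worry can be quantified with a quick sketch. The 1-in-a-hundred-billion-bits figure is the old rule of thumb from the post; the 1-in-10^14 figure below is a typical modern consumer-drive unrecoverable-read-error spec, included only for comparison:

```python
def expected_bit_errors(capacity_bytes, error_rate_per_bit):
    """Expected number of raw bit errors in one full read of the disk."""
    return capacity_bytes * 8 * error_rate_per_bit

# The post's old rule of thumb: one error per hundred billion (1e11) bits.
# Reading a full 1 TB disk at that rate would mean ~80 raw errors.
print(expected_bit_errors(1e12, 1e-11))

# A typical modern consumer-drive spec: 1 unrecoverable error per 1e14 bits,
# which works out to well under one error per full read (~0.08).
print(expected_bit_errors(1e12, 1e-14))
```

So whether 1 TB is "pushing your luck" depends heavily on which error rate your drive actually achieves; ECC and spare sectors are what keep the raw number invisible in practice.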
With all this talk about reliability of SSDs, how often DOES Studio write to the content library. I understand when installing content or saving a scene but, any other times?
Gus
...maybe not to the content library (unless you modify texture maps, or save presets like characters, materials, poses, and LIE layers, which are usually saved during the session).
I have also noticed the HDD is in use during a render process.
I've had one HDD failure; it was in my 32-bit notebook several years ago, shortly after I attempted to set it to large address aware, which caused serious boot errors because the chipset would not support the 3 GB limit. Afterwards, booting up and shutting down took longer and longer each time until it wouldn't get past the BIOS screen.
Namffuak:
I have a 2011 laptop which is powerful enough to use as a travel rendering machine.
My own reassessment as noted above started a few weeks ago when I noticed how slow this laptop is when booting (it's actually a very fast machine even today, three years later, but the HDDs are the main bottleneck, and it's worst at boot time), and now I am thinking of replacing one of the three 1TB HDDs with an SSD and putting my OS, applications, and some content libraries on it. If I do this, I'll probably jigger a few things around and move my DAZ library (which is currently on its own partition on a different physical drive in the laptop) onto the SSD. As Kyoto Kid notes above, the DAZ content library is a high-read/low-write entity, so it would benefit greatly from an SSD.
Leather Gryphon:
"A lot of eggs in one basket" is an oft-given reason not to do something. Sure, you can go with smaller drives, but then you need more of them. This too carries a mathematical risk, yes? Not to mention that some motherboards have only a couple SATA ports. And for me, splitting backups to two different target drives would just complicate my backup strategy and increase the risk that something will go wrong and I won't notice it until it's too late and I need it badly.
By the way, if you (or anybody) is concerned at all about a hard drive failing, I'd suggest that you buy yourself a copy of Spinrite from Gibson Research. Then run it every few months. It will identify early any issues with the hard drive, and often will cause the drive to fix itself (even before you've noticed any problems) by rewriting blocks from bad sectors to good sectors. Many hard drives, including a lot of new ones from the factory, have bad sectors.
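On the same theme, anyone with the free smartmontools package installed can watch a drive's SMART attributes for early warning signs like reallocated sectors. Here's a small sketch that pulls that attribute out of `smartctl -A` text; the sample line below is made up for illustration (real output would come from something like `smartctl -A /dev/sda`):

```python
def reallocated_sectors(smartctl_output: str) -> int:
    """Pull the raw Reallocated_Sector_Ct value out of `smartctl -A` text.

    Returns 0 if the attribute isn't present (e.g. on some SSDs)."""
    for line in smartctl_output.splitlines():
        if "Reallocated_Sector_Ct" in line:
            # Raw value is the last column of the attribute row.
            return int(line.split()[-1])
    return 0

# Hypothetical sample of one row of the SMART attribute table:
sample = ("  5 Reallocated_Sector_Ct   0x0033   100   100   036"
          "    Pre-fail  Always       -       12")
print(reallocated_sectors(sample))   # 12
```

A steadily climbing raw value here is the classic "back up now" signal, on spinning drives and SSDs alike.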
So I guess punched cards aren't an option? 8-o Anybody want a good buy on a warehouse full of blank punch cards?
(1TB = 6.945 million boxes of 2000 cards each with 72 bytes per card)
Backup of 1TB to punched cards using a single high-speed punch at 10 cards per second, punching 24/7, would take about 44 years! 8-O
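Just for fun, the card arithmetic can be run through a quick sketch (decimal terabytes and the 72-bytes-per-card figure from the post). Note that at 72 bytes per card and 10 cards per second it works out to roughly 44 years of continuous punching, not millennia; the millennia figure comes out only if you punch a single byte per card:

```python
TB = 10**12            # bytes in 1 TB (decimal)
BYTES_PER_CARD = 72    # usable bytes per 80-column card, per the post
CARDS_PER_BOX = 2000
PUNCH_RATE = 10        # cards per second

cards = TB / BYTES_PER_CARD
boxes = cards / CARDS_PER_BOX
seconds = cards / PUNCH_RATE
years = seconds / (365.25 * 24 * 3600)

print(f"{boxes / 1e6:.3f} million boxes")   # ~6.94 million boxes
print(f"{years:.0f} years of punching")     # ~44 years
```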
Apologies for being dense as a brick yesterday and still today, even after re- and re-re-reading. Obviously I'm missing some information that's right in front of my nose somewhere. The article clearly paints a rosy picture, which hopefully is 100% accurate, which would be great. I want nothing more than for it to be right.

"The 840 Pro and a second HyperX 3K have now reached two freaking petabytes of writes. To put that figure into perspective, the SSDs in my main desktop have logged less than two terabytes of writes over the past couple years. At this rate, it'll take me a thousand years to reach that total."

But I still don't know if the author's main desktop usage rate is typical, or if my usage is typical, because I don't actually know what's going on in the background, unless I were to set up the same tests as the author. Are we comparable in writes to the average user running Office, or playing a game, or reading email? Do we write to the disk 2,000 times more often? 2 times more often? 2,000 times LESS often? Completely and totally random from user to user by a factor of 2,000? Does anybody know for certain whether the author's "typical" usage rate extends to our "typical" rate?

Will cost-cutting attempts significantly affect the quality, and thus shorten the lifespan, of the average drive in future production runs? If SSDs can run so long above the manufacturer's claimed lifespan, why are they being so conservative (or has that number changed since I saw it last)? Is there anything we overlooked that won't show up until after they've been used far more extensively?
To be clear, I'm not arguing with the article in any way. I am simply clueless as to whether or not the article's numbers can be applied to each of our unique situations, and I'm always leery of the hidden problems that often show up over time in products in general. I'd love for SSDs to replace traditional hard drives if they prove to perform better or just as well in every way. Thanks for bearing with my stubbornness and pointing me at anything I keep overlooking.
....heh :lol:
2 TB over two years sounds fairly average, and web browsing likely accounts for a majority of the number of writes for many people (we're constantly downloading many little things). It's smart for the manufacturers to give a conservative lifetime estimate; an error margin a day keeps the class action away!
If you're curious and on windows, open the task manager, head to the Performance tab and find the "Resource Monitor" button. It doesn't log counts, but you can get a feel for what's going on with your disks in real time.
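If you want actual numbers rather than eyeballing Resource Monitor, a short script can read the OS's cumulative disk I/O counters. This sketch assumes the third-party psutil package is installed (`pip install psutil`); note the counters reset at every boot, so they show uptime totals, not lifetime totals:

```python
import psutil

# Cumulative disk I/O since boot, summed across all physical disks.
io = psutil.disk_io_counters()
print(f"written since boot: {io.write_bytes / 1e9:.1f} GB "
      f"in {io.write_count} write operations")
```

Checking this after a typical day of use gives a rough feel for your own daily write volume, which is the number that matters for SSD endurance.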
My two cents.
Apple's tech is nothing new. Seagate introduced hybrid disks years ago, with caching driven by the disk's PCB. Microsoft can also do some caching with a USB stick or an SSD. Caching can bring some speed-up for reads or writes, but not that much. However, I can see a big advantage to such big volumes in the case of an OS that can use the ZFS filesystem and its snapshot features. We're considering something similar at work for some Oracle databases and file servers.
We began using SSDs at work five years ago, on machines that run 24/7.
Before that we used RAID systems with mechanical disks. The difference in failure rate is huge.
In those five years we have had only three SSD failures. One was after somebody pulled a disk out for cloning, and the other two were because of a faulty controller. So in fact, all of the failures were due to external factors. (We only use Intel SSDs because they have the lowest return rate.)
With mechanical disks, we used to change them every three years, or sooner for certain series. With SSDs I planned to leave them in for ten years, and so far everything is going according to plan. (We have 200+ of them running.)
I don't think the NAND's maximum write-cycle count is that important. Not yet, even with the latest NAND in Samsung's SSDs, which is rated for only 3,000 write cycles. In mechanical and SSD drives alike, one of the components that may fail first is the PCB. At least with an SSD we eliminate the mechanical failures. And I've seen lots of failures, both mechanical and PCB; not one yet due to NAND wear.
The typical home user won't write enough data to kill the NAND, even with background OS tasks. Some calculations with some assumptions make that pretty obvious. Take a worst case where an SSD dies after 400 TB written, and say you are a (very) big consumer who writes 100 GB a day => 36.5 TB a year. 400/36.5 = 11 years before your disk's NAND dies. Good luck; you'll have to work hard.
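That endurance arithmetic can be written down as a quick sketch. The 400 TB endurance figure and the 100 GB/day write rate are the post's own worst-case assumptions, not manufacturer specs:

```python
def years_until_worn_out(endurance_tb, gb_written_per_day):
    """Naive lifespan estimate: rated write endurance / yearly write volume."""
    tb_per_year = gb_written_per_day * 365 / 1000   # GB/day -> TB/year
    return endurance_tb / tb_per_year

# Worst-case assumptions from the post: 400 TB endurance, 100 GB/day.
print(f"{years_until_worn_out(400, 100):.1f} years")   # ~11 years

# A more typical home user writing 10 GB/day would take ~110 years.
print(f"{years_until_worn_out(400, 10):.0f} years")
```

The model ignores write amplification and spare-area effects, but even a generous fudge factor leaves the NAND outliving the rest of the machine.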
For home use I do recommend an SSD for the OS, NOT for data. You can recover files from a damaged mechanical disk; that is generally not the case for an SSD, and that is their only drawback in my view.
The best use is inside notebooks:
- No heat from the HDD
- Silent!
- SSDs are perfect for mobile devices, as they are not affected by shocks during transport
- Performance: older notebook hard disks were slower than their desktop counterparts (lower RPM for better resilience against shocks)
2 TB over two years does sound about average to me. I'm running roughly three times that, but that's probably because I run my system 24 X 7; it's always doing work (protein folding) even when I am goofing off... I also have my swap, page (Windows 8.1 has both), and hibernation files on my C: partition.
Here are a couple of images. The first (if I get the order right during upload) shows my primary SSD, which mainly has the OS on it. You can see I'm closing in on 7 TB written in 24 months of service. But remember, I'm probably at the high end due to the work this machine is doing. Hmmm....it's more productive on its own than I am! Maybe I should have named it "HAL" or "M5". But both of them were ... eh.. killers... :roll:
The second image shows all of the disks in my workstation system. Yep, 6 SATA drives, and the 7th is my DVD burner. My motherboard supports lots of SATA connections, but most boards only support 4 or so, so you can't count on being able to stack 'em high inside with lots of little, cheap SSDs. Since "less expensive" SSDs were smaller when I built this system, I had to use more of them. You can see that I have squeezed most of my VST (music sound sample) libraries, as well as the OS and apps, onto SSDs. The VST libraries, once installed, are mostly read and only rarely written. Stuff that can be slower (downloads, MP3 library, music recording partition, backups, etc.) is all still on spinning hard drives.
I would rather have fewer (but higher capacity) SSDs, but when I built this system 2 years ago, the 256 GB drives were still in the mid-to-high $300 price range; to get cheaper you had to wait for sales. And 1TB SSDs were over $600, quite far out of the price range for mere mortals! The 960 GB Crucial M-500 drive in my system was added most recently (about a year ago), and only then because it too went on sale. :coolgrin:
Actually, I believe that Apple's tech is new (or rather "was"; it's already 18+ months old). Apple's Fusion drives don't work the same way as the older hybrid-cache drives. Where the Seagate system sees the SSD and HD as two drives, the Apple system sees them as one drive (with a faster part and a slower part); where the hybrid disks copy information to the SSD (leaving a copy on the HD), a Fusion drive moves it, so there's only one copy. On a Fusion drive, the most-used files are moved to the SSD, so if your operating system is the most used, that's what's moved there. The few test results I've seen show speeds closer to a straight SSD than to a hybrid drive, but things may have changed since the release of the Fusion. A search for "Fusion drives vs hybrid drives" would probably find more accurate and more recent results.
-- Walt Sterdan
Takeo, what an informative post; thank you for sharing your experiences.
I only have one item to add:
What you say is somewhat true. But you can structure your life so that it simply doesn't matter if you can't access the old drive. You can do that by taking frequent backups, and by automating them, you ensure that they actually get done!
For my OS and application drives, I take a fresh full image once or twice per week, and I take incrementals twice daily on weekdays, three times per day on weekends when I am using the system more often. I also run manual backups whenever I'm about to embark on (or have just finished up with) major system changes. Other drives, such as my VST libraries with low updates for example, might only get a full backup once per month, but they still get incremental backups taken at least on a weekly basis.
The MOST I will ever lose in most situations is 8 hours of work. If I were doing ANYTHING critical, such as business-related deliverables, then I'd set the frequency of my backups even higher: every 3 or 4 hours during my workday, and every 6 hours when I'm typically not working anyway.
If you take frequent backups and you protect them properly, then it would be highly unlikely that you'd need to struggle trying to get data off of a broken drive, and this applies equally for SSDs and HDDs. :coolsmile:
...and for us "mere mortals" on the other side of the tracks, the standard HDD will have to serve for a while longer, until prices for high-capacity SSDs drop to where 1 TB HDD prices were a couple of years ago.
I remember when a 32GB USB stick cost around $50-60. Now I am seeing 128GB sticks (bigger than the original HDD that came with my notebook) for about the same price, so yeah, the cost will come in line for the rest of us...eventually.
Yep, I got you, Kyoto Kid. Again, I'm not proposing that people spend beyond their means. I'm only suggesting that we should not fear write-limitations of SSDs. And all that other stuff, too. :cheese:
...well, by the time I'm ready, I imagine they will almost be "bullet-proof".
Currently, 500GB SSDs are almost affordable (about $200-250 on average).
I'm not counting SSDs out completely, as again I could really use one in the old notebook. Since I am no longer using it for 3D, I can get by with a much smaller drive than its current HDD. A 120 GB would probably be more than sufficient (it originally came with an 80 GB HDD).
That sounds like a reasonable plan.
Big Picture: I would like to see two things happen in the next 12 months (actually tomorrow would be nice, but I doubt it's an immediate possibility):
1. prices for 1TB SSDs to fall below $400 (for my workstation), without the need for MIR ("Mail in Rebate", which is just another phrase for "SpamBait").
2. capacities larger than 1TB to become available in a 2.5" (notebook/laptop) form-factor (for my laptop) at a price that falls in line with 1 above.
...and bigger capacities, not just by dribs and drabs. I won't be excited by a 1.2 TB option. I WOULD be excited by a 2.0 TB option, however, because that would allow me to move 2/3 of my laptop stuff from spinning drives to an SSD, and it might allow me to justify keeping it for another 3 years, and then maybe moving said large SSD to a new laptop. I just have to get better at efficient lighting and rendering so I could make the most of the 4 cores / 8 threads.
Oh, what the heck...
3. I would like to see Apple start making MacBook Pro with the ability to add standard form-factor SSDs. Yeah, and I use a lot of unicorn manure in my garden, too...
No, that is exactly it. The two disks are seen as one HDD. Intel introduced a smart caching tech in 2011, and Apple got their drive out in 2012. This review of the Seagate is dated 2010 http://www.storagereview.com/seagate_momentus_xt_review and the Momentus wasn't even the first hybrid drive. Apple just eventually pushed the disk's size and the "smart" algorithm a bit further. The 128 GB SSD should help a lot in the performance tests versus the only 4-8 GB of SLC on Seagate's drive. I believe that if the performance is close to an SSD's, it is because the SSD is mostly used and very little data goes directly to the mechanical drive.
I bought one of the first Seagate Momentus series in 2010. I was a bit disappointed at the time and haven't reconsidered hybrid drives since. Performance was better than mechanical disks, but they were not reliable enough for production use. We went the SSD route after lots of testing.
About the price, I'll just say that I find them pretty affordable nowadays. When I first considered SSD drives around 2005, the only manufacturer was M-TRON, which was making them mostly for the US Army, and a 64 GB drive was around $5,000.
If you only compare SSD prices to mechanical drives, you'll see a huge difference in cost per GB, and that won't change. But in my view, the goal is to get a fast and reliable drive for the OS, not to replace the data disks. So I only have a 128 GB SSD in my desktop and a 256 GB in the notebook. The mechanical drives have replaced DVDs for data and backup storage (buy cheap drives, fill them up, then store them).
Just for some of the naysayers...
Current design philosophy is for the boot drive to be an SSD and general file storage to be on a standard HD. Program files go on the SSD, saved project files on the standard drive.
End result? All the speed of SSDs, with cheaper mass storage for what you're actually saving.
...if you have all your content library/runtime files and scene files on the larger, slower HDD, loading/saving those files will still be subject to the HDD's seek speed, not the SSD's.
So boot up and opening the application may be fast, but accessing content and scene files will not.