Recommendations for a NAS...

Interesting read, but I agree with the basic tenet "data not stored in three separate places is only transient". I'm sure I mentioned it before: I had a RAID motherboard and my data drives mirrored... the PSU blew up and spiked the 12V rail, taking both mirrored drives with it.

There are codes that can correct multiple errors with only a small additional overhead, and I see no reason a RAID array could not be configured to overcome one completely failed drive plus a read error on another drive. Any data of significant importance should be mirrored elsewhere (geographically).
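For illustration, here's a minimal Python sketch (my own, not from the article) of the single-parity idea behind RAID5: the parity block is the XOR of the data blocks, so any one missing block can be rebuilt, but a second loss in the same stripe is unrecoverable - which is exactly what a second syndrome (RAID6) or a stronger code buys you.

Code:
# Minimal single-parity (RAID5-style) sketch: XOR parity over one stripe.
# Illustrative only - a real array stripes and rotates parity across disks.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

# Three data blocks plus one parity block = a four-"disk" stripe.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Lose any single block and it can be rebuilt from the survivors + parity.
lost = 1
survivors = [blk for i, blk in enumerate(data) if i != lost]
assert xor_blocks(survivors + [parity]) == data[lost]

# Lose a second block (e.g. a read error during rebuild) and there isn't
# enough information left - hence RAID6's second parity block.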
 
My home NAS runs OmniOS with disks in a RAID-Z (ZFS) configuration. Once you've used ZFS and seen the types of corruption that can and do occur on rotating rust, there's no looking back - my data just wouldn't seem safe under anything else (ok, maybe I'd be fairly content if it was under WAFL...)

From a technology perspective, RAID5 is broken and only made workable by things like large battery-backed caches. Even then, performance still takes a dive once the array is more than half full. RAID6 is only slightly better.
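To put a rough number on "broken", here's a back-of-envelope Python calculation (my own assumed figures: a 4x2TB array, and the usual consumer-drive spec of one unrecoverable read error per 1e14 bits read) for the chance of hitting a URE while rebuilding a degraded RAID5 array:

Code:
# Chance of at least one unrecoverable read error (URE) while rebuilding
# a degraded RAID5 array. Assumed figures: 4x2TB array, consumer drives
# specced at 1 URE per 1e14 bits read.

ure_per_bit = 1e-14
drive_tb = 2            # size of each surviving drive (assumption)
surviving_drives = 3    # all must be read in full to rebuild the 4th

bits_read = surviving_drives * drive_tb * 1e12 * 8
p_clean_rebuild = (1 - ure_per_bit) ** bits_read
print(f"P(URE during rebuild) = {1 - p_clean_rebuild:.0%}")  # ~38%

Which is why "one completely failed drive plus a read error on another drive" is not a far-fetched scenario.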
 
Interesting read, but I agree with the basic tenet "data not stored in three separate places is only transient".

Wait till you carelessly move some photos into a temp folder to edit them, and your backed-up files minus those photos get mirrored to two other places.

Same goes for any files: the mirror is only as good as the original. :frantic:
 
Incidentally, drives have a rated spin-up count (IIRC around 5,000), so having your drives spin up many times (say 10+) per day increases the risk of premature drive failure.
Given the number of times my HDR-FOX T2s start and stop, that got me slightly concerned, although NAS drives will be different from drives for the HDR.

The Seagate Barracuda 7200.10 spec sheet states 50,000 contact Start-Stops. Is that a measure of the same thing you are referring to?

The spec sheet for the ST1000VM002 (which I believe is a candidate replacement drive for the HDR-FOX T2) is specified differently. For the annualised failure rate of 0.55%, it uses a figure of "10,000 average motor start/stop cycles per year" among its various factors.
 
It was the figure of just 5,000 for the lifetime of an average HDD that made me initially concerned.

But to answer your question: it's mainly the recordings, reminders, webif restarts and OTA checks that have been causing the spin-ups.

The number of spin-ups is also increased for other reasons.
The most extreme example is that they are tuned to up to 3 different BBCB muxes and make 4 sample recordings a day from each mux. I have reminders to make sure each box then reverts to the main mux for AR.
As I have up to 24 of these extra recordings alone, if the figure were 5,000 cycles I would have needed a new disk about every 200 days just for those timers.
Thinking about it, I could probably redo those reminders so that they overlap the 3 main recording periods of the BBCB muxes, but 10,000 cycles a year is a bit more reasonable.
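If it helps, here's the arithmetic as a quick Python sketch (the 24/day figure is from my setup above; the ratings are just the spec-sheet numbers being discussed):

Code:
# Days until a drive's start/stop rating is used up by the extra timers.
# 24 spin-ups/day is my figure from above; ratings are spec-sheet numbers.

extra_spinups_per_day = 24

for rating in (5_000, 10_000, 50_000):
    days = rating / extra_spinups_per_day
    print(f"{rating:>6} cycles: ~{days:.0f} days (~{days / 365:.1f} years)")

# 5,000 / 24 ~= 208 days, i.e. "about every 200 days"; note that 24/day
# is ~8,760 cycles/year, so a 10,000/year allowance just about covers it.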
 
But it's close enough when estimating by orders of magnitude, and useful for mental calculations.
 
Based on my reading of the SMART data for my Humax drive:

Code:
Attribute: Start Stop Count
Raw value: 3293
Normalised value: 97
Threshold: 20

SMART won't consider the disk old until it has been power cycled 80,000 times.
(The normalised value looks to be 100 - int(<raw>/1000), which gives 97 for 3293 and reaches the threshold of 20 at 80,000.)
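A quick sanity check of that guess against the figures above:

Code:
# Guessed normalisation for this drive's Start Stop Count attribute:
# normalised = 100 - int(raw / 1000).

def normalised(raw):
    return 100 - raw // 1000

assert normalised(3293) == 97      # matches the reading above
assert normalised(80_000) == 20    # reaches the threshold at 80,000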

I installed this disk on 6th August 2012 according to the wiki page I created, so it's only been power cycled 3293 times since then. With this in mind, 2,500 cycles/year might be a useful rule of thumb too.
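Or, as a sketch (the read-out date is my assumption - roughly when I checked the count - so treat the result as approximate):

Code:
# Rough cycles/year from the install date and the raw SMART count.
# Install date from my wiki page; the as-of date is an assumption.

from datetime import date

install_date = date(2012, 8, 6)
as_of = date(2014, 1, 1)    # assumed read-out date
raw_count = 3293

years = (as_of - install_date).days / 365.25
print(f"~{raw_count / years:.0f} cycles/year")   # ~2,345/year for these dates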
 
It turns out that my memory is out of date :oops:
The figure I stated was the old rule of thumb. I've done some reading, and modern drives can have a start/stop cycle rating of (typically) 5,000 to 50,000, with most consumer-grade drives being in the region of 20k-50k. So that mostly moots my point (unless you've got something like a frequent 10-minute cycle going).
The rating is supposed to be a minimum spec too, so a drive will likely do way more than is stated in the literature.

I stopped paying close attention because most of my drives have been retired before reaching such limits.
-
I like the idea of ZFS, but the amount of memory required to run it properly looked a bit steep for my requirements.
 
Just checked the spec sheet for my drive and it's rated at 300K load/unload cycles (approximately equivalent to power cycles).
 
Many drives have a small ramp at the edge of the platters for the heads to park on; this provides a safe position when the drive is off and also allows a lower-power idle state (the heads increase the drag on the spinning platters a bit). Load/unload counts relate to this.
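If anyone wants to compare the two counters on their own drive, here's a small sketch using smartmontools (assumes smartctl is installed, the disk is /dev/sda and you run it with sufficient privileges - adjust to taste):

Code:
# Print Start_Stop_Count (attribute 4) and Load_Cycle_Count (attribute 193)
# from smartctl's attribute table. Assumes smartmontools and /dev/sda.

import subprocess

out = subprocess.run(
    ["smartctl", "-A", "/dev/sda"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    fields = line.split()
    if fields and fields[0] in ("4", "193"):   # ID# column
        print(f"{fields[1]}: {fields[-1]}")    # name: raw value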
 