Thanks to everybody for an interesting and technical debate.
I'm probably not going to take the SSHD route, after some thought; but it's an interesting technical question anyway...
I recently installed one of
these. £35 per TB seems quite good. This drive is very quiet in operation.
Thanks, that's the model range I was looking at (edit: if I go non-SSHD). The price per GB is similar at 4TB, so I'd probably go for that.
As far as I know it is an ordinary file of a fixed size, and will only change location if deleted, whereupon it will be recreated. [snip] the Seagate hybrid drives I looked at have a higher spin speed (7200rpm)
Luckily the 4TB Seagate hybrid drive is 5900rpm; only the smaller versions are 7200rpm (reference: Seagate docs).
If the SSD in a hybrid drive could hold the 20GB, 2-hour buffer of a Humax, it would wear out in a mere 22 years. Would a conventional drive last that long?
That was my thinking too. But then there are some practical considerations that I can think of:
1. File usage pattern
Hmm... A year is roughly 10k hours, running 24 hours a day that would mean 5000 write cycles but the beginning of the buffer would be used more as it restarts every channel change...
It's significant if the buffer re-starts at zero every time, yes. If it was purely a circular buffer then the theoretical calculations would work. But if the first portion of the file is being re-written far more frequently due to channel changes, then that matters.
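For reference, the 22-year figure quoted above drops out of a quick back-of-envelope calculation. This assumes the numbers from the posts above (a 20GB buffer holding 2 hours of video, so every cell wraps around once per 2 hours, and 100k-cycle cells):

```python
# Back-of-envelope SSD wear estimate for the Humax timeshift buffer.
# Assumptions (from the discussion above): the 20GB buffer holds 2 hours
# of video, so each cell in it is rewritten once every 2 hours, and the
# cells tolerate 100k write cycles.

ENDURANCE_CYCLES = 100_000   # per-cell write endurance (SLC-class assumption)
BUFFER_WRAP_HOURS = 2        # the 20GB buffer wraps around every 2 hours

hours_to_wear_out = ENDURANCE_CYCLES * BUFFER_WRAP_HOURS  # 200,000 hours
years = hours_to_wear_out / (24 * 365)                    # running 24hr/day

print(f"{years:.1f} years")  # ~22.8 years
```

A purely circular buffer spreads writes evenly, which is what makes this simple division valid; the channel-change question is whether the real access pattern breaks that assumption.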
Good info.
[begin edit]
...actually, I'm not sure this does matter after all. Say the TS file is at block 1234 on the spindle, and that gets cached to block 5678 on the SSD.
If you re-write spindle block 1234 again after a channel change, the SSD will likely cache it to a completely new block in its map, say 9012, because SSDs always write to newly erased blocks and not to previously used cells.
In other words: it's the write rate to the SSD that matters, which is perhaps 10Mbit/second for HD Freeview. The source block on the spindle isn't important because it doesn't map 1:1 to the SSD blocks.
In that respect, the SSD doesn't care whether you write 10Mbit/second to the same few spindle blocks repeatedly, or to completely different spindle blocks. Writing identical blocks on the spindle almost certainly doesn't mean re-writing identical blocks on the SSD.
I think..!
[end edit]
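The remapping argument in the edit above can be shown with a toy model. This is a hypothetical, massively simplified flash translation layer (a real SSD controller is far more complex), but it shows the key point: rewriting the *same* logical block over and over still lands on *different* physical blocks, so wear is spread:

```python
# Toy flash-translation-layer (FTL) sketch: every logical write goes to a
# freshly chosen physical block, so hammering logical block 1234 does NOT
# hammer one physical cell. Hypothetical simplification for illustration.
import itertools

class ToyFTL:
    def __init__(self, physical_blocks):
        # Naive round-robin stands in for real wear levelling.
        self.free = itertools.cycle(range(physical_blocks))
        self.mapping = {}                     # logical -> physical
        self.writes = [0] * physical_blocks   # per-physical-block write counts

    def write(self, logical_block):
        phys = next(self.free)                # always pick a fresh block
        self.mapping[logical_block] = phys
        self.writes[phys] += 1

ftl = ToyFTL(physical_blocks=8)
for _ in range(80):                           # rewrite the SAME logical block 80 times
    ftl.write(logical_block=1234)

print(ftl.writes)  # wear is even: [10, 10, 10, 10, 10, 10, 10, 10]
```

So, as argued above, it's the total write rate that determines wear, not which spindle blocks the writes target.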
2. SSD capacity
The Seagate SSHD referenced here only has an 8GB SSD portion. This means the buffer wouldn't fit, so the 22-year assessment based on a 20GB file would drop to more like 8 years. Somebody's done a good assessment of drive size vs lifetime, based on the principle that for the same amount of data you'll re-write a smaller drive more frequently, and has produced some lifespan graphs (reference).
3. Write cycle lifetime
The calculations by Mike0001 (and me) are based on 100k write cycles. That would be reasonable for SLC memory cells, but the Seagate drives are MLC (reference), whose endurance could be as high as 100k cycles but as low as 10k.
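Points 2 and 3 combine multiplicatively: under the same "buffer wraps every 2 hours" model, lifetime scales linearly with both the usable cache size and the cell endurance. A rough sketch (assuming, optimistically, that the cache is dedicated entirely to the buffer):

```python
# Rough lifetime sensitivity: lifetime is proportional to
# cache_size * endurance. Baseline from the thread: 20GB @ 100k cycles
# gives ~22.8 years of 24hr/day running.
HOURS_PER_YEAR = 24 * 365
WRITE_RATE_GB_PER_HOUR = 20 / 2   # 20GB buffer holds 2 hours of video

def lifetime_years(cache_gb, endurance_cycles):
    total_writable_gb = cache_gb * endurance_cycles
    return total_writable_gb / WRITE_RATE_GB_PER_HOUR / HOURS_PER_YEAR

print(lifetime_years(20, 100_000))  # ~22.8 years: full buffer, SLC-class cells
print(lifetime_years(8, 100_000))   # ~9.1 years: 8GB cache (the "more like 8" above)
print(lifetime_years(8, 10_000))    # ~0.9 years: 8GB cache with low-end MLC
```

The last line is where the "as low as 1 year" figure in the summary comes from.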
4. SSHD block caching algorithm
Ideally we want the cache to be used only for the TS buffer, so that there are the fewest possible re-writes of the same cells. This depends on the cache algorithm. E.g.
- Cache only frequently-used blocks - great: probably only the TS buffer will hit the cache, and if the TS buffer is smaller than the cache then it wouldn't have to overwrite itself immediately either.
- Cache more aggressively so the cache gets polluted with other data - bad, the TS buffer might not be fully cached at all, so the re-write will be even higher.
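To illustrate the first policy, here's a hypothetical sketch of a frequency-gated admission cache. This is invented purely for illustration (as noted below, we've no evidence what algorithm Seagate actually use): a block is only admitted to the SSD after several spindle hits, which keeps one-shot video reads out while letting the constantly re-hit TS buffer in:

```python
# Hypothetical cache admission sketch: only admit a block to the SSD cache
# after it has been accessed `threshold` times. One-shot video reads never
# qualify; the TS buffer, hit constantly, does.
from collections import Counter

class AdmissionCache:
    def __init__(self, capacity, threshold):
        self.capacity = capacity    # how many blocks the SSD cache holds
        self.threshold = threshold  # spindle hits required before admission
        self.hits = Counter()       # access count per block
        self.cached = set()

    def access(self, block):
        self.hits[block] += 1
        if (self.hits[block] >= self.threshold
                and block not in self.cached
                and len(self.cached) < self.capacity):
            self.cached.add(block)  # admit the proven-hot block

cache = AdmissionCache(capacity=4, threshold=3)
for _ in range(5):                  # TS buffer blocks: rewritten constantly
    for block in ("ts0", "ts1"):
        cache.access(block)
for block in ("movie0", "movie1"):  # ordinary video files: read once
    cache.access(block)

print(sorted(cache.cached))         # only the hot TS blocks were admitted
```

Under the second, more aggressive policy (admit on first access), the movie blocks would pollute the cache and evict parts of the buffer, which is the bad case described above.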
I'm familiar with the SSD cache algorithm used by one particular "auto tiered storage" architecture. It's not particularly relevant here because we've no evidence what algorithm Seagate use; but I've put it as a footnote here in case anybody is interested.
In summary:
The lifespan of a current-generation 8GB MLC-cache SSHD could be as low as 1 year for a low-endurance MLC device, assuming 24hr running.
Theoretically, if the following conditions were met for an SSHD's non-volatile cache memory:
- Significantly larger than the TS buffer (say >32GB).
- Cache algorithm favours the access pattern of the TS buffer and tends to reject normal video files.
- Memory cells rated for >100k re-write cycles (e.g. SLC).
...then an SSHD might cache the TS buffer and offer the possibility of spin-down when not recording or playing back.
But for now, the only drive on the market doesn't seem to fulfil those criteria.
Footnote:
The auto-tiered storage that I'm familiar with is an array of one 2TB SSD, caching against nnTB of 7200 spindles. The 2TB SSD is treated as follows:
- 2TB raw capacity, but "over-provisioned" by retaining 600GB for wear levelling and garbage collection, so 1.4TB remaining. This is done at a hardware level below the caching algorithm.
- Of the remaining 1.4TB, the caching algorithm keeps 10% free for new block writes - so new block writes can always be done to SSD rather than spindle.
- Of the remaining 90% for general use (approx 1.2TB) the algorithm prefers to fill the whole 1.2TB immediately - regardless of block usage frequency - rather than leave any of the 1.2TB unused.
- Once the 1.2TB is filled, it starts moving blocks between SSD and spindle based on access frequency (read or write).
- Aside: 64GB volatile cache RAM for read and write. Writes are mirrored to a duplicate device on separate power circuits, via its RAM cache, so effectively all writes are direct to RAM and then flushed to SSD.
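The capacity figures in those bullets work out as follows (just reproducing the arithmetic described above):

```python
# Capacity arithmetic for the tiered-storage SSD described in the footnote.
raw_tb = 2.0
over_provision_tb = 0.6   # reserved for wear levelling / garbage collection

usable_tb = raw_tb - over_provision_tb           # 1.4TB visible to the cache layer
new_write_reserve_tb = usable_tb * 0.10          # 10% kept free for new block writes
general_tb = usable_tb - new_write_reserve_tb    # ~1.26TB ("approx 1.2TB" above)

print(usable_tb, round(new_write_reserve_tb, 2), round(general_tb, 2))
```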
Thanks,
Andrew