disk errors, next steps?

outsidecave

New Member
Hi,

Do the errors below mean the disk is on its way out, or is it fixable using fixdisk or similar?

I hadn't logged into the web interface for a while and noted errors.

Any next steps appreciated.

Thanks,

Stu
disk.JPG
 
Should be fixable with fixdisk.
Worrying though with such low hours. But high number of stop/starts relative... - I still maintain it's better to leave it on, overall.
 
Ensure you have version 3.13 of the custom firmware installed before running fix-disk - it includes several improvements over earlier versions.
You might want the y option to auto-respond to prompts.
 
One of ours has similar usage (ie. only started for actual viewing or recording) and also threw up a batch of 197 & 198 (pending / offline-uncorrectable sector) flags. I used fix-disk (with help from the resources here), which sorted it out.
That was a couple of years ago and nothing else has popped up since (touch wood).
So I'd suggest you just fix them and carry on as before. If new ones become a more common occurrence then that may be a time to start worrying about it.
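For anyone wanting to check the same counters from a PC before (or after) running fix-disk, smartmontools' `smartctl -A /dev/sdX` lists them. Here's a minimal Python sketch of pulling the raw counts for attributes 5, 197 and 198 out of that output - the sample values are made up for illustration, not taken from the OP's screenshot:

```python
import re

# Sample lines in the format smartctl -A prints (values invented, not the OP's).
SAMPLE = """\
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       8
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       16
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       16
"""

def raw_counts(smart_output, ids=(5, 197, 198)):
    """Return {attribute_id: raw_value} for the requested SMART attributes."""
    counts = {}
    for line in smart_output.splitlines():
        m = re.match(
            r"\s*(\d+)\s+\S+\s+\S+\s+\d+\s+\d+\s+\d+\s+\S+\s+\S+\s+\S+\s+(\d+)",
            line,
        )
        if m and int(m.group(1)) in ids:
            counts[int(m.group(1))] = int(m.group(2))
    return counts

print(raw_counts(SAMPLE))
```

Non-zero raw values on 197/198 are the "batch of flags" being discussed; fix-disk is what actually clears them on the box itself.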
 
Should be fixable with fixdisk.
Worrying though with such low hours. But high number of stop/starts relative... - I still maintain it's better to leave it on, overall.
I'd agree with you about the high number of PCCs. The HDR-T2 here is set to come on at 5pm and off at midnight so the figures are 13625/3701 POH/PCC

I remember reading somewhere that the life of an HDD is, to an extent, determined by how many times it's powered up. The same HDD used in the HDR-T2 is used in Sky boxes and, as reported before, I've got a couple of pulls here with 50160/180 & 64800/112 POH/PCC figures and both passed Seagate's SeaTools long test with no errors.
 
Our box with the reallocated sectors has PCC 5969 and POH 8478, so not much different from the OP. It's probably a ratio you'd expect for a normally operated box - switched on for an average of about 1.5 hours viewing or recording and then off again.

The life vs cycles idea is another one that goes back to the very early machines and has been largely disproved nowadays, or at least modern discs are built to cope easily with cycling.
If you look at the life left column in the table in the OP you can see that the tool estimates 90% on hours but 94% on cycles (ours has 91 & 94), so presumably the disc manufacturer doesn't see that many PCC as being a problem.
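Those "life left" percentages look like a simple linear scale against some design figure. A hedged sketch of that idea - the design figures below are assumptions picked to land near the reported percentages (the 100,000-cycle figure is a guess in the same ballpark as the 50,000 contact start-stop spec quoted later in the thread), not anything actually read from the disk:

```python
def life_left(used, design):
    """Percent of rated life remaining on a simple linear scale."""
    return max(0.0, 100.0 * (1 - used / design))

# Assumed design figures (illustrative only):
DESIGN_POH = 10 * 8760   # ten years of 24/7 power-on hours
DESIGN_PCC = 100_000     # power cycles

# Figures for "our box" quoted above: POH 8478, PCC 5969
print(round(life_left(8478, DESIGN_POH)), round(life_left(5969, DESIGN_PCC)))
```

With those assumptions the cycle estimate matches the 94% in the thread and the hours estimate comes out close, which supports the reading that the manufacturer doesn't treat this many PCC as a problem.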
 
The life vs cycles idea is another one that goes back to the very early machines and has been largely disproved nowadays, or at least modern discs are built to cope easily with cycling.
I don't agree. The differential thermal expansion and contraction resulting from power on/off cycles will promote failure through stress fractures in circuit tracks, solder joints (especially lead-free), IC internal bond wires, etc. This is the usual cause of failure in any electronics assembly if you analyse the failure down to sub-component level, and anything which reduces these stresses will improve the service life.

My HDRs are on 24/7.
 
I don't agree.
Me neither.
My HDRs are on 24/7.
Likewise.
If you look at the life left column in the table in the OP you can see that the tool estimates 90% on hours but 94% on cycles (ours has 91 & 94), so presumably the disc manufacturer doesn't see that many PCC as being a problem.
But those percentages are irrelevant without considering the bad sectors.

I have seen figures from lots of disks over the years, not just in PVRs. Some of them have done over 100,000 hours and often less than 50 power cycles. They mostly have 0 bad sectors.
The ones on here having trouble with bad sectors all seem to have a fairly low POH/PCC ratio.

FWIW, here are my stats. (POH/PCC/Bad):
53139/469/0
33688/3347/0
13462/1470/0
These were all eBay purchases and the vast majority of the PCCs were from the previous owners.

A replacement (larger, nothing wrong with original) disk I fitted in another one for a relative: 13810/755/0 (this one is power cycled daily)

And on my current PC:
29304/17/0
26208/13/0
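The stats above work out to very different hours-per-cycle ratios, which is the measure behind the "low POH/PCC ratio" observation. A quick sketch, with the figures copied from the list:

```python
# POH/PCC pairs quoted above; hours per power cycle as a rough measure
# of how gently each disk has been cycled.
stats = [
    (53139, 469),
    (33688, 3347),
    (13462, 1470),
    (13810, 755),   # the relative's replacement disk, cycled daily
    (29304, 17),    # PC disks, rarely cycled
    (26208, 13),
]
for poh, pcc in stats:
    print(f"{poh}/{pcc}: {poh / pcc:.0f} hours per cycle")
```

The PVR disks sit around 9-18 hours per cycle while the PC disks are in the thousands - yet all of them show zero bad sectors, which is the point being made.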
 
As with so many things there are two schools of thought and few will be persuaded to change sides. No surprises there of course.

There seems to be no definitive proof that moderate cycling (a few times a day) has any significant effect on modern devices in terms of failure before an end of life for other reasons. (Most hard information relates to continuous operation in data centres.) Years ago many people got fed up with the time taken to boot a Windows PC, so left them on all the time. To counter the "waste" criticism of this practice they used the reduced life argument, from the days of valves, and I think that now forms a large part of the leave-it-on argument.

OTOH leaving something like a FOX running 24/7 uses about £25 a year in electricity, with a corresponding environmental impact, so for me that tips the balance firmly to the side of letting them power down to standby when possible.
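The £25-a-year figure is easy to sanity-check. The wattage and tariff below are my own assumptions, not numbers from the thread, but they show the sort of draw and price that would produce it:

```python
# Rough check of the ~£25/year running cost (assumed figures, not from the thread):
watts = 20            # assumed average draw of the box left on 24/7
pence_per_kwh = 14    # assumed electricity tariff

kwh_per_year = watts * 24 * 365 / 1000
cost_pounds = kwh_per_year * pence_per_kwh / 100
print(f"{kwh_per_year:.0f} kWh ≈ £{cost_pounds:.0f} per year")
```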
 
From the Seagate ST3500312CS product manual

The product will achieve an Annualized Failure Rate (AFR) of 0.55% when operated in an environment of
ambient air temperatures of 25°C. Operation at temperatures outside the specifications in Section 2.9 may
increase the product AFR. AFR is a population statistic that is not relevant to individual units.
AFR specifications are based on the following assumptions for consumer electronics environments:


• 8760 power-on-hours per year
• 10,000 average motor start/stop cycles per year
• Operations at nominal voltages


Nonrecoverable read errors: 1 per 10^15 bits read, max.
Annualized Failure Rate (AFR): 0.55% (nominal power, 25°C ambient temperature)
Contact start-stop cycles: 50,000 cycles (at nominal voltage and temperature, with 60 cycles per hour and a 50% duty cycle)


So if I am reading this correctly, it says that coming in and out of standby (start/stop cycles) is OK, but it also assumes POH = 8760 per year, which is 24/7 operation, i.e. PCC = 1.
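The arithmetic behind that reading can be checked directly. Note the PCC/POH figures below are the "our box" numbers from earlier in the thread (described there as not much different from the OP), and "start/stop cycles" in the spec are motor cycles rather than full power cycles:

```python
# Sanity-check the Seagate AFR assumptions against a box in this thread.
hours_per_year = 365 * 24        # the AFR spec's 8760 POH/year = powered 24/7
spec_cycles_per_year = 10_000    # assumed motor start/stop cycles per year

# "Our box" figures quoted earlier: PCC 5969 over POH 8478
# (just under a year of power-on time). Cycles per year of power-on time:
op_cycles_per_poh_year = 5969 / (8478 / hours_per_year)
print(hours_per_year, round(op_cycles_per_poh_year))
```

So even a box cycled several times a day racks up cycles at well under the spec's assumed rate, while the spec's POH assumption corresponds exactly to leaving it on all the time - both usage patterns sit inside the stated AFR conditions.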
 