NFS mount larger than 2TB

But hard is the default, so explicitly adding it to the mount options doesn't change anything. Or did you actually see it mounted as soft at any point? I agree, soft is generally to be avoided, for r/w.

I was talking about my problem, not your friend's mount options :) Network-shares-automount explicitly states 'soft' as the only mount option, which I had to change to stop my T2 from rebooting every time I tried to play anything.
 
Network-shares-automount explicitly states 'soft' as the only mount option, which I had to change to stop my T2 from rebooting every time I tried to play anything.

Oh! That I didn't realise; thanks. I wonder why it does that; does anyone know?

Oddly enough, my friend is seeing crashes, yet he's not using NSA and thus is getting the default hard option.

Am collecting more info…
 
NSA works fine as it is for me (and almost everyone else - otherwise there would be more complaints).
 
That's true, but I have tried NFS between HDRs and that worked (though it has characteristics such that I prefer an SMB share).
 
belated update: my friend updated to 3.12, but that made no difference to the problems.

He also noted that the Humax has no problem with a smaller NFS share/export from another of his Linux boxes, but clearly dislikes the 8TB share from his main server. Sharing a subset makes no difference.

I'll try to glean further info, and see if he wants to faff about with NSA.

thanks for the advice so far.
 
I don't regard NSA as a faff!

Sorry, I didn't mean that to sound harsh; as I recall reading, having to set up the wanted server exports via nested pseudo-dirs created via the on-telly/Web IF seemed a faff, compared with logging in via ssh, creating mount-points, and mounting manually from the cmdline.

I know that it's intended for people who aren't likely to do that, but from our (unusual) perspective, it seemed a faff. And of course I could have read it wrong :)

And I stick to my claim that it should (must) be possible to do this (i.e. NFS in general) from the cmdline, whether or not there exist pkgs to ease its administration, do auto-mounting, etc. But that does leave us out in the cold, if everyone else is doing it another way.
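
For the record, the manual route I have in mind is nothing more exotic than something like this over ssh on the Humax; the host, export path and mount-point below are made-up examples, not anyone's real setup:
Code:
mkdir -p /media/myserver                                    # create a mount point
mount -o ro 192.168.0.10:/export/archive /media/myserver    # read-only, hard by default
mount | grep nfs                                            # check the options it actually got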

However, at least one other user was seeing similar issues to ours, despite using NSA, until he made an apparently unrelated change to a mount option, so things aren't necessarily perfect with NSA either.
 
NSA is designed to be do-able through the on-screen interface, but it is much easier with the WebIF and a proper keyboard!
 
Well... things had been going rather swimmingly, but I've finally had an unexpected reboot! Just the one, mind you, and I'm going to put it down to the T2/NFS combo struggling with my sometimes sluggish LAN.

I should also point out I'm now trialling NSA with an SMB/CIFS share. I can do without any reboots and this would appear to be the more reliable route.
 
Is it really only 6 years? It seems like decades ...

Again, my thanks to those involved in the developments that have enabled these extra features, and to af123 for hosting the rs website. :thumbsup::thumbsup::thumbsup::thumbsup::thumbsup:
 
Hi, any update from your friend?

I'm accessing NFS shares from a 3TB RAID1 array (mdadm) on my Linux server via network-shares-automount and the USB option. The server's connected to the LAN via powerline ethernet adaptors, which give me a max speed of around 6MB/s, so not the fastest connection.
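
For reference, the server side is just a standard NFS export; the entry in /etc/exports is along these lines (the options shown are illustrative rather than exactly what I'm running):
Code:
# /etc/exports on the Linux server
/export/media-share   192.168.0.0/24(rw,sync,no_subtree_check)
# re-read the exports file after editing
exportfs -ra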

I was having lots of reboots of the T2 until I changed the NFS mount option of network-shares-automount from "soft" to "hard". I've seen it stated that "using soft mounts is not recommended as they can generate I/O errors in very congested networks or when using a very busy server". A "hard" mount, however, could freeze any related applications if the network becomes unreachable, so I also added the "intr" option, which "allows NFS requests to be interrupted if the server goes down or cannot be reached". Presumably this will solve any potential freezing problem but I don't actually know myself (until I've tested more thoroughly anyway).

Now, all this could just be coincidental and anecdotal evidence, but it's working for me so far. The change I made was in '/mod/sbin/scanmounts':
Code:
echo "mount -o hard,intr $host:/$folder /media/$name"
mount -o hard,intr "$host:/$folder" "/media/$name"

I'll report back after more testing.
As far as I can tell, if you make the above changes to /mod/sbin/scanmounts it makes no difference to the type of nfs mount: you still get a soft mount. You can view your current mount status with the following:
Code:
mount |grep nfs
Manually mounting as 'hard,intr' works though, so the script must just need some additional changes.
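
In other words, this sort of thing works when done by hand (host and paths are just placeholders for mine):
Code:
umount /media/share                                        # drop the existing soft mount first, if present
mount -o hard,intr 192.168.0.10:/export/share /media/share
mount | grep nfs                                           # now shows hard,intr in the options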
 
belated update: my friend updated to 3.12, but that made no difference to the problems.

He also noted that the Humax has no problem with a smaller NFS share/export from another of his Linux boxes, but clearly dislikes the 8TB share from his main server. Sharing a subset makes no difference.

I'll try to glean further info, and see if he wants to faff about with NSA.

thanks for the advice so far.
Using the 'mount -r' command (NFS) does create a hard mount by default, but it also mounts the folder as read-only. Could this be causing a problem? Using 'mount -o hard' instead creates a hard mount as read/write.
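
To illustrate (host and export are just placeholders):
Code:
mount -r 192.168.0.10:/export/share /media/share           # hard by default, but read-only (ro)
mount -o hard 192.168.0.10:/export/share /media/share      # hard and read/write (rw)
mount | grep nfs                                           # the (ro,...) / (rw,...) flags show which you got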
 
Using the 'mount -r' command (NFS) does create a hard mount by default, but it also mounts the folder as read-only. Could this be causing a problem? Using 'mount -o hard' instead creates a hard mount as read/write.

Oh! That's an excellent point, thanks.

I deliberately told him to mount it read-only, to avoid any possibility of the Humax mucking up his archive. But, of course, the Humax wants to write to the hmt file, at least, to store the last-viewed position, right? So that might be causing the reboot problem, at least.

But I don't see it relating to the main problems, e.g. not seeing files at all.
 
As far as I can tell, if you make the above changes to /mod/sbin/scanmounts it makes no difference to the type of nfs mount: you still get a soft mount. You can view your current mount status with the following:
Code:
mount |grep nfs
Manually mounting as 'hard,intr' works though, so the script must just need some additional changes.

I'm getting a 'hard,intr' mount here from my edited '/mod/sbin/scanmounts'!
Code:
192.168.0.12:/export/media-share on /media/omv type nfs (rw,vers=3,rsize=524288,wsize=524288,hard,intr,proto=tcp,timeo=70,retrans=3,sec=sys,addr=192.168.0.12)
 
I'm getting a 'hard,intr' mount here from my edited '/mod/sbin/scanmounts'!
Code:
192.168.0.12:/export/media-share on /media/omv type nfs (rw,vers=3,rsize=524288,wsize=524288,hard,intr,proto=tcp,timeo=70,retrans=3,sec=sys,addr=192.168.0.12)
I'll have another try. I wonder if it needs a reboot to make it stick?
EDIT:
You are correct, it works with network-shares-automount. It does need a reboot after making the changes to /mod/sbin/scanmounts. Previously I made the changes to the file before switching on the remote units and assumed it would work; it doesn't without the reboot.
 