network-shares-automount showing empty mounted directory

gebbly

Hi All,
I have network-shares-automount installed and it looks like it's mostly working. The mount is being made, and when I run the "mount" command I get:
Code:
192.168.1.11://srv/nfs/ext on /media/pc type nfs (rw,vers=3,rsize=1048576,wsize=1048576,soft,proto=tcp,timeo=70,retrans=3,sec=sys,addr=192.168.1.11)
192.168.1.11://srv/nfs/ext on /mnt/hd2/My\040Video/[Shares]\040\040\040\040\040\040\040Do\040not\040delete!/pc type nfs (rw,vers=3,rsize=1048576,wsize=1048576,soft,proto=tcp,timeo=70,retrans=3,sec=sys,addr=192.168.1.11)

However, doing "ls" on either of those directories shows no files. I created a test.txt file on the PC for testing. I am assuming I should be able to see it?
Looking at those two mounts, they are using NFSv3. At present I only have NFSv4 running on my Arch Linux box. Could that be the cause of the empty directories?
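If the version mismatch is the problem, I believe recent nfs-utils lets you toggle which versions nfsd serves via /etc/nfs.conf (option names as per nfs.conf(5) - I haven't actually tried this yet):
Code:
# /etc/nfs.conf on the Arch box
[nfsd]
# serve both v3 and v4 clients
vers3=y
vers4=y
followed by a "systemctl restart nfs-server".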

Has anyone got this working with an Arch box? Are there any plans for NFSv4 support?

Any help or suggestions at all gratefully accepted, as I have been banging my head against this for a while now.

****UPDATE****
One of the things I tried last night was removing the config directory for my mount and rebooting the box to remove any trace of the mounts. I then created a mount manually by creating /tmp/stuff and running:
Code:
mount -t nfs <server IP>:<server share dir> /tmp/stuff
I got exactly the same result, with "mount" reporting a successful NFSv3 mount and an empty directory.
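A further sanity check I can run from a full Linux client (showmount is part of nfs-utils - I don't think the Humax's busybox has it) is to ask the server what it thinks it is exporting:
Code:
showmount -e 192.168.1.11
# should list /srv/nfs/ext and which clients may mount it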

So it now looks like the plugin is fine and it's either a more general client setup issue or a problem with the server. Sorry to trouble you all. My problem continues :)
 
There are a couple of threads about NFS woes here and here. Skip to the last few pages of the long one for the more recent conversation.

I'm also on Arch and mount with this
Code:
mount -t nfs -o hard,intr,nolock,sync,vers=2 192.168.0.8:/mnt/hd2 /media/nfs-humax

NFS should negotiate the best working version it can if you don't specify anything. I thought the Humax was vers=2 anyway? I have a router which uses vers=3 and that's even flakier.

Anyway, to cut a long story short: mounting from my desktop works fine, but accessing the share on my NAS/server is a bit flaky (I will get around to installing Arch on it one day!) and sometimes the Humax reboots itself out of the blue while watching something. The Humax has some not-the-most-reliable network code, apparently. So... I stopped using NFS for network-shares-automount and use CIFS from my NAS/server.
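For reference, my CIFS mount is something along these lines (the share name and guest access here are from memory - adjust for your own NAS):
Code:
mount -t cifs //192.168.0.12/media-share /media/cifs-omv -o guest,iocharset=utf8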
 
Interesting to hear you are also on Arch Linux. Can I check with you about setting up the server side, please?

I have this in /etc/exports:
Code:
/srv/nfs/ext 192.168.1.201(ro,nohide)

I have started nfs-server and rpcbind.
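For completeness, the exports can be re-read and verified with exportfs after any edit:
Code:
exportfs -ra   # re-read /etc/exports and apply any changes
exportfs -v    # list the active exports with their effective options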

On the Humax side, if I try your command I get:
Code:
mount: mounting 192.168.1.11:/srv/nfs/ext on /tmp/pg failed: Protocol not supported
Switching to my command:
Code:
mount 192.168.1.11:/srv/nfs/ext /tmp/pg
it completes without error, and running "mount" then reports:
Code:
192.168.1.11:/srv/nfs/ext on /tmp/pg type nfs (rw,vers=3,rsize=1048576,wsize=1048576,hard,proto=tcp,timeo=70,retrans=3,sec=sys,addr=192.168.1.11)

Other than an entry in /etc/exports and starting nfs-server and rpcbind did you do anything else server side?
 
I've decided to log my investigations here as I work through things in case it helps anyone else or sparks any ideas. If anyone has any thoughts please do chip in.

On the ArchServer, /srv/nfs/ext is set to permissions rwxr-xr-x, so no write permission for group or others.
On the Humax I have a directory /tmp/pg with permissions rwxrwxrwx.
I notice that after the mount command /tmp/pg has changed permissions to rwxr-xr-x; unmounting sets them back to rwxrwxrwx.
This leads me to think that the mount command has been successful in so far as connecting to the remote directory.
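(For anyone repeating this, the quick way to watch the permissions flip is "ls -ld", which stats the mount point itself rather than listing its contents:)
Code:
ls -ld /tmp/pg    # before mount: drwxrwxrwx
mount 192.168.1.11:/srv/nfs/ext /tmp/pg
ls -ld /tmp/pg    # after mount: drwxr-xr-x - the server directory's modes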

running "rpcinfo -p | grep nfs" gives :
100003 3 tcp 2049 nfs
100003 4 tcp 2049 nfs
100227 3 tcp 2049 nfs_acl
100003 3 udp 2049 nfs
100227 3 udp 2049 nfs_acl

So it looks like the ArchServer is currently running the NFS service for v3 and v4 but not v2. That matches the error I got when I used chimeland's mount command, which specified vers=2.
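One thing my grep hides: NFSv3 also needs mountd registered with rpcbind, so that is worth checking as well:
Code:
rpcinfo -p | grep mountd
# expect entries along the lines of:
# 100005    3   udp  20048  mountd
# 100005    3   tcp  20048  mountd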

Given that the mount appears to work but I can't see any files, and given the mention of acl above, I shall investigate Access Control Lists and see if something is restricting access to the files in the directory.
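On the Arch side that check should just be getfacl (from the acl package); a listing with no entries beyond user/group/other would rule ACLs out. A '+' at the end of the mode string in "ls -l" output is the other giveaway that an ACL is set.
Code:
getfacl /srv/nfs/ext
getfacl /srv/nfs/ext/test.txt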
 
I know the ArchServer and Humax can see each other because they can ping each other. I also have gFTP running on the ArchServer and can successfully copy files from server to Humax, then verify from the CLI on the Humax that the files have copied. So there is definitely a connection between the two machines.
 
OK, next strange yet interesting discovery. A colleague at work asked whether the file was in the exported directory even if I can't see it with "ls". Given the previous discoveries about the local mount point changing permissions correctly, and the ArchServer reporting that the NFS service was definitely running, I decided to try an experiment. I changed the exports file on the ArchServer to export the directory as "rw". Then I discovered that on the Humax, after mounting the Arch server directory, "ls" shows nothing (not even the "." and ".." entries).
BUT if I do
Code:
cp /tmp/pg/test.txt /tmp
(because I know that /srv/nfs/ext/test.txt should exist in the remote exported directory) it works! Doing "ls" in /tmp then shows the copied file, and it has the correct contents.
The next thing to try was accessing the file while it was still in /tmp/pg. Doing "less /tmp/pg/test.txt" outputs the contents of the file correctly, even though ls won't show the file as existing!
Taking this a step further, on the Humax I did:
Code:
echo "test h" > /tmp/pg/h.txt
The command didn't report any errors, but doing "ls" still showed nothing in the directory. HOWEVER, back on the Arch server, looking in /srv/nfs/ext/ there was now a file h.txt containing "test h".

So clearly the mount has worked and files can be passed between the ArchServer and the Humax. The only mystery left is why "ls" doesn't output anything, including the "." and ".." entries.

I'm a lot happier at this point as the NFS mount is clearly almost there!
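If the "ls" mystery persists, my next idea is to capture the NFS traffic while running an ls and see what the server returns for the directory read - something like this on the Arch end (the interface name is a guess for my machine):
Code:
tcpdump -i eth0 -s 0 -w /tmp/nfs.pcap host 192.168.1.201 and port 2049
# run "ls /tmp/pg" on the Humax while this captures, then open
# nfs.pcap in wireshark and look at the READDIR/READDIRPLUS replies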
 
Here are my settings...
NAS/server (OMV running Debian) - '/etc/exports'
Code:
# /etc/exports: the access control list for filesystems which may be exported
#               to NFS clients.  See exports(5).
/export/raidroot 192.168.0.2(rw,no_subtree_check,secure)
/export/media-share 192.168.0.8(rw,no_subtree_check)

# NFSv4 - pseudo filesystem root
/export 192.168.0.2(ro,fsid=0,root_squash,no_subtree_check,hide)
/export 192.168.0.8(ro,fsid=0,root_squash,no_subtree_check,hide)
0.2 is my desktop, 0.8 the Humax.

Mount to server from desktop
Code:
mount -t nfs4 -o hard,intr,noatime 192.168.0.12:/export/raidroot /media/nfs-omv

Mount to Humax from desktop (as in first post)
Code:
mount -t nfs -o hard,intr,nolock,sync,vers=2 192.168.0.8:/mnt/hd2 /media/nfs-humax

I think you are right about NFS Humax being vers=3 but I've always had a rock solid connection from my desktop using vers=2.

From the threads I previously linked...
The mount to the server as reported by the Humax
Code:
192.168.0.12:/export/media-share on /media/omv type nfs (rw,vers=3,rsize=524288,wsize=524288,hard,intr,proto=tcp,timeo=70,retrans=3,sec=sys,addr=192.168.0.12)

I was having permission issues on the server (the details of which I can't recall now) that needed remedying. The Humax could 'see' the files but couldn't actually play any media.

Maybe not relevant but I've had major issues accessing my router (Asus rt-ac68u) via NFS (vers=3) from my desktop and adding the option 'fsid=0' appeared to help. I've since ditched it there however and I'm using a rock solid sshfs connection.

Edit: forgot to mention I had to edit the network-shares-automount file '/mod/sbin/scanmounts', as it mounts as 'soft' when it should be 'hard'. I also added 'intr' - see the links from my first post.
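For anyone wanting to script that change, something like the following should do it, though check what grep matches first - I'm quoting the option string from memory and the file may differ between package versions:
Code:
grep -n soft /mod/sbin/scanmounts
sed -i 's/soft/hard,intr/' /mod/sbin/scanmounts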
 
Thanks for the info Chimeland. I'll see if I can replicate your setup tonight.

A bit more tinkering last night showed that doing "ls" on the Humax in any directory with no files in it will not list the "." and ".." entries, unlike a normal Linux installation, so that explains why I don't see those two entries in this scenario. Clearly "ls" thinks the directory is empty, even though all the other commands correctly operate as though the content was there.

Maybe "ls" requires an extra service running on the server? Maybe when viewing the contents of a nfs directory I am supposed to use a command other than ls?

I have found that you can use commands like "dir", "vim" and "less" to examine the contents of a directory too. I shall try these tonight to confirm whether it is just "ls", or any command attempting to read the directory, that has problems. I also need to check that ACL thing in case I set some restriction up when I first installed Arch years back.
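Two more ways of enumerating a directory that I want to try, though I suspect busybox's find and the shell's glob expansion go through the same readdir() call that ls uses:
Code:
find /tmp/pg -maxdepth 1    # lists entries via readdir, like ls (if this busybox build has -maxdepth)
echo /tmp/pg/*              # shell glob - also readdir under the hood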

Regarding:
"Maybe not relevant but I've had major issues accessing my router (Asus rt-ac68u) via NFS (vers=3) from my desktop and adding the option 'fsid=0' appeared to help."
From my initial reading I thought "fsid=x" was a setting only used by NFSv4. When using NFSv3 or older, the mount command must specify the entire path on the server. NFSv4 adds the concept of an NFS root directory. If you had an /etc/exports like:
Code:
/export 192.168.0.2(ro,fsid=0)
/export/media 192.168.0.2(ro)
the fsid=0 sets /export as the NFS root.
When mounting the server with NFSv4, "/export" is seen as the root directory, so to mount /export you would do:
Code:
mount -t nfs -o vers=4 192.168.0.12:/ <local mount point>
and if you wanted to mount the server's /export/media directory you would do:
Code:
mount -t nfs -o vers=4 192.168.0.12:/media <local mount point>
It would be interesting, on your working setup, if you just ran "mount" on your machines to see what it reports as actually being mounted. It may have negotiated different settings with the server.
 
From my initial reading I thought "fsid=x" was a setting only used by NFSv4.

From what I read I thought it was the other way round :confused: Ahh, the joys of NFS mounting! You could be right of course, I've only spent the minimum amount of time I had to bumbling about with this.

Also, I've never used bind mounts with NFS when setting up the export myself, even though that's the recommended practice, so that may have a bearing.
 
I found a couple of links referring to fsid - here and here.

The first I've seen before, and it gave me the idea to add fsid to my router's exports (NFS 3) as the connection was dropping really quickly with stale file handles. In that case there are two shares, so the fsids are different (0 and 1).

To quote the second link...
As not all filesystems are stored on devices, and not all filesystems have UUIDs, it is sometimes necessary to explicitly tell NFS how to identify a filesystem. This is done with the fsid= option.

For NFSv4, there is a distinguished filesystem which is the root of all exported filesystem. This is specified with fsid=root or fsid=0 both of which mean exactly the same thing.
So 'fsid=0' means something specific to NFS 4, but fsid itself isn't specific to NFS 4, which is why it stopped the stale file handles with my router's NFS 3. BTW, I don't use fsid at all with the two NFS 4 mounts on my LAN to other computers running Arch.
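So for an NFS 3 server with two shares, the exports would look something like this (paths invented for illustration):
Code:
/mnt/share1 192.168.0.0/24(rw,no_subtree_check,fsid=0)
/mnt/share2 192.168.0.0/24(rw,no_subtree_check,fsid=1)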

I'm never too old to learn something new! And you should be an NFS whizz by the time you're done ;)
 
"but fsid itself isn't specific to NFS 4, which is why it stopped the stale file handles with my router's NFS 3"
Thanks, I hadn't realised this at all! Obviously I still have some learning to do. I certainly know more about NFS now than I did a week ago. When first trying to read up about this particular feature, I just saw a passing mention of an "NFS root". Putting that into Google is not helpful at all - just lots of discussion of root users and the root file system. :/

I remember reading about filesystems having UUIDs ages ago when setting up the Arch machine, so it makes sense to assign different fsid values, I guess.

[All of this just to try and copy all my recordings from a failing drive to a new drive in the box, sheesh. Determined to figure it out now though as I would like to plumb in a NAS box to the network at some point in the future.]
 
So I tried various commands to view the contents of a directory last night (all of which work on the PC): dir, less, vi, echo. They all report the directory as empty. Time to figure out whether it's the Humax or the server at fault. I have plugged in my old Pi and will try creating mounts on that tonight to see what behaviour I get.
 
Right then. Significant progress.

TL;DR - It's the Humax at fault.

I have just got my old Raspberry Pi up and running. I ran the command:
Code:
mount -t nfs -o vers=3,nolock 192.168.1.11:/srv/nfs/ext /tmp/pc
The Pi connected and I was immediately able to do "ls" and see the contents of the remote directory on the ArchServer. So I ran exactly the same command on the Humax and got the usual blank directory listing from "ls".

So it isn't the server, which is a relief. At least I have that set up correctly. Now that I have a working client I can compare and contrast. The first thing to look at is what has actually been mounted. If I run "mount" to list the mounts, I notice the entries for the mount to the Arch box are significantly different on the Pi and the Humax:
Humax
Code:
192.168.1.11:/srv/nfs/ext on /tmp/pc type nfs (rw,vers=3,rsize=1048576,wsize=1048576,hard,nolock,proto=tcp,timeo=70,retrans=3,sec=sys,addr=192.168.1.11)
Pi
Code:
192.168.1.11:/srv/nfs/ext on /tmp/pc type nfs (rw,vers=3,rsize=1048576,wsize=1048576,hard,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,addr=192.168.1.11,relatime,namlen=255,mountaddr=192.168.1.11,mountvers=3,mountport=20048,mountproto=udp,local_lock=all)

The Pi has some options set differently and some extra ones; I'll have to investigate those. I tried modifying the Pi's output to use as a mount command on the Humax (using the different timeout values etc.):
Code:
mount -t nfs -o rw,vers=3,rsize=1048576,wsize=1048576,hard,nolock,proto=tcp,timeo=600,retrans=2,relatime,namlen=255,mountvers=3,mountport=20048 192.168.1.11:/srv/nfs/ext /tmp/pc
Sadly this made no difference and the directory contents are still hidden.
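The other obvious difference to record is the kernel on each client, since the NFS client code lives in the kernel - plus /proc/mounts for the kernel's own view of each mount:
Code:
uname -a               # run on both the Pi and the Humax
grep nfs /proc/mounts  # the negotiated options, straight from the kernel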

I'm moving into very uncertain waters now, as it definitely looks like a problem specific to the Humax. My Linux knowledge is OK, but the inner workings of the Humax are still a bit of a mystery.
At this point I would be very grateful for any help or suggestions at all on how to investigate the Humax further.
 