> Is it possible to move recordings directly between two Humax Machines using SSH/SFTP (with dropbear-ssh and greenend-sftp)?

Yes, but it's quite slow. CPU-bound rather than I/O-bound.
> But I've no objections if someone recommends a much simpler (and hopefully faster) way of doing it.

Standard FTP is faster, but not quite as simple.
> Yes, but it's quite slow. CPU-bound rather than I/O-bound.

I realised that it would be slow, but I couldn't even get it to work. The problem seemed to be that the sftp command didn't seem to accept the private key in the .ssh/authorized_keys directory.
> Standard FTP is faster, but not quite as simple.

Is that using betaftpd (which I haven't used before), or some other FTP client-server?
> Is that using betaftpd

betaftpd is the CF alternative to the standard built-in FTP server, yes. To use it, the (flaky and restricted) standard service must be turned off.
> Is it possible to move recordings directly between two Humax Machines

...that's a different matter. To use the FTP server on each machine, you're going to be using a third device as an FTP client, pulling the recording down from one and then pushing it up to the other. I'm not aware of there being an FTP client within the CF.
> I'm not aware of there being an FTP client within the CF

There certainly is. I asked, and af123 obliged.
humax ~ # opkg list|grep ftp
betaftpd - 0.0.8pre17-5 - BetaFTPD is a single-threaded FTP daemon, making it very fast
greenend-sftp - 1.0 - SFTP Server plugin.
tnftp - 20151004-3 - tnftp is a port of the NetBSD FTP client to other systems.
> The problem seemed to be that the sftp command didn't seem to accept the private key in the .ssh/authorized_keys directory.

Don't have time or energy to try this now, but I just use scp or rsync rather than (S)FTP most of the time.
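For anyone wanting to try the same, here's a minimal sketch. It assumes an SSH daemon (dropbear) is running on the destination box, that scp/rsync are actually installed at both ends, and the address and filenames are made up:

```shell
# Copy one recording to the other box over SSH (address and paths are placeholders).
scp "/mnt/hd2/My Video/example.ts" root@192.168.1.20:"/mnt/hd2/My Video/"

# Or mirror a whole directory, keeping partial files so an interrupted
# transfer can resume instead of starting again.
rsync -av --partial "/mnt/hd2/My Video/" root@192.168.1.20:"/mnt/hd2/My Video/"
```

With keys set up (see later in the thread) neither command prompts for a password.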
> Is that using betaftpd (which I haven't used before), or some other FTP client-server?

Yes, betaftpd on one side, and tnftp (ftp) on the other.
> I just use scp or rsync

Presumably with a network file share mount?
> There certainly is. I asked, and af123 obliged.

Ooooh... just found tnftp in WebIF >> Package Management >> Available (with advanced packages enabled).
You could use wget or curl on the target machine to copy and decrypt the recordings at the same time. However, you would need to know the DLNA URL for each file, which would probably require writing a script if you have a largish number of recordings to process.

> Could they both be installed on both machines, and do the transfer either way?

I don't see why not - there will be no confusion, because one makes FTP requests and the other services them. The particular server won't be confused, because they are at separate IP addresses. Obviously you would need to operate tnftp on the command line, so it's not exactly user-friendly, whereas at least the same level of convenience can be achieved (on the command line) with a mounted external file system and the cp command.
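To flesh out the wget idea above as a sketch: every detail here is an assumption (the source address, the port, and especially the URL layout - real DLNA URLs on these boxes embed content IDs you would have to look up first), so the fetch itself is left commented out and the script just prints the URLs it would request:

```shell
#!/bin/sh
# Hypothetical source box - adjust for your own network. The URL scheme
# below is illustrative only, NOT the real DLNA path layout.
SRC=192.168.1.10
PORT=9000

for name in "Film One" "Film Two"; do
    # Build the (assumed) URL for this recording and show it.
    url="http://$SRC:$PORT/web/media/$name.ts"
    echo "$url"
    # wget -O "/mnt/hd2/My Video/$name.ts" "$url"   # uncomment to actually fetch
done
```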
> Presumably, they would also permit the moving of files outside the "My Video" structure?

There are many ways to do that. It's only the built-in Humax facilities (including the FTP server) which are restricted to My Video / My Music / My Photo and recognised media file types. You can do anything you like from the Telnet / Webshell command line using the Linux commands cp (copy), mv (move), ls (list directory), mkdir (create directory), etc.
> I have installed dropbear-ssh and greenend-sftp as part of improving my external security.

Why? Do you interact with your units via the external Internet?
> Recordings would need to be decrypted to be playable on the second humax machine.

Sorry, I've only just seen this. The recordings are already decrypted. I have now managed to move files using wget from one machine to another, but using the inbuilt FTP server protocols rather than DLNA. I was interested in finding out if there was a less fiddly way than that.
> Why? Do you interact with your units via the external Internet?

I wanted to use remote scheduling, which started me thinking more about security...

> I wanted to use remote scheduling, which started me thinking more about security...

Just to assure other readers: the RS service does not actively poll our HDR-FOXes, the rs package running on the HDR-FOX accesses RS periodically to upload status and check for instructions to download. This means all activity is initiated from the LAN side of the router and does not require any open ports for accesses initiated from the wild-west WAN side, and the router rejects incoming speculative probes.
> I'd still like to know whether the SSH/SFTP difficulty with key locations that I am encountering is something fundamental, or just down to my ignorance. Any insight would be appreciated.

I can't help you there; hopefully somebody with the right experience will come along and add to the collective wisdom.
> Just to assure other readers: the RS service does not actively poll our HDR-FOXes, the rs package running on the HDR-FOX accesses RS periodically to upload status and check for instructions to download. This means all activity is initiated from the LAN side of the router and does not require any open ports for accesses initiated from the wild-west WAN side, and the router rejects incoming speculative probes.

Absolutely - yes. Sorry that I could have been read otherwise. It was realising that rs had been implemented in such a way as to avoid any security concerns that caused me to think much more deeply about my whole set-up.
> The problem seemed to be that the sftp command didn't seem to accept the private key in the .ssh/authorized_keys directory.

The .ssh/authorized_keys file should contain public keys.
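For reference, the usual dropbear way round this: generate a keypair on the client, then install the public half into ~/.ssh/authorized_keys on the server. A sketch, assuming the dropbear tools are installed (dropbearkey -y prints the public half of an existing key):

```shell
# On the CLIENT: create a keypair where dbclient looks for it by default.
dropbearkey -t rsa -f ~/.ssh/id_dropbear

# Print the PUBLIC half (the "ssh-rsa AAAA..." line) - this, not the private
# key, is what gets appended to ~/.ssh/authorized_keys on the SERVER.
dropbearkey -y -f ~/.ssh/id_dropbear | grep "^ssh-rsa"
```

The private key never leaves the client machine.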
> Presumably with a network file share mount?

No. You just copy a file from source to destination (usually one is a local path and the other is in the form "a.b.c.d:/path", depending on the direction of travel). Of course you need to set up the keys and stuff first if you want an easy life thereafter.
> No. You just copy a file from source to destination (usually one is a local path and the other is in the form "a.b.c.d:/path", depending on the direction of travel).

OK, I didn't realise rsync (or whatever) had to be running at both ends.
> I didn't realise rsync (or whatever) had to be running at both ends.

Only necessary to have an SSH daemon at one end (dropbear in our case).
> The .ssh/authorized_keys file should contain public keys.

Oops! You're absolutely right, of course. Sorry about the slip - I'm still new to this game.
> Looks like there's a missing "mod" there for whatever reason. And I can't see how you set the identity file manually.

I think that's the problem that I ran into. I think it might be expressed as: "How can the private key generated by dropbear in the client Humax be accessed so that the SFTP extension can use it in its SSH connection to the Humax server?" Is that right?
#$HOME variable is used before /etc/passwd when expanding paths such as ~/.ssh/id_dropbear (for the client). Patch from Matt Robinson
/* A default argument for dbclient -i <privatekey>.
* Homedir is prepended if path begins with ~/
*/
#define DROPBEAR_DEFAULT_CLI_AUTHKEY "~/.ssh/id_dropbear"
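So, if that default build option is in effect, the client should pick up ~/.ssh/id_dropbear on its own, and a key can also be named explicitly with dbclient's real -i option. A sketch with a placeholder address (whether the sftp front end passes such an option through to dbclient is exactly the open question here):

```shell
# dbclient is dropbear's SSH client; -i names the private key file explicitly,
# overriding the DROPBEAR_DEFAULT_CLI_AUTHKEY default shown above.
dbclient -i ~/.ssh/id_dropbear root@192.168.1.20
```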