Telnet host disconnect

sceedy

Member
The story so far :-

I am experimenting with cross-compiled programs for the Humax. I used telnet to have a "poke around", saw there was a complex ecosystem, and so have stuck to static programs. I have been uploading to "My Photo" (where else do you put snapshots?) and had no problem with simple programs (e.g. deliberately producing segmentation faults, illegal instruction traps, bus errors etc. to confirm the errors did not interfere with any recording/playback occurring at the time).

When I started creating programs with a non-trivial execution time, I started to get disconnects. I read the telnet protocol specification and found it is the client that is responsible for keeping the link alive. HyperTerminal is bundled with XP, so I looked at its specs and found it is incapable of doing anything to keep the connection alive.

I searched the web for alternatives and PuTTY was highly recommended for its keep-alive capabilities, as well as being listed as useful in the wiki. The web tutorials pointed to a tick box and a time (I set it to 60 seconds; the PuTTY configuration is attached in case the solution is that I just need new glasses, i.e. I missed something).

I started using PuTTY and an alert came up saying that the host had disconnected. Next step: uninstall the third-party firewall and disable the Windows firewall; no change. My trusty network analyser told me the disconnect was being sent to the PC. My network is done "the hard way", i.e. every device has its IP address set manually and they are all connected together by a 128-port switch. I wondered if the switch had developed a fault, so I found a crossover Cat 5 cable and connected directly; no change.

The telnet prompt is not the standard Linux prompt, so I wondered whether telnet itself has limit(s)?

I have tried looking in the standard Linux places for the limits and have not been able to find them. Any reasonable suggestions on a way forward will be welcome, as I am doing this out of intellectual curiosity.
 

Attachments

  • Putty.zip
    2.3 KB · Views: 4

Black Hole

May contain traces of nut
When I started creating programs with a non-trivial execution time, I started to get disconnects.
As an alternative to trying to keep a Telnet session alive, you could:

1. Use webshell https://hummy.tv/forum/threads/webshell-command-line-access-from-web-browser.6907/

2. Execute long processes within an abduco session https://hummy.tv/forum/threads/yout...other-video-platforms.8462/page-3#post-120326
abduco

A problem with running a long process from a Telnet session or the webshell package (a command terminal available as a web page - access via WebIF >> Diagnostics >> Command Line) is that the session inconveniently drops out and terminates any active processes if you take the focus away to do something else. Then you have to reconnect and restart the youtube-dl command, which then has to do its initial thinking before the download resumes...

The command abduco is already available to support other long processes such as fix-disk, and can be used here to create a protected command session which carries on regardless of the terminal session dropping out. So, before launching the main download, use the following to create a protected session:
Code:
# abduco -A yt
This creates a protected session called "yt" (call it what you like), and the next command prompt is within that session. Then start your youtube-dl process as described above. You can now "do other stuff" and the session will carry on regardless - to break out of the session and do other things on the command line, use Ctrl+\.

To come back later, use the same abduco line again. This time, as there already exists a session called "yt", the command connects to it rather than starting a new one.

To close the session (from within the session), type "exit".

To inspect open abduco sessions, type "abduco" (with no other parameters).
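
Putting the above together, a typical protected session might look like this (the session name "yt" and the job name are just examples; the # comments are notes, not part of the commands):
Code:
# abduco -A yt         # create (or re-attach to) a session named "yt"
# ./my_long_job        # run the long process inside the protected session
                       # press Ctrl+\ to detach - the job carries on regardless
# abduco               # list open abduco sessions
# abduco -A yt         # later: re-attach to the existing "yt" session
# exit                 # typed inside the session: ends it for good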
 

MymsMan

Ad detector
I regularly run long processes (>1 hr) using PuTTY without problems.

You can compile many programs directly on the Humax without needing to resort to the cross compiler.
But apart from some utilities, most of the webif is written in Jim/Tcl rather than compiled code.

Why are you uploading to My Photos? Most of the webif and packages live in the /mod directory, especially /mod/bin and /mod/webif, and you can create your own directories under /mod.
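
For example (the directory name below is just an illustration), a binary could live in its own directory under /mod instead of My Photos:
Code:
# mkdir -p /mod/sceedy/bin      # your own area on the /mod partition
# cp myprog /mod/sceedy/bin/    # copy the uploaded binary from wherever it landed
# chmod +x /mod/sceedy/bin/myprog
# /mod/sceedy/bin/myprog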
 
OP

sceedy

Member
Status update :-

Black Hole's suggestion of abduco allowed me to get to the root of my original problem. It let things run for longer, so I could determine that I was interfering with the box's normal operation. Once I had the "doh" moment of "don't think PC, think embedded", I made my code "nice"r (as per the kernel call). The default priority is 10; I was thinking of 18 or 19, but something is limiting the priority to 17. (Note to any non-technical reader: the lower the Linux priority value, the more resources a process gets, i.e. the other way round to intuition.) Once I had done this I no longer got telnet disconnects. I could turn off PuTTY's keep-alive and even go back to HyperTerminal without any problems.
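
For anyone following along, the same effect can be had from the shell rather than by calling nice() inside the program (the program name and PID below are placeholders, and as noted the box seems to cap the value at 17):
Code:
# nice -n 17 ./myprog &     # start the job at niceness 17
# renice -n 17 -p 1234      # or adjust an already-running process by PID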

Extreme laziness on my part means that I have not confirmed this is also the case for webshell. Around 3 years ago I created 2 USB sticks, one labelled "custom firmware" and the other labelled "packages". Every time I get a new box I fit a 4TB drive, load the firmware and packages, soak test for a week and then ship it to the next family member on the list (my experience is that approximately 19 out of 20 "spares or repair" purchases work fine with a new drive). When I tried to install webshell on a box with the old package set I got a missing dependency in the log (attached).

MartinLiddle's question is probably relevant now. After 15 to 20 minutes with both cores at between 99.0% and 99.5% load, the front panel's LEDs stop scrolling the channel name left (twice) and the display then changes to "CRASH – Wait", "Reboot in 10s" … I have attached a PuTTY log in case I have not turned on crash dump. On reboot, the usual places only hold logs covering the time since the reboot. I am not getting a crash dump, so I believe I have upset the watchdog (given the LED behaviour)! This is strengthened by the program running to completion under QEMU running mipsel Debian Etch (also Linux kernel version 2.6.18) and producing the correct results. I have tried the obvious fix of "yield"ing on every iteration of the CPU-hungry loops. The help I am after is the watchdog limits (I have probably exceeded the average CPU usage limit).
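
For what it is worth, this is the sort of thing I have been poking at to see whether a kernel watchdog driver is even exposed (purely illustrative; the box may not provide any of these):
Code:
# ls -l /dev/watchdog*                      # is a watchdog device node present?
# dmesg | grep -i -E 'watchdog|wdt'         # any watchdog driver messages since boot?
# cat /proc/sys/kernel/panic 2>/dev/null    # reboot-on-panic timeout, if one is set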

I have not ignored MymsMan's comment. There is another thread about cross-compiling, and I agree with its conclusion: compile on target if possible; if you need scripting tools not available on target, use the cross-compilation tools provided by Humax; anything else is "interesting" (in the sense of the old Chinese curse "may you live in interesting times"). The code I wish to write needs fixes that are in gcc 9.2, and the 9.3 fixes would be nice to have. The Debian 11 cross compiler is "pencilled in" at gcc 9.3 or 10.1, so I am experimenting on Debian 10 (gcc 8.3.0) to try and identify the problems.

First problem: the default compiler libraries are built for revision 2 of the MIPS32 architecture, while the target processor is revision 1. Rebuilding the compiler libraries is time consuming but not difficult.

Second problem: later gcc requires a later glibc or uClibc. Debian 11 has "pencilled in" libc 2.32. Looking at the libc documentation, the minimum-kernel-version rationale seems reasonable for the 64-bit ABI but not the 32-bit ABI, so I have been re-instating compatibility code from previous libc versions to produce a deviant (technical term for an unauthorised and unsupported variant) that is good enough to build gdb 9.2 (I don't want to use "dwarf 4" as an expletive) and my program. I believe I have dealt with all the kernel calls, but I may have made a mistake.

Third problem: Debian 10 has libc 2.28, so you need libm from 2.28 for binary compatibility with precompiled libraries.

If requested, I can publish (I only did static builds) the recompiled compiler libraries, the libc deviant sources and compiled version, the recompiled libm, binaries for gdb 9.2 (stand-alone and server) and the most recent "build from source" document. However, I would prefer to publish after solving all the target-specific problems with my program.
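
Going back to the first problem above, the sort of invocation I mean is below (the toolchain prefix and file names are just examples; the important parts are targeting MIPS32 revision 1 and linking statically):
Code:
# build a static binary for the box's little-endian MIPS32 release 1 CPU
mipsel-linux-gnu-gcc -O2 -march=mips32 -mabi=32 -static -o myprog myprog.c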

In case the last paragraph confused things: I want information on the watchdog limits so I can avoid triggering them. I don't want to just turn the watchdog off.
 

Attachments

  • opkg_install.log
    305 bytes · Views: 4
  • putty.txt
    2.2 KB · Views: 4

/df

Active Member
If you want to max out the box for multimedia processing, you would do better to use Maintenance mode; otherwise the settop program will need around 20% of the CPUs. Having said that, it's not an obvious target unless you can somehow interface to the built-in multimedia processor.

Code that relies on a specific compiler version sounds fragile to me.

Webshell wants the libutil package, but that appears to replicate a library already in the CF for recent versions; perhaps the dependency is only needed for older CF versions? Your missing dependency shouldn't matter.
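
If that's right, one way to prove it (assuming the box's opkg understands the standard option) is to install webshell while ignoring the unmet dependency:
Code:
# opkg --force-depends install webshell    # ignore the missing libutil dependency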
 
OP
S

sceedy

Member
I was looking back at the post to apologise to Black Hole for confusing him. The watchdog is a generic Linux problem, so it should not really be on this forum as it is not custom firmware specific. Apparently I need to start by pulling config.gz (usually in /proc) …
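
For the record, this is what I mean by pulling config.gz (it assumes the kernel was built with /proc/config.gz support; if not, the file simply won't be there):
Code:
# zcat /proc/config.gz | grep -i -E 'WATCHDOG|WDT'    # which watchdog options were built in?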

The first Humax document I stumbled across (Document 7405-3HDM00-R "Preliminary Hardware Data Module") refers to 7405-5HDM "CPU Interface", which should contain the MIPS coprocessor 2 interface to the multimedia processor.

The code generation fixes in gcc 9.2 remove the need to compile, with optimisation turned off, those functions whose intended processing would otherwise be optimised out; the 9.3 fixes go further and optimise the problem constructs correctly.
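
To spell out the workaround needed on older compilers (file names are illustrative): the affected functions get compiled on their own without optimisation and the rest of the program normally, e.g.
Code:
gcc -O0 -c awkward_funcs.c       # functions the optimiser would otherwise gut
gcc -O2 -c everything_else.c
gcc -o myprog awkward_funcs.o everything_else.o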

Thanks for identifying the dependency.
 