[unencrypt] Decrypt-in-place

From the ps listing above it is not possible to determine which files are being processed by the shells ('ps -w' would have shown this). However, it appears that the script is checking for duplicate sub-shells but only at startup. Would it be better to check for duplicate parents to ensure only one can run at a time?

A simpler alternative might be to create a lock file in /tmp and just exit the parent shell if it exists.
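Something along these lines is all I had in mind (just a rough sketch; the lock file name is only illustrative):
Code:
# Illustrative sketch - bail out of the parent shell if another copy is running
LOCK=/tmp/unencrypt.lock
if [ -e "$LOCK" ]; then
    echo "Unencrypt process already running - exiting"
    exit 0
fi
touch "$LOCK"
trap 'rm -f "$LOCK"' EXIT
# ... rest of the parent shell ...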

I spent a fair amount of time experimenting with various ps options to make this work both from the command line and cron, but if you can suggest something that should work, I'd love to do it the proper way as the simple ps test I have at the moment is a blatant bodge.

I don't like lock files as they get left around if the box is switched off and the process doesn't exit cleanly. But then again, /tmp is outside of /mod and isn't likely to persist beyond a reboot, so I'll have a look into that.
 
Unfortunately I have moved the remains of the 6.2Gig files out of my working directory "My Video/archive" (it had an encrypted TS file with no sidecar files), and as time has moved on all I get in the log now is this (the Telnet view and webif are the same) :-
Code:
humax# cd /mod/tmp
humax# cat unencrypt.log
pscount = 4
Unencrypt process already running - exiting
humax#

Does 'ps' show any unencrypt processes? If so, could you do a 'ps -w', record the result and then kill them and tell me if it starts working again.
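For example (substitute whatever pid ps actually shows for the unencrypt shells):
Code:
humax# ps -w | grep unencrypt
humax# kill <pid>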
 
Just to report everything working fine here (no >4GB files yet though)

Thinking aloud - would it be feasible to use a routine along the lines of "nicesplice" to split larger files into "part 1", Part 2..... etc. - either before or after unencrypt, depending where the problem lies.

Obviously if it is just the Humax DLNA client which is the stumbling block then this could be done after unencrypt has done its stuff?

As I say, just thinking aloud really - way beyond my capabilities!
 
I spent a fair amount of time experimenting with various ps options to make this work both from the command line and cron, but if you can suggest something that should work, I'd love to do it the proper way as the simple ps test I have at the moment is a blatant bodge.
I think the alternatives are to use a lock file in the main shell (see below) or to move the current test into the for loop. The current test isn't really a proper mutex so could potentially fail to block due to a race condition.
I don't like lock files as they get left around if the box is switched off and the process doesn't exit cleanly. But then again, /tmp is outside of /mod and isn't likely to persist beyond a reboot, so I'll have a look into that.
/tmp is mounted as a tmpfs in RAM, so it will never persist after a reboot. If you are really worried about this you could write the process id of the shell into the lock file, then check whether that process is still running at startup.
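For example, something like this at the top of the parent script (just a sketch, the path is illustrative):
Code:
# Illustrative sketch - treat the lock as stale if its pid is no longer running
PIDFILE=/tmp/unencrypt.pid
if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
    echo "Unencrypt process already running - exiting"
    exit 0
fi
echo $$ > "$PIDFILE"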
 
Thinking aloud - would it be feasible to use a routine along the lines of "nicesplice" to split larger files into "part 1", Part 2..
I did it this afternoon using nicesplice.
I put a bookmark about halfway through at a scene change and cropped the front section. I renamed that, moved the original back to videos, put a bookmark at the beginning and cropped from the original bookmark, which is retained in the original.
I renamed the first cut so as not to have two files in the same folder with the same name. I just doubled up the first letter.
Time consuming but not difficult.
 
Just to report everything working fine here (no >4GB files yet though)

Thinking aloud - would it be feasible to use a routine along the lines of "nicesplice" to split larger files into "part 1", Part 2..... etc. - either before or after unencrypt, depending where the problem lies.

Obviously if it is just the Humax DLNA client which is the stumbling block then this could be done after unencrypt has done its stuff?

As I say, just thinking aloud really - way beyond my capabilities!

Don't worry - we showed today that the server isn't affected and we have already worked around the client side of things by ensuring that we use a 64-bit version of wget.
 
Sam:- I tried a different 6Gig+ file and it also failed. I was wondering if it would be possible to turn on / add any debug to the program to find out what is happening?
It always completes the count up to 100% of the wget, but I have noticed that at 100% the TV video freezes for half a second or so. So something strange is happening around there: when the next cron comes along, the encrypted flag is still set, so it has another attempt and I end up with either a 1234.TS file or a propername.TS with no sidecar files, but always still encrypted.
EDIT:- I have tried the same file manually with cron off and it worked. Only snag now is I don't have an encrypted file to use for testing. Ho-Hum
 
Ezra - did the download halt at 100% and then not get any further, or was there any more text in the log? (best viewed using the diagnostics page in webif as it's then easy to cut and paste)

It's fascinating that the problem only occurs when run from cron - that's usually a path problem, but path problems aren't going to arise only on 6G files, so I'm a bit bemused.

I've already built a debugging option into unencrypt, but the extra information that is printed out isn't going to help with this problem.

As I have been typing, I have been downloading my own 6G test file (thank you, BBC HD Preview) using the version of wget in busybox, and that now seems to be 64-bit safe, as af123 said. I don't think that your problem is related to the old 4G limit, but if wget isn't exiting then we can now swap versions to compare.

Did you say that the sidecar files are disappearing? That's very strange as the only sidecar file that is touched is an edit to the .hmt. There is something very strange going on here!
 
In another thread, Black Hole asked if unencrypt could do without auto-unprotect by doing that job itself. I agree that it would be a good thing to do, but the daemon is a binary so I can't see what it is actually doing.

At some point, I will do some digging, but it's fine for the moment and I like the unix ethos of "do one thing and do it well" so I'm happy to rely on auto-unprotect for the moment :)
 
In another thread, Black Hole asked if unencrypt could do without auto-unprotect. I agree that it would be a good thing to do but the daemon is a binary so I can't see what it is actually doing.

At some point, I will do some digging, but it's fine for the moment and I like the unix ethos of "do one thing and do it well" so I'm happy to rely on auto-unprotect for the moment :)

Auto unprotect is updating a flag in the .hmt sidecar file and also updating the DLNA server index database so that the recording will be served decrypted. The clever part of auto-unprotect is that it hooks into the kernel inotify framework so that it notices when recordings complete, and it then does the work. It also monitors DLNA server activity to pick a good time to do that side of things.

unencrypt doesn't need auto-unprotect for SD recordings anyway and it only needs the DLNA side of things for HD.. I'd leave it to auto-unprotect as xyz321 has done a great job in making it robust.
 
I've been thinking about ways to generalise all of these functions and have an idea..

A simple daemon process that just watches for when recordings complete (basically a variant of the current auto-unprotectd). Packages could register with the daemon so that they are called on completion and they should be able to register with a priority so there is some control over order of execution. I can envisage at least auto-unprotect, auto-decrypt, audio-extract and flatten being able to make use of this - the framework could call them in sequence on every completed recording (they could just exit if they aren't interested in it). It would keep the 'recording completed' logic central and help avoid race conditions. Without something like this, the situation is only going to get worse as more packages come along.

I'd be happy to build the framework and help people modify their packages to work with it. I'd need help from xyz321 for the logic he uses in auto-unprotectd though rather than re-invent the wheel.
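The package-facing side could be as simple as a directory of hooks that the daemon runs in name order whenever a recording finishes - the names and paths below are completely made up, just to show the shape of it:
Code:
# Hypothetical dispatcher inside the daemon (not existing code)
HOOKDIR=/mod/etc/recording-complete.d   # e.g. 10-auto-unprotect, 20-auto-decrypt, ...
recording=$1                            # full path of the completed recording
for hook in "$HOOKDIR"/*; do
    [ -x "$hook" ] || continue
    "$hook" "$recording" || echo "hook $hook failed for $recording"
done
Sorting by file name gives the priority ordering, and a hook that isn't interested in a particular recording can just exit 0.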
 
I've been thinking about ways to generalise all of these functions and have an idea..

A simple daemon process that just watches for when recordings complete (basically a variant of the current auto-unprotectd). Packages could register with the daemon so that they are called on completion and they should be able to register with a priority so there is some control over order of execution. I can envisage at least auto-unprotect, auto-decrypt, audio-extract and flatten being able to make use of this - the framework could call them in sequence on every completed recording (they could just exit if they aren't interested in it). It would keep the 'recording completed' logic central and help avoid race conditions. Without something like this, the situation is only going to get worse as more packages come along.

I'd be happy to build the framework and help people modify their packages to work with it. I'd need help from xyz321 for the logic he uses in auto-unprotectd though rather than re-invent the wheel.

I have spent a fair amount of time trying to work out how to get processes to play fair, to avoid race conditions and to prevent processes from getting in each other's way, and leveraging auto-unprotect sounds like a great way to do it.
 
I'd be happy to build the framework and help people modify their packages to work with it. I'd need help from xyz321 for the logic he uses in auto-unprotectd though rather than re-invent the wheel.
I think it is a good idea but there may be a problem using the existing auto-unprotect logic. I will send you the details later...
 
Ezra - did the download halt at 100% and then not get any further, or was there any more text in the log? (best viewed using the diagnostics page in webif as it's then easy to cut and paste)

It's fascinating that the problem only occurs when run from cron - that's usually a path problem, but path problems aren't going to arise only on 6G files, so I'm a bit bemused.

I've already built a debugging option into unencrypt, but the extra information that is printed out isn't going to help with this problem.

As I have been typing, I have been downloading my own 6G test file (thank you, BBC HD Preview) using the version of wget in busybox, and that now seems to be 64-bit safe, as af123 said. I don't think that your problem is related to the old 4G limit, but if wget isn't exiting then we can now swap versions to compare.

Did you say that the sidecar files are disappearing? That's very strange as the only sidecar file that is touched is an edit to the .hmt. There is something very strange going on here!

At 100% the screen freezes for half a second and then I get the humax# prompt (no more text). I am then left with either a 1234.TS file and no sidecars, or oldname.TS that is not unencrypted. I recorded another test file of 6.8G (and backed it up this time) and ran it without cron successfully. Then the next time I made cron run at 29 and 59 min instead of 0,15,30,45 and that also completed. I now only get pscount=2 instead of 3 or 4, so that is good. However it still freezes the video very briefly at 100%. Here is the log from a successful cron 29,59 run on the TEST1 file :-
Code:
humax# cat unencrypt.log
pscount = 2
Processing "My Video/archive/Dirk Gently_20110520_2102", Media ID is 1806
Processing "My Video/archive/The Space Shuttle_ A Horizon Guide_20110417_0059", Media ID is 1821
Processing "My Video/archive/The Joy of Stats_20110713_1958", Media ID is 1834
Processing "My Video/archive/Space Shuttle_ The Final Mission_20110724_2058", Media ID is 1836
Processing "My Video/archive/The Secrets of Scott's Hut DD5_1_20110418_0016", Media ID is 1849
Processing "My Video/archive/The Space Shuttle's Last Flight_20110726_0123", Media ID is 1870
Processing "My Video/archive/TEST1_20111211_1652", Media ID is 1874
Getting http://127.0.0.1:9000/web/media/1874.TS
Connecting to 127.0.0.1:9000 (127.0.0.1:9000)
1874.TS              100% |*******************************|  6498M 00:00:00 ETA
humax#
 
humax# crontab -l
0 2 * * * /mod/sbin/anacron -s -d
*/5 * * * * /mod/sbin/rs_process >> /mod/tmp/rs.log 2>&1
29,59 * * * * /mod/sbin/unencrypt "/mnt/hd2/My Video/archive" > /mod/tmp/unencrypt.log 2>&1
 
Version 0.1.3 is on its way into the repository.

A few changes:
1) Process tracking now uses /tmp, not ps, and should be a lot cleaner
2) I've reverted to the busybox version of wget
3) More debug statements
4) Cleaner handling of when seriesfiler has moved something
5) Now only runs at 1 minute and 31 minutes past the hour

To turn on the debug - create a file called "/mnt/hd2/My Video/.unencryptdebug". In hindsight, that's probably not the best directory to hold the file, but it'll do for now.
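Under the hood the debug switch is just a test for the presence of that file - it amounts to something like this (illustrative, not a paste from the script):
Code:
DEBUGFILE="/mnt/hd2/My Video/.unencryptdebug"
debug() {
    # Only print when the flag file exists
    [ -f "$DEBUGFILE" ] && echo "$@"
}
debug "Downloaded <<debug"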

Ezra, could you please give this a go and let me know how you get on.
 
Good news, the test file de-crypted correctly (proved by playing a 4 min. edit on a PC). I didn't get any debug; maybe the file should have been in My Video/archive. The de-crypt did not freeze the TV at any stage. Logs below :-
Code:
humax# cd /mod/tmp
humax# cat unencrypt.log
My Video/archive/Dirk Gently_20110520_2102
Processing "My Video/archive/Dirk Gently_20110520_2102", Media ID is 1806
My Video/archive/The Space Shuttle_ A Horizon Guide_20110417_0059
Processing "My Video/archive/The Space Shuttle_ A Horizon Guide_20110417_0059", Media ID is 1821
My Video/archive/The Joy of Stats_20110713_1958
Processing "My Video/archive/The Joy of Stats_20110713_1958", Media ID is 1834
My Video/archive/Space Shuttle_ The Final Mission_20110724_2058
Processing "My Video/archive/Space Shuttle_ The Final Mission_20110724_2058", Media ID is 1836
My Video/archive/The Secrets of Scott's Hut DD5_1_20110418_0016
Processing "My Video/archive/The Secrets of Scott's Hut DD5_1_20110418_0016", Media ID is 1849
My Video/archive/The Space Shuttle's Last Flight_20110726_0123
Processing "My Video/archive/The Space Shuttle's Last Flight_20110726_0123", Media ID is 1870
My Video/archive/TEST1_20111211_1652
Processing "My Video/archive/TEST1_20111211_1652", Media ID is 1876
Getting http://127.0.0.1:9000/web/media/1876.TS
Connecting to 127.0.0.1:9000 (127.0.0.1:9000)
1876.TS              100% |*******************************|  6498M  0:00:00 ETA
Downloaded
1876.TS exists
Inuse =
Done
humax#

humax# cd "/media/My Video"
humax# cat .unencryptdebug
humax# ls -al
drwxr-xr-x  15 root    root          4096 Dec 11 23:30 .
drwxr-xr-x    9 root    root          4096 Dec 12 00:18 ..
-rw-------    1 root    root            0 Dec 11 23:15 .unencryptdebug
 
It looks like the copy of wget in /bin was the problem, then. Are you using firmware 1.02.20?

The debug worked; it just prints some extra lines so it's easier to tell which stage the program has reached and what certain values are:
Code:
My Video/archive/TEST1_20111211_1652 <<debug
Processing "My Video/archive/TEST1_20111211_1652", Media ID is 1876
Getting http://127.0.0.1:9000/web/media/1876.TS
Connecting to 127.0.0.1:9000 (127.0.0.1:9000)
1876.TS              100% |*******************************|  6498M  0:00:00 ETA
Downloaded <<debug
1876.TS exists <<debug
Inuse = <<debug
Done

You can safely delete that .unencryptdebug file now as it can always be recreated when you need it.

Let me know if you have any further problems.
 
I don't want to spike your success, but it appeared before that the 6GB problem was a conflict when another instance was fired up after 15 minutes. Does the same problem still exist, but pushed into the background by making the re-run interval 30 minutes?
 
I don't want to spike your success, but it appeared before that the 6GB problem was a conflict when another instance was fired up after 15 minutes. Does the same problem still exist, but pushed into the background by making the re-run interval 30 minutes?

The possibility of a second running instance should have gone away since I changed the process-tracking logic to using lock files and not grepping the output of ps, so that shouldn't be a problem (crosses fingers).
 