
[unencrypt] Decrypt-in-place

Apparently there is an abbreviated way of saying the same thing, but I didn't know that at the time.

*/5 * * * * means every 5 minutes.
Aha! I think I might have spotted a problem. My cron job says (scroll the line to see it all):
Code:
1,6,11,16,21,26,31,36,41,46,51,56 * * * * /mod/sbin/unencrypt "/mnt/hd2" > /mod/tmp/unencrypt.log 2>&1
Looks ok to me...
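For the record, the two notations line up like this (the step form assumes your crond accepts range/step syntax, which busybox crond generally does):

```
# Runs at minutes 0,5,10,...,55:
*/5 * * * *    /mod/sbin/unencrypt "/mnt/hd2" > /mod/tmp/unencrypt.log 2>&1

# Exact equivalent of the 1,6,11,...,56 list above (starts at minute 1):
1-56/5 * * * * /mod/sbin/unencrypt "/mnt/hd2" > /mod/tmp/unencrypt.log 2>&1
```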
 
In reply to #217, I have the same problem.

My cron entry is:

1,31 * * * * /mod/sbin/unencrypt "/mnt/hd2" > /mod/tmp/unencrypt.log 2>&1

I have recently tried playing back a couple of long recordings which have been unencrypted.

Olympics 2012: Closing Ceremony
20:57 12/08 202min Ch 50 BBC One HD
5235933184 bytes Olympics 2012_ Closing Ceremony_20120812_2057.ts
Plays for 1:04:57

Eurovision Song Contest 2012
20:00 26/05 206min Ch 50 BBC One HD
7028998144 bytes Eurovision Song Contest 2012_20120526_2000.ts
Plays for 1:24:54

In both cases, the Hummy reports the correct programme length, but playback stops early, whether viewing on-screen or after downloading by FTP and playing in VLC.

I am presuming that the decrypt job is not playing well with long recordings. Unfortunately, the recordings were made some time ago, so I don't suppose there's much evidence left of what went wrong.

Thoughts?

Jonathan.
 
Yes, use FTP or the WebIF to check the size of the files. They will be reported as the correct length in time because that information is held as a property in a sidecar file. HiDef recordings should be about 4.5GB per hour.

I believe the .ts files will be truncated because the machine got turned off before the decrypt was complete.
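As a rough cross-check of the truncation theory, using the figures quoted above and the ~4.5GB/hour rule of thumb (a sketch; the constant is approximate, taken as 4.5 × 1024³ bytes per hour):

```shell
# Rough cross-check: at ~4.5GB/hour of HiDef, how many minutes of
# video do the 5235933184 bytes of the Closing Ceremony file hold?
bytes_per_min=$((4831838208 / 60))                # 4.5 * 1024^3 bytes / 60
mins_recoverable=$((5235933184 / bytes_per_min))
echo "$mins_recoverable minutes of a 202 minute recording"
```

That comes out at about 65 minutes, which lines up rather neatly with playback stopping at 1:04:57.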
 
Sorted. Looks like it was the crontab entry. I'm still confused as to why I was getting that DLNA error in the log, but here is how I fixed it:

I ran the crontab command directly from the command line:

/mod/sbin/unencrypt "/mnt/hd2/My Video"

and it ran through and unencrypted one file and completed ok.

So I changed the crontab to run it every hour and lo and behold it started off doing one file an hour - so I can now watch stuff from the mounted drive on my HD.

Well worth the effort - Thanks
 
Seems I spoke too soon. I started noticing that programmes weren't being unencrypted, so I checked the log file and it's the same error. Tried it from the command line and it failed:

humaxhdr# /mod/sbin/unencrypt "/mnt/hd2/My Video"
The DLNA server seems to have crashed
Please restart the box
 
Yes, use FTP or the WebIF to check the size of the files. They will be reported as the correct length in time because that information is held as a property in a sidecar file. HiDef recordings should be about 4.5GB per hour.

I believe the .ts files will be truncated because the machine got turned off before the decrypt was complete.

This is a bit of a disaster then! Long recordings are always going to end up truncated. I do appreciate that the Hummy is streaming to itself, to achieve the unencrypt magic, but on a long recording, it's likely the machine will be powered off before the operation is complete. How do others get around this problem? It really needs to work on a copy, and do a final rename once the entire programme has been processed.

Jonathan.
 
I think it does - it would have to! I don't know the ins and outs though, af123 would have to answer that one.
 
I suppose Redring could examine a 'Busy' file before allowing stand-by mode to be entered, but it may be that RR only monitors, and is not capable of preventing a stand-by event. The Busy file could be written to by any program (decrypt, shrink, extract MPG etc.) to indicate its status, e.g. busy or completed.
 
I think it does - it would have to! I don't know the ins and outs though, af123 would have to answer that one.
The background decryption that's built into the web interface certainly does that. It is carefully written so that there are only a very few short critical sections (effectively where all of the files that make up a recording group are being renamed - and those rename operations are within the same filesystem so very quick). At all other times, being powered off will just cause it to restart the file that was being processed. It also does all of its temporary work under /mod/tmp/webif_auto (or something similar) and cleans up after itself so that you can't end up with clutter being left in the video area.

I can't currently comment on the unencrypt package which was written by Sam Widges and I don't use, but I'll have a read of it later and post again.
 
The unencrypt package uses the 'mv' command but doesn't move the original .ts file out of the way before moving the new one on top. This will probably mean the move operation is not as quick as it could be.
 
I use unencrypt rather than the WebIF autodecrypt option, so this is where I have presumably fallen foul very (very) occasionally if the final move gets interrupted. To conform to current standards it ought to move the original into the recycle bin (if there is one) before the decrypted version gets renamed.
 
I use unencrypt rather than the WebIF autodecrypt option, so this is where I have presumably fallen foul very (very) occasionally if the final move gets interrupted. To conform to current standards it ought to move the original into the recycle bin (if there is one) before the decrypted version gets renamed.

Hi,

I guess I feel quite unlucky to have had two long recordings truncated. And I am surprised if a shutdown during a rename would damage the decrypted copy. Perhaps it damages the original? (I seem to remember reading it's Humax's practice to truncate a file before deleting it.) Following my theory, on the next attempt, the truncated original gets re-decrypted, using the same temp file? Idle speculation, as I haven't consulted the source, and don't know what the kernel's behaviour is on an interrupted rename.

I would be very grateful if it got looked at ;)

Jonathan.
 
Surely you would run decrypt on the original file putting the decrypted file in the same directory but with a temp name. Once completed you mv the original to another directory, which is simply changing the reference to the file not copying it, so it is very quick. Then rename the temp file to the original name, which again is very quick. How would a file become truncated with the original name?
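The sequence described above could be sketched like this (a hypothetical helper, not the actual unencrypt code):

```shell
#!/bin/sh
# Sketch of the swap sequence described above.  All three paths live
# on the same filesystem, so each mv is a rename, not a copy, and is
# therefore very quick.
#   $1 = original (encrypted) recording
#   $2 = freshly decrypted copy, written under a temp name
swap_in_decrypted() {
    src="$1"
    tmp="$2"
    old="$src.old"
    mv "$src" "$old"    # park the original out of the way first
    mv "$tmp" "$src"    # decrypted copy takes over the original name
    rm -f "$old"        # only now remove the parked original
}
```

If the box is powered off at any point, either the original or the decrypted copy still exists intact under the original name (or parked next to it), so nothing should end up truncated.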
 
My HiDef recording of the Olympics opening ceremony was 12GB, although I do tend to leave the Humax on all night so it has plenty of time to complete the task list. It only takes 20 or 30 minutes to decrypt a file of that size.

If you do not need routine decryption, perhaps the auto-decrypt option would suit you better? Set up a specific folder, use the WebIF to mark it for auto-decrypt, then move anything you want decrypted into that folder - and remember to leave the Humax on for a while.

Alternatively, adjust the cron schedule for unencrypt so that it only runs at times when you know the box will be on for a while.

I have not so far had an interest in redring, but I would be very interested if it could be used to show when a background operation is active and the box should not be turned off; even better if the box could be turned off but stay half-awake until the job is finished (like when recording).
 
Surely you would run decrypt on the original file putting the decrypted file in the same directory but with a temp name. Once completed you mv the original to another directory, which is simply changing the reference to the file not copying it, so it is very quick. Then rename the temp file to the original name, which again is very quick. How would a file become truncated with the original name?

That is what I would do, but I haven't looked at the code for what actually happens. If you simply rename a file to an existing name, then the operating system will have to unlink the existing file (and return the disk space to the free list) before doing the rename. This is all allegedly atomic, but I don't know whether it will delay an attempt to shut down the operating system. All speculation on my part, you understand. But as I mentioned earlier, I do remember reading that deletes were slow, and that Humax-written applications truncate files before deleting them.

Jonathan.
 
I do remember reading that deletes were slow, and that Humax-written applications truncate files before deleting them.

You're right. The ext3 filesystem makes use of an indirect block mapping scheme which has to keep track of all blocks that make up a file. Removing a large file requires all blocks to be unmapped which can be very slow and cause journal bandwidth starvation, which can in turn directly affect other filesystem operations and cause stutters in playback or recording.

To get around this, the Humax software truncates large files incrementally prior to removal in order to spread out the block unmapping over several journal entries.

There is a custom firmware package called trm which adds a command of the same name which removes files in the same way. This is used by the web interface and other components for file removal.
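A minimal sketch of the idea, in the spirit of trm (the real package may well work differently, and this assumes a truncate utility is available):

```shell
#!/bin/sh
# Sketch of incremental truncation before removal.  Shrinking the file
# in chunks spreads the ext3 block-unmapping work over several journal
# transactions instead of one huge one.
#   $1 = file to remove
#   $2 = optional chunk size in bytes (default 64MB)
slow_rm() {
    f="$1"
    step="${2:-$((64 * 1024 * 1024))}"
    size=$(wc -c < "$f")
    while [ "$size" -gt "$step" ]; do
        size=$((size - step))
        truncate -s "$size" "$f"    # drop the tail of the file
    done
    rm -f "$f"                      # finally remove the small remnant
}
```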

The unencrypt package uses the 'mv' command but doesn't move the original .ts file out of the way before moving the new one on top. This will probably mean the move operation is not as quick as it could be.
Yes, it should move the old one aside first then remove it using trm.

The original unencrypt package is no longer maintained (I haven't seen Sam Widges around for a while). Maybe it's time to rewrite it to use the same mechanism as used by the web interface auto-decrypt?
 
It was crossing my mind that the WebIF Auto-decrypt could be all we need (perhaps with a few tweaks) and unencrypt could be deprecated, but it might still be useful in isolated cases where there is no routine WebIF management access. I am not planning on installing it on my remote unit, but I do have auto-unprotect in case I want to copy to USB.
 
Hi,

I've taken a look at the code, and I am none the wiser as to what happened.

I can see from the log file that the broken programmes have only been processed once... for example:

humax# grep -i ceremony /mod/tmp/unencrypt.log
Processing "My Video/Olympics 2012_ Closing Ceremony_20120812_2057", Media ID is 656

Now I can see the problem with the rename:

if [ "$inuse" == "" ]; then
    $MV $mediaID.TS $file.ts
    $HMT -encrypted $file.hmt
else
    $ECHO "File is in use, skipping"
fi

This could better be coded as:

if [ "$inuse" == "" ]; then
    $MV $file.ts $file.ts.delete_me
    $MV $mediaID.TS $file.ts
    $HMT -encrypted $file.hmt
    $TRM $file.ts.delete_me
else
    $ECHO "File is in use, skipping"
fi

which would reduce the critical section of code where the (truncate/)delete is taking place. But my original assumption that a truncated or damaged media file is being re-processed is blown out of the water, because it appears the script has only run once.

So now my suspicion turns to the wget command:

$WGET $url

if [ -n "$debug" ]; then $ECHO "Downloaded"; fi

#The copy can fail - only copy and process if the file actually exists

if [ -f "$mediaID.TS" ]; then
if [ -n "$debug" ]; then $ECHO "$mediaID.TS exists"; fi
# Need to check that the main file is not being modified

I don't know if wget returns an exit status. I can see that if wget only partially downloads the file for some reason, the unencrypt script will carry on running, copying a damaged file over the original. If the wget pull was interrupted by a power off, then the script would re-run completely, and this is not what has happened here.

Is a problem with wget likely to be what's happening? Does the webif decryption use wget, and have errors been seen?

Thanks,
Jonathan.
 
Yes, all our on-box decryptions will use wget (the only exception being a handset OPT+ copy to virtual drive), because the "cheat" that is used to obtain decryption involves capturing a stream from the DLNA server. Thus the problem need not originate in wget - if the server decides to terminate the stream for any reason (fair or foul) wget would presume that was the end of the stream and return a success code (if it returns any code at all).

Idea: on completion of wget, compare the source and decrypted file sizes - if the latter is smaller there must have been an aborted transfer and it should be considered suspect.
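That size comparison could look something like this (a sketch only; the function and variable names are made up):

```shell
#!/bin/sh
# Sketch of the size check suggested above.  After the wget fetch
# completes, compare the downloaded file against the source recording;
# a shorter file means the DLNA stream ended early and the download
# should be treated as suspect.
#   $1 = original encrypted .ts
#   $2 = file fetched from the DLNA server
verify_download() {
    src="$1"
    dl="$2"
    src_size=$(wc -c < "$src")
    dl_size=$(wc -c < "$dl")
    if [ "$dl_size" -lt "$src_size" ]; then
        echo "Short download ($dl_size < $src_size bytes) - keeping original"
        return 1
    fi
    return 0
}
```

On failure the script would leave the original recording alone rather than moving the short file over the top of it.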

PS: Thanks for looking at this.
 