Humble feature request

ejstubbs

Member
I was wondering whether it would be possible for the "Mark new" function to be performed on more than one file at a time, e.g. by ticking the check box against each recording to be marked as new, and then having a 'batch action' button at the bottom of the window as there is for cutting, copying, deleting, joining and queuing.

The reason I ask for this is that we tend to build up a backlog of Simpsons episodes from Channel Four, and my missus doesn't want to watch non-widescreen episodes because she regards them as old. It's enough of a job going through the backlog deleting all the non-widescreen ones without having to reset the unwatched flag on all the ones that are left. Today's daunting tally is 39 recordings needing their unwatched flags reset!

Alternatively, if anyone knows of an easy way to differentiate between recordings of widescreen and non-widescreen programmes (albeit broadcast with black side borders) without having to play the recording, that would make the job even easier and not require a new feature in the WebIF.
 
I think it would be useful to have something like Set New as a batch option in WebIF Media Browser, but until that becomes available you have sweeper as a work-around.

sweeper is able to set and reset the New flag. If you are prepared to isolate all the recordings you want to batch-process into a particular folder, it is easy enough to process them all through sweeper (and even have it move them into another folder when done).

Alternatively, if anyone knows of an easy way to differentiate between recordings of widescreen and non-widescreen programmes
I don't see how that might be possible. The only differentiator could be 4:3 v 16:9, and sometimes even 4:3 material gets broadcast as 16:9.
 
Thanks, I'll give sweeper a go.
I don't see how that might be possible. The only differentiator could be 4:3 v 16:9, and sometimes even 4:3 material gets broadcast as 16:9.

I thought it would be a long shot. The Simpsons on C4 is broadcast at 16:9 whether the actual programme content is widescreen or not. If it's not, black bands are simply added on each side to fill the empty space either side of the actual image.
 
I don't say that it's easy, but eg:
  • extract one I-frame from the .ts (which must have been, or be specially, decrypted) as a 640x360 image (ffmpeg can do this, as in the thumbnails package);
  • compute the average pixel value of the image, excluding the central 480x360 pillar-box;
  • if less than some threshold, ID as 16:9 pillar-box.
We don't have much in the way of graphics tools so real coding, not just a script using existing tools, would be needed for steps 2 & 3.
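As a rough sketch of what steps 2 and 3 might involve (hypothetical code: it assumes the I-frame has already been dumped as raw 8-bit greyscale at 640x360, e.g. via ffmpeg's rawvideo output, rather than as a JPEG):

```python
# Sketch of steps 2 & 3: average pixel value of the side strips of a
# 640x360 greyscale frame, excluding the central 480x360 pillar-box.
# 'frame' is 640*360 bytes of raw 8-bit luma (hypothetical input format).

WIDTH, HEIGHT = 640, 360
STRIP = (WIDTH - 480) // 2          # 80 pixels on each side

def side_strip_mean(frame):
    total = 0
    count = 0
    for y in range(HEIGHT):
        row = y * WIDTH
        for x in range(STRIP):                  # left strip
            total += frame[row + x]
        for x in range(WIDTH - STRIP, WIDTH):   # right strip
            total += frame[row + x]
        count += 2 * STRIP
    return total / count

def looks_pillarboxed(frame, threshold=20):
    # 'Video black' is 16 in 8-bit limited range, so a mean near that
    # suggests the side strips carry no picture content.
    return side_strip_mean(frame) < threshold
```

The threshold of 20 is an arbitrary illustration; anything comfortably above video black but below real picture content would do.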
 
Can she not simply press the 'Stop' button followed by 'Delete' on the 4:3 ones?
But I suppose it's a personal challenge now. ("I can do that!" he said boldly.)
 
We don't have much in the way of graphics tools so real coding, not just a script using existing tools, would be needed for steps 2 & 3.
Sure, but it's more tractable than the project going on here: https://hummy.tv/forum/threads/bookmark-changes-between-5-1-and-stereo-audio.9800/

An uncompressed bitmap graphic file (such as the thumbnail) is easy to scan in a loop, and just sum all pixel values for R, G, and B (no need to do them separately or to average them). Set the detection threshold sufficient to account for channel idents and you're away. The loop can be aborted as soon as the accumulator exceeds the threshold - that's a "sidebars? = FALSE"; only if the loop reaches the end does it return "sidebars? = TRUE".
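A minimal sketch of that early-abort loop (hypothetical: it assumes the sidebar region's pixels have already been extracted as (R, G, B) tuples; the function name and input format are made up for illustration):

```python
def sidebars_black(pixels, threshold):
    """Early-abort scan: sum R+G+B over the sidebar pixels and bail out
    with False as soon as the accumulator exceeds the threshold."""
    total = 0
    for r, g, b in pixels:
        total += r + g + b
        if total > threshold:
            return False   # too much light in the sidebars
    return True            # reached the end: sidebars are black
```

The threshold would be set high enough to tolerate a channel ident ("DOG") sitting in the corner, as described above.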

It seems to me the most difficult bit (conceptually) is to work out what timestamp should be used to grab the thumbnail - if it lands in an advert break, for example, the result might be incorrect - so maybe this needs to be applied after a round of ad detection, and then sample in the middle of a section between ads.

This could be added as a test in sweeper.
 
This isn't about the aspect ratio, it's about whether the broadcast has black bars down the side to put a 4:3 frame into a 16:9 frame.
 
What would be helpful is if the stream contained the active format descriptor flag (AFD) which would (when used correctly) have a value of 9 for 4:3 pillarboxed in 16:9, but sadly it doesn't seem to be there if I probe a known pillarboxed ts file with either ffprobe or MediaInfo.

 
What would be helpful is if the stream contained the active format descriptor flag (AFD) which would (when used correctly) have a value of 9 for 4:3 pillarboxed in 16:9, but sadly it doesn't seem to be there if I probe a known pillarboxed ts file with either ffprobe or MediaInfo.

It's not, that's the point.

Although there is a standard for switching between 4:3 and 16:9 source material, and although the HDR-FOX has user preference controls to decide how to display 4:3 material on a 16:9 screen (either pillar-boxed or stretched), typically all the (cheap) broadcasters do is leave it at 16:9 and put actual black bars in the transmission.
 
typically all the (cheap) broadcasters do is leave it at 16:9 and put actual black bars in the transmission.
Not just the cheap broadcasters. BBC2 showed an old black and white 4:3 film (Laura) the other day. That was shown in 16:9 with black bars. That's been happening on the BBC for quite a while now. (I'll leave it to the readers to decide whether the BBC is a cheap broadcaster). It used to p me off when I still had a 4:3 TV and the film ended up in postage stamp format.
 
There may be some mileage along the lines of the following, based on a few assumptions.

  1. The pillarboxed programme you're interested in scanning is running at a point 5 minutes into the file (rather than 16:9 adverts)
  2. The complete 16:9 frame would never be completely black for 5 seconds at a time
  3. The Humax would be up to the task of transcoding 5 seconds of video
Basically, you use ffmpeg in conjunction with its crop and blackframe filters, and do the following

  • Seek 5 minutes into the .ts file
  • Decode 5 seconds' worth of video into two null outputs at the same time. The first is a crop of the LH side of the 16:9 frame encompassing only where you ought to find black if the file is pillarboxed; the second is the crop for the RH side. To each of these outputs you apply the ffmpeg blackframe filter to detect whether the cropped portion is black or not. For a frame that meets the criteria specified in the filter parameters, ffmpeg spits out a line similar to
Code:
[Parsed_blackframe_1 @ 0x71a150] frame:1 pblack:100 pts:55748 t:0.619422 type:B last_keyframe:0

This is repeated for both the left and right black bars (labelled with the same frame number on each). So for 5 seconds of video (125 frames) you should get 125 x 2 = 250 of these lines. Wrap the ffmpeg command into a script and count them up as it produces them and if you get the right number then the content is likely pillarboxed.
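A hypothetical sketch of the counting side of that wrapper (the regex is matched against the blackframe log format shown above; 125 frames and 2 filters are the figures from this post):

```python
import re

# Matches one blackframe report line from ffmpeg's stderr.
BLACKFRAME_RE = re.compile(r"Parsed_blackframe_\d+.*frame:(\d+)")

def count_blackframes(log_text):
    """Count the blackframe report lines in ffmpeg's log output."""
    return len(BLACKFRAME_RE.findall(log_text))

def probably_pillarboxed(log_text, frames=125, filters=2):
    # 5 s at 25 fps = 125 frames, each reported once per side filter.
    return count_blackframes(log_text) == frames * filters
```

In practice `log_text` would be ffmpeg's captured stderr; the same counting could of course be done with grep -c in a shell script.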

An actual ffmpeg command that works (which I've tried on a BBC Four Bob Ross 'Joy Of Painting' episode from 1984) is

Code:
ffmpeg -ss 00:05:00 -i <ts_file> \
        -vf "crop=iw/8:ih:0:0,blackframe=99:20" -t 5 -an -f null /dev/null \
        -vf "crop=iw/8:ih:iw*7/8:0,blackframe=99:20" -t 5 -an -f null /dev/null

99:20 means that 99% of the pixels must have a luminance value of 20 or lower for the frame to be deemed black (black has a value of 16 in 8-bit video)

This ran at about 1 frame per second, so about 2 minutes to detect across 5 seconds of video.

It may not be worth doing given the time involved (unless choosing to detect over a smaller duration), but as a proof of concept it works.
 
Is there some way of telling ffmpeg just to process a few key frames rather than decoding all the intermediate dross?
 
You can do this

Code:
ffmpeg -ss 00:05:00 -i <ts_file> \
        -vf "select=not(mod(n\,10)),crop=iw/8:ih:0:0,blackframe=99:20" -t 5 -an -f null /dev/null \
        -vf "select=not(mod(n\,10)),crop=iw/8:ih:iw*7/8:0,blackframe=99:20" -t 5 -an -f null /dev/null

selecting (say) every 10th frame for 5 seconds, but it doesn't run any faster than simply doing every frame.
 
I don't see a need to check for sidebars over a period of seconds. Run a check for whole frame below a threshold, and if it is pick another sample point.
 
The problem with picking multiple separate sample points is the overhead in starting up ffmpeg each time (on a Humax at least). It sits for quite a long time ‘thinking’ before it actually starts processing frames. Having to do that only once therefore might be preferable.

You could ignore the sidebars and process the whole frame if you test that at least 25% of the pixels are black

Code:
blackframe=25:20

but you might get the odd false positive on 16:9 material e.g. small text on a black background etc. There appears to be no time overhead from doing the cropping anyway.
 
The problem with picking multiple separate sample points is the overhead in starting up ffmpeg each time
But that would only be the case if you get unlucky, and if it is a sweeper thing it will be happening in the background anyway.

If the sample is not in an ad break
    If the sidebar area is under the threshold
        If the non-sidebar area is over the threshold
            exit with status = TRUE
        else exit with status = RETRY
    else exit with status = FALSE
else exit with status = RETRY
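That decision tree could be coded up along these lines (a sketch only; `in_ad_break`, `sidebar_level` and `centre_level` are placeholders for whatever measurements the test actually makes, and the thresholds are illustrative):

```python
# Possible exit statuses for the sidebar test.
TRUE, FALSE, RETRY = "TRUE", "FALSE", "RETRY"

def sidebar_test(in_ad_break, sidebar_level, centre_level,
                 side_threshold=20, centre_threshold=20):
    if in_ad_break:
        return RETRY        # sample landed in the ads: try elsewhere
    if sidebar_level >= side_threshold:
        return FALSE        # sidebars carry picture content: not pillarboxed
    if centre_level <= centre_threshold:
        return RETRY        # whole frame dark: pick another sample point
    return TRUE             # black sidebars, lit centre: pillarboxed
```

RETRY would send the caller off to grab a sample at a different timestamp.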
 
By the way, I have noticed some old 4:3 content broadcast at 16:9 with narrower sidebars by clipping off the top and bottom of the original 4:3 frame. The sidebar detection needs to restrict itself to narrower strips than one might calculate strictly from 4:3 in 16:9.
 
There's also a cropdetect filter that might do the job; then you don't have to guess the border size. Performance looks similar (ffmpeg 4.1 from the CF package).

Test file is last night's "Please Sir! S01E02", with video stream 0 h264 (High) ([27][0][0][0] / 0x001B), yuv420p(tv, bt470bg, top first), 544x576 [SAR 64:33 DAR 544:297], 25 fps, 25 tbr, 90k tbn, 50 tbc. It starts out 16:9 colour for ads and promos, then the first few seconds of the B+W show itself have a colour pop-up at the bottom ("1960s sitcom with humour and dialogue of the time" -- come on), and the show then continues (~2 minutes in) in B+W with narrowish vertical black borders.
Code:
# time sh -c "ffmpeg -hide_banner -ss 00:05:00 -i Please\ Sir!_20200801_0010.ts -map 0:v:0 -t 5 -vf select='eq(pict_type\,I)',cropdetect=reset=0 -f null /dev/null 2>&1 | grep -Eo 'crop=([[:digit:]]{1,}:){3}[[:digit:]]{1,}'"
crop=400:576:74:0
crop=400:576:74:0
crop=400:576:74:0
real    0m 34.68s
user    0m 30.21s
sys     0m 0.28s
# time sh -c "ffmpeg -hide_banner -ss 00:05:00 -i Please\ Sir!_20200801_0010.ts -map 0:v:0 -t 5 -filter_complex select='eq(pict_type\,I)','split[l1][r1],[l1]crop=iw/9:ih:0:0[l2],[r1]crop=iw/9:ih:iw*8/9:0[r2],[l2][r2]hstack',blackframe=99:20 -f null /dev/null 2>&1 | grep -oE '[[]Parsed_blackframe_.+$'"
[Parsed_blackframe_5 @ 0x546e80] frame:0 pblack:100 pts:56833 t:0.631478 type:? last_keyframe:0
[Parsed_blackframe_5 @ 0x546e80] frame:1 pblack:100 pts:172033 t:1.911478 type:? last_keyframe:1
[Parsed_blackframe_5 @ 0x546e80] frame:2 pblack:100 pts:287233 t:3.191478 type:? last_keyframe:2
[Parsed_blackframe_5 @ 0x546e80] frame:3 pblack:100 pts:402433 t:4.471478 type:? last_keyframe:3
[Parsed_blackframe_5 @ 0x546e80] frame:4 pblack:100 pts:517633 t:5.751478 type:? last_keyframe:4
real    0m 36.29s
user    0m 31.08s
sys     0m 0.32s
#
The cropdetect output is suitable to be piped into uniq -c, which is available if either busybox or coreutils is installed:
Code:
# ... | uniq -c
    112 crop=400:560:68:8
#
Selecting I-frames doesn't make much difference to the CPU usage but avoids processing frames that are known to be almost the same.
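Once cropdetect has settled on a stable `crop=w:h:x:y` line, deciding whether it indicates pillarboxing is a small parsing job. A hypothetical sketch (the 16-pixel minimum margin is an arbitrary illustrative choice):

```python
def is_pillarboxed(crop_line, coded_width, min_margin=16):
    """Parse a cropdetect result like 'crop=400:576:74:0' and decide
    whether meaningful vertical black borders were detected.
    min_margin (pixels) is an arbitrary illustrative threshold."""
    w, h, x, y = map(int, crop_line.split("=")[1].split(":"))
    # Pillarboxed if the detected width sits well inside the coded width
    # and the crop region starts away from the left edge.
    return coded_width - w >= 2 * min_margin and x >= min_margin
```

Fed the `uniq -c`'d winner and the stream's coded width (544 for the file above), this would flag the Please Sir! recording as pillarboxed.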
 