[youtube-dl] Download files from youtube.com or other video platforms

Packaging error. I'd done it right on my manual update test for all three files, but then forgot to fully update the build script.
Beta package updated. If you could test and confirm...
Glasgow local news downloading now :)
 
Should --hls-prefer-ffmpeg be added to the default youtube-dl.conf file?
Any downside to using it always on Humax?
 
If you "prefer" the native HLS downloader, the CF Python SSL will be used and is likely to fail, whereas the rebuilt CF ffmpeg should succeed. Other downloaders don't know how to handle HLS (called m3u8 in the supports() methods in youtube_dl/downloader/external.py). However, ISTR that iPlayer is unusually lenient in this regard, and if that's still the case then --hls-prefer-native could mean lower system load when fetching BBC shows.
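If it does go in the default conf, a minimal sketch (assuming the CF reads a standard youtube-dl.conf; the exact path varies by install):
Code:
# Sketch only: force ffmpeg for HLS (m3u8) streams,
# avoiding the CF Python SSL that tends to fail
--hls-prefer-ffmpeg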
 
I never used to get this error downloading from iPlayer, but I've had it twice just now, requiring manual re-submission to the queue, which is dull.
Code:
23/02/2024 21:41:46 - Caught error: ERROR: unable to download video data: <urlopen error [Errno 145] Connection timed out>
23/02/2024 22:00:33 - Caught error: ERROR: unable to download video data: <urlopen error [Errno 145] Connection timed out>
Is there scope in yt-dl for retrying things that time out, rather than just bailing?
 
There is a default retry count of 10 for things that the code expects to be resolved by retrying. A timeout on connecting is, IIRC, not one of those.

Maybe we need to stop sending ancient UA strings, in case iPlayer is blocking certain old ones. A random UA from a list made up when Chrome/FF versions were <80 is sent, but personally I think all clients should send just Mozilla/5.0, since there is no good use a server can make of the UA string (now that other ways exist to discover client characteristics).

Or maybe upgrading to OpenSSL 1.0.0w and rebuilding wget would have an effect.
 
There is a default retry count of 10 for things that the code expects to be resolved by retrying. A timeout on connecting is, IIRC, not one of those.
Perhaps it ought to be. Or, more strongly, probably it ought to be.
I think all clients should send just Mozilla/5.0
Is this in yt-dl somewhere?
Or maybe upgrading to OpenSSL 1.0.0w and rebuilding wget would have an effect.
Presumably you mean 1.1.1w as we are already on 1.1.1d?
I'll start another thread about that in due course.
 
You can increase, or make infinite, the --socket-timeout ..., but I'd say the default 600s ought to be plenty. Actually, the --retries ... (there's --fragment-retries ... too) is only used when fetching the media data, although an extractor could implement its own retry mechanism using the same parameter.
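For anyone experimenting, the relevant knobs look like this (values illustrative, not recommendations; the URL is a placeholder; --retries and --fragment-retries both accept 'infinite'):
Code:
youtube-dl --socket-timeout 900 --retries infinite --fragment-retries infinite 'URL'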

--user-agent 'Mozilla/5.0', but I'm sure lots of sites will complain if they're not fed any OS and Gecko data. Maybe not BBC.
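i.e. something like this in youtube-dl.conf, or the equivalent on the command line (a sketch; whether a bare UA is accepted is site-dependent):
Code:
# Sketch: minimal UA with no OS/engine tokens; some sites may balk
--user-agent 'Mozilla/5.0'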

Yes, 1.1.1. It was 1.0.0... for so long ...
 
You can increase or make infinite the --socket-timeout ... but I'd say the default 600s ought to be plenty.
Indeed it should, but this seems not to be the factor at play:
Code:
24/02/2024 18:28:33 - [download]  48.7% of ~2.22GiB at  1.61MiB/s ETA 12:02
24/02/2024 18:28:34 - [download]  48.7% of ~2.22GiB at  1.67MiB/s ETA 11:35
24/02/2024 18:28:34 - [download]  48.7% of ~2.22GiB at  1.67MiB/s ETA 11:35
24/02/2024 18:31:44 - Caught error: ERROR: unable to download video data: <urlopen error [Errno 145] Connection timed out>
 
As, according to G, this error code has never been seen in the context of Python opening a web connection, -v may help to show what's going on. It may be easier to run the command in a shell rather than through qtube.
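e.g. something like this from a shell (the URL is a placeholder; tee keeps a copy of the verbose output for posting):
Code:
youtube-dl -v 'https://www.bbc.co.uk/iplayer/episode/XXXXXXXX' 2>&1 | tee /tmp/ytdl-debug.log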
 