Perun

LXC container, connected to the internet and used for downloading everything. It has read-write access to the relevant mountpoints and has wget, yt-dlp, rsync and lftp installed as download software.

/video/lectures/updatefeeds.sh

Completely automatic downloader of ETHZ video lecture recordings. It reads dir;url lines from urls.csv, where dir is a directory under /video/lectures and url is an RSS feed of an ETHZ video lecture series. It fetches the RSS feed through the feed2exec python module, which converts every item to a csv line. Those lines are interpreted internally to extract the published date in %Y-%m-%d format, and the video url is then downloaded to {dir}/%Y-%m-%d.mp4. It prints a final line reporting the work done: number of feeds fetched, files detected (mentioned in the feeds but already downloaded) and new files downloaded.
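
A minimal sketch of the per-feed logic, written here with feedparser standing in for the feed2exec CSV pipeline; the exact field layout of urls.csv and the single-enclosure assumption are not taken from the actual script:

 #!/usr/bin/env python3
 # Sketch: read dir;url pairs, fetch each RSS feed and save new items as
 # {dir}/{published date}.mp4. feedparser stands in for feed2exec here.
 import csv, os, time, urllib.request
 import feedparser
 fetched = detected = downloaded = 0
 with open("/video/lectures/urls.csv") as f:
     for subdir, url in csv.reader(f, delimiter=";"):
         fetched += 1
         for item in feedparser.parse(url).entries:
             date = time.strftime("%Y-%m-%d", item.published_parsed)
             target = os.path.join("/video/lectures", subdir, date + ".mp4")
             if os.path.exists(target):
                 detected += 1
                 continue
             # assumes one video enclosure per feed item
             urllib.request.urlretrieve(item.enclosures[0]["href"], target)
             downloaded += 1
 print(f"{fetched} feeds fetched, {detected} files detected, {downloaded} new files downloaded")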

/software/git_clone.sh

Fetches archived git repositories; run automatically.
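
A hedged sketch of what such a job could look like; the repository list file and the /software/git target directory are assumptions, not details of the actual script:

 #!/usr/bin/env python3
 # Sketch: keep bare mirror clones of a list of repositories up to date.
 # The repos.txt list and the /software/git target directory are assumptions.
 import os, subprocess
 TARGET = "/software/git"
 with open(os.path.join(TARGET, "repos.txt")) as f:
     for url in (line.strip() for line in f if line.strip()):
         dest = os.path.join(TARGET, url.rstrip("/").split("/")[-1] + ".git")
         if os.path.isdir(dest):
             # already cloned: just fetch updates
             subprocess.run(["git", "-C", dest, "remote", "update", "--prune"], check=True)
         else:
             subprocess.run(["git", "clone", "--mirror", url, dest], check=True)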

/software/syncrepo-template.sh

Fetches, through rsync, the entire archlinux or artixlinux package repositories (depending on $1). It also fetches the database files, allowing fully functional package mirror operation; see Lada#Arch and artix package mirror.
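
In outline, the sync amounts to one rsync call per repository tree; the upstream endpoints, the local mirror path and the exact flags below are assumptions (the real script is shell, this sketch uses Python like the other examples):

 #!/usr/bin/env python3
 # Sketch: mirror the full package repository of the distro named in argv[1],
 # including the .db database files needed for a working pacman mirror.
 # Upstream endpoints, local paths and flag choices are assumptions.
 import subprocess, sys
 UPSTREAM = {
     "archlinux": "rsync://mirror.example.org/archlinux/",
     "artixlinux": "rsync://mirror.example.org/artix-linux/",
 }
 distro = sys.argv[1]
 subprocess.run(
     ["rsync", "-rtlH", "--delete-after", "--delay-updates", "--safe-links",
      UPSTREAM[distro], f"/mnt/mirror/{distro}/"],
     check=True,
 )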

/home/user/osmdl.py

Fetches the newest .osc.gz daily changefiles from openstreetmap for the osm container into /mnt/maps/tmp/, to be processed by Osm#Data_updates. The latest downloaded changefile is identified by its sequence number, which is written to /mnt/maps/state.txt; the state number therefore records which changefiles have already been downloaded. Whether they have been imported into the database is indicated by whether they have been removed from /mnt/maps/tmp. Because this file presence/absence shows whether a changefile was successfully imported into the database, the maps dataset is a subdataset of dbp, which hosts the entire database: a recursive snapshot always contains a self-consistent state of the data. See Osm#Filesystem for details.
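
A minimal sketch of the download side, assuming the standard planet.openstreetmap.org daily replication layout and a bare sequence number in state.txt; the real script's naming and error handling will differ:

 #!/usr/bin/env python3
 # Sketch: fetch every daily .osc.gz changefile newer than the sequence number
 # in /mnt/maps/state.txt into /mnt/maps/tmp/, then record the new sequence.
 # Assumes the standard planet.openstreetmap.org daily replication layout.
 import re, urllib.request
 BASE = "https://planet.openstreetmap.org/replication/day"
 def seq_path(seq):
     s = f"{seq:09d}"            # e.g. 4025 -> "000/004/025"
     return f"{s[0:3]}/{s[3:6]}/{s[6:9]}"
 with open("/mnt/maps/state.txt") as f:
     local_seq = int(f.read().strip())
 remote = urllib.request.urlopen(f"{BASE}/state.txt").read().decode()
 remote_seq = int(re.search(r"sequenceNumber=(\d+)", remote).group(1))
 for seq in range(local_seq + 1, remote_seq + 1):
     urllib.request.urlretrieve(f"{BASE}/{seq_path(seq)}.osc.gz",
                                f"/mnt/maps/tmp/{seq}.osc.gz")
     with open("/mnt/maps/state.txt", "w") as f:
         f.write(str(seq))       # latest downloaded, not yet imported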

/home/user/yt.py

For regularly downloading youtube videos to be displayed in Lada#Youtube. Because the Perun container has passwordless read-write network access to Lada#db_videos, this script is entirely self-contained for adding or refreshing information on youtube videos, channels or playlists. Its main dependencies are the python module yt_dlp and ffmpeg. The regularly scheduled task is as follows (steps 1 to 3 are sketched after the list):

  1. Download newly uploaded videos to /mnt/youtube/{channel_id}/{video_id}.mkv.
  2. Move thumbnails to /mnt/icache/youtube/{channel_id}/{video_id}.{img_fmt}.
  3. Clean up temporary download files like .part and pre-transcoded files like .webm. These are kept by yt-dlp so that downloads can be resumed instead of restarted when the internet connection is very slow or unreliable; by this point they are no longer useful.
  4. Trigger a database rescan:
    1. Look at all video files in /mnt/youtube that are not in the database yet.
    2. Add them by reading the corresponding .info.json, and deduce the thumbnail from the matching .{img_fmt} file.
    3. Generate uniformly sized, same-aspect-ratio (640x360) thumbnails to /mnt/icache/youtube/{channel_id}/{video_id}.360p.{img_fmt}, with ffmpeg.
    4. For any special video files like .1080p.mkv, add them to their parent video as altvideos.
  5. Check whether 1080p versions are missing: download to /mnt/youtube/{channel_id}/{video_id}.1080p.mkv if the video needs a smaller 1080p version (i.e. it is natively >1080p).
  6. Rescan the database to add those .1080p videos.
  7. As in step 3, remove .part and similar temporary files.
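
A condensed sketch of steps 1 to 3, using the yt_dlp Python API; the channel list, the download-archive file and the exact option set are assumptions:

 #!/usr/bin/env python3
 # Sketch of steps 1-3: download new uploads to /mnt/youtube/{channel_id}/{video_id}.mkv,
 # move thumbnails into the image cache, then delete leftover temporary files.
 # Channel list, archive file and option choices are assumptions.
 import glob, os, shutil
 import yt_dlp
 CHANNELS = ["https://www.youtube.com/@examplechannel"]  # hypothetical list
 ydl_opts = {
     "paths": {"home": "/mnt/youtube"},
     "outtmpl": "%(channel_id)s/%(id)s.%(ext)s",
     "merge_output_format": "mkv",
     "writethumbnail": True,
     "writeinfojson": True,
     "download_archive": "/mnt/youtube/archive.txt",  # skip already-fetched videos
 }
 with yt_dlp.YoutubeDL(ydl_opts) as ydl:
     ydl.download(CHANNELS)
 # Step 2: thumbnails live in the image cache, not next to the videos.
 for thumb in glob.glob("/mnt/youtube/*/*.webp") + glob.glob("/mnt/youtube/*/*.jpg"):
     dest = thumb.replace("/mnt/youtube/", "/mnt/icache/youtube/", 1)
     os.makedirs(os.path.dirname(dest), exist_ok=True)
     shutil.move(thumb, dest)
 # Step 3: drop the temporary/pre-merge files kept for resumable downloads.
 for leftover in glob.glob("/mnt/youtube/*/*.part") + glob.glob("/mnt/youtube/*/*.webm"):
     os.remove(leftover)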

The main script can also run maintenance tasks, such as:

  • re-downloading the .info.json files of all videos present in the database and updating that info (see the sketch after this list).
  • downloading playlists of channels and adding those to the database.
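
A minimal sketch of the first maintenance task, refreshing a video's .info.json through the yt_dlp API; the ids and paths are placeholders and the actual database update is omitted:

 #!/usr/bin/env python3
 # Sketch: re-fetch the metadata of one video already in the database and
 # rewrite its .info.json. Ids and paths are placeholders; pushing the
 # refreshed data into db_videos is omitted.
 import json
 import yt_dlp
 video_id = "dQw4w9WgXcQ"         # placeholder: would come from the database
 channel_id = "UCexamplechannel"  # placeholder
 with yt_dlp.YoutubeDL({"skip_download": True}) as ydl:
     info = ydl.extract_info(f"https://www.youtube.com/watch?v={video_id}", download=False)
     info = ydl.sanitize_info(info)  # make the dict JSON-serializable
 with open(f"/mnt/youtube/{channel_id}/{video_id}.info.json", "w") as f:
     json.dump(info, f)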

/home/user/audio_podcasts_fetch2db.py

Multistep process to update a podcast. Has database access on lada to dbname=audio. Can either:

  • Fetch a podcast url={dbname=audio:podcasts:url} to a specified output file (then the XML should be manually edited and converted to json)
  • Import a given .json file as the next episode for a podcast (then only the file needs to be downloaded to the correct location)
  • Fully-automated fetch of url={dbname=audio:podcasts:url}, XML parse of the latest item, import into the database and download to the correct location (sketched below). This requires the given podcast to be specified in /home/user/audio_podcasts_parse.py, defining how its XML keys should be translated or converted into data for the database. Database constraints like field length are also checked in advance. If the database import succeeds, a wget subprocess downloads the file.
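
A rough sketch of that fully automated path, assuming a PostgreSQL database behind dbname=audio on lada and a hypothetical podcasts/episodes schema; the length constraint, the target path and the parsing are heavily simplified compared to audio_podcasts_parse.py:

 #!/usr/bin/env python3
 # Sketch: look up the feed URL for one podcast, parse the newest item, check a
 # length constraint, insert the episode, and only then download the audio file
 # with wget. Table/column names, the limit and the target path are assumptions.
 import subprocess, urllib.request
 import xml.etree.ElementTree as ET
 import psycopg2  # assumes PostgreSQL behind dbname=audio on lada
 conn = psycopg2.connect("host=lada dbname=audio")
 cur = conn.cursor()
 cur.execute("SELECT id, url FROM podcasts WHERE name = %s", ("example-podcast",))
 podcast_id, feed_url = cur.fetchone()
 root = ET.fromstring(urllib.request.urlopen(feed_url).read())
 item = root.find("channel/item")              # newest episode comes first
 title = item.findtext("title").strip()
 audio_url = item.find("enclosure").get("url")
 if len(title) > 200:                          # hypothetical length constraint
     raise SystemExit("title too long for the database column")
 cur.execute("INSERT INTO episodes (podcast, title) VALUES (%s, %s)",
             (podcast_id, title))
 conn.commit()
 # Only after a successful import is the file actually downloaded.
 subprocess.run(["wget", "-O", f"/mnt/audio/podcasts/{podcast_id}/{title}.mp3",
                 audio_url], check=True)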

For the third option, /home/user/sanitize_eschtml.py can parse HTML, strip attributes from some tags, and remove some tags entirely (while preserving their children). It can be used to compress the descriptions, since it removes cluttering attributes and <span> tags that are left with no attributes. This script takes escaped HTML as both input and output.
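
A minimal sketch of that kind of filter using BeautifulSoup; the list of attributes worth keeping is an assumption:

 #!/usr/bin/env python3
 # Sketch: read escaped HTML on stdin, strip attributes that are not explicitly
 # kept, unwrap attribute-less <span> tags (keeping their children), and print
 # the result re-escaped. The KEEP_ATTRS whitelist is an assumption.
 import html, sys
 from bs4 import BeautifulSoup
 KEEP_ATTRS = {"a": ["href"]}  # assumption: only links keep an attribute
 soup = BeautifulSoup(html.unescape(sys.stdin.read()), "html.parser")
 for tag in soup.find_all(True):
     tag.attrs = {k: v for k, v in tag.attrs.items()
                  if k in KEEP_ATTRS.get(tag.name, [])}
 for span in soup.find_all("span"):
     if not span.attrs:
         span.unwrap()  # drop the tag but keep its children
 print(html.escape(str(soup)))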