
Automating encodes

Posted: Mon Jun 03, 2024 8:38 pm
by cumlord
Tdarr works well for this. It's a server/node setup, so once you get the server up you can keep adding nodes. For quality encodes you want CPU power, not GPU, so it's suited to building out a small cluster of low-powered ARM devices like Orange Pis to run as nodes for their CPUs. ARM runs efficiently, and each device will work on a given number of encodes at a time as work orders handed out by the server. Tdarr has a Docker image, which can simplify the setup, although I've had difficulty getting it to utilize the GPU on certain hardware this way. You can also have mixed setups where some fast CPUs specifically rip through certain content while everything else goes through your slower ARM chips.
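
The rough shape of that dispatch model, as I understand it (this is a toy illustration, not Tdarr's actual internals; the node names, queue, and worker counts are all made up):

```typescript
// Toy illustration of the server/node dispatch model (not Tdarr's internals;
// node names, the queue, and worker counts are all hypothetical).
type EncodeNode = { name: string; maxConcurrent: number; active: number };

const queue: string[] = ["movie1.mkv", "show.s01e01.mkv", "show.s01e02.mkv"];
const nodes: EncodeNode[] = [
  { name: "orangepi-1", maxConcurrent: 1, active: 0 }, // slow ARM box
  { name: "ryzen-box", maxConcurrent: 4, active: 0 },  // fast x86 box
];

// The server hands the next file to any node with a free worker slot.
function dispatch(): void {
  for (const node of nodes) {
    while (node.active < node.maxConcurrent && queue.length > 0) {
      const file = queue.shift()!;
      node.active += 1;
      console.log(`work order: ${file} -> ${node.name}`);
    }
  }
}

dispatch();
```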

Although, if electricity usage is a concern: the last time I ran the numbers, ARM doesn't really save on total electric cost, since encode times are usually longer. But the devices do run quiet, and cooling is less of an issue. Pretty sure the process node (nm) is the more meaningful metric for electrical efficiency.
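
Back-of-the-envelope version of that calculation; all the wattages and times below are made-up placeholders, plug in your own:

```typescript
// Hypothetical numbers: energy = average watts x hours, so a low-power box
// that takes proportionally longer can land at the same kWh for the encode.
const armWatts = 10;   // small ARM SBC under load (assumed)
const armHours = 12;   // encode time on the ARM box (assumed)
const x86Watts = 120;  // desktop CPU under load (assumed)
const x86Hours = 1;    // same encode on the desktop (assumed)

const armKwh = (armWatts * armHours) / 1000; // 0.12 kWh
const x86Kwh = (x86Watts * x86Hours) / 1000; // 0.12 kWh
console.log({ armKwh, x86Kwh }); // identical here by construction
```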

By default Tdarr will replace the source with the encode, but this can be changed if you want to keep the source content. Each node and the server need to be able to access the media at the same relative directory. This can be simplified with a centralized storage server. For a quick and dirty setup you could probably run a share from the Tdarr server, but using some form of NAS simplifies things.
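
If you want a quick sanity check that a new node actually sees the library at the same path the server uses, something like this works (the mount point here is hypothetical):

```typescript
import { existsSync } from "node:fs";

// The library path must resolve identically on the server and every node;
// "/media/library" is just an example mount point.
const libraryPath = "/media/library";
if (!existsSync(libraryPath)) {
  console.error(`missing mount: ${libraryPath}, check your NFS/SMB share`);
  process.exit(1);
}
console.log(`ok: ${libraryPath} is reachable on this machine`);
```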

A cool thing you can do with the nodes: set them up remotely. CPU encodes don't need much bandwidth unless it's a really fast CPU or fast settings, so a friend can lend you extra CPU/GPU this way. You could conceivably outsource all of your encodes that way too, but I'd imagine that could get expensive.
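
To put a number on the bandwidth point: a remote node only has to pull the source about as fast as it chews through it. With assumed numbers:

```typescript
// Hypothetical: a 10 GB source that takes 8 hours to encode on a slow CPU
// only needs to stream in at ~0.35 MB/s (~2.8 Mbit/s) to keep up.
const sourceMB = 10_000;
const encodeHours = 8;
const mbPerSec = sourceMB / (encodeHours * 3600);
console.log(`${mbPerSec.toFixed(2)} MB/s (~${(mbPerSec * 8).toFixed(1)} Mbit/s)`);
```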

You can separate material into directories in your source content library based on the type of content and the encode settings you want to use, then set those directories up as libraries in the Tdarr server with the matching encode settings. There are plugins that will do additional things on top of the ffmpeg/handbrake commands you want it to run, like re-ordering or removing audio streams, or remuxing loose .srt files in the same directory into the mkv. So it's pretty extensible, especially when you have content that would all use the same settings.
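
From memory, a classic Tdarr plugin is just a module that hands back an ffmpeg preset; the exact field names may differ by version, so double-check against the official plugin docs before using anything like this. A minimal sketch:

```typescript
// Sketch of a classic Tdarr plugin (field names from memory; verify against
// the plugin docs for your Tdarr version before relying on this).
const details = () => ({
  id: "Tdarr_Plugin_example_x265", // hypothetical id
  Stage: "Pre-processing",
  Name: "Example x265 CPU encode",
  Type: "Video",
  Operation: "Transcode",
  Description: "Re-encode everything in this library to x265 on the CPU.",
  Version: "1.0",
  Tags: "pre-processing,ffmpeg",
  Inputs: [],
});

const plugin = (file: any, librarySettings: any, inputs: any, otherArguments: any) => {
  const response = {
    processFile: true,
    // "<input args>,<output args>"; Tdarr splits the preset on the comma.
    preset: ",-map 0 -c:v libx265 -preset slow -crf 20 -c:a copy",
    container: ".mkv",
    handBrakeMode: false,
    FFmpegMode: true,
    reQueueAfter: false,
    infoLog: "transcoding to x265\n",
  };
  return response;
};

module.exports.details = details;
module.exports.plugin = plugin;
```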

There's some logic you can set up so certain things are done if Tdarr sees x when it scans a media file, but I've found manually separating the directories to be more reliable.
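
For example, the kind of scan-time check that logic does, simplified (the stream shape here mirrors ffprobe output, which is what Tdarr's scanner exposes):

```typescript
// Standalone sketch of a scan-time check: skip files already in HEVC.
// The stream fields mirror ffprobe output (simplified, assumed shape).
type Stream = { codec_type: string; codec_name: string };

function alreadyHevc(streams: Stream[]): boolean {
  const video = streams.find((s) => s.codec_type === "video");
  return video?.codec_name === "hevc";
}

// Example: this file would be skipped rather than re-encoded.
console.log(alreadyHevc([{ codec_type: "video", codec_name: "hevc" }])); // true
```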

Last major thing: you can set the priority of libraries. So you can have certain directories (like new content) set as highest priority, with lower priority for everything else. That way when new content is retrieved it goes first in line when a node is ready. Or if you want to prioritize encodes in a given library, you can move that up in priority.

The biggest issue I've run into with this is when content is different than expected, so it didn't go in the right library and bad settings get used for it. HEVC doesn't like dark scenes and grain, so if there's a lot of that it's good to look at it manually.

Older content shot on film, and older animation, can be somewhat finicky, while most newer content isn't a big deal. Newer animation can often be given very bad settings (even sent through a GPU instead of a CPU) and still look reasonably good.
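
To make the per-content-type settings concrete, here's roughly how I'd split output args per library (the CRF/preset numbers are just illustrative; x265 genuinely does have grain and animation tunes):

```typescript
// Hypothetical per-library ffmpeg output args for libx265. The tunes are
// real x265 options; the CRF and preset values are placeholder examples.
const libraryPresets: Record<string, string> = {
  "film-grain":   "-c:v libx265 -preset slow -crf 18 -tune grain -c:a copy",
  "tv-modern":    "-c:v libx265 -preset medium -crf 22 -c:a copy",
  "anime-modern": "-c:v libx265 -preset fast -crf 24 -tune animation -c:a copy",
};

console.log(libraryPresets["film-grain"]);
```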

There you go: no need to manually encode everything. Save that for the things that need it.