rffmpeg's People

Contributors

alefnode, aleksasiriski, alexh-name, crisidev, gitdeath, jewe37, joshuaboniface, markv9401, mothlyme, pyaniz, robho, shadowghost, sim0nw0lf

rffmpeg's Issues

/usr/local/bin/ffmpeg - Jellyfin

I did everything from SETUP.md and checked that /usr/local/bin has rffmpeg, ffmpeg, and ffprobe. I'm getting this message from Jellyfin:

We're unable to find FFmpeg using the path you've entered. FFprobe is also required and must exist in the same folder. These components are normally bundled together in the same download. Please check the path and try again.

(Question) Recent changes - where is this heading?

Hello,

Just a question, not an issue. I'm not sure whether this is the right place to post this; if not, it would be good to know where I should post it, perhaps the Jellyfin forum?
I forked rffmpeg a long time ago to add some extra functionality and make some tweaks for my setup (namely, support for remoting "vainfo" and workarounds to get Intel VAAPI working properly).
I see a ton of changes have been merged since my fork. Now there's even a database of some kind, though it's not exactly clear to me what it is used for.
What is the purpose of these changes? What was previously a simple remote ffmpeg invoker script seems to be turning into quite a complex program with a lot of extra functionality and some extra configuration steps. Are these changes intended purely for scaling rffmpeg to a lot of hosts, or are there other purposes as well?
I'm not sure whether I should merge these changes back to my fork or not.

Filename quoting issues?

rffmpeg is trying to run this:
/usr/bin/ssh -q -t -o ConnectTimeout=1 -o ConnectionAttempts=1 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ControlMaster=auto -o ControlPath=/run/shm/ssh-%r@%h:%p -o ControlPersist=300 -i /var/lib/jellyfin/.ssh/id_rsa jellyfin@yurie /usr/lib/jellyfin-ffmpeg/ffmpeg -dump_attachment:t -y -i "file:/mnt/Sarjat/Call of the Night (2022)/Season 01/Call of the Night - S01E11 - TBA HDTV-1080p.mkv" -t 0 -f null null
but it fails and running it manually gives this output:

bash: -c: line 1: syntax error near unexpected token `('
bash: -c: line 1: `/usr/lib/jellyfin-ffmpeg/ffmpeg -dump_attachment:t -y -i file:/mnt/Sarjat/Call of the Night (2022)/Season 01/Call of the Night - S01E11 - TBA HDTV-1080p.mkv -t 0 -f null null'

So the quotes disappear for some reason. This seems to happen only when it's doing this attachment dumping; when it's actually transcoding something, things seem to work as expected.
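For reference, a more robust way to build the remote command is to escape each argument with Python's shlex.quote, which handles parentheses, spaces, and embedded quotes in one pass. This is only a minimal sketch of the idea, not rffmpeg's actual code:

import shlex

def quote_remote_args(ffmpeg_args):
    # ssh joins its trailing arguments with spaces and hands the result to the
    # remote user's shell, so every argument must be shell-escaped individually.
    return [shlex.quote(arg) for arg in ffmpeg_args]

# A path with spaces and parentheses stays a single argument on the remote side:
print(" ".join(quote_remote_args([
    "-i",
    "file:/mnt/Sarjat/Call of the Night (2022)/Season 01/Call of the Night - S01E11 - TBA HDTV-1080p.mkv",
])))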

rffmpeg breaks some Jellyfin plugins

I found out that the Jellyfin Intro Skipper plugin broke after I started using rffmpeg. The fact that command output goes to stderr instead of stdout, except for the commands listed in special_flags, may break Jellyfin plugins that analyse the media library.

In the case of Jellyfin Intro Skipper, I had to edit rffmpeg and add the -muxers and -fp_format flags. Would it be possible to make special_flags configurable through the config.yml file?

By the way, why is the output redirected to stderr instead of stdout?
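For what it's worth, here is a minimal sketch of what a configurable special_flags could look like, assuming a hypothetical special_flags list were added under the rffmpeg key in the YAML config and merged with the built-in defaults (that key name is an assumption, not something the project currently supports):

import sys
import yaml

# Flags that already force output to stdout in the script quoted later on this page.
DEFAULT_SPECIAL_FLAGS = ["-version", "-encoders", "-decoders", "-hwaccels", "-filters", "-h"]

def load_special_flags(config_path="/etc/rffmpeg/rffmpeg.yml"):
    """Merge a hypothetical 'special_flags' list from the config with the defaults."""
    with open(config_path, "r") as cfgfile:
        o_config = yaml.load(cfgfile, Loader=yaml.BaseLoader)
    # 'special_flags' is an assumed key name; it does not exist in the sample config.
    extra = o_config.get("rffmpeg", {}).get("special_flags") or []
    return DEFAULT_SPECIAL_FLAGS + list(extra)

def pick_stdout(cli_ffmpeg_args, special_flags):
    """Send output to stdout when any special flag is present, otherwise to stderr."""
    return sys.stdout if any(arg in special_flags for arg in cli_ffmpeg_args) else sys.stderr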

Can't select rffmpeg in Jellyfin

(screenshots attached)

If I try to select rffmpeg in the Jellyfin settings, it tells me no ffmpeg was found.

I had all of this running already and tried the new version of rffmpeg with Jellyfin 10.8.1 and 10.9.0; both had this issue.

I had to change the user in the rffmpeg file to "root" to initialize and add hosts, but that part worked fine.

The symlinks are already set up in the folder just as suggested. Any idea how to make this work again?

Jellyfin in docker with docker volumes?

The setup guide assumes that the data directory is at /config and the cache at /cache. What if I want to use docker volumes? In my situation, the config is at /var/lib/docker/volumes/jellyfin-config/_data and the cache at /var/lib/docker/volumes/jellyfin-cache/_data. Accessing those paths requires root privileges, though.

Logging enhancement request

Problem Statement: The logfile name is pulled from the .yml file, but I can't figure out a way to append a variable date to the name of the log by changing the yml.

Request: Have the application pull the name from the yml exactly as it does today, but append a date as it writes the log.

Reason:
I'm interested in taking advantage of Jellyfin's log maintenance by storing the log file in the Jellyfin log directory, but the current implementation never 'rolls' the log, so it just continues to grow.
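A minimal sketch of the requested behaviour, assuming the logfile path still comes from the YAML config as it does today; the date format and the example path are purely illustrative:

import logging
from datetime import date

def build_dated_logfile(logfile):
    """Insert today's date before the extension, e.g. rffmpeg.log -> rffmpeg-2024-01-31.log."""
    base, dot, ext = logfile.rpartition(".")
    if dot:
        return f"{base}-{date.today().isoformat()}.{ext}"
    return f"{logfile}-{date.today().isoformat()}"

# Used in place of the plain logfile name when configuring logging:
logging.basicConfig(
    filename=build_dated_logfile("/var/log/jellyfin/rffmpeg.log"),
    level=logging.INFO,
    format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
)

Python's logging.handlers.TimedRotatingFileHandler would be another way to get rolling behaviour without changing the filename in the config.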

Add weights to servers

It would be nice if we could add weights and/or parameters to servers, for instance one server that can handle two 1080p streams alongside another that can run two 4K streams, as well as setting CPU and other options independently per host (NVENC on one, VAAPI on another, 4 threads vs. 12, etc.).

Not at all mandatory, but it's still a neat idea and would make this a lot more useful in my application.
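For what it's worth, the old configuration format parsed by the script quoted later on this page already accepts a per-host weight, which is floor-divided into the active-process count when picking a host. A sketch of that hosts section (host names are placeholders):

rffmpeg:
  remote:
    hosts:
      - name: "big-gpu-box"
        weight: 2    # two active processes on this host count as one when choosing a target
      - name: "small-box"
        weight: 1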

Dynamically Add/Remove Hosts

Is there a way to dynamically add or remove a node from the cluster -- or any way to do so other than editing the rffmpeg.yml file?
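For reference, the newer database-backed versions shown elsewhere on this page manage hosts through the rffmpeg command itself rather than by editing YAML, for example (run as the configured owner; the host name is a placeholder and the exact subcommand set may differ between versions):

rffmpeg init -y          # initialize the host database
rffmpeg add transcode1   # register a new transcoding host
rffmpeg status           # list configured hosts and their state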

We're unable to find FFmpeg using the path you've entered.

I'm using /usr/local/bin/rffmpeg and made sure it is executable with chmod +x /usr/local/bin/rffmpeg.

We're unable to find FFmpeg using the path you've entered. FFprobe is also required and must exist in the same folder. These components are normally bundled together in the same download. Please check the path and try again.

Using this image, you can read the Dockerfile here.

I actually encountered this error before and it somehow fixed itself; now it's back with this new version of rffmpeg...

Calling rffmpeg inside a wrapper

Hey there!

Thanks again for the awesome project. This issue might be out of rffmpeg's scope, but I've searched around a lot and am running out of ideas.

I wrote a wake-on-LAN wrapper for my Jellyfin server that wakes up the transcoding machine and waits for the SSH server to be up and running before calling rffmpeg. However, I'm running into issues with space escaping.

Calling ffmpeg directly using /path/to/rffmpeg/ffmpeg "$@" works, but it breaks Live TV transcoding. In that case, I need to replace 127.0.0.1:8096 with the external IP of Jellyfin.

So I need to rebuild the argument list:

args=()
for arg in "$@"; do
    args+=("$arg")
done
## sed to replace 127.0.0.1 by my IP address
/path/to/rffmpeg/ffmpeg "${args[@]}"

However, here the arguments are no longer quoted. I needed to add quotes around every arg: args+=("\"$arg\""). But if a quoted argument contains spaces, rffmpeg still sees the parts as separate arguments and tries to escape them anyway. That breaks when an argument contains special characters, for instance the user agent argument: "-user_agent" "Mozilla/5.0 "(Windows" NT 10.0; Win64; "x64)" AppleWebKit/537.36 "(KHTML," like "Gecko)" Chrome/64.0.3282.85 Safari/537.36".

I had to go into rffmpeg and edit run_remote_rffmpeg:

for arg in ffmpeg_args:
    # Match bad shell characters: * ' ( ) | [ ] or whitespace
    #if search("[*'()|\[\]\s]", arg):
    #    rffmpeg_ffmpeg_command.append(f'"{arg}"')
    #else:
    #    rffmpeg_ffmpeg_command.append(f"{arg}")
    rffmpeg_ffmpeg_command.append(f"{arg}")

That works. However, I can't use rffmpeg outside of that wrapper anymore.

Is there any way to make this cleaner? Am I missing something in my bash script that would allow something cleaner than my current solution?

Thank you for your help!

Unable to find ffmpeg

Hi, I am trying to use rffmpeg from inside a Jellyfin docker instance to reach a bare-metal transcoding machine running Ubuntu 22.04; however, Jellyfin is unable to pick up the binary. I have double-checked the paths, tried adding just the folder without ffmpeg specified, and even rebuilt the docker image, all with the same results.

(screenshots attached)

SSH port configuration?

Hi, this is a pretty straightforward question: can the port rffmpeg uses to SSH in be configured? The port for my SSH server is different for various reasons.
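One approach that should work with the configuration format used by the script quoted later on this page is to pass the port through the extra SSH arguments in rffmpeg.yml, since everything under remote: args: is appended to the ssh command line. The port number below is just a placeholder, and any existing args (such as the -i identity file) should be kept:

rffmpeg:
  remote:
    args:
      - "-p"
      - "2222"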

Kubernetes and Docker and CephFS

Hi.

Briefly looked at this repo as I have 5 servers but no GPUs.
I am running Jellyfin in a docker container on Kubernetes and saw that this doesn't have Docker support and relies on SSH.
Within Kubernetes, SSH into a pod is not really going to work well. I've seen some examples of an rsync server/client docker image, but I don't have much experience with it. For example, it can generate a fixed SSH key, but you'd need to use a StatefulSet deployment or a fixed ClusterIP for SSH to work.

Since this project is targeted at non-Docker installs, I am not sure I could adopt it - it's a neat idea - and painfully needed if you don't want to buy lots of beefy CPUs or GPUs.

TL;DR:
There seems to be no Docker support; I saw this: https://github.com/BasixKOR/rffmpeg-docker
SSH into a Kubernetes Docker container is probably not an easy/good idea.
Reading media and storing the transcodes in a SHARED filesystem that all Docker pods can access at the same time (think NFS), i.e. removing the SSH part by accessing a shared transcode directory, would help in my case specifically.

Enable backslash \ escaping

Hey all,

I started using ErsatzTV with rffmpeg. So far, so good! The only issue I'm running into is that it breaks whenever I use a space in my channel name.

The command is called with the argument -metadata "service_name=\"Bastflix TV\"". This then gets translated to -metadata "service_name="Bastflix Gaming"". The unescaped quotes break the command call.

Is there any way to "forward" those backslashes? I dug into the code but I'm not sure where to do that.

Thank you!

New setup with Emby, getting errors...

Every time Emby calls ffmpeg, this is the error it gets:

ssh: symbol lookup error: ssh: undefined symbol: ENGINE_register_all_complete, version OPENSSL_1_1_0

Any idea what this is? I can't seem to figure it out.

ssh pass auth

How can we use SSH password auth here? I am running solely on my LAN, so I really don't want to use SSH keys if I can help it, especially since I tried the key method and it doesn't work: it still asks for the transcode server's password, even though I set up the key not to require a passphrase.
I am running linuxserver jellyfin on my server and stock ubuntu on the transcode box.
Or how can we use sshpass with this?

Allow different path for local ffmpeg

Hi,

I'm using this project in a docker setup with 2 VMs.
One VM is the docker host (media) and one VM (transcode) has the virtual GPU attached.
Now, as the mounted paths between the docker host and the docker container differ (Jellyfin config dir on the host: /var/lib/jellyfin, Jellyfin config dir in the container: /config), it's a bit awkward to place the ffmpeg files.
I'm using "/usr/lib/jellyfin-ffmpeg/" as the path on the transcode machine right now, as this path also exists in the container, but it would be really cool if this local fallback ffmpeg were further configurable.

Regards and thanks for this amazing project
Ray
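For what it's worth, the old script quoted later on this page already reads optional fallback_ffmpeg and fallback_ffprobe keys that only affect the local fallback, so something like the following may already cover this, assuming the current version kept those keys (the paths are examples and can of course differ between remote and local):

rffmpeg:
  commands:
    # Paths used when running on a remote transcode host
    ffmpeg: "/usr/lib/jellyfin-ffmpeg/ffmpeg"
    ffprobe: "/usr/lib/jellyfin-ffmpeg/ffprobe"
    # Paths used only for the local fallback inside the container
    fallback_ffmpeg: "/usr/lib/jellyfin-ffmpeg/ffmpeg"
    fallback_ffprobe: "/usr/lib/jellyfin-ffmpeg/ffprobe"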

Feature: Allow host assignment based on hardware acceleration type

Effectively, one would have a tag appendable to a host specifying that it supports a given type of hardware acceleration. If this is detected via the -hwaccel ffmpeg argument, only hosts that allow this type of hardware acceleration are available.

For my specific use case I would like to be able to run everything that does not require any hardware acceleration locally and only attempt to use a remote transcode host when that is not an option.

I'm leaving this here for now; I might look into doing a PR at some point in the future. This still wouldn't require any changes to the ffmpeg arguments, and checking for this one argument should be simple enough.
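A minimal sketch of the tag check described above, assuming a hypothetical per-host list of supported hardware acceleration methods (none of this exists in rffmpeg today):

def wants_hwaccel(cli_ffmpeg_args):
    """Return the requested hwaccel method (e.g. 'vaapi'), or None if absent."""
    for i, arg in enumerate(cli_ffmpeg_args):
        if arg == "-hwaccel" and i + 1 < len(cli_ffmpeg_args):
            return cli_ffmpeg_args[i + 1]
    return None

def eligible_hosts(remote_hosts, cli_ffmpeg_args):
    """Filter hosts by a hypothetical 'hwaccels' tag; software jobs may run anywhere."""
    method = wants_hwaccel(cli_ffmpeg_args)
    if method is None:
        return remote_hosts  # no hardware acceleration requested: any host (or localhost) will do
    return [h for h in remote_hosts if method in h.get("hwaccels", [])]

# Example with hypothetical host entries:
hosts = [{"name": "gpu1", "hwaccels": ["vaapi"]}, {"name": "cpu1", "hwaccels": []}]
print(eligible_hosts(hosts, ["-hwaccel", "vaapi", "-i", "in.mkv"]))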

ERROR - Finished rffmpeg with return code 251

Getting these errors after enabling Low Power Encoding and Tone Mapping in the Jellyfin Playback menu. Edit: this may be a red herring, as the issue continues if I disable them, but it is also the only thing I've changed recently.

Jellyfin logs error:
MediaBrowser.Common.FfmpegException: ffmpeg image extraction failed for file: Location/Name rffmpeg Error: ERROR - Finished rffmpeg with return code 251
Jellyfin container, manually running the command, error:
bash: -c: line 1: syntax error near unexpected token `('

The command from rffmpeg log was the below:

/usr/bin/ssh -q -t -o ConnectTimeout=1 -o ConnectionAttempts=1 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ControlMaster=auto -o ControlPath=/run/ssh-%r@%h:%p -o ControlPersist=300 -i /rffmpeg/.ssh/id_rsa transcodessh@jellyfin-transcode-2 /usr/lib/jellyfin-ffmpeg/ffmpeg -f matroska,webm -ss 00:00:15.000 -i 'file:/media/tvshows/Name (YYYY) [tvdbid-Number]/Season 01/Series - Name (YYYY) - SXXEXX - Name [WEBDL-1080p][EAC3 2.0][EN+RO][x264]-playWEB.mkv' -threads 0 -v quiet -vframes 1 -vf 'scale=trunc(iw*sar):ih' -f image2 /cache/temp/CacheID.jpg

I found that from the Jellyfin container this part of the command worked:

/usr/bin/ssh -q -t -o ConnectTimeout=1 -o ConnectionAttempts=1 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ControlMaster=auto -o ControlPath=/run/ssh-%r@%h:%p -o ControlPersist=300 -i /rffmpeg/.ssh/id_rsa transcodessh@jellyfin-transcode-2

I also found that from both the Jellyfin container and the Remote Transcode container this part of the command worked:

/usr/lib/jellyfin-ffmpeg/ffmpeg -f matroska,webm -ss 00:00:15.000 -i 'file:/media/tvshows/Name (YYYY) [tvdbid-Number]/Season 01/Series - Name (YYYY) - SXXEXX - Name [WEBDL-1080p][EAC3 2.0][EN+RO][x264]-playWEB.mkv' -threads 0 -v quiet -vframes 1 -vf 'scale=trunc(iw*sar):ih' -f image2 /cache/temp/CacheID.jpg

If I adjust the full command as below, it works when run manually from the Jellyfin container:

/usr/bin/ssh -q -t -o ConnectTimeout=1 -o ConnectionAttempts=1 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ControlMaster=auto -o ControlPath=/run/ssh-%r@%h:%p -o ControlPersist=300 -i /rffmpeg/.ssh/id_rsa transcodessh@jellyfin-transcode-2 "/usr/lib/jellyfin-ffmpeg/ffmpeg -f matroska,webm -ss 00:00:15.000 -i 'file:/media/tvshows/Name (YYYY) [tvdbid-Number]/Season 01/Series - Name (YYYY) - SXXEXX - Name [WEBDL-1080p][EAC3 2.0][EN+RO][x264]-playWEB.mkv' -threads 0 -v quiet -vframes 1 -vf 'scale=trunc(iw*sar):ih' -f image2 /cache/temp/CacheID.jpg"

ERROR: Failed to load configuration: 'state' is missing

I just pulled the latest changes and copied them to my system, and now sudo -u jellyfin /usr/local/bin/ffmpeg -version outputs the error in the title, but I can't figure out why. I use the default state, so I had left it commented out in the config; I uncommented it, but that didn't change anything.

Emby - rffmpeg launches but getting errors

Hello,

After moving Emby away from the main server to its own VM, I decided to give rffmpeg a go since the VM has no GPU access.
I can't get encoding to work. Emby logs show that rffmpeg does launch and starts remote ffmpeg for transcoding, but Emby does not seem to play well with it and keeps throwing errors.
NFS is set up correctly, Emby is using the same data directory as the old Emby install on the main server, and the transcoding temporary directory is mounted as well. All files are accessible by the emby user.

Host is Ubuntu 18.04, VM is Ubuntu 22.04. Emby is version 4.6.7.0

Logs attached.

ffmpeg-transcode-9ce0b0ba-6275-4173-b498-89091c40ea52_1.txt
ffmpeg-transcode-e86321a1-ff13-4bd9-a938-9ec343f324f4_1.txt
ffmpeg-transcode-e4c3c600-56df-421f-b2c8-48933325307e_1.txt
rffmpeg.log
rffmpeg.yml.txt

embyserver.txt

P.S. ffdetect support was added by me; it's a binary blob used by Emby to detect hardware transcoders.

Repeat of issue #39 - ffmpeg validation failed

Hi,

I'm suffering from the same problem as others, with Jellyfin not detecting the symlinked ffmpeg files.
A little different: I'm running Jellyfin on FreeBSD, but it's been working just fine for everything else. I'm also using rffmpeg to talk to ffmpeg on a Windows system, and it all seems to be working OK from the command line.

Running:
sudo -u jellyfinserver /usr/local/bin/jrfm/ffmpeg -version

Returns (as expected):

ffmpeg version 5.1.2-Jellyfin Copyright (c) 2000-2022 the FFmpeg developers
built with gcc 10-win32 (GCC) 20220324
configuration: --prefix=/opt/ffmpeg --arch=x86_64 --target-os=mingw32 --cross-prefix=x86_64-w64-mingw32- --pkg-config=pkg-config --pkg-config-flags=--static --extra-libs='-lfftw3f -lstdc++' --extra-cflags=-DCHROMAPRINT_NODLL --extra-version=Jellyfin --disable-ffplay --disable-debug --disable-doc --disable-sdl2 --disable-ptx-compression --disable-w32threads --enable-pthreads --enable-shared --enable-lto --enable-gpl --enable-version3 --enable-schannel --enable-iconv --enable-libxml2 --enable-zlib --enable-lzma --enable-gmp --enable-chromaprint --enable-libfreetype --enable-libfribidi --enable-libfontconfig --enable-libass --enable-libbluray --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libwebp --enable-libvpx --enable-libzimg --enable-libx264 --enable-libx265 --enable-libsvtav1 --enable-libdav1d --enable-libfdk-aac --enable-opencl --enable-dxva2 --enable-d3d11va --enable-amf --enable-libmfx --enable-ffnvcodec --enable-cuda --enable-cuda-llvm --enable-cuvid --enable-nvdec --enable-nvenc
libavutil      57. 28.100 / 57. 28.100
libavcodec     59. 37.100 / 59. 37.100
libavformat    59. 27.100 / 59. 27.100
libavdevice    59.  7.100 / 59.  7.100
libavfilter     8. 44.100 /  8. 44.100
libswscale      6.  7.100 /  6.  7.100
libswresample   4.  7.100 /  4.  7.100
libpostproc    56.  6.100 / 56.  6.100

Checking the status, all looks OK:
sudo -u jellyfinserver /usr/local/bin/rffmpeg status
Hostname Servername ID Weight State Active Commands
192.168.36.187 192.168.36.187 1 1 idle N/A

When trying to add the ffmpeg path into Jellyfin, it immediately errors:
(screenshot of the error attached)

The debug log from rffmpeg has no entry at all for any of these tries; it's as if it never hit the file (or at least never hit a logging point). It does have entries from my manual runs, so logging is OK.

The Jellyfin log has the same error as the other bugs:

[2023-01-04 01:33:08.496 +00:00] [INF] [32] MediaBrowser.MediaEncoding.Encoder.MediaEncoder: Attempting to update encoder path to "/usr/local/bin/jrfm". pathType: "Custom"
[2023-01-04 01:33:08.500 +00:00] [ERR] [32] MediaBrowser.MediaEncoding.Encoder.MediaEncoder: FFmpeg validation: The process returned no result
[2023-01-04 01:33:08.504 +00:00] [ERR] [32] Jellyfin.Server.Middleware.ExceptionMiddleware: Error processing request. URL "POST" "/System/MediaEncoder/Path".
MediaBrowser.Common.Extensions.ResourceNotFoundException: Exception of type 'MediaBrowser.Common.Extensions.ResourceNotFoundException' was thrown.
at MediaBrowser.MediaEncoding.Encoder.MediaEncoder.UpdateEncoderPath(String path, String pathType)
at Jellyfin.Api.Controllers.ConfigurationController.UpdateMediaEncoderPath(MediaEncoderPathDto mediaEncoderPath)
at lambda_method1294(Closure , Object , Object[] )
at Microsoft.AspNetCore.Mvc.Infrastructure.ActionMethodExecutor.SyncActionResultExecutor.Execute(IActionResultTypeMapper mapper, ObjectMethodExecutor executor, Object controller, Object[] arguments)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.InvokeActionMethodAsync()
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.InvokeNextActionFilterAsync()
--- End of stack trace from previous location ---
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Rethrow(ActionExecutedContextSealed context)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.InvokeInnerFilterAsync()
--- End of stack trace from previous location ---
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.g__Awaited|25_0(ResourceInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.Rethrow(ResourceExecutedContextSealed context)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.InvokeFilterPipelineAsync()
--- End of stack trace from previous location ---
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.g__Awaited|17_0(ResourceInvoker invoker, Task task, IDisposable scope)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.g__Awaited|17_0(ResourceInvoker invoker, Task task, IDisposable scope)
at Microsoft.AspNetCore.Routing.EndpointMiddleware.g__AwaitRequestTask|6_0(Endpoint endpoint, Task requestTask, ILogger logger)
at Jellyfin.Server.Middleware.ServerStartupMessageMiddleware.Invoke(HttpContext httpContext, IServerApplicationHost serverApplicationHost, ILocalizationManager localizationManager)
at Jellyfin.Server.Middleware.WebSocketHandlerMiddleware.Invoke(HttpContext httpContext, IWebSocketManager webSocketManager)
at Jellyfin.Server.Middleware.IpBasedAccessValidationMiddleware.Invoke(HttpContext httpContext, INetworkManager networkManager)
at Jellyfin.Server.Middleware.LanFilteringMiddleware.Invoke(HttpContext httpContext, INetworkManager networkManager, IServerConfigurationManager serverConfigurationManager)
at Microsoft.AspNetCore.Authorization.Policy.AuthorizationMiddlewareResultHandler.HandleAsync(RequestDelegate next, HttpContext context, AuthorizationPolicy policy, PolicyAuthorizationResult authorizeResult)
at Microsoft.AspNetCore.Authorization.AuthorizationMiddleware.Invoke(HttpContext context)
at Jellyfin.Server.Middleware.QueryStringDecodingMiddleware.Invoke(HttpContext httpContext)
at Swashbuckle.AspNetCore.ReDoc.ReDocMiddleware.Invoke(HttpContext httpContext)
at Swashbuckle.AspNetCore.SwaggerUI.SwaggerUIMiddleware.Invoke(HttpContext httpContext)
at Swashbuckle.AspNetCore.Swagger.SwaggerMiddleware.Invoke(HttpContext httpContext, ISwaggerProvider swaggerProvider)
at Microsoft.AspNetCore.Authentication.AuthenticationMiddleware.Invoke(HttpContext context)
at Jellyfin.Server.Middleware.RobotsRedirectionMiddleware.Invoke(HttpContext httpContext)
at Jellyfin.Server.Middleware.LegacyEmbyRouteRewriteMiddleware.Invoke(HttpContext httpContext)
at Microsoft.AspNetCore.ResponseCompression.ResponseCompressionMiddleware.InvokeCore(HttpContext context)
at Jellyfin.Server.Middleware.ResponseTimeMiddleware.Invoke(HttpContext context, IServerConfigurationManager serverConfigurationManager)
at Jellyfin.Server.Middleware.ExceptionMiddleware.Invoke(HttpContext context)

Any clues on what I should try next?

Playback stuck after couple of minutes

This seems to be a problem most likely related to the "pausing" issued by Jellyfin via this option:

Throttle Transcodes
When a transcode or remux gets far enough ahead from the current playback position, pause the process so it will consume less resources. This is most useful when watching without seeking often. Turn this off if you experience playback issues.

The logs below show that encoding is paused and resumed; this works a couple of times, and then there's no resume and playback gets stuck. It happens after a couple of minutes of playback, not initially.

The SSH connection stays established and there's no connectivity issue: if playback is stopped or quit, rffmpeg sends all of its commands within the same SSH session at the network layer (no other SSH session gets established) and encoding is terminated, within the same set of logs. So network connectivity can be excluded.

It looks purely related to that throttling/pausing, which breaks after a couple of minutes, and it is otherwise really good Jellyfin functionality. I'm using the latest Jellyfin arm64 docker image (Digest: 24742feeeec6, linux/arm64/v8, 171.37 MB).

Now comes an interesting part: after fiddling with settings, disabling VAAPI acceleration and disabling throttling, it worked fine. I then re-enabled both VAAPI and throttling, and it has seemed to work for the last couple of tries.

Any thoughts on how to troubleshoot this further in case it happens again?

frame= 6605 fps= 81 q=27.0 size=N/A time=00:04:24.17 bitrate=N/A speed=3.24x    
frame= 6658 fps= 81 q=27.0 size=N/A time=00:04:26.28 bitrate=N/A speed=3.24x    
frame= 6707 fps= 81 q=28.0 size=N/A time=00:04:28.24 bitrate=N/A speed=3.24x    
Transcoding is paused. Press [u] to resume.



frame= 6759 fps= 81 q=26.0 size=N/A time=00:04:30.33 bitrate=N/A speed=3.25x    

[q] command received. Exiting.

[hls @ 0x557f10813c40] Opening '/ram_transcode/5ba1e2011fbd7d15a9dda2faaaf279db377.ts' for writing
[hls @ 0x557f10813c40] Opening '/ram_transcode/5ba1e2011fbd7d15a9dda2faaaf279db378.ts' for writing
frame= 6760 fps= 81 q=-1.0 Lsize=N/A time=00:04:30.40 bitrate=N/A speed=3.24x    
video:78387kB audio:12679kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
[libx264 @ 0x557f108141c0] frame I:28    Avg QP:24.36  size: 59950
[libx264 @ 0x557f108141c0] frame P:6732  Avg QP:27.39  size: 11674
[libx264 @ 0x557f108141c0] mb I  I16..4: 100.0%  0.0%  0.0%
[libx264 @ 0x557f108141c0] mb P  I16..4: 13.1%  0.0%  0.0%  P16..4: 21.3%  0.0%  0.0%  0.0%  0.0%    skip:65.5%
[libx264 @ 0x557f108141c0] coded y,uvDC,uvAC intra: 12.2% 13.7% 1.3% inter: 5.8% 4.2% 0.0%
[libx264 @ 0x557f108141c0] i16 v,h,dc,p: 37% 21% 20% 22%
[libx264 @ 0x557f108141c0] i8c dc,h,v,p: 56% 16% 21%  6%
[libx264 @ 0x557f108141c0] kb/s:2374.77
==> rffmpeg.log <==
2024-02-19 22:27:23,008 - rffmpeg[31497] - DEBUG - Using SQLite as database.
2024-02-19 22:27:23,065 - rffmpeg[31497] - DEBUG - SQLite connection closed.
2024-02-19 22:27:23,072 - rffmpeg[31497] - INFO - Finished rffmpeg with return code 0


scanning metadata doesn't work

At the moment I can't scan new movies/series because no metadata is scanned properly. To me it looks like ffprobe is not working correctly, but I have no idea if that's the issue.
I tried it with Jellyfin 10.8.1 and 10.9.0 (unstable), both with rffmpeg.
The old rffmpeg version worked fine; I just tested it again with the old version and scanning metadata was no issue. For all tests I used jellyfin-ffmpeg5.

old rffmpeg
#!/usr/bin/env python3

# rffmpeg.py - Remote FFMPEG transcoding for Jellyfin
#
#    Copyright (C) 2019-2020  Joshua M. Boniface <[email protected]>
#
#    This program is free software: you can redistribute it and/or modify
#    it under the terms of the GNU General Public License as published by
#    the Free Software Foundation, either version 3 of the License, or
#    (at your option) any later version.
#
#    This program is distributed in the hope that it will be useful,
#    but WITHOUT ANY WARRANTY; without even the implied warranty of
#    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
#    GNU General Public License for more details.
#
#    You should have received a copy of the GNU General Public License
#    along with this program.  If not, see <https://www.gnu.org/licenses/>.
#
###############################################################################
#
# rffmpeg works as a drop-in replacement to an existing ffmpeg binary. It is
# used to launch ffmpeg commands on a remote machine via SSH, while passing
# in any stdin from the calling environment. Its primary usecase is to enable
# a program such as Jellyfin to distribute its ffmpeg calls to remote machines
# that might be better suited to transcoding or processing ffmpeg.
#
# rffmpeg uses a configuration file, by default at `/etc/rffmpeg/rffmpeg.yml`,
# to specify a number of settings that the processes will use. This includes
# the remote system(s) to connect to, temporary directories, SSH configuration,
# and other settings.
#
###############################################################################

###############################################################################
# Imports and helper functions
###############################################################################

import logging
import os
import re
import signal
import subprocess
import sys

import yaml

log = logging.getLogger("rffmpeg")


###############################################################################
# Configuration parsing
###############################################################################

# Get configuration file
default_config_file = "/etc/rffmpeg/rffmpeg.yml"
config_file = os.environ.get("RFFMPEG_CONFIG", default_config_file)

# Parse the configuration
with open(config_file, "r") as cfgfile:
    try:
        o_config = yaml.load(cfgfile, Loader=yaml.BaseLoader)
    except Exception as e:
        log.error("ERROR: Failed to parse configuration file: %s", e)
        exit(1)

try:
    config = {
        "state_tempdir": o_config["rffmpeg"]["state"]["tempdir"],
        "state_filename": o_config["rffmpeg"]["state"]["filename"],
        "state_contents": o_config["rffmpeg"]["state"]["contents"],
        "log_to_file": o_config["rffmpeg"]["logging"]["file"],
        "logfile": o_config["rffmpeg"]["logging"]["logfile"],
        "remote_hosts": o_config["rffmpeg"]["remote"]["hosts"],
        "remote_user": o_config["rffmpeg"]["remote"]["user"],
        "remote_args": o_config["rffmpeg"]["remote"]["args"],
        "pre_commands": o_config["rffmpeg"]["commands"]["pre"],
        "ffmpeg_command": o_config["rffmpeg"]["commands"]["ffmpeg"],
        "ffprobe_command": o_config["rffmpeg"]["commands"]["ffprobe"],
    }
except Exception as e:
    log.error("ERROR: Failed to load configuration: %s is missing", e)
    exit(1)

# Handle the fallback configuration using get() to avoid failing
config["ssh_command"] = o_config["rffmpeg"]["commands"].get("ssh", "ssh")
config["remote_persist_time"] = int(o_config["rffmpeg"]["remote"].get("persist", 0))
config["state_persistdir"] = o_config["rffmpeg"]["state"].get("persistdir", '/run/shm')
config["fallback_ffmpeg_command"] = o_config["rffmpeg"]["commands"].get("fallback_ffmpeg", config["ffmpeg_command"])
config["fallback_ffprobe_command"] = o_config["rffmpeg"]["commands"].get("fallback_ffprobe", config["ffprobe_command"])

# Parse CLI args (ffmpeg command line)
all_args = sys.argv
cli_ffmpeg_args = all_args[1:]

# Get PID
current_statefile = config["state_tempdir"] + "/" + config["state_filename"].format(pid=os.getpid())

log.info("Starting rffmpeg %s: %s", os.getpid(), " ".join(all_args))


def get_target_host():
    """
    Determine the optimal target host
    """
    log.info("Determining target host")

    # Ensure the state directory exists or create it
    if not os.path.exists(config["state_tempdir"]):
        os.makedirs(config["state_tempdir"])

    # Check for existing state files
    state_files = os.listdir(config["state_tempdir"])

    # Read each statefile to determine which hosts are bad or in use
    bad_hosts = list()
    active_hosts = list()
    for state_file in state_files:
        with open(config["state_tempdir"] + "/" + state_file, "r") as statefile:
            contents = statefile.readlines()
            for line in contents:
                if re.match("^badhost", line):
                    bad_hosts.append(line.split()[1])
                    log.info("Found bad host mark from rffmpeg process %s for host '%s'", re.findall(r"[0-9]+", state_file)[0], line.split()[1])
                else:
                    active_hosts.append(line.split()[0])
                    log.info("Found running rffmpeg process %s against host '%s'", re.findall(r"[0-9]+", state_file)[0], line.split()[0])

    # Get the remote hosts list from the config
    remote_hosts = list()
    for host in config["remote_hosts"]:
        if type(host) is str or host.get("name", None) is None:
            host_name = host
        else:
            host_name = host.get("name")

        if type(host) is str or host.get("weight", None) is None:
            host_weight = 1
        else:
            host_weight = int(host.get("weight"))

        remote_hosts.append({ "name": host_name, "weight": host_weight, "count": 0, "weighted_count": 0, "bad": False })


    # Remove any bad hosts from the remote_hosts list
    for bhost in bad_hosts:
        for idx, rhost in enumerate(remote_hosts):
            if bhost == rhost["name"]:
                remote_hosts[idx]["bad"] = True

    # Find out which active hosts are in use
    for idx, rhost in enumerate(remote_hosts):
        # Determine process counts in active_hosts
        count = 0
        for ahost in active_hosts:
            if ahost == rhost["name"]:
                count += 1
        remote_hosts[idx]["count"] = count

    # Reweight the host counts by floor dividing count by weight
    for idx, rhost in enumerate(remote_hosts):
        if rhost["bad"]:
            continue
        if rhost["weight"] > 1:
            remote_hosts[idx]["weighted_count"] = rhost["count"] // rhost["weight"]
        else:
            remote_hosts[idx]["weighted_count"] = rhost["count"]

    # Select the host with the lowest weighted count (first host is parsed last)
    lowest_count = 999
    target_host = None
    for rhost in remote_hosts:
        if rhost["bad"]:
            continue
        if rhost["weighted_count"] < lowest_count:
            lowest_count = rhost["weighted_count"]
            target_host = rhost["name"]

    if not target_host:
        log.warning("Failed to find a valid target host - using local fallback instead")
        target_host = "localhost"

    # Write to our state file
    with open(current_statefile, "a") as statefile:
        statefile.write(config["state_contents"].format(host=target_host) + "\n")

    log.info("Selected target host '%s'", target_host)
    return target_host


def bad_host(target_host):
    log.info("Setting bad host %s", target_host)

    # Rewrite the statefile, removing all instances of the target_host that were added before
    with open(current_statefile, "r+") as statefile:
        new_statefile = statefile.readlines()
        statefile.seek(0)
        for line in new_statefile:
            if target_host not in line:
                statefile.write(line)
        statefile.truncate()

    # Add the bad host to the statefile
    # This will affect this run, as well as any runs that start while this one is active; once
    # this run is finished and its statefile removed, however, the host will be retried again
    with open(current_statefile, "a") as statefile:
        statefile.write("badhost " + config["state_contents"].format(host=target_host) + "\n")


def setup_remote_command(target_host):
    """
    Craft the target command
    """
    rffmpeg_ssh_command = list()
    rffmpeg_ffmpeg_command = list()

    # Add SSH component
    rffmpeg_ssh_command.append(config["ssh_command"])
    rffmpeg_ssh_command.append("-q")

    # Set our connection timeouts, in case one of several remote machines is offline
    rffmpeg_ssh_command.extend([ "-o", "ConnectTimeout=1" ])
    rffmpeg_ssh_command.extend([ "-o", "ConnectionAttempts=1" ])
    rffmpeg_ssh_command.extend([ "-o", "StrictHostKeyChecking=no" ])
    rffmpeg_ssh_command.extend([ "-o", "UserKnownHostsFile=/dev/null" ])

    # Use SSH control persistence to keep sessions alive for subsequent commands
    persist_time = config["remote_persist_time"]
    if persist_time > 0:
        rffmpeg_ssh_command.extend([ "-o", "ControlMaster=auto" ])
        rffmpeg_ssh_command.extend([ "-o", "ControlPath={}/ssh-%r@%h:%p".format(config["state_persistdir"]) ])
        rffmpeg_ssh_command.extend([ "-o", "ControlPersist={}".format(persist_time) ])

    for arg in config["remote_args"]:
        if arg:
            rffmpeg_ssh_command.append(arg)

    # Add user+host string
    rffmpeg_ssh_command.append("{}@{}".format(config["remote_user"], target_host))
    log.info("Running as %s@%s", config["remote_user"], target_host)

    # Add any pre command
    for cmd in config["pre_commands"]:
        if cmd:
            rffmpeg_ffmpeg_command.append(cmd)

    # Prepare our default stdin/stdout/stderr (normally, stdout to stderr)
    stdin = sys.stdin
    stdout = sys.stderr
    stderr = sys.stderr

    # Verify if we're in ffmpeg or ffprobe mode
    if "ffprobe" in all_args[0]:
        rffmpeg_ffmpeg_command.append(config["ffprobe_command"])
        stdout = sys.stdout
    else:
        rffmpeg_ffmpeg_command.append(config["ffmpeg_command"])

    # Determine if version, encoders, or decoders is an argument; if so, we output stdout to stdout
    # Weird workaround for something Jellyfin requires...
    specials = ["-version", "-encoders", "-decoders", "-hwaccels", "-filters", "-h"]
    if any(item in specials for item in cli_ffmpeg_args):
        stdout = sys.stdout

    # Parse and re-quote any problematic arguments
    for arg in cli_ffmpeg_args:
        # Match bad shell characters: * ' ( ) whitespace
        if re.search("[*'()\s|\[\]]", arg):
            rffmpeg_ffmpeg_command.append('"{}"'.format(arg))
        else:
            rffmpeg_ffmpeg_command.append("{}".format(arg))

    return rffmpeg_ssh_command, rffmpeg_ffmpeg_command, stdin, stdout, stderr


def run_command(rffmpeg_ssh_command, rffmpeg_ffmpeg_command, stdin, stdout, stderr):
    """
    Execute the command using subprocess
    """
    rffmpeg_command = rffmpeg_ssh_command + rffmpeg_ffmpeg_command
    p = subprocess.run(
        rffmpeg_command, shell=False, bufsize=0, universal_newlines=True, stdin=stdin, stderr=stderr, stdout=stdout
    )
    returncode = p.returncode

    return returncode


def run_local_ffmpeg():
    """
    Fallback call to local ffmpeg
    """
    rffmpeg_ffmpeg_command = list()

    # Prepare our default stdin/stdout/stderr (normally, stdout to stderr)
    stdin = sys.stdin
    stdout = sys.stderr
    stderr = sys.stderr

    # Verify if we're in ffmpeg or ffprobe mode
    if "ffprobe" in all_args[0]:
        rffmpeg_ffmpeg_command.append(config["fallback_ffprobe_command"])
        stdout = sys.stdout
    else:
        rffmpeg_ffmpeg_command.append(config["fallback_ffmpeg_command"])

    # Determine if version, encoders, or decoders is an argument; if so, we output stdout to stdout
    # Weird workaround for something Jellyfin requires...
    specials = ["-version", "-encoders", "-decoders", "-hwaccels", "-filters", "-h"]
    if any(item in specials for item in cli_ffmpeg_args):
        stdout = sys.stdout

    # Parse and re-quote any problematic arguments
    for arg in cli_ffmpeg_args:
        rffmpeg_ffmpeg_command.append("{}".format(arg))

    log.info("Local command: %s", " ".join(rffmpeg_ffmpeg_command))

    return run_command([], rffmpeg_ffmpeg_command, stdin, stdout, stderr)


def run_remote_ffmpeg(target_host):
    rffmpeg_ssh_command, rffmpeg_ffmpeg_command, stdin, stdout, stderr = setup_remote_command(target_host)
    log.info("Remote command: %s '%s'", " ".join(rffmpeg_ssh_command), " ".join(rffmpeg_ffmpeg_command))

    return run_command(rffmpeg_ssh_command, rffmpeg_ffmpeg_command, stdin, stdout, stderr)


def cleanup(signum="", frame=""):
    # Remove the current statefile
    try:
        os.remove(current_statefile)
    except FileNotFoundError:
        pass


def main():
    signal.signal(signal.SIGTERM, cleanup)
    signal.signal(signal.SIGINT, cleanup)
    signal.signal(signal.SIGQUIT, cleanup)
    signal.signal(signal.SIGHUP, cleanup)

    log_to_file = config.get("log_to_file", False)
    if log_to_file:
        logfile = config.get("logfile")
        logging.basicConfig(
            filename=logfile, level=logging.INFO, format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"
        )
    else:
        logging.basicConfig(level=logging.INFO, format="%(asctime)s - %(name)s - %(levelname)s - %(message)s")

    log.info("Starting rffmpeg PID %s", os.getpid())

    # Main process loop; executes until the ffmpeg command actually runs on a reachable host
    returncode = 1
    while True:
        target_host = get_target_host()
        if target_host == "localhost":
            returncode = run_local_ffmpeg()
            break
        else:
            returncode = run_remote_ffmpeg(target_host)

            # A returncode of 255 means that the SSH process failed;
            # ffmpeg does not throw this return code (https://ffmpeg.org/pipermail/ffmpeg-user/2013-July/016245.html)
            if returncode == 255:
                log.info(
                    "SSH failed to host %s with retcode %s: marking this host as bad and retrying",
                    target_host,
                    returncode,
                )
                bad_host(target_host)
            else:
                # The SSH succeeded, so we can abort the loop
                break

    cleanup()
    if returncode == 0:
        log.info("Finished rffmpeg PID %s with return code %s", os.getpid(), returncode)
    else:
        log.error("Finished rffmpeg PID %s with return code %s", os.getpid(), returncode)
    exit(returncode)


if __name__ == "__main__":
    main()

The lower episode was scanned with the new ffmpeg and new rffmpeg, the older one with the old ffmpeg and rffmpeg.
Before, it got the runtime and all the metadata, but now it doesn't get any information about the file.
(screenshot attached)

This is what it should look like:
(screenshot attached)

but the next episode looks like this:
(screenshot attached)

Remote side quoting issue

Big thanks to @joshuaboniface for this!
This problem is similar to, if not the same as, #28, though I couldn't find a solution there.
Everything is set up and the tests (version) work; paths are identical on both sides. Both sides are docker containers (on different hosts) with data mapped via NFS using exactly the same paths.

The playback fails.

Any help in terms of troubleshooting would be great.

From my side, I tried a wrapper script on the remote side, though then it loses all quotes; probably a mistake on my end:
wrapper on remote side

#!/bin/bash
LOG_FILE="/ram_transcode/ffmpeg-remote.log"

COMMAND="$0"
ARGS=("${@:1}")

# tested both options
#echo "Command: $COMMAND" >> "$LOG_FILE"
#echo "Arguments: ${ARGS[@]}" >> "$LOG_FILE"
echo "$0 $@" >> "$LOG_FILE"

exec "${COMMAND}.real" "${ARGS[@]}"

rffmpeg log

2023-10-08 15:05:02,929 - rffmpeg[15996] - INFO - Starting rffmpeg as /usr/local/bin/ffmpeg with args: -analyzeduration 200M -f matroska,webm -autorotate 0 -i file:/videos/private/Vide title  (1900)/Vide title  (1900) WEBDL-1080p.mkv -map_metadata -1 -map_chapters -1 -threads 5 -map 0:0 -map 0:1 -map -0:s -codec:v:0 libx264 -preset ultrafast -crf 28 -maxrate 292000 -bufsize 584000 -x264opts:0 subme=0:me_range=4:rc_lookahead=10:me=dia:no_chroma_me:8x8dct=0:partitions=none -force_key_frames:0 expr:gte(t,0+n_forced*3) -sc_threshold:v:0 0 -vf setparams=color_primaries=bt709:color_trc=bt709:colorspace=bt709,scale=trunc(min(max(iw\,ih*a)\,426)/2)*2:trunc(ow/a/2)*2,format=yuv420p -codec:a:0 libfdk_aac -ac 2 -ab 128000 -af volume=2 -copyts -avoid_negative_ts disabled -max_muxing_queue_size 2048 -f hls -max_delay 5000000 -hls_time 3 -hls_segment_type mpegts -start_number 0 -hls_segment_filename /ram_transcode/b935966a8c3f4b1c3539bc0b759118a5%d.ts -hls_playlist_type vod -hls_list_size 0 -y /ram_transcode/b935966a8c3f4b1c3539bc0b759118a5.m3u8
2023-10-08 15:05:02,929 - rffmpeg[15996] - DEBUG - Determining optimal target host
2023-10-08 15:05:02,930 - rffmpeg[15996] - DEBUG - Using SQLite as database.
2023-10-08 15:05:02,932 - rffmpeg[15996] - DEBUG - SQLite connection closed.
2023-10-08 15:05:02,932 - rffmpeg[15996] - DEBUG - Using SQLite as database.
2023-10-08 15:05:02,934 - rffmpeg[15996] - DEBUG - SQLite connection closed.
2023-10-08 15:05:02,934 - rffmpeg[15996] - DEBUG - Trying host ID 1 'rffmpeg.other.host'
2023-10-08 15:05:02,934 - rffmpeg[15996] - DEBUG - Running SSH test
2023-10-08 15:05:02,996 - rffmpeg[15996] - DEBUG - SSH test succeeded with retcode 0
2023-10-08 15:05:02,997 - rffmpeg[15996] - DEBUG - Selecting host as idle
2023-10-08 15:05:02,997 - rffmpeg[15996] - DEBUG - Found optimal host ID 1 'rffmpeg.other.host' (rffmpeg.other.host)
2023-10-08 15:05:02,998 - rffmpeg[15996] - INFO - Running command on host 'rffmpeg.other.host' (rffmpeg.other.host)
2023-10-08 15:05:02,999 - rffmpeg[15996] - DEBUG - Remote command: /usr/bin/ssh -q -t -o ConnectTimeout=1 -o ConnectionAttempts=1 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ControlMaster=auto -o ControlPath=/run//ssh-%r@%h:%p -o ControlPersist=300 -i /config/.ssh/id_rsa [email protected] /usr/lib/jellyfin-ffmpeg/ffmpeg -analyzeduration 200M -f matroska,webm -autorotate 0 -i 'file:/videos/private/Vide title  (1900)/Vide title  (1900) WEBDL-1080p.mkv' -map_metadata -1 -map_chapters -1 -threads 5 -map 0:0 -map 0:1 -map -0:s -codec:v:0 libx264 -preset ultrafast -crf 28 -maxrate 292000 -bufsize 584000 -x264opts:0 subme=0:me_range=4:rc_lookahead=10:me=dia:no_chroma_me:8x8dct=0:partitions=none -force_key_frames:0 'expr:gte(t,0+n_forced*3)' -sc_threshold:v:0 0 -vf 'setparams=color_primaries=bt709:color_trc=bt709:colorspace=bt709,scale=trunc(min(max(iw\,ih*a)\,426)/2)*2:trunc(ow/a/2)*2,format=yuv420p' -codec:a:0 libfdk_aac -ac 2 -ab 128000 -af volume=2 -copyts -avoid_negative_ts disabled -max_muxing_queue_size 2048 -f hls -max_delay 5000000 -hls_time 3 -hls_segment_type mpegts -start_number 0 -hls_segment_filename /ram_transcode/b935966a8c3f4b1c3539bc0b759118a5%d.ts -hls_playlist_type vod -hls_list_size 0 -y /ram_transcode/b935966a8c3f4b1c3539bc0b759118a5.m3u8
2023-10-08 15:05:03,000 - rffmpeg[15996] - DEBUG - Using SQLite as database.
2023-10-08 15:05:03,013 - rffmpeg[15996] - DEBUG - SQLite connection closed.
2023-10-08 15:05:03,363 - rffmpeg[15996] - DEBUG - Using SQLite as database.
2023-10-08 15:05:03,375 - rffmpeg[15996] - DEBUG - SQLite connection closed.
2023-10-08 15:05:03,375 - rffmpeg[15996] - ERROR - Finished rffmpeg with return code 1

Jellyfin side - for whatever reason it shows /videos in the path. That is URL/URI-based rather than filesystem-based, hence the suspicion that the problem is there.

[2023-10-08 15:05:03.426 +02:00] [ERR] [86] Jellyfin.Api.Helpers.TranscodingJobHelper: FFmpeg exited with code 1
[2023-10-08 15:05:03.446 +02:00] [ERR] [60] Jellyfin.Server.Middleware.ExceptionMiddleware: Error processing request. URL "GET" "/videos/58b6fecc-97e3-fba5-039c-617abfb3e88e/hls1/main/0.ts".
MediaBrowser.Common.FfmpegException: FFmpeg exited with code 1
   at Jellyfin.Api.Helpers.TranscodingJobHelper.StartFfMpeg(StreamState state, String outputPath, String commandLineArguments, HttpRequest request, TranscodingJobType transcodingJobType, CancellationTokenSource cancellationTokenSource, String workingDirectory)
   at Jellyfin.Api.Controllers.DynamicHlsController.GetDynamicSegment(StreamingRequestDto streamingRequest, Int32 segmentId)
   at Jellyfin.Api.Controllers.DynamicHlsController.GetHlsVideoSegment(Guid itemId, String playlistId, Int32 segmentId, String container, Int64 runtimeTicks, Int64 actualSegmentLengthTicks, Nullable`1 static, String params, String tag, String deviceProfileId, String playSessionId, String segmentContainer, Nullable`1 segmentLength, Nullable`1 minSegments, String mediaSourceId, String deviceId, String audioCodec, Nullable`1 enableAutoStreamCopy, Nullable`1 allowVideoStreamCopy, Nullable`1 allowAudioStreamCopy, Nullable`1 breakOnNonKeyFrames, Nullable`1 audioSampleRate, Nullable`1 maxAudioBitDepth, Nullable`1 audioBitRate, Nullable`1 audioChannels, Nullable`1 maxAudioChannels, String profile, String level, Nullable`1 framerate, Nullable`1 maxFramerate, Nullable`1 copyTimestamps, Nullable`1 startTimeTicks, Nullable`1 width, Nullable`1 height, Nullable`1 maxWidth, Nullable`1 maxHeight, Nullable`1 videoBitRate, Nullable`1 subtitleStreamIndex, Nullable`1 subtitleMethod, Nullable`1 maxRefFrames, Nullable`1 maxVideoBitDepth, Nullable`1 requireAvc, Nullable`1 deInterlace, Nullable`1 requireNonAnamorphic, Nullable`1 transcodingMaxAudioChannels, Nullable`1 cpuCoreLimit, String liveStreamId, Nullable`1 enableMpegtsM2TsMode, String videoCodec, String subtitleCodec, String transcodeReasons, Nullable`1 audioStreamIndex, Nullable`1 videoStreamIndex, Nullable`1 context, Dictionary`2 streamOptions)
   at lambda_method1130(Closure , Object )
   at Microsoft.AspNetCore.Mvc.Infrastructure.ActionMethodExecutor.TaskOfActionResultExecutor.Execute(IActionResultTypeMapper mapper, ObjectMethodExecutor executor, Object controller, Object[] arguments)
   at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.<InvokeActionMethodAsync>g__Awaited|12_0(ControllerActionInvoker invoker, ValueTask`1 actionResultValueTask)
   at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.<InvokeNextActionFilterAsync>g__Awaited|10_0(ControllerActionInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)
   at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Rethrow(ActionExecutedContextSealed context)
   at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)
   at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.<InvokeInnerFilterAsync>g__Awaited|13_0(ControllerActionInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)
   at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeNextResourceFilter>g__Awaited|25_0(ResourceInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)
   at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.Rethrow(ResourceExecutedContextSealed context)
   at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)
   at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeFilterPipelineAsync>g__Awaited|20_0(ResourceInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)
   at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeAsync>g__Awaited|17_0(ResourceInvoker invoker, Task task, IDisposable scope)
   at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeAsync>g__Awaited|17_0(ResourceInvoker invoker, Task task, IDisposable scope)
   at Microsoft.AspNetCore.Routing.EndpointMiddleware.<Invoke>g__AwaitRequestTask|6_0(Endpoint endpoint, Task requestTask, ILogger logger)
   at Jellyfin.Server.Middleware.ServerStartupMessageMiddleware.Invoke(HttpContext httpContext, IServerApplicationHost serverApplicationHost, ILocalizationManager localizationManager)
   at Jellyfin.Server.Middleware.WebSocketHandlerMiddleware.Invoke(HttpContext httpContext, IWebSocketManager webSocketManager)
   at Jellyfin.Server.Middleware.IpBasedAccessValidationMiddleware.Invoke(HttpContext httpContext, INetworkManager networkManager)
   at Jellyfin.Server.Middleware.LanFilteringMiddleware.Invoke(HttpContext httpContext, INetworkManager networkManager, IServerConfigurationManager serverConfigurationManager)
   at Microsoft.AspNetCore.Authorization.Policy.AuthorizationMiddlewareResultHandler.HandleAsync(RequestDelegate next, HttpContext context, AuthorizationPolicy policy, PolicyAuthorizationResult authorizeResult)
   at Microsoft.AspNetCore.Authorization.AuthorizationMiddleware.Invoke(HttpContext context)
   at Jellyfin.Server.Middleware.QueryStringDecodingMiddleware.Invoke(HttpContext httpContext)
   at Swashbuckle.AspNetCore.ReDoc.ReDocMiddleware.Invoke(HttpContext httpContext)
   at Swashbuckle.AspNetCore.SwaggerUI.SwaggerUIMiddleware.Invoke(HttpContext httpContext)
   at Swashbuckle.AspNetCore.Swagger.SwaggerMiddleware.Invoke(HttpContext httpContext, ISwaggerProvider swaggerProvider)
   at Microsoft.AspNetCore.Authentication.AuthenticationMiddleware.Invoke(HttpContext context)
   at Jellyfin.Server.Middleware.RobotsRedirectionMiddleware.Invoke(HttpContext httpContext)
   at Jellyfin.Server.Middleware.LegacyEmbyRouteRewriteMiddleware.Invoke(HttpContext httpContext)
   at Microsoft.AspNetCore.ResponseCompression.ResponseCompressionMiddleware.InvokeCore(HttpContext context)
   at Jellyfin.Server.Middleware.ResponseTimeMiddleware.Invoke(HttpContext context, IServerConfigurationManager serverConfigurationManager)
   at Jellyfin.Server.Middleware.ExceptionMiddleware.Invoke(HttpContext context)

unable to open database file

I just tried to rebuild my docker container and this happened. Not sure what's wrong 🤔

# docker exec -it jellyfin rffmpeg init

Traceback (most recent call last):
  File "/usr/local/bin/rffmpeg", line 831, in <module>
    run_control(config)
  File "/usr/local/bin/rffmpeg", line 819, in run_control
    return rffmpeg_click(obj={})
  File "/usr/lib/python3/dist-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/usr/lib/python3/dist-packages/click/core.py", line 1256, in invoke
    Command.invoke(self, ctx)
  File "/usr/lib/python3/dist-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/lib/python3/dist-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/usr/local/bin/rffmpeg", line 496, in rffmpeg_click
    with dbconn(config) as cur:
  File "/usr/lib/python3.9/contextlib.py", line 117, in __enter__
    return next(self.gen)
  File "/usr/local/bin/rffmpeg", line 50, in dbconn
    conn = sqlite_connect(config["db_path"])
sqlite3.OperationalError: unable to open database file

My Dockerfile hasn't changed:

FROM alpine AS qemu

# QEMU Download
ENV QEMU_URL https://github.com/balena-io/qemu/releases/download/v4.0.0%2Bbalena2/qemu-4.0.0.balena2-aarch64.tar.gz
RUN apk add curl && curl -L ${QEMU_URL} | tar zxvf - -C . --strip-components 1

FROM --platform=linux/arm64 jellyfin/jellyfin:latest
#unstable
# Add QEMU
COPY --from=qemu qemu-aarch64-static /usr/bin

# Add Labels
LABEL maintainer [email protected]

RUN apt update && \
    apt install -y openssh-client python3-click python3-yaml wget &&\
    wget https://raw.githubusercontent.com/joshuaboniface/rffmpeg/master/rffmpeg -O /usr/local/bin/rffmpeg && \
    chmod +x /usr/local/bin/rffmpeg && \
    ln -s /usr/local/bin/rffmpeg /usr/local/bin/ffmpeg && \
    ln -s /usr/local/bin/rffmpeg /usr/local/bin/ffprobe && \
    mkdir -p /etc/rffmpeg && \
    wget https://raw.githubusercontent.com/joshuaboniface/rffmpeg/master/rffmpeg.yml.sample -O /etc/rffmpeg/rffmpeg.yml && \
    #configure config
    sed -i 's;#persist: "/run/shm";persist: "/run";g' /etc/rffmpeg/rffmpeg.yml && \
    sed -i 's;#state: "/var/lib/rffmpeg";state: "/config/rffmpeg";g' /etc/rffmpeg/rffmpeg.yml && \
    sed -i 's;#pre:;pre:;g' /etc/rffmpeg/rffmpeg.yml && \
    sed -i 's;#    - "";    - "sudo";g' /etc/rffmpeg/rffmpeg.yml && \
    sed -i 's;#owner: jellyfin;owner: root;g' /etc/rffmpeg/rffmpeg.yml && \
    sed -i 's;state: "/config/rffmpeg";state: "/var/lib/rffmpeg";g' /etc/rffmpeg/rffmpeg.yml && \
    sed -i 's;#user: jellyfin;user: ubuntu;g' /etc/rffmpeg/rffmpeg.yml && \
    sed -i 's;#persist: 300;persist: 0;g' /etc/rffmpeg/rffmpeg.yml && \

    #reinitialize rffmpeg
#    rffmpeg init -y && \
#    rffmpeg add Oracle4 && \
#    rffmpeg add Oracle5 && \
#    rffmpeg add Oracle6 && \
#    rffmpeg add Oracle7 && \
#    rffmpeg add Oracle8 && \   
    #clean up
    apt purge wget -y && \
    rm -rf /var/lib/apt/lists/* && \
    apt autoremove --purge -y && \
    apt clean

ENTRYPOINT "./jellyfin/jellyfin"

Does FFMPEG support multi CPU sockets?

Hello! Just found this awesome repository, looks awesome. I'm going to be setting it up. I'm just wondering if it can take advantage of a multi-CPU server, say, a server with 4 physical CPUs as an example.

If not already natively supported, I could see a setting specifying that the number of ffmpeg instances corresponds to the number of CPUs. So if rffmpeg sends off 2 different encoding/transcoding jobs, the first one would open up on CPU 0 and the next on CPU 1. Just an idea.
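
For what it's worth, this kind of socket pinning usually happens outside ffmpeg itself, via something like numactl or taskset prepended to the command. Below is a minimal sketch of the round-robin idea, assuming numactl is installed on the worker and each physical CPU is its own NUMA node; the 4-socket count, the ffmpeg path and the job index are purely illustrative:

# Round-robin transcodes across CPU sockets by prefixing the command with
# numactl. Purely a sketch of the idea above, not part of rffmpeg.
import subprocess

NUM_SOCKETS = 4  # hypothetical 4-socket worker

def run_pinned(ffmpeg_args, job_index):
    node = job_index % NUM_SOCKETS  # pick a socket for this job
    cmd = ["numactl", f"--cpunodebind={node}", f"--membind={node}",
           "/usr/lib/jellyfin-ffmpeg/ffmpeg"] + list(ffmpeg_args)
    return subprocess.run(cmd)  # blocks until ffmpeg exits

# The second concurrent job (index 1) would land on socket 1:
# run_pinned(["-i", "input.mkv", "-f", "null", "-"], job_index=1)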

Not sure if it's even possible. But I at least wanted to tell you how awesome I think this idea is and it will definitely be coming in handy!

Kubernetes operator

Not really an issue, but I’ve had this idea to use a Kubernetes operator to spin up pods for transcoding for a while now. Not sure if this project would be a good starting point for this but thought I’d share the idea. It might also be similar to https://github.com/ccremer/clustercode. This would probably require some more integration on the Jellyfin side as well so when a new transcode is started it makes some sort of call which would then trigger the creation of a transcoding pod.

RFE: HW acceleration fallback to no acceleration or per destination HW accel enabled

Would it be possible to add a feature to disable HW acceleration for fallback transcoding on the local Jellyfin host?

Reasoning/Scenario:
The preferred transcoding host (SNUC) has HW acceleration (in my case VA-API), while the local Jellyfin system (N2) is SoC-based with no HW accel.

When the SNUC is down, rffmpeg will/should fall back to N2-based transcoding; what is missing at the moment is a way to not use any HW accel in that case.

Currently, if HW accel is on, fallback transcoding won't work. If HW accel is off, the benefits of HW accel (available 95% of the time) are lost, although when the remote host is down the outage is at least not very visible to the family.

Best would be to get most of both worlds by adding a function where rffmpeg updates the ffmpeg command itself when falling back to local encoding.

If possible, and that would be the preferred option, rffmpeg could probe the remote host (or have it statically configured when the host is added) and select the appropriate HW accel. In that case the Jellyfin side would see no HW accel configured, and everything would be handled/enabled at the moment rffmpeg calls the remote ffmpeg.

Most of the time it comes down to using the appropriate ffmpeg codec switch, e.g. h264_vaapi for VA-API (-codec:v:0 h264_vaapi).
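
To illustrate the first idea, here is a very rough sketch (not rffmpeg's actual behaviour) of what rewriting the command on local fallback could look like: drop the VA-API options and swap the encoder for software x264. A real implementation would also have to rewrite -vf chains that use VA-API filters (hwupload, scale_vaapi, ...):

# Sketch only: strip VA-API flags and fall back to libx264 when running locally.
HW_FLAGS = {"-hwaccel", "-hwaccel_output_format", "-vaapi_device"}

def soften_for_local_fallback(args):
    out = []
    it = iter(args)
    for arg in it:
        if arg in HW_FLAGS:
            next(it, None)  # drop the flag's value as well
            continue
        out.append("libx264" if arg == "h264_vaapi" else arg)
    return out

# soften_for_local_fallback(["-hwaccel", "vaapi", "-codec:v:0", "h264_vaapi"])
# -> ["-codec:v:0", "libx264"]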

[Feature Request] disable transcode on localhost and waking up remote machines

I've been using rffmpeg for a week or so on a Jellyfin server running on a Raspberry Pi 400 (the one that is a keyboard) and it's been great being able to use a few remote PCs to transcode. But there are a few things I'd like to have added, if possible natively, so it can be used more widely.

I will start with an option to differentiate between a screen grab and a full transcode: the screen grabber runs slowly but fine on the Pi itself, whereas a transcode is impossible. It would be nice to default screenshot extraction to localhost and not let transcodes happen on the server. Alternatively, if it is too hard to tell whether a job is just a screen grab, a feature to disable localhost as a fallback would do, as I prefer no video over one rendered on the Pi.

The other nice-to-have (for me at least) is keeping the remote servers in sleep mode. I edited the script to add a wakeonlan call that wakes all the servers before selecting one. This works relatively well, but waking every server is somewhat of a burden, as it may wake a server that ends up getting no work.
Adding a MAC address field to the config file and sending a magic packet to that machine before pinging for availability would be great (I would write the code myself but I am a novice in Python); a sleep may also be needed to wait for the remote machine to fully wake up.
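
A minimal sketch of the magic-packet part, assuming each host entry gained a hypothetical mac field in the config: build the standard Wake-on-LAN payload (6 x 0xFF followed by the MAC repeated 16 times), broadcast it on UDP port 9, then wait a bit before the usual ping/SSH test:

# Sketch of a Wake-on-LAN helper; the "mac" config field is hypothetical.
import socket, time

def wake_host(mac, broadcast="255.255.255.255", wait=20):
    payload = bytes.fromhex("FF" * 6 + mac.replace(":", "") * 16)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(payload, (broadcast, 9))
    time.sleep(wait)  # give the machine time to come up before the SSH test

# wake_host("aa:bb:cc:dd:ee:ff")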

Setting things up this way lets one save on power consumption, noise and heat when running at home as I do. The drawback is that it can cause trouble with the dead-host marking procedure: I've found that sometimes a reboot of the Pi is needed because rffmpeg refuses to send anything remotely, saying the host is not available, even when the remote machine is running idle and not even asleep.

Also, I have written a script that can be run remotely to check whether a machine is running ffmpeg and suspend it if it's not; it could be run as a cancel-encode procedure or, as I do, as a cron job every x minutes.

Edit: typos and text clarity

Proactive transcoding

For example, my main worker is a Synology NAS; it is not hardware-capable of transcoding/decoding.
I have a workstation which can do it when turned on. Would it be possible to add it to Jellyfin via rffmpeg so it can be used in the future?

Apostrophe ( ' ) in a name causes script to fail. Fails to recognize media

If a filename contains an apostrophe ( ' ), the script fails to execute and Jellyfin/rffmpeg fails to recognize the media.

bash: -c: line 0: unexpected EOF while looking for matching `''
bash: -c: line 1: syntax error: unexpected end of file

Ex:
videos/bd523add-e7a8-f3b6-2eda-c5c6cf197f10/live.m3u8

{"Protocol":0,"Id":"bd523adde7a8f3b62edac5c6cf197f10","Path":"/media/TVShows/Batwoman/Season.02/Batwoman.2019.-.S02E13.-.I\u0027ll.Give.You.a.Clue.720p-HDTV.mkv"
etc....

Using multiple hosts at the same time?

I use rffmpeg to speed up Jellyfin on a Raspberry Pi, but none of the transcode nodes (Odroid SBCs) is powerful enough to handle a transcode on its own.

Would it be possible to implement using multiple hosts to transcode one file?
I sort of expect the answer to be no...

What dirs do the workers really need?

I want to minimize what is exposed to the workers. It's obvious that they need the transcode and subtitle-extraction directories, but do you happen to know what else?

This is primarily so that I can share all the required files via NFS without needing to share the Jellyfin SQLite DB. I'm working on implementing my worker Docker image as a Deployment on Kubernetes.

Also, a side question, since I know you're the leader of the Jellyfin project: do you happen to know how close Jellyfin is to having external DB support? With that, rffmpeg, and the -noweb flag I could have a real HA setup with separate scaling for the server, transcoding, database and web UI.

Feature: rffmpeg run subcommand

I would like to be able to explicitly execute (non-ffmpeg) commands via rffmpeg. This would for instance be useful for running transcode commands that require piping between different processes, which would be extremely costly in terms of network bandwidth otherwise.

Dispatching to different hosts could happen as usual, pre-commands would be ignored, and stdout and stderr would just be passed through directly.

By all accounts this should be simple enough to implement and I don't think it would really break with the rffmpeg philosophy of simplicity, as it would mostly just be reusing existing components.
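
As a concrete illustration, here is a rough sketch of what such a subcommand could look like using click, which rffmpeg already uses; the two helpers are stand-ins for the real host selection and SSH argument building, not rffmpeg's actual internals:

# Hypothetical "rffmpeg run" subcommand; pick_target_host/ssh_base are stubs.
import subprocess
import click

def pick_target_host():
    return "transcode1"  # stand-in: rffmpeg would pick the best host as usual

def ssh_base(host):
    return ["ssh", "-q", host]  # stand-in: rffmpeg adds its usual SSH options

@click.command(name="run", context_settings={"ignore_unknown_options": True})
@click.argument("command", nargs=-1, type=click.UNPROCESSED)
def run_remote(command):
    """Run an arbitrary COMMAND on the chosen host, passing stdio straight through."""
    proc = subprocess.run(ssh_base(pick_target_host()) + list(command))
    raise SystemExit(proc.returncode)

if __name__ == "__main__":
    run_remote()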

Rewrite in Go?

I don't know if you have any experience in Go, but I'm just learning it, and since I'm moving my hcloud-rffmpeg over to Go I thought about doing a rewrite of rffmpeg as well (it may handle segfaults better, and I could possibly integrate it better with my script and provide concurrency, I don't know yet).

The point of this question is: would you be interested in this? I know it wouldn't provide much, since it's a simple wrapper script… of course I can always just fork and make it my own, but I would like to stay on a collaborative project; it wouldn't make much sense to fork if it stays the same and I have to maintain it by myself.

Windows transcode server possible?

For AMD GPUs, AMF currently works much better than VAAPI, and AMF in amdgpu-pro is a mess.

What would be needed to offload the transcodes with rffmpeg to a windows machine? Please don't say WSL2. That's also a mess for gpu acceleration.

rffmpeg not passing VAAPI args

Hello, after much effort it would seem that rffmpeg.py is not passing my VAAPI device argument. Log snippet before changing the ffmpeg path:

/usr/lib/jellyfin-ffmpeg/ffmpeg -hwaccel vaapi -hwaccel_output_format vaapi -vaapi_device /dev/dri/renderD128 -i file:"/movies/Close Encounters of the Third Kind (1977)/Close Encounters of the Third Kind (1977) Bluray-1080p.mkv" -map_metadata -1 -map_chapters -1 -threads 0 -map 0:0 -map 0:1 -map -0:s -codec:v:0 h264_vaapi -b:v 9651230 -maxrate 9651230 -bufsize 19302460 -profile:v:0 high -level 41 -force_key_frames:0 "expr:gte(t,0+n_forced*3)" -vf "format=nv12|vaapi,hwupload,scale_vaapi=format=nv12" -start_at_zero -vsync -1 -codec:a:0 aac -ac 6 -ab 640000 -copyts -avoid_negative_ts disabled -max_muxing_queue_size 2048 -f hls -max_delay 5000000 -hls_time 3 -hls_segment_type mpegts -start_number 0 -hls_segment_filename "/var/lib/jellyfin/transcodes/7d1a82f8da6be7c8c76b7a94a3dfccc9%d.ts" -hls_playlist_type vod -hls_list_size 0 -y "/var/lib/jellyfin/transcodes/7d1a82f8da6be7c8c76b7a94a3dfccc9.m3u8"
Log snippet after changing the path to rffmpeg:
/usr/local/bin/ffmpeg -i file:"/movies/Close Encounters of the Third Kind (1977)/Close Encounters of the Third Kind (1977) Bluray-1080p.mkv" -map_metadata -1 -map_chapters -1 -threads 0 -map 0:0 -map 0:1 -map -0:s -codec:v:0 h264_vaapi -b:v 9651230 -maxrate 9651230 -bufsize 19302460 -profile:v:0 high -level 41 -force_key_frames:0 "expr:gte(t,0+n_forced*3)" -vf "format=nv12|vaapi,hwupload,scale_vaapi=format=nv12" -start_at_zero -vsync -1 -codec:a:0 aac -ac 6 -ab 640000 -copyts -avoid_negative_ts disabled -max_muxing_queue_size 2048 -f hls -max_delay 5000000 -hls_time 3 -hls_segment_type mpegts -start_number 0 -hls_segment_filename "/var/lib/jellyfin/transcodes/8f9b6d5bf68f75dbe17f578e2a3b67cf%d.ts" -hls_playlist_type vod -hls_list_size 0 -y "/var/lib/jellyfin/transcodes/8f9b6d5bf68f75dbe17f578e2a3b67cf.m3u8"
Turning off hardware acceleration DOES properly offload to my rffmpeg host and media plays properly. I'm not sure what I'm missing that could be causing this behavior.

Finished rffmpeg 32852 with return code 127

Hello,

I am trying to get your script to work with Emby. I know it is intended for Jellyfin rather than Emby, but I have been waiting a long time for the mobile application to support Chromecast; unfortunately that is what is blocking me from making the switch.

I read and reread the script and I think I found the problem. It seems to be a character escape problem in my media path.

Here is a history of my logs when I run ffmpeg remotely:

2021-04-27 21:38:33,147 - rffmpeg - INFO - Starting process loop
2021-04-27 21:38:33,147 - rffmpeg - INFO - Determining target host
2021-04-27 21:38:33,148 - rffmpeg - INFO - Selected valid target host 192.168.1.182
2021-04-27 21:38:33,148 - rffmpeg - INFO - [rffmpeg] Running on 192.168.1.182
2021-04-27 21:38:33,148 - rffmpeg - INFO - Crafting remote command string
2021-04-27 21:38:33,148 - rffmpeg - INFO - Running rffmpeg 38953 on [email protected]
2021-04-27 21:38:33,148 - rffmpeg - INFO - Remote command for rffmpeg 38953: ssh -q -o ConnectTimeout=1 -o ConnectionAttempts=1 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i /var/lib/emby/.ssh/id_rsa [email protected] /opt/emby-server/bin/emby-ffmpeg -loglevel +timing -y -print_graphs_file /var/lib/emby/logs/ffmpeg-transcode-2a47d6f1-f34a-43e9-a1e0-fe8b1caf2ce8_1graph.txt -copyts -start_at_zero -f matroska,webm -c:v:0 hevc -c:v:1 png -i "/media/storage3/films/Lion (2016)/LION 2016 MULTi VFF 1080p BluRay E-AC3 x265-Winks.mkv" -filter_complex "[0:0]format@f1=pix_fmts=yuv420p[f1_out0]" -map "[f1_out0]" -map 0:1 -sn -c:v:0 libx264 -g:v:0 72 -maxrate:v:0 13265852 -bufsize:v:0 26531704 -sc_threshold:v:0 0 -keyint_min:v:0 72 -pix_fmt:v:0 yuv420p -preset:v:0 veryfast -profile:v:0 high -level:v:0 4.0 -x264opts:v:0 subme=0:me_range=4:rc_lookahead=10:partitions=none -crf:v:0 26 -c:a:0 aac -ab:a:0 192000 -ac:a:0 2 -metadata:s:a:0 language=fre -filter:a:0 volume=2 -disposition:a:0 default -max_delay 5000000 -avoid_negative_ts disabled -f segment -map_metadata -1 -map_chapters -1 -segment_format mpegts -segment_list /var/lib/emby/transcoding-temp/transcoding-temp/8CE3C8.m3u8 -segment_list_type m3u8 -segment_time 3 -segment_start_number 0 -individual_header_trailer 0 -write_header_trailer 0 -segment_write_temp 1 /var/lib/emby/transcoding-temp/transcoding-temp/8CE3C8_%d.ts -map 0:3 -map 0:0 -an -c:v:0 copy -c:s:0 webvtt -max_delay 5000000 -avoid_negative_ts disabled -f segment -map_metadata -1 -map_chapters -1 -segment_format webvtt -segment_list /var/lib/emby/transcoding-temp/transcoding-temp/8CE3C8_s3.m3u8 -segment_list_type m3u8 -segment_time 3 -segment_start_number 0 -break_non_keyframes 1 -individual_header_trailer 1 -write_header_trailer 0 -write_empty_segments 1 -segment_write_temp 1 -min_frame_time 00:00:00.000 /var/lib/emby/transcoding-temp/transcoding-temp/8CE3C8_s3_%d.vtt -map 0:4 -map 0:0 -an -c:v:0 copy -c:s:0 webvtt -max_delay 5000000 -avoid_negative_ts disabled -f segment -map_metadata -1 -map_chapters -1 -segment_format webvtt -segment_list /var/lib/emby/transcoding-temp/transcoding-temp/8CE3C8_s4.m3u8 -segment_list_type m3u8 -segment_time 3 -segment_start_number 0 -break_non_keyframes 1 -individual_header_trailer 1 -write_header_trailer 0 -write_empty_segments 1 -segment_write_temp 1 -min_frame_time 00:00:00.000 /var/lib/emby/transcoding-temp/transcoding-temp/8CE3C8_s4_%d.vtt
2021-04-27 21:38:33,148 - rffmpeg - INFO - Running command ssh -q -o ConnectTimeout=1 -o ConnectionAttempts=1 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i /var/lib/emby/.ssh/id_rsa [email protected] /opt/emby-server/bin/emby-ffmpeg -loglevel +timing -y -print_graphs_file /var/lib/emby/logs/ffmpeg-transcode-2a47d6f1-f34a-43e9-a1e0-fe8b1caf2ce8_1graph.txt -copyts -start_at_zero -f matroska,webm -c:v:0 hevc -c:v:1 png -i "/media/storage3/films/Lion (2016)/LION 2016 MULTi VFF 1080p BluRay E-AC3 x265-Winks.mkv" -filter_complex "[0:0]format@f1=pix_fmts=yuv420p[f1_out0]" -map "[f1_out0]" -map 0:1 -sn -c:v:0 libx264 -g:v:0 72 -maxrate:v:0 13265852 -bufsize:v:0 26531704 -sc_threshold:v:0 0 -keyint_min:v:0 72 -pix_fmt:v:0 yuv420p -preset:v:0 veryfast -profile:v:0 high -level:v:0 4.0 -x264opts:v:0 subme=0:me_range=4:rc_lookahead=10:partitions=none -crf:v:0 26 -c:a:0 aac -ab:a:0 192000 -ac:a:0 2 -metadata:s:a:0 language=fre -filter:a:0 volume=2 -disposition:a:0 default -max_delay 5000000 -avoid_negative_ts disabled -f segment -map_metadata -1 -map_chapters -1 -segment_format mpegts -segment_list /var/lib/emby/transcoding-temp/transcoding-temp/8CE3C8.m3u8 -segment_list_type m3u8 -segment_time 3 -segment_start_number 0 -individual_header_trailer 0 -write_header_trailer 0 -segment_write_temp 1 /var/lib/emby/transcoding-temp/transcoding-temp/8CE3C8_%d.ts -map 0:3 -map 0:0 -an -c:v:0 copy -c:s:0 webvtt -max_delay 5000000 -avoid_negative_ts disabled -f segment -map_metadata -1 -map_chapters -1 -segment_format webvtt -segment_list /var/lib/emby/transcoding-temp/transcoding-temp/8CE3C8_s3.m3u8 -segment_list_type m3u8 -segment_time 3 -segment_start_number 0 -break_non_keyframes 1 -individual_header_trailer 1 -write_header_trailer 0 -write_empty_segments 1 -segment_write_temp 1 -min_frame_time 00:00:00.000 /var/lib/emby/transcoding-temp/transcoding-temp/8CE3C8_s3_%d.vtt -map 0:4 -map 0:0 -an -c:v:0 copy -c:s:0 webvtt -max_delay 5000000 -avoid_negative_ts disabled -f segment -map_metadata -1 -map_chapters -1 -segment_format webvtt -segment_list /var/lib/emby/transcoding-temp/transcoding-temp/8CE3C8_s4.m3u8 -segment_list_type m3u8 -segment_time 3 -segment_start_number 0 -break_non_keyframes 1 -individual_header_trailer 1 -write_header_trailer 0 -write_empty_segments 1 -segment_write_temp 1 -min_frame_time 00:00:00.000 /var/lib/emby/transcoding-temp/transcoding-temp/8CE3C8_s4_%d.vtt
2021-04-27 21:38:33,154 - rffmpeg - INFO - Finished rffmpeg 38953 with return code 127

In this example you can see the error on the "(" character. Here is an example when I run the command manually:

ssh -q -o ConnectTimeout=1 -o ConnectionAttempts=1 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i /var/lib/emby/.ssh/id_rsa [email protected] /opt/emby-server/bin/emby-ffmpeg -loglevel +timing -y -print_graphs_file /var/lib/emby/logs/ffmpeg-transcode-2a47d6f1-f34a-43e9-a1e0-fe8b1caf2ce8_1graph.txt -copyts -start_at_zero -f matroska,webm -c:v:0 hevc -c:v:1 png -i "/media/storage3/films/Lion (2016)/LION 2016 MULTi VFF 1080p BluRay E-AC3 x265-Winks.mkv" -filter_complex "[0:0]format@f1=pix_fmts=yuv420p[f1_out0]" -map "[f1_out0]" -map 0:1 -sn -c:v:0 libx264 -g:v:0 72 -maxrate:v:0 13265852 -bufsize:v:0 26531704 -sc_threshold:v:0 0 -keyint_min:v:0 72 -pix_fmt:v:0 yuv420p -preset:v:0 veryfast -profile:v:0 high -level:v:0 4.0 -x264opts:v:0 subme=0:me_range=4:rc_lookahead=10:partitions=none -crf:v:0 26 -c:a:0 aac -ab:a:0 192000 -ac:a:0 2 -metadata:s:a:0 language=fre -filter:a:0 volume=2 -disposition:a:0 default -max_delay 5000000 -avoid_negative_ts disabled -f segment -map_metadata -1 -map_chapters -1 -segment_format mpegts -segment_list /var/lib/emby/transcoding-temp/transcoding-temp/8CE3C8.m3u8 -segment_list_type m3u8 -segment_time 3 -segment_start_number 0 -individual_header_trailer 0 -write_header_trailer 0 -segment_write_temp 1 /var/lib/emby/transcoding-temp/transcoding-temp/8CE3C8_%d.ts -map 0:3 -map 0:0 -an -c:v:0 copy -c:s:0 webvtt -max_delay 5000000 -avoid_negative_ts disabled -f segment -map_metadata -1 -map_chapters -1 -segment_format webvtt -segment_list /var/lib/emby/transcoding-temp/transcoding-temp/8CE3C8_s3.m3u8 -segment_list_type m3u8 -segment_time 3 -segment_start_number 0 -break_non_keyframes 1 -individual_header_trailer 1 -write_header_trailer 0 -write_empty_segments 1 -segment_write_temp 1 -min_frame_time 00:00:00.000 /var/lib/emby/transcoding-temp/transcoding-temp/8CE3C8_s3_%d.vtt -map 0:4 -map 0:0 -an -c:v:0 copy -c:s:0 webvtt -max_delay 5000000 -avoid_negative_ts disabled -f segment -map_metadata -1 -map_chapters -1 -segment_format webvtt -segment_list /var/lib/emby/transcoding-temp/transcoding-temp/8CE3C8_s4.m3u8 -segment_list_type m3u8 -segment_time 3 -segment_start_number 0 -break_non_keyframes 1 -individual_header_trailer 1 -write_header_trailer 0 -write_empty_segments 1 -segment_write_temp 1 -min_frame_time 00:00:00.000 /var/lib/emby/transcoding-temp/transcoding-temp/8CE3C8_s4_%d.vtt
bash: -c: line 0: syntax error near unexpected token `('
bash: -c: line 0: `/opt/emby-server/bin/emby-ffmpeg -loglevel +timing -y -print_graphs_file /var/lib/emby/logs/ffmpeg-transcode-2a47d6f1-f34a-43e9-a1e0-fe8b1caf2ce8_1graph.txt -copyts -start_at_zero -f matroska,webm -c:v:0 hevc -c:v:1 png -i /media/storage3/films/Lion (2016)/LION 2016 MULTi VFF 1080p BluRay E-AC3 x265-Winks.mkv -filter_complex [0:0]format@f1=pix_fmts=yuv420p[f1_out0] -map [f1_out0] -map 0:1 -sn -c:v:0 libx264 -g:v:0 72 -maxrate:v:0 13265852 -bufsize:v:0 26531704 -sc_threshold:v:0 0 -keyint_min:v:0 72 -pix_fmt:v:0 yuv420p -preset:v:0 veryfast -profile:v:0 high -level:v:0 4.0 -x264opts:v:0 subme=0:me_range=4:rc_lookahead=10:partitions=none -crf:v:0 26 -c:a:0 aac -ab:a:0 192000 -ac:a:0 2 -metadata:s:a:0 language=fre -filter:a:0 volume=2 -disposition:a:0 default -max_delay 5000000 -avoid_negative_ts disabled -f segment -map_metadata -1 -map_chapters -1 -segment_format mpegts -segment_list /var/lib/emby/transcoding-temp/transcoding-temp/8CE3C8.m3u8 -segment_list_type m3u8 -segment_time 3 -segment_start_number 0 -individual_header_trailer 0 -write_header_trailer 0 -segment_write_temp 1 /var/lib/emby/transcoding-temp/transcoding-temp/8CE3C8_%d.ts -map 0:3 -map 0:0 -an -c:v:0 copy -c:s:0 webvtt -max_delay 5000000 -avoid_negative_ts disabled -f segment -map_metadata -1 -map_chapters -1 -segment_format webvtt -segment_list /var/lib/emby/transcoding-temp/transcoding-temp/8CE3C8_s3.m3u8 -segment_list_type m3u8 -segment_time 3 -segment_start_number 0 -break_non_keyframes 1 -individual_header_trailer 1 -write_header_trailer 0 -write_empty_segments 1 -segment_write_temp 1 -min_frame_time 00:00:00.000 /var/lib/emby/transcoding-temp/transcoding-temp/8CE3C8_s3_%d.vtt -map 0:4 -map 0:0 -an -c:v:0 copy -c:s:0 webvtt -max_delay 5000000 -avoid_negative_ts disabled -f segment -map_metadata -1 -map_chapters -1 -segment_format webvtt -segment_list /var/lib/emby/transcoding-temp/transcoding-temp/8CE3C8_s4.m3u8 -segment_list_type m3u8 -segment_time 3 -segment_start_number 0 -break_non_keyframes 1 -individual_header_trailer 1 -write_header_trailer 0 -write_empty_segments 1 -segment_write_temp 1 -min_frame_time 00:00:00.000 /var/lib/emby/transcoding-temp/transcoding-temp/8CE3C8_s4_%d.vtt'

To get around the problem manually, I fully escaped the path of my media (spaces, parentheses, etc.):

ssh -q -o ConnectTimeout=1 -o ConnectionAttempts=1 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i /var/lib/emby/.ssh/id_rsa [email protected] /opt/emby-server/bin/emby-ffmpeg -loglevel +timing -y -print_graphs_file /var/lib/emby/logs/ffmpeg-transcode-2a47d6f1-f34a-43e9-a1e0-fe8b1caf2ce8_1graph.txt -copyts -start_at_zero -f matroska,webm -c:v:0 hevc -c:v:1 png -i "/media/storage3/films/Lion\ \(2016\)/LION\ 2016\ MULTi\ VFF\ 1080p\ BluRay\ E-AC3\ x265-Winks.mkv" -filter_complex "[0:0]format@f1=pix_fmts=yuv420p[f1_out0]" -map "[f1_out0]" -map 0:1 -sn -c:v:0 libx264 -g:v:0 72 -maxrate:v:0 13265852 -bufsize:v:0 26531704 -sc_threshold:v:0 0 -keyint_min:v:0 72 -pix_fmt:v:0 yuv420p -preset:v:0 veryfast -profile:v:0 high -level:v:0 4.0 -x264opts:v:0 subme=0:me_range=4:rc_lookahead=10:partitions=none -crf:v:0 26 -c:a:0 aac -ab:a:0 192000 -ac:a:0 2 -metadata:s:a:0 language=fre -filter:a:0 volume=2 -disposition:a:0 default -max_delay 5000000 -avoid_negative_ts disabled -f segment -map_metadata -1 -map_chapters -1 -segment_format mpegts -segment_list /var/lib/emby/transcoding-temp/transcoding-temp/8CE3C8.m3u8 -segment_list_type m3u8 -segment_time 3 -segment_start_number 0 -individual_header_trailer 0 -write_header_trailer 0 -segment_write_temp 1 /var/lib/emby/transcoding-temp/transcoding-temp/8CE3C8_%d.ts -map 0:3 -map 0:0 -an -c:v:0 copy -c:s:0 webvtt -max_delay 5000000 -avoid_negative_ts disabled -f segment -map_metadata -1 -map_chapters -1 -segment_format webvtt -segment_list /var/lib/emby/transcoding-temp/transcoding-temp/8CE3C8_s3.m3u8 -segment_list_type m3u8 -segment_time 3 -segment_start_number 0 -break_non_keyframes 1 -individual_header_trailer 1 -write_header_trailer 0 -write_empty_segments 1 -segment_write_temp 1 -min_frame_time 00:00:00.000 /var/lib/emby/transcoding-temp/transcoding-temp/8CE3C8_s3_%d.vtt -map 0:4 -map 0:0 -an -c:v:0 copy -c:s:0 webvtt -max_delay 5000000 -avoid_negative_ts disabled -f segment -map_metadata -1 -map_chapters -1 -segment_format webvtt -segment_list /var/lib/emby/transcoding-temp/transcoding-temp/8CE3C8_s4.m3u8 -segment_list_type m3u8 -segment_time 3 -segment_start_number 0 -break_non_keyframes 1 -individual_header_trailer 1 -write_header_trailer 0 -write_empty_segments 1 -segment_write_temp 1 -min_frame_time 00:00:00.000 /var/lib/emby/transcoding-temp/transcoding-temp/8CE3C8_s4_%d.vtt

So I edited this path:

"/media/storage3/films/Lion (2016)/LION 2016 MULTi VFF 1080p BluRay E-AC3 x265-Winks.mkv"

To:

"/media/storage3/films/Lion\ \(2016\)/LION\ 2016\ MULTi\ VFF\ 1080p\ BluRay\ E-AC3\ x265-Winks.mkv"

I tried to edit the Python script, but I'm not an expert in this. I think the problem is the same under Jellyfin, if I understood everything correctly.

If you have an idea to unblock me, it would be really cool.
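
Not an official fix, but the usual approach to this class of problem: since ssh hands the remote command to the remote shell, every ffmpeg argument has to be shell-quoted before the command string is assembled. A minimal sketch using Python's shlex.quote (the function and the ssh arguments here are illustrative, not the script's actual internals):

# Quote each argument so spaces, parentheses and apostrophes survive the
# remote shell that ssh spawns on the worker.
import shlex

def build_remote_command(ssh_base, ffmpeg_path, ffmpeg_args):
    remote = " ".join(shlex.quote(a) for a in [ffmpeg_path, *ffmpeg_args])
    return ssh_base + [remote]

# build_remote_command(
#     ["ssh", "-i", "/var/lib/emby/.ssh/id_rsa", "user@192.168.1.182"],
#     "/opt/emby-server/bin/emby-ffmpeg",
#     ["-i", "/media/storage3/films/Lion (2016)/LION 2016 ... x265-Winks.mkv"],
# )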

rffmpeg error 1

Hello,

I followed the setup and it doesn't work; the following error occurs:

2023-08-21 17:01:12,109 - rffmpeg[103236] - INFO - Starting rffmpeg as /usr/local/bin/ffmpeg with args: -analyzeduration 200M -f matroska,webm -autorotate 0 -i file:/media/Parents/Rampage (2018)/Rampage.2018.Truefrench.1080p.HDLight.DTS.H264-Xantar.mkv -map_metadata -1 -map_chapters -1 -threads 0 -map 0:0 -map 0:1 -map -0:s -codec:v:0 libx264 -preset veryfast -crf 23 -maxrate 4528996 -bufsize 9057992 -profile:v:0 high -level 41 -x264opts:0 subme=0:me_range=4:rc_lookahead=10:me=dia:no_chroma_me:8x8dct=0:partitions=none -force_key_frames:0 expr:gte(t,0+n_forced*3) -sc_threshold:v:0 0 -vf setparams=color_primaries=bt709:color_trc=bt709:colorspace=bt709,scale=trunc(min(max(iw\,ih*a)\,min(1920\,1080*a))/2)*2:trunc(min(max(iw/a\,ih)\,min(1920/a\,1080))/2)*2,format=yuv420p -codec:a:0 libfdk_aac -ac 2 -ab 384000 -af volume=2 -copyts -avoid_negative_ts disabled -max_muxing_queue_size 2048 -f hls -max_delay 5000000 -hls_time 3 -hls_segment_type mpegts -start_number 0 -hls_segment_filename /var/lib/jellyfin/transcodes/1e0c437a3b5e0fbce5616b24afbbc50f%d.ts -hls_playlist_type vod -hls_list_size 0 -y /var/lib/jellyfin/transcodes/1e0c437a3b5e0fbce5616b24afbbc50f.m3u8
2023-08-21 17:01:12,109 - rffmpeg[103236] - DEBUG - Determining optimal target host
2023-08-21 17:01:12,109 - rffmpeg[103236] - DEBUG - Using SQLite as database.
2023-08-21 17:01:12,110 - rffmpeg[103236] - DEBUG - SQLite connection closed.
2023-08-21 17:01:12,110 - rffmpeg[103236] - DEBUG - Using SQLite as database.
2023-08-21 17:01:12,110 - rffmpeg[103236] - DEBUG - SQLite connection closed.
2023-08-21 17:01:12,110 - rffmpeg[103236] - DEBUG - Trying host ID 1 'pi3d'
2023-08-21 17:01:12,110 - rffmpeg[103236] - DEBUG - Running SSH test
2023-08-21 17:01:12,207 - rffmpeg[103236] - DEBUG - SSH test succeeded with retcode 0
2023-08-21 17:01:12,207 - rffmpeg[103236] - DEBUG - Selecting host as idle
2023-08-21 17:01:12,207 - rffmpeg[103236] - DEBUG - Found optimal host ID 1 'pi3d' (pi3d)
2023-08-21 17:01:12,207 - rffmpeg[103236] - INFO - Running command on host 'pi3d' (pi3d)
2023-08-21 17:01:12,207 - rffmpeg[103236] - DEBUG - Remote command: /usr/bin/ssh -q -t -o ConnectTimeout=1 -o ConnectionAttempts=1 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ControlMaster=auto -o ControlPath=/run/shm/ssh-%r@%h:%p -o ControlPersist=300 -i /var/lib/jellyfin/.ssh/id_rsa pi3d /usr/lib/jellyfin-ffmpeg/ffmpeg -analyzeduration 200M -f matroska,webm -autorotate 0 -i 'file:/media/Parents/Rampage (2018)/Rampage.2018.Truefrench.1080p.HDLight.DTS.H264-Xantar.mkv' -map_metadata -1 -map_chapters -1 -threads 0 -map 0:0 -map 0:1 -map -0:s -codec:v:0 libx264 -preset veryfast -crf 23 -maxrate 4528996 -bufsize 9057992 -profile:v:0 high -level 41 -x264opts:0 subme=0:me_range=4:rc_lookahead=10:me=dia:no_chroma_me:8x8dct=0:partitions=none -force_key_frames:0 'expr:gte(t,0+n_forced*3)' -sc_threshold:v:0 0 -vf 'setparams=color_primaries=bt709:color_trc=bt709:colorspace=bt709,scale=trunc(min(max(iw\,ih*a)\,min(1920\,1080*a))/2)*2:trunc(min(max(iw/a\,ih)\,min(1920/a\,1080))/2)*2,format=yuv420p' -codec:a:0 libfdk_aac -ac 2 -ab 384000 -af volume=2 -copyts -avoid_negative_ts disabled -max_muxing_queue_size 2048 -f hls -max_delay 5000000 -hls_time 3 -hls_segment_type mpegts -start_number 0 -hls_segment_filename /var/lib/jellyfin/transcodes/1e0c437a3b5e0fbce5616b24afbbc50f%d.ts -hls_playlist_type vod -hls_list_size 0 -y /var/lib/jellyfin/transcodes/1e0c437a3b5e0fbce5616b24afbbc50f.m3u8
2023-08-21 17:01:12,207 - rffmpeg[103236] - DEBUG - Using SQLite as database.
2023-08-21 17:01:12,275 - rffmpeg[103236] - DEBUG - SQLite connection closed.
2023-08-21 17:01:12,383 - rffmpeg[103236] - DEBUG - Using SQLite as database.
2023-08-21 17:01:12,421 - rffmpeg[103236] - DEBUG - SQLite connection closed.
2023-08-21 17:01:12,421 - rffmpeg[103236] - ERROR - Finished rffmpeg with return code 1

In my Jellyfin log I have:

[2023-08-21 17:01:12.475 +02:00] [ERR] Error processing request. URL "GET" "/videos/619aa58d-6c90-3676-3b03-110e719eb82e/hls1/main/0.ts".
MediaBrowser.Common.FfmpegException: FFmpeg exited with code 1

My setup is a dedicated Proxmox VM for Jellyfin, and my transcoder is a Raspberry Pi 4.

Can you help me please?

Rewrite in Go

It's finished! rffmpeg-go has a separate DB module that can be imported into other projects (like rffmpeg-autoscaler), and I also improved the SQL queries so it should be more performant, as well as being written in Go, which provides binaries for all OSes on the Releases tab. I've also updated my jellyfin-rffmpeg docker image to include it.

Everything is literally the Go equivalent of your Python code, a little bit optimized (merged some for loops, dedicated SQL queries per host instead of storing all processes/states in memory, SELECT COUNT(id) to check whether there are any hosts/processes/states).

Also, every option can now be set using the config file and env vars, with the latter taking precedence.

As of now I've only tested it on localhost (fallback), and it works correctly. I'll proceed to test the main functionality: remote ffmpeg commands.

local fallback ffmpeg honors hardware acceleration

I'm running into a particularly weird issue: if hardware acceleration is enabled, the local fallback ffmpeg is useless, because it cannot find a hardware device on the media VM, as the device is hooked up to the transcoding VM.

Planned features

Hi,
Please feel free to move or close this post after this question.

I want to set up a remote media server to process my video files, and this script seems to be what is needed to make this happen.
My only issue is that it seems to only deal with ffmpeg.
Are there any plans to expand it to also support, for example:

FFMPEG Path    | /usr/local/bin/ffmpeg   | [PATH TO FFMPEG ON YOUR SERVER]
MPLAYER Path   | /usr/local/bin/mplayer  | [PATH TO MPLAYER ON YOUR SERVER]
MENCODER Path  | /usr/local/bin/mencoder | [PATH TO MENCODER ON YOUR SERVER]
FLVTOOL2 Path  | /usr/bin/flvtool2       | [PATH TO FLVTOOL2 ON YOUR SERVER]
MP4Box Path    | /usr/local/bin/MP4Box   | [PATH TO MP4BOX ON YOUR SERVER]
Yamdi Path     | /usr/local/bin/yamdi    | [PATH TO YAMDI ON YOUR SERVER]

Let me know
thanks
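
Since rffmpeg is already symlinked as both ffmpeg and ffprobe and dispatches based on the name it was invoked as, here is a hedged sketch of how that could generalize to other tools; the name-to-path mapping and the host are purely hypothetical examples, not current rffmpeg configuration:

# Dispatch on the symlink name (argv[0]); mapping and host are examples only.
import os, subprocess, sys

REMOTE_BINARIES = {
    "ffmpeg": "/usr/lib/jellyfin-ffmpeg/ffmpeg",
    "ffprobe": "/usr/lib/jellyfin-ffmpeg/ffprobe",
    "mencoder": "/usr/local/bin/mencoder",
    "MP4Box": "/usr/local/bin/MP4Box",
}

def main():
    invoked_as = os.path.basename(sys.argv[0])            # e.g. "mencoder"
    remote_path = REMOTE_BINARIES.get(invoked_as, invoked_as)
    ssh = ["ssh", "user@worker", remote_path, *sys.argv[1:]]  # placeholder host
    sys.exit(subprocess.run(ssh).returncode)

if __name__ == "__main__":
    main()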
