• 1 Post
  • 7 Comments
Joined 2 months ago
Cake day: June 20th, 2025

  • Reading Jellyfin’s issues, it’s clear its web UI and API cannot be allowed to talk to the general internet.

    I’d push for a VPN solution first: Tailscale or WireGuard. If you’re happy with Cloudflare sniffing all your traffic, and with the chance that they take it away suddenly someday, use their tunnel with authentication.
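
    If you go the Tailscale route, a minimal sketch of running it as its own container might look like the following (this assumes the official tailscale/tailscale image and its documented TS_AUTHKEY/TS_STATE_DIR/TS_ROUTES variables; the auth key, hostname, and paths are placeholders, and you’d still point clients at your tailnet or advertise routes):

    services:
      tailscale:
        container_name: tailscale
        image: tailscale/tailscale:latest
        hostname: media-box                       # the name that shows up in your tailnet
        environment:
          - TS_AUTHKEY=tskey-auth-REPLACE-ME      # placeholder, generate one in the Tailscale admin console
          - TS_STATE_DIR=/var/lib/tailscale
          # - TS_ROUTES=10.0.1.0/24               # optionally advertise your LAN subnet (needs approval in the admin console)
        volumes:
          - '/mnt/ssd/tailscale/state:/var/lib/tailscale'
        devices:
          - '/dev/net/tun:/dev/net/tun'
        cap_add:
          - NET_ADMIN
          - NET_RAW
        restart: unless-stopped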

    The only other novel solution I’d suggest is putting Jellyfin behind an Authentik wall (not OIDC, though you can use OIDC for users after the wall). That puts the security burden on Authentik, and that’s their whole job, so hopefully it holds up. I’d use it if a VPN (Tailscale or WireGuard) is problematic for access. The downside is that Jellyfin apps will not be able to connect; only web browsers that can log in through the Authentik web UI wall will.

    Flow would go caddy/other reverse proxy -> Authentik wall for jellyfin -> jellyfin

    I’d put everything in Docker. I’d put Caddy and Authentik in a VM as a DMZ (Incus from the Zabbly repo, with its web UI, to manage the VM), and I’d set all three services in the compose to read-only, user: ####:####, cap_drop: ALL, no-new-privileges, and limited named networks.
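
    A minimal compose sketch of that chain, assuming Authentik’s proxy-provider mode in front of Jellyfin (the images are the official ones as far as I know, but the UIDs, paths, and network names are illustrative, and Authentik’s own Postgres/Redis, secrets, and outpost configuration are left out):

    networks:
      proxy-nw:          # caddy <-> authentik
      jellyfin-nw:       # authentik <-> jellyfin; caddy never reaches jellyfin directly

    services:
      caddy:
        image: caddy:latest
        ports:
          - '443:443'
        volumes:
          - './Caddyfile:/etc/caddy/Caddyfile:ro'
          - './caddy-data:/data'
          - './caddy-config:/config'
        read_only: true
        security_opt:
          - no-new-privileges:true
        networks:
          - proxy-nw
        restart: unless-stopped

      authentik:
        # also needs its postgres + redis services and secrets, omitted for brevity
        image: ghcr.io/goauthentik/server:latest
        command: server
        security_opt:
          - no-new-privileges:true
        networks:
          - proxy-nw
          - jellyfin-nw
        restart: unless-stopped

      jellyfin:
        image: ghcr.io/jellyfin/jellyfin:latest
        # config, cache, and media volumes omitted here
        user: 2200:2200              # the user:####:#### idea from above
        read_only: true
        tmpfs:
          - /tmp
        cap_drop:
          - ALL
        security_opt:
          - no-new-privileges:true
        networks:
          - jellyfin-nw              # only authentik can reach it
        restart: unless-stopped

    The same read-only/cap-drop treatment goes on Caddy and Authentik too; each just needs its writable paths (Caddy’s /data and /config, for example) mounted as volumes or tmpfs first.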

    Podman quadlets would be even better for security than Docker, but there’s less help available for them (for now). Do Docker and get something working to start, then grow from there.


  • I’m looking at OPNsense on an Incus VM soon; what was your fight there? Good to know what I’ll hit ;)

    Agreed on that path: some networking (like mimicking Proxmox’s bridged connections, which give VMs their own MAC/IP) takes effort to figure out. But the basic LXC/VM-shares-your-IP setup works super easily and the scripting ability is great. Plus it doesn’t feel like a heavy yoke on your system that drives it, just another application! I feel it’s close enough, and when you get it where you want it, it’s perfect. I assume they’ll get “one click” solutions for the harder stuff baked in as they get more attention and traction.



  • I wanted Jellyfin on its own IP so I could think about implementing VLANs. I haven’t yet, and I’m not sure what I did is even needed. But I did do it! You very likely don’t need to.

    There are likely guides on enabling Jellyfin hardware acceleration on your Asustor NAS - so just follow them!

    I do try to set up separate networks for each service.

    On one server I have a monolithic docker compose file with a ton of networks defined to keep services from talking to the internet or to each other when it’s not useful (the PDF converter is prevented from talking to the internet or the Authentik database, for example). That approach makes the most sense there and has the most power.

    On this server I have each service split out into its own docker compose file. The network bit matters most for services that have their own database and other components: it lets me set things up so only the service can talk to its database, and the database cannot reach the internet at large (by adding ‘internal: true’ to the networks: section). In this case, yes, the PDF converter can talk to other services, and I’d need to block its internet access at the router somehow.
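
    As a concrete example of that ‘internal: true’ pattern, a per-service compose for a hypothetical app with its own database might look like this (the app image and names are made up, only the pattern matters):

    networks:
      app-nw:                # app <-> the rest of the LAN via the published port
      app-db-nw:
        internal: true       # no outbound route at all: the database can never reach the internet

    services:
      app:
        image: example/app:latest          # hypothetical application image
        ports:
          - '8080:8080'
        networks:
          - app-nw
          - app-db-nw
        restart: unless-stopped

      app-db:
        image: postgres:16
        environment:
          - POSTGRES_PASSWORD=change-me    # placeholder
        volumes:
          - './app-db:/var/lib/postgresql/data'
        networks:
          - app-db-nw                      # only the app can talk to it
        restart: unless-stopped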

    The monolithic method gets more annoying to deal with as services pile up, by virtue of the gigantic docker compose file and the up/down time (especially for services that don’t acknowledge shutdown commands). But it lets me use fine-grained networking within that one compose file.

    With each service on its own, it exposes a port and other things talk to it from there. So instead of an internal docker network letting Authentik talk to a service, Authentik just looks up the service’s address. I don’t notice any perceptible difference in lag.




  • So I’ve found that if you use the user: option with a username (user: UserName), it requires that username to also exist inside the container. If you do it with a UID/GID (user: 1500:1500), it maps the container’s default user (likely root, 0) to the UID/GID you provide. For many containers it just works; for linuxserver containers (a group that produces containers for lots of stuff) I think it biffs it - those are way jacked up. I put the containers that won’t play ball in an LXC container (via the Incus GUI), or for simple permission fixes I just make a permissions-fixing version of the container (it runs as root, but only executes commands I provide) to fill a volume with data that has the right permissions, then load that volume into the real container. Luckily Jellyfin doesn’t need that.
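
    Here’s a sketch of that permissions-fixing trick (the UID, paths, and images are illustrative): a one-shot container runs as root, chowns a named volume, and exits; the real service then mounts the same volume as the unprivileged user.

    volumes:
      app-data:

    services:
      fix-perms:
        image: busybox:latest
        # runs as root once, only executes the command I provide, then exits
        command: ["chown", "-R", "1500:1500", "/data"]
        volumes:
          - 'app-data:/data'
        restart: 'no'

      app:
        image: example/app:latest            # hypothetical container that won't play ball with user: otherwise
        user: 1500:1500
        volumes:
          - 'app-data:/data'
        depends_on:
          fix-perms:
            condition: service_completed_successfully
        restart: unless-stopped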

    I give Jellyfin read-only access (via :ro in the volumes:) to my media stuff because it doesn’t need to write to it. If your use case needs :rw that’s fine, just keep a backup (even if you use :ro!).

    Here’s my docker-compose.yml; I gave Jellyfin its own IP with macvlan. It’s pretty janky and I’m still working on it, but you can have Jellyfin use your server’s IP by deleting everything after jellyfin-nw: (but keep jellyfin-nw:!) in both the networks: section and the services: section. Delete the mac_address: in the services: section too. In the ports: part, that 10.0.1.69 would be the IP of your server (or in this case, what I declare the Jellyfin container’s IP to be) - it makes the container only able to bind to the IP you provide; otherwise it can bind to anything the server has access to (as far as I understand).

    And of course, I have GPU acceleration working here with some embedded Intel iGPU. Hope this helps!

    # --- NETWORKS ---  
    networks:  
      jellyfin-nw:  
        # In docker, `macvlan` gives the container its own MAC/IP on the LAN, similar to a bridged VM
        driver: macvlan  
        driver_opts:  
            parent: 'br0'  
        #    mode: 'l2'  
        name: 'doc0'  
        ipam:  
            config:  
              - subnet: "10.0.1.0/24"  
                gateway: "10.0.1.1"  
    
    # --- SERVICES ---  
    services:  
        jellyfin:  
            container_name: jellyfin  
            image: ghcr.io/jellyfin/jellyfin:latest  
            environment:  
              - TZ=America/Los_Angeles  
              - JELLYFIN_PublishedServerUrl=https://jellyfin.guzzlezone.local/  
            ports:  
              - '10.0.1.69:8096:8096/tcp'  
              - '10.0.1.69:7359:7359/udp'  
              - '10.0.1.69:1900:1900/udp'  
            devices:  
              - '/dev/dri/renderD128:/dev/dri/renderD128'  
            #  - '/dev/dri/card0:/dev/dri/card0'  
            volumes:  
              - '/mnt/ssd/jellyfin/config:/config:rw,noexec,nosuid,nodev,Z'  
              - '/mnt/cache/jellyfin/log:/config/log:rw,noexec,nosuid,nodev,Z'  
              - '/mnt/cache/jellyfin/cache:/cache:rw,noexec,nosuid,nodev,Z'  
              - '/mnt/cache/jellyfin/config-cache:/config/cache:rw,noexec,nosuid,nodev,Z'  
              # Media links below  
              - '/mnt/spinner/movies:/data/movies:ro,noexec,nosuid,nodev,z'  
              - '/mnt/spinner/shows:/data/shows:ro,noexec,nosuid,nodev,z'  
              - '/mnt/spinner/music:/data/music:ro,noexec,nosuid,nodev,z'  
            restart: unless-stopped  
            # Security stuff  
            read_only: true  
            tmpfs:  
              - /tmp:uid=2200,gid=2200,rw,noexec,nosuid,nodev  
            # mac address is 02:42 followed by 10.0.1.69 with each number between the dots converted to hex (10.0.1.69 -> 0A:00:01:45)
            # it's how docker assigns MACs, so there will never be a mac address collision
            mac_address: 02:42:0A:00:01:45  
            networks:  
                jellyfin-nw:  
                    # Docker is pretty jacked up and can't get an IP via DHCP so manually specify it  
                    ipv4_address: 10.0.1.69  
            user: 2200:2200  
            # gpu acceleration needs the render group; find the GID for your server with `getent group render | cut -d: -f3`
            group_add:  
              - "109"  
            security_opt:  
              - no-new-privileges:true  
            cap_drop:  
              - ALL  
    

    Lastly, I thought I should add the external stuff needed to get the hardware acceleration working and to create the user:

    # For jellyfin low power (LP) intel QSV stuff  
    # if trouble see https://jellyfin.org/docs/general/administration/hardware-acceleration/intel/#configure-and-verify-lp-mode-on-linux  
    sudo apt install -y firmware-linux-nonfree #intel-opencl-icd  
    sudo mkdir -p /etc/modprobe.d  
    sudo sh -c "echo 'options i915 enable_guc=2' >> /etc/modprobe.d/i915.conf"  
    sudo update-initramfs -u  
    sudo update-grub  
    
    # create the unprivileged user matching user: 2200:2200 in the compose file  
    APP_NAME="jellyfin"  
    APP_UID=2200  
    sudo useradd -u $APP_UID $APP_NAME  
    

    The Jellyfin user isn’t added to the render group on the host; rather, the render group is added to the container via group_add in the docker-compose.yml file.


  • Per this guide https://cheatsheetseries.owasp.org/cheatsheets/Docker_Security_Cheat_Sheet.html I do not. I have a cron/service script that automatically updates (‘docker compose pull’, I think) the containers I don’t care about failing for a bit (PDF converter, RSS reader, etc.) or that are exposed to the internet directly (Authentik, Caddy).

    Note that smart peeps say the docker socket is not safe even mounted read-only. Watchtower is inherently untenable, sadly, and so is Traefik (trusting a docker-socket-proxy container with giga root permissions only made sense to me if you could audit the whole thing and keep auditing it with every update, and I cannot). https://stackoverflow.com/a/52333163 https://blog.quarkslab.com/why-is-exposing-the-docker-socket-a-really-bad-idea.html

    I then just have scripts to do the ‘docker compose pull’ for things with oodles of breaking changes (Immich) or things I’d care about if they broke suddenly (Paperless).

    Overall, I’ve only had a few break over a few years - and that’s because I also run all services (per the link above) as a user, read-only, and with no capabilities (beyond those required; afaik none need any). While some containers are well coded, many are not, and if an update suddenly wants to write to ‘/npm/staging’, the read-only setting torches that until I can figure it out and put in a tmpfs fix. The few failures are worth the peace of mind that it’s locked the fuck down.
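
    For reference, that tmpfs fix is just a couple of lines on the broken service (the ‘/npm/staging’ path is just the example from above; it’s whatever path the update suddenly wants to write to):

    services:
      some-app:                  # the container that broke after an update
        read_only: true
        tmpfs:
          - /npm/staging:rw,noexec,nosuid,nodev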

    I hope to move to podman sometime to eliminate the last security risk: the docker daemon running the containers, which itself runs as root. Rootless docker seems to be a significant hassle at any scale, so I haven’t bothered with it.

    Edit: this effort is to prevent the attack vector of “someone hacks or buys access to a well-used project (e.g., Watchtower, last updated 2 years ago, or a commonly used docker socket proxy) that is known to have docker socket access, then pushes a malicious update that encrypts and ransoms your server via root escalation through the docker socket”. As long as no container has root (and the container doesn’t breach the docker daemon…), the fallout from a good container turned bad is limited to the newly bad container.