docker image prune
Real question is, why Jackett instead of Prowlarr? 😄
Dunno, man. Its been working so far. I’ll check out prowlarr, thanks
Yeah no worries - I discovered Prowlarr from that exact same comment years ago so jumped at the opportunity to post it here 😆
Tbh the whole arr suite is a headache to get working well…
I have them all running in a docker compose stack that also has gluetun as the gateway.
It’s a really basic compose file, but I can share it if you like.
Sure, why not? I’m setting up my new server, so no better time. Thanks
```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    container_name: gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - DNS_ADDRESS=
      - PUID=1000
      - PGID=1000
      - SERVER_CITIES=
      - FIREWALL_VPN_INPUT_PORTS=
      - TZ=Etc/UTC
      # Provider readmes: https://github.com/qdm12/gluetun-wiki/tree/main/setup/providers
      - VPN_SERVICE_PROVIDER=
      #- VPN_TYPE=openvpn
      #- OPENVPN_CUSTOM_CONFIG=/config/custom.conf
      #- VPN_TYPE=wireguard
      #- WIREGUARD_PRIVATE_KEY=
      #- WIREGUARD_ADDRESSES=
    ports:
      - 6767:6767 # bazarr
      - 7878:7878 # radarr
      - 8118:8118 # privoxy
      - 8191:8191 # flaresolverr
      - 8787:8787 # readarr
      - 8989:8989 # sonarr
      - 9091:9091 # transmission
      - 9696:9696 # prowlarr
      # You can add any forwarded listening ports your VPN provider might give you here as well.
    volumes:
      - /data/gluetun:/config

  bazarr:
    image: lscr.io/linuxserver/bazarr:latest
    container_name: bazarr
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - /data/bazarr:/config
      - /share/downloads/movies:/share/downloads/movies
      - /share/downloads/tv:/share/downloads/tv
    restart: unless-stopped
    network_mode: service:gluetun

  flaresolverr:
    # DockerHub mirror: flaresolverr/flaresolverr:latest
    image: ghcr.io/flaresolverr/flaresolverr:latest
    container_name: flaresolverr
    environment:
      - LOG_LEVEL=info
      - LOG_HTML=false
      - CAPTCHA_SOLVER=none
      - TZ=Etc/UTC
    restart: unless-stopped
    network_mode: service:gluetun

  privoxy:
    image: caligari/privoxy:latest
    container_name: privoxy
    restart: unless-stopped
    network_mode: service:gluetun

  prowlarr:
    image: lscr.io/linuxserver/prowlarr:latest
    container_name: prowlarr
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - /data/prowlarr:/config
    restart: unless-stopped
    network_mode: service:gluetun

  radarr:
    image: lscr.io/linuxserver/radarr:latest
    container_name: radarr
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - /data/radarr:/config
      - /share/downloads/movies:/share/downloads/movies
    restart: unless-stopped
    network_mode: service:gluetun

  readarr:
    image: lscr.io/linuxserver/readarr:develop
    container_name: readarr
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - /data/readarr:/config
      - /share/downloads/books:/share/downloads/books
    restart: unless-stopped
    network_mode: service:gluetun

  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    container_name: sonarr
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - /data/sonarr:/config
      - /share/downloads/tv:/share/downloads/tv
    restart: unless-stopped
    network_mode: service:gluetun

  transmission:
    image: lscr.io/linuxserver/transmission:latest
    container_name: transmission
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
      - TRANSMISSION_WEB_HOME= #optional
      - USER= #optional
      - PASS= #optional
      - WHITELIST= #optional
      - PEERPORT= #optional
      - HOST_WHITELIST= #optional
    volumes:
      - /data/transmission:/config
      - /share/downloads/movies:/share/downloads/movies
      - /share/downloads/books:/share/downloads/books
      - /share/downloads/tv:/share/downloads/tv
    restart: unless-stopped
    network_mode: service:gluetun

  watchtower:
    container_name: watchtower
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    network_mode: service:gluetun
```
You might also want to check out https://yams.media/, it’s pretty much an install script and configuration walkthrough that’s very complete and detailed. Includes most of the relevant *arrs with gluetun built in. Containerized. Choice of Emby, Plex or Jellyfin.
What’s not working for you?
For me after a decade using -arr the only thing I’ve had significant issues with has been trying to use the Tailscale integration on Unraid 7 to tunnel the dockers through an exit node which is… not at all the fault of -arr containers lol
Sorry to hear that that’s been your experience! :( My installation has been running for ~5 years without any problems
you got the hard links working?
Hard links are a built-in feature of basically every modern filesystem. The bigger question to me is, why aren’t hard links working for you?
Just found this. https://lemm.ee/post/58579926
Seems like I’m not so weird after all…
There needs to be an overlap in the mount points of the Jellyfin and Sonarr containers, etc. I don’t think I got it right. Besides, Sonarr ends up not moving the series into the TV shows folder, leaving the episodes outside, in the media folder above. If I knew exactly what was going on I would fix it. Last time I dealt with it was ages ago, so perhaps I can do it now.
Prowlarr, recyclarr, and trash guides.
it’s those pesky docker volume maps and hardlinks
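Yeah. The usual fix (roughly what the TRaSH guides recommend) is to mount one common parent folder into both the download client and the *arr container, so the downloads and the library end up on the same filesystem inside the containers and Sonarr can hardlink instead of copy. Rough sketch only, host paths are just examples, adjust to your setup:

```yaml
# Sketch: the key is that both containers share the SAME parent mount (/data),
# so /data/torrents and /data/media are one filesystem inside each container
# and hardlinks can work.
services:
  transmission:
    image: lscr.io/linuxserver/transmission:latest
    volumes:
      - /share/data:/data   # downloads land in /data/torrents/tv

  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    volumes:
      - /share/data:/data   # library lives in /data/media/tv
```

Then point Sonarr’s root folder at /data/media/tv and the download client’s save path at /data/torrents/tv, and check the “Use Hardlinks instead of Copy” toggle under Media Management, if I remember right.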
I’ve had the opposite experience. It all “just worked”. Try running unraid. It makes a lot of it so much easier.
It’s* been working
Looking at linuxserver/jackett on Docker Hub, it seems it does indeed update every day. I’m not receiving daily updates from my Gotify server, where watchtower reports the updates. But I guess it makes sense if it has some sort of automated build process. I’ve added the environment variable so it won’t be updated by watchtower, and I’ll keep an eye on it.
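For anyone wanting to do the same: if I remember right, the usual way is a label on the container you want watchtower to skip (there’s also a WATCHTOWER_DISABLE_CONTAINERS env var you can set on the watchtower container itself). Something like this in the service’s section of the compose file:

```yaml
  jackett:
    image: lscr.io/linuxserver/jackett:latest
    labels:
      # watchtower skips containers carrying this label set to false
      - com.centurylinklabs.watchtower.enable=false
```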
You can also tell watchtower to clean up images after an update so you don’t end up with all of those old ones.
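If I’m not mistaken that’s the WATCHTOWER_CLEANUP env var (same as the --cleanup flag), e.g.:

```yaml
  watchtower:
    image: containrrr/watchtower
    environment:
      # remove the old image after a container has been updated
      - WATCHTOWER_CLEANUP=true
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
```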
Interesting
If you’re just pulling “latest” then docker will fetch the latest when it starts. You can pin to a version tag if you want to keep it stable.
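In the compose file that would look something like this (the pinned tag below is only a placeholder, grab a real one from the image’s tag list):

```yaml
  jackett:
    # floating tag: a pull grabs whatever "latest" points at right now
    # image: lscr.io/linuxserver/jackett:latest
    # pinned tag: stays on this exact version until you change it yourself
    image: lscr.io/linuxserver/jackett:v0.22.1000-ls600   # example tag only
```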
I believe LinuxServer builds images every day for most of their containers, even when there have been no code changes.
If the code doesn’t change, the resulting docker image will have the same hash, and a new image won’t be created
https://github.com/jackett/jackett/releases
Jackett is literally just releasing a new version every day
Presumably because it updates daily
I thought so but my watchtower says “no”.