Mostly because of two flags: --read-only and --log-driver.
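
In practice that looks something like this (a minimal sketch; the image name is a placeholder):

    # --read-only mounts the container's root filesystem read-only,
    # --log-driver none keeps container logs off the SD card entirely,
    # and --tmpfs gives the app a RAM-backed scratch directory
    docker run -d --read-only --log-driver none --tmpfs /tmp myservice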

  • losttourist@kbin.social · 11 months ago

    I’m not sure why Docker would be a particularly good (or particularly bad) fit for the scenario you’re referring to.

    If you’re suggesting that Docker could make it easy to transfer a system onto a new SD card if one fails, then yes that’s true … to a degree. You’d still need to have taken a backup of the system BEFORE the card failed, and if you’re making regular backups then to be honest it will make little difference if you’ve containerised the system or not, you’ll still need to restore it onto a new SD card / clean OS. That might be a simpler process with a Docker app but it very much depends on which app and how it’s been set up.

    • AggressivelyPassive@feddit.de · 11 months ago

      I think the idea is rather that a read-only container, as the name implies, only reads and doesn’t write. Since SD cards aren’t exactly great at being written to often, that could increase the lifetime of the SD card.
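
      You can see the effect for yourself; a write attempt inside a read-only container simply fails:

        docker run --rm --read-only alpine touch /test
        # fails with a "Read-only file system" error instead of wearing the card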

      • losttourist@kbin.social · 11 months ago

        I’m still struggling to understand what advantage Docker brings to the set-up.

        Maybe the application doesn’t need to write anything to disk at all (which seems unlikely) but if so, then you’re not saving any disk-write cycles by using docker.

        Or maybe you want it only to write to filesystems mounted from longer-life storage, e.g. a magnetic disk, and mark the SD card filesystems as --read-only. In which case you could mount those filesystems directly in the host OS (indeed you have to do this to make them visible to docker) and configure the app to use them directly, no need for docker.
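
        For reference, the docker flavour of that is just a writable bind mount on top of an RO root (paths and image name here are made up):

          # everything is read-only except the bind mount from the magnetic disk
          docker run -d --read-only -v /mnt/hdd/appdata:/data myapp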

        Docker has many great features, but at the end of the day it’s just software - it can’t magic away some of the foundational limitations of system architecture.

        • AggressivelyPassive@feddit.de · 11 months ago

          I think you still don’t get the idea of read-only containers.

          They’re set up in a way that prohibits any writes except to some very well defined locations. That could mean piping logs directly to stdout instead of writing them to disk, not caching on disk, etc.
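
          In compose terms that’s roughly (names here are illustrative):

            services:
              app:
                image: myapp
                read_only: true   # no writes anywhere...
                tmpfs:
                  - /tmp          # ...except this RAM-backed location
                logging:
                  driver: none    # and container logs never hit the disk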

          That is standard practice in professional setups (though for security reasons).

          No, it’s not magic, but software can get configured, you know? And if you do that properly, you might see a change in behavior.

          • aksdb@feddit.de · 11 months ago

            If the application in question doesn’t need to write anything, it also doesn’t write outside of docker, so it also won’t wear down the SD card.

            If the app has to write something, a fully read-only container simply won’t work (the app will crash or otherwise fail).
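
            The classic example (it’s in Docker’s own docs, if I remember right) is stock nginx, which writes a cache and pidfile at startup:

              docker run --rm --read-only nginx    # fails: Read-only file system
              docker run --rm --read-only --tmpfs /var/cache/nginx --tmpfs /var/run nginx    # works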

  • sir_reginald@lemmy.world · 11 months ago

    honestly, it’s not worth it. hard drives are cheap, just plug one in via USB 3 and do all the write operations there. that way your little SBC doesn’t suffer the performance overhead of using docker.
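
    e.g. something like this in /etc/fstab (device and mountpoint are just examples):

      /dev/sda1  /mnt/usb  ext4  defaults,noatime  0  2
      # then point whatever writes a lot (databases, logs, downloads) at /mnt/usb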

    • aksdb@feddit.de · 11 months ago

      The point with an external drive is fine (I did that on my RPi as well), but the point with performance overhead due to containers is incorrect. The processes in the container run directly on the host. You even see the processes in ps. They are simply confined using cgroups to be isolated to different degrees.
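
      Easy to check for yourself:

        docker run -d --name demo alpine sleep 300
        ps -ef | grep 'sleep 300'   # shows up in the host process list, no VM in between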

      • sir_reginald@lemmy.world · 11 months ago

        docker images have a ton of extra processes from the OS they were built in. Normally a light distro is used to build images, like Alpine Linux. But still, you’re executing a lot more processes than if you were installing things natively.

        Of course the images don’t contain the kernel, but they still contain a lot of extra processes that would be unnecessary if executing natively.

        • aksdb@feddit.de · 11 months ago

          To execute more than one process, you need to explicitly bring along some supervisor or use a more complicated entrypoint script that orchestrates this. But most container images have a simple entrypoint pointing to a single binary (or at most running a script to do some filesystem/permission setup and then run a single process).

          Containers running multiple processes are possible, but hard to pull off and therefore rarely used.

          What you’re likely thinking of are the files included in the images. Sure, some images bring along more libs and executables. But they are not started and/or running in the background (unless you explicitly start them as the entrypoint or using, for example, docker exec).
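
          You can check what a given image actually starts, e.g.:

            docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' nginx
            # everything else in the image is just files on disk until you run it yourself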

  • Engywuck@lemm.ee · 11 months ago

    I use docker myself on my RPi4, but the OS is on a 128 GB SSD connected through USB3. These SSDs are pretty cheap nowadays and (likely?) more resilient than SD cards…

  • Avid Amoeba@lemmy.ca · 11 months ago

    Unless you make your host OS read-only, it will itself keep writing while running your docker containers. Furthermore, slapping read-only on a docker container won’t make the OS you’re running in it able to run correctly with an RO root fs. The OS must be able to run with an RO root fs to begin with, which is the same problem you need to solve for the host OS. So you see, it’s the same problem, and docker doesn’t solve it. It’s certainly possible to make a Linux OS that runs on an RO root fs, and that’s what you need to focus on.
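
    The gist of an RO root on the host is something like this in /etc/fstab (a sketch only; a real setup needs more care around things like dhcp leases and /var/lib):

      /dev/mmcblk0p2  /         ext4   defaults,ro,noatime  0  1
      tmpfs           /tmp      tmpfs  defaults,noatime     0  0
      tmpfs           /var/log  tmpfs  defaults,noatime     0  0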

  • Synthead@lemmy.world · 11 months ago

    I think Docker is a tool, and it depends on how you implement said tool. You can use Docker in ways that make your infra more complicated, less efficient, and more bloated with little benefit, if not a loss of benefits. You can also use it in a way that promotes high uptime, fail-overs, responsible upgrades, etc. Just “Docker” as-is does not solve problems or introduce problems. It’s how you use it.

    Lots of people see Docker as the “just buy a Mac” of infra. It doesn’t make all your issues magically go away. Me, personally, I have a good understanding of what my OS is doing, and what software generally needs to run well. So for personal stuff where downtime for upgrades means that I, myself, can’t use a service while it’s upgrading, I don’t see much benefit for Docker. I’m happy to solve problems if I run into them, also.

    However, in high-uptime environments, I would probably set up a k8s environment with heavy use of Docker. I’d implement integration tests with new images and ensure that regressions aren’t being introduced as things go out with a CI/CD pipeline. I’d leverage k8s to do A-B upgrades for zero downtime deploys, and depending on my needs, I might use an elastic stack.
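
    The zero-downtime part mostly boils down to the Deployment’s rolling-update strategy (values here are illustrative):

      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 0   # never drop below full capacity
          maxSurge: 1         # bring a new pod up before an old one is taken down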

    So personally, my use of Docker would be for responsible shipping and deploys. Docker or not, I still have an underlying Linux OS to solve problems for; they’re just housed inside a container. It could be argued that you could use a first-party upstream Docker image for less friction, but in my experience, I eventually want to tweak things, and I would rather roll my own images.

    For SoC boards, resources are already at a premium, so I prefer to run on metal for most of my personal services. I understand that we have very large SoC boards that we can use now, but I still like to take a simpler, minimalist approach with little bloat. Plus, it’s easier to keep track of things with systemd services and logs anyway, since it uniformly works the way it should.
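
    That uniformity is the nice part: any service, same two commands ("myservice" being whatever unit you run):

      systemctl status myservice
      journalctl -u myservice --since today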

    Just my $0.02. I know plenty of folks would think differently, and I encourage that. Just do what gives you the most success in the end 👍

  • Mikelius@lemmy.ml · 11 months ago

    I don’t use those two flags, but I have several Pis running docker with no issues. They’ve been running (almost) 24/7/365 for going on maybe 2 years now with the same SD cards.