I run a small server with Proxmox. What are your opinions on running Docker in separate LXC containers vs. running a dedicated VM for all Docker containers?

I started with LXC containers because I was more familiar with installing services the classic Linux way. Later I added a VM specifically for running Docker containers. I’m wondering whether I should continue that strategy and just add more resources to the Docker VM.

On one hand, backups seem easier with individual LXCs (I’ve had situations where I tried to update a Docker container, the new container broke the existing configuration, and I found it easiest to just restore the entire VM from backup). On the other hand, it seems like more overhead to install Docker in each individual LXC.

  • MangoPenguin@lemmy.blahaj.zone · 6 days ago

    Regardless of VM or LXC, I would only install Docker once. There’s generally no need to create multiple Docker VMs/LXCs on the same host, unless you have a specific reason, like isolating outside traffic by running a separate Docker setup for public-facing services only.

    Backups are the same with VM or LXC on Proxmox.

    The main advantages of LXC that I can think of:

    • Slightly less resource overhead, but not much (a minimal Debian or Alpine VM is pretty lightweight already).
    • Ability to pass through directories from the host (see the sketch after this list).
    • Ability to pass through hardware acceleration from a GPU, without passing through the entire GPU.
    • Ability to change CPU cores or RAM while it’s running.
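
    For example, roughly like this on the Proxmox host (CTID 100, the paths, and the device numbers are just placeholders; unprivileged containers may need extra id mapping for the GPU part):

      # bind-mount a host directory into container 100
      pct set 100 -mp0 /tank/media,mp=/media

      # pass /dev/dri through for GPU hardware acceleration
      # (major number 226 is the usual one for DRI devices)
      echo 'lxc.cgroup2.devices.allow: c 226:* rwm' >> /etc/pve/lxc/100.conf
      echo 'lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir' >> /etc/pve/lxc/100.conf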
    • non_burglar@lemmy.world · 6 days ago

      I use an individual LXC for each Docker Compose stack so I don’t have to revert 8 services at once if I need to restore.

      I would also argue that an Alpine LXC runs in about 22 MB of RAM by itself … a significantly smaller footprint on disk and in memory. But most importantly, LXCs can actually share memory with the host effectively; you don’t need to reserve blocks of RAM the way a VM does.

      • sugar_in_your_tea@sh.itjust.works · 6 days ago

        You don’t have to revert 8 services; you can stop/start them independently: docker compose stop <service name>.

        This is actually how I update my services: I just stop the ones I want to update, pull, and restart them. I do them one or two at a time, mostly to mitigate issues. The same is true for pulling down new versions; my process is:

        1. edit the docker-compose file to update the image version(s) (e.g. from 1.0 -> 1.1, or 1.1 -> 2.0); I check the changelog/release notes for any manual upgrade steps
        2. pull the new images (this doesn’t impact running services)
        3. docker compose up -d brings up any stopped services using the new image(s)
        4. test
        5. go back to 1 until all services are done

        I do this whenever I remember, and it works pretty well.
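
        As a rough sketch of one round of that loop (the service name is just a placeholder):

          # after bumping the image tag for one service in docker-compose.yml
          docker compose stop myservice       # only this service goes down
          docker compose pull myservice       # fetch the new image
          docker compose up -d myservice      # recreate it from the new image
          docker compose logs -f myservice    # watch it come up, then test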

  • bizdelnick@lemmy.ml · 6 days ago

    What’s the purpose of running a container in a container? Why not install Docker on your host machine?

    • darkknight@discuss.online · 2 days ago

      You want to keep modifications to the host to a minimum in a virtualization setup. It makes troubleshooting so much easier.

      • sugar_in_your_tea@sh.itjust.works · 6 days ago

        I don’t use Proxmox, but Docker works absolutely fine for me on my regular Linux system, which has a firewall, some background services, etc. Could you be more specific about the issues you’re running into?

        Also, I only really expose two services on my host:

        • Caddy - handles all TLS and proxies to all the other services on the internal Docker network
        • Jellyfin - my crappy smart TV doesn’t seem to be able to handle Jellyfin + TLS for some reason; it causes the app to lock up

        Everything else just connects through an internal-only Docker network.

        If you’re getting conflicts, I’m guessing you’ve configured things oddly, because by default Docker creates its own virtual interface specifically so it doesn’t interfere with anything else on the host.
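
        Roughly, the layout looks like this (names and images are placeholders, and I’m using plain docker run here rather than my actual compose file):

          # internal-only network: containers on it can reach each other,
          # but it isn’t reachable from outside the host
          docker network create --internal backend

          # an app that should not be exposed directly
          docker run -d --name someapp --network backend someapp:latest

          # Caddy publishes 80/443 on the host and also joins the internal
          # network so it can reverse-proxy to someapp by container name
          docker run -d --name caddy -p 80:80 -p 443:443 \
            -v "$PWD/Caddyfile":/etc/caddy/Caddyfile caddy:latest
          docker network connect backend caddy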

          • sugar_in_your_tea@sh.itjust.works · 5 days ago

            I don’t use Proxmox, so I guess I don’t understand the appeal. I don’t see any reason to back up a container or a VM; I just back up configs and data. Backing up a VM makes sense if you have a bunch of customizations, but that’s pretty much the entire point of Docker: you quarantine your customizations to your configs, so everything is completely reproducible as long as you have the configs and data.

            • MangoPenguin@lemmy.blahaj.zone · 5 days ago

              Ease of use, mostly; one click to restore everything, including the OS, is nice. You can also easily move them to other hosts for HA or maintenance.

              Not everything runs in Docker, either, so it’s extra useful for those VMs.

    • ddh@lemmy.sdf.org · 6 days ago

      If you do that, Docker is stuck on that host. If it’s in an LXC it can move to another host. Plus, backing up and snapshotting are easier IMO.

      • bizdelnick@lemmy.ml · 6 days ago

        Snapshotting in Docker is as easy as docker commit. After that you can back the image up with docker save and then move it to another host, though not without downtime.

        However, normally you only need to back up or move the volumes attached to containers. If that’s not how you like to organize your services, you likely don’t need Docker.
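
        Roughly (container, image, and volume names are placeholders):

          # snapshot a running container’s filesystem as an image, then export it
          docker commit myapp myapp:snapshot
          docker save myapp:snapshot | gzip > myapp-snapshot.tar.gz
          # on the other host: gunzip -c myapp-snapshot.tar.gz | docker load

          # the more usual approach: back up just the named volume’s contents
          docker run --rm -v myapp_data:/data -v "$PWD":/backup alpine \
            tar czf /backup/myapp_data.tar.gz -C /data .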