I run a small server with Proxmox, and I’m wondering what are your opinions on running Docker in separate LXC containers vs. running a specific VM for all Docker containers?
I started with LXC containers because I was more familiar with installing services the classic Linux way. I later added a VM specifically for running Docker containers. I’m wondering whether I should continue this strategy and just add some more resources to the Docker VM.
On one hand, backups seem to be easier with individual LXCs (I’ve had situations where I tried to update a Docker container, the new container broke the existing configuration, and I found it easiest just to restore the entire VM from backup). On the other hand, it seems like more overhead to install Docker in each individual LXC.
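For reference, the per-guest backup granularity being weighed here maps onto Proxmox’s `vzdump`. A rough sketch — the guest IDs, storage name, and archive paths below are made up for illustration (real `vzdump` archive names include timestamps):

```shell
# Back up a single LXC (ID 101 is hypothetical) as a compressed snapshot
vzdump 101 --mode snapshot --storage local --compress zstd

# Restoring that archive touches only that one service
pct restore 101 /var/lib/vz/dump/vzdump-lxc-101.tar.zst

# With one big Docker VM (ID 200), a restore rolls back every container inside it at once
qmrestore /var/lib/vz/dump/vzdump-qemu-200.vma.zst 200
```

That asymmetry is the trade-off: per-service restores with many LXCs vs. one all-or-nothing restore with a single Docker VM.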
I have been running Docker containers in both LXCs and VMs for a long time without issues or meaningful performance penalties. So I run important single Docker containers on top of LXC and everything else in Dockge / Portainer VMs.
What’s the purpose of running a container in a container? Why not install Docker on your host machine?
Honestly, I never really thought of installing Docker directly on Proxmox. I guess that might be a simpler solution, running Docker containers directly, but I kind of like to keep the hypervisor stripped down.
I personally like LXCs over VMs for my home lab, and I run a dedicated LXC for Docker and one running a single-node k8s.
I used to use LXC, and switched to a VM since the internet said it was better.
I kinda miss the LXC setup. Day to day I don’t notice any difference, but increasing storage space in the VM was a small pain compared to the LXC. In the VM I increased the disk size through Proxmox, but then I also had to grow the partition and filesystem inside the VM.
In an LXC you can just increase the disk size and it’s immediately available to the container.
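As a sketch of that difference — the guest IDs, disk names, and the ext4 filesystem below are assumptions for illustration, not from the thread:

```shell
# LXC: one step; the extra space shows up inside the container right away
pct resize 101 rootfs +8G

# VM: grow the virtual disk from the host first...
qm resize 200 scsi0 +8G

# ...then, inside the guest, grow the partition and the filesystem as well
growpart /dev/sda 1     # from the cloud-guest-utils package
resize2fs /dev/sda1     # assuming an ext4 filesystem
```

Some guest setups (LVM, cloud-init images with automatic growpart) soften the VM side of this, but the LXC path stays a single command either way.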
If you use live migration, realize that it doesn’t work on an LXC, only on VMs. Your containers will be restarted along with the LXC on the new node.
Run Docker at the host level. Every level down from there is not only a knock to performance across the spectrum, it just makes a mess of networking. Anyone in here saying “it’s easy to back up in a VM” has completely missed the point of containers, and apparently does not understand how to work with them.
You shouldn’t ever need to back up containers, and if you’re expecting data loss when one goes away, yerdewinitwrawng.
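The underlying point is that container state belongs in volumes or bind mounts, so the container itself stays disposable. A minimal sketch — the image name and paths here are hypothetical:

```shell
# Keep the app's state in a named volume, not inside the container's writable layer
docker volume create app-data

# Run the service with state on the volume and config bind-mounted read-only
docker run -d --name app \
  -v app-data:/var/lib/app \
  -v /srv/app/config.yml:/etc/app/config.yml:ro \
  example/app:1.0   # hypothetical image

# The container is disposable: destroy and recreate it, the data survives
docker rm -f app
docker run -d --name app -v app-data:/var/lib/app example/app:1.0
```

With that layout, what you back up is the volume and the config file, never the container.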
Is your server a dedicated server, or a VPS? Because if it’s a VPS, you’re probably already running in a VM.
Adding a VM might provide more security, especially if you aren’t an expert in LXC security configuration. It will add overhead. Running Docker inside Docker provides nothing but more overhead and unnecessary complexity to your setup.
Also, because it isn’t clear to me from your post: LXC and Docker are two ways of doing the same thing, using the same kernel capabilities. Docker was, in fact, originally built on top of LXC. The only real difference is the container format. Saying “running Docker on LXC” is like saying “running Docker on Docker,” or “running Docker on Podman,” or “running LXC on Docker”. All you’re doing is nesting container implementations. As opposed to VMs, which do not just use Linux namespace capabilities, and which emulate an entirely different computer.
LXC, Podman, and Docker use the underlying OS kernel and resources. VMs create new, virtual hardware (necessarily sharing the same hardware architecture, but nothing else from the host) and run their own kernels.
Saying “Docker VM” is therefore confusing. Containers - LXC, Podman, or Docker - don’t create VMs. They partition and segregate resources from the host, but they do not provide a virtual machine. You cannot run OpenBSD in a Docker container on Linux; you can run OpenBSD in a VM on Linux.
It’s a dedicated server (a small Dell micro PC). Thanks for the comment, I understand the logic; I was approaching it more from an end-user perspective of what’s easier to work with, which given my skill set is LXC containers. I have a VM on top of Proxmox specifically for Docker :-)
LXC and Docker are not equivalent. They are system and application containers, respectively.
This thread has raised so many questions I’d like answered:
- Why are people backing up containers?
- Why are people running docker-in-docker?
- I saw someone mention snapshotting containers…what’s the purpose of this?
- Why are people backing up docker installs?
Seriously, I thought I was going crazy reading some of these, and now I’m convinced the majority of people posting suggestions in here do not understand how to use containers at all.
Flat-file configs, volumes, layers, versioning… it’s like people don’t know what these are or how to use them, and that is incredibly disconcerting.
Follow-up question: do you have any good resources to start with for a simple overview on how we should be using containers? I’m not a developer, and from my experiences most documentation on the topic I’ve come across targets developers and devops people. As someone else mentioned, I use docker because it’s the way lots of things happen to be packaged - I’m more used to the Debian APT way of doing things.
I don’t have anything handy, but I see your point, and I’d shame lazy devs for not properly packaging things maybe 😂
You mentioned you use Proxmox, which is already an abstraction on bare metal, so that’s about as easy an interface as I can imagine for a hosted machine without using something like Docker Desktop to manage a machine remotely (not a good idea).
As a developer, I guess I was slightly confused by some of the suggested ways of using things being posted in this sub, but some of the responses clarify that. There isn’t enough simplicity in explaining the “what” of containers, so people just use them the simplest way they understand, which also happens to be the “wrong way”. It’s kind of hard to grasp that when you live with these things 24/7 for years. Kind of a similar deal with networking solutions like Tailscale, where I see people installing it everywhere and not understanding why that’s a bad idea 😂
To save you a lot of learning: just don’t go down a rabbit hole if you want something to work well. Ping back here if you get into a spot of trouble, and I’ll definitely hop in to give a more detailed explanation of a workflow that’s more effective than what it seems most people in here are using.
In fact, I may have just been inspired to do a write up on it.
Fair enough, would love to read something like this :-)
Yeah, I’ve been into Linux for 20 years, sometimes a bit on/off, as an all-around sysadmin in mainly Windows places. I learned just enough Docker to use it instead of apt - which I’d prefer, but as you said, many newer services don’t exist in Debian repos or as .deb packages, only as Docker images or similar.
If you’re familiar with Linux, just read the Dockerfile of any given project. It’s literally just a script for building and running a thing. You can take that info and install it however you’d like if needed.
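To illustrate with a made-up example: a Dockerfile’s RUN/COPY/CMD lines translate almost directly into commands you could run on a plain Debian host or LXC. The package name `someapp` and its config path are hypothetical:

```shell
# A (hypothetical) upstream Dockerfile might read:
#   FROM debian:bookworm
#   RUN apt-get update && apt-get install -y someapp
#   COPY someapp.conf /etc/someapp.conf
#   CMD ["someapp", "--config", "/etc/someapp.conf"]

# The classic-install equivalent is roughly:
apt-get update && apt-get install -y someapp
cp someapp.conf /etc/someapp.conf
someapp --config /etc/someapp.conf
```

The main thing the Dockerfile hides is the base image: check which distro release it builds `FROM`, since the package versions it expects come from there.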