I’ve got a QNAP NAS and two Linux servers. Whenever the power goes down, the UPS kicks in and shuts down the NAS and the Linux servers, all good. The servers and NAS are started automatically when the power comes back online using WOL.
The problem is that I have apps running using Docker which heavily rely on connections to the NAS. As the Linux servers boot quicker than the NAS, the mount points are not mounted, and thus everything falls apart. Even when I manually re-mount, it’s not propagated to the Docker instances. All mount points use NFS.
Currently, I just reboot the Linux servers manually, and then all works well.
Probably the easiest would be to run a cron job that checks the mounts every x minutes and, if they are not mounted, just reboots. The only issue is that this may cause an infinite loop of reboots if, say, the NAS has been turned off.
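As a sketch of that cron-check idea with a guard against the reboot loop: only reboot while the mount is missing *and* a reboot counter hasn’t hit a cap. The mount point, state file path, and cap are all made-up placeholders here.

```shell
#!/bin/sh
# Hypothetical cron-job sketch: reboot when the NFS mount is missing,
# but track consecutive reboots in a state file so a dead NAS doesn't
# cause an endless reboot loop. All paths are assumptions.

MOUNTPOINT=${MOUNTPOINT:-/mnt/nas}
STATE=${STATE:-/var/tmp/nas-reboot-count}
MAX_REBOOTS=3

# Decide whether a reboot is warranted; prints "reboot" or "skip".
check_and_count() {
    if mountpoint -q "$MOUNTPOINT"; then
        rm -f "$STATE"       # mount is healthy: reset the counter
        echo skip
        return
    fi
    count=$(cat "$STATE" 2>/dev/null || echo 0)
    if [ "$count" -ge "$MAX_REBOOTS" ]; then
        echo skip            # NAS is probably off for good: stop looping
        return
    fi
    echo $((count + 1)) > "$STATE"
    echo reboot
}

# In the real cron job you would act on the decision:
# [ "$(check_and_count)" = reboot ] && /sbin/reboot
```

After three failed attempts the script gives up until the mount is seen healthy again, which resets the counter.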
I could also install a monitoring solution, but I’ve seen so many options that I’m not sure which one to pick. If it’s easier with a monitoring solution, I’d like the simplest one.
Just speculating… is it possible to mount NFS through systemd and make the docker service dependent on that mount?
This is the answer. You can straight up make things dependent on `.mount` units that represent entries in fstab. To add to that, you can create any number of systemd services that just check whether something is “as you want it” and only then “start”. You simply make the Exec line `/bin/bash -c 'your script here'`, then make whatever else you want dependent on it. For example, I have such a unit that monitors for Internet connectivity by checking some public DNS servers, and services that need the Internet depend on it. Here, for example, is my Plex service, which demonstrates how to depend on a mount and on Docker, and shows how to manage a Docker container with systemd:
~$ cat /etc/systemd/system/plex-docker.service

```ini
[Unit]
Description=Plex Media Server
After=docker.service network-internet.service media-storage\x2dvolume1.mount

[Service]
TimeoutStartSec=0
Restart=always
RestartSec=10
ExecStartPre=-/usr/bin/docker rm -f plex
ExecStartPre=/usr/bin/docker pull plexinc/pms-docker:latest
ExecStart=/usr/bin/docker run \
    --name plex \
    --net=host \
    -e TZ="US/Eastern" \
    -e "PLEX_UID=1000" \
    -e "PLEX_GID=1000" \
    -v /tmp:/tmp \
    -v /var/lib/plex/config:/config \
    -v /var/cache/plex/transcode:/transcode \
    -v "/media/storage-volume1:/media/storage-volume1" \
    plexinc/pms-docker:latest

[Install]
WantedBy=multi-user.target
```
BTW, you can also do timers in systemd, which let you do everything cron does but much more flexibly, and they can use dependencies too.
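As a sketch of the timer approach, a pair of units like this could replace the cron-job idea. The unit names, mount path, and the choice to restart Docker (rather than reboot) are all illustrative assumptions:

```ini
# /etc/systemd/system/check-nas-mount.service
[Unit]
Description=Verify the NAS NFS mount is present

[Service]
Type=oneshot
ExecStart=/bin/bash -c 'mountpoint -q /mnt/nas || systemctl restart docker'
```

```ini
# /etc/systemd/system/check-nas-mount.timer
[Unit]
Description=Run the NAS mount check periodically

[Timer]
OnBootSec=2min
OnUnitActiveSec=5min

[Install]
WantedBy=timers.target
```

You’d enable it with `systemctl enable --now check-nas-mount.timer` and watch runs with `systemctl list-timers`.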
You can use `RequiresMountsFor=` (e.g. `RequiresMountsFor=/media/storage-volume1`) instead of manually adding `.mount` units to `After=`/`Requires=`. You can then use `.mount` files or fstab, as you’re stipulating the path rather than a potentially changeable systemd unit name. The systemd.mount manpage also strongly recommends using fstab for human-added mount points over `.mount` files.

Oh this is nice. I’ll probably start using it.
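For the Plex unit earlier in the thread, that would look roughly like this (a sketch, not the original author’s file):

```ini
[Unit]
Description=Plex Media Server
After=docker.service
Requires=docker.service
# Pulls in (Requires= plus After=) the mount unit backing this path,
# whether it comes from fstab or a .mount file:
RequiresMountsFor=/media/storage-volume1
```

Because `RequiresMountsFor=` takes a path, it keeps working even if the mount unit’s escaped name (`media-storage\x2dvolume1.mount`) would otherwise change.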
That’s interesting! I’ve converted all my docker run commands to docker compose, as I found that easier to manage. But I guess you can’t do the dependencies like you have. Also, yours has the advantage that it always pulls the latest.
Doesn’t seem mutually exclusive. Replace the docker rm with compose down and the docker run with compose up.
Exactly. In fact I have a few multi-container services with docker-compose that I have to write systemd unit files for.
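A hypothetical sketch of such a unit wrapping a compose stack, so it gains the same mount/Docker ordering; the service name, working directory, and mount path are placeholders:

```ini
# /etc/systemd/system/myapp-compose.service -- illustrative example
[Unit]
Description=myapp (docker compose stack)
After=docker.service
Requires=docker.service
RequiresMountsFor=/mnt/nas

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/opt/myapp
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down
TimeoutStartSec=0

[Install]
WantedBy=multi-user.target
```

`Type=oneshot` with `RemainAfterExit=yes` fits compose well because `docker compose up -d` returns once the stack is started, while systemd still tracks the unit as active.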
Perhaps you could also add the mounts as dependencies to the Docker daemon.
Sorry, I’m absolutely not a Linux expert :) I use /etc/fstab for the mounts, and to manually re-mount I run “mount -a”.
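For that fstab setup, mount options alone can already help; this is a sketch with placeholder hostname and paths. `nofail` stops boot from hanging on a missing NAS, and `x-systemd.automount` defers the mount until first access:

```
# /etc/fstab -- illustrative NFS entry
mynas:/share/media  /mnt/nas  nfs  _netdev,nofail,x-systemd.automount,x-systemd.mount-timeout=30  0  0
```

With `x-systemd.automount`, systemd generates an automount unit for the path, so the first process to touch `/mnt/nas` triggers the actual NFS mount.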
This is a great opportunity to learn a bit of systemd then. Look at my other comment. I’ve had a nearly identical problem which prompted me to learn in order to solve it years ago.
Especially if you find a corner case autofs doesn’t cover. ☺️
Awesome, yes, definitely will do. After years of using Linux, the whole systemd thing is still a bit of a black box to me. I know how to create /start/stop services etc but that’s about it. Thanks for the prompt replies!
I think this is the way!
I think this is a good opportunity to write something positive about systemd.
I start my services with systemd. I also moved my containers and docker-compose stacks to be started by systemd. And it does mounts and bind-mounts, too. So I removed things from /etc/fstab and instead created unit files for systemd to mount the network shares. Then you can edit the service file that starts the docker container and say it relies on the mount. systemd will figure it out, start everything in the correct order, and wait until the network and the mounts are there.
You have to put some effort in but it’s not that hard. And for me it’s turned out to be pretty reliable and low maintenance.
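A sketch of such a `.mount` unit; the NAS hostname and export are placeholders. Note the file name has to be the escaped form of the mount path (`/mnt/nas` → `mnt-nas.mount`):

```ini
# /etc/systemd/system/mnt-nas.mount
[Unit]
Description=NFS share from the NAS
After=network-online.target
Wants=network-online.target

[Mount]
What=mynas:/share/media
Where=/mnt/nas
Type=nfs
Options=_netdev,vers=4

[Install]
WantedBy=multi-user.target
```

A service can then declare `RequiresMountsFor=/mnt/nas` (or `Requires=mnt-nas.mount`) and systemd handles the ordering.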
The absolute easiest and simplest would be to modify your grub config to have a longer timer on the boot menu, effectively delaying them until the NAS is up.
That doesn’t necessarily mean it’s the best option- there are ways to make the actual boot process wait for mounts, or to stagger the WOL signals, or the solutions others have mentioned. But changing grub is quick and easy.
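Concretely, that’s one line in `/etc/default/grub` (the 120-second value is just an illustration), followed by regenerating the config:

```
# /etc/default/grub
GRUB_TIMEOUT=120   # wait two minutes at the boot menu

# then regenerate, e.g. on Debian/Ubuntu:
#   sudo update-grub
```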
Try looking into “autofs”.
Thanks! I’ve just set that up. That would seem to solve the problem, right, without reboots?
Yes. The important detail is that it remounts the path once the path gets accessed. So I set up a cron job to `ls` the path every few minutes to make sure it’s always remounted quickly.
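A minimal autofs sketch of that setup, assuming placeholder hostname and paths: a master map pointing at an indirect map for the NAS shares.

```
# /etc/auto.master -- mounts managed under /mnt/nas, unmounted after 60s idle
/mnt/nas  /etc/auto.nas  --timeout=60

# /etc/auto.nas -- export path and hostname are placeholders
media  -fstype=nfs4,hard  mynas:/share/media
```

After `systemctl restart autofs`, accessing `/mnt/nas/media` triggers the mount on demand, which is why the periodic `ls` keeps it warm.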
Best option is to delay docker startup until the mounts are ready.
Read up on systemd mounts and systemd dependencies.
It’s been a while since I set it up, but from memory my mount point was set to be owned by root and immutable. That stopped any of my docker containers making new files and folders if the mounted drive or network location was not mounted or unavailable.
Yeah I used /etc/fstab which are static mounts.
I switched to autofs and that seems to be much better, as it does the mounts “at runtime”, i.e. when requested.
Not sure how Docker behaves, but in a Stack/Compose file you can define volumes to use a specific driver, such as smb. E.g.:
```yaml
volumes:
  smb-calibre:
    driver_opts:
      type: "smb3"
      device: "//mynas/ebooks/CalibreDB"
      o: "ro,vers=3.1.1,addr=192.168.1.1,username=mbirth,password=secret,cache=loose,iocharset=utf8,noperm,hard"
```
So Docker will take care of mounting. Paired with `restart: unless-stopped`, all my containers come back online just fine after an outage.

You can use bind mounts instead of volumes to prevent the container from starting when the target is missing.
https://docs.docker.com/storage/bind-mounts/
I’m not sure what happens if the target goes down while the container is running. And you would still need a monitoring solution for telling the container to start when the target comes up.
The other options of making the containers dependent on mounts or similar are all really better, but a simple enough one is to use SMB/CIFS rather than NFS. It’s a lot more transactional in design, so if the share vanishes for a bit it will just come back when the NAS is available again. It’s also a fair bit heavier on the overhead, though.
Using NFSv4 seems to work in similar fashion without the overhead though I haven’t dug into the exact back and forth of the system to know how it differs from the v3 to accomplish that.