I have a confession to make.
I’ve been working in IT for about six or seven years now and I’ve been self-hosting for about five. In all that time, at work or at home, I’ve never bothered with backups. I know they are essential for every IT network, but I never took the time to learn them properly. A few copies of some hard disks here and there is honestly all I know. I’ve tried a few times, but I often found the learning curve too steep, or the command line threw errors I didn’t want to troubleshoot.
It is time to make a change. I’m looking for an easy-to-learn backup solution for my home network. I’m running a Proxmox server with about 8 VMs on it, including a NAS full of photos and a media server with lots of movies and shows. It has 2x 8TB disks in a RAID1 set. Besides that, I’ve got two Windows laptops and a Linux desktop.
What could be a good backup solution that is also easy to learn?
I’ve tried Borg, but I couldn’t figure out all the command-line options. I’m leaning towards Proxmox Backup Server, but I don’t know if it works well with anything other than my Proxmox server. I’ve also thought about Veeam, since I encounter it sometimes at work, but the free version only supports up to 10 devices.
My plan now is to create two backup servers: one on-site, running on something like a Raspberry Pi or an HP EliteDesk, and an HP MicroServer N40L that I can store off-site.
What could be the perfect backup solution for me?
EDIT:
After a few replies I feel the need to mention that I’m looking for a free and centrally managed option. Thanks!
I’ve been working in IT for about six or seven years now and I’ve been self-hosting for about five. In all that time, at work or at home, I’ve never bothered with backups.
That really is quite a confession to make, especially in a professional context. But good on you for finally coming around!
I can’t really recommend a solution with a GUI, but I can tell you a bit about how I back up my homelab. Like you, I have a Proxmox cluster with several VMs and a NAS. I’ve mounted some storage from my NAS into Proxmox via NFS; this is where I let Proxmox store backups of all VMs.
On my NAS I use restic to back up to two targets: an offsite NAS, which contains full backups, and additionally Wasabi S3 for the stuff I really don’t want to lose. I like restic a lot and found it rather easy to use (also coming from borg/borgmatic). It supports many different storage backends and multithreading (looking at you, Borg).
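As a rough sketch, a two-target restic job like the one described above could look like this. All hostnames, paths, and bucket names here are placeholders, not details from the comment; only the restic repository syntax (`sftp:` and `s3:` backends) is from restic itself.

```shell
#!/bin/sh
# Sketch of a two-target restic setup: full backups to an offsite NAS,
# plus the irreplaceable subset to Wasabi S3. Paths/hosts are made up.
FULL_SRC="/mnt/tank"
CRITICAL_SRC="/mnt/tank/photos"

# restic reads the repo password from a file so the job can run unattended.
export RESTIC_PASSWORD_FILE="${RESTIC_PASSWORD_FILE:-$HOME/.restic-pass}"

backup_all() {
    # Target 1: full backup to an offsite NAS over SFTP.
    restic -r "sftp:backup@offsite-nas:/backups/repo" backup "$FULL_SRC"
    # Target 2: only the critical data to Wasabi S3.
    restic -r "s3:https://s3.wasabisys.com/my-backup-bucket" backup "$CRITICAL_SRC"
}

# Only attempt the real thing when restic is installed.
if command -v restic >/dev/null 2>&1; then
    backup_all
else
    echo "restic not installed; nothing to do"
fi
```

Both repositories are initialized once with `restic -r <repo> init` before the first backup; after that, every run is incremental and deduplicated automatically.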
I run TrueNAS, so I make use of ZFS Snapshots too. This way I have multiple layers of defense against data loss with varying restore times for different scenarios.
Do you use restic to move the backups to the remote on its own? Or are you using rclone to move your restic repo to the remote?
I don’t use rclone at all; restic is perfectly capable of backing up to remote storage on its own.
You mentioned Borg and all of its command-line options, but have you taken a look at borgmatic? It should be much easier to learn and use than Borg, while still retaining Borg’s features. Just note though that borgmatic probably doesn’t hit all of your stated requirements (e.g., no GUI).
Can confirm Borg/borgmatic. I was looking for something good too, and Borg is hands down the best. Borgmatic is a kind of wrapper for Borg that makes things even easier. One thing that makes Borg awesome is its excellent documentation. Maybe give CLI tools a try ;)
It looks good, but I think it’s difficult to work without a central view of all the machines’ statuses. How can you make sure all your machines have completed a successful backup?
Many folks use a centralized monitoring solution like Healthchecks to monitor backups across all of their servers. And borgmatic integrates directly with Healthchecks among others.
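To give an idea of what that integration looks like: borgmatic can ping a Healthchecks URL on success or failure. The repository path and ping URL below are placeholders, and the exact layout depends on your borgmatic version, so check its configuration reference.

```yaml
location:
    source_directories:
        - /home
    repositories:
        - /mnt/backup/borg-repo

hooks:
    # borgmatic pings this URL after each run, so Healthchecks can
    # alert you when a machine's backups silently stop happening.
    healthchecks: https://hc-ping.com/your-uuid-here
```

Healthchecks then alerts you not only on failures but also when a ping simply never arrives, which catches the “cron job quietly died” case a central dashboard is for.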
Proxmox Backup Server is free and absolutely essential in a PVE system. You can restore entire VMs, volumes, folders and files. You can keep many versions thanks to its fantastic dedup system, and you can mirror the backups to USB drives or other PBS remotes. If you’re using a ZFS filesystem on your PVE storage, every backup is snapshotted at a point in time to prevent database issues on restore.
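On OP’s question of whether PBS works with machines other than the Proxmox host: PBS also ships a standalone client for plain Linux boxes. A minimal sketch, where the user, host, and datastore name (“homelab”) are all made up for illustration:

```shell
#!/bin/sh
# Back up a plain Linux machine to a PBS datastore with the standalone
# client. Repository string format is user@realm@host:datastore; the
# concrete values below are hypothetical.
PBS_REPO="backup@pbs@pbs.lan:homelab"

if command -v proxmox-backup-client >/dev/null 2>&1; then
    # Archive the root filesystem as a .pxar archive; deduplication
    # happens against everything already stored in the datastore.
    proxmox-backup-client backup root.pxar:/ --repository "$PBS_REPO"
else
    echo "proxmox-backup-client not installed; nothing to do"
fi
```

Windows machines are not covered by this client, so OP’s laptops would still need a separate tool.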
I’m going to try that for my servers! What do you use for your files (music, photos and such)?
I run Nextcloud in Docker. I have a Debian LXC on Proxmox that runs my Docker containers, and since the backend storage is ZFS, I can snapshot it before any major upgrade to the OS or the containers. I have restored a whole LXC from PBS when something like my mail server got borked and I had forgotten to snapshot.
Maybe have a look at UrBackup. GUI, “centrally managed”, free…
And please, as mentioned in another comment, have a look at Borgmatic. It makes Borg really easy to use and has some super handy features. Super easy backups to multiple locations by just adding a line in the config… And I just love the healthchecks integration. Set and forget until either healthchecks notifies you of a problem or you really need to recover data.
I’m gonna look into that! Borgmatic looks a lot easier than Borg, but that CLI still scares me. I like working with Linux commands, but for something new like backups I want to click around in a GUI to set everything up.
When I got started I preferred GUI apps too, but the more you use CLI tools, the more you come to appreciate them. These days I find them better: they are more precise and have a good way of pushing you to use them correctly. They are also mostly well documented and even offer on-the-fly help with -h flags or similar. The Getting Started page of borgmatic is really well written, too. Just play around with it ;)
If you are not afraid of Windows: Veeam B/R (Community Edition)
It has a nice GUI and works very well.
The GUI is well explained, and there are knowledge bases for Hyper-V, VMware and some others.
The agent can be deployed manually, and Linux agents can write to a repository.
I don’t think Proxmox is a supported hypervisor, though. The Community Edition is free for up to 10 workloads, I think. Maybe take a look.
You could also try to get your hands on an NFR license, which has the premium features with a 1-year runtime.
Edit: I use the Windows agent for my personal rig and back up via SMB.
We use it at work, so I am partially biased towards that solution.
I’ll second Veeam. It only runs on Windows, but as far as backup and recovery software goes it’s the gold standard and the competition is not even close.
Have you ever had it back up a Proxmox cluster? I’d say it’s suboptimal advice to go for Veeam for this use case.
Yeah, I use Veeam for backups at work, but we run VMware and some MS servers, and use rsync or Bacula for our Linux boxes. A great product.
What would you recommend for me?
I have a homelab with:
1 laptop on Windows
3 desktop PCs (2 on Linux, 1 on Windows)
1 server running Proxmox VE
1 old 2-bay Synology NAS
Veeam is amazing for sure. Used it for years in workloads big and small. “It just works” is their tagline for a reason.
Unlike Bethesda’s :p
Free and centrally managed? I’m not aware of any, but I’m definitely interested in something like that too.
My current setup has Proxmox backing up all LXCs and VMs to a Synology NAS, and then the Synology NAS backing up to Backblaze. Both run nightly, using the built-in backup utility on Proxmox VE pointed at a CIFS share on the Synology NAS.
Synology does have a software backup client available, but I have never used it. My desktops and laptops are easily reinstalled and reconfigured; I just make sure the data I care about is stored or synchronized to my NAS or the cloud: Nextcloud for files, Firefox Sync for history and bookmarks, a Bitwarden client with Vaultwarden for passwords, and chezmoi for some dotfiles on some Linux systems.
Synology’s Active Backup surprised me with its quality for being essentially a “free” (bundled with the hardware) solution. In total it’s saved my bacon about 4-6 times already: twice for a desktop death, two restores of my PDC, one semi-successful save of my DHCP server (its eventual death was not ABB’s fault), and one bare-metal restore simply to upgrade the disk of a laptop. (Before you ask: yes, I do have two AD DCs in my homelab.) All in all it’s a lovely product, but it doesn’t fit the bill as a F/OSS backup system, so I don’t feel it deserves a root comment in this thread. Like OP, I myself have been looking for an OSS solution, not because I dislike my ABB deployment, but because I don’t want to be beholden to Synology forever (they annoyed me a touch with the announcement of drive firmware lock-in, and I do want to build my own NAS someday).
My RPO for critical assets (vCenter, AD, NAS, UniFi controller) and my personal desktops is 24h, and my RTO is whenever I get to it, though the software itself is pretty fast once engaged (but not wire rate). Non-critical assets are backed up on Sunday night. The schedules for critical and non-critical assets are staggered and interleaved with my Synology NAS’s own backups to USB and Backblaze. If I remember correctly, there is a “max running tasks” gate in ABB, but don’t quote me on that.
Most of my infra is ESXi (vSAN, iSCSI, local disk), so the majority of my backups use the snapshot-based VM backup feature. This goes pretty smoothly and has a fairly fine-grained retention schedule, so I’m happy. As a snapshot backup, though, you can’t restore just one file; you have to restore the VM as a whole.
My other two NAS (the VMs I run TrueNAS and Nextcloud on, respectively) use the file server rsync backup method. The latter is Linux, and I tried the native Linux agent a while back, but I remember running into a kernel version issue since it would have to install a snapshot driver. I stopped messing with the native Linux agent at that point, because I’ve seen what happens to XFS when you run a version of Acronis that doesn’t match the kernel version (it doesn’t end well for your data). Admittedly, that was the first major release of ABB for Linux, so some things may have changed in the interim. There will come a day when I need to back up a native Linux hardware box, and on that day I will also pick my distro, as far as possible, to match a kernel release supported by ABB.
The Windows native agent is nearly invisible and runs great. macOS I’ve (fingers crossed) never had to restore from, but my low-use Mac is connected and does show its jobs running regularly (and yes, I know a backup doesn’t exist unless it’s tested :P).
My last NAS & ESXi box were 12 years old when I retired them. I had thought about sticking with used enterprise gear but wanted a break, to be a little lazy for a couple of years. Storage is on a Synology (DS1520+) and Proxmox runs on an Asus PN63-S1 mini PC. Hyper Backup was the primary reason I chose Synology (I’ve always been lazy about off-site backups), and the Docker feature has come in handy for things like a secondary Pi-hole & DNS. LXCs with Docker or Podman have been able to cover the majority of my needs in Proxmox, but I still have Home Assistant & the UniFi Network Controller on their own VMs. Home Assistant I have zero plans to move. UniFi I eventually plan to move to Docker, but it works for now, albeit on an older version. I really need to up my documentation & diagram game, it’s all a huge mess, lol.
As for future plans, I would love to have a closet full of used enterprise servers running Proxmox with an all-flash Ceph storage backend; then I could run whatever NAS distro I want as a VM. Unfortunately my budget is focused elsewhere for the next year or two, so it’s gonna be a while unless something breaks.
Always like to hear about other setups as I am constantly re-thinking my own.
Good on you for finally getting into it. I switched to something systematic only very recently myself (previously it was “copy important stuff to an external HDD whenever I think of it”).
The one thing that I learned (luckily the easy-ish way) is: test your backups. Yes, it’s annoying, but since you rarely (ideally never!) need to restore a backup, it’s incredibly easy to believe everything is working when it either never worked properly or silently started failing at some point.
A backup solution that has never been tested via a full restore of at least something has to be assumed to be broken.
Which reminds me: I have to set up the cron job to periodically test a percentage of all backed up data.
I decided to use Kopia, btw, but can’t really say if that’s well-suited for your goals.
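That periodic verification job can be as small as one cron entry. The commenter uses Kopia, but as an example with restic (mentioned elsewhere in the thread; the repo path is a placeholder), restic’s `--read-data-subset` flag re-reads and verifies a random slice of the stored pack data:

```
# Weekly: verify repository structure plus 10% of the actual stored data
0 4 * * 0 restic -r /mnt/backup/repo check --read-data-subset=10%
```

Most serious backup tools have an equivalent verify command; the important part is scheduling it, not which tool runs it.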
I use the daily/weekly/monthly pattern for machine backups:
- Use an rsync job to copy whatever you deem important from the target machine to a daily backup dir. Run this once a day.
- Once a week, sync the daily dir to a weekly dir.
- Once a month, take a snapshot of the weekly dir as a tarball.
In addition to that I use Pika Backup (it’s a very user friendly GUI for Borg) to make incremental backups of the monthly dir to a couple of external HDDs.
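The daily/weekly/monthly rotation above could be sketched roughly like this; `SRC` and `BASE` are placeholders to set per machine, and cron supplies the cadence.

```shell
#!/bin/sh
# Sketch of a daily/weekly/monthly rsync rotation. Paths are placeholders.
SRC="${SRC:-/home/user/important}"
BASE="${BASE:-/srv/backup}"

daily() {   # run once a day: mirror the source into the daily dir
    mkdir -p "$BASE/daily"
    rsync -a --delete "$SRC/" "$BASE/daily/"
}
weekly() {  # run once a week: sync the daily dir to the weekly dir
    mkdir -p "$BASE/weekly"
    rsync -a --delete "$BASE/daily/" "$BASE/weekly/"
}
monthly() { # run once a month: snapshot the weekly dir as a tarball
    tar -czf "$BASE/monthly-$(date +%Y-%m).tar.gz" -C "$BASE" weekly
}

# Cron would call this script with the right argument at each cadence, e.g.:
#   0 3 * * *   backup.sh daily
#   0 4 * * 0   backup.sh weekly
#   0 5 1 * *   backup.sh monthly
case "${1:-}" in
    daily)   daily ;;
    weekly)  weekly ;;
    monthly) monthly ;;
esac
```

Note that this scheme keeps at most one daily and one weekly copy; the tarballs are what give you month-granularity history.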
I use restic for the incremental backups and deduplication. I feel tarballs don’t cover those two cases.
If you use a backup solution that does incremental backups and deduplication, you can probably replace the monthly tarball with a monthly deduplicated backup.
Tarballs are still useful for write-once backups, for example long-term archiving to optical media (burning Blu-rays).
I can’t speak for Proxmox specifically, but Duplicacy works great on my unRAID box and has a fully built-out GUI. One of the best solutions I’ve found for my uses so far.
I use Duplicacy too. I’m just worried that one day I can’t start the server and I’m stuck without access to Duplicacy. What would be the solution? Grab the folder from appdata and point a new Docker container at it?
I like BackupPC, it’ll do what you want but it may be more challenging to learn than some of these other options.
I use rclone and its GUI (https://rclone.org/gui/) in my Proxmox environment.
That said, the backup itself is still initiated via a batch script.
Edit: to backup my PC and all smartphones to my server I use syncthing.
And rclone backs the data up to a cloud system, some parts encrypted.
I use rclone as well, and was in your position not long ago (looking for an uncomplicated backup solution). I landed on rclone based on feedback and what I read online, spent about an hour reading rclone’s documentation, and built a script to do the backups daily.
OP, if you go the rclone route, I can share my template script with you to get you started.
The script is pretty simple: it makes sure there’s a log file created on the system ahead of time, adds timestamps, runs the actual backup job, does error checking, sends a notification via Discord (success or failure), and writes log output to the file created above.
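A sketch of that kind of wrapper script might look like the following. The remote name, source path, and webhook URL are all placeholders; only the `rclone sync` and `--log-file` usage and the Discord webhook JSON shape are real.

```shell
#!/bin/sh
# Sketch: timestamped logging, the rclone job, error checking, and a
# Discord webhook notification. All names/paths/URLs are placeholders.
LOG="${LOG:-/var/log/rclone-backup.log}"
WEBHOOK="${WEBHOOK:-https://discord.com/api/webhooks/CHANGE-ME}"

log() {
    # One timestamped line per event, appended to the log file.
    printf '%s %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$*" >> "$LOG"
}

notify() {
    # Discord webhooks accept a minimal JSON payload.
    curl -fsS -H 'Content-Type: application/json' \
        -d "{\"content\": \"$*\"}" "$WEBHOOK" >/dev/null 2>&1
}

run_backup() {
    log "backup started"
    if rclone sync /srv/data remote:backup --log-file "$LOG"; then
        log "backup finished OK"
        notify "Backup succeeded"
    else
        log "backup FAILED"
        notify "Backup FAILED, check $LOG"
    fi
}

# Only run the job when rclone is actually installed.
command -v rclone >/dev/null 2>&1 && run_backup || echo "rclone not installed; skipping"
```

Run it from a daily cron entry, and the log plus the failure notification together cover the “did it actually run?” question.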
Edit: I forgot to mention that Proxmox recently (I don’t know exactly when) released something called Proxmox Backup Server (PBS). I have not used it, but I imagine it integrates well with your Proxmox cluster. Even then, you may want to look at a complementary solution to back up that server too.
Edit: Even if you go with Proxmox Backup Server, you may want to think about how you back up the backup server itself. Preferably off-site, in my opinion.
I’m running UrBackup; it runs on my thin client server and backs up all Windows machines and itself. But it actually seems quite unreliable.
What makes it unreliable?
For Windows, robocopy on a scheduled task.
For Linux, rsync in cron
And then just copy everything to a share somewhere.
I know you asked for a GUI, but these are literally single-line commands, so they should be very easy to set up.
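On the Linux side that really can be one crontab line; paths here are placeholders. The Windows counterpart would be a `robocopy C:\Users\me \\nas\backup\me /MIR` command run from Task Scheduler.

```
# Nightly at 02:00: mirror the home dir to a mounted network share
0 2 * * * rsync -a --delete /home/user/ /mnt/backup-share/desktop-home/
```

Note that `--delete` (like robocopy’s `/MIR`) makes the copy a mirror, so it protects against disk failure but not against accidental deletions that have already synced; versioned tools like restic or Borg cover that case.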
It has been a while since I used Proxmox, but I seem to recall it having a built-in option to export the VMs to an external host on a periodic cadence? If that still exists, it would solve the configured-system backup issue. More to the point, my preferred method is keeping the payload objects (photos/files) on a separate dedicated storage NAS with RAID and automatic ZFS dataset snapshots, to accommodate both a disk failing and the ‘oh shit, I meant to delete the file, not the whole folder!’ type of loss. For a NAS I use XigmaNAS, from the same lineage as TrueNAS CORE (formerly FreeNAS), largely because it doesn’t try to be too fancy: it just serves the drives and provides some ancillary services around that job.
So, long story short: what exactly are you trying to back up? The pictures, or the service hosting them?
Yeah, Proxmox has a built-in backup utility. I use it for nightly backups of all VMs and LXCs to a CIFS share on my NAS.
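That utility is driven by vzdump under the hood, so the same nightly job can also be expressed as a cron entry if you ever want it outside the GUI. The storage name below is hypothetical; the flags are vzdump’s own.

```
# /etc/cron.d/vzdump-nightly -- back up all guests at 01:00
# (storage name "nas-cifs" is made up; match it to your PVE storage)
0 1 * * * root vzdump --all --storage nas-cifs --mode snapshot --compress zstd
```

`--mode snapshot` backs guests up while they keep running, and zstd keeps the archives reasonably small without much CPU cost.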
I guess I mainly want to make sure the pictures are safe. Besides that I’ll back up my /home/user folder, but otherwise it’s not that hard to rebuild my VMs.
The simplest way is to keep them on a dedicated storage system that you don’t even need to access directly for the most part. If there’s one thing I learned over many years of playing with servers, it’s that the end user/admin is more of a hazard to your data than a system failure ever could be. A RAID1 will automatically protect you if one of the hard drives happens to die, without you thinking about it, but it will just as quickly delete everything on both drives if you run the wrong command.
My nightmare example from personal experience, installing a new pair of drives with the intent to migrate to them.
Install drive ‘b’, rsync -a drive ‘a’ to ‘b’. Wipe ‘a’ for storage/disposal, install a new drive ‘a’ in the original slot. Start a second rsync intended to go from ‘b’ to ‘a’, but forget to swap the arguments and instead sync the new blank ‘a’ over ‘b’, which holds the only copy of your data…
Fortunately I managed to get most everything back with some data recovery tools, but that second after pressing Enter, watching it all go away, was gut-wrenching. Since then I’ve become a lot more careful about keeping a certain level of protection against human error.