Hello everyone,

In a day or two, I am getting a motherboard with an N100 integrated CPU as a replacement for my Raspberry Pi 4 (2 GB model). I want to run Jellyfin, the *arr stack, and Immich on it. However, I have a lot of photos (for Immich) and movies (for Jellyfin), about 400 GB in total, that I want to back up in case something happens. I have two 1 TB drives: one will hold the original files, and the second will be my boot drive and hold the backup files.

How can I do that? Just copy the files? Do I need to compress them first? What tools do I need to use, and how would you do it?

Thanks in advance.

EDIT: I forgot to mention that I would prefer the backups to be local.

  • Scrubbles@poptalk.scrubbles.tech · 9 months ago

    rclone is my go-to for backups I run regularly. It’s very nice and scriptable.

    rsync might be what you’re looking for, a bit more verbose and… determined? for a large job like that.
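    For a one-off local mirror, either tool boils down to a single command. A minimal sketch, assuming hypothetical mount points (adjust `/mnt/media` and `/mnt/backup/media` to wherever your drives actually live):

    ```shell
    # Hypothetical mount points -- adjust to your setup.
    SRC=/mnt/media          # drive with the original files
    DST=/mnt/backup/media   # backup drive

    # rsync: -a preserves permissions/timestamps/links, -v is verbose,
    # --delete mirrors deletions (omit it if you want to keep removed files)
    rsync -av --delete "$SRC/" "$DST/"

    # rclone equivalent: sync makes DST identical to SRC
    rclone sync "$SRC" "$DST" --progress
    ```

    Note the trailing slash on the rsync source: it copies the *contents* of the directory rather than the directory itself.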

  • mlaga97@lemmy.mlaga97.space · 9 months ago

    Restic and borg are both sorta considered ‘standard’ for doing incremental backups beyond filesystem snapshotting.

    I use restic and it automatically handles stuff like snapshotting, compression, deduplication, and encryption for you.
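    For reference, a minimal restic workflow on a local disk might look like this (the repository path and password handling here are assumptions; restic can also prompt for the passphrase interactively):

    ```shell
    # Hypothetical repository location on the backup drive
    export RESTIC_REPOSITORY=/mnt/backup/restic-repo
    export RESTIC_PASSWORD='pick-a-strong-passphrase'   # or use --password-file

    restic init                  # one-time: create the encrypted repository
    restic backup /mnt/media     # each run adds a deduplicated snapshot
    restic snapshots             # list the snapshots you have
    restic check                 # verify repository integrity
    ```

    Repeated `restic backup` runs only store data that changed, so snapshots stay cheap.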

      • lemmyvore@feddit.nl · 9 months ago

        If you literally mean one time then rsync is fine-ish… if you combine it with a checksum tool so you can verify it copied everything properly.
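        The verification pass the comment suggests can be done with rsync itself or with a separate checksum tool. A sketch, with hypothetical paths:

        ```shell
        # Dry-run checksum comparison: -c compares file contents by checksum
        # rather than size/mtime, -n only reports differences without copying.
        rsync -rcn --out-format="DIFFERS: %n" /mnt/media/ /mnt/backup/media/

        # Alternative: generate checksums on the source, verify on the copy
        (cd /mnt/media && find . -type f -exec sha256sum {} + > /tmp/sums.txt)
        (cd /mnt/backup/media && sha256sum -c /tmp/sums.txt --quiet)
        ```

        No output from either command means the copy matches.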

        If you need to back up regularly then you need something that can do deduplication, error checking, compression, and probably encryption too. Rsync won’t cut it, unless you mean to cover each of those points with a different tool. But there are tools like borg that can do all of them.
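        As a sketch, a borg setup covering all four of those points on a local drive could look like this (repository path, archive naming, and retention settings are assumptions):

        ```shell
        # One-time: create an encrypted repository on the backup drive
        borg init --encryption=repokey /mnt/backup/borg-repo

        # Each backup is a deduplicated archive; zstd is a good default
        borg create --stats --compression zstd \
            /mnt/backup/borg-repo::'media-{now:%Y-%m-%d}' /mnt/media

        # Error checking: verify repository and archive consistency
        borg check /mnt/backup/borg-repo

        # Keep a bounded history of old archives
        borg prune --keep-daily 7 --keep-weekly 4 /mnt/backup/borg-repo
        ```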

  • r0ertel@lemmy.world · 9 months ago

    Restic and Borg seem to be the current favorites, but I really like the power and flexibility of Duplicity. I like that I can push to a wide variety of back ends (I’m using the rsync backend), it can do synchronous or asynchronous encryption, and I like that it can do incrementals with timed full backups. I don’t like that it keeps a local cache of index files.
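    A rough duplicity invocation for a local target (the `file://` URL, retention window, and restore path are assumptions; duplicity encrypts with GPG by default, so set `PASSPHRASE` or pass `--no-encryption`):

    ```shell
    # Incremental backup, forcing a fresh full backup once a month
    duplicity --full-if-older-than 1M /mnt/media file:///mnt/backup/duplicity

    # Restore a single directory to a new location (flag name varies by
    # version; older releases call this --file-to-restore)
    duplicity restore --path-to-restore photos/2023 \
        file:///mnt/backup/duplicity /tmp/restored-photos

    # Drop backup chains older than six months
    duplicity remove-older-than 6M --force file:///mnt/backup/duplicity
    ```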

    I back up to a Pi 0 with a big local disk and rsync the whole disk to another Pi at a relative’s house over tailscale. I’ve never needed the remote, but it’s there.

    I’ve had to do a single directory restore once and it was pretty easy. I was able to restore to a new directory and move only the files that I clobbered.

  • themachine@lemmy.world · 9 months ago

    I prefer restic for my backups. There’s nothing inherently wrong with just making a copy if that is sufficient for you, though. Restic creates small point-in-time snapshots, as compared to just a file copy. So in the event that you made a mistake, accidentally deleted something from the “live” copy, and managed to propagate that to your backup, it’s a non-issue: you simply restore from a previous snapshot.

    These snapshots can also be compressed and deduplicated making them extremely space efficient.
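    The restore path described above might look like this with restic (repository path and include filter are hypothetical):

    ```shell
    export RESTIC_REPOSITORY=/mnt/backup/restic-repo

    # See what snapshots exist, then restore one to a fresh directory
    restic snapshots
    restic restore latest --target /tmp/restore

    # Or pull back only the files you clobbered
    restic restore latest --target /tmp/restore --include /mnt/media/photos

    # Deduplication means repo size grows far slower than snapshot count
    restic stats --mode raw-data
    ```

    Restic recreates the original absolute paths under the `--target` directory, so you can inspect before moving files back.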

  • CaptDust@sh.itjust.works · 9 months ago

    I use rsync with a systemd timer. When I first installed the backup drive it took a while to build the file system, but now every Monday it runs, finds the difference between source and target drive, and pulls just the changes down for backup. It’s pretty quick, doesn’t do any compression or anything like that.
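    A sketch of that systemd timer setup; the unit names, schedule, and paths here are hypothetical:

    ```shell
    # Create these two files with your editor of choice.

    # /etc/systemd/system/media-backup.service:
    #   [Unit]
    #   Description=Mirror media drive to backup drive
    #
    #   [Service]
    #   Type=oneshot
    #   ExecStart=/usr/bin/rsync -a --delete /mnt/media/ /mnt/backup/media/

    # /etc/systemd/system/media-backup.timer:
    #   [Unit]
    #   Description=Run media backup every Monday morning
    #
    #   [Timer]
    #   OnCalendar=Mon *-*-* 03:00:00
    #   Persistent=true
    #
    #   [Install]
    #   WantedBy=timers.target

    # Enable the timer (not the service):
    sudo systemctl daemon-reload
    sudo systemctl enable --now media-backup.timer
    systemctl list-timers media-backup.timer   # confirm the next run time
    ```

    `Persistent=true` makes a missed run fire at the next boot if the machine was off on Monday.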

  • Decronym@lemmy.decronym.xyzB · edited · 9 months ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters  More Letters
    LXC            Linux Containers
    RAID           Redundant Array of Independent Disks for mass storage
    SSD            Solid State Drive mass storage

    3 acronyms in this thread; the most compressed thread commented on today has 12 acronyms.

    [Thread #804 for this sub, first seen 15th Jun 2024, 12:25]

  • Andrzej@lemmy.myserv.one · 9 months ago

    If you have the RAM for it, I would recommend going the Proxmox route. I made the switch this year, and now running daily container image backups is a doddle.

      • Andrzej@lemmy.myserv.one · 9 months ago

        Sure. The hardware is a cheap little Beelink with an N100 and 16 GB of RAM. Proxmox can do VMs, but is primarily focused on LXCs, which are Linux containers. They share the kernel with the host, so they’re very lightweight: you can spin up basically as many (say) Debian systems as you want. So I have Jellyfin on one container, Sonarr/Radarr on another (though you could put them on separate containers if you wanted), transmission has a container, SABnzbd has a co- … you get the idea lol.

        The cool thing is that it’s easy to mount drives/directories from the host, and have your containers share them that way.

        Wrt backups, Proxmox has some built-in functionality you can run from the web UI. So I back up images of the LXCs to the external hard drive daily, then have a borg container that backs up the backup directory to cloud storage.
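        Under the hood that built-in functionality is vzdump; a rough CLI equivalent, where the container ID, storage name, and dump filename are all hypothetical:

        ```shell
        # Back up LXC 101 to a Proxmox storage named "backupdisk";
        # snapshot mode avoids stopping the container during the dump
        vzdump 101 --storage backupdisk --mode snapshot --compress zstd

        # Restore an LXC dump into a new container ID
        pct restore 102 /mnt/backup/dump/vzdump-lxc-101-2024_06_15-03_00_00.tar.zst
        ```

        The same schedule can be configured in the web UI under Datacenter → Backup.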

        It’s also very convenient to make a quick backup before making any changes to a container — you can restore to a previous image with the click of a button.