What is everyone doing? SELinux? AppArmor? Something else?
I currently leave my Nextcloud exposed to the Internet. It runs in a VM behind an nginx reverse proxy on the VM itself, and then my OPNsense router runs nginx with WAF rules. I enforce 2FA and don’t allow sign-ups.
My goal is protecting against ransomware and zero-days (as much as possible). I don’t click random links in emails or anything like that, but I’m not sure how people get hit with ransomware. I keep Nextcloud updated promptly (I’m subscribed to the RSS update feed), the VM updates every day and reboots when necessary, and I’m running the latest php-fpm, which comes from the repos, so it gets updated too. HTTPS on the LAN with certificates maintained by my router, and LE certs for the Internet side.
Besides hiding this thing behind a VPN (which I’m not prepared to do currently), is there anything else I’m overlooking?
For protection against ransomware you need backups. Ideally ones that are append-only, so the history is preserved.
Good call. I do some backups now but I should formalize that process. Any recommendations on self-hosted packages that can handle the append-only functionality?
I use and love Kopia for all my backups: local, LAN, and cloud.
Kopia creates snapshots of the files and directories you designate, then encrypts these snapshots before they leave your computer, and finally uploads these encrypted snapshots to cloud/network/local storage called a repository. Snapshots are maintained as a set of historical point-in-time records based on policies that you define.
Kopia uses content-addressable storage for snapshots, which has many benefits:
Each snapshot is always incremental. This means that all data is uploaded once to the repository based on file content, and a file is only re-uploaded to the repository if the file is modified. Kopia uses file splitting based on rolling hash, which allows efficient handling of changes to very large files: any file that gets modified is efficiently snapshotted by only uploading the changed parts and not the entire file.
Multiple copies of the same file will be stored once. This is known as deduplication and saves you a lot of storage space (i.e., saves you money).
After moving or renaming even large files, Kopia can recognize that they have the same content and won’t need to upload them again.
Multiple users or computers can share the same repository: if different users have the same files, the files are uploaded only once as Kopia deduplicates content across the entire repository.
There are a ton of other great features, but those are the ones most relevant to what you asked.
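If it helps to see it concretely, the day-to-day workflow is only a handful of commands. This is just a sketch with made-up paths and retention numbers:

```
# Create an encrypted repository (filesystem shown here; B2/S3/SFTP backends also exist)
kopia repository create filesystem --path /mnt/backup/kopia-repo

# Retention policy for the data you care about
kopia policy set /srv/nextcloud/data \
  --keep-latest 10 --keep-daily 14 --keep-weekly 8 --keep-monthly 12

# Take a snapshot (incremental and deduplicated automatically)
kopia snapshot create /srv/nextcloud/data

# List snapshots and restore one somewhere safe to verify it
kopia snapshot list /srv/nextcloud/data
kopia snapshot restore <snapshot-id> /tmp/restore-test
```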
I’ve used rclone with Backblaze B2 very successfully. rclone is easy to configure and can encrypt everything locally before uploading, and B2 is dirt cheap and has retention policies, so I can easily manage (per bucket) how long deleted/changed files should be retained. It works well.
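For anyone curious, the setup is roughly the following; the bucket, remote names, and paths are made up, and the crypt passwords have to be generated via rclone config / rclone obscure:

```
# ~/.config/rclone/rclone.conf (sketch)
cat > ~/.config/rclone/rclone.conf <<'EOF'
[b2]
type = b2
account = YOUR_B2_KEY_ID
key = YOUR_B2_APPLICATION_KEY

[b2crypt]
type = crypt
remote = b2:my-backup-bucket/nextcloud
password = OBSCURED_PASSWORD
password2 = OBSCURED_SALT
EOF

# Encrypts locally, then uploads only what changed
rclone sync /srv/nextcloud/data b2crypt: --transfers 8 --fast-list

# Pull something back down now and then to prove restores work
rclone copy b2crypt:some-dir /tmp/restore-test
```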
Also, once you get something set up, make sure to test-run a restore! A backup solution is only good if you make sure it works :)
As a person who used to be “the backup guy” at a company, truer words are rarely spoken. Always test the backups, otherwise it’s an exercise in futility.
No, I’d actually be interested in that myself. I currently just rsync to another server.
Borg Backup has an append-only mode.
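If it’s useful, the usual way to enforce that is from the backup host’s side, by pinning the client’s SSH key to an append-only borg serve. A sketch with placeholder paths/keys:

```
# On the backup server: ~/.ssh/authorized_keys entry for the backup user
# Every connection with this key is forced into an append-only, path-restricted borg serve
command="borg serve --append-only --restrict-to-path /srv/borg/nextcloud",restrict ssh-ed25519 AAAA... client@host

# On the client: normal borg usage; deletes/prunes over this key won't actually free space
borg init --encryption=repokey-blake2 ssh://backup@backuphost/srv/borg/nextcloud
borg create ssh://backup@backuphost/srv/borg/nextcloud::{now} /srv/nextcloud/data
```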
Restic can do append-only when you use its REST server, rest-server (easily deployed in a Docker container).
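Something along these lines; the OPTIONS environment variable is how I recall the official image passing extra flags to rest-server, so double-check its docs:

```
# Run rest-server in append-only mode (names and paths are placeholders)
docker run -d --name restic-rest \
  -p 8000:8000 \
  -v /srv/restic:/data \
  -e OPTIONS="--append-only" \
  restic/rest-server

# Point restic at it
restic -r rest:http://backuphost:8000/nextcloud init
restic -r rest:http://backuphost:8000/nextcloud backup /srv/nextcloud/data
```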
Nextcloud isn’t exposed; only a WireGuard connection allows remote access to Nextcloud on my network.
The whole family has WireGuard on their laptops and phones.
They love it, because using WireGuard also means they get a by-default ad-free/tracker-free browsing experience.
Yes, this means I can’t share files securely with outsiders. It’s not a huge problem.
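For anyone wanting to copy this, a stripped-down client config looks something like the below; the keys, addresses, and resolver IP are placeholders, and the DNS line (pointing at an ad-blocking resolver on the LAN) is what gives the phones the ad/tracker-free browsing by default:

```
# Client config for a phone/laptop (sketch)
cat > wg0-client.conf <<'EOF'
[Interface]
PrivateKey = CLIENT_PRIVATE_KEY
Address = 10.8.0.2/32
# Hand out the LAN ad-blocking resolver -> ad/tracker-free by default
DNS = 10.0.0.2

[Peer]
PublicKey = SERVER_PUBLIC_KEY
Endpoint = vpn.example.com:51820
# VPN subnet + LAN; use 0.0.0.0/0 to tunnel everything
AllowedIPs = 10.8.0.0/24, 10.0.0.0/24
PersistentKeepalive = 25
EOF
```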
Tailscale has a feature called Funnel that lets you share a resource over Tailscale with users who don’t have Tailscale.
I wonder if WireGuard has something similar (Tailscale uses WireGuard under the hood).
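As far as I know, plain WireGuard has nothing equivalent built in; Funnel is a Tailscale-layer feature. On recent Tailscale versions the CLI looks roughly like this, though the serve/funnel syntax has changed between releases, so treat it as a sketch:

```
# Expose a local service to your own tailnet over HTTPS
tailscale serve --bg 8080

# Open the same service to the public internet via Funnel
# (Funnel has to be allowed for the node in the admin console/ACLs first)
tailscale funnel --bg 8080
```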
Neat, I’ll have to look it up. Thanks for sharing!
WireGuard is awesome and doesn’t even show up in the battery usage statistics of my phone.
With such a small attack surface I don’t have to worry about zero-days for Vaultwarden and Immich.
Not only for Nextcloud: I recommend setting up CrowdSec for any publicly facing service. You’d be surprised by the number of bots and script kiddies out there trying their luck…
One of my next steps was hardening my OPNsense router, since it handles all the edge-network reverse proxy duties, so an IDS was on the list. I’m digging into CrowdSec now; it looks like there’s a plugin for OPNsense. Thanks for the tip!
How is this different from Fail2Ban?
IIRC, CrowdSec is like Fail2Ban but also blocks IPs reported by other servers, not just the ones attacking your server. Kind of like a distributed Fail2Ban, I guess?
Neat
My recollection is that Fail2Ban has some default settings but is mostly reactive, blacklisting things it observes trying to get in. CrowdSec behaves in a similar vein but, as the name implies, includes a lot of crowdsourced rules and preventative measures.
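For reference, on a plain Linux box running nginx the moving parts look roughly like this (OPNsense has its own plugin UI for the same thing); package and collection names are from CrowdSec’s hub, but double-check against their docs:

```
# Install the agent plus a firewall bouncer from CrowdSec's repo
apt install crowdsec crowdsec-firewall-bouncer-iptables

# Pull parsers/scenarios for the services you actually expose
cscli collections install crowdsecurity/nginx
cscli collections install crowdsecurity/http-cve
systemctl reload crowdsec

# See what's being blocked, locally and via the community blocklist
cscli decisions list
cscli metrics
```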
Make sure your backups are solid and can’t be deleted or altered.
In addition to normal backups, something like ZFS snapshots also helps and makes it easier to restore if needed (a quick sketch is at the end of this comment).
I think I remember seeing a Nextcloud plugin that detects mass changes to a lot of files (like ransomware would cause). Maybe something like that would help?
Also enforce good passwords.
Do you have anything exposed to the internet that also has access to either nextcloud or the server it’s running on? If so, lock that down as much as possible too.
Fail2Ban or similar would help against brute-force attacks.
The VM you’re running Nextcloud on should be as isolated as you can comfortably make it. E.g., if you have a camera/IoT VLAN, don’t let the VM talk to it, and don’t let it initiate outbound connections to any of your other devices, etc.
You can’t entirely protect against zero day vulnerabilities, but you can do a lot to limit the risk and blast radius.
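On the ZFS point above: snapshots are read-only, so ransomware that encrypts files in place can’t touch the history, and recovery is quick. A rough sketch, assuming a dataset named tank/nextcloud mounted at /tank/nextcloud:

```
# Take a snapshot (cheap, instant, read-only)
zfs snapshot tank/nextcloud@daily-2024-01-01

# See what exists
zfs list -t snapshot -r tank/nextcloud

# Recover a single file without rolling back everything
cp /tank/nextcloud/.zfs/snapshot/daily-2024-01-01/important.file /tank/nextcloud/

# Or roll the whole dataset back after an incident
# (-r destroys any snapshots newer than the one you roll back to)
zfs rollback -r tank/nextcloud@daily-2024-01-01
```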
All the measures you listed amount to nothing against a zero-day remote exploit, since those bypass the normal authentication process.
If you’re not able to use a VPN, then use an IAM layer that requires you to log in through another method. You can put a dedicated app like Authelia/Authentik in front of the reverse proxy, or if you use nginx as the reverse proxy you also have the option of using vouch-proxy via the auth_request module.
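Both of those end up in front of Nextcloud via nginx’s auth_request: nginx asks the auth service whether the session is valid before proxying anything upstream. A bare-bones sketch; the verify endpoint, hostnames, and ports depend on your Authelia version and setup, so check its docs:

```
# Dropped into the Nextcloud server block via include (sketch; Authelia on 127.0.0.1:9091)
cat > /etc/nginx/snippets/authelia.conf <<'EOF'
location /internal/authelia/ {
    internal;
    # Legacy verify endpoint; newer Authelia releases use /api/authz/...
    proxy_pass http://127.0.0.1:9091/api/verify;
    proxy_set_header X-Original-URL $scheme://$http_host$request_uri;
    proxy_set_header X-Forwarded-For $remote_addr;
}

location / {
    auth_request /internal/authelia/;
    # Unauthenticated users get bounced to the Authelia portal
    error_page 401 =302 https://auth.example.com/?rd=$scheme://$http_host$request_uri;
    # Nextcloud upstream
    proxy_pass http://127.0.0.1:8080;
}
EOF
```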
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
CA: (SSL) Certificate Authority
DNS: Domain Name Service/System
HTTP: Hypertext Transfer Protocol, the Web
HTTPS: HTTP over SSL
PiHole: Network-wide ad-blocker (DNS sinkhole)
SSL: Secure Sockets Layer, for transparent encryption
TLS: Transport Layer Security, supersedes SSL
VPN: Virtual Private Network
nginx: Popular HTTP server
I’ve had my Nextcloud exposed for a long while now without any incidents (that I know of). I know automatic updates are not generally recommended, but if you want a lighter load you could use LSIO’s Docker container (I use the standard db from the sample config). I run mine that way with Watchtower and can’t recall any recent update breaking Nextcloud. Other than that, Nextcloud has a brute-force protection app, and you could consider hardening the entry points of the machine hosting Nextcloud overall (e.g. SSH).
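In case it helps anyone, that approach in compose form is roughly the following; tags, ports, and volume paths are whatever you prefer, and this is a sketch rather than the full sample config:

```
cat > docker-compose.yml <<'EOF'
services:
  nextcloud:
    image: lscr.io/linuxserver/nextcloud:latest
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - ./config:/config
      - ./data:/data
    ports:
      - "8443:443"
    restart: unless-stopped

  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: --cleanup --interval 86400
    restart: unless-stopped
EOF
docker compose up -d
```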
Yikes! I’d avoid leaving any services externally exposed unless they’re absolutely necessary…
Tailscale+Headscale are pretty easy to implement these days. Since it’s effectively zero trust, the tunnels become the encrypted channel, so there’s an argument that HTTPS isn’t really required unless some endpoints won’t be accessing services over the tailnet. SmallStep and Caddy can be used to automatically manage certs if needed, though.
You can even configure a PiHole (or derivative) to be your DNS server on the VPN, giving you ad blocking on the go.
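For anyone following along, the client side is just pointing tailscale at the Headscale instance and accepting the DNS it pushes (which is where the PiHole comes in). Hostnames and names here are made up, and the headscale subcommands have shifted a bit between releases:

```
# On each device: join the self-hosted control server instead of Tailscale's SaaS
tailscale up --login-server https://headscale.example.com --accept-dns --accept-routes

# On the Headscale host: create a user and a pre-auth key for enrolling devices
headscale users create family
headscale preauthkeys create --user family --expiration 24h

# The PiHole is then advertised as the tailnet resolver via the DNS section of Headscale's config file
```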
there’s an argument that HTTPS isn’t really required…
Tailscale is awesome, but you gotta remember that Tailscale itself is one of those services (yikes). Like all applications, it’s potentially susceptible to vulnerabilities and exploits, so don’t fall into the trap of thinking that anything in your private network is safe because it’s only available through the VPN. “Defence in depth” is a thing, and you have nothing to lose by treating your services as though they were public and having multiple layers of security.
The other thing to keep in mind is that HTTPS is not just about encryption/confidentiality but also about authenticity/integrity/non-repudiation. A cert tells you that you are actually connecting to the service you think you are and that it’s not being impersonated via a man-in-the-middle, DNS hijack, ARP poisoning, etc.
If you’re going to the effort of hosting your own services anyway, might as well go to the effort of securing them too.
Tailscale is one of those services…
Tailscale isn’t an exposed service. Headscale is, and it isn’t connected to the Tailnet. It’s a control server used to communicate public keys and connectivity information between nodes. Sure, a threat actor can join nodes to the Tailnet should it become compromised. But have you looked at Headscale’s codebase? The attack surface is significantly smaller than anything like OpenVPN.
A cert tells you that you are actually…
I’m all for SSL/TLS, but it’s more work and may not always be worth the effort depending on the application, which is exactly why I recommended SmallStep+Caddy. Let’s not pretend that introducing something like a private CA doesn’t add complexity and overhead, even if it’s just distributing the root cert to devices.
MITM/DNS Hijack/ARP Poisoning…
Are you suggesting that these attack techniques are effective against zero-trust tunnels? Given that the keys are exchanged out of band, via the control channel, how would one intercept and replay the traffic?
Tailscale isn’t an exposed service. Headscale is
Absolutely! And it’s a great system that I thoroughly recommend. The attack surface is very small but not non-existent. There have been RCEs via things like DNS rebinding (CVE-2022-41924) in the past, and although I’m not suggesting that it’s in any way vulnerable to that kind of thing now, or that it even affected most users, we don’t know what will happen in the future. Trusting a single point of failure with no defence in depth is not ideal.
it’s more work and may not always be worth the effort
I don’t really buy this. Certs have been free and easy to deploy for a long time now. It’s not much more effort than setting up whatever service you want to run, as well as Headscale/Tailscale and whatever other fun services you’re running. Especially when stuff like Caddy exists.
I recommended SmallStep+Caddy.
Yes! Do this if you don’t want your certs signed by a public CA for some reason. I’m only advocating against not using certs at all. (There’s a quick Caddyfile sketch at the end of this comment.)
Are you suggesting that these attack techniques are effective against zero trust tunnels
No, I’m talking about defence in depth. If Tailscale is compromised (or totally bypassed by someone war-driving your WiFi or something), then all those services are free to be impersonated by a threat actor pivoting into the local network after an initial compromise. Don’t assume that something is perfectly safe just because it’s air-gapped, let alone merely available via a tunnel.
I feel like it’s a bit like leaving all your doors unlocked because there’s a big padlock on the fence. If someone has a way to jump the fence or break the lock, you don’t want them to have free rein after that point.
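On the Caddy point above: to show how little effort it is either way, a Caddyfile does publicly trusted certs or a purely internal CA in a couple of lines (hostnames and upstream ports are placeholders):

```
cat > Caddyfile <<'EOF'
# Public name: automatic ACME/Let's Encrypt cert with auto-renewal
nextcloud.example.com {
    reverse_proxy 127.0.0.1:8080
}

# LAN-only name: let Caddy's built-in CA issue and manage the cert
nextcloud.lan {
    tls internal
    reverse_proxy 127.0.0.1:8080
}
EOF
```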
My claim is that Headscale has a lower likelihood of compromise than Nextcloud, and that the E2EE provides an encrypted channel between nodes without an immediate need for TLS. Of course TLS over E2EE enhances CIA (confidentiality, integrity, availability). There’s no pushback to defense in depth here. But in the beginning, the E2EE will get them moving in the right direction.
OP began the post by stating that the login page of a complex PHP web application is internet-facing (again, yikes). Given the current implementation, I can only assume that OP is not prepared to deploy a CA, and that the path of least resistance – and bolstered security – is to implement Headscale+Tailscale. They get the benefit of E2EE without the added complexity of a CA (of which there is plenty) until if/when they’re ready to take the plunge.
If we’re going to take this nonsense all or nothing stance, don’t forget to mention that they’re doing poorly unless they implement EDR, IDS, TOTP MFA on all services, myriad DNS controls, and full disk encryption. Because those components don’t add to the attack surface as well, right?
Totally agree on all points!
My only issue was with the assertion that OP could comfortably do away with the certs/HTTPS. They said in the post that they’re already using certs, and I wanted to dispel the idea that they arguably might not need them anymore in favour of just using Headscale, as though one were a replacement for the other.
I would move it into Docker, as that will give you an extra layer of security and simplify updates.
From there, make sure you have backups that aren’t easily deleted. Additionally, make sure your reverse proxy is set up correctly and implements proper security.
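On the reverse proxy point, the usual nginx baseline is something like the snippet below; the values are a starting point rather than gospel, and Nextcloud’s admin “Security & setup warnings” page will flag anything it thinks is missing:

```
# Include from the http{} context (limit_req_zone has to live there); sketch only
cat > /etc/nginx/snippets/hardening.conf <<'EOF'
add_header Strict-Transport-Security "max-age=15552000; includeSubDomains" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header Referrer-Policy "no-referrer" always;

# Brute-force damping: pair this zone with "limit_req zone=nc_login burst=10;" in the login location
limit_req_zone $binary_remote_addr zone=nc_login:10m rate=10r/m;
EOF
```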