How can one configure their Lemmy instance to reject illegal content? And I mean the bad stuff, not just the NSFW stuff. There are some online services that will check images for you, but I’m unsure how they can integrate into Lemmy.
As Lemmy gets more popular, I’m worried nefarious users will post illegal content that I am liable for.
https://github.com/db0/fedi-safety can scan images for CSAM both before and after upload, including novel AI-generated material. If you want pre-upload scanning, you also need to run https://github.com/db0/pictrs-safety alongside your instance. Both need a budget GPU to do the scans, but you can run them on your home PC.
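If you're wondering how it hooks in: newer versions of pict-rs (the image server Lemmy uses) can call out to an external validation URL before accepting an upload, and pictrs-safety is the bridge that sits at that URL and hands images off to the fedi-safety worker. Here's a rough Python sketch of that validation contract; the `/scan` route and the `is_csam` stub are illustrative placeholders, not the real pictrs-safety code, so check the project READMEs for the actual setup:

```python
# Illustrative sketch of the external-validation contract: pict-rs
# POSTs each uploaded image to a configured URL and only accepts the
# upload if it gets a 2xx back. pictrs-safety implements this for
# real; the route path and is_csam below are placeholders.
from flask import Flask, request

app = Flask(__name__)

def is_csam(image_bytes: bytes) -> bool:
    # Stub: in pictrs-safety this is where the image gets handed to
    # the GPU-backed fedi-safety worker. Replace with a real check.
    return False

@app.route("/scan", methods=["POST"])
def scan():
    # pict-rs sends the uploaded image body in the request.
    if is_csam(request.get_data()):
        # Any non-2xx response tells pict-rs to reject the upload.
        return "rejected", 403
    return "ok", 200

if __name__ == "__main__":
    app.run(port=8080)
```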
I am not a lawyer and I don’t play one on the internet.
To my understanding, the only way to prevent it is to control who can have an account on your instance.
That said, it’s not clear to me how federated content is legally considered.
The only other thing I can think of is running a bot on your instance that uses the API of a service like the ones you mention to handle such images; a rough sketch of what that could look like is below.
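Something along these lines, using Lemmy's v3 HTTP API. The login, post list, and post remove endpoints are real (as of the 0.19-era API, where auth goes in a Bearer header), but `SCAN_API` and its `{"flagged": ...}` response shape are placeholders for whatever scanning service you pick:

```python
# Hedged sketch: poll new posts via Lemmy's HTTP API and remove any
# post whose image a third-party scanning service flags. SCAN_API and
# its response format are assumptions; the Lemmy endpoints are real.
import requests

INSTANCE = "https://lemmy.example.org"          # your instance
SCAN_API = "https://scanner.example.com/check"  # hypothetical scanner

def login(user: str, password: str) -> str:
    r = requests.post(f"{INSTANCE}/api/v3/user/login",
                      json={"username_or_email": user, "password": password})
    r.raise_for_status()
    return r.json()["jwt"]

def new_image_posts(jwt: str):
    r = requests.get(f"{INSTANCE}/api/v3/post/list",
                     params={"sort": "New", "limit": 50},
                     headers={"Authorization": f"Bearer {jwt}"})
    r.raise_for_status()
    for view in r.json()["posts"]:
        post = view["post"]
        url = post.get("url") or ""
        if url.endswith((".jpg", ".jpeg", ".png", ".webp", ".gif")):
            yield post["id"], url

def flagged(image_url: str) -> bool:
    # Assumed response shape: {"flagged": true/false}. Check the
    # actual API of whatever scanning service you use.
    r = requests.post(SCAN_API, json={"url": image_url})
    r.raise_for_status()
    return r.json().get("flagged", False)

def remove_post(jwt: str, post_id: int):
    r = requests.post(f"{INSTANCE}/api/v3/post/remove",
                      json={"post_id": post_id, "removed": True,
                            "reason": "automated scan flagged this image"},
                      headers={"Authorization": f"Bearer {jwt}"})
    r.raise_for_status()

if __name__ == "__main__":
    jwt = login("modbot", "change-me")
    for post_id, url in new_image_posts(jwt):
        if flagged(url):
            remove_post(jwt, post_id)
```

Note the bot's account needs mod/admin rights for the remove call, and a real version would track which posts it has already checked instead of rescanning the same ones every poll.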
Your post is the first one I've seen recently that even describes the liability issue, but in my opinion it's the single biggest concern in the fediverse, and it's why I've never hosted my own instance.
There’s no such integration that I’m aware of. We rely on users reporting CSAM and such.
That’s not true: https://github.com/db0/pictrs-safety & https://github.com/db0/fedi-safety
Oh cool.