cross-posted from: https://lemmy.dbzer0.com/post/4500908

In the past months, there’s been an issue on various instances where accounts would start uploading blatant CSAM to popular communities. First of all, this traumatizes anyone who sees it before the admins get to it, including the admins who have to review it in order to take it down. Second, even if the content is a link to an external site, lemmy still caches the thumbnail and stores it in the local pict-rs, causing headaches for the admins who have to somehow clear it out. Finally, both image posts and problematic thumbnails are federated to other lemmy instances, which likewise store them in their own pict-rs image storage.

This has driven multiple instances to take radical measures, from defederating liberally, to stopping image uploads entirely, to shutting down.

Today I’m happy to announce that I’ve spent multiple days developing a tool you can plug into your instance to stop this at the source: pictrs-safety

Using a new feature in pict-rs 0.4.3, we can now have pict-rs call an arbitrary endpoint to validate the content of an image before accepting the upload. pictrs-safety provides that endpoint, using an asynchronous approach to validate such images.
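
As a sketch of what such a validation endpoint looks like, here is a minimal stand-in using only Python’s standard library. The request/response contract assumed here (raw image bytes in a POST, 2xx to accept, 4xx to reject) is an illustration, not the real API; check the pictrs-safety readme for the actual contract.

```python
# Hypothetical minimal validation endpoint: pict-rs POSTs the image,
# and the HTTP status code tells it whether to keep or reject it.
from http.server import BaseHTTPRequestHandler, HTTPServer


def is_image_safe(image_bytes: bytes) -> bool:
    """Stand-in for the real scan (fedi-safety runs this on a GPU)."""
    return len(image_bytes) > 0  # placeholder verdict: accept any non-empty upload


class ValidationHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        # 200 tells pict-rs to proceed; 403 makes it discard the upload.
        self.send_response(200 if is_image_safe(body) else 403)
        self.end_headers()


# To serve it:
# HTTPServer(("127.0.0.1", 14051), ValidationHandler).serve_forever()
```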

I had already developed fedi-safety, which can be used to regularly go through your image storage and delete all potential CSAM. I have now extended fedi-safety to plug into pictrs-safety and scan images sent by pict-rs.

The end effect is that any image uploaded to or federated into your instance will be scanned in advance, and if fedi-safety thinks it is potential CSAM, it will never reach your image storage at all!

This covers three important vectors for abuse:

  • Malicious users cannot upload CSAM for trolling communities, even novel generative-AI CSAM.
  • Users cannot upload CSAM images without ever submitting a post or comment (which would make them invisible to admins). The images are automatically rejected during upload.
  • Federated CSAM images and thumbnails will be rejected by your pict-rs.

Now, that said, this tool is AI-driven and thus not perfect. There will be false positives, especially around lewd images and images which contain children or child-related topics (even if not lewd). This is the bargain we have to accept to prevent the bigger problem above.

By my napkin calculations, the false positive rate is below 1%, but someone’s innocent meme will certainly be affected eventually. If this happens, I ask that you just move on, as we currently don’t have a way to whitelist specific images. Don’t try to resize or modify the images to pass the filter; it won’t help you.
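
To put that napkin figure in perspective, here is the arithmetic with hypothetical numbers (the upload volume is made up; only the sub-1% rate comes from the post):

```python
# Illustrative arithmetic only: the <1% rate is the post's estimate,
# and the upload volume is a made-up mid-sized instance.
fp_rate = 0.01           # assumed worst-case false-positive rate
uploads_per_day = 500    # hypothetical daily image uploads
false_positives = fp_rate * uploads_per_day
print(false_positives)   # 5.0 innocent images wrongly rejected per day
```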

For lemmy admins:

  • pictrs-safety contains a docker-compose sample you can add to your lemmy’s docker-compose. You will need to put the .env in the same folder, or adjust the provided variables. (All kudos to @Penguincoder@beehaw.org for the docker support.)
  • You need to adjust your pict-rs environment variables as well. Check the readme.
  • fedi-safety must run on a system with a GPU. The reason is that lemmy provides just a 10-second grace period for each upload before it times out the upload regardless of the results, and a CPU scan will not be fast enough. However, my architecture allows fedi-safety to run in a different place than pictrs-safety; I am currently running it from my desktop. In fact, if you have a lot of images to scan, you can connect multiple scanning workers to pictrs-safety!
  • For those who don’t have access to a GPU, I am working on an NSFW scanner which will use the AI Horde directly instead and won’t require fedi-safety at all. Stay tuned.
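
The multi-worker setup above can be pictured as a job queue that several scanners drain in parallel. The sketch below only illustrates the pattern; the real pictrs-safety and fedi-safety communicate over HTTP, and all names here are made up.

```python
# Sketch: pictrs-safety queues pending images, and any number of
# scanning workers (fedi-safety instances) pull jobs off the queue.
import queue
import threading

scan_queue: queue.Queue = queue.Queue()   # jobs: (job_id, image_bytes)
results = {}                              # job_id -> passed the scan?


def scan(image: bytes) -> bool:
    """Stand-in for the GPU scan; flags a marker byte string instead."""
    return not image.startswith(b"BAD")


def worker() -> None:
    while True:
        try:
            job_id, image = scan_queue.get(timeout=0.5)
        except queue.Empty:
            return  # queue drained; a real worker would keep polling
        results[job_id] = scan(image)
        scan_queue.task_done()


for i, img in enumerate([b"cat.jpg", b"BAD.png", b"meme.webp"]):
    scan_queue.put((i, img))

workers = [threading.Thread(target=worker) for _ in range(2)]
for t in workers:
    t.start()
scan_queue.join()   # in production, lemmy's 10-second deadline caps this wait
for t in workers:
    t.join()
print(results)      # {0: True, 1: False, 2: True} (key order may vary)
```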

For other fediverse software admins:

fedi-safety can already be used to scan your image storage for CSAM, so you can also protect yourself and your users, even on mastodon or firefish or whatever.

I will try to provide real-time scanning for each software in the future as well, and PRs are welcome.

Divisions by zero

This tool is already active on divisions by zero. Its usage should be transparent to you, but do let me know if you notice anything wrong.

Support

If you appreciate the priority work that I’ve put into this tool, please consider supporting this and future development work on liberapay:

https://liberapay.com/db0/

All my work is and will always be FOSS and available for all who need it most.

    • shagie@programming.dev

Reddit didn’t try to make it. It’s a free service from providers such as Cloudflare and Google (Reddit uses the Google one).

    • veroxii@aussie.zone

To be fair, there are now off-the-shelf AI solutions available which were simply impossible 10 or even 5 years ago.

  • cwagner@lemmy.cwagner.me

    For those who don’t have access to a GPU, I am working on a NSFW-scanner which will use the AI-Horde directly instead and won’t require using fedi-safety at all. Stay tuned.

    Damn, that sounds great, my tiny ARM server would probably struggle even with only caching thumbnails.

    In the interest of creating as little load as possible for the eventual AI Horde cluster, will there be an option to only check federated images? I’m close to 100% certain that I will never upload CSAM, so images that I (the only user on my instance) upload won’t need to be checked; those are also far bigger than federated thumbnails.

    A third thought I just had: distributing CSAM is about as bad for everyone involved, and hotlinked images in comments are not uploaded to pict-rs but are just as problematic. Any plan to integrate with lemmy directly and check those as well, removing the post if triggered?

    Finally: Thank you for your work on this!

    • db0@lemmy.dbzer0.comOP

      In the interest of creating as little load as possible for the eventual AI Horde cluster, will there be an option to only check federated images?

      That would depend on lemmy and pict-rs devs providing such classification. If it exists, I can support it.

      Any plan to integrate with lemmy directly and check those as well, removing the post if triggered?

      That might be more load than your worker can serve. But this is theoretically already possible using pythorhead and parsing every incoming comment for image links, like an automoderator. You don’t need pictrs-safety for this.
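
      The link-parsing half of that automoderator idea is straightforward. The sketch below only extracts image URLs from a comment body; it assumes pythorhead (not shown) supplies the stream of incoming comments.

      ```python
      import re

      # Catches direct links to common image formats, whether bare or inside
      # a markdown image like ![alt](https://example.com/a.png).
      IMAGE_LINK = re.compile(r"https?://\S+?\.(?:png|jpe?g|gif|webp)\b", re.IGNORECASE)


      def extract_image_links(comment_body: str) -> list:
          """Return every hotlinked image URL found in a comment."""
          return IMAGE_LINK.findall(comment_body)


      links = extract_image_links(
          "look: ![meme](https://files.example/meme.png) and https://cdn.example/pic.jpg"
      )
      print(links)  # ['https://files.example/meme.png', 'https://cdn.example/pic.jpg']
      ```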

  • ABluManOnLemmy@feddit.nl

    Be careful with this, though. I think I remember some jurisdictions require server owners not to delete CSAM but to report it instead. Verify that you aren’t obligated to keep it before deleting it.

    • HTTP_404_NotFound@lemmyonline.com

      not upload CSAM to for trolling communities. Even novel GenerativeAI CSAM. Users cannot upload CSAM images and never submit a post or comment (making them invisible to admins). The images will be automatically rejected during upload

      There wouldn’t be anything to delete, as it would never have been saved with this.

      • shagie@programming.dev

        https://www.law.cornell.edu/uscode/text/18/2258A

        (a) Duty To Report.—
        (1) In general.—
        (A) Duty.—In order to reduce the proliferation of online child sexual exploitation and to prevent the online sexual exploitation of children, a provider—
        (i) shall, as soon as reasonably possible after obtaining actual knowledge of any facts or circumstances described in paragraph (2)(A), take the actions described in subparagraph (B); and
        (ii) may, after obtaining actual knowledge of any facts or circumstances described in paragraph (2)(B), take the actions described in subparagraph (B).
        (B) Actions described.—The actions described in this subparagraph are—
        (i) providing to the CyberTipline of NCMEC, or any successor to the CyberTipline operated by NCMEC, the mailing address, telephone number, facsimile number, electronic mailing address of, and individual point of contact for, such provider; and
        (ii) making a report of such facts or circumstances to the CyberTipline, or any successor to the CyberTipline operated by NCMEC.

        (e) Failure To Report.—A provider that knowingly and willfully fails to make a report required under subsection (a)(1) shall be fined—
        (1) in the case of an initial knowing and willful failure to make a report, not more than $150,000; and
        (2) in the case of any second or subsequent knowing and willful failure to make a report, not more than $300,000.

        Check with a lawyer whether blocking an upload that your server has access to, because of suspected CSAM, constitutes “actual knowledge of any facts or circumstances”.

        • HTTP_404_NotFound@lemmyonline.com

          HEY LOCAL PD OFFICE,

          SOMEONE TRIED TO UPLOAD SOME POTENTIALLY CHILD PORN TO MY LEMMY INSTANCE.

          No… I don’t have an IP for who uploaded it.

          Sorry, I don’t know where it came from. It just got federated across the fediverse to me.

          No… I don’t have the content either, it doesn’t get saved.

          Sorry… I guess I really don’t have any details at all for you.

      • Obinice@lemmy.world

        So the image never touches the server side, even in RAM, it always remains only on the client machine, and it’s checked there?

        If so, then this could be a pretty neat, tidy way to deal with this issue; otherwise the image is on the server, even if you “delete it real fast”, and I imagine you’d still need to be in compliance with the law regarding saving and reporting it.

        • Deiv@lemmy.ca

          Did you read the post? The image is sent to an endpoint that has a hosted AI solution that checks it

          It 100% touches the server, it’s just not stored anywhere and gets blocked

    • explodicle@local106.com

      Does that leave open a possible attack, in which the attacker can just fill up the server’s hard drive with AI-generated CSAM?

      • ABluManOnLemmy@feddit.nl

        I think that if, in good faith, the person is unable to accept more CSAM due to the fact that their hard drive is full, there isn’t an issue. The intent of the law is that, if someone knows something is CSAM, they need to report it. I don’t think the government is going to come down hard on Lemmy server owners unwittingly receiving CSAM through federation (though it would certainly want them to report and take down the CSAM on their servers).

    • deFrisselle@lemmy.sdf.org

      It’s not getting uploaded, so there’s nothing to keep.
      That’s the point: the kiddy porn never hits the server. There might be an argument for the scanner cache to be saved for later reporting to authorities. That is assuming the scanner also logs the account, IP, time, etc. of the upload.

  • ram@feddit.nl

    I’m curious. How do you train such AI without being raided by the authorities?

    • Callie@pawb.social

      Best guess would be using the database that sites like tumblr and twitter use to automatically identify known CSAM. I’m not sure how it works, I just know that sites use a database to quickly shut down any uploads

    • Deepus@lemm.ee

      Offload all the work to an anonymous VPS provider, possibly? I dunno, just spitballing.

  • Anonymousllama@lemmy.world

    Looks like a really good solution to the problem; even a false-positive rate of 1% seems like a small trade-off considering the amount of spam and rubbish posted.

  • redcalcium@lemmy.institute

    This is very cool! Too bad I don’t have access to a VPS with a GPU to try it at the moment.

    Is it possible to offer this as a service with a small monthly fee (e.g. on-demand pricing depending on how many images you scan) or donations, so instance owners without a GPU can use it?

    • db0@lemmy.dbzer0.comOP

      I’d love to, but there are legal concerns about the transfer of potential CSAM to third-party services which I’d rather not think about.

  • joshuaacasey@lemmy.world

    disappointed that this uses AI instead of something like Microsoft’s PhotoDNA, which compares image hashes. AI has too much (unnecessary and unacceptable) risk of false positives, which results in overbroad censorship.

    • db0@lemmy.dbzer0.comOP

      PhotoDNA requires a lot more bureaucratic work than most instance admins can handle, but if you really want it, you can easily plug it into pictrs-safety instead.

      However, PhotoDNA will not catch novel generative-AI CSAM.

      • joshuaacasey@lemmy.world

        there’s no such thing as “AI-generated CSAM”. CSAM literally is created by abusing a real human child. There’s no such thing as an “AI child”. It would be a much better idea to protect *ACTUAL existing children instead of wasting resources on *checks notes* fiction

        • db0@lemmy.dbzer0.comOP

          You, and especially your users, won’t know whether a photorealistic generative AI image is real or not.

          • joshuaacasey@lemmy.world

            so you admit that like most people you don’t actually give a shit about protecting anyone. You would rather protect imaginary fake fictional characters because it’s easier and makes you “feel good about yourself”. I genuinely hate performative assholes (which is 99% of humans, let’s be honest. 99% of people only care about their feelings and making themselves feel good by thinking that they are doing something good, not actually doing a good thing). There’s no evidence that fictional material is harmful; in fact, quite the opposite: there is some evidence that access to fictional material may actually protect kids and prevent abuse from occurring, by serving as a harmless sexual outlet. I mean, let’s put it this way: go ask a victim of sexual abuse “If you had a choice, would you prefer that your abuser abused you or that your abuser relieved their pent-up sexual frustration to some fictional material?” I guarantee 100% of them will say that they would prefer to have not been abused.

        • glue_snorter@lemmy.sdfeu.org

          I think you were merely being pedantic, but there are some interesting points in there.

          Is it a crime to generate fake “csam”?

          Should it be a crime?

          How can prosecutors get convictions against a defense of “no, your honour, that video is AI-generated”?

          What we have now is still miles off general AI, but it’s going to take years for society to catch up. Interesting times.

        • Omega_Haxors@lemmy.ml

          Ah the kinds of comments I quit reddit to no longer see…

          Well on the bright side, at least they get downvoted here.

    • hitagi@ani.social

      Microsoft’s PhotoDNA

      My issue with these services is that they aren’t available to non-US people. db0’s project can be deployed anywhere (provided you have a capable GPU).

          • droans@lemmy.world

            It’s available to every Cloudflare user, US or global.

            PhotoDNA is also available for every website in the world.

            • hitagi@ani.social

              It isn’t. I already tried applying for both. You need NCMEC credentials, which are only available to those in the US.

              edit: Here’s a comment I made about it.

    • xeddyx@lemmy.nz

      I don’t see the problem here. What makes you think that the false positives in this case are “unacceptable”? So what if Joe Bloggs isn’t able to share a picture of a random kid (why, though?) or an image of a child-like person?

      • joshuaacasey@lemmy.world

        false positives not only lead to unnecessary censorship, but also waste resources that would be better used to protect *ACTUAL* victims and children (although the optimal solution is protecting people before any harm is done, so that we don’t even need these “band-aid” solutions for reacting afterward)

        • 👁️👄👁️@lemm.ee

          Unnecessary censorship is fine when it’s clearly an underage person. You don’t need to check their ID to tell it’s CSAM, and the same goes for generated child content. If you want to debate its legality, that’s a different conversation, but even an AI-generated version is enough to mentally scar the viewer, so there is still harm being done.

          • joshuaacasey@lemmy.world

            an imaginary person has no ID because an imaginary person doesn’t exist. Why do you care about protecting imaginary characters instead of protecting actual real existing human beings?

        • xeddyx@lemmy.nz

          Again, what you’re saying isn’t relevant to Lemmy at all. Please elaborate how would a graphics card on some random server help protect actual victims?

      • droans@lemmy.world

        PhotoDNA isn’t run by Microsoft anymore, but by the International Centre for Missing and Exploited Children.

      • glue_snorter@lemmy.sdfeu.org

        My friend, you haven’t heard about Oracle.

        Microsoft at least gave the world Powershell, to balance out their sins. I can also name other good things they have done. Oracle is pure and deliberate evil.

        I believe that the human race will end in one of three ways:

        • asteroid strike
        • disease
        • Oracle

  • muntedcrocodile@lemmy.world

    As you said, this may have to be the bargain of the fediverse. I think a democratic process for the training of said AI might give the best outcomes here.