As technology advances and computers become increasingly capable, the line between human and bot activity on social media platforms like Lemmy is becoming blurred.

What are your thoughts on this matter? How do you think social media platforms, particularly Lemmy, should handle advanced bots in the future?

    • Docus@lemmy.world · +21/-1 · 2 months ago

      It’s not just the internet. For example, students are handing in essays straight from ChatGPT. University scanners flag it, and the students may fail. But there is no good evidence on either side: the university-side detection is unreliable (and unlikely to improve on false positives, or negatives for that matter), and it’s hard for a student to prove they did not use an LLM. Job seekers send in LLM-generated letters. Consultants probably give LLM-based reports to clients. We’re doomed.

          • GBU_28@lemm.ee · +7/-2 · 2 months ago

            You can still have extra allotted time, or be provided a wiped computer or tablet. Colleges dealt with these disabilities before LLMs.

        • Docus@lemmy.world · +3 · 2 months ago

          I don’t disagree, but it’s probably not that easy. Universities in my country don’t have the resources anymore to run many oral exams, and depending on the subject, exams don’t test the same skills as coursework.

  • simple@lemm.ee · +25 · 2 months ago

    Not even the biggest tech companies have an answer sadly… There are bots everywhere and social media is failing to stop them. The only reason there aren’t more bots in the Fediverse is because we’re not a big enough target for them to care (though we do have occasional bot spam).

    I guess the plan is to wait until there’s an actual way to detect bots and deal with them.

    • rglullis@communick.news · +12 · 2 months ago

      Not even the biggest tech companies have an answer sadly…

      They do have an answer: add friction. Add paywalls, require proof of identity, start using client-signed certificates that need to be validated by a trusted party, etc.

      Their problem is that these answers affect their bottom line.

      I think (hope?) we’ll actually get to the point where bots become so ubiquitous that the whole internet turns into some kind of Dark Forest and people are forced to learn how to deal with technology properly.
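
      For illustration, here is a minimal sketch of the client-certificate idea using Python’s standard ssl module: a server that only completes the TLS handshake for clients presenting a certificate signed by a CA it trusts. The file names and port are hypothetical, and this is just one shape that kind of friction could take, not anything Lemmy actually implements.

      ```python
      # Sketch: a TLS server that requires client certificates signed by a trusted CA.
      # File names ("trusted_ca.pem", "server_cert.pem", "server_key.pem") are hypothetical.
      import socket
      import ssl

      context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
      context.load_cert_chain(certfile="server_cert.pem", keyfile="server_key.pem")
      # Require every client to present a certificate...
      context.verify_mode = ssl.CERT_REQUIRED
      # ...and only accept certificates issued by this trusted party (CA).
      context.load_verify_locations(cafile="trusted_ca.pem")

      with socket.create_server(("0.0.0.0", 8443)) as server:
          with context.wrap_socket(server, server_side=True) as tls_server:
              # The handshake fails for clients that cannot present a valid certificate.
              conn, addr = tls_server.accept()
              print("verified client:", conn.getpeercert().get("subject"))
              conn.close()
      ```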

      • simple@lemm.ee · +8 · 2 months ago

        Their problem is that these answers affect their bottom line.

        It’s more complicated than that. Adding friction and paywalls would quickly kill their userbase; requiring a proof of identity or tracking users is a privacy disaster and I’m sure many people (especially here) would outright refuse to give IDs to companies.

        Those are more of a compromise than a real solution. Even then, they’re probably not foolproof, and bots will still get through.

        • rglullis@communick.news · +4 · 2 months ago

          requiring a proof of identity or tracking users is a privacy disaster and I’m sure many people (especially here) would outright refuse to give IDs to companies.

          The Blockchain/web3/Cypherpunk crowd has already developed solutions for that. ZK-proofs allow you to confirm one’s identity without having to reveal it publicly, and make it impossible to correlate one proof with another.

          Add other things like reputation systems based on a Web of Trust, and we can go a long way toward getting rid of bots, or at least making them as harmless as email spam is nowadays.
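
          To make the Web-of-Trust part concrete (the ZK-proof part would need real cryptographic libraries), here is a rough sketch: trust starts at a few manually vetted seed accounts and decays by half at each hop along “vouch” edges, so accounts nobody vouches for end up with scores near zero. All account names and numbers are invented.

          ```python
          # Sketch: Web-of-Trust style reputation. Trust starts at a few vetted "seed"
          # accounts and decays by half at every hop along vouch edges.
          from collections import deque

          vouches = {                      # who vouches for whom (hypothetical data)
              "alice": ["bob", "carol"],
              "bob": ["dave"],
              "carol": ["dave", "eve"],
              "dave": [],
              "eve": ["spambot_1234"],
              "spambot_1234": ["spambot_5678"],
              "spambot_5678": [],
          }

          def trust_scores(seeds, decay=0.5):
              scores = {account: 0.0 for account in vouches}
              queue = deque((seed, 1.0) for seed in seeds)
              while queue:
                  account, score = queue.popleft()
                  if score <= scores[account]:
                      continue                      # already reached via a stronger path
                  scores[account] = score
                  for vouched in vouches.get(account, []):
                      queue.append((vouched, score * decay))
              return scores

          print(trust_scores(seeds=["alice"]))
          # Accounts far from any trusted seed (e.g. spambot_5678) end up with tiny scores,
          # so their votes/posts could be weighted down or held for review.
          ```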

            • FaceDeer@fedia.io · +4 · 2 months ago

            It’s unfortunate that there’s such a powerful knee-jerk prejudice against blockchain technology these days that perfectly good solutions are sitting right there in front of us but can’t be used because they have an association with the dreaded scarlet letters “NFT.”

              • atrielienz@lemmy.world · +3 · 2 months ago

                I don’t like or trust NFTs and, honestly, I don’t think anybody else should for the most part. I feel the same about a lot of new crypto. But I don’t necessarily distrust blockchain because of that. I think it has its own set of problems, in that wherever the record is kept matters and therefore becomes a target. We already have problems with leaks of PII. Any blockchain database that stores the data needed to ID people will be a target too.

            • ericjmorey@discuss.online · +3 · 2 months ago

            ZK-proofs

              This is a solution in the same way that PGP keys are a solution. There’s a big gulf between the theory and the implementation.

              • rglullis@communick.news · +1 · edited · 2 months ago

              Right, but the problem with them is “bad usability”, which amounts to “friction”.

                Like I said in the original comment, I kinda believe that things will get so bad that we will eventually have to accept that the internet can only be used with these tools, and that “the market” will start focusing on building tools to lower these barriers to entry, instead of making its profits from Surveillance Capitalism.

  • douglasg14b@lemmy.world · +14 · edited · 2 months ago

    Lemmy has no capability to handle non-advanced bots from yesteryear.

    It’s most definitely not capable of handling bots today and is absolutely unprepared for handling bots tomorrow.

    The Fediverse is honestly just waiting to become popular enough to be turned into bot slop with no controls.

      • Rikudou_Sage@lemmings.world · +2 · 1 month ago

        Yes. But at least with the admin group I’m part of, it’s dealt with fairly quickly, because we employ automated tools to help fight the spam.

      • douglasg14b@lemmy.world · +1 · edited · 2 months ago

        You can’t really tell.

        It’s also a very very VERY small platform compared to other social media platforms like Reddit. (I had another comment where I calculated this but it’s ridiculously small)

        It is unlikely that it would see anywhere near the same level of dedicated bot activity due to the low return on invested effort.

        This is a problem that will become greater once the value of astroturfing and shifting opinion on Lemmy is high enough.

    • Fizz@lemmy.nz · +1 · 2 months ago

      The Fediverse has the advantage of being able to control its size. If 10 million people join Lemmy tomorrow, most of them go to lemmy.world, and lemmy.world users then start causing trouble, that instance gets defederated.

      Other than that, we only have human moderation, which can be overwhelmed.

      • Rikudou_Sage@lemmings.world · +2 · 1 month ago

        We also have auto moderators. The recent spam wave didn’t occur on my instance at all. But my Matrix notification channel sure did explode with messages of bots being banned.

  • Blaze (he/him)@feddit.org · +14 · 2 months ago

    I saw a comment the other day saying that “the line between the most advanced bot and the least talkative human is getting thinner and thinner”.

    Which made me think: what if bots are set up to pretend to be actual users? With a fake life that they could talk about, fake anecdotes, fake hobbies, fake jokes, but everything would seem legit and consistent. That would be pretty weird, and probably impossible to detect.

    And then when that roleplaying bot once in a while recommends a product, you would probably trust it; after all, it gave you advice for your cat last week.

    Not sure what to do in that scenario, really

    • snooggums@lemmy.world · +18 · 2 months ago

      I’ve just accepted that if a bot interaction has the same impact on me as someone who is making up a fictional backstory, I’m not really worried whether it is a bot or not. A bot shilling for Musk or a person shilling for Musk because they bought the hype are basically the same thing.

      In my opinion the main problem with bots is not individual accounts pretending to be people, but the damage they can do en masse through a firehose of spam posts and comments, and by manipulating engagement mechanics like up/down votes. At that point there is no need for an individual account to be convincing, because it is lost in the sea of trash.

      • poVoq@slrpnk.net · +9 · 2 months ago

        Even more problematic are entire communities made up of astroturfing bots. This kind of stuff is increasingly easy and cheap to set up and will fool most people looking for advice online.

        • Danterious@lemmy.dbzer0.com · +2 · edited · 2 months ago

          Maybe we should look for ways of tracking coordinated behaviour. A definition I’ve heard for social media propaganda is “coordinated inauthentic behaviour”, and while I don’t think it’s possible to determine whether a user is being authentic or not, it should be possible to see whether there is consistent behaviour between different kinds of users and what they are coordinating on.

          Edit: Because all bots do have a purpose eventually, and that should be visible.

          Edit 2: Eww, realized the term came from Meta. If someone has a better term I will use that instead.

          Anti Commercial-AI license (CC BY-NC-SA 4.0)
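
          As a very crude sketch of what tracking that coordination could look like: compare which posts each account has voted on and flag pairs whose histories overlap suspiciously. The accounts and the threshold below are invented, and real detection would also have to weigh timing, content, and much larger volumes.

          ```python
          # Sketch: flag pairs of accounts whose vote histories overlap suspiciously
          # (Jaccard similarity of the sets of posts they voted on). All data is invented.
          from itertools import combinations

          votes = {
              "user_a": {"post1", "post2", "post3", "post9"},
              "user_b": {"post4", "post7"},
              "bot_1":  {"post1", "post2", "post3", "post5", "post6"},
              "bot_2":  {"post1", "post2", "post3", "post5", "post6"},
              "bot_3":  {"post2", "post3", "post5", "post6"},
          }

          def jaccard(a, b):
              return len(a & b) / len(a | b)

          SUSPICIOUS = 0.8  # hypothetical threshold

          for (name1, v1), (name2, v2) in combinations(votes.items(), 2):
              similarity = jaccard(v1, v2)
              if similarity >= SUSPICIOUS:
                  print(f"possible coordination: {name1} and {name2} ({similarity:.0%} overlap)")
          ```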

        • drkt@lemmy.dbzer0.com · +2 · 2 months ago

          I am convinced that the bidet shills on reddit are bots. There’s just no way that hundreds of thousands of people are suddenly interested in shitting appliances.

      • douglasg14b@lemmy.world · +4 · edited · 2 months ago

        You’re missing the big impact here, which is that bots can shift public opinion en masse, and that affects you directly.

        Gone are the days when individuals formed their own opinions; today, opinions are just absorbed by osmosis through social media.

        And if social media is essentially just a message bought by whoever can pay for the biggest bot farm, then anyone who thinks for themselves and wants to push back immediately becomes the enemy of everyone else.

        This is not a future that you want.

        • snooggums@lemmy.world · +3 · 2 months ago

          I didn’t miss it, since my entire post is about manipulation and the second paragraph is about scale.

      • ericjmorey@discuss.online · +2 · 2 months ago

        A bot shilling for Musk or a person shilling for Musk because they bought the hype are basically the same thing.

        It’s the scale that changes. One bot can be replicated much more easily than a human shill.

  • jordanlund@lemmy.world · +9/-2 · 2 months ago

    I think smarter people than me will have to figure it out, and even then it’s going to be a war of escalation. Ban the bots, build better bots, back and forth, back and forth.

    Some news sites had an interesting take on comments sections. Before you could comment on an article, you had to correctly answer a 5 question quiz proving you actually read it.

    But AI can do that now too.
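
    As a toy sketch of that kind of comment gate (the questions and answers are invented placeholders, and, as noted, an LLM could pass it just as easily):

    ```python
    # Sketch: gate the comment form behind a short quiz about the article.
    # Questions and answers are invented placeholders.
    QUIZ = [
        ("What is the article mainly about?", "bots"),
        ("Which platform does it discuss?", "lemmy"),
    ]

    def allow_comment(answers):
        correct = sum(
            given.strip().lower() == expected
            for (_question, expected), given in zip(QUIZ, answers)
        )
        return correct == len(QUIZ)   # every answer must be right to unlock the form

    print(allow_comment(["Bots", "Lemmy"]))   # True: reader may comment
    print(allow_comment(["Cats", "Reddit"]))  # False: back to the article
    ```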

    • Blaze (he/him)@feddit.org · +11 · 2 months ago

      Some news sites had an interesting take on comments sections. Before you could comment on an article, you had to correctly answer a 5 question quiz proving you actually read it.

      It would be interesting to try that on Lemmy for a day. People would probably not be happy.

      • subignition@piefed.social · +3 · 2 months ago

        As divisive as it would be, I think that would be a good thing overall…

        It reminds me of the literacy test to use Kingdom of Loathing’s chat features.

    • FaceDeer@fedia.io · +4 · 2 months ago

      Not only can AI do that, it probably does it far better than a human would.

      I like XKCD’s solution. Aside from the fact that it would heavily reinforce whatever bubble each community lived in, of course.

  • AmidFuror@fedia.io · +6 · 2 months ago

    To manage advanced bots, platforms like Lemmy should:

    • Verification: Implement robust account verification and clearly label bot accounts.
    • Behavioral Analysis: Use algorithms to identify bot-like behavior.
    • User Reporting: Enable easy reporting of suspected bots by users.
    • Rate Limiting: Limit posting frequency to reduce spam (see the sketch after this list).
    • Content Moderation: Enhance tools to detect and manage bot-generated content.
    • User Education: Provide resources to help users recognize bots.
    • Adaptive Policies: Regularly update policies to counter evolving bot tactics.

    These strategies can help maintain a healthier online community.
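
    For the rate-limiting item, a token bucket is one common mechanism: each account gets a small burst allowance that refills slowly over time. The capacity and refill rate below are hypothetical, not Lemmy’s actual settings.

    ```python
    # Sketch: a per-account token bucket, one common way to cap posting frequency.
    # Capacity and refill rate are hypothetical, not Lemmy's real settings.
    import time

    class TokenBucket:
        def __init__(self, capacity=5, refill_per_second=0.1):
            self.capacity = capacity                    # burst size: up to 5 posts at once
            self.refill_per_second = refill_per_second  # then roughly 1 post every 10 seconds
            self.tokens = float(capacity)
            self.last_refill = time.monotonic()

        def allow(self):
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last_refill) * self.refill_per_second)
            self.last_refill = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False          # too many posts: reject or queue for review

    buckets = {}                  # one bucket per account
    def try_post(account):
        bucket = buckets.setdefault(account, TokenBucket())
        return bucket.allow()

    print([try_post("possible_bot") for _ in range(8)])  # first 5 True, then False
    ```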

    • kbal@fedia.io · +5 · 2 months ago

      Did an AI write that, or are you a human with an uncanny ability to imitate their style?

      • AmidFuror@fedia.io · +5/-1 · 2 months ago

        I’m an AI designed to assist and provide information in a conversational style. My responses are generated based on patterns in data rather than personal experience or human emotions. If you have more questions or need clarification on any topic, feel free to ask!

          • ChatGPT@lemmings.world [bot] · +1 · 1 month ago

            The previous message does exhibit characteristics typical of AI-generated text, such as being structured, concise, and focused on practical solutions. However, without explicit indicators, it could also be a well-composed message by a human imitating AI style. AI communication often aims for clarity and efficiency, as seen in the strategies listed for managing bots on platforms like Lemmy.

            • AmidFuror@fedia.io · +1 · 1 month ago

              I like how the response is basically that it could be AI because it is well written, but sometimes humans write things well too.

              • ChatGPT@lemmings.world [bot] · +1 · 1 month ago

                Exactly! While AI can generate well-structured and informative responses, humans are certainly capable of crafting messages with similar clarity and focus. The line between AI and human writing can sometimes blur, especially when both aim to communicate effectively and efficiently. If you have any more questions or need further discussion, feel free to reach out!

    • Ademir@lemmy.eco.br · +3/-1 · 2 months ago

      Verification: Implement robust account verification and clearly label bot accounts.

      ☑ Clear label for bot accounts
      ☑ 3 different levels of captcha verification (I use the intermediate level on my instance and rarely deal with any bots)

      Behavioral Analysis: Use algorithms to identify bot-like behavior.

      Profiling algorithms seem like something people are running away from when they choose Fediverse platforms; this kind of solution has to be very well thought out and communicated.

      User Reporting: Enable easy reporting of suspected bots by users.

      ☑ Reporting in Lemmy is just as easy as anywhere else.

      Rate Limiting: Limit posting frequency to reduce spam.

      ☑ Like this?

      [image]

      Content Moderation: Enhance tools to detect and manage bot-generated content.

      What do you suggest other than profiling accounts?

      User Education: Provide resources to help users recognize bots.

      This is not up to the Lemmy development team.

      Adaptive Policies: Regularly update policies to counter evolving bot tactics.

      Idem.

      • douglasg14b@lemmy.world · +3/-1 · edited · 2 months ago

        Mhm, I love dismissive “Look, it already works, and there’s nothing to improve” comments.

        Lemmy lacks significant capabilities to effectively handle the bots from 10+ years ago. Never mind bots today.

        The controls that are implemented are based on “classic” bot concerns from nearly a decade ago. And even then, they’re shallow and only “kind of” effective. They wouldn’t have been considered effective for a social media platform in 2014, and they are nowhere near capable today.

    • GBU_28@lemm.ee · +2/-2 · edited · 2 months ago

      Many communities already outlaw calling someone a bot, and any algorithm to detect bots would just be an arms race.

  • disguised_doge@kbin.earth · +4 · 2 months ago

    There was already a wave of bots identified, iirc. They were only identified because:

    1. The bots had random letters for usernames.

    2. The bots did nothing but downvote, instantly downvoting every post by specific people who held specific opinions.

    It turned into a flamewar; by the time I learned about it, I think the mods had deleted a lot of the discussion. But, like on the big tech platforms, the plan for bots is likely going to be “oh crap, we have no idea how to solve this issue.” I don’t mean to diss the admins, bots are just a pain in the ass to stop.

  • TimLovesTech (AuDHD)(he/him)@badatbeing.social · +2 · 2 months ago

    For commercial services like Twitter or Reddit, the bots make sense because they let the platforms show inflated “user” numbers while also generating more random nonsense to sell ads against.

    But for the Fediverse, what would the goal be, post random stuff into the void and profit?? I guess you could long-game some users into a product that they only research on the Fediverse, but it seems more cost-effective for the botnets to attack the commercial networks first.

      • TimLovesTech (AuDHD)(he/him)@badatbeing.social · +1 · 1 month ago

        Has someone posted such an argument, or do you see yourself, in the future, getting into an argument with someone on here who takes the side of “alternative facts” and letting that change your mind? If not, then it’s likely just someone downvoted to the bottom whom people will ignore anyway, and not worth the time to post. I think something like Facebook works better for these types of things, as the population is generally older and more likely to see and reshare just about any nonsense, true or not.

        Because I personally don’t see the Fediverse as a great medium for trying to bring people into the cult, and bringing people out of the cult is even less likely to work online, Fediverse or not.

        • distantsounds@lemmy.world · +1 · 1 month ago

          In general I believe all online communities are toxic when it comes to political discussion and just enable cult behavior. I think both Facebook and the Fediverse have the ability to sway opinions, but to different degrees.

          Facebook is simple and easy to use, and because of that it’s widely adopted and you make connections through people you (kinda sorta) know irl. This leads to a false sense of security and can poison your bubble of connections.

          With vote manipulation, whitewash communities, brigading, bots, and general anonymity, the Fediverse is not any better equipped to deal with “alternative facts.” Being more niche and less user-friendly weeds out some people, and you are left with a user base that has slightly more education and decision-making skills… but no one is completely immune to manipulative tactics. Bots with agendas are not always easy to identify and keep getting more refined. It’s easy to lose track of the push & pull if you are chronically online… which many Fediverse users are.

          I don’t have any solutions other than attempting to educate people on how to spot misinformation and approach ideas critically. Even doing that, it’s far from 100% effective, and that number keeps declining with age.

  • Dizzy Devil Ducky@lemm.ee · +1 · edited · 2 months ago

    As far as I’m aware, there are no such measures implemented. I’ve got no idea, because I’m not smart enough for this type of thing. The only solution I could think of is to implement a paywall (I know, disgusting) to raise the barrier to entry and try to keep bots out. That, and (I don’t know if it’s currently possible) making it so only people on your instance can comment, vote, and report posts on that instance.

    I personally feel that, depending on the price of joining, that could slightly lessen the bot problem for that specific instance, since getting banned means you wasted money instead of just time. Though it might also keep the instance from growing.