This thought entered my mind today when I came across a thread on Quora and noticed that they have added a feature where ChatGPT has a go at answering the question.

Today alone I have used several different “AI” tools, including one which automatically paraphrases text for you, one which analyses your writing in SwiftKey as you type, and of course the big players like Bard and Bing Chat. It got me thinking about whether these features are actually valuable, and whether we will start to see them on this platform.

  • Neuromancer@lemmy.ml

    No. I come here to interact with people. One of my favorite features here is the setting that lets me hide all bot accounts.

    • Lobstronomosity@lemmy.mlOP

      What do you think it adds to a keyboard (SwiftKey)? The feature is there whether we like it or not. The question is whether you think such a thing should be on the site, and in what form, not whether one user considers it useful.

      One thing it could do is analyse the perceived tone of your comment and suggest edits so it comes across as less aggressive, or more inquisitive, for example. Something like that could well make certain social spaces more pleasant to use.
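
      Nothing like this exists on Lemmy today, but as a rough illustration, a tone check could be as simple as running a draft comment through an off-the-shelf sentiment classifier before posting. A minimal sketch in Python, assuming the Hugging Face transformers library and its default sentiment model (the threshold and the wording of the hint are made up):

      ```python
      # Rough sketch of a pre-post "tone check"; assumes the Hugging Face
      # `transformers` library and its default sentiment model (illustrative only).
      from transformers import pipeline

      classifier = pipeline("sentiment-analysis")

      def tone_hint(draft: str):
          """Return a gentle hint if the draft reads as strongly negative."""
          result = classifier(draft)[0]  # e.g. {"label": "NEGATIVE", "score": 0.98}
          if result["label"] == "NEGATIVE" and result["score"] > 0.9:
              return "This might come across as aggressive - consider rephrasing?"
          return None

      print(tone_hint("This is the worst idea I have ever read."))
      ```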

      • Kichae@kbin.social

        One thing that it has added to keyboards is making it much more difficult to use purposefully creative spelling or grammar.

        For every problem they fix, they restrain creativity, because they themselves are constrained by their training set.

        The Internet already has a problem with tone policing. Maybe we don’t need mathematical models burning the planet to tell me that I’m pissed off.

        If people want a moderated space, they can request some moderators.

        Also, you didn’t answer my question. Examples of what can be done are not the same as answering why a thing should be done.

  • _NetNomad @ DXC@forum.dxcomplex.com

    i think there are still way too many issues, both functional and ethical, with large language models (“ai”) for them to be deployed at the scale they are right now. for me the fediverse is a very welcome reprieve from corporate fad-chasing

  • darkfoe@lemmy.serverfail.party

    Closest thing I’ve seen that I’ve liked is in Google Chat workspaces: a quick summary of what people have said since my last visit to the thread. Would make a nifty plug-in/add-on.
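
    For what it’s worth, a catch-up summary like that is easy to prototype outside of Google’s stack. A minimal sketch, assuming the Hugging Face transformers summarisation pipeline (the thread contents and cut-off index are just illustrative):

    ```python
    # Minimal "what did I miss" sketch; assumes the Hugging Face `transformers`
    # library. The thread contents below are made up for illustration.
    from transformers import pipeline

    summarizer = pipeline("summarization")

    def catch_up(messages, last_seen_index):
        """Summarise everything posted after the reader's last visit."""
        unread = messages[last_seen_index:]
        if not unread:
            return "Nothing new since your last visit."
        summary = summarizer("\n".join(unread), max_length=60, min_length=10, do_sample=False)
        return summary[0]["summary_text"]

    thread = [
        "Alice: the deploy failed again last night",
        "Bob: looks like the config change broke the build",
        "Alice: reverting it now, will redeploy in the morning",
    ]
    print(catch_up(thread, last_seen_index=1))
    ```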

    • Lobstronomosity@lemmy.mlOP

      That does seem a useful feature.

      I’m stuck in Teams / Microsoft’s universe. I wonder how they will try and shoehorn Bing Chat into my work.

      • darkfoe@lemmy.serverfail.party

        My guess is by doing this. I think they’re testing things out with Outlook too, from what I’ve heard from some old coworkers in the Microsoft space. Google only added it about two or three weeks ago, so it’s still a pretty new feature.

  • Cosmiiko@lemmy.ml

    I don’t think so. Integrating with existing AI services raises direct privacy concerns for me.
    A solution would be a self-hosted model, but that could cost a lot in machine resources for instance owners, especially on bigger instances (rough sketch of what that setup would involve below).

    Even then I’m having a hard time thinking of a useful use case for an LLM integration on Lemmy (or even social media in general).
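
    To put the self-hosted option mentioned above in concrete terms: the instance would run its own inference server and the backend would only ever talk to localhost, so no data leaves the owner’s hardware. A rough sketch, assuming a hypothetical local server that speaks an OpenAI-style chat completions API (the URL, model name, and payload shape are placeholders, not a real setup):

    ```python
    # Rough sketch: the instance calls a locally hosted model rather than a cloud API.
    # The URL, model name, and payload shape assume an OpenAI-compatible local server
    # (e.g. something like llama.cpp's server) and are placeholders only.
    import requests

    LOCAL_LLM_URL = "http://localhost:8080/v1/chat/completions"  # placeholder

    def ask_local_model(prompt: str) -> str:
        payload = {
            "model": "local-model",  # placeholder name
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": 200,
        }
        resp = requests.post(LOCAL_LLM_URL, json=payload, timeout=60)
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]

    # Nothing leaves the instance owner's hardware, but they pay for the compute.
    print(ask_local_model("Summarise this comment thread in two sentences."))
    ```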

  • ganymede@lemmy.ml

    Which ML model? Is it proprietary? Which service would handle the requests? Which servers would do the computation? Etc.

    Most here are seeking freedom away from the corporate tech monsters, not to give them a seat at the table.

  • Thomas@lemmy.douwes.co.uk

    Might be interesting to make a read-only instance like r/SubredditSimulator that only has bot accounts, maybe with an open-source LLM hosted on the server with the instance.
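
    A bot-only instance like that essentially boils down to a loop: generate some text with a small open-source model, then hand it to the instance’s posting API. A minimal sketch, where `post_to_instance` is a hypothetical stand-in for a real Lemmy API client and the model name is only an example:

    ```python
    # Sketch of a SubredditSimulator-style bot: generate text with a small
    # open-source model, then hand it off for posting. `post_to_instance` is a
    # hypothetical stand-in for a real Lemmy API client; "gpt2" is just an example.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    def generate_post(topic: str) -> str:
        prompt = f"Write a short forum post about {topic}:\n"
        out = generator(prompt, max_new_tokens=80, do_sample=True)[0]["generated_text"]
        return out[len(prompt):].strip()

    def post_to_instance(title: str, body: str) -> None:
        # Hypothetical: a real bot would authenticate and call the instance's API here.
        print(f"[bot post] {title}\n{body}\n")

    for topic in ["federation", "self-hosting", "open source keyboards"]:
        post_to_instance(title=f"Thoughts on {topic}", body=generate_post(topic))
    ```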

    • Lobstronomosity@lemmy.mlOP

      Great idea, this is the sort of thing I was thinking of.

      I find it strange that this post has become so heavily downvoted; I guess some people would rather not think about it at all than consider the possibilities.

      • Thomas@lemmy.douwes.co.uk

        Probably best to keep it in a corner, though. Even if they weren’t all generative AI, Reddit got so flooded with repost/LLM bots that it became annoying to read comments on the big subs.
        I don’t want them to flood Lemmy too. It’s nice to know all the content here is human generated. And I’m sure even the people training these models get annoyed by the training data getting filled with AI output.

  • SavvyWolf@beehaw.org

    The massive push towards everything integrating AI legitimately scares me. These systems have zero motivation to post things that are factually accurate or unbiased, and yet people are happy forcing them everywhere.

    On a social media site, I’d like to interact with genuine people with genuine thoughts and experiences, not a faceless conglomerate of a bunch of books and reddit posts.

    And of course, there are the ethical and privacy issues of these systems just scraping and regurgitating huge amounts of content without regard to whether the original creators are okay with it.

  • Dame@kbin.social

    It’s really odd, the take some people have on “AI”. I dislike the term since it’s too broad. If anyone here has a high-end modern smartphone, they have dedicated chips for ML, predictive text, virtual assistants, etc. Odds are most people here use a device daily that uses “AI”, or have the lot of you gone back to older smartphones or non-smart devices?

    • Kichae@kbin.social

      It’s not odd; we’re just able to read for understanding. No one is asking if we want colour correction in photos here, just like no one is asking if we want linear regressions or MCMC models.

      They’re asking whether we want to include generative LLMs.

      • Lobstronomosity@lemmy.mlOP

        I think the crux of the issue is that corporations are quick to throw around the term “AI” because it’s a buzzword, and the layperson does not know what it means other than “smart-ish”. I’d argue that there is no real AI in existence (yet).