Authors using a new tool to search a list of 183,000 books used to train AI are furious to find their works on the list.

  • FaceDeer@kbin.social · 1 year ago

    Well, now you know; software can be inspired by other people’s works. That’s what AIs are instructed to do during their training phase.

    • BURN@lemmy.world · 1 year ago

      Software cannot be “inspired”

      AIs in their training stage are simply running large-scale statistical analysis on the input material; a toy sketch of what that looks like is at the end of this comment. They’re not “learning”, they’re not “inspired”, they’re not “understanding”.

      The anthropomorphism of these models is a major problem. They are not human, they don’t learn like humans.
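
      To make “statistical analysis” concrete, here is a minimal, hypothetical sketch: a toy bigram counter over a made-up corpus, nothing like a real transformer, but the same “predict the next word from the data” idea.

      ```python
      # Toy "statistical analysis" of text: count which word follows which,
      # then turn the counts into next-word probabilities.
      from collections import Counter, defaultdict

      corpus = "the cat sat on the mat the cat ate the fish".split()

      counts = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          counts[prev][nxt] += 1

      def next_word_distribution(word):
          """Probability of each word following `word`, from raw counts."""
          total = sum(counts[word].values())
          return {w: c / total for w, c in counts[word].items()}

      print(next_word_distribution("the"))
      # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
      ```

      A real LLM replaces this lookup table with billions of learned parameters, but the training objective is still next-token prediction.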

      • lloram239@feddit.de · 1 year ago

        The anthropomorphism of these models is a major problem.

        People attributing any kind of personhood or sentience to them is certainly a problem; the models are fundamentally not capable of that (no loops, no hidden thought), at least for now. However, what you are doing isn’t really much better, just utterly wrong in the opposite direction.

        Those models very definitely do “learn” and “understand” by every definition of those words. Simply playing around with them will quickly show that, and it’s baffling that anybody would try to claim otherwise. Yes, there are limits to what they can understand and plenty of things they can’t do, but the range of questions they can answer goes far beyond what is directly in the training data. Heck, even the fact that they hallucinate is evidence that they understand, since it would be impossible to make up completely plausible, but incorrect, stuff without having a deep understanding of the topic. Humans make mistakes and make stuff up too, so this isn’t anything AI-specific.

        • BURN@lemmy.world · 1 year ago

          Yeah, that’s just flat out wrong

          Hallucinations happen when there are gaps in the training data and the model is just statistically picking whatever is most likely to come next. The output becomes incomprehensible when the model breaks down and doesn’t know where to go. However, the model doesn’t see a difference between hallucinated nonsense and a coherent sentence; they’re exactly the same to it.

          The model does not learn or understand anything. It statistically knows what the next word is. It doesn’t need to have seen something before to know that. It doesn’t understand what it’s outputting; it’s just outputting a long string that is gibberish to it.
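
          As a rough, hypothetical sketch of that point (the prompt and probabilities here are made up for illustration):

          ```python
          # Toy sketch: generation as sampling the statistically likely next
          # word. The sampler has no notion of true vs. false, so a
          # fluent-but-wrong continuation is drawn the same way as a correct one.
          import random

          # Assumed next-word probabilities after "The capital of Australia is"
          next_word_probs = {"Canberra": 0.55, "Sydney": 0.35, "Melbourne": 0.10}

          def sample_next_word(probs):
              """Pick one word in proportion to its probability."""
              words = list(probs)
              weights = [probs[w] for w in words]
              return random.choices(words, weights=weights, k=1)[0]

          random.seed(0)
          print([sample_next_word(next_word_probs) for _ in range(5)])
          # e.g. ['Canberra', 'Sydney', 'Canberra', ...] -- "Sydney" is wrong,
          # but to the sampler it is just another statistically likely word.
          ```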

          I have formal training in AI, and 90%+ of what I see people claiming AI can do is a complete misunderstanding of the tech.

            • lloram239@feddit.de · edited · 1 year ago

            I have formal training in AI

            Then why do you keep talking such bullshit? You sound like you’ve never even tried ChatGPT.

            It statistically knows what the next word is.

            Yes, that’s understanding. What do you think your brain does differently? Please spell out whatever weird definition of “understand” you’re using.

            Are you aware of Emergent World Representations? Or have a listen to what Ilya Sutskever, one of the people behind GPT-4 and AlexNet, has to say on the topic.
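
            For anyone unfamiliar, here is a toy, hypothetical version of the probing idea behind that paper. The real work probes an Othello-playing GPT’s activations for board state; the “activations” below are synthetic and the probe is a simple linear one, just to show the idea:

            ```python
            # Toy stand-in for the probing method in "Emergent World
            # Representations": test whether hidden activations linearly encode
            # a latent state the model was never explicitly given.
            import numpy as np

            rng = np.random.default_rng(42)

            n, d = 1000, 32
            hidden = rng.normal(size=(n, d))  # synthetic "hidden activations"
            latent = (hidden @ rng.normal(size=d) > 0).astype(int)  # hidden "board" bit

            # Fit a linear probe by least squares on +/-1 targets, on half the data...
            train, test = slice(0, 500), slice(500, None)
            w, *_ = np.linalg.lstsq(hidden[train], 2 * latent[train] - 1, rcond=None)

            # ...then check whether it recovers the latent state on held-out data.
            pred = (hidden[test] @ w > 0).astype(int)
            print("probe accuracy:", (pred == latent[test]).mean())  # high by construction
            ```

            The paper’s result, roughly, is that probes like this can read a valid board state out of a model that was only ever trained on move sequences.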

            It doesn’t understand what it’s outputting; it’s just outputting a long string that is gibberish to it.

            Which is obviously nonsense, as I can ask it questions about its output. It can find mistakes in its own output and all that. It obviously understands what it is doing.