Looks so real!

  • nednobbins@lemmy.zip · 3 days ago

    I can define “LLM”, “a painting”, and “alive”. Those definitions don’t require assumptions or gut feelings. We could easily come up with a set of questions and an answer key that will tell you if a particular thing is an LLM or a painting and whether or not it’s alive.

    I’m not aware of any such definition of “conscious”, nor am I aware of any universal test of consciousness. Without that definition, it’s like Ebert claiming that “video games can never be art”.
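
    To make the “answer key” idea concrete, here’s a toy sketch in Python. The specific yes/no questions are placeholders I’m inventing for illustration, not a rigorous taxonomy:

    ```python
    # Toy "answer key": crisp yes/no questions for terms that have workable
    # definitions. The questions themselves are illustrative placeholders.
    def is_alive(thing: dict) -> bool:
        return thing.get("metabolizes", False) and thing.get("reproduces", False)

    def is_llm(thing: dict) -> bool:
        return (thing.get("predicts_next_token", False)
                and thing.get("trained_on_text", False))

    # There is no agreed-upon question list to put inside an is_conscious()
    # function; that gap is exactly my point about "consciousness".
    print(is_alive({"metabolizes": True, "reproduces": True}))  # True
    print(is_llm({"predicts_next_token": True}))                # False
    ```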

    • khepri@lemmy.world · 3 days ago

      Absolutely everything requires assumptions. Even our most objective, “laws of the universe” type observations rest on axioms or first principles that must simply be accepted as true though unprovable if we are going to get anywhere at all, even in math and the hard sciences, let alone philosophy or the social sciences.

      • nednobbins@lemmy.zip · 2 days ago

        Defining “consciousness” requires much more handwaving and many more assumptions than any of the other three. It requires so much that I claim it’s essentially an undefined term.

        With such a vague definition of what “consciousness” is, there’s no logical way to argue that an AI does or does not have it.

        • 2xar@lemmy.world · 2 days ago

          Your logic is critically flawed. By that logic you could argue that there is no “logical way to argue a human has consciousness”, because we don’t have a precise enough definition of consciousness. What you wrote is just “I’m 14 and this is deep” territory, not real logic.

          In reality, you CAN very easily decide whether AI is conscious or not, even if the exact boundary of what you would call “consciousness” can be debated. You wanna know why? Because if you have a basic understanding of how AI/LLMs work, then you know that in every possible, conceivable respect relevant to consciousness they sit somewhere between your home PC and a plankton, and nobody would call either of those conscious, by any definition. Therefore, no matter what vague definition you use, current AI/LLMs definitely do NOT have it. Not by a long shot. Maybe in a few decades they could get there. But current models are basically over-hyped thermostat control electronics.

          • nednobbins@lemmy.zip · 2 days ago

            I’m not talking about a precise definition of consciousness, I’m talking about a consistent one. Without a definition, you can’t argue that an AI, a human, a dog, or a squid has consciousness. You can proclaim it, but you can’t back it up.

            The problem is that I have more than a basic understanding of how an LLM works. I’ve written NNs from scratch and I know that we model perceptrons after neurons.

            Researchers know that there are differences between the two. We can generally eliminate any of those differences (and many researchers do exactly that). But no researcher, scientist, or philosopher can tell you what critical property neurons may have that enables consciousness. Nobody actually knows, and people who claim to know are just making stuff up.
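
            For anyone who hasn’t seen one: a perceptron is just a weighted sum of inputs pushed through a nonlinearity, the unit we loosely pattern after a neuron. A minimal sketch in plain Python (hand-picked numbers, purely illustrative):

            ```python
            import math

            def perceptron(inputs, weights, bias):
                # Weighted sum of inputs, loosely analogous to dendritic integration.
                total = sum(x * w for x, w in zip(inputs, weights)) + bias
                # Sigmoid activation stands in, very roughly, for the neuron "firing".
                return 1.0 / (1.0 + math.exp(-total))

            # Two inputs with hand-picked weights; output is a "firing strength" in (0, 1).
            print(perceptron([0.5, -1.0], [0.8, 0.2], bias=0.1))
            ```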

            • 2xar@lemmy.world · 2 days ago

              I’m not talking about a precise definition of consciousness, I’m talking about a consistent one.

              It does not matter. Any which way you try to spin it, under any imprecise or “inconsistent” definition anybody would want to use, literally EVERYBODY with half a brain will agree that humans DO have consciousness and a rock does not. A squid could be arguable. But LLMs are just a millimeter above rocks, and light-years below squids, on the ladder toward consciousness.

              The problem is that I have more than a basic understanding of how an LLM works. I’ve written NNs from scratch and I know that we model perceptrons after neurons.

              Yeah, the same way Bburago models real cars. They look somewhat similar if you close one eye, squint the other, and don’t know how far away each of them is. But apart from looks, they have NOTHING in common and in NO way offer the same functionality. We don’t even know how many different types of neurons there are, let alone come close to replicating each of their functions and operations:

              https://alleninstitute.org/news/why-is-the-human-brain-so-difficult-to-understand-we-asked-4-neuroscientists/

              So no, AI/LLMs are absolutely and categorically nowhere near the point where we could be debating whether they are conscious or not. Anyone questioning this is a victim of the Dunning-Kruger effect, having zero clue how complex brains and neurons are and how basic, simple, and function-lacking current NN technology is in comparison.