AI one-percenters seizing power forever is the real doomsday scenario, warns AI godfather
The real risk of AI isn’t that it’ll kill you. It’s that a small group of billionaires will control the tech forever.

  • Pohl@lemmy.world · 1 year ago

    Either ML is going to scale in an unpredictable way, or it is a complete dead end when it comes to artificial intelligence. The “godfathers” of AI know it’s a dead end.

    Probabilistic computing based on statistical models has value and will be useful. Pretending it is world-changing AI tech was a grift from day 1. The fact that art, which cannot be evaluated objectively, was the first place it appeared commercially should have been the clue.

    • frezik@midwest.social · 1 year ago

      ML isn’t a dead end. I mean, if your target is strong AI at human-like intelligence, then maybe, maybe not. If your goal is useful tools for getting shit done, then ML is already a success. Almost every push for AI in the last 60 years has borne fruit, even if it didn’t meet its final end goal.

      • Pohl@lemmy.world · 1 year ago

        That’s pretty much what I meant. ML has a lot of value; promising that it will deliver artificial intelligence is probably hogwash.

        Useful tools? Yes. AI? No. But never let the truth get in the way of an investor bonanza.

    • Richard@lemmy.world · 1 year ago

      Probabilistic computing based on statistical models has value and will be useful. Pretending it is world-changing AI tech was a grift from day 1.

      That is literally modelling how your brain, and all of our brains, work, so no, neuromorphic computing / approximate computing is still the way to go. It’s just that neuromorphic computing does not necessarily equal LLMs. Paired with powerful mixed analogue and digital signal chips based on photonics, we will hopefully at some point be able to make neural networks that can scale the simulation of neurons and synapses to a level that is on par with or even superior to the human brain.
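
      To make “simulating neurons and synapses” concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, the kind of unit neuromorphic hardware typically emulates. The parameter values are generic textbook numbers, not tied to any particular chip:

      ```python
      import numpy as np

      def simulate_lif(input_current, dt=1e-3, tau=0.02, v_rest=-0.065,
                       v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
          """Leaky integrate-and-fire neuron: the membrane voltage decays
          toward v_rest, integrates injected current, and emits a spike
          (then resets) whenever it crosses the threshold."""
          v = v_rest
          spike_times = []
          for step, i_in in enumerate(input_current):
              v += (-(v - v_rest) + r_m * i_in) * (dt / tau)
              if v >= v_thresh:
                  spike_times.append(step * dt)  # record when we fired
                  v = v_reset                    # reset after the spike
          return spike_times

      # One second of simulated time with a constant 2 nA input current
      spikes = simulate_lif(np.full(1000, 2e-9))
      print(f"{len(spikes)} spikes in 1 s")
      ```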

      • Pohl@lemmy.world · 1 year ago

        A claim that we have a computing model that shares a design with the operation of a biological brain is philosophical conjecture.

        If we had a complete theory of mind, it would simply be a matter of counting up the number of transistors required to approximate varying degrees of intelligence. We do not. We have no idea how the computational meat we all possess enables us to translate sensory input into a continuous sense of self.

        It is totally valid to believe that ML computing is a match for the biological model and that it will cross a barrier at some point. But it is a belief that does not support itself with empirical evidence. At least not yet.

        • Restaldt@lemm.ee · 1 year ago

          A claim that we have a computing model that shares a design with the operation of a biological brain is philosophical conjecture

          Mathematical, actually. See the 1943 McCulloch and Pitts paper for why neural networks are so named.

          We use logic and math to approximate neurons.
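
          A minimal sketch of what that 1943 model computes: a unit fires iff the weighted sum of its binary inputs reaches a threshold. The AND/OR wirings below are just the classic illustration of “logic as neurons”:

          ```python
          def mcculloch_pitts(inputs, weights, threshold):
              """McCulloch-Pitts (1943) neuron: returns 1 (fires) iff the
              weighted sum of binary inputs reaches the threshold."""
              return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

          # Classic demonstration: basic logic gates as threshold units
          AND = lambda a, b: mcculloch_pitts([a, b], [1, 1], threshold=2)
          OR = lambda a, b: mcculloch_pitts([a, b], [1, 1], threshold=1)
          assert AND(1, 1) == 1 and AND(1, 0) == 0
          assert OR(0, 1) == 1 and OR(0, 0) == 0
          ```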

      • Sharklaser@sh.itjust.works · 1 year ago

        Neural networks have been phenomenal in the results they have achieved, outdoing support vector machines, random trees, Markov models, etc. (a toy head-to-head is sketched below). But I do wonder if there is a bias towards them because they mimic what the brain does, like the other post said, and where the limits are.

        For example, in medicine we want to spot unknown correlations to improve things like drug discovery, stratified medicine, and strange patterns in disease within a population that suggest unknown factors at play… There might be a mathematical model better than convolutional neural networks that doesn’t mimic the brain, but maybe we need an AI to develop it, like Deep Thought in HGTTG!
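
        As a toy version of that head-to-head (synthetic data, default hyperparameters, purely to show the shape of the comparison, not a benchmark):

        ```python
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score
        from sklearn.neural_network import MLPClassifier
        from sklearn.svm import SVC

        # A synthetic binary classification task
        X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

        models = {
            "neural net (MLP)": MLPClassifier(max_iter=1000, random_state=0),
            "SVM": SVC(),
            "random forest": RandomForestClassifier(random_state=0),
        }
        for name, model in models.items():
            scores = cross_val_score(model, X, y, cv=5)  # 5-fold CV accuracy
            print(f"{name}: {scores.mean():.3f}")
        ```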