The majority of U.S. adults don’t believe the benefits of artificial intelligence outweigh the risks, according to a new Mitre-Harris Poll released Tuesday.

  • Queen HawlSera@lemm.ee · +55 / −15 · 1 year ago

    At first I was all on board for artificial intelligence in spite of being told how dangerous it was. Now I feel the technology has no practical application aside from providing a way to get a lot of sloppy, half-assed, and heavily plagiarized work done, because anything is better than paying people an honest wage for honest work.

    • nandeEbisu@lemmy.world · +39 / −1 · 1 year ago

      AI is such a huge term. Google Lens is great: when I’m travelling I can take a picture of text and it will automatically get translated. Both of those are aided by machine learning models.

      Generative text and image models have proven to have more adverse effects on society.

      I think we’re at a point where we should start normalizing more specific terminology. It’s like saying “I hate machines” when you mean you hate cars, or refrigerators, or air conditioners. “AI” is too broad a term to be used most of the time.

      • CoderKat@lemm.ee · +16 · 1 year ago

        Yeah, I think LLMs and AI art have dominated the discourse to the degree that some people think they’re the only forms of AI that exist, ignoring things like text translation, the autocompletion on your phone keyboard, Photoshop’s intelligent eraser, etc.

        Some forms of AI are of debatable value (especially in their current form), but there are other types that most people consider highly useful, and I think we just forget about them because the controversial types are more memorable.

        • nandeEbisu@lemmy.world · +5 · 1 year ago

          AI is a tool; its value depends on the application. Transformer architectures can be used for generating text or music, but they were originally developed for text translation, which people have fewer qualms with.
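To ground the point that the same transformer architecture underlies translation and generation alike, here is a minimal, purely illustrative sketch of scaled dot-product attention, the core operation of a transformer, in plain Python (real implementations use learned projections and tensor libraries; every name here is made up for illustration):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: each query scores every key,
    and returns a softmax-weighted average of the values."""
    d = len(queries[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# One query aligned with the first key: the output leans
# heavily toward the first value vector.
result = attention(queries=[[1.0, 0.0]],
                   keys=[[1.0, 0.0], [-1.0, 0.0]],
                   values=[[10.0, 0.0], [0.0, 10.0]])
```

Whether the model translates or generates depends on what it is trained on, not on this mechanism, which is the commenter’s point.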

        • SnipingNinja@slrpnk.net · +3 · 1 year ago

          ignoring things like text translation, the autocompletion of your phone keyboard, Photoshop intelligent eraser, etc.

          AFAIK two of those are generative-AI based, or, as you said, LLMs and AI art.

        • nandeEbisu@lemmy.world · +4 · 1 year ago

          It’s not a matter of slang; it’s that “AI” refers to too broad a thing. You don’t need to go as deep as the type of model; something like “AI image generation” or “generative language models” is what you’d refer to. Hopefully we’ll start converging on shorthand for specific things from there.

        • kicksystem@lemmy.world · +3 · 1 year ago

          I’d like people to make a distinction between AI and machine learning, and between machine learning and neural networks (the word “deep” is redundant nowadays), and then have some sense of the different popular types of neural nets: GANs, CNNs, transformers, and diffusion models (e.g. Stable Diffusion). It might be nice if people knew what supervised, unsupervised, and reinforcement learning are. Lastly, people should have some sense of the difference between AI and AGI, and of what is not yet possible.

        • nandeEbisu@lemmy.world · +4 · 1 year ago

          I’m kind of surprised people are more concerned with ChatGPT’s output quality than with where its training set is sourced from, the way they are with image models.

          Language models are still at a stage where they aren’t really a product by themselves; they need to be cajoled into becoming a good product, for example by looking up context via a traditional search and feeding it to the model, or by guiding the model towards solving problems. That’s more of a traditional software problem that leverages large language models.

          Even the engineering needed to go from a text-prediction model trained on a bunch of articles to something that infers it should put an answer after a question is a lot of work.
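The “look up context via a traditional search and feed it to the model” pattern described above (often called retrieval-augmented generation) can be sketched with a toy keyword-overlap search and simple prompt assembly; the documents, function names, and prompt format below are all invented for illustration, and a real system would use a proper search index and an actual model call:

```python
# Toy retrieval-augmented generation: score documents by word
# overlap with the question, then prepend the best match as context.
docs = [
    "The Mitre-Harris poll surveyed U.S. adults about AI.",
    "Transformers were originally developed for text translation.",
]

def retrieve(question, documents):
    """Return the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(documents,
               key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question, documents):
    """Assemble the context-plus-question prompt an LLM would receive."""
    context = retrieve(question, documents)
    return f"Context: {context}\nQuestion: {question}\nAnswer:"

prompt = build_prompt("What were transformers originally developed for?",
                      docs)
```

The language model only ever sees the assembled prompt; the retrieval step is ordinary software, which is the commenter’s point about products being built around the model.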

    • Franzia@lemmy.blahaj.zone · +17 / −1 · edited · 1 year ago

      This is basically how I feel about it. Capital is ruining the value this tech could have. But I don’t think it’s dangerous and I think the open source community will do awesome stuff with it, quietly, over time.

      Edit: where AI can be used to scan faces or identify where people are, yeah, that’s a unique new danger this tech can bring.

      • Alenalda@lemmy.world · +2 · 1 year ago

        I’ve been watching a lot of GeoGuessr lately, and the number of people who can pinpoint a location given just a picture is staggering, even for remote locations.

    • Chickenstalker@lemmy.world · +7 / −6 · 1 year ago

      Dude. Drones and sexbots. Killing people and fucking (sexo) people have always been at the forefront of new tech. If you think AI is only for teh funni maymays, you’re in for a rude awakening.

      • mriormro@lemmy.world · +6 / −1 · edited · 1 year ago

        you think AI is only for teh funni maymays

        When did they state this? I’ve seen it used exactly as they described. My inbox is littered with terribly written AI emails, I’m seeing graphics that are clearly AI-generated being delivered as “final and complete,” and that’s not to mention how homogeneous the output all is. It’s turning into nothing but noise.