The majority of U.S. adults don’t believe the benefits of artificial intelligence outweigh the risks, according to a new Mitre-Harris Poll released Tuesday.

  • ShadowRam@kbin.social (+129/−17) · 1 year ago

    The majority of U.S. adults don’t understand the technology well enough to make an informed decision on the matter.

    • GoodEye8@lemm.ee (+65/−6) · 1 year ago

      To be fair, even if you understand the tech, it’s kinda hard to see how it would benefit the average worker as opposed to the CEOs and shareholders who will use it as a cost-reduction method to make more money. Most workers will be laid off because of AI, so obviously it’s of no benefit to them.

      • Billiam@lemmy.world (+16/−4) · 1 year ago

        Just spitballing here, and this may be a bit of pie-in-the-sky thinking, but I think this is what might ultimately push the US into socialized healthcare and/or UBI. Increasing automation won’t reduce the population, and as more workers are put out of work by automation, they’ll have more time and motivation to do things like protest.

        • Khotetsu@lib.lgbt (+24) · 1 year ago

          The US economy literally depends on 3-4% of the workforce being so desperate for work that they’ll take any job, regardless of how awful the pay is. They said this during the recent labor shortage, citing how it’s used to keep wages down and how it was a “bad thing” that almost 100% of the workforce was employed, because it meant people could pick and choose rather than just take the first offer they got, thus causing wages to increase.

          Poverty and homelessness are a feature, not a bug.

          • Billiam@lemmy.world (+11) · 1 year ago

            Yes, but for capitalism it’s a delicate balance: too many job openings give labor more power, but too few give people reason to challenge the status quo. That 3-4% may be enough for the capitalists, but what happens when 15-20% of your workforce is unemployed because of automation? That’s when civil unrest happens.

            Remember that the most progressive presidential administration in US history, FDR’s, came right after the Gilded Age and the Roaring ’20s crashed the economy. When 25% of Americans were out of work during the Great Depression, social programs suddenly looked much more preferable than food riots. And the wealth disparity now is, relatively, even greater than it was back then.

            • Khotetsu@lib.lgbt (+1) · 1 year ago

              Very true, but it’s precisely that wealth disparity that concerns me. I’ve seen the current US wealth disparity described as being on par with the disparity in France just before the French Revolution happened, where the cost of a loaf of bread had soared to more than the average worker made in a day. I worry that the more than half a century of anti-union propaganda and “get what I need and screw everybody else” attitude has beaten down the general public enough that there simply won’t be enough of a unified effort to enact meaningful change. I worry about how bad things will have to get before it’s too much. How many families will never recover.

              But these are also very different times compared to the 1920s, in that we’ve been riding the coattails of the post-WW2 economic boom for almost 70 years, and as that continues to slow down we might see some actual pushback. We already have, with every generation being more progressive than the last.

              But I still can’t help but worry.

          • rambaroo@lemmy.world (+2) · 1 year ago · edited

            Yep. I stopped listening to Marketplace on NPR because the last time I listened they were echoing this exact sentiment. Somehow it’s a good thing that wages aren’t keeping up with inflation. Fuck NPR.

        • TwilightVulpine@lemmy.world (+9/−1) · 1 year ago

          Seems more likely that they’ll have more time not in the sense of having easier jobs but by being laid off and having to fight for their livelihood. In the corporate-driven society that we live today, it’s unlikely that the benefits of new advancements will be spontaneously shared.

          • Billiam@lemmy.world (+6) · 1 year ago

            Seems more likely that they’ll have more time not in the sense of having easier jobs but by being laid off and having to fight for their livelihood.

            This is exactly what I meant.

            People who have to fight for subsistence won’t easily revolt, because they’re too busy trying to survive.

            People who are unemployed have nothing to lose by revolting. And the more automation there is, the more unemployed people there will be.

            • TwilightVulpine@lemmy.world (+8) · 1 year ago

              So we see it the same way, but I don’t feel very optimistic about it, because it’s going to get much worse before it might get better. All the suffering and struggle it will take to reform society will be ugly.

              • Billiam@lemmy.world (+3) · 1 year ago

                Yes, I think it will get worse before it gets better. As long as there is a sociopathic desire to hoard wealth, and no fucks given to our fellow humans, this is how it will be. Capitalism causes these issues, and so capitalism can’t fix them.

      • treadful@lemmy.zip (+8/−1) · 1 year ago

        Efficiency and productivity aren’t bad things. Nobody likes doing bullshit work.

        Unemployment may become a huge issue, but IMO the solution isn’t busy work; at the least, come up with more useful government jobs programs.

        • GoodEye8@lemm.ee (+11/−1) · 1 year ago

          Of course, there’s nothing inherently wrong with using AI to get rid of bullshit work. The issue is who will benefit from using AI and it’s unlikely to be the people who currently do the bullshit work.

          • treadful@lemmy.zip (+1/−1) · 1 year ago

            But that’s literally everything in a capitalist economy. Value accrues to capital. It has nothing to do with AI.

        • credit crazy@lemmy.world (+3) · 1 year ago

          You see, the problem with that is that AI, in the case of animation and art, isn’t removing menial labor; it’s removing hobbies that people get paid to take part in.

        • GoodEye8@lemm.ee (+10/−1) · 1 year ago

          You could cut housing prices to a tenth of what they currently are and it wouldn’t matter to homeless people who don’t have a job. Things being cheaper doesn’t matter to people who can’t make a living.

      • rambaroo@lemmy.world (+1) · 1 year ago

        Most of them? The vast majority of jobs cannot be replaced by LLMs. The CEOs who believe that are delusional.

    • Moobythegoldensock@lemm.ee (+20) · 1 year ago

      If you look at the poll, the concerns raised are all valid. AI will most likely be used to automate cyberattacks, identity theft, and to spread misinformation. I think the benefits of the technology outweigh the risks, but these issues are very real possibilities.

    • meseek #2982@lemmy.ca (+27/−11) · 1 year ago

      Informed or not, they aren’t wrong. If there’s an iota of a chance that something can be misused, it will be. Human nature. AI will be used against everyone. Its potential for good is equally as strong as its potential for evil.

      But imagine this. You get laid off. At that moment, bots are contacting your bank, LinkedIn, and most of the financial lenders about the incident. Your credit is flagged as your income has dropped significantly. Your bank seizes the opportunity and jacks up your mortgage rates. Lenders are also making use of the opportunity to seize back their merchandise as you’ll likely not be able to make payments and they know it.

      Just one likely incident when big brother knows all and can connect the dots using raw compute power.

      Having every little secret parcelled over the internet because we live in the digital age is not something humanity needs.

      I’m actually stunned that even here, among the tech nerds, you all still don’t realize how much digital espionage is being done on the daily. AI will only serve to help those in power grow bigger.

      • treadful@lemmy.zip (+18) · 1 year ago

        But imagine this. You get laid off. At that moment, bots are contacting your bank, LinkedIn, and most of the financial lenders about the incident. Your credit is flagged as your income has dropped significantly. Your bank seizes the opportunity and jacks up your mortgage rates. Lenders are also making use of the opportunity to seize back their merchandise as you’ll likely not be able to make payments and they know it.

        None of this requires “AI.” At most AI is a tool to make this more efficient. But then you’re arguing about a tool and not the problem behavior of people.

      • aidan@lemmy.world (+13/−2) · 1 year ago

        AI is not bots; most of that would be easier to do with traditional code than with a deep learning model. But the reality is there’s no incentive for these entities to cooperate with each other.

    • ZzyzxRoad@lemm.ee (+4/−2) · 1 year ago

      Seeing technology consistently putting people out of work is enough for people to see it as a problem. You shouldn’t need to be an expert in it to have an opinion when it’s being used to threaten your source of income.

      Teachers have to do more work and put in more time now because ChatGPT has affected education at every level. Educators already get paid dick to work insane hours of skilled labor, and students have enough on their plates without having to spend extra time in the classroom. It’s especially unfair when every student has to pay for the actions of the few dishonest ones. It’s pretty ironic how it’s set us back technologically, to the point where we can’t use the tech that’s been created and implemented to make our lives easier. We’re back to sitting at our desks with pencil and paper for an extra hour a week.

      There are already AI “books” being sold to unknowing customers on Amazon. How long will it really be until researchers are competing with it? Students won’t be able to recognize the difference between real and fake academic articles. They’ll spread incorrect information after stealing pieces of real studies without the authors’ permission, then mashing them together into some bullshit that sounds legitimate. You know there will be AP articles (written by AI) with headlines like “new study says xyz!” and people will just believe that shit.

      When the government can do its job and create failsafes like UBI to keep people’s lives and livelihoods from being ruined by AI and other tech, then people might be more open to it. But the lemmy narrative that overtakes every single post about AI, that says the average person is too dumb to be allowed to have an opinion, is not only, well, fucking dumb, but also tone-deaf and willfully ignorant.

      Especially when this discussion can easily go the other way, by pointing out that tech bros are too dumb to understand the socioeconomic repercussions of AI.

    • bob_wiley@lemmy.world (+3/−2) · 1 year ago

      Those who do know it have a strong bias toward new tech, which blinds them to reality or any possible negatives. We’ve seen this countless times in tech. Like when NFTs were going to change the world: you couldn’t tell those guys otherwise without being branded out of touch or someone who doesn’t understand the tech.

      • ShadowRam@kbin.social (+3/−1) · 1 year ago

        I mean, NFTs are a ridiculous comparison, because those who understood that tech were exactly the ones who said it was ridiculous.

        • bob_wiley@lemmy.world (+1/−1) · 1 year ago

          I have to believe the crypto bros understood it; they were just blinded by dollar signs… like many of those involved in AI right now.

      • Echo Dot@feddit.uk (+1) · 1 year ago

        Wasn’t it the ones who didn’t understand NFTs who were the fan boys? Everyone who knew what they were said they were bloody stupid from the get-go.

    • archon@sh.itjust.works (+3/−3) · 1 year ago

      You can make an observation that something is dangerous without intimate knowledge of its internal mechanisms.

      • ShadowRam@kbin.social (+3/−2) · 1 year ago

        Sure you can, but that doesn’t change the fact that you’re ignorant of whether it’s actually dangerous or not.

        And these people are making ‘observations’ without knowledge of even the external mechanisms.

        • archon@sh.itjust.works (+3/−1) · 1 year ago

          I’m sure I can name many examples of things I observed to be dangerous, where the observation turned out to be correct. But sure, claim unilateral ignorance and dismiss anyone who doesn’t agree with your view.

  • GreenBottles@lemmy.world (+98/−11) · 1 year ago

    Most adult Americans don’t know the difference between a PC tower and a monitor, a modem and a PC, or an Ethernet cable and a USB cable.

        • Armen12@lemm.ee (+2/−2) · 1 year ago

          It’s an outdated observation. Everyone today has a basic knowledge of computers

          • foggenbooty@lemmy.world (+2) · 1 year ago

            I’d dispute that. The iPadification of tech has people using computers more, but with less actual knowledge of computers.

            • Armen12@lemm.ee (+1/−2) · 1 year ago

              I’d like some data on that, because the tech world seems to have come a long way in just the last 20 years. Things wouldn’t progress this fast if people didn’t know what they were doing. Saying “kids these days” is such a cliché.

          • Echo Dot@feddit.uk (+1) · 1 year ago

            No, it’s pretty up to date, really.

            Where I work we have a lot of mini Dell workstations, and everyone insists on calling them hard drives or processors. The irony being that they actually don’t have any data storage capability at all, other than RAM, since they all save data to the network drives. So the one thing they definitely are not is hard drives.

  • Uncle_Iroh@lemmy.world (+88/−30) · 1 year ago

    Most of the U.S. adults also don’t understand what AI is in the slightest. What do the opinions of people who are not in the slightest educated on the matter affect lol.

    • Mac@mander.xyz (+54/−1) · 1 year ago

      “What do the opinions of people who are not in the slightest educated on the matter affect”

      Judging by the elected leaders of the USA: quite a lot, in fact.

      • Armen12@lemm.ee (+1/−10) · 1 year ago

        So you’d rather only the 1% get the right to vote? How about only white land owners? How about only men get to vote in this wonderful utopia of yours

          • Armen12@lemm.ee (+3/−4) · 1 year ago · edited

            Making a mockery of the workforce who rely on jobs to not be homeless isn’t appropriate in this conversation, nor is it even an argument to begin with; it’s just a snobbish incel who probably lives in a gated community mocking poor people.

      • Wolf_359@lemmy.world (+39/−14) · 1 year ago

        Prime example. Atomic bombs are dangerous and they seem like a bad thing. But then you realize that, counter to our intuition, nuclear weapons have created peace and security in the world.

        No country with nukes has been invaded. No world wars have happened since the invention of nukes. Countries with nukes don’t fight each other directly.

        Ukraine had nukes, gave them up, and was promptly invaded by Russia.

        Things that seem dangerous aren’t always dangerous. Things that seem safe aren’t always safe. More often though, technology has good sides and bad sides. AI does and will continue to have pros and cons.

        • Hexagon@feddit.it (+38/−1) · 1 year ago

          Atomic bombs are also dangerous because if someone ends up launching one by mistake, all hell is going to break loose. This has almost happened multiple times:

          https://en.wikipedia.org/wiki/List_of_nuclear_close_calls

          We’ve just been lucky so far.

          And then there are questionable state leaders who may even use them willingly. Like Putin, or Kim, maybe even Trump.

          • gravitas_deficiency@sh.itjust.works (+3/−4) · 1 year ago

            …and the development and use of nuclear power has been one of the most important developments in civil infrastructure in the last century.

            Nuclear isn’t categorically free from the potential to harm, but it can also do a whole hell of a lot for humanity if used the right way. We understand it enough to know how to use it carefully and safely in civil applications.

            We’ll probably get to the same place with ML… eventually. Right now, everyone’s just throwing tons of random problems at it to see what sticks, which is not what one could call responsible use - particularly when outputs are used in a widespread sense in production environments.

        • cheery_coffee@lemmy.ca (+26/−1) · 1 year ago

          Alright, when the AI takes my job and I can’t feed my family while the billionaires add another digit to their net worth I’ll consider the pros.

          There’s about 0% chance we reform society for AI, it will just funnel more wealth to the rich. People claim it will open new jobs but I don’t see it.

          • Jerkface@lemmy.world (+9/−5) · 1 year ago · edited

            People have had the same concerns about automation since basically forever. Automation isn’t the problem. The people who use automation to perpetuate the systems that work against us will continue to find creative ways to exploit us with or without AI. Those people and those systems-- they are the problem. And believe it or not, that problem is imminently solvable.

            • cheery_coffee@lemmy.ca (+9/−1) · 1 year ago

              It’s fair to compare but you can’t dismiss concerns based on that.

              Past automation often removed duplicative or superfluous work; AI removes thought work. It’s a fundamentally different kind of automation than we’ve seen before.

              It will make many things cheaper to do and easier to start some businesses, but it will also decimate workers. It’s also not something that’s generally available to lower classes to wield yet.

              It’s here but I don’t have to be optimistic.

              • Jerkface@lemmy.world (+4/−2) · 1 year ago · edited

                I want to avoid using the term solution, not least of all because implementation has its own set of challenges, but some of us used to dream that automation would do that work for us. Perhaps naively, some of us assumed that people just wouldn’t have to work as much. And perhaps I continue to be naive in thinking that that should still be our end goal. If automation reduces the required work hours by 20% with no reduction in profit, full time workers should have a 32 hour week with no reduction in income.

                But since employers will always pocket that money if given the option, we need more unionization, we need unions to fight for better contracts, we need legislation that will protect and facilitate them, and we need progressive taxation that will decouple workers most essential needs from their employers so they have more of a say in where and how they work, be that universal public services, minimum income guarantee, or what have you.

                We’re quite far behind in this fight but there has been some recent progress about which I am pretty optimistic.

                Edit: for clarification

                • Franzia@lemmy.blahaj.zone (+3) · 1 year ago · edited

                  This was so very thoughtful, and after reading it, I feel optimistic too. Fuck yeah.

                  Edit: thank you.

          • PsychedSy@sh.itjust.works (+2/−4) · 1 year ago

            Technology tends to drive costs down and create more jobs, but in different areas. It’s not like there hasn’t been capture by the super rich in the past 150 years, but somehow we still enjoy better lives decade by decade.

        • RichieAdler 🇦🇷@lemmy.myserv.one (+18) · 1 year ago · edited

          If you’re from one of the countries with nukes, of course you’ll see it as positive. For the victims of the nuke-wielding countries, not so much.

        • walrusintraining@lemmy.world (+11/−1) · 1 year ago

          That’s a good point; however, just because the bad thing hasn’t happened yet doesn’t mean it won’t. Everything has pros and cons; it’s a matter of whether or not the pros outweigh the cons.

        • bogdugg@sh.itjust.works (+3) · 1 year ago

          I don’t disagree with your overall point, but as they say, anything that can happen, will happen. I don’t know when it will happen; tomorrow, 50 years, 1000 years… eventually nuclear weapons will be used in warfare again, and it will be a dark time.

        • Techmaster@lemm.ee (+1/−1) · 1 year ago

          No world wars have happened since the invention of nukes

          Except the current world war.

      • GigglyBobble@kbin.social (+7/−5) · 1 year ago · edited

        You need to understand it to correctly classify the danger, though.

        Otherwise you make stupid decisions, such as quitting nuclear energy in favor of coal because of an incident like Fukushima, even though that incident had just a single casualty due to radiation.

      • StereoTrespasser@lemmy.world (+10/−9) · 1 year ago

        I’m over here asking chatGPT for help with a pandas dataframe and loving every minute of it. At what point am I going to feel the effects of nuclear warfare?

        • walrusintraining@lemmy.world (+20/−5) · 1 year ago

          I’m confused how this is relevant. Just pointing out this is a bad take, not saying nukes are the same as AI. chatGPT isn’t the only AI out there btw. For example NYC just allowed the police to use AI to profile potential criminals… you think that’s a good thing?

              • Jerkface@lemmy.world (+1/−2) · 1 year ago

                The take is “let’s not forget to hold people accountable for the shitty things they do.” AI is not a killing machine. Guns aren’t particularly productive.

      • WhyIDie@kbin.social (+4/−6) · 1 year ago · edited

        you also don’t have to understand how 5g works to know it spreads covid /s

        point is, I don’t see how your analogy works beyond the limited scope of only things that result in an immediate loss of life

        • walrusintraining@lemmy.world (+8/−1) · 1 year ago

          I don’t need to know the ins and outs of how the nazi regime operated to know it was bad for humanity. I don’t need to know how a vaccine works to know it’s probably good for me to get. I don’t need to know the ins and outs of personal data collection and exploitation to know it’s probably not good for society. There are lots of examples.

          • WhyIDie@kbin.social (+3) · 1 year ago · edited

            Okay, I’ll concede, my scope was also pretty limited. I still stand by not trusting the public to decide the best use of AI, when most people think what we have now is anything more than supercharged statistics.

          • linearchaos@lemmy.world (+1) · 1 year ago

            I can certainly grant that “you” don’t need to know, but there are a lot of differing opinions on even the things you’re talking about among the people in this very community.

            I would say that the royal “we” needs to know, because a lot of people hold opinions on facts that don’t line up with the actual facts. Sure, not you, not me, but a hell of a lot of people.

            • walrusintraining@lemmy.world (+2/−1) · 1 year ago

              I don’t disagree that people are stupid, but the majority of people got/supported the vaccine. Majority is sometimes a good indicator, that’s how democracy works. Again, it’s not perfect, but it’s not useless either.

      • Uncle_Iroh@lemmy.world (+1/−4) · 1 year ago

        You chose an analogy with the most limited scope possible, but sure, I’ll go with it. To understand exactly how dangerous an atomic bomb is, without just looking up Hiroshima, you need to have at least some knowledge of the subject; you’d also have to understand all the nuances, etc. The thing about AI is that most people haven’t a clue what it is, how it works, or what it can do. They just listen to the shit their Telegram-loving uncle spewed at the family gathering. A lot of people think AI is fucking sentient lmao.

        • walrusintraining@lemmy.world (+4/−1) · 1 year ago

          I don’t think most people think AI is sentient. In my experience, the people who think that are the ones who consider themselves the most educated, saying stuff like “neural networks are basically the same as a human brain.”

          • Uncle_Iroh@lemmy.world (+1/−2) · 1 year ago

            You don’t think, yet a software engineer from Google, Blake Lemoine, thought LaMDA was sentient. He took a lot of idiots down with him when he went public with those claims. Not to mention the movies that were made with the premise of sentient AI.

            Your anecdotal experience and your feelings don’t in the slightest change the reality that there are tons of people who think AI is sentient and will somehow start some fucking robo revolution.

    • kitonthenet@kbin.social
      link
      fedilink
      arrow-up
      7
      arrow-down
      1
      ·
      1 year ago

      Because they live in the same society as you, and they get to decide who goes to jail as much as you do

    • Franzia@lemmy.blahaj.zone
      link
      fedilink
      English
      arrow-up
      7
      arrow-down
      3
      ·
      1 year ago

      Well, being a snob about it doesn’t help either. If all the average Joe knows about AI is what Google or OpenAI pushed to corporate media, that shouldn’t be where the conversation ends.

      • Uncle_Iroh@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        arrow-down
        4
        ·
        1 year ago

        The average Joe can have their thoughts on it all they want, but their opinions on the matter aren’t really valid or of any importance. AI is best left to the people who have deep knowledge of the subject, just as nuclear fusion is best left to the scientists studying the field. I’m not going to tell average Joe the mechanic that I think the engine he just rebuilt might blow up, because I have no fucking clue about it. Sure, I have some very basic knowledge of it, but that’s pretty much where it ends.

    • gravitas_deficiency@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      4
      arrow-down
      1
      ·
      1 year ago

      You can not know the nuanced details of something and still be (rightly) sketched out by it.

      I know a decent amount about the technical implementation details, and that makes me trust its use in (what I perceive as) inappropriate contexts way less than the average layperson.

    • Armen12@lemm.ee
      link
      fedilink
      English
      arrow-up
      7
      arrow-down
      7
      ·
      1 year ago

      What a terrible thing to say, they’re human beings so I hope they matter to you

          • Uncle_Iroh@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            4
            ·
            1 year ago

            Am I a terrible person simply because they don’t matter to me? Do you cry for every death your military caused? Do you cry for every couple with a stillborn baby? No, you don’t. You think it’s shitty, because it is. But you don’t really care; they don’t truly matter to you. The way you throw those words around cheapens their meaning.

            • Armen12@lemm.ee
              link
              fedilink
              English
              arrow-up
              2
              ·
              1 year ago

              Lot of words to just say you’re a terrible person, we got it already, you don’t need to explain why you’re terrible

  • Endorkend@kbin.social
    link
    fedilink
    arrow-up
    50
    arrow-down
    2
    ·
    1 year ago

    The problem is that there is no real discussion about what to do with AI.

    It’s being allowed to be developed without much of any restrictions and that’s what’s dangerous about it.

    Like how some places are starting to use AI to profile the public Minority Report style.

    • pavnilschanda@lemmy.world
      link
      fedilink
      English
      arrow-up
      12
      ·
      1 year ago

      Yep. It’s either “embrace the future, adapt or die” or “let’s put the technological genie back in the bottle”. No actual nuance.

      • PopOfAfrica@lemmy.world
        link
        fedilink
        English
        arrow-up
        20
        ·
        1 year ago

        The problem is that capitalism puts us in this position. Nobody is abstractly upset that the jobs we hate can now be automated.

        What is upsetting is that we won’t be able to eat because of it.

    • RememberTheApollo_@lemmy.world
      link
      fedilink
      English
      arrow-up
      6
      arrow-down
      1
      ·
      1 year ago

      Depends on who you talk to. If you’re a business that can replace human labor with AI, you’re probably discussing it pretty hard.

      What restrictions should it have? And how would you implement them? Because there would certainly be carve-outs like “you can’t make X with AI, unless of course you’re a big business that can profit off of it.”

  • Dasnap@lemmy.world
    link
    fedilink
    English
    arrow-up
    46
    arrow-down
    3
    ·
    1 year ago

    The past decade has done an excellent job of making people cynical about any new technology. I find that whatever crypto bros are currently interested in is a good canary for what I should be suspicious of.

    • iopq@lemmy.world
      link
      fedilink
      English
      arrow-up
      30
      arrow-down
      4
      ·
      1 year ago

      The vaccine saved millions of lives, yet people will be cynical despite reality

      • Dasnap@lemmy.world
        link
        fedilink
        English
        arrow-up
        12
        arrow-down
        2
        ·
        1 year ago

        I feel like anti-vaccine groups have been around for a good chunk of time, but they certainly seemed to get a boost from the internet.

        • huginn@feddit.it
          link
          fedilink
          English
          arrow-up
          10
          ·
          1 year ago

           If more of your family and friends are dying, why would you avoid the ounce of prevention? That doesn’t make sense.

          • GigglyBobble@kbin.social
            link
            fedilink
            arrow-up
            3
            arrow-down
            1
            ·
            edit-2
            1 year ago

            They wouldn’t attribute it to the virus but something like 5G radiation. And yes, it doesn’t make sense.

    • raktheundead@fedia.io
      link
      fedilink
      arrow-up
      7
      arrow-down
      1
      ·
      1 year ago

      It’s also worth noting that the same VCs who backed cryptocurrency have pivoted to generative AI. It’s all part of the same grift, just with different clothes.

      • WldFyre@lemm.ee
        link
        fedilink
        English
        arrow-up
        1
        ·
        1 year ago

        Most major companies didn’t touch crypto with a 10ft pole, but they’ve leapt at the chance to use AI tech. I don’t think it’s the same grift at all personally.

        • raktheundead@fedia.io
          link
          fedilink
          arrow-up
          1
          ·
          1 year ago

           A lot of companies investigated cryptocurrency obliquely; “blockchain” was the hype word in tech for several years. And several of those companies had a serious sunk-cost fallacy going when they perpetuated their blockchain projects, despite blockchain at best being a case of Worse Is Better, where a solution that sucks but exists can be better than a perfect one that doesn’t.

    • kitonthenet@kbin.social
      link
      fedilink
      arrow-up
      1
      ·
      1 year ago

      It doesn’t hurt that the same companies that did all the things that made people cynical about technologies are the ones perpetrating this round of BS

    • Fermion@feddit.nl
      link
      fedilink
      English
      arrow-up
      2
      arrow-down
      1
      ·
      1 year ago

      I am really disappointed that crypto became synonymous with speculative “investing.” The core blockchain technology seems like it could be useful for enhancing privacy online. However, the majority of groups loudly advertising that they use crypto are exploitative money grabs.

  • Queen HawlSera@lemm.ee
    link
    fedilink
    English
    arrow-up
    55
    arrow-down
    15
    ·
    1 year ago

    At first I was all on board with artificial intelligence, in spite of being told how dangerous it was. Now I feel the technology has no practical application aside from providing a way to get a lot of sloppy, half-assed, and heavily plagiarized work done, because anything is better than paying people an honest wage for honest work.

    • nandeEbisu@lemmy.world
      link
      fedilink
      English
      arrow-up
      39
      arrow-down
      1
      ·
      1 year ago

      AI is such a huge term. Google Lens is great: when I’m travelling I can take a picture of text and it will automatically get translated. Both the recognition and the translation are aided by machine learning models.

      Generative text and image models have proven to have more adverse effects on society.

      I think we’re at a point where we should start normalizing using more specific terminology. It’s like saying I hate machines, when you mean you hate cars, or refrigerators or air conditioners. It’s too broad of a term to be used most of the time.

      • CoderKat@lemm.ee
        link
        fedilink
        English
        arrow-up
        16
        ·
        1 year ago

        Yeah, I think LLMs and AI art have so dominated the discourse that some people think they’re the only forms of AI that exist, ignoring things like text translation, the autocompletion of your phone keyboard, Photoshop’s intelligent eraser, etc.

        The value of some forms of AI is debatable (especially in their current form). But there are other types of AI that most people consider highly useful, and I think we just forget about them because the controversial types are more memorable.

        • nandeEbisu@lemmy.world
          link
          fedilink
          English
          arrow-up
          5
          ·
          1 year ago

          AI is a tool, its value is dependent on whatever the application is. Transformer architectures can be used for generating text or music, but they were also originally developed for text translation which people have fewer qualms with.

        • SnipingNinja@slrpnk.net
          link
          fedilink
          English
          arrow-up
          3
          ·
          1 year ago

          ignoring things like text translation, the autocompletion of your phone keyboard, Photoshop intelligent eraser, etc.

          AFAIK two of those are generative-AI based, or, as you said, LLMs and AI art

        • nandeEbisu@lemmy.world
          link
          fedilink
          English
          arrow-up
          4
          ·
          1 year ago

          It’s not a matter of slang; it’s that the term refers to too broad a thing. You don’t need to go as deep as the type of model; something like “AI image generation” or “generative language models” is what you would refer to. We’ll hopefully start converging on shorthand from there for specific things.

        • kicksystem@lemmy.world
          link
          fedilink
          English
          arrow-up
          3
          ·
          1 year ago

          I’d like people to make a distinction between AI and machine learning, and between machine learning and neural networks (the word “deep” is redundant nowadays). And then have some sense of the different popular types of neural nets: GANs, CNNs, transformers, stable diffusion. It might be nice if people knew the difference between supervised, unsupervised, and reinforcement learning. Lastly, people should have some sense of the difference between AI and AGI, and of what is not yet possible.

        • nandeEbisu@lemmy.world
          link
          fedilink
          English
          arrow-up
          4
          ·
          1 year ago

          I’m kind of surprised people are more concerned with the output quality of ChatGPT, and not with where the training set is sourced from, like they are for image models.

          Language models are still at a stage where they aren’t really a product by themselves; they need to be cajoled into becoming a good product, like looking up context via a traditional search and feeding it to the model, or guiding it toward solving problems. That’s more of a traditional software problem that leverages large language models.

          Even the engineering needed to go from a text-prediction model trained on a bunch of articles to something that infers it should put an answer after a question is a lot of work.

    • Franzia@lemmy.blahaj.zone
      link
      fedilink
      English
      arrow-up
      17
      arrow-down
      1
      ·
      edit-2
      1 year ago

      This is basically how I feel about it. Capital is ruining the value this tech could have. But I don’t think it’s dangerous and I think the open source community will do awesome stuff with it, quietly, over time.

      Edit: where AI can be used to scan faces or identify where people are, yeah that’s a unique new danger that this tech can bring.

      • Alenalda@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        ·
        1 year ago

         I’ve been watching a lot of GeoGuessr lately, and the number of people who can pinpoint a location given just a picture is staggering. Even for remote locations.

    • Chickenstalker@lemmy.world
      link
      fedilink
      English
      arrow-up
      7
      arrow-down
      6
      ·
      1 year ago

      Dude. Drones and sexbots. Killing people and fucking (sexo) people have always been at the forefront of new tech. If you think AI is only for teh funni maymays, you’re in for a rude awakening.

      • mriormro@lemmy.world
        link
        fedilink
        English
        arrow-up
        6
        arrow-down
        1
        ·
        edit-2
        1 year ago

        you think AI is only for teh funni maymays

        When did they state this? I’ve seen it used exactly as they have described. My inbox is littered with terribly written ai emails, I’m seeing graphics that are clearly ai generated being delivered as ‘final and complete’, and that’s not to mention the homogeneous output of it all. It’s turning into nothing but noise.

  • orca@orcas.enjoying.yachts
    link
    fedilink
    English
    arrow-up
    33
    arrow-down
    2
    ·
    1 year ago

    I work with AI and don’t necessarily see it as “dangerous”. CEOs and other greed-chasing assholes are the real danger. They’re going to do everything they can to keep filling human roles with AI so that they can maximize profits. That’s the real danger. That and AI writing eventually permeating and enshittifying everything.

    A hammer isn’t dangerous on its own, but becomes a weapon in the hands of a psychopath.

    • Mjpasta@kbin.social
      link
      fedilink
      arrow-up
      3
      ·
      edit-2
      1 year ago

      So, because of greed and endless profit seeking, expect all corporations to replace everything that can be replaced - with AI…?

      • orca@orcas.enjoying.yachts
        link
        fedilink
        English
        arrow-up
        2
        ·
        1 year ago

        I mean, they’re already doing it. Not in every role because not every one of them can be filled by AI, but it’s happening.

  • DarkGamer@kbin.social
    link
    fedilink
    arrow-up
    23
    arrow-down
    1
    ·
    1 year ago

    “Can’t we just make other humans from lower socioeconomic classes toil their whole lives, instead?”

    The real risk of AI/automation is if we fail to adapt our society to it. It could free us from toil forever but we need to make sure the benefits of an automated society are spread somewhat evenly and not just among the robot-owning classes. Otherwise, consumers won’t be able to afford that which the robots produce, markets will dry up, and global capitalism will stop functioning.

  • gmtom@lemmy.world
    link
    fedilink
    English
    arrow-up
    14
    ·
    1 year ago

    Most US adults couldn’t tell you what LLM stands for, never mind how stable diffusion works. So there’s not much point in asking them, as they won’t understand the benefits or the risks.

  • bigkix@lemm.ee
    link
    fedilink
    English
    arrow-up
    13
    ·
    1 year ago

    My opinion: the current state of AI is nothing special compared to what it could be. And when it gets close to all it can be, it will be used (as always happens) to generate even more money and no more equality. The movie “Elysium” comes to mind.

  • vzq@lemmy.blahaj.zone
    link
    fedilink
    English
    arrow-up
    12
    ·
    1 year ago

    The problem is that I’m pretty sure that whatever benefits AI brings, they are not going to trickle down to people like me. After all, all AI investment is coming from the digital landlords and is designed to keep their rent-seeking companies in the saddle for at least another generation.

    However, the drawbacks certainly are headed my way.

    So even if I’m optimistic about the possible uses of AI, I’m not optimistic about this particular strand of the future we’re headed toward.

  • Echo Dot@feddit.uk
    link
    fedilink
    English
    arrow-up
    8
    ·
    1 year ago

    The general public don’t understand what they’re talking about, so it’s not worth asking them.

    What is the point of surveys like this? We don’t operate on direct democracy, so there’s literally no value in these things except to stir the pot.

  • balloflearning@midwest.social
    link
    fedilink
    English
    arrow-up
    8
    arrow-down
    1
    ·
    1 year ago

    Generally, people are wary of disruptive technology. While this technology has the potential to displace a plethora of jobs for the sake of increased productivity, companies won’t be able to move product if unemployment skyrockets.

    Regardless of what people think, the Pandora’s box of AI has been opened, and now the only way forward is to adapt.

    • flossdaily@lemmy.world
      link
      fedilink
      English
      arrow-up
      3
      arrow-down
      1
      ·
      1 year ago

      Yes.

      All our science fiction stories prepared us for a world where AI was only possible with a giant supercomputer somewhere, or some virus that exists beyond human control, spread throughout the internet.

      We were not prepared for the reality that all at once, any average Joe could create an AI on their home PC.

      We absolutely can’t go backwards, and right now we’re in the most important race in history, against every other country and company, to create the best AI.

      Whoever can make a self-replicating, self-improving AI first will rule the world. Or rather its AI will.