In the whirlwind of technological advancements, artificial intelligence (AI) often becomes the scapegoat for broader societal issues. It’s an easy target, a non-human entity that we can blame for job displacement, privacy concerns, and even ethical dilemmas. However, this perspective is not only simplistic but also misdirected.

The crux of the matter isn’t AI itself but the economic system under which it operates: capitalism. Capitalism dictates the motives behind AI’s development and deployment. Under this system, AI is primarily used to maximize profits, often at the expense of the workforce and ethical considerations. This profit-driven motive can lead to job losses as companies seek to cut costs, and it can prioritize corporate interests over privacy and fairness.

So, why should we shift our anger from AI to capitalism? Because AI, as a tool, has immense potential to improve lives, solve complex problems, and create new opportunities. It’s the framework of capitalism, with its inherent drive for profit over people, that often warps these potentials into societal challenges.

By focusing our frustrations on capitalism, we advocate for a change in the system that governs AI’s application. We open up a dialogue about how we can harness AI ethically and equitably, ensuring that its benefits are widely distributed rather than concentrated in the hands of a few. We can push for regulations that protect workers, maintain privacy, and ensure AI is used for the public good.

In conclusion, AI is not the enemy; unchecked capitalism is. It’s time we recognize that our anger should not be at the technology that could pave the way for a better future, but at the economic system that shapes how this technology is used.

  • JackGreenEarth@lemm.ee · 9 months ago

    This is sadly an actually unpopular opinion, on Lemmy as well as in the outside world. A rare case of a post in this community where I can upvote both because it’s unpopular and because I agree with it.

    • ClamDrinker@lemmy.world · 9 months ago

      Depends on where you live, I suppose. Irrational AI hate is something I only really encounter online. Then again, my country has pretty good worker protections, so there’s less reason to be afraid of AI.

    • nilloc@discuss.tchncs.de · 9 months ago

      I don’t know, there’s plenty of anti-billionaire sentiment, fuck_cars is basically anti-capitalist, and most of the environmentalists get to the same conclusion pretty quickly too.

      The realists (and cynics in some cases) just know that it’s going to take a huge process to shift us away. I’m a realist and am opting for a progressive takeover that leads to taxing billionaires, carbon/pollution, and dangerous vehicles (among other clear hazards) out of existence.

      But when I’m feeling cynical, I worry that it’s going to take a war, and I hope for my son’s sake that doesn’t happen.

  • 3volver@lemmy.world · 9 months ago

    It’s pretty evident that AI is incompatible with capitalism, but most people direct their anger at AI. Late-stage capitalism is the problem, not automation. I upvoted because I think this is actually an unpopular opinion factoring in the world population rather than just Lemmy.

  • SomeGuy69@lemmy.world · 9 months ago

    This post is written by an AI. Lmao

    “Are you scared of an AI world? You’re already in it.”

  • echo64@lemmy.world · 9 months ago

    AI outside of capitalism is still incredibly dangerous. It’s all the biases that created the world we have today, but on steroids. Take all the injustices against minority peoples today and scale them up to however much compute you have.

    It’s completely naive to think that AI will solve the world’s problems if that pesky capitalism would get out of the way. But this website is full of tech bros, so it’s impossible to get past that.

    Also, being angry at capitalism doesn’t pay the rent. I can’t boycott capitalism. I can use my small power under capitalism to boycott your shitty AI.

    • kromem@lemmy.world · 9 months ago

      Mhmm. Here’s the uncensored anti-woke AI Elon tried to create answering Twitter Blue subscribers’ questions:

      Yeah, so horribly biased and terrible…

    • 4am@lemm.ee · 9 months ago

      it’s completely naive to think that AI will solve the world’s problems

      It’s also a complete strawman to exaggerate what most proponents think of AI just because you saw some crypto bro Elon dickrider spouting off propaganda on Reddit somewhere.

      The only people who want to use “AI” to “change the world” are the billionaires who think that they can use it to shrink or eliminate their workforce while gaining efficiency and control. That’s it. The capitalists are the problem, and you don’t have any power under them. None.

  • kometes@lemmy.world · 9 months ago

    Maybe work on proving “AI” is actually a technological advancement instead of an overhyped plagiarism machine first.

      • assassin_aragorn@lemmy.world · 9 months ago

        This is like asking someone to prove God doesn’t exist. The burden of proof is on you to show how humans are effectively overhyped plagiarists. You’re the one making the claim.

        • agamemnonymous@sh.itjust.works · 9 months ago

          Maybe work on proving “AI” is actually a technological advancement instead of an overhyped plagiarism machine first.

          This statement has the implicit claim: “AI” is actually an overhyped plagiarism machine instead of a technological advancement. The burden of proof is on them to show this claim. Additionally, this statement contains the implicit claims that “AI” is not in fact intelligence, and that real intelligence is not an overhyped plagiarism machine. The burden of proof lies with them for these claims as well. My question was merely to highlight this existing burden.

    • kromem@lemmy.world · 9 months ago

      Furthermore, simple probability calculations indicate that GPT-4’s reasonable performance on k=5 is suggestive of going beyond “stochastic parrot” behavior (Bender et al., 2021), i.e., it combines skills in ways that it had not seen during training.

      Do these networks just memorize a collection of surface statistics, or do they rely on internal representations of the process that generates the sequences they see? We investigate this question by applying a variant of the GPT model to the task of predicting legal moves in a simple board game, Othello. Although the network has no a priori knowledge of the game or its rules, we uncover evidence of an emergent nonlinear internal representation of the board state.

      So there is already research showing that GPT LLMs model aspects of their training data at much deeper levels of abstraction than surface statistics of words, and research showing that the most advanced models generate novel outputs, distinct from anything in the training data, by combining many different abstract concepts learned during training.

      Like, have you actually read any of the ongoing research in the field at all? Or just articles written by embittered people who generally misunderstand the technology? (For example, if you ever see someone refer to these models as Markov chains, that person has no idea what they’re talking about: the key feature of the transformer model is the self-attention mechanism, which conditions on the entire context and thus negates the Markov property that characterizes Markov chains in the first place.)
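      The Markov-chain point can be made concrete. Below is a toy sketch of my own (every function name and number is made up for illustration, not taken from any paper or library): an order-1 Markov chain’s next-token distribution depends only on the current token, while a self-attention layer weights the entire context, so changing any earlier token changes the output.

```python
import math

def markov_next(transitions, current):
    """Order-1 Markov chain: the distribution over next tokens depends
    ONLY on the current token -- earlier history is irrelevant by definition."""
    return transitions[current]

def attention_scores(query, keys):
    """Toy scaled dot-product attention over the full context: every past
    position gets a weight, so changing ANY earlier token changes the result."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    m = max(scores)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Markov: after "the", the distribution is fixed no matter what came before.
transitions = {"the": {"cat": 0.5, "dog": 0.5}}
assert markov_next(transitions, "the") == {"cat": 0.5, "dog": 0.5}

# Attention: same query, different history -> different weights.
w1 = attention_scores([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
w2 = attention_scores([1.0, 0.0], [[1.0, 0.0], [1.0, 0.0]])
assert w1 != w2
```

      (To be fair, a transformer with a finite context window can technically be framed as a very high-order Markov process; the practical point is that it conditions on far more than the previous token.)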

    • A_Very_Big_Fan@lemmy.world · 9 months ago

      instead of an overhyped plagiarism machine first.

      If I paint an Eiffel Tower from memory, am I plagiarizing?

      If it’s not plagiarism when humans do it, it’s not plagiarism when a machine does it.

      • assassin_aragorn@lemmy.world · 9 months ago

        Of course it is. A machine is not a human.

        If you want to make this argument, then AI companies should be required to treat their AI models like employees. Paid for 40 hours a week of work, extra for overtime.

        If it’s human to have “memory” that isn’t subject to plagiarism, then it’s human enough to be paid hourly.

        • A_Very_Big_Fan@lemmy.world · 9 months ago

          I don’t need to be paid to make a painting, I’ll just do it for fun or because a good friend wanted it.

          Why does a machine doing something that I do for fun constitute plagiarism?

  • xigoi@lemmy.sdf.org · 9 months ago

    As much as I hate AI run by megacorporations, I don’t think AI run by a communist government would be any better.

    • A_Very_Big_Fan@lemmy.world · 9 months ago

      We laughed at crypto bros because it was a blatantly bad investment, built on technology that was over-hyped in terms of its feasibility as a currency.

      AI actually has practical uses, and the vast majority of us are getting all of the benefits for no cost.

        • A_Very_Big_Fan@lemmy.world · 9 months ago

          Nuclear power is a hell of a lot better for us and the environment than the fossil fuels that Microsoft gets their electricity from today.

          AI lets me generate textures and sprites for games in a matter of seconds instead of doing it by hand. Sometimes it can even fix my code in ways I don’t even understand. When I’m talking about religion with my friends I can pinpoint several instances of whatever I want to find from any number of holy texts with a handful of key presses instead of years worth of reading or hours of googling. What consequences am I supposed to be feeling?

          • xantoxis@lemmy.world · 9 months ago

            The point is not that nuclear is bad; the point is that Microsoft is projecting that the power requirements will only be satisfied by extreme measures.

            Sometimes it can even fix my code in ways I don’t even understand

            It’s kind of incredible that you think this is a good thing.

            • A_Very_Big_Fan@lemmy.world · 9 months ago

              Every article I can find says it’s about climate harm reduction, not about an increase in the power they will need. Citation needed.

              It’s kind of incredible that you think this is a good thing.

              Right, because grabbing code from Stack Exchange is so much more virtuous. Welcome to programming, you must be new here.

              Ask literally any professional and they’ll tell you you’re gonna learn a lot more by looking at what others have done than you will banging your head against the wall hoping for a eureka moment.

          • hatedbad@lemmy.sdf.org · 9 months ago

            please just stop writing code. just walk away and find something else to do.

            “ai helps me debate religion with my friends” is some real bleak shit

            • A_Very_Big_Fan@lemmy.world · 9 months ago

              My friends are exploring other religions and we enjoy learning about them, and I’m not shocked that the mere idea would send an AI hysteric like you into a seethe-fest

              If you just wanna piss and moan about software doing its job, go somewhere else.

    • HerrBeter@lemmy.world · 9 months ago

      If you wear down an argument or discussion by bullying and using fallacies, you can never be sure you were right.

    • nottelling@lemmy.world · 9 months ago

      It wasn’t laughing and name-calling that made crypto into the joke it always was. It was the grift finally collapsing on top of them. Same cycle will run with LLM AI.

  • BilboBargains@lemmy.world · 9 months ago

    Feels like a bit of a straw man argument. I don’t meet people who accuse the actual AI itself of malice. That would be like getting mad at an actor who portrays a villain. I know these people exist and they are stupid, but most people understand that the A in AI stands for artificial, and that means scientists and engineers make these things while capitalists provide the funding and own them.

    The interesting thing about AI is that once it becomes self-aware, we can legitimately ascribe agency to its actions in the way we do with people. We can criminalise it and punish it for decisions that it has made. We could use artificial lawyers to prosecute artificial doctors that perform botched surgeries on artificial warehouse workers.

  • BothsidesistFraud@lemmy.world · 9 months ago

    Please explain how, in a non-capitalist world, AI would never be used for the sorts of things you dislike AI being used for, such as job elimination. You think nobody will realize that it can be used to produce lots of art, for example?

    In this non-capitalist world you’re thinking of, would we have any automation? Like do we have harvester combines, or is it still 35 people breaking their backs to cut and thresh an acre of wheat?

    • Cowbee [he/him]@lemmy.ml · 9 months ago

      If the means of production are collectively owned, and thus directed towards the good of society, job elimination isn’t as much of a problem.

      Socialists are huge proponents of automation, because instead of automation being used to cut jobs for profit, it can be used to eliminate dirty and hard jobs.

    • Wereduck@lemmy.blahaj.zone · 9 months ago

      Job elimination is a problem in capitalism because workers need jobs to survive. In a socialist society, job elimination can be a good thing, as it allows us to either increase access to resources or reduce how much time people need to work without dispossessing the people whose jobs were eliminated.

      The difference is that, in capitalism, workers only survive by proving their usefulness to capitalists making money, so automation is a threat to worker bargaining power. If the means of production were socially owned (through, for example, government-run utilities or worker co-ops), worker bargaining power would instead come through a vote or through ownership. It would then be possible to distribute the spoils of automation by default, rather than concentrate them in the hands of capitalists.