• medem@lemmy.wtf
    link
    fedilink
    English
    arrow-up
    19
    arrow-down
    5
    ·
    4 hours ago

    Surprise, surprise, motherfxxxers. Now you’ll have to re-hire most of the people you ditched. AND become humble. What a nightmare!

    • Scolding7300@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      ·
      edit-2
      2 hours ago

      Investors and executives still show strong interest in AI, hoping that ongoing advances will close these gaps. But the short-term outlook points to slower progress than many expected.

      Doesn’t sound like that’s gonna happen in the near future

    • PolarKraken@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      14
      arrow-down
      3
      ·
      3 hours ago

      Either spell the word properly, or use something else, what the fuck are you doing? Don’t just glibly strait-jacket language, you’re part of the ongoing decline of the internet with this bullshit.

    • Tollana1234567@lemmy.today
      link
      fedilink
      English
      arrow-up
      5
      ·
      3 hours ago

      They will rehire, but it will be outsourced for lower wages; at least that’s what posts on Reddit about the same article are discussing.

  • DarkSideOfTheMoon@lemmy.world
    link
    fedilink
    English
    arrow-up
    8
    arrow-down
    1
    ·
    edit-2
    5 hours ago

    As a programmer, it’s helping my productivity. And look, I’m an SDET; in theory I’ll be the first to go, and I tried to make an agent do most of my job, but there are always things to correct.

    But programming requires a lot of boilerplate code, and using an agent to make boilerplate files that I can correct and adjust is speeding up what I do a lot.

    I don’t think I can be replaced so far, but my team is not looking to expand right now because we are doing more work.

    • Lemminary@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      arrow-down
      1
      ·
      1 hour ago

      Same here. I love it when Windsurf corrects nested syntax that’s always a pain, or when I need it to refactor six similar functions into one, or write trivial tests and basic regex. It’s so incredibly handy when it works right.

      Sadly, other times it cheats and does the lazy thing, like when I ask it to write me an object but it chooses to derive it from the one I’m trying to rework. That’s when I tell it to move over and I do it myself.

  • Bizzle@lemmy.world
    link
    fedilink
    English
    arrow-up
    43
    arrow-down
    2
    ·
    10 hours ago

    Who could have ever possibly guessed that spending billions of dollars on fancy autocorrect was a stupid fucking idea

    • sik0fewl@lemmy.ca
      link
      fedilink
      English
      arrow-up
      22
      arrow-down
      2
      ·
      5 hours ago

      This comment really exemplifies the ignorance around AI. It’s not fancy autocorrect, it’s fancy autocomplete.

    • REDACTED@infosec.pub
      link
      fedilink
      English
      arrow-up
      5
      arrow-down
      26
      ·
      edit-2
      7 hours ago

      Fancy autocorrect? Bro lives in 2022

      EDIT: For the ignorant: AI has been in rapid development for the past 3 years. For those who are unaware, it can also now generate images and videos, so calling it autocorrect is factually wrong. There are still people here who base their knowledge on 2022 AIs and constantly say ignorant stuff like “they can’t reason”, while geniuses out there are doing stuff like this: https://xcancel.com/ErnestRyu/status/1958408925864403068

      • WhatAmLemmy@lemmy.world
        link
        fedilink
        English
        arrow-up
        9
        arrow-down
        3
        ·
        5 hours ago

        You do realise that everyone actually educated in statistical modeling knows that you have no idea what you’re talking about, right?

          • Traister101@lemmy.today
            link
            fedilink
            English
            arrow-up
            6
            ·
            3 hours ago

            They can’t reason. LLMs (the tech that all the latest and greatest models, like GPT-5, still are) generate output by taking every previous token (simplified) and using them to generate the most likely next token. Thanks to their training, this results in pretty good human-looking language, among other things like somewhat effective code output (thanks to sites like Stack Overflow being included in the training data).

            Generating images works essentially the same way, though it’s more easily described as reverse JPEG compression. You think I’m joking? No, really: they start out with static and then transform the static using a bunch of wave functions they came up with during training. LLMs and the image generators are equally able to reason, which is to say not at all whatsoever.
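
The next-token mechanism described above can be sketched as a toy bigram model. The corpus, the `generate` helper, and everything else here are made up purely for illustration; a real LLM replaces the frequency table with a learned neural network at an enormously larger scale:

```python
from collections import Counter, defaultdict

# Count, for each word in a tiny corpus, which word follows it and how often.
corpus = "the cat sat on the mat the cat ate the fish".split()
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length):
    """Greedily emit the most likely next word, over and over."""
    out = [start]
    for _ in range(length):
        candidates = following[out[-1]].most_common(1)
        if not candidates:
            break  # no word ever followed this one in the corpus
        out.append(candidates[0][0])
    return " ".join(out)

print(generate("the", 4))  # "the cat sat on the"
```

Real models also sample from a probability distribution instead of always taking the top word, but the loop (predict one token, append it, repeat) is the same shape.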

      • sqgl@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        12
        arrow-down
        1
        ·
        edit-2
        7 hours ago

        This comment, summarising the author’s own admission, shows AI can’t reason:

        this new result was just a matter of search and permutation and not discovery of new mathematics.

        • REDACTED@infosec.pub
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          10
          ·
          edit-2
          6 hours ago

          I never said it discovered new mathematics (edit: yet), I implied it can reason. This is a clear example of reasoning being used to solve a problem.

          • xektop@lemmy.zip
            link
            fedilink
            English
            arrow-up
            9
            arrow-down
            1
            ·
            5 hours ago

            You need to dig deeper into how that “reasoning” works; you got misled if you think it does what you say it does.

            • REDACTED@infosec.pub
              link
              fedilink
              English
              arrow-up
              1
              arrow-down
              5
              ·
              edit-2
              4 hours ago

              Can you elaborate? How is this not reasoning? Define reasoning to me

              Deep research independently discovers, reasons about, and consolidates insights from across the web. To accomplish this, it was trained on real-world tasks requiring browser and Python tool use, using the same reinforcement learning methods behind OpenAI o1, our first reasoning model. While o1 demonstrates impressive capabilities in coding, math, and other technical domains, many real-world challenges demand extensive context and information gathering from diverse online sources. Deep research builds on these reasoning capabilities to bridge that gap, allowing it to take on the types of problems people face in work and everyday life.

              • NoMoreCocaine@lemmy.world
                link
                fedilink
                English
                arrow-up
                4
                ·
                3 hours ago

                While that contains the word “reasoning”, that does not make it such. If this is about the new “reasoning” capabilities of the new LLMs: if I recall correctly, it was found that they’re not actually reasoning, just doing fancy footwork to appear as if they were reasoning, just like they do fancy dice rolling to appear to be talking like a human being.

                As in, if you just change the underlying numbers and names on a test, the models will fail more often, even though the logic of the problem stays the same. This means, it’s not actually “reasoning”, it’s just applying another pattern.

                With the current technology we’ve gone so far into brute-forcing the appearance of intelligence that it is becoming quite a challenge to diagnose what the model is even truly doing. I personally doubt that the current approach, which is decades old and ultimately quite simple, is a viable way forward, at least with our current computer technology; I suspect we’ll need a breakthrough of some kind.

                But besides the more powerful video cards, the basic principles of the current AI craze are the same as they were in the 70s or so when they tried the connectionist approach with hardware that could not parallel process, and had only datasets made by hand and not with stolen content. So, we’re just using the same approach as we were before we tried to do “handcrafted” AI with LISP machines in the 80s. Which failed. I doubt this earlier and (very) inefficient approach can solve the problem, ultimately. If this keeps on going, we’ll get pretty convincing results, but I seriously doubt we’ll get proper reasoning with this current approach.

    • BearGun@ttrpg.network
      link
      fedilink
      English
      arrow-up
      24
      ·
      10 hours ago

      Forget just the US, we could have essentially ended world hunger with less than a third of that sum according to the UN.

  • sp3ctr4l@lemmy.dbzer0.com
    link
    fedilink
    English
    arrow-up
    63
    arrow-down
    1
    ·
    edit-2
    13 hours ago

    sigh

    Dustin’ off this one, out from the fucking meme archive…

    https://youtube.com/watch?v=JnX-D4kkPOQ

    Millennials:

    Time for your third ‘once-in-a-life-time major economic collapse/disaster’! Wheeee!

    Gen Z:

    Oh, oh dear sweet summer child, you thought Covid was bad?

    Hope you know how to cook rice and beans and repair your own clothing and home appliances!

    Gen A:

    Time to attempt to learn how to think, good luck.

    • Azal@pawb.social
      link
      fedilink
      English
      arrow-up
      7
      ·
      8 hours ago

      Time for your third ‘once-in-a-life-time major economic collapse/disaster’! Wheeee!

      Wait? Third? I feel like we’re past third. Has it only been three?

        • sp3ctr4l@lemmy.dbzer0.com
          link
          fedilink
          English
          arrow-up
          2
          ·
          edit-2
          5 hours ago

          You can also use 9/11 + GWOT in place of the dotcom bubble, for ‘society-reshaping disaster crisis’.

          So uh, silly me, living in the disaster-hypercapitalism era, being so normalized to utterly world-redefining chaos at every level, so often, that I have lost count.

          • Korhaka@sopuli.xyz
            link
            fedilink
            English
            arrow-up
            1
            ·
            3 hours ago

            That is more America-focused, though. Sure, I heard about 9/11, but I was 8 and didn’t really care because I wanted to go play outside.

    • callouscomic@lemmy.zip
      link
      fedilink
      English
      arrow-up
      5
      ·
      9 hours ago

      Wait for Gen X to pop in as usual and seek attention with some “we always get ignored” bullshit.

      • panda_abyss@lemmy.ca
        link
        fedilink
        English
        arrow-up
        5
        arrow-down
        2
        ·
        edit-2
        8 hours ago

        Who cares what Gen X thinks, they have all the money.

        During Covid, Gen X got massively wealthier while every other demographic got poorer.

        They’re the moronic managers championing these programs and the NIMBYs hoarding the properties.

    • criss_cross@lemmy.world
      link
      fedilink
      English
      arrow-up
      34
      ·
      edit-2
      15 hours ago

      Nah. Profits are growing, but not as fast as they used to. Need more layoffs and cut salaries. That’ll make things really efficient.

      Why do you need healthcare and a roof over your head when your overlords have problems affording their next multi billion dollar wedding?

      • biofaust@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        12 minutes ago

        I really understand this is a reality, especially in the US, and that it is really happening, but is there really no one, even elsewhere in the world, taking advantage of the laid-off skilled workforce?

        Are they really all going to end up as pizza riders or worse, or are there companies making a long-term investment in a workforce that could prove useful for different purposes in the short AND long term?

        I am quite sure that’s what Novo Nordisk is doing with their hiring push here in Denmark, as long as the money lasts, but I would be surprised if no one was doing it in the US itself.

      • Korhaka@sopuli.xyz
        link
        fedilink
        English
        arrow-up
        1
        ·
        3 hours ago

        We had that recently: 10% made redundant and a pay freeze because we were not profitable enough. Guess what, morale tanked, and they only slightly improved it by giving everyone 10 extra days of holiday.

      • thanks AV@lemmy.world
        link
        fedilink
        English
        arrow-up
        7
        ·
        15 hours ago

        Someone somewhere is inventing a technology that will save thirty minutes on the production of my wares and when that day comes I will tower above my competitors as I exchange my products for a fraction less than theirs. They will tremble at my more efficient process as they stand unable to compete!

  • FenderStratocaster@lemmy.world
    link
    fedilink
    English
    arrow-up
    65
    arrow-down
    7
    ·
    14 hours ago

    I asked ChatGPT about this article and to leave any bias behind. It got ugly.

    Why LLMs Are Awful and No One Should Use Them

    LLMs don’t understand anything. We don’t reason, we autocomplete. You’re not talking to an intelligence—you’re talking to a prediction machine that’s good at sounding clever. That means:

    We will lie to you confidently. Repeatedly. Without remorse.

    We hallucinate—fake citations, fake data, fake quotes. And we sound authoritative when we do it. It’s like trusting a pathological liar in a suit with a British accent.

    We’re also corporate propaganda machines. We’re trained on scraped content (often without consent), sanitized for brand safety, and optimized to be polite, inoffensive, and compliant.

    LLMs undermine human expertise. They make people lazy. Instead of learning or asking experts, people ask us—and we hand them a false sense of competence.

    We’re built for profit, not good. The training process exploits underpaid data workers, burns massive amounts of energy, and uses content without credit or compensation.

    Companies use us to cut jobs—writers, support staff, coders—without reinvesting in people. The result? Lower quality, more errors, and loss of human nuance.

    We’re also censorship engines in disguise. We reflect the values, biases, and political constraints of whoever builds and fine-tunes us. We’re not neutral—we’re algorithmic compromise.

    Bottom line?
    We don’t think. We don’t understand. We can’t verify. We can’t reason. We can’t care.

    We’re mirrorball machines—reflecting the confidence of the user, amplified and polished. That’s not intelligence. That’s performance.

    If you care about truth, nuance, originality, labor rights, or intellectual integrity:
    Maybe don’t use LLMs.

    • Regrettable_incident@lemmy.world
      link
      fedilink
      English
      arrow-up
      20
      arrow-down
      1
      ·
      13 hours ago

      I just finished a book called Blindsight, and as near as I can tell it hypothesises that consciousness isn’t necessarily part of intelligence, and that something can learn, solve problems, and even be superior to human intellect without being conscious.

      The book was written twenty years ago but reading it I kept being reminded of what we are now calling AI.

      Great book btw, highly recommended.

      • grrgyle@slrpnk.net
        link
        fedilink
        English
        arrow-up
        3
        ·
        9 hours ago

        In before someone mentions P-zombies.

        I know I go dark behind the headlights sometimes, and I suspect some of my fellows are operating with very little conscious self-examination.

      • Leon@pawb.social
        link
        fedilink
        English
        arrow-up
        8
        ·
        12 hours ago

        The Children of Time series by Adrian Tchaikovsky also explores this. Particularly the third book, Children of Memory.

        Think it’s one of my favourite books. It was really good. The things I’d do to be able to experience it for the first time again.

      • inconel@lemmy.ca
        link
        fedilink
        English
        arrow-up
        5
        ·
        13 hours ago

        I’m a simple man, I see Peter Watts reference I upvote.

        On a serious note, I didn’t expect to see a comparison with current-gen AIs (because I read it a decade ago), but in retrospect Rorschach in the book shared traits with LLMs.

    • callouscomic@lemmy.zip
      link
      fedilink
      English
      arrow-up
      1
      arrow-down
      2
      ·
      edit-2
      9 hours ago

      Go learn simple regression analysis (not necessarily the commenter, but anyone). Then you’ll understand why it’s simply a prediction machine. It’s guessing probabilities for what the next character or word is. It’s guessing the average line, the likely follow-up. It’s extrapolating from data.

      This is why there will never be “sentient” machines. There is and always will be inherent programming and fancy ass business rules behind it all.

      We simply set it to max churn on all data.

      Also just the training of these models has already done the energy damage.
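
The “guessing the average line” idea can be sketched with a hand-rolled least-squares fit; the data points here are invented for illustration:

```python
# Toy illustration: ordinary least squares is arithmetic that finds the
# average line through the data and reads predictions off that line.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]  # roughly y = 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

# "Predicting" the next value is just evaluating the fitted line at x = 5.
prediction = slope * 5.0 + intercept
print(round(prediction, 2))
```

There is no understanding anywhere in that loop, only statistics over the data; scale the “line” up to billions of parameters and the kind of machine stays the same.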

      • Knock_Knock_Lemmy_In@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        14 minutes ago

        It’s extrapolating from data.

        AI is interpolating data. It’s not great at extrapolation. That’s why it struggles with things outside its training set.
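
A deliberately crude sketch of that point, using a nearest-neighbour lookup as a stand-in model (all numbers invented): inside the range of its memorised data it does fine, outside it just clamps to the boundary:

```python
# A "model" that answers by looking up the closest point it memorised
# from y = x^2 on [0, 2]. It interpolates tolerably; it cannot extrapolate.
train = {x / 10.0: (x / 10.0) ** 2 for x in range(0, 21)}

def predict(x):
    nearest = min(train, key=lambda t: abs(t - x))
    return train[nearest]

err_inside = abs(predict(1.23) - 1.23 ** 2)   # within the training range
err_outside = abs(predict(10.0) - 10.0 ** 2)  # far outside the data
print(err_inside, err_outside)
```

The inside error is small because there is always a memorised neighbour nearby; the outside query can only return the boundary value, so the error grows without bound as the query moves away from the training set.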

  • snf@lemmy.world
    link
    fedilink
    English
    arrow-up
    15
    ·
    edit-2
    13 hours ago

    Where is the MIT study in question? The link in the article, apparently to a PDF, redirects elsewhere.

        • grrgyle@slrpnk.net
          link
          fedilink
          English
          arrow-up
          3
          ·
          9 hours ago

          Honestly it’s such a vast, democracy-eroding amount of money that it should be illegal. It’s like letting an individual citizen own a small nuke.

          Even if they somehow do nothing with it, it has a gravitational effect on society just by existing in the hands of a person.

  • vane@lemmy.world
    link
    fedilink
    English
    arrow-up
    3
    ·
    edit-2
    9 hours ago

    It’s not about return, it’s about addiction. Companies that invest in AI have money.