• gravitas_deficiency@sh.itjust.works · +195/-2 · edited · 7 months ago

    Short term yes; long term probably not. All the dipshit c-suites pushing the “AI” worker replacement initiatives are going to destroy their workforces and then realize that LLMs can’t actually reliably replace any of the workers they fired. And I love that for management.

    • foggy@lemmy.world · +86/-3 · 7 months ago

      They’re gonna realize the two jobs it can actually replace are HR and the C-suite.

        • sugar_in_your_tea@sh.itjust.works · +19 · 7 months ago

          Yeah, HR gets by because of legal compliance, and execs get by by convincing the board to give them X years, then jumping to the next one.

      • AdamEatsAss@lemmy.world · +33/-11 · 7 months ago

        Lol AI cannot replace either of those jobs. “I’m sorry I can’t help with your time off request but here is a gluten free recipe for a pie that feeds 30 people.”

    • 3volver@lemmy.world · +13/-9 · 7 months ago

      You’re referring to something that is changing and getting better constantly. In the long term LLMs are going to be even better than they are now. It’s ridiculous to think that it won’t be able to replace any of the workers that were fired. LLMs are going to allow 1 person to do the job of multiple people. Will it replace all people? No. But even if it allows 1 person to do the job of 2 people, that’s 50% of the workforce unemployed. This isn’t even mentioning how good robotics have gotten over the past 10 years.

      • JeffKerman1999@sopuli.xyz · +24/-2 · 7 months ago

        You must have one person constantly checking for hallucinations in everything that is generated: how is that going to be faster?

        • Grippler@feddit.dk · +4/-7 · edited · 7 months ago

          Sure, you sort of need that at the moment (not for actually everything, but I get your hyperbole), but you seem to be working under the assumption that LLMs are not going to improve beyond what they are now. The tech is still very much in its infancy, and as it matures that overhead will shrink, until it only takes a few people to manage LLMs that handle the tasks of a much larger workforce.

          • SupraMario@lemmy.world · +8/-1 · 7 months ago

            It’s hard to improve when the data going in is human-generated and the data coming out can’t be error-checked against its own inputs. It’s like trying to solve a math problem with two calculators that both think 2 + 2 = 6 because the data they were given said it’s true.

          • Muehe@lemmy.ml · +3/-1 · 7 months ago

            (not actually everything, but I get your hyperbole)

            How is it hyperbole? All artificial neural networks have “hallucinations”, no matter their size. What’s your magic way of knowing when that happens?

          • JeffKerman1999@sopuli.xyz · +2/-2 · 7 months ago

            LLMs are now being trained on data generated by other LLMs. If you look at the “writing prompt” stuff, 90% of it is machine-generated (or so bad that I assume it’s machine-generated), and that’s the data being bought right now.

      • MeanEYE@lemmy.world · +3/-1 · 7 months ago

        There is a plateau to be hit at some point. How close it is depends on who you ask: some say we are close, others say we are not, but it definitely exists. LLMs suffer, just like other forms of machine learning, from data overload. You simply can’t keep feeding in more data and keep getting better and better results. ChatGPT’s models got famous because the value function for learning involved humans, who helped curate the quality of responses.

    • fidodo@lemmy.world · +4/-1 · 7 months ago

      It can potentially allow 1 worker to do the job of 10. For 9 of those workers, they have been replaced. I don’t think they will care that much for the nuance that they technically weren’t replaced by AI, but by 1 co-worker who is using AI to be more efficient.

      That doesn’t necessarily mean that we won’t have enough jobs any more, because when in human history have we ever become more efficient and said “ok, good enough, let’s just coast now”? We will just increase the ambition and scope of what we will build, which will require more workers working more efficiently.

      But that still really sucks because it’s not going to be the same exact jobs and it will require re-training. These disruptions are becoming more frequent in human history and it is exhausting.

      We still need to spread these gains so we can all do less and also help those whose lives have been disrupted. Unfortunately that doesn’t come for free. When workers got the 40 hour work week it was taken by force.

      • AProfessional@lemmy.world · +9/-2 · 7 months ago

        My colleagues are starting to use AI, and it just makes their code worse and harder to review. I honestly can’t imagine that changing; AI doesn’t actually understand anything.

        • Yggnar@lemmy.world · +5/-3 · 6 months ago

          This comment has similar vibes to a boomer in the 80s saying that the Internet is useless and full of nothing but nerds arguing on forums, and he doesn’t see that changing.

          • AProfessional@lemmy.world · +1 · edited · 6 months ago

            Probably. I’m just not seeing it actually doing any logic or problem solving. It’s a pattern matching machine today. A new technology could certainly happen.

            • fidodo@lemmy.world · +1 · 6 months ago

              Do you know what pattern matching is great for? Finding commonly cited patterns in long debug log messages. LLMs are great for brainstorming during problem solving. They’re basically word-granularity search engines, so they’re great for looking up more niche knowledge that document search engines fail on. If the thing you’re trying to look up doesn’t exist, it will make shit up, so you need to cross-reference everything, but it’s still incredibly helpful. Pattern matching is also great for boilerplate. I use the codium extension, and it comes up with autocomplete suggestions that don’t have much logic but save a good amount of keystrokes.

              I don’t think the foundational tech of LLMs is going to get substantially better, but we will develop programming patterns that make them more robust and reliable.
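
One such pattern is to never trust raw model output: validate it against a schema and retry or bail out on garbage. A minimal sketch of that idea, where `ask_llm` is a purely hypothetical stand-in for a real model call (stubbed here so the example runs):

```python
import json

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in for an actual LLM API call.
    # A real model would return free-form text; this stub always
    # returns well-formed JSON so the example is self-contained.
    return '{"severity": "high", "component": "auth"}'

def classify_log_line(line: str, retries: int = 2):
    """Ask the (stubbed) LLM to classify a log line, but validate the
    answer before trusting it, retrying on malformed output."""
    allowed_severities = {"low", "medium", "high"}
    for _ in range(retries + 1):
        raw = ask_llm(f"Classify this log line as JSON: {line}")
        try:
            result = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output: ask again
        if result.get("severity") in allowed_severities:
            return result  # passed validation, safe to act on
    return None  # give up rather than act on unvalidated output

print(classify_log_line("auth: token expired for user 42"))
```

The point isn’t the specific schema; it’s that the surrounding code, not the model, is what makes the system reliable.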

      • gravitas_deficiency@sh.itjust.works · +1/-1 · 5 months ago

        Short term? Sure.

        Long term? Not a chance that equation works out favorably.

        But then again, c-suites these days only seem to give a shit about short-term implications.

    • MxM111@kbin.social · +6/-17 · 7 months ago

      That’s not what is going to happen. Copilot will simply increase productivity over time, and where before they needed 10 people, gradually, through attrition, they will need only 9, then 8, and so on. That does not mean higher unemployment, though; it means more product.

      • slaacaa@lemmy.world · +3/-1 · edited · 7 months ago

        Businesses want to grow, not stay stable. They might fire a few people in the short term, but in the long term it’s more likely the group of 10 would just do the work of a group of 12-13 with AI, producing higher output for the same money they were getting before, meaning extra profit for the shareholders.

        • MxM111@kbin.social · +4/-6 · 7 months ago

          That’s exactly what I meant by

          That does not mean higher unemployment though, it means more product.

  • Hobbes_Dent@lemmy.world · +65 · 7 months ago

    Microsoft argues that its AI automation will remove the boring bits of jobs instead of replacing jobs entirely.

    Simple question: does the Microsoft HR department agree?

  • Fake4000@lemmy.world · +35 · 7 months ago

    No. But expect to do more work for the same pay since things are slightly easier now with AI.

  • aesthelete@lemmy.world · +23 · 7 months ago

    Meanwhile, I’m at my job trying to get an instance of a machine that can automatically SFTP somewhere as part of a script like it’s 1998 and I need a shell account from my dialup connection.
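
For what it’s worth, scripted non-interactive SFTP is doable with stock OpenSSH via a batch file. A minimal sketch; the host, user, and file paths are made up, and it assumes key-based auth is already set up (the actual `sftp` call is commented out for that reason):

```shell
#!/bin/sh
# Write the batch file: the commands sftp will run non-interactively.
cat > /tmp/upload.batch <<'EOF'
put report.csv /uploads/report.csv
bye
EOF

# BatchMode=yes makes sftp fail instead of prompting for a password,
# which is what you want inside a cron job or CI script.
# (Commented out here because the host is hypothetical.)
# sftp -o BatchMode=yes -b /tmp/upload.batch upload@files.example.com

# Show what would be executed.
cat /tmp/upload.batch
```

Getting the box provisioned and the key approved is, of course, the actual 1998-style part.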

    • bionicjoey@lemmy.ca · +20 · 7 months ago

      Working on legacy systems is nice sometimes. So far removed from the techbro bullshit

    • sugar_in_your_tea@sh.itjust.works · +4 · 7 months ago

      There’s actually someone I work with who’s basically that. Essentially, their job is to communicate across teams to coordinate product releases. They’re not involved in deciding what’s in those releases; they just need to know what’s going in each one and when it’s happening. They also do something with budget approvals.

      Basically, they just look at Jira and spreadsheets, and I don’t think they really make changes to either. We could 100% do without that role, at least from my perspective.

      Then again, we’re not worried about job security here. We have good funding, we’re expanding our team, and my boss is generally against AI. But if I had to suggest someone to let go, it would be that person. It’s a really weird role and they don’t really have any unique skills, but whatever.

  • ahal@lemmy.ca · +15/-1 · 7 months ago

    Microsoft argues that its AI automation will remove the boring bits of jobs instead of replacing jobs entirely.

    This argument is such bullshit. As if Microsoft doesn’t know there exist jobs that are entirely “the boring bits”.

    • Blemgo@lemmy.world · +1 · 6 months ago

      That is very true, especially when it comes to administrative tasks. However, I’d argue that these jobs are less likely to be replaced, as they are born out of a system that favors bureaucracy for its own sake over efficiency. Challenging that system would shift the power dynamics, often toward subordinates, which of course wouldn’t really be accepted by those in leading positions.

  • AutoTL;DR@lemmings.world (bot) · +10/-2 · 7 months ago

    This is the best summary I could come up with:


    Microsoft will soon allow businesses and developers to build AI-powered Copilots that can work like virtual employees and perform tasks automatically.

    Instead of Copilot sitting idle waiting for queries, it will be able to do things like monitor email inboxes and automate a series of tasks or data entry that employees normally have to do manually.

    It’s a big change in the behavior of Copilot in what the industry commonly calls AI agents, or the ability for chatbots to intelligently perform complex tasks autonomously.

    Microsoft’s argument that it only wants to reduce the boring bits of your job sounds idealistic for now, but with the constant fight for AI dominance between tech companies, it feels like we’re increasingly on the verge of more than basic automation.

    You can build Microsoft’s Copilot agents with the ability to flag certain scenarios for humans to review, which will be useful for more complex queries and data.

    We constantly see AI fail on basic text prompts, provide incorrect answers to queries, or add extra fingers to images, so do businesses and consumers really trust it enough to automate tasks in the background?


    The original article contains 842 words, the summary contains 188 words. Saved 78%. I’m a bot and I’m open source!

  • werefreeatlast@lemmy.world · +2/-13 · 7 months ago

    Incoming data stream! This one is from a scientist! Ok Chat-GPT, get ready to learn how to be a scientist!

    Incoming data stream! From a mom… oh shit, Chat-GPT… run! It’s a single mom! Oh hold on! She found someone to love again! Nope, he was just using her for her beauty again. Well, maybe just learn to do all the chores, feed the kids, go to work, get sexually abused, fall asleep, do all the chores and make-up! And figure out if OnlyFans really does work! And more chores.