• buttnugget@lemmy.world · +5 · 12 hours ago

      Perfect comment. This is what’s so scary about the current state of affairs, at least to me. They don’t give a fuck if it doesn’t affect their bottom line. These are the same people who got rid of copy editors and turned journalism into content mill churn.

      • BlameTheAntifa@lemmy.world · +4 · 11 hours ago

        Exactly. Quality doesn’t matter. They don’t have to compete against quality, so they will enshittify as much as they can get away with if it means costs go down more than income does.

      • BlameTheAntifa@lemmy.world · +2 · 24 hours ago

        I am dreading the AI bubble bursting, because it will be the start of the Second Great Depression. There is no way to curb it at this point.

        • JargonWagon@lemmy.world · +1 · 23 hours ago

          Same. All we can do is vote with our wallets and doomsday prep, honestly. Get ready to scrimp and save, and live paycheck to paycheck.

  • Twipped@l.twipped.social · +12 · 1 day ago

    Given that LLMs are always at least several months behind reality in their training, and the data they’re training on is content produced by real journalists, I really don’t see how they could EVER act as journalists. I’m not sure one could even interview someone reliably.

    • AeonFelis@lemmy.world · +13 · 1 day ago

      • “Why did you take that bribe?”
      • “I did not take any bribes!”
      • “You are absolutely correct! You did not take any bribes”
    • utopiah@lemmy.world · +3 · 1 day ago

      AFAIK that’s what RAG (https://en.wikipedia.org/wiki/Retrieval-augmented_generation) is all about: the training dataset is what it is, but you extend it with your own data, which can be brand new and even stay private.
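
      A minimal sketch of that idea, with made-up documents and naive keyword matching standing in for a real embedding index (no particular library’s API is assumed here):

      ```python
      # Toy RAG loop: retrieve your own (possibly private, brand-new) documents,
      # then stuff the best match into the prompt that goes to the model.

      DOCS = {
          "earnings-2025.txt": "Q3 revenue rose 12 percent on cloud sales.",
          "memo-internal.txt": "The survey closes on Friday and results stay private.",
      }

      def retrieve(question, k=1):
          """Rank documents by naive keyword overlap (a real system would use embeddings)."""
          words = set(question.lower().split())
          ranked = sorted(DOCS.values(),
                          key=lambda text: len(words & set(text.lower().split())),
                          reverse=True)
          return ranked[:k]

      def build_prompt(question):
          context = "\n".join(retrieve(question))
          return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

      print(build_prompt("How did revenue change in Q3?"))
      # The resulting prompt is what gets sent to the LLM, so the answer can draw on
      # data that was never in the training set.
      ```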

      That being said… LLMs are still language models: they compute prediction statistics for the next word (or token). In practice, despite names like “hallucinations” or “reasoning” or labels like “thinking”, they are not doing any reasoning and have no logic for, e.g., fact-checking. I would expect journalists to pay a LOT of attention to distinguishing between fact and fiction, between speculation and historical record, and between propaganda, popular ideas, and actual events.

      So… even if we were to magically solve the outdated-dataset problem, that still doesn’t address the linchpin: these models are NOT thinking.
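
      To illustrate the “just predicting the next token” point, here’s a toy sketch with a made-up probability table. Real models learn these statistics over huge vocabularies and contexts, but the loop is the same shape, and nothing in it checks whether the output is true:

      ```python
      import random

      # Made-up next-token probabilities; a real LLM learns these from training text.
      NEXT_TOKEN_PROBS = {
          ("the", "mayor"): {"said": 0.5, "denied": 0.3, "resigned": 0.2},
          ("mayor", "said"): {"the": 0.6, "that": 0.4},
      }

      def generate(tokens, steps=3):
          """Extend the sequence by sampling a likely next token from the table."""
          for _ in range(steps):
              probs = NEXT_TOKEN_PROBS.get(tuple(tokens[-2:]))
              if probs is None:
                  break
              choices, weights = zip(*probs.items())
              tokens.append(random.choices(choices, weights=weights)[0])
          return tokens

      print(" ".join(generate(["the", "mayor"])))
      # Whatever comes out is just a statistically plausible continuation;
      # nothing in the loop fact-checks it.
      ```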

  • LemmyKnowsBest@lemmy.world · +4 · 1 day ago

    He’s feeling pretty good about how well he did building the AI?

    Or he’s feeling pretty good about his job security as a real human news journalist because the AI was awful?

  • wizardbeard@lemmy.dbzer0.com · +85/−1 · 2 days ago

    My workplace has finally succumbed to the whole “we need to use AI to stay competitive” bullshit, and while my boss and my team aren’t happy about it, he made it clear that execs would be looking for teams to have stories about how it’s helped them work faster.

    So, into the slop mines I went. Microsoft Copilot is the slop generator approved by our legal and infosec teams. I figured I’d give it something easy: just pull me info on Exchange Online, Microsoft’s corporate email system in Azure. Most of this stuff I’m already familiar with, so I’m just trying to see if it saves any time looking through Microsoft’s documentation websites for specific details they don’t make obvious, where you have to combine info from multiple pages.

    I shit you not, every single response has had an inaccuracy/issue. Many of them required follow-up prompts because it gave me shit for Exchange Server instead of Online, despite how I set up the equivalent of the system prompt. I’m asking some pretty technical stuff, but it should absolutely be there in the training data.

    Having a way to search the internet using natural language is useful, but that’s not really what these things are. Natural language parsing is an amazing tool. I would love to see it stand on its own.

    But the slop generation for responses falls flat, because it’s trying to be everything for everyone. It doesn’t know anything, so it can’t reliably discern the difference between Exchange Online and Exchange Server (two different, but interoperable, systems with critical differences in functionality).

    And all of that ignores that it’s built on widespread content theft, the obscene resource usage, and the fact that despite just how much shit is burning to make it work, the end result just isn’t impressive.

    TL;DR: it’s not a threat to technology-related “knowledge workers” either.

    • plenipotentprotogod@lemmy.world · +34 · 2 days ago

      Your thoughts are very similar to mine. The usefulness of machine learning for bridging the gap between the endlessly messy real world and the strictly regimented digital one can’t be overstated, but throwing all your problems into an LLM chatbot is never going to yield good results. Case in point is the AlphaFold project, which used AI to basically solve one of the hardest problems in modern biochemistry. They didn’t do it by asking ChatGPT to “please fold this protein for me”; they did it by assembling a multi-disciplinary team with a deep technical understanding of the problem and building a fully custom machine learning system, designed from the ground up for the sole purpose of predicting protein structures. That’s applied AI. All these so-called AI startups whose product is basically a coat of paint and a custom system prompt on the ChatGPT API are nothing more than app developers chasing the latest trend.

      Incidentally, I had a very similar experience to your “Exchange Server / Exchange Online” problem. I was asked to make updates to an old VBS code base, and VBS is a language that I have neither experience with nor interest in learning, so I was using a chatbot to try to save a few minutes on some super simple beginner-level questions. It kept confidently spitting out answers for VBA code, which is similar to, but very much not the same as VBS.

      • lichtmetzger@discuss.tchncs.de · +11 · 2 days ago (edited)

        It kept confidently spitting out answers for VBA code, which is similar to, but very much not the same as VBS.

        What I get from this is that Microsoft is absolutely terrible at naming products. Even a human will get confused about Exchange Online and Exchange Server being two different products. So it’s no wonder the LLM can’t handle it either.

        For fuck’s sake, they even renamed the Remote Desktop app to “Windows App”. Have fun finding any help for that one.

      • WFH@lemmy.zip · +18 · 2 days ago

        This.

        We have a brilliant data scientist on our team who spent the last 6 months creating and improving an ML model that’s now very accurate and efficient.

        Of course, our company went all in on GenAI and hired a full team in a few months. Those scammers are still trying to find a way to make themselves useful (read: justify their inflated salaries), so of course, being so high on their own farts, they decided they had a genius idea that nobody had had before and spent a few weeks trying to vibe-code a similar model without even asking around whether it had already been done. And when presented with the facts, they doubled down on “yeah but if we can access EVERYTHING, we can make the model explainable!!1!”. Ah yes, the part that’s already been developed and tested and is going into production, like, right now.

        What’s depressing is that AI IS EVERYTHING now, so they basically became the most powerful entity in the company almost instantly, and we can’t do anything to prevent them from constantly wreaking havoc on every team’s roadmap at once with impossible and extremely urgent demands, because they answer directly to the CFO.

        • plenipotentprotogod@lemmy.world · +8 · 2 days ago

          The AI hype has become so stifling that I can certainly understand the knee-jerk reaction that many are having, but at the end of the day it’s just math. It’s a tool which, like all tools, can be used for good or bad. If you’re interested, I’d highly recommend looking into the story of AlphaFold. You can find some really good long form articles that have been written about it. It’s not only a truly impressive technical achievement, but also one of the few technological breakthroughs in recent memory that is (at least as far as I can tell) truly, indisputably, good for humanity.

      • BarqsHasBite@lemmy.world · +5 · 2 days ago (edited)

        For some of them, adding an addendum sentence works: “Not VBA, VBS”. It works for other things they confuse/autocorrect; not sure about coding.

    • Lost_My_Mind@lemmy.world · +5 · 2 days ago

      Google has natural phrasing searchability.

      I did a Google search for “Who’s that one celebrity who’s a smug bastard?” and it gave results for George Clooney. So it was even right about George Clooney… the smug bastard!

      • Elvith Ma'for@feddit.org · +4 · 2 days ago

        I’m used to searching with a bunch of keywords. But this was true even before all this AI bullshit. E.g., a few years ago I was editing a video and couldn’t remember what that one song I wanted to put in was called, so I just entered “What’s the name of the song that plays when there’s a sped-up funny montage of something in a video?”. It gave me the Benny Hill theme as the very first result, which was exactly what I was looking for…

    • vala@lemmy.dbzer0.com · +4 · 2 days ago (edited)

      Small technical point: the documentation lookup likely uses RAG, so those details aren’t actually coming from the training data itself.

  • skisnow@lemmy.ca · +84 · 2 days ago (edited)

    Has anyone told him he just needs to be better at prompting yet? Or maybe he just needs to wait another 6 months for the next model which will definitely work this time?

  • Phoenixz@lemmy.ca · +14/−3 · 2 days ago

    Well yeah but…

    People no longer care about good journalism… they care about being told a story, whatever the story is, and they no longer care about it being factual or even remotely true. They love it if the story is incendiary; the worse it is, the more it riles them up.

    All of that can be done with an AI journalist, so your job is done for.

  • Grimy@lemmy.world · +5/−12 · 2 days ago (edited)

    This reminds me of the guy in my town. He used to work on a factory line and was worried about losing his job to automation, so he decided to try to build his own packaging robot.

    He failed, can you believe it? And automation never happened either, technology never advanced, and he is still at that factory hand-packing boxes 30 years later!!!

    Just goes to show that if you don’t have the know-how, it’s really easy to show that it can’t be done and you instantly keep your job forever.

    • BakerBagel@midwest.social · +11/−1 · 2 days ago

      Middle managers like to pitch automated assembly options, and new factory construction will always include them. But the moment upper management looks at the repair and maintenance costs on those automated systems, they go right back to paying someone $24 an hour to do that.

      • Grimy@lemmy.world · +2/−6 · 2 days ago (edited)

        That’s beside my point. My job isn’t safe simply because I myself can’t build the AI to do it. This tweet isn’t saying anything.

  • TheReturnOfPEB@reddthat.com · +2/−15 · 2 days ago (edited)

    Why did he create just one?

    A newspaper has a team of reporters, journalists, editors, photographers/videographers, and fact checkers.