I really can’t understand this LLM hype (note, I think models used for finding cures for diseases and other sciences are a good thing. I’m referring to the general populace LLM hype).

It’s not interesting. To me, computers were so cool and interesting because of what you can do yourself, with just the hardware and learning to code. It’s awesome. What I don’t find interesting in any way is typing a prompt. “But bro, prompt engineer!” That is about the stupidest fucking thing I’ve ever heard.

How anyone thinks it’s anything beyond a parlor trick baffles me. Plus, you’re literally just playing with a toy made by billionaires to fuck the planet and the rest of us over even more.

And yes, to a point I realize “coding” is similar to “prompting” the computer’s hardware…if that was even an argument someone would try to make. I think we can agree it’s nowhere near the same thing.

I would like to see if there is a correlation between TikTok addicts and LLM believers. I’d guarantee it’s very high.

  • cabbage@piefed.social · 23 hours ago (edited)

    note, I think models used for finding cures for diseases and other sciences are a good thing. I’m referring to the general populace LLM hype

    The good news: These models are never LLMs, because LLMs can never be anything other than bullshit generators.

    The bad news: The first studies of actual performance seem to indicate the same pattern in hospitals that we see with the AI hype everywhere else. While people often perceive themselves to be more efficient, actual efficiency might in fact drop. It turns out that four Polish hospitals that started using machine learning models to detect cancer actually found fewer instances of cancer as a result, not more. It might of course improve as the technology gets better, and it could supplement rather than replace human expertise. But the story that “AI is amazing for detecting cancer” is sadly not as clear-cut as we have been led to believe.

    Edit: read comments below!

    • bridgeenjoyer@sh.itjust.works (OP) · 1 day ago (edited)

      That’s interesting. Yes, I’ve even seen it at my work: people think they’re more efficient using it until I review their work and it’s full of mistakes.

    • FriendOfDeSoto@startrek.website · 23 hours ago

      The study isn’t about how good so-called AI is at detecting cancer. The study is about how these doctors lose the ability to spot cancer after having delegated the spotting part of their difficult jobs to a model. They looked at the numbers before the introduction, while using it, and then when they took the assistance away. This study can say something about these doctors’ behavior. I don’t think it proves so-called AI is shit at it. It’s more about how humans get lazy. Roughly six generations ago people could recite poetry from memory, knew dates of historical importance, and remembered 50 phone numbers. Now we’re like, eff that, I’ve got those in my phone plus Wikipedia access. It’s more like that.

  • net00@lemmy.today · 1 day ago

    If LLMs had any degree of accuracy guarantee then they would be useful.

    I often think, “I could let an LLM do this and save me time, if it worked…”. But it’s always been a professional bullshit generator.

    Can’t trust it to make even a simple list, so you always end up doing double work.

  • black_flag@lemmy.dbzer0.com · 1 day ago

    It’s just rich fucks being rich fucks

    I mean, as someone who kinda struggles with words sometimes, it can be a little helpful with like formalizing language for official documents and stuff but the hype is just another capital bubble.

  • ansiz@lemmy.world · 1 day ago

    Some of the LLM deployments are just stupid as well. Take the way AWS is using Claude as part of the Q CLI offering. You would assume such a model would, at a bare minimum, be able to reference AWS’s own public documentation and knowledge-base data. But no, it doesn’t even have access to or the ability to read AWS’s public website unless you copy and paste the text into your chat session. That’s just so fucking stupid I can’t understand it.

    As a result, it’s all too common for the model to just make shit up about how an AWS service functions, and if you ask it how it knows that, it will admit that it actually doesn’t know and just made it up.

    The only thing I’ve found it useful for is very limited and basic Python scripts, but even then you have to be careful, since it’s not very good at that either.

  • OpenStars@discuss.online · 1 day ago

    Number must go up.

    These products are looking to be sold… and propping up the entire economic bubble in the meantime.

    Whether they “work” is entirely beside the point.

  • hendrik@palaver.p3x.de · 1 day ago

    Some people say everything was more interesting when you still needed to learn and know things, figure stuff out, and work for it. Today everything is just provided as a service; people know shit about the things they just use on a surface level and have no control over, or understanding of…

      • cecilkorik@lemmy.ca · 20 hours ago

        As a fellow bridge enjoyer, I look forward to discussing various bridges with you when the rest of intelligent civilization has completely collapsed and only the finest bridges remain standing.

  • GeneralEmergency@lemmy.world · 1 day ago

    AI chatbots aren’t a new thing. You can find them going back decades. That’s why there was so much hype for them becoming usable by the masses.

  • Rhaedas@fedia.io · 1 day ago

    I think the math and science behind the inner workings are interesting: the fact that you can feed in stuff and get things that make sense (not meaning they’re accurate, just that they’re usually grammatically good). If you don’t find that sort of thing interesting then sure, the rest is absolutely crazy, but not really unexpected for humans, who anthropomorphize everything.
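
    A rough toy sketch of that idea, for anyone curious (illustrative only; a real LLM uses learned neural-network weights over tokens, not a bigram table, but it shows why output can read as fluent without being accurate):

    ```python
    # Toy "language model": pick the next word by sampling from counts of what
    # followed it in a tiny corpus. Real LLMs do something conceptually similar
    # with learned probabilities, at vastly larger scale.
    import random
    from collections import defaultdict, Counter

    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1  # count which words follow which

    def generate(start, length=8):
        out = [start]
        for _ in range(length):
            options = follows[out[-1]]
            if not options:
                break
            words, counts = zip(*options.items())
            out.append(random.choices(words, weights=counts)[0])  # sample next word
        return " ".join(out)

    print(generate("the"))  # e.g. "the cat sat on the rug . the dog sat"
    ```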

    • ZDL@lazysoci.al · 21 hours ago

      Internal consistency is also usually considered a good thing. Any individual sentence an LLMbecile generates is usually grammatically correct and internally consistent (though I have caught sentences whose endings have contradicted the beginning here and there), but as soon as you reach a second sentence the odds of finding a direct contradiction mount.

      LLMbeciles are just not very good for anything.

      • Rhaedas@fedia.io · 20 hours ago

        Some models are better than others at holding context. They all wander at some point if you push them, though. Ironically, the newer versions that have a “thinking mode” are worse because of this: the context gets stretched out and they start second-guessing even correct answers.

        • ZDL@lazysoci.al · 15 hours ago

          Indeed. The reasoning models can get incredibly funny to watch. I had one (DeepSeek) spinning around for over 850 seconds only to have it come up with the wrong answer to a simple maths question.