• ZILtoid1991@lemmy.world · ↑4 · 1 hour ago

    Oh god, that fall was really big. Trump’s second term? The fast erosion of our rights and democracies? You might have a concussion. We’re just in a second boring term of Joe Biden, with the usual liberal ineffectiveness. Don’t you remember all the MAGAts crying on television and Facebook, many asking how to become homosexual for some weird reason?

  • TwistedTree@lemmy.world · ↑18 · 8 hours ago

    Are we sure that OpenAI didn’t use a quantum computer that is accessing facts from an alternate timeline (those lucky bastards).

  • jsomae@lemmy.ml · ↑22 ↓7 · edited · 9 hours ago

    The knowledge cut-off for GPT5 is 2024, just so you know. Obviously, it would be better if it didn’t hallucinate a response to fill in its own blanks. But it’s software, so if you’re going to use it then please use it like software and not like it’s magic.

    In general I’m not too moved either way when somebody misuses AI and then posts gobsmacked about how bad it is. Really though, the blame is on AI companies for trying to push AI onto everyone rather than only to domain experts.

    • RedFrank24@lemmy.world · ↑2 · 57 minutes ago

      That’s funny though because I know Copilot can google things and talk about them.

      Like, a news story can appear that day, and you go “Did you hear about the guy that did X and Y?” and Copilot will google it and be like “Oh yeah you’re referring to the news story that came out today about the guy that did X and Y. It was reported in Newspaper that Z was also involved” and then send you a link to the article.

      So like… GPT5 should be able to supplement its knowledge with basic searching, it just doesn’t.
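      The flow described here is basically tool calling: route any question that postdates the training cutoff to a live search instead of answering from stale weights. A rough sketch of the idea — every name, and the cutoff date, is hypothetical, not any real Copilot or GPT5 API:

```python
from datetime import date

# Illustrative cutoff only -- not GPT5's actual training cutoff.
KNOWLEDGE_CUTOFF = date(2024, 6, 1)

def needs_search(question_date: date) -> bool:
    """Anything after the training cutoff can't be answered from weights alone."""
    return question_date > KNOWLEDGE_CUTOFF

def answer(question: str, question_date: date, search_tool) -> str:
    """Fall back to a live search when the question postdates the cutoff,
    rather than confidently guessing from stale training data."""
    if needs_search(question_date):
        return search_tool(question)  # grounded in a fresh source
    return f"(answer from training data for: {question})"

# Stub standing in for a real web-search integration.
fake_search = lambda q: f"(live result for: {q})"

print(answer("Who won in 2025?", date(2025, 8, 10), fake_search))
# -> (live result for: Who won in 2025?)
```

      The hard part in practice is the routing decision itself, since the model has no reliable sense of what it doesn’t know — which is exactly the gap this thread is complaining about.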

    • BluescreenOfDeath@lemmy.world · ↑21 · 8 hours ago

      This is the fundamental problem with LLMs and all the hype.

      People with technology experience can understand the limitations of the tech, and will be more skeptical of the output from them.

      But your average person?

      If they go to Google and ask if vaccines cause autism, and Google’s AI search slop trough contains an answer they like, then accurate or not, there will be exactly no second-guessing. I mean, this is supposed to be a PhD-level person, and it was right about the other softball questions they asked, like what color the sky is. Surely it’s right about this too, right?

      • jsomae@lemmy.ml · ↑1 ↓3 · 8 hours ago

        Yeah. The average person just doesn’t have a good intuition about AI, at least not yet. Maybe in a few years people will be burned by it and they’ll start to grok its limits, but idk. I still blame the AI companies here.

    • pulsewidth@lemmy.world · ↑9 ↓2 · 7 hours ago

      If the knowledge cutoff for GPT5 is 2024 it should absolutely not be commenting on current day events and claiming accuracy.

      This is not the defence you think it is. It still shows ChatGPT in an accurate and very negative light.

      • jsomae@lemmy.ml · ↑4 ↓5 · 6 hours ago

        It seems as though you read the first sentence I wrote and not any of the sentences afterward.

        • pulsewidth@lemmy.world · ↑7 ↓1 · 5 hours ago

          Indeed I did. Especially the parts where you made excuses for it, saying…

          it’s software, so if you’re going to use it then please use it like software and not like it’s magic.

          Nobody claimed it was magic. They gave it a very reasonable prompt that a grade 1 child could answer, and it failed. And this…

          In general I’m not too moved either way when somebody misuses AI and then posts gobsmacked about how bad it is.

          Again, you’re claiming the prompt is misuse, “tHEyRe uSInG iT wRonG”. Going on to say it’s the ‘AI companies’ fault really’ for pushing it to everyone instead of just domain experts is again not getting the point. The AI should never respond with a confident answer to a prompt it has no idea about. That’s nothing to do with the user or the targeted audience, that’s just shit programming.

          • jsomae@lemmy.ml · ↑1 ↓5 · edited · 4 hours ago

            The AI should never respond with a confident answer to a prompt it has no idea about.

            Agreed. But the technology isn’t there yet. It’s not shit programming, because the theory of how to solve this problem doesn’t even exist yet. I mean, there are some attempts, but nobody has a good solution yet. It’s like you’re complaining that cars can’t go at 500 miles per hour since the technology limits them to 200 mph or so, and blaming this on bad car design when it’s actually the user’s expectation that’s the problem. The user has been misled by the way things are presented by AI companies, so ultimately it’s the AI companies’ fault for overmarketing their product.

            (Fuck cars btw).

            They gave it a very reasonable prompt that a grade 1 child could answer, and it failed.

            LLMs don’t work like grade 1 children. The real problem is that AIs are being marketed in such a way that people expect them to handle at least anything a grade 1 child can do. But AIs are not humans. They are able to do some things better than any human, yet on other tasks they can be outperformed by a kindergartner. This is just how the technology is.

            Blame expectations, blame marketing, fuck AI in general, but you’ve been totally misled if you’re expecting it to be able to, say, count the number of letters in a word or break a kanji into components when all it sees are tokens; not letters, not characters.
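            The letter-counting point is easy to demonstrate with a toy tokenizer — the vocabulary here is entirely made up, and real models use learned BPE merges, but the consequence is the same: the model receives opaque token IDs, not letters.

```python
# Toy greedy longest-match subword tokenizer with an invented vocabulary.
# Once "strawberry" is split into pieces, the letter counts are simply
# not part of what a model downstream of this step ever sees.
VOCAB = {"straw": 101, "berry": 102, "s": 1, "t": 2, "r": 3, "a": 4,
         "w": 5, "b": 6, "e": 7, "y": 8}

def tokenize(word: str) -> list[int]:
    """Greedily match the longest vocabulary entry at each position."""
    tokens = []
    i = 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try the longest piece first
            piece = word[i:j]
            if piece in VOCAB:
                tokens.append(VOCAB[piece])
                i = j
                break
        else:
            raise ValueError(f"no token for {word[i]!r}")
    return tokens

ids = tokenize("strawberry")
print(ids)       # [101, 102] -- two opaque IDs
print(len(ids))  # 2 tokens, though the word has 10 letters and 3 r's
```

            A model operating on `[101, 102]` has no direct access to the fact that there are three r’s in there; any correct answer has to come from memorized associations, not from inspection.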

  • FrChazzz@lemmus.org · ↑29 ↓1 · 12 hours ago

    My best friend, in our late teens, once emphatically claimed that Eric Clapton wrote “I Shot The Sheriff” and that Bob Marley effectively stole the song from him. This was before the internet as we know it, so fact-checking took effort. He and I argued about this off and on for weeks. Until I wound up in a used record store and happened upon the Clapton album that had “I Shot The Sheriff”. Right there, plain as day, it stated “written by R. Marley.” So I bought the LP, even though I did not own a record player at the time, just so I could put it in front of his face and show him.

    His reaction? “Well, I’ve seen a Cream album where it says he wrote it.” CLAPTON WASN’T WITH CREAM WHEN HE PUT OUT HIS COVER!11!

    Similarly, my brother-in-law as a kid was quite assured that Elton John’s hit song was actually “Panty and the Jets” and refused to believe otherwise for years.

    Both are pretty right-leaning guys these days and so maybe “confidently wrong” is just something that comes with a certain political persuasion? ChatGPT is just made in its makers’ image.

    • finitebanjo@lemmy.world · ↑3 · edited · 6 hours ago

      I don’t think it’s fair to say “humans are also flawed” in response to AI’s flaws, because the AI has never reached, and according to research by the industry giants WILL NEVER reach, human-level accuracy.

      It’s a statistical model and its percentages are lower than asking your dudes.

  • dogs0n@sh.itjust.works · ↑3 · edited · 8 hours ago

    I wanna know how it breaks it down day by day. Is it gonna list every single day from the starting point!!! That’d be funny

    • pulsewidth@lemmy.world · ↑2 · 7 hours ago

      And yet there are still people in the thread claiming ‘oh, ChatGPT’s knowledge cuts off at the end of 2024, this prompt is using ChatGPT wrong’, completely missing your point.

      If ChatGPT doesn’t know something it just lies about it, all while being passed off as doctorate-level intelligence.

      Inb4 defenders ‘an AI can’t lie it just asserts falsehoods as truth because it’s having a scary dream/hallucination’ as if semantics will save the day.

    • belit_deg@lemmy.world · ↑18 · edited · 13 hours ago

      …And that people take the bait and anthropomorphize it, believing it is “reasoning” and “thinking”.

      It seems like people want to believe it because it makes the world more exciting and sci-fi for them. Even people who don’t find GPT personally useful get carried away when talking about the geopolitical race to develop AGI first.

      And I sort of understand why, because the alternative (and I think real explanation) is so depressing - namely we are wasting all this money, energy and attention on fools’ gold.

      • Lemminary@lemmy.world · ↑7 ↓2 · edited · 14 hours ago

        I sincerely don’t think this is true, but it’s a nice narrative that fits well with one of Lemmy’s. It’d still be worth the same or more if it hallucinated to a minimum because it would better match one of its ideal business applications: replacing human labor at a fraction of the cost. Unfortunately, this is only a convenient side effect for many who stand to benefit from creating propaganda and false information in bulk.

        • Brave Little Hitachi Wand@lemmy.world · ↑8 · 14 hours ago

          I prefer to live in a world where it is possible to say something oversimplified and ridiculous, and people just laugh and don’t feel like it deserves correction. That’s practically everything I ever feel like saying.

          • Lemminary@lemmy.world · ↑4 ↓1 · 14 hours ago

            Sorry! I know the feeling and I wasn’t meaning to be a jerk about it by ackshually’ing you. I mostly replied because I think Lemmy could do better at criticizing LLMs in general.

            • Brave Little Hitachi Wand@lemmy.world · ↑3 · 14 hours ago

              I agree actually. It’s hard to get everyone on the same page about much of anything, and the topic is both very large and changing rapidly. It’s a lot less work to just make vague sarky remarks about venture capitalists.

  • 6nk06@sh.itjust.works · ↑45 ↓1 · 21 hours ago

    OP is wrong and it’s good that he admits it. Joe Biden is the actual president in 2025, and he will soon meet Steven Seagal the new Russian president about the war on Belgium.

  • Adulated_Aspersion@lemmy.world · ↑18 ↓2 · 18 hours ago

    GPT5 probably has access to the real poll data before Musk helped to “confirm” it. The cheezeit himself said that Musk knows the voting machines better than anyone.

    (this is the same mentality that Trumpers used when Trump completely lost the election in 2020, and I will be damned if it doesn’t feel right)

  • Nougat@fedia.io (M) · ↑25 ↓1 · 20 hours ago

    It’s doing precisely what it’s intended to do: telling you what it thinks you want to hear.

    • brucethemoose@lemmy.world · ↑11 · edited · 17 hours ago

      Bingo.

      LLMs are increasingly getting a sycophancy bias, though that only applies here if you give them anything to go on in the chat history.

      It makes benchmarks look better. Which are all gamed now anyway, but that’s kinda all they have to go on.

  • slaacaa@lemmy.world (OP) · ↑24 ↓2 · 21 hours ago

    Wonder what else it’s wrong about, if it misses something this obvious. It seems to me ChatGPT is getting worse with every update.

  • Krauerking@lemy.lol · ↑12 · 19 hours ago

    Ha, even text prediction thinks Trump is a loser.

    Honestly, this is why you shouldn’t trust statistical predictions of human behavior anyways.