I need to step away from my PC – for a moment – because, although I have so much to write, the statements made in this video touch me too deeply: they align too closely with my own views, and with the fundamental reasons underlying my own depression, disillusionment and burn-out.

Watch it.

Seriously. Watch it. If you are well briefed on the A.I. bubble and A.I. Hell, just skip to:

  • ~ 34 minutes to skip past the demonstration of the tedious issue.[1]
  • ~ 38 minutes to reach the philosophical statements.
  • ~ 39 minutes to hear about deception – the universal “tell” of A.I. scammers.
  • ~ 41 minutes if you’re prepared for tears: to lament what we’ve lost, what we so nearly had, what humanity is losing, what is being stolen from artists.

(I need some space.)


  1. I assure you this video is not about content farms, SEO or the death of search, though one might be forgiven for thinking that, in the first half. Don’t. It is worth your patience. ↩︎

  • luciole (he/him)@beehaw.org · 22 points · 2 days ago

    I think a lot of the people that embrace genAI do so because they’ve been drilled to embrace all new tech or risk becoming obsolete. At least that’s what I feel from my workplace: non-technical coworkers nervously grasping at it, trying to squeeze the advertised productivity out of it, fearing that the competition is already doing it, and better. The mediocrity of the results is then interpreted as a shortcoming on their end, so they double down.

    I’m thrilled to see someone like Freya, passionate about both maths and the arts, weighing in on the matter, and I agree wholeheartedly with her. GenAI is destructive in many ways, and the fact that it attacks an essential part of the human experience – storytelling – is not mentioned enough.

    Every person that speaks out creates opportunities for many others to give it a second thought and question this venture.

    • barsoap@lemm.ee · 2 points · 1 day ago

      Just for the sake of argument, let me turn this around halfway: just as guns don’t kill people, people do, so AI does not produce slop, people do. Generative AI has its uses; where it becomes problematic is when people who can’t even judge a medium use it to produce something in that medium. It’s perfectly possible to get good stuff out of Stable Diffusion, but you have to know a thing or five about visual media or it’ll dazzle you. The same applies to all genAI I’ve ever looked at.

      Likewise, humans are perfectly capable of producing horrendous slop without the aid of these tools – just look at the romance novel aisle or what Hollywood was up to last year. What’s different nowadays is that it has become very easy to generate that slop; there has been an explosion of it. If I were to imagine a silver lining, it would be that people are probably going to get bored of it, and non-slop, AI or not, will see an increase in prestige and value. Just as the invention of photography saved painters from doing portraits.


      Bonus:

      Summarise the fable of the robot and the antipsychotic

      “The Fable of the Robot and the Antipsychotic” is a story that explores themes of mental health, technology, and the human condition. In this fable, a robot represents modern technology and its capabilities, while the antipsychotic symbolizes medication and support for mental health issues.

      The robot, designed to optimize efficiency and productivity, struggles to understand the complexities of emotions and human experiences. It encounters an individual who is grappling with mental health challenges and is hesitant to seek help. The robot, despite its advanced programming, cannot provide the emotional support the individual needs.

      Through the interaction, the fable highlights the importance of empathy, understanding, and the role of medication in managing mental health. The robot eventually learns that while it can assist in many areas, the human experience requires compassion and connection, which technology alone cannot provide.

      In the end, the story conveys that while robots and technology can enhance our lives, they should complement, rather than replace, the human touch in addressing emotional and mental well-being.

  • kevlar21@lemm.ee · 26 points · 2 days ago

    Once, men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.

  • kibiz0r@midwest.social · 14 points · 2 days ago

    Oh shit. New Freya dropped?! On AI?!?!?!

    Anything she uploads is well worth an hour or two of my time.

    • Cybrpwca@beehaw.org · 5 points · 2 days ago

      I was introduced to Continuity of Splines last month. Never thought I would watch a 73 minute video on methods for drawing curves, but I did and I loved it.

  • drspod@lemmy.ml · 19 points · 2 days ago

    There’s a huge gap in the market for just decent quality dumb web search right now.

    The problem is filtering out the garbage from the results.

    • Jo Miran@lemmy.ml · 5 points · 2 days ago

      I use Kagi specifically because Google and DDG were too infested with AI garbage. There is no escaping it. I get less generated slop, but I still get plenty.

      • Snot Flickerman@lemmy.blahaj.zone · 32 points · 2 days ago (edited)

        I’ll never forget the Kagi CEO tracking down a random blogger so he could “explain to her how she was wrong about Kagi”, harassing her for a phone conversation; when she politely declined several times over, he just said fuck it and sent his arguments as a follow-up email, because like hell was he going to take no for an answer.

        I don’t trust any company run by some psycho abusive asshole who thinks he needs to do that kind of shit to prove his product. Half stalking, half harassment, for someone merely expressing their opinion. Why would I trust any product from someone who clearly doesn’t understand, or doesn’t care about, ideas like consent? If he wants you to hear his opinion, it doesn’t matter whether you consent to it; you owe him a listen to his loud stupid voice saying loud stupid things, I guess, in his view. Sounds like another Elon Musk to me.

        So, in short, fuck Kagi.

        • sanpo@sopuli.xyz · 13 points · 2 days ago

          And besides… aren’t AI features getting pushed in Kagi heavily?

          Weird idea to use it as an argument against Google and DDG, but conveniently ignore it for Kagi.

          • DdCno1@beehaw.org · 8 points · 2 days ago

            On top of all that, the last time I checked, search results were 100% identical between Kagi and Google. It’s snake oil.

            • sanpo@sopuli.xyz · 7 points · 2 days ago

              Yeah, I tried it for a bit 1 or 2 years ago and didn’t see much difference.

              The only cool thing was the automatic summaries with sources, but then I found out LLM summaries are like everything else “AI” – unreliable at best, so…

  • kbal@fedia.io · 5 points · 2 days ago

    Searching the web for “glb file format” seems a natural thing to do while listening to the first part of this video. On my favourite searx instance I had to scroll past only 7 links to LLM-generated garbage before finding a link to the actual spec. I wonder how Kagi fares in this test.

    • FaceDeer@fedia.io · 8 points · 2 days ago

      If you wanted the specification why not search for “.glb format specification”? I did that on Google and the specification was the first hit.

      • kbal@fedia.io · 7 points · 2 days ago

        I find it interesting as an experiment to measure how polluted the information environment is with machine-generated shit, not as an exercise in how to navigate around it.

        • FaceDeer@fedia.io · 4 points · 2 days ago

          It’s only “polluted” if you’re looking for something specific and you refuse to ask for something specific.

          If you go into a restaurant and ask them for “a drink” without specifying what drink you want, don’t complain about the quality of the coffee when they bring you a coke.

          • kbal@fedia.io · 3 points · 2 days ago

            Coffee and cola are both pretty good. Are you reflexively coming to the defense of generative AI on general principle or did you actually look at those sites, the horribleness of which is exhaustively detailed in the video, and decide that they look as if they could be useful to anyone?

            • FaceDeer@fedia.io · 3 points · 2 days ago

              I looked at the sites. Did you? The thing that OP was looking for, which they claimed had been made unfindable or “polluted”, was perfectly accessible and fine.

              • DarkNightoftheSoul@mander.xyz · 2 points · 2 days ago (edited)

                “google-fu” is a skill. it’s not reasonable to expect everyone to be perfectly proficient. and it seems like you are saying “just perfectly define your search query to match the exact result you need every time” which, i think most would agree is often very difficult even if your fu is quite good. I think this might actually be an np problem. I used to have good fu, these days I find that google sometimes breaks or ignores my quotes, changes exact wordings to thesaurus matches, shit like that. also, there are filter bubbles.

                now we’re adding to all of those already known uncontroversial challenges to googling the information “pollution” of low-quality unsupervised chatbot hallucinations where genuine good search results might otherwise go. it’s not that it’s unfindable- that’s not what the op video claims. it’s that the signal-to-noise ratio is intolerably low, and that basically by design.

                and yes, it seems there is still room for human-creative solutions to these search problems, like the one you suggest which basically rearranges words and exchanges some thesaurus hits. sort of like how you can sometimes “jailbreak” llms by TaLkInG tO tHeM lIkE ThIs. for now. actually, they probably already patched that.

                • FaceDeer@fedia.io · 3 points · 2 days ago

                  The “google-fu” in this case was to search for “.glb format specification” when seeking the .glb format specification.

                  This really doesn’t seem like a huge challenge requiring sophisticated skills.

    • Jo Miran@lemmy.ml · 3 points · 2 days ago (edited)

      I just searched using Kagi. It fares poorly. Normally I get better results but “glb file format” seems to be a cursed search. Nothing but AI with the exception of one Wikipedia result and one Reddit post.

      I will try “how to parse data from a glb file” and report again in an edit.

      EDIT: Solid hits from the first try. It seems that search engines are being swarmed by generated garbage for open or vague questions. The moment you get specific, the AI slop seems to fall off…at least on Kagi.
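As an aside, the format this whole sub-thread was trying to look up is simple enough that the public glTF 2.0 specification answers it directly: a GLB container is a 12-byte header (magic “glTF”, version, total length) followed by length-prefixed chunks, normally one JSON chunk and an optional binary chunk. Here is a minimal, hand-rolled parsing sketch; the sample bytes at the bottom are constructed in memory purely for demonstration, not taken from any real asset:

```python
import json
import struct

# Chunk-type and magic constants from the glTF 2.0 spec,
# i.e. the ASCII tags "glTF", "JSON" and "BIN\0" read as little-endian uint32.
GLB_MAGIC = 0x46546C67
CHUNK_JSON = 0x4E4F534A
CHUNK_BIN = 0x004E4942

def parse_glb(data: bytes):
    """Split a GLB container into its glTF JSON document and binary payload."""
    magic, version, length = struct.unpack_from("<III", data, 0)
    if magic != GLB_MAGIC:
        raise ValueError("not a GLB file")
    offset, chunks = 12, {}
    while offset < length:
        # Each chunk: uint32 length, uint32 type, then the payload.
        chunk_len, chunk_type = struct.unpack_from("<II", data, offset)
        chunks[chunk_type] = data[offset + 8 : offset + 8 + chunk_len]
        offset += 8 + chunk_len
    gltf_json = json.loads(chunks[CHUNK_JSON])
    return gltf_json, chunks.get(CHUNK_BIN, b"")

# Build a minimal GLB in memory to demonstrate a round trip.
doc = json.dumps({"asset": {"version": "2.0"}}).encode()
doc += b" " * (-len(doc) % 4)  # the JSON chunk is 4-byte aligned, space-padded
glb = struct.pack("<III", GLB_MAGIC, 2, 12 + 8 + len(doc))
glb += struct.pack("<II", len(doc), CHUNK_JSON) + doc

parsed, binary = parse_glb(glb)
print(parsed["asset"]["version"])  # prints 2.0
```

The point of the sketch is mostly rhetorical: the answer the slop sites bury is about twenty lines of spec.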