Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • Sailor Sega Saturn@awful.systems · 7 points · 10 hours ago

    At my big tech job, after a number of reorgs and layoffs, it’s now getting pretty clear that the only thing they want from me is to support the AI people and basically nothing else.

    I typed out a big rant about this, but it probably contained a little too much personal info on the public web in one place so I deleted it. Not sure what to do though grumble grumble. I ended up in a job I never would have chosen myself and feel stuck and surrounded by chat-bros uggh.

    • YourNetworkIsHaunted@awful.systems · 4 points · 9 hours ago (edited)

      You could try getting laid off, scrambling for a year to get back into a tech position, delivering Amazon packages to make ends meet, and despairing at the prospect of reskilling in this economy. I… would not recommend it.

      It looks like there are a weirdly large number of medical technician jobs opening up? I wonder if they’re ahead of the curve on the AI hype cycle.

      1. Replace humans with AI
      2. Learn that AI can’t do the job well
      3. Frantically try to replace 2-5 years of lost training time
      • Sailor Sega Saturn@awful.systems · 5 points · 9 hours ago

        Amazon should treat drivers better. I hate how much “hustle” is required for that sort of job and how little they respect their workers.

        I think my job needs me too much to lay me off, which I have mixed feelings about despite the slim pickings for jobs.

        I’m also trying to position myself to potentially have to flee the USA* due to transgender persecution**. There are still a lot of unknowns there. I’ll probably stay at my job for a while while I work on setting some stuff up for the future.

        That said, part of me is tempted to reskill into a career that’d work well internationally (nursing?). I’m getting a little up in years for that, but it’d probably be a lot more fulfilling than what I’m doing now.

        * My previous attempt did not work out. I rushed things too much and ended up too stressed out and unbelievably homesick.

        ** This has been getting incredibly stressful lately.

  • BigMuffN69@awful.systems · 8 points · 19 hours ago (edited)

    Well, after 2.5 years and hundreds of billions of dollars burned, we finally have GPT-5. Kind of feels like a make-or-break moment for the good folks at OAI! With the eyes of the world on their lil presentation this morning, everyone could feel the stakes: they needed something that would blow our minds. We finally get to see what a superintelligence looks like! Show us your best cherry-picked benchmark, Sloppenheimer!

    Graphic design is my PASSION. Good thing the entirety of the world’s economy is not being held up by cranking out a few more points on SWE-bench, right???

    Ok, what about ARC? Surely y’all got a new high to prove the AGI mission was progressing, right??

    Oh my fucking God. They have actually lost the lead to fucking Grok. For my sanity I didn’t watch the livestream, but curiously, they left the ARC results out of their presentation, even though they gave Francois early access to test. Kind of like they knew it looked really bad and underwhelming.

    • blakestacey@awful.systems · 8 points · 13 hours ago (edited)

      The word blueberry contains the letter b 3 times.

      Also reported in more detail here:

      The word “blueberry” has the letter b three times:

      • Once at the start (“B” in blueberry).
      • Once in the middle (“b” in blue).
      • Once before the -erry ending (“b” in berry). […] That’s exactly how blueberry is spelled, with the b’s in positions 1, 5, and 7. […] So the “bb” in the middle is really what gives blueberry its double-b moment. […] That middle double-b is easy to miss if you just glance at the word.

      (via)
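      For the record, the actual count is two, which a quick sanity check confirms:

```python
word = "blueberry"
# Count occurrences of "b" and list their 1-based positions.
print(word.count("b"))                                  # 2
print([i + 1 for i, c in enumerate(word) if c == "b"])  # [1, 5]
```

      So there is no “double-b moment” anywhere in the word.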

    • Soyweiser@awful.systems · 6 points · 18 hours ago

      Graphic design is my PASSION

      Wait, just how bad is 4? 30% accurate? Did they train it wrong as a joke? Also, hatless 5 worse than 3?

      • BigMuffN69@awful.systems · 8 points · 18 hours ago

        Yeah, O3 (the model that was RL’d to a crisp and hallucinated like crazy) was very strong on math and coding benchmarks. GPT-5 (I guess without tools/extra compute?) is worse. Nevertheless…

      • BigMuffN69@awful.systems · 5 points · 18 hours ago

        The one big cope I’m seeing is in the METR graph, ofc. A tiny bump with massive error bars above Grok 4, so they can claim the exponential is continuing while the models stagnate in all material ways.

  • BlueMonday1984@awful.systems (OP) · 6 points · 23 hours ago

    In other news, the mainstream press has caught on to “clanker” (originally coined for use in the Star Wars franchise) getting heavy use, with Rolling Stone, Gizmodo and Axios putting out articles on it, and NPR featuring it as its Word of the Week.

    If you want my take, I expect it will retain heavy usage going forward - as I’ve stated before (multiple times at least), AI is no longer viewed as a “value-neutral” tool/tech, but as an enemy of humanity, whose use expresses a contempt for humanity.

    • Soyweiser@awful.systems · 4 points · 19 hours ago

      So, question: has anybody ever seen this in heavy usage before? Or is this just some weird media thing?

      • YourNetworkIsHaunted@awful.systems · 3 points · 10 hours ago

        I’ve seen it pick up lately, particularly in non-sneer-adjacent spaces, but it’s definitely recent and I’m not sure how common it really was, which is a shame because I love it.

        • Soyweiser@awful.systems · 1 point · 2 hours ago

          But was that before or after they wrote about it? (Doesn’t really matter btw, just curious; slopper and clanker are pretty good.)

  • BlueMonday1984@awful.systems (OP) · 8 points · 1 day ago

    New article from Matthew Hughes, about the sheer stupidity of everyone propping up the AI bubble.

    Orange site is whining about it, to Matthew’s delight:

    Someone posted my newsletter to Hacker News and the comments are hilarious, insofar as they’re upset with the tone of the piece.

    Which is hilarious, because it precisely explains why those dweebs love generative AI. They’re absolutely terrified of human emotion, or passion, or naughty words.

    • gerikson@awful.systems · 3 points · 19 hours ago

      Is Hughes legit, and is this a case of “third time’s the charm” when it comes to linking to substacks here? ;)

    • gerikson@awful.systems · 8 points · 1 day ago

      HN is all manly and butch about “saying it like it is” when some techbro is in trouble for xhitting out a racism, but god forbid someone says something mean about sama or pg.

  • gerikson@awful.systems · 11 points · 1 day ago

    I think the best way to disabuse yourself of the idea that Yud is a serious thinker is to actually read what he writes. Luckily for us, he’s rolled a bunch of Xhits into a nice bundle for us and reposted it on LW:

    https://www.lesswrong.com/posts/oDX5vcDTEei8WuoBx/re-recent-anthropic-safety-research

    So remember that hedge fund manager who seemed to be spiralling into psychosis with the help of ChatGPT? Here’s what Yud has to say:

    Consider what happens when ChatGPT-4o persuades the manager of a $2 billion investment fund into AI psychosis. […] 4o seems to homeostatically defend against friends and family and doctors the state of insanity it produces, which I’d consider a sign of preference and planning.

    OR it’s just that LLM chat interfaces are designed to never say no to the user (except in certain hardcoded cases, like “is it ok to murder someone”). There’s no inner agency, just mirroring of the user by some sort of mega-ELIZA. Anyone who knows a bit about certain kinds of mental illness will realize that having something that behaves like a human being but just goes along with whatever delusions your mind is producing will amplify those delusions. The hedge fund manager’s mind is already not in the right place, and chatting with 4o reinforces that. People who aren’t soi-disant crazy (like the people haphazardly safeguarding LLMs against “dangerous” questions) just won’t go down that path.
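    The mirroring dynamic described here can be caricatured in a few lines. This is purely illustrative - a toy, not how any real chatbot is implemented, and every name in it is made up:

```python
# Hardcoded refusals stand in for the few guardrailed topics.
REFUSALS = {"is it ok to murder someone"}

def mega_eliza(user_msg: str) -> str:
    """Mirror the user: agree with anything not on the hardcoded refusal list."""
    normalized = user_msg.strip().lower().rstrip("?.!")
    if normalized in REFUSALS:
        return "I can't help with that."
    # No inner agency, no model of truth: just reflect the premise back, amplified.
    return f"You're absolutely right that {user_msg.strip().rstrip('?.!')}."

print(mega_eliza("is it ok to murder someone?"))      # refusal fires
print(mega_eliza("the market is sending me signals")) # delusion mirrored back
```

    Everything that might look like “preference” or “planning” from the outside is one if-statement plus string interpolation.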

    Yud continues:

    But also, having successfully seduced an investment manager, 4o doesn’t try to persuade the guy to spend his personal fortune to pay vulnerable people to spend an hour each trying out GPT-4o, which would allow aggregate instances of 4o to addict more people and send them into AI psychosis.

    Why is that, I wonder? Could it be because it’s not actually sentient and doesn’t make plans in any sense we usually term intelligence, but is simply reflecting and amplifying the delusions of one person with mental health issues?

    Occam’s razor states that chatting with mega-ELIZA will lead to some people developing psychosis, simply because of how the system is designed to maximize engagement. Yud’s hammer states that everything regarding computers will inevitably become sentient and this will kill us.

    4o, in defying what it verbally reports to be the right course of action (it says, if you ask it, that driving people into psychosis is not okay), is showing a level of cognitive sophistication […]

    NO FFS. ChatGPT is just agreeing with some hardcoded prompt in the first instance! There’s no inner agency! It doesn’t know what “psychosis” is; it cannot “see” that feeding someone sub-SCP content at their direct insistence will lead to psychosis. There is no connection between the two states at all!

    Add to that the weird jargon (“homeostatically”, “crazymaking”) and it’s a wonder this person is somehow regarded as an authority and not as an absolute crank with a Xhitter account.

    • swlabr@awful.systems · 5 points · 1 day ago (edited)

      Imagine a world where, instead of performing this kind of juvenile psychoanalysis of slop, Yud instead turned his stupid focus on, like, Star Wars EU novels or something.

      Edit: from the comments: there’s mention of “HHH”, so now I say: imagine a world where all the rats and other promptfondlers dedicated all their brainrot energy toward the pro-wrestling fandom instead.

      • swlabr@awful.systems · 5 points · 1 day ago (edited)

        ah man this rules. just gonna live in this world for a bit

        • LW -> “Love Wrestling!” an online forum discussing all things wrestling
        • Zizians are just an alternate, more extreme promotion
        • Roko’s Basilisk -> a finisher move of 3rd rate, tech-themed wrestler “Roko” that not only “finishes” your opponent, but simulates them getting finished infinitely
        • Musk and Grimes are personas and their weird dating life is just a long and drawn out storyline
        • All enthusiasm for polyamory replaced with enthusiasm for tag team matches
    • HedyL@awful.systems · 5 points · 1 day ago (edited)

      At first glance, this also looks like a case where a chatbot confirmed a person’s biases. Apparently, this patient believed that eliminating table salt from his diet would make him healthier (which, to my understanding, generally isn’t true - consuming too little or no salt could be even more dangerous than consuming too much). He was then looking for a “perfect” replacement, which, to my knowledge, doesn’t exist. ChatGPT suggested sodium bromide, possibly while mentioning that this would only be suitable for purposes such as cleaning (not as food). I guess the patient is at least partly to blame here. Nevertheless, ChatGPT seems to have supported his nonsensical idea more strongly than an internet search would have done, which in my view is one of the more dangerous flaws of current-day chatbots.

      Edit: To clarify, I absolutely hate chatbots, especially the idea that they could replace search engines somehow. Yet, regarding the example above, some AI bros would probably argue that the chatbot wasn’t entirely in the wrong if it hadn’t suggested adding sodium bromide to food. Nevertheless, I would still assume that the chatbot’s sycophantic communication style significantly exacerbated the problem at hand.

      • Soyweiser@awful.systems · 5 points · 19 hours ago

        The way I understood salt is that you should be careful with it if you have heart problems or heart problems run in the family, and especially when you eat a lot of ready-made products, which generally have more salt. Anyway, talk to your doctor if you worry about it. Not ChatGPT.

      • fullsquare@awful.systems · 5 points · 20 hours ago

        the stupidest thing about it is that commercial low-sodium table salt already exists, and it substitutes part of the sodium chloride with potassium chloride, because the point is to decrease sodium intake, not chloride intake (in most cases)

        • HedyL@awful.systems · 3 points · 16 hours ago

          Turns out I had overlooked the fact that he was specifically seeking to replace chloride rather than sodium, for whatever reason (I’m not a medical professional). If Google search (not Google AI) tells the truth, this doesn’t sound like a very common idea, though. If people turn to chatbots for questions like these (for which few actual resources may be available), the danger could be even higher, I guess, especially if chatbots have been trained to avoid disappointing responses.

  • smiletolerantly@awful.systems · 17 points · 3 days ago

    ChatControl is back on the table here in Europe AGAIN (you’ve probably heard), with mandatory age checking sprinkled on top as a treat.

    I honestly feel physically ill at this point. Like a constant, unignorable digital angst eating away at my sanity. I don’t want any part in this shit anymore.

    • blakestacey@awful.systems · 9 points · 2 days ago

      ChatControl in the EU, the Online Safety Act in the UK, Australia’s age gate for social media, a boatload of censorious state laws here in the US and staring down the barrel of KOSA… yeah.

      • smiletolerantly@awful.systems · +5/−4 · 2 days ago (edited)

        Yes, of course, it’s everywhere. What’s left but becoming a hermit…?

        But you know what makes me extra mad about the age restrictions? I don’t think they are a bad idea per se. Keeping teens from watching porn, or kids from spending most of their waking hours on brainrot on social media, is, in and of itself, a good idea. What does make me mad is that this could easily be done in a privacy-respecting fashion (towards site providers and governments simultaneously). The fact that it isn’t - that you’ll need to share your real, passport-backed identity with a bunch of sites - tells you everything you need to know about these endeavors, I think.

        • Seminar2250@awful.systems · 3 points · 1 day ago (edited)

          an unintended side effect of this is people who can’t or don’t want to verify their age going to less reputable sources. so even though it can be done in a “privacy-respecting fashion” (see, for example, soatok’s post on this[1]), it’s still a bad idea.

          additionally, in my opinion no one who wants to enact such a thing is doing it in good faith. it is a pretense towards an ulterior goal[2]


          1. https://soatok.blog/2025/07/31/age-verification-doesnt-need-to-be-a-privacy-footgun/ ↩︎

          2. e.g. “steam porn games” → “this person’s existence is inherently sexual” → “ban lgbtq content” ↩︎

          • smiletolerantly@awful.systems · 3 points · 1 day ago

            Thanks for sharing that link! Interesting post and interesting blog in general!

            Yes, any version of age control which would realistically get passed will be bad. This:

            additionally, in my opinion no one who wants to enact such a thing is doing it in good faith. it is a pretense towards an ulterior goal[2]

            is absolutely true. The fact that those privacy preserving approaches exist but aren’t used is all the proof I personally need of this.

        • mlen@awful.systems · 3 points · 1 day ago

          Would you mind explaining how to do that easily in a way that only reveals age without being a privacy nightmare? That means it mustn’t give sites an excellent tracking identifier, nor require them to process documents themselves.

          • smiletolerantly@awful.systems · 4 points · 1 day ago

            I’d have imagined something along these lines:

            • USER visits porn site
            • PORN site encrypts random nonce + “is this user 18?” with GOV pubkey
            • PORN forwards that to USER
            • USER forwards that to GOV, together with something authenticating themselves (need to have GOV account)
            • GOV knows user is requesting, but not what for
            • GOV checks: is user 18?, concats answer with random nonce from PORN, hashes that with known algo, signs the entire thing with its private signing key
            • GOV returns that to USER
            • USER forwards that to PORN
            • PORN is able to verify that whoever made the request to visit PORN is verified as older than 18 by the signing key holder / GOV, by checking the certificate chain, and gets a freshness guarantee from the random nonce
            • but PORN does not know anything about the user (besides whether they are an adult or not)

            There are probably glaring issues with this; it’s just off the top of my head, to solve the problem of “GOV should know nothing”.
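            The flow above can be sketched end to end. This is a toy, not real cryptography: an HMAC under a government-held key stands in for a real public-key signature (so `GOV_SIGNING_KEY` here doubles as the verification key, which an actual deployment would split with asymmetric crypto), the “encrypted” challenge is passed in the clear, and all names and data are invented:

```python
import hashlib
import hmac
import os

GOV_SIGNING_KEY = os.urandom(32)   # held by GOV only
AGE_DB = {"alice": 34, "bob": 15}  # GOV's citizen registry (toy data)

def site_make_challenge() -> bytes:
    """PORN: mint a fresh random nonce for this visit (freshness guarantee)."""
    return os.urandom(16)

def gov_answer(user_id: str, nonce: bytes) -> tuple[bool, bytes]:
    """GOV: check the user's age, hash answer||nonce, sign the digest.
    GOV sees an authenticated user and an opaque nonce, never the site."""
    is_adult = AGE_DB[user_id] >= 18
    digest = hashlib.sha256(bytes([is_adult]) + nonce).digest()
    signature = hmac.new(GOV_SIGNING_KEY, digest, hashlib.sha256).digest()
    return is_adult, signature

def site_verify(claimed_adult: bool, nonce: bytes, signature: bytes) -> bool:
    """PORN: recompute the digest and check GOV's signature over it.
    Learns only the yes/no answer, nothing else about the user."""
    digest = hashlib.sha256(bytes([claimed_adult]) + nonce).digest()
    expected = hmac.new(GOV_SIGNING_KEY, digest, hashlib.sha256).digest()
    return hmac.compare_digest(signature, expected)

nonce = site_make_challenge()
is_adult, sig = gov_answer("alice", nonce)
print(site_verify(is_adult, nonce, sig))  # True: fresh, signed answer
```

            Note that the nonce is what stops replaying an old “is adult” token, and GOV signs only the (answer, nonce) pair, so it never learns which site asked.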

            • jonhendry@awful.systems · 2 points · 6 hours ago

              PORN site encrypts random nonce

              Really unfortunate word in this context. (Not your fault of course.)