Not something I believe full stop, but IMO there are signs that, should there be a bubble, it will pop later than we may think. A few things for consideration.

Big tech continues to invest. They are greedy. They aren’t stupid. They have access to better economic forecasting than we do. I believe they are aware of markets for the /application/ of AI which will continue to be profitable in the future. Think of how many things are pOwErEd By ArTiFiCiAl InTeLlIgEnCe. That’s really just marketing-speak for “we have API tokens we pay for.”

Along these lines comes the stupid. Many of us have bosses who insist, if not demand, that we use AI. The US Secretary of Defense had his own obnoxious version of this earlier this week. If the stupid want it, the demand will remain, if not increase.

Artificial intelligence is self-replicating, meaning if we feed it with whatever stupid queries we make, it will “get better” at the specifics and “create more versions”. This creates further reliance on and demand for those products that “do exactly what we want”. It’s an opiate. Like that one TNG episode with the headsets (weak allusion and shameless pandering, I know).

IMO generative AI is a dead end which will only exacerbate existing inequity. That doesn’t mean there won’t continue to be tremendous buy-in, which will warp our collective culture to maintain its profitability. If the bubble bursts, I don’t think it will be for a while.

  • SaveTheTuaHawk@lemmy.ca · 3 days ago

    They aren’t stupid.

    Yeah, they are. So much tech out of the USA is a house of cards built on bullshit. There is so much bullshit that it has consumed an industry which, after the digitization of practical databases once kept on paper, has no new ideas.

    " bullshitters seek to convey a certain impression of themselves without being concerned about whether anything at all is true. They quietly change the rules governing the conversation so that claims about truth and falsity are irrelevant. Although bullshit can take innocent forms, excessive indulgence in it can eventually undermine the bullshitter’s capacity to tell the truth in a way that lying doesn’t. Liars at least acknowledge that the truth matters. Because of this, Frankfurt says, “bullshit is a greater enemy of the truth than lies are.”

    Harry G. Frankfurt, On Bullshit, Princeton University Press, 2005.

  • brucethemoose@lemmy.world · 3 days ago

    Artificial intelligence is self-replicating, meaning if we feed it with whatever stupid queries we make, it will “get better” at the specifics and “create more versions”

    Disagree.

    In fact, there are signs that extensive “user preference” training is deep-frying models, so they score better in settings like LM Arena but get worse at actual work. See: ChatGPT 5.2, Gemini 2.5 Experimental before it regressed at release, Mistral’s latest deepseek-arch release, Qwen3’s reduction in world knowledge vs 2.5, and so on.

    Also, they don’t train themselves and they don’t learn on the go; that’s all done manually.

    They aren’t stupid.

    No, but… the execs are drinking a lot of Kool-Aid, at least going by their public statements and behavior. Zuckerberg, for example, has completely gutted his best AI team, the literal inventors of modern open LLM infrastructure, in favor of a bunch of tech bros with egos bigger than their contributions. OpenAI keeps making desperate short-term decisions instead of (seemingly) investing in architectural experimentation, giving everyone an easy chance to catch them. Google and Meta are poisoning their absolute best data sources, and it’s already starting to bite Meta.

    Honestly, I don’t know what they’re thinking.


    If the bubble bursts, I don’t think it will be for a while.

    …I think the bubble will drag on for some time.

    But I’m a massive local LLM advocate, and I’m telling you: it’s a bubble. These transformer(ish) LLMs are useful tools, not human replacements that will scale infinitely. That last bit is a scam.

    • SaveTheTuaHawk@lemmy.ca · 3 days ago

      Honestly, I don’t know what they’re thinking.

      They are thinking that if you get the hype train moving fast enough, nothing will slow it down, and by the time everyone realizes it’s all bullshit, you’ll already have your third yacht and have cashed out.

      US tech is driven by Boomer investors with too much money, too much greed, and too little education.

  • 6nk06@sh.itjust.works · 5 days ago

    They aren’t stupid.

    That’s where we disagree. Even more so when you see all those idiots who were parading in front of the cameras on the pedophile’s island and then trying to deny it publicly.

    • nymnympseudonym@piefed.social · 4 days ago

      they aren’t stupid

      see all those idiots

      What is this groupthink tribalism, this othering politics? Who exactly are “they”? I know people think of Peter Thiel and Alex Karp and Sam Altman, but what about Dario Amodei or Ilya Sutskever? Are Yann LeCun or Geoffrey Hinton “them”? Do you even know these names, or just the ones in the news that make you indignant?

      Would you believe that in an industry with millions of workers and thousands of executive bosses, there is a broad range of individuals and perspectives?

  • hendrik@palaver.p3x.de · 4 days ago

    Mmhh, I don’t think AI is self-replicating. We have papers detailing how it gets stupider after being fed its own output. So it needs external, human-written text to learn from, and that’s in limited supply.

    Reinforcement learning from human feedback is certainly a thing, but I don’t think that feedback changes models substantially. A bit of fine-tuning happens with user feedback, but not much.
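
    To make that concrete, here is a toy PyTorch-style sketch (all names hypothetical, assuming a HuggingFace-like model interface): the serving path runs with frozen weights and learns nothing, while any learning from feedback is a separate, manually launched fine-tuning job.

        import torch

        def serve(model, tokenizer, prompt):
            # Inference path: no gradients, no weight updates -- the model
            # does not "learn on the go" from the queries it answers.
            model.eval()
            with torch.no_grad():
                ids = tokenizer.encode(prompt, return_tensors="pt")
                out = model.generate(ids, max_new_tokens=128)
            return tokenizer.decode(out[0])

        def offline_finetune(model, feedback_batches, lr=1e-5):
            # Separate, manually run job that folds collected feedback
            # back into the weights -- the "bit of fine-tuning" above.
            opt = torch.optim.AdamW(model.parameters(), lr=lr)
            model.train()
            for batch in feedback_batches:  # dicts of input_ids / labels
                loss = model(input_ids=batch["input_ids"],
                             labels=batch["labels"]).loss
                loss.backward()
                opt.step()
                opt.zero_grad()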

    And I mean Altman, Zuckerberg etc. just say whatever gets them investor money. It’s not like they have a crystal ball and can tell whether there is going to be some scientific breakthrough in 2029 which is going to solve the scaling problem. They’re just charismatic salesmen, and people like to throw money on top of huge piles of money… And we have some plain crazy people like Musk and Peter Thiel. But I really don’t think there’s some advanced forecasting involved here. It’s good old hype. And maybe the technology really has some potential.

    Agree on the whole: it will pop later than we think.

    • nymnympseudonym@piefed.social · 4 days ago

      papers detailing how it gets stupider after being fed its own output

      “Model collapse”

      Turns out to be NBD; you just have to be careful with both the generated outputs and the mathematics. All major models are pretrained on synthetic (AI-generated) data these days.
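
      The “careful” part is mostly curation. Something like this toy sketch (thresholds and names made up, not any lab’s actual recipe):

          import random

          def build_training_mix(real_docs, synthetic_docs, quality_score,
                                 synth_fraction=0.3, threshold=0.8):
              # Keep only synthetic samples a filter scores highly, and cap
              # their share of the mix so real human text still anchors the
              # distribution -- the basic guard against model collapse.
              kept = [d for d in synthetic_docs if quality_score(d) >= threshold]
              cap = int(len(real_docs) * synth_fraction / (1.0 - synth_fraction))
              return real_docs + random.sample(kept, min(len(kept), cap))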

      • hendrik@palaver.p3x.de · 4 days ago

        I think it’s a bit more complicated than that; I’m not sure I’d call it no big deal… You’re certainly right that it’s impressive what they can do with synthetic data these days. But as far as I’m aware, that’s mostly used to train substantially smaller models on the output of bigger models. I think it’s called distillation? I did not read any paper revising the older findings with synthetic data.

        And to be honest, I think we want the big models to improve. And not just by a few percent each year, like what OpenAI is able to do these days… We’d need to make them like 10x more intelligent and less likely to confabulate answers, so they start becoming reliable and usable for tasks like proper coding. And with the exponential need for more training data, we’d probably need many times the internet and all human-written books to go in, just to make them two or five times better than they are today. So it needs to work with mostly synthetic data.

        And then I’m not sure that even works. Can we even make more intelligent newer models learn from the output of their stupider predecessors? With humans, we mostly learn from people who are more intelligent than us; it’s rarely the other way round. And I don’t see how language is like chess, where AI can just play a billion games and learn from that; that’s not really how LLMs work.
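
        If distillation is what I think it is, the core looks roughly like this toy PyTorch sketch (names hypothetical): the small student is trained to match the big teacher’s softened output distribution rather than raw text.

            import torch
            import torch.nn.functional as F

            def distillation_loss(student_logits, teacher_logits, T=2.0):
                # KL divergence between the teacher's and student's softened
                # next-token distributions; T**2 rescales gradient magnitude.
                p_teacher = F.softmax(teacher_logits / T, dim=-1)
                log_p_student = F.log_softmax(student_logits / T, dim=-1)
                return F.kl_div(log_p_student, p_teacher,
                                reduction="batchmean") * T * T

            def distill_step(student, teacher, batch, optimizer):
                # The big teacher only runs inference; only the small
                # student's weights get updated.
                with torch.no_grad():
                    t_logits = teacher(batch).logits
                s_logits = student(batch).logits
                loss = distillation_loss(s_logits, t_logits)
                loss.backward()
                optimizer.step()
                optimizer.zero_grad()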

  • nymnympseudonym@piefed.social · 4 days ago

    You’ll get downvoted to hell and so will I, but I’ll share my personal observations from working at a large IT company in the AI space.

    Everybody at my company and our competitors is automating the shit out of everything we can. In some cases it’s stuff we could have automated with regular cloud automation tooling; there just wasn’t organizational focus. But in ~75% of cases it’s automating things that used to require an engineer doing some Brain Work.

    Simple build breaks or bug fixes now get auto-fixed and reviewed later. Not at a 100% success rate, but it started at like 15%, then 25%, and …

    Whoops, some problem in the automation scripts, and we only have a junior engineer on call right now who doesn’t know Groovy syntax? No problem: not knowing the language is not a blocker anymore. The engineer just needs to tweak the AI’s suggestions.

    Code reviews? Well, the AI already caught a lot of the common stuff in our org standards before the PR was submitted, so engineers are focusing on the tricky issues, not the common, easy-to-find ones.
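
    The build-break flow above is roughly this shape (toy Python; every helper and command here is hypothetical, our real pipeline is internal):

        import subprocess

        def llm_suggest_patch(build_log):
            # Hypothetical: call whatever model endpoint you have and
            # return a unified diff, or None if it has no suggestion.
            raise NotImplementedError("wire up your model API here")

        def attempt_auto_fix(repo_dir, build_log):
            # "Auto-fix now, review later": apply the suggested patch,
            # re-run the tests, and keep it only if they pass.
            patch = llm_suggest_patch(build_log)
            if not patch:
                return False  # nothing to try; page a human as before
            subprocess.run(["git", "apply"], input=patch.encode(),
                           cwd=repo_dir, check=True)
            tests = subprocess.run(["make", "test"], cwd=repo_dir)
            if tests.returncode != 0:
                subprocess.run(["git", "checkout", "--", "."], cwd=repo_dir)
                return False  # suggestion didn't work; revert and escalate
            # Success: commit to a branch that a human reviews later.
            subprocess.run(["git", "switch", "-c", "auto-fix"], cwd=repo_dir)
            subprocess.run(["git", "commit", "-am", "[auto-fix] build break"],
                           cwd=repo_dir)
            return True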

    Management wants quantifiable numbers. Sometimes that’s easy (“X% of bugs fixed automatically, saving ~Y person-hours”); sometimes, as with code reviews, it’s a quality thing that will show up over time.

    But we’re all scrambling like fuck, knowing full well that:
    a) everything is up for change right now and nobody knows where this is going;
    b) we coders are like the horseshoe makers; we’d better figure out how the fuck to get in front of this;
    c) just like with the Internet, the companies that Figure It Out will be so much more efficient that their competitors will Just Die.

    I can only speak for large corporate IT. But AFAICT, it’s exactly like the Internet – just even more disruptive.

    To quote Jordan Peele: Stay woke, bitches!

    • jacksilver@lemmy.world · 4 days ago

      I’m just amazed whenever I hear people say things like this, as I can’t get any model to spit out working code most of the time. And even when I can, it’s inconsistent and/or of questionable quality.

      Is it because most of your work is small iterations on an existing code base? Are you only working with the most popular tools that are better supported by models?

      • nymnympseudonym@piefed.social · 4 days ago

        Llama 4 sucked, but with scaffolding it could solve some common problems.

        o1/o3 were way better: less gaslighting.

        Grok 4 kicked it up a notch, more like a pro coder.

        GPT-5 and Claude are able to solve real problems and implement simple features.

        A lot depends not just on the codebase but on context, aka prompt engineering. Does the AI have access to relevant design docs? Interface definitions? A clearly written, well-formed bug report? …but not so much that the context is overwhelming and it stops working well again.
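
        The balancing act looks something like this toy context assembler (budget and token math are made-up approximations):

            def build_context(task, scored_docs, budget_tokens=8000):
                # scored_docs: (relevance, text) pairs -- design docs,
                # interface definitions, the bug report, etc.
                # Crude approximation: ~4 characters per token.
                picked, used = [], 0
                for score, text in sorted(scored_docs, reverse=True):
                    cost = len(text) // 4
                    if used + cost > budget_tokens:
                        break  # stop before the context gets overwhelming
                    picked.append(text)
                    used += cost
                return "\n\n".join(picked + [task])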

        • jacksilver@lemmy.world · 4 days ago

          Okay, that’s more or less what I was expecting. A lot of my work is on smaller problems with more open-ended solutions, and in those scenarios I find the AI only really helps with boilerplate stuff. With most of the packages I work with, it only ever has a fleeting understanding, or it mixes up versioning so badly that it’s really hard to trust it.

    • A_Union_of_Kobolds@lemmy.world · 4 days ago

      There is only one thing for certain: the people who hold the purses dictate the policies.

      I sympathize with the IT workers who feel like they’re engineering their own replacements. Eventually, only a fraction of those jobs will survive.

      I believe hardware and market limitations will curb AI growth in the near future; hopefully the dust will start to settle, and the real people who need to feed their families will find a way through. I think one way or another there will be a serious need for social safety net programs to offset the IT labor surplus, which, hopefully, could create a (Socialist) Red Wave.

      • nymnympseudonym@piefed.social · 4 days ago

        the people who hold the purses dictate the policies

        Partly true. They are engaged in a dance with the technologists and researchers as we collectively figure out how exactly this is all going to work.

        IT workers who feel like they’re engineering their replacements

        I know some who feel that way. But anecdotally, most of the ones I know feel like we’re racing in front of technology and history more than against plutocracy.

    • yeehaw@lemmy.ca · 3 days ago

      Ya, but someone told me something that gave me pause.

      Microsoft, Google, etc. all have other very successful revenue streams. They can afford to lose money on this for years and years until it hits. Or until they become the only game in town, because they were the only ones who could afford it for so long.

      And what then? Maybe processing power becomes much cheaper. Maybe other businesses have won big contracts to manage segments of their operations. Maybe it’s improved that much more?

      I do ponder this from time to time.