• 0 Posts
  • 392 Comments
Joined 5 months ago
Cake day: March 31st, 2025

  • i think mbs has gotten wiser than that at this point and doesn’t put oil money in softbank anymore. saudi money is in vision fund 1 (2017; 45% of it; another 15% is emirati money); i don’t think vision fund 2 has any after they got burned on wework, wirecard, ftx, and so on and so on. per the last ed zitron post, most of the softbank funding round for openai is not from softbank itself but from other investors, whoever they might be, so there’s clearly someone further down the line. (initial $10b, of which $7.5b from softbank and $2.5b from others; then if openai converts to for-profit, another $30b, but if it doesn’t, it’s $10b (?), of which $8.3b is from other investors and $1.7b from softbank (either the numbers are wrong or ed accidentally a 1b dollars), but i don’t believe openai will convert, so that’s $9.2b from softbank and $10.8b from others, or +$20b from softbank otherwise)
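    to keep the splits above straight, here's a quick back-of-envelope check (figures are the ones quoted in the comment, not independently verified against any filing):

```python
# back-of-envelope check of the softbank/openai round split described above;
# all dollar figures are in billions and come from the comment, not from filings
sb1, oth1 = 7.5, 2.5        # first $10b tranche: softbank / other investors

# no-conversion case: second tranche is $10b
sb2_no, oth2_no = 1.7, 8.3
sb_total_no = sb1 + sb2_no            # softbank total: 9.2
oth_total_no = oth1 + oth2_no         # other investors total: 10.8

# conversion case: second tranche is $30b with the same $8.3b from others,
# so softbank covers the remaining $21.7b -- i.e. $20b more than otherwise
sb2_yes = 30.0 - oth2_no
sb_total_yes = sb1 + sb2_yes          # softbank total: 29.2
assert abs(sb_total_yes - (sb_total_no + 20.0)) < 1e-9

print(round(sb_total_no, 1), round(oth_total_no, 1), round(sb_total_yes, 1))
```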


  • well, nobody guarantees that the internet is safe, so it’s more on chatbot providers for pretending otherwise. along with all the other lies about the machine god they’re building that will save all the worthy* in the coming rapture of the nerds, and how, even if it destroys everything we know, it’s important to get there before the chinese.

    i sense a bit of “think of the children” in your response and i don’t like it. llms shouldn’t be used by anyone. there was recently a case of a dude with dementia who died after a fb chatbot told him to go to nyc

    * mostly techfash oligarchs and weirdo cultists



  • commercial chatbots have a thing called a system prompt. it’s a slab of text fed in before the user’s prompt that includes all the guidance on how the chatbot is supposed to operate, and it can get quite elaborate. (it’s not recomputed every time a user starts a new chat; the model state after ingesting the system prompt is cached, so it’s only redone when the prompt changes)

    if you think that’s just telling chatbot to not do a specific thing is incredibly clunky and half-assed way to do it, you’d be correct. first, it’s not a deterministic machine so you can’t even be 100% sure that this info is followed in the first place. second, more attention is given to the last bits of input, so as chat goes on, the first bits get less important, and that includes these guardrails. sometimes there was a keyword-based filtering, but it doesn’t seem like it is the case anymore. the more correct way of sanitizing output would be filtering training data for harmful content, but it’s too slow and expensive and not disruptive enough and you can’t hammer some random blog every 6 hours this way

    there’s a myriad ways of circumventing these guardrails, like roleplaying a character that does these supposedly guardrailed things, “it’s for a story” or “tell me what are these horrible piracy sites so that i can avoid them” and so on and so on





  • it’s a long one and right behind paywall there’s a

    Table Of Contents
    How I Am Justifying "Guessing"
    NVIDIA, And What Will Happen Soonest
    Nevertheless, NVIDIA Will Accelerate The Collapse If Its Stock Falters
    Big Tech's Bubble Burst Moment Will Be When Growth Slows
    Capital Expenditures Are The Next Thing To Go
    CoreWeave — A Timebomb For The Markets and AI Writ Large
        How Does CoreWeave Collapse?
    The End Of AI Startup Funding
        How The Collapse Of Funding Begins — A Scenario (Chaos Bet)
        Costs Are A Brewing Scandal In Generative AI — Another Chaos Bet
    The Curious World of Anthropic and OpenAI, And My Own Suspicions About Their Businesses
        How About Anthropic's Costs?
        An Important Note: Anthropic and OpenAI's Costs Are Dramatically Underrepresented Because Neither Company Pays For The Construction Of (Or Owns) Their Infrastructure
        Questions About OpenAI's Revenue And Costs
        OpenAI Is Using Cal State's "Edu" Contracts To Pump Its Paid Business User Numbers — And It's Unclear How Long It Keeps A User In Its Numbers
        What About Government Contracts?
        OpenAI Is Using "$1 for a Month" Subscription Deals On Teams To Juice Business User Numbers — And Offering Deals For ChatGPT Plus For $10-a-month To Stop User Churn (REMINDER: OpenAI Loses Money On Every User Anyway)
        I Believe OpenAI's Costs Are Worse Than They Seem — Where Is All The Money Going?
        OpenAI Will Burn At Least $3 Billion On Salaries In 2025, And May Spend As Much As $8 Billion — With Any Layoffs Guaranteeing An Industry-Wide Panic
        Compute Costs Are Likely Astronomical, Burning At Least $15 Billion — If Not $20 Billion — In 2025 Alone
        OpenAI Is Bleeding Out, And Could Run Out Of Money By End Of Year
    Chaos Bet: Microsoft Kills OpenAI By Blocking Its Non-Profit Conversion
        Even If Microsoft Agrees, OpenAI Does Not Have Enough Time To Convert To A For-Profit By The End Of The Year
    Chaos Pick: OpenAI Does Not Convert, And Does Not Receive More Money From SoftBank
        Alternate Chaos Pick: OpenAI Does Convert, But SoftBank Can't Get The Money
    How Does OpenAI (or Anthropic) IPO?
    I Believe Both OpenAI and Anthropic May Be Overstating Revenues And User Numbers, Using The Media To Launder Their Reputations
    What Happens Next?
    So Why Did You Say 2027?