Oxford University this week shut down an academic institute run by one of Elon Musk’s favorite philosophers. The Future of Humanity Institute, dedicated to the long-termism movement and other Silicon Valley-endorsed ideas such as effective altruism, closed this week after 19 years of operation. Musk had donated £1m to the FHI in 2015 through a sister organization to research the threat of artificial intelligence. He had also boosted the ideas of its leader for nearly a decade on X, formerly Twitter.

The center was run by Nick Bostrom, a Swedish-born philosopher whose writings about the long-term threat of AI replacing humanity turned him into a celebrity figure among the tech elite and routinely landed him on lists of top global thinkers. OpenAI chief executive Sam Altman, Microsoft founder Bill Gates and Tesla chief Musk all wrote blurbs for his 2014 bestselling book Superintelligence.

Bostrom resigned from Oxford following the institute’s closure, he told the Guardian.

The closure of Bostrom’s center is a further blow to the effective altruism and longtermism movements that the philosopher has spent decades championing, which in recent years have become mired in scandals related to racism, sexual harassment and financial fraud. Bostrom himself issued an apology last year after a decades-old email surfaced in which he claimed “Blacks are more stupid than whites” and used the N-word.

Effective altruism, the utilitarian belief that people should focus their lives and resources on maximizing the amount of global good they can do, has become a heavily promoted philosophy in recent years. The philosophers at the center of it, such as Oxford professor William MacAskill, also became the subject of immense amounts of news coverage and glossy magazine profiles. One of the movement’s biggest backers was Sam Bankman-Fried, the now-disgraced former billionaire who founded the FTX cryptocurrency exchange.

Bostrom is a proponent of the related longtermism movement, which held that humanity should concern itself mostly with long-term existential threats to its existence, such as AI, and with far-reaching projects such as space travel. Critics of longtermism tend to argue that the movement applies an extreme calculus to the world that disregards tangible current problems, such as climate change and poverty, and veers into authoritarian ideas. In one paper, Bostrom proposed the concept of a universally worn “freedom tag” that would constantly surveil individuals using AI and relay any suspicious activity to a police force that could arrest them for threatening humanity.

The past few years have been tumultuous for effective altruism, however, as Bankman-Fried’s multibillion-dollar fraud marred the movement and spurred accusations that its leaders ignored warnings about his conduct. Concerns over effective altruism being used to whitewash Bankman-Fried’s reputation, and questions over what good effective altruist organizations are actually doing, have proliferated in the years since his downfall.

Meanwhile, Bostrom’s email from the 1990s resurfaced last year and resulted in him issuing a statement repudiating his racist remarks and clarifying his views on subjects such as eugenics. Some of his answers – “Do I support eugenics? No, not as the term is commonly understood” – led to further criticism from fellow academics that he was being evasive.

  • Zimited@lemmy.world · 8 months ago

    Unrelated, but what makes you guys trust sources like the Guardian so much? Ever since they spread toxic misinformation (or, at best, presented pessimistic truth without context) about PewDiePie back in 2017, they pretty much lost me.

    I feel like I have had a significant number of experiences that have shown most of these more well-established news outlets dealing in misinformation.

    They probably told the truth back in the day. But I don’t feel that way any more.

    • ᴇᴍᴘᴇʀᴏʀ 帝@feddit.ukOP · 8 months ago

      I’m not really sure what toxic misinformation about PewDiePie they spread (got a source?), but the Guardian is a left-leaning newspaper that does top-notch investigative journalism, and we have very few of either of those in the UK. As with every source, though, you have to understand their inherent biases and read around a subject to get a more balanced view of any story you are interested in. My major concern with the Guardian is that they got a reputation for running TERF articles. I believe they have rectified this to some extent, but it’s something I keep an eye out for. It is still one of the best newspapers in the country and, as regards this article, I can’t really fault their coverage: the TREACLES have been a cause for concern for a while. This article may even be too restrained, but there are other outlets really chasing this issue. See: !sneerclub@awful.systems.

      • Zimited@lemmy.world · 8 months ago

        I appreciate your elaboration.

        As for a source, here is one I had in mind: Guardian link

        My reasoning is that I feel they took it out of context, so that to new eyes it could seem as though he actually supported antisemitism. In reality, anybody watching the original clip with a head on their shoulders would know he had those people write the worst thing that came to mind, on the spot, as a joke. Whether it was funny or not doesn’t really matter; the point is they made it seem like there was a possibility he was the exact opposite of what he really is.

        • ᴇᴍᴘᴇʀᴏʀ 帝@feddit.ukOP · 8 months ago

          I don’t see that as “spread toxic misinformation”: the video is tied to an article reporting that YouTube and Disney cut ties with him over the content of his videos, following an investigation by the Wall Street Journal.

          The videos did not directly promote Nazi ideology, instead using the imagery and phrasing of fascism for its shock value alone.

          That seems like pretty standard reporting to me, unless I am missing something.