Reddit said in a filing to the Securities and Exchange Commission that its users’ posts are “a valuable source of conversation data and knowledge” that has been and will continue to be an important mechanism for training AI and large language models. The filing also states that the company believes “we are in the early stages of monetizing our user base,” and says it will continue to sell users’ content to companies that want to train LLMs and will also begin “increased use of artificial intelligence in our advertising solutions.”

The long-awaited S-1 filing reveals much of what Reddit users knew and feared: that many of the changes the company has made over the last year in the lead-up to an IPO are focused on exerting control over the site, sanitizing parts of the platform, and monetizing user data.

Posting here because of the privacy implications of all this, but I wonder if at some point there should be an “Enshittification” community :-)

  • LWD@lemm.ee · 10 months ago

    LLMs are a parasitic entity. They can only operate as long as they have a living host (us) from which to draw data. Without their host, they rapidly start hallucinating. Hell, the other day ChatGPT (and every business that relied on it) started hallucinating for no apparent reason.

    The thing about a parasite, though, is that it endangers its host. At some point, the fact that anything you say can be plugged into a machine with no credit given back to you will encourage creative people to stop bothering to be creative, since it deprives them of income or even exposure.

    It’s a funny thing: a few years ago I would have said that the “anything you post here can be sold by us” clause on social media was very unlikely to be exploited, since nobody knew how to sell data en masse and make money off it. I guess now we know that’s not true at all. If something bad can happen with your data… it will.