The narrative that OpenAI, Microsoft, and freshly minted White House “AI czar” David Sacks are now pushing to explain why DeepSeek was able to create a large language model that outpaces OpenAI’s while spending orders of magnitude less money and using older chips is that DeepSeek used OpenAI’s data unfairly and without compensation. Sound familiar?
Both Bloomberg and the Financial Times are reporting that Microsoft and OpenAI have been probing whether DeepSeek improperly trained the R1 model that is taking the AI world by storm on the outputs of OpenAI models.
It is, as many have already pointed out, incredibly ironic that OpenAI, a company that has been obtaining large amounts of data from all of humankind largely in an “unauthorized manner,” and, in some cases, in violation of the terms of service of those it has been taking from, is now complaining about the very practices by which it has built its company.
OpenAI is currently being sued by the New York Times for training on its articles, and its argument is that this is perfectly fine under copyright law’s fair use protections.
“Training AI models using publicly available internet materials is fair use, as supported by long-standing and widely accepted precedents. We view this principle as fair to creators, necessary for innovators, and critical for US competitiveness,” OpenAI wrote in a blog post. In its motion to dismiss in court, OpenAI wrote “it has long been clear that the non-consumptive use of copyrighted material (like large language model training) is protected by fair use.”
If OpenAI argues that it is legal for the company to train on whatever it wants for whatever reason it wants, then it stands to reason that it doesn’t have much of a leg to stand on when competitors use common strategies from the world of machine learning to make their own models.
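One of those common strategies is knowledge distillation: training a smaller “student” model to imitate the outputs of a larger “teacher” model. Below is a minimal, hypothetical sketch of the idea in PyTorch; the toy models, sizes, and data are illustrative assumptions, not a description of anything DeepSeek or OpenAI has confirmed doing.

```python
# Toy sketch of knowledge distillation: a "student" model is trained to match
# a "teacher" model's output distribution. All shapes and models here are
# made up for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB = 100   # hypothetical toy vocabulary size
SEQ = 8       # hypothetical prompt length

teacher = nn.Sequential(nn.Embedding(VOCAB, 64), nn.Flatten(), nn.Linear(64 * SEQ, VOCAB))
student = nn.Sequential(nn.Embedding(VOCAB, 16), nn.Flatten(), nn.Linear(16 * SEQ, VOCAB))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the teacher's distribution

for step in range(100):
    # Stand-in for real prompts; in practice these would be actual text tokens.
    prompts = torch.randint(0, VOCAB, (32, SEQ))

    with torch.no_grad():
        teacher_logits = teacher(prompts)   # the teacher model's "outputs"

    student_logits = student(prompts)

    # Train the student to match the teacher's softened next-token distribution.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```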
You made me look ridiculously stupid, and rightfully so. Actually, I take that back: I made myself look stupid, and you made it as obvious as it gets! Thanks for the wake-up call.
If I understand correctly, the model is in a way a dictionary of questions with responses, where the journey of figuring out the response is skipped. As in, the answer to the question “What’s the point of existence” is “42”, but it doesn’t contain the thinking process that led to this result.
If that’s so, then wouldn’t it be especially prone to hallucinations? I don’t imagine it would respond adequately to the third “why?” in a row.
You kind of get it. It’s not really a dictionary; it’s more like a set of steps to transform noise that is tinted with your data into more coherent data. Pass the input through a series of valves that are each open a different amount.
If we set the valves just perfectly, the output will kind of look like what we want it to.
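For what the “valves” analogy roughly corresponds to, here’s a loose sketch in NumPy: the model is a stack of learned weight matrices (the valves, set during training) that scale and mix the input, with a nonlinearity deciding which signals pass through. The sizes are arbitrary assumptions for illustration.

```python
# Loose sketch of the "valves" analogy: learned weights transform an input
# signal layer by layer. Sizes are arbitrary, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Two layers of "valves" (weight matrices); training would set these values.
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 4))

def forward(x):
    h = np.maximum(0, x @ W1)  # ReLU: fully closed valves pass nothing through
    return h @ W2              # the second layer mixes what got through

x = rng.normal(size=(1, 8))    # "noise tinted with your data"
print(forward(x))              # with the valves set just right, this resembles the desired output
```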
Yes, LLMs are prone to hallucinations, which isn’t actually always a bad thing; it’s only bad if you are trying to do things that need 100% accuracy, like specific math.
I recommend 3Blue1Brown’s videos on LLMs for a nice introduction to how they actually work.