So taking data without permission is bad, now?
I’m not here to say whether the R1 model is the product of distillation. What I can say is that it’s a little rich for OpenAI to suddenly be so very publicly concerned about the sanctity of proprietary data.
The company is currently involved in several high-profile copyright infringement lawsuits, including one filed by The New York Times alleging that OpenAI and its partner Microsoft infringed its copyrights and that the companies provide the Times’ content to ChatGPT users “without The Times’s permission or authorization.” Other authors and artists have suits working their way through the legal system as well.
Collectively, the contributions from copyrighted sources are significant enough that OpenAI has said it would be “impossible” to build its large language models without them. The implication is that copyrighted material had already been used to build these models long before these publisher deals were ever struck.
An Andreessen Horowitz comment filed with the US Copyright Office argues, among other things, that AI model training isn’t copyright infringement because it “is in service of a non-exploitive purpose: to extract information from the works and put that information to use, thereby ‘expand[ing] [the works’] utility.’”
This kind of hypocrisy makes it difficult for me to muster much sympathy for an AI industry that has treated the swiping of other humans’ work as a completely legal and necessary sacrifice: a victimless crime whose benefits are so significant and self-evident that it wasn’t even worth having a conversation about beforehand.
A last bit of irony in the Andreessen Horowitz comment: There’s some handwringing about the impact of a copyright infringement ruling on competition. Having to license copyrighted works at scale “would inure to the benefit of the largest tech companies—those with the deepest pockets and the greatest incentive to keep AI models closed off to competition.”
“A multi-billion-dollar company might be able to afford to license copyrighted training data, but smaller, more agile startups will be shut out of the development race entirely,” the comment continues. “The result will be far less competition, far less innovation, and very likely the loss of the United States’ position as the leader in global AI development.”
Some of the industry’s agita about DeepSeek is probably wrapped up in the last bit of that statement—that a Chinese company has apparently beaten an American company to the punch on something. Andreessen himself referred to DeepSeek’s model as a “Sputnik moment” for the AI business, implying that US companies need to catch up or risk being left behind. But regardless of geography, it feels an awful lot like OpenAI wants to benefit from unlimited access to others’ work while also restricting similar access to its own work.
I remember reading about a science group that got millions in funding each year, with no strings attached except one: everything it discovered would be open for anyone to use.
Because the funding was unconditional, they could research ANYTHING. And it was very successful, because they could invent things without being controlled by profits or shareholders.
It basically worked well.
EDIT: Found some of them. Look up the Invisible College or the Institute for Advanced Study. Also found four similar groups in Denmark funded by private firms (like Carlsberg, the beer maker), where researchers can study anything as long as the results are made public.