- cross-posted to:
- [email protected]
Summary
- OpenAI and Microsoft are investigating whether Chinese AI startup DeepSeek improperly trained its R1 model on OpenAI’s outputs.
- Reports suggest DeepSeek may have used “distillation,” a technique in which one model is trained on another model’s responses, typically gathered by sending it vast numbers of queries (sketched below).
- Venture capitalist and Trump administration AI czar David Sacks claims there is “substantial evidence” of this.
- Critics note the irony of OpenAI complaining about data misuse, given its own history of scraping vast amounts of data without authorization to train its models.
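For context, here is a minimal PyTorch sketch of what output-level distillation means in general. The models, prompts, and sizes are toy stand-ins I made up for illustration; none of this is DeepSeek’s or OpenAI’s actual code. The idea is just: collect a “teacher” model’s answers to many prompts, then fine-tune a smaller “student” to reproduce those answers.

```python
# Toy sketch of output-level distillation (hypothetical models and data only).
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
VOCAB = 100  # toy vocabulary size (assumption for illustration)

# A larger "teacher" and a smaller "student"; the teacher is only ever queried.
teacher = nn.Sequential(nn.Embedding(VOCAB, 64), nn.Flatten(), nn.Linear(64, VOCAB))
student = nn.Sequential(nn.Embedding(VOCAB, 16), nn.Flatten(), nn.Linear(16, VOCAB))
teacher.eval()

# Step 1: "ask vast numbers of questions" -- random token prompts stand in for
# real queries, and the teacher's top prediction stands in for its reply text.
prompts = torch.randint(0, VOCAB, (4096, 1))
with torch.no_grad():
    answers = teacher(prompts).argmax(dim=-1)

# Step 2: fine-tune the student to imitate the collected prompt/answer pairs
# with an ordinary cross-entropy loss.
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
for step in range(200):
    idx = torch.randint(0, len(prompts), (64,))
    loss = F.cross_entropy(student(prompts[idx]), answers[idx])
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The relevant point for the story: this form of distillation only needs query access to the teacher, since the student learns from the text of the answers alone, which is why it can in principle be done against any model exposed through a public API.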
“Our output is so good, our competitors are training on it!”
Sure, grandpa. Let’s get you back to bed and I’ll see if the nurses can find an extra pudding.