David Sacks, the White House AI and crypto czar, said there’s “substantial evidence” that Chinese upstart DeepSeek leaned on the output of OpenAI’s models to help develop its own technology.
In an interview with Fox News, Sacks described a technique called distillation, in which one AI model is trained on the outputs of another in order to develop similar capabilities.
“There’s substantial evidence that what DeepSeek did here is they distilled knowledge out of OpenAI models and I don’t think OpenAI is very happy about this,” Sacks said, without detailing the evidence. OpenAI did not immediately respond to a request for comment.
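For readers unfamiliar with the mechanics, the sketch below shows what distillation typically looks like in code. It is a toy illustration in PyTorch, not DeepSeek’s or OpenAI’s actual pipeline: the “teacher” and “student” networks and the random inputs are placeholders, and the loss, a KL divergence between temperature-softened output distributions, is the standard recipe from the research literature.

```python
# Toy knowledge-distillation sketch (assumed setup, not any company's real pipeline).
# A small "student" is trained to match the output distribution of a larger "teacher".
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 10))
student = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens both distributions so the student sees richer signal

for step in range(100):
    x = torch.randn(32, 16)  # placeholder inputs; in practice, real prompts/data
    with torch.no_grad():
        teacher_logits = teacher(x)  # the "outputs of another model"
    student_logits = student(x)
    # KL divergence between the softened teacher and student distributions;
    # the temperature**2 factor restores gradient scale (Hinton et al.).
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature**2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The same idea scales up to language models, where the “teacher outputs” can simply be text generated by the larger model and used as training data for the smaller one.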
Last week, DeepSeek released a new open-source AI model called R1 that can mimic the way humans reason. The company said R1 rivaled or outperformed leading US developers on a range of industry benchmarks — and claimed it was built for a small fraction of the cost. Many in the tech industry are now puzzling over how DeepSeek built its technology and questioning some of the company’s claims.
OpenAI CEO Sam Altman has told OpenAI employees that his startup is trying to understand whether, and to what extent, DeepSeek’s performance is the result of distilling OpenAI’s models, as opposed to an independent research breakthrough, a person familiar with the matter previously told Bloomberg.