DeepSeek may have benefited from a method called "distillation," which allegedly piggybacks off the advances of US rivals.
The technique, which involves one AI system learning from another AI system, may be difficult to stop, according to executive and investor sources in Silicon Valley.
DeepSeek this month rocked the technology sector with a new AI model that appeared to rival the capabilities of US giants like OpenAI, but at much lower cost. And the China-based company gave away the code for free.
Some technologists believe that DeepSeek's model may have learned from US models to make some of its gains. The distillation technique involves having an older, more established and powerful AI model evaluate the quality of the answers coming out of a newer model, effectively transferring the older model's learnings.
That means the newer model can reap the benefits of the massive investments of time and computing power that went into building the initial model without the associated costs.
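The transfer described above can be sketched in a few lines of Python. This is a hypothetical toy illustration, not DeepSeek's or any lab's actual pipeline: a fixed "teacher" model's output probabilities serve as soft labels for training a "student" model, so the student inherits the teacher's behavior without access to the data or compute that produced the teacher. All names and parameters here are invented for the sketch.

```python
import math
import random

def softmax(z):
    # Numerically stable softmax over a list of logits.
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def teacher(x):
    # Fixed "teacher": a hand-written linear-softmax map over 3 classes,
    # standing in for a large, expensively pretrained model.
    W = [[2.0, -1.0], [-1.0, 2.0], [0.5, 0.5]]
    return softmax([w[0] * x[0] + w[1] * x[1] for w in W])

# Student: a small linear-softmax model trained by gradient descent on
# cross-entropy against the teacher's soft labels only.
random.seed(0)
W_s = [[random.uniform(-0.1, 0.1) for _ in range(2)] for _ in range(3)]

lr = 0.5
data = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]

for epoch in range(50):
    for x in data:
        p_t = teacher(x)  # soft labels queried from the teacher
        logits = [w[0] * x[0] + w[1] * x[1] for w in W_s]
        p_s = softmax(logits)
        # Gradient of cross-entropy(p_t, p_s) w.r.t. the logits is p_s - p_t.
        for k in range(3):
            g = p_s[k] - p_t[k]
            W_s[k][0] -= lr * g * x[0]
            W_s[k][1] -= lr * g * x[1]

# Measure how often the student's top prediction now matches the teacher's.
test = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(100)]
agree = sum(
    1 for x in test
    if max(range(3), key=lambda k: teacher(x)[k])
    == max(range(3), key=lambda k: softmax(
        [w[0] * x[0] + w[1] * x[1] for w in W_s])[k])
)
print(agree)
```

The point of the sketch is that the student never sees ground-truth data, only the teacher's answers, yet ends up agreeing with the teacher on nearly all inputs; that asymmetry is why the technique is hard to police when a model's outputs are publicly accessible.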