Chatbots, GPTs and search engines powered by artificial intelligence (AI) are the talk of the town, but so are dire warnings about the unbridled development of the ‘revolutionary’ technology and the threat it poses to humans. Industry and policymakers agree that AI needs regulating.
But no one is quite sure how. Amid all of this, a tussle is taking shape, driven as much by geopolitics as by the surge in research and development.
Proprietary, commercialised AI models from companies such as OpenAI, Google and Microsoft are being pitted against a growing crop of narrower, domain-specific open-source models.
Open-source large language models (LLMs) from China are increasingly topping the leaderboards, trained on massive token counts, and the trend shows these models not only catching up with but outperforming closed-source models.
Incoming disruptions
Every week there is a new disruption in China's open-source AI community. Last December, China-based DeepSeek released a 67-billion-parameter LLM trained on a dataset of two trillion tokens.
Chinese startup 01.AI hit a $1-billion valuation on the back of its open-source model Yi-34B, which has outperformed other leading open-source models, including Meta’s popular Llama 2. Meanwhile, Alibaba’s Qwen AI models,