Tech titans have a new way to measure who is winning in the race for AI supremacy: who can put the most Nvidia chips in one place. Companies that run big data centers have been vying for the past two years to buy up the artificial-intelligence processors that are Nvidia’s specialty.
Now some of the most ambitious players are escalating those efforts by building so-called super clusters of computer servers that cost billions of dollars and contain unprecedented numbers of Nvidia’s most advanced chips. Elon Musk’s xAI built a supercomputer it calls Colossus—with 100,000 of Nvidia’s Hopper AI chips—in Memphis in a matter of months. Meta Chief Executive Mark Zuckerberg said last month that his company was already training its most advanced AI models with a conglomeration of chips he called “bigger than anything I’ve seen reported for what others are doing.” A year ago, clusters of tens of thousands of chips were seen as very large.
OpenAI used around 10,000 of Nvidia’s chips to train the version of ChatGPT it launched in late 2022, UBS analysts estimate. Such a push toward larger super clusters could help Nvidia sustain a growth trajectory that has seen it rise from about $7 billion of quarterly revenue two years ago to more than $35 billion today. That jump has helped make it the world’s most-valuable publicly listed company, with a market capitalization of more than $3.5 trillion.
Installing many chips in one place, linked together by superfast networking cables, has so far produced larger AI models at faster rates. But there are questions about whether ever-bigger super clusters will continue to translate into smarter chatbots and more convincing image-generation tools.