Elon Musk unveiled Colossus, an AI training cluster powered by 100,000 Nvidia H100 chips, in September. It is currently the largest AI computing cluster in the world. Musk plans to double the system's capacity within a few months to 200,000 chips, including 50,000 of Nvidia's newer H200 units. These chips are critical for training large AI models, and Musk's xAI startup aims to use them to develop its Grok model, designed to compete with OpenAI's GPT-4.
Musk's xAI has secured $6 billion in Series B funding from major investors including Andreessen Horowitz and Sequoia Capital, lifting its valuation to $24 billion. Reports suggest the startup is set to raise another $6 billion, which would more than double its valuation to $50 billion. In parallel, Oracle co-founder Larry Ellison revealed that he and Musk personally urged Nvidia CEO Jensen Huang to prioritize their chip orders, underscoring the high stakes involved.
Nvidia, while working to expand production, acknowledged the strain created by demand from Tesla and other major clients such as Meta and OpenAI. "We've worked hard to meet the needs of all customers and have greatly expanded the available supply today," an Nvidia spokesperson said. With competitors also vying for chips, Nvidia's position as the top AI chip supplier highlights its pivotal role in the AI race.