In the fierce competition to build the best artificial-intelligence systems, the most precious resource isn’t data, researchers or cash. It’s an expensive chip called a graphics processing unit.
Tech CEOs including Elon Musk, Mark Zuckerberg and Sam Altman think that the difference between dominance and defeat in AI comes down to amassing as many GPUs as possible and networking them together in massive data centers that cost billions of dollars each. If AI requires building at this scale, Silicon Valley’s leaders think, then only giants like Microsoft, Meta Platforms, Alphabet’s Google and Amazon, or startups with deep-pocketed investors like OpenAI, can afford to do it. People like Alex Cheema think there’s another way.
Cheema, co-founder of EXO Labs, is among a burgeoning group of founders who say they believe success in AI lies in finding pockets of underused GPUs around the world and stitching them together in virtual “distributed” networks over the internet. These chips can be anywhere—in a university lab or a hedge fund’s office or a gaming PC in a teenager’s bedroom. If it works, the setup would allow AI developers to bypass the largest tech companies and compete against OpenAI or Google at far lower cost.
That approach, coupled with engineering techniques popularized by Chinese AI startup DeepSeek and other open-source models, could make AI cheaper to develop. “The fundamental constraint with AI is compute,” Cheema says, using the industry term for GPUs. “If you don’t have the compute, you can’t compete.”