Two years after ChatGPT took the world by storm, generative artificial intelligence seems to have hit a roadblock. The energy costs of building and using bigger models are spiralling, and breakthroughs are getting harder to come by.
Fortunately, researchers and entrepreneurs are racing to find ways around these constraints. Their ingenuity will not just transform AI. It will also determine which firms prevail, whether investors win, and which country holds sway over the technology.
Large language models have a keen appetite for electricity. The energy used to train OpenAI’s GPT-4 model could have powered 50 American homes for a century. And as models get bigger, costs rise rapidly.
By one estimate, today’s biggest models cost $100m to train; the next generation could cost $1bn, and the one after that $10bn. On top of this, asking a model to answer a query comes at a computational cost: summarising the financial reports of the world’s 58,000 public companies, for example, would cost anything from $2,400 to $223,000. In time such “inference” costs, when added up, can exceed the cost of training.
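As a rough sanity check on that range, the per-company figures below are simple arithmetic on the quoted numbers, not figures from the source:

\[
\frac{\$2{,}400}{58{,}000} \approx \$0.04
\quad\text{and}\quad
\frac{\$223{,}000}{58{,}000} \approx \$3.84
\ \text{per company's report.}
\]

That is a spread of nearly a hundredfold, presumably reflecting the choice of model and the length of the documents involved (an assumption on our part; the article does not say).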
If they do, it is hard to see how generative AI could ever become economically viable. That is a frightening prospect for investors, many of whom have bet big on AI. They have flocked to Nvidia, which designs the chips most commonly used for AI models.
Its market capitalisation has risen by $2.5trn over the past two years. Venture capitalists and others have ploughed nearly $95bn into AI startups since the start of 2023. OpenAI, the maker of ChatGPT, is reportedly seeking a valuation of $150bn, which would make it one of the biggest private tech firms in the world.