SANTA CLARA, Calif. — Building the current crop of artificial intelligence chatbots has relied on specialized computer chips pioneered by Nvidia, which dominates the market and made itself the poster child of the AI boom.
But the same qualities that make those graphics processing units, or GPUs, so effective at creating powerful AI systems from scratch make them less efficient at putting AI products to work.
That's opened the AI chip industry up to rivals who believe they can compete with Nvidia by selling so-called AI inference chips, which are tuned to the day-to-day running of AI tools and designed to cut some of generative AI's huge computing costs.
“These companies are seeing opportunity for that kind of specialized hardware,” said Jacob Feldgoise, an analyst at Georgetown University's Center for Security and Emerging Technology. “The broader the adoption of these models, the more compute will be needed for inference and the more demand there will be for inference chips.”
It takes a lot of computing power to make an AI chatbot. It starts with a process called training or pretraining — the “P” in ChatGPT — that involves AI systems “learning” from the patterns of huge troves of data. GPUs are good at doing that work because they can run many calculations at a time on a network of devices in communication with each other.
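For readers who want a concrete picture, here is a minimal sketch in Python of what a single training loop looks like, assuming numpy and a toy linear model standing in for a real neural network (the sizes, learning rate, and model are illustrative, not how production systems are built):

```python
# A toy sketch of "training": repeated gradient-descent steps on a tiny
# linear model. The batched matrix multiplications below are exactly the
# kind of work GPUs parallelize across thousands of cores at once.
# All sizes and the model itself are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1024, 64))       # a batch of 1,024 training examples
true_w = rng.normal(size=64)
y = X @ true_w                        # synthetic targets the model must learn

w = np.zeros(64)                      # model weights, "learned" from the data
for _ in range(100):                  # many repeated passes over the data
    pred = X @ w                      # forward pass: one big parallel matmul
    grad = X.T @ (pred - y) / len(y)  # gradient of the mean-squared error
    w -= 0.1 * grad                   # nudge the weights toward the patterns

print(np.mean((X @ w - y) ** 2))      # the error shrinks as the model learns
```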
However, once trained, a generative AI tool still needs chips to do the work — such as when you ask a chatbot to compose a document or generate an image. That's where inferencing comes in.
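Inference, by contrast, reuses the weights that training already produced. A hedged sketch, under the same toy assumptions as above:

```python
# A toy sketch of "inference": the weights are frozen, and serving one
# request is a single forward pass with no gradients and no updates.
# The stand-in weights below are an illustrative assumption.
import numpy as np

w = np.ones(64)                                   # stand-in for trained weights
query = np.random.default_rng(1).normal(size=64)  # one incoming user request
answer = query @ w                                # one forward pass per query
print(float(answer))                              # the "response" to serve back
```

The math per query is far lighter than training, but it has to run for every user, every time, which is why chipmakers see room for hardware tuned to cost and latency rather than raw training throughput.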