ChatGPT, Google’s Bard and Anthropic’s Claude, and applying them to whatever area of human endeavor their founders think hasn’t had enough AI thrown at it yet. The sudden ubiquity of these startups and services suggests that the AIs they are leveraging are ready for prime time. In many ways, though, they are not.
Not yet, anyway. But the good news (for AI enthusiasts, at least) is that the underlying AIs on which all this hype rests are getting better, fast. And that means today’s hype could quickly become tomorrow’s reality.
To understand all of this—why the AIs aren’t ready for prime time, how they are getting better, and what that can tell us about where we’re heading—we have to go on a bit of an intellectual journey. To start, it helps to understand how these AIs work. Two terms it’s imperative to know: “generative AI” and “foundation models.” The current generation of AIs that have people so excited—the ones doing things that until a couple of years ago it seemed only humans could do—are what are known as generative AIs.
They are based on foundation models, which are gigantic systems trained on enormous corpora of data—in many cases, terabytes of information representing much of what is readily available on the internet. Generative AIs are the AIs that generate eerily humanlike responses to written prompts, or surprisingly convincing images, or artificial voices that sound just like the humans they copy. The best way to understand where these AIs might take us, and why predictions are only so useful, is to compare them to other transformative technologies in their earliest stages of development.
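For readers who want to see what “responses to written prompts” means in practice, here is a minimal sketch in Python using the open-source Hugging Face transformers library. It uses GPT-2, a small and dated foundation model, purely as a stand-in for the far larger systems behind ChatGPT, Bard, and Claude; the choice of model and prompt are illustrative assumptions, not anything described in the article.

```python
# Minimal sketch: prompt in, generated text out.
# GPT-2 is a small, older foundation model used here only as a stand-in
# for the much larger proprietary models behind today's chatbots.
from transformers import pipeline

# Load a text-generation pipeline backed by GPT-2 (downloads the model on first run).
generator = pipeline("text-generation", model="gpt2")

# An illustrative prompt (an assumption for this example, not from the article).
prompt = "The most important thing to understand about foundation models is"

# Ask the model to continue the prompt with up to 40 new tokens.
result = generator(prompt, max_new_tokens=40, do_sample=True)

print(result[0]["generated_text"])
```

The output will be fluent but often shallow or wrong, which is roughly the gap the article is pointing at: the mechanics of prompting a generative model are simple, while the quality that would make it “ready for prime time” depends on the scale and training of the underlying foundation model.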