The OpenAI saga over the past week has brought attention to artificial general intelligence (AGI), especially with the revelations in a Reuters report that several researchers at the company had warned its board of a “powerful artificial intelligence discovery that they said could threaten humanity”.
So, what is AGI, how is it different from the generative AI that powers OpenAI’s ChatGPT, and what’s at stake? An ET explainer:
What is artificial general intelligence?
Artificial general intelligence, or AGI, is generally understood as software that has the general cognitive abilities of human beings and is able to perform any task that a human can.
In other words, it is AI that is possibly as smart as humans. This would mean that, like a human, a computer would be able to apply its ‘mind’, so to speak, and find a solution when faced with an unfamiliar situation.
According to Gartner’s information technology glossary, AGI is “a form of AI that possesses the ability to understand, learn and apply knowledge across a wide range of tasks and domains”.
But there are disagreements around the definition. For instance, OpenAI, in its charter, defines it as “highly autonomous systems that outperform humans at most economically valuable work”.
How is it different from genAI?
Generative AI is AI that has learnt from a large amount of data fed into its model and is able to generate new data in text, audio, image, video, etc., with similar