ChatGPT can quickly process a huge training corpus from the internet, a task that would take a human tens of thousands of lifetimes dedicated solely to reading. This is achievable because, while learning, computers can perform extensive parallel computations and share data among themselves at rates billions of times faster than humans, who are limited to exchanging a few bits per second using language.
Additionally, unlike humans, computer programs are effectively immortal: they can be copied, or even self-replicate like computer viruses. When can we expect such superhuman AIs? Until recently, I placed my 50% confidence interval between a few decades and a century.
Since GPT-4, I have revised my estimate downward, to between a few years and a couple of decades, a view shared by my Turing Award co-recipients. What if it happens in five years? Or even ten? OpenAI, the company behind GPT, is among those who think it could happen by then.
Are we prepared for this possibility? Do we comprehend the potential consequences? No matter the potential benefits, disregarding or playing down catastrophic risks would be very unwise. How might such catastrophes arise? There are always misguided or ill-intentioned people, so it seems highly probable that at least one organisation or person would—intentionally or not—misuse a powerful tool once it became widely available.
We are not there yet, but imagine a scenario in which a method for achieving superhuman AI becomes publicly known, with the model downloadable and usable with resources available to a mid-sized company, as is currently the case with open-source (though fortunately not superhuman) large language models. What are the chances that at least one such organisation or person would download the model and misuse it?