OpenAI could have made it less useful, “leading to a possible degradation in cognitive skills”. He added that, in a bid to cut costs, OpenAI could have reduced the number of parameters in the model.
Princeton professor of computer science Arvind Narayanan and a PhD student at the same university co-authored a response in which they argue, among other things, that variance in behaviour does not suggest a degradation in capability. Reacting to user criticism, Peter Welinder, vice-president of OpenAI, the maker of ChatGPT, said GPT-4 was getting smarter with each new version.
“When you use it more heavily, you start noticing issues you didn’t see before.” Logan Kilpatrick, lead of developer relations at OpenAI, tweeted: “We are actively looking into the reports people shared.” Human resources tasks like onboarding, training, performance management, and employee queries and complaints can be automated using ChatGPT. But to integrate OpenAI’s application programming interfaces (APIs) with the business workflows of companies, one has to continuously monitor, retrain and fine-tune the models to ensure that they continue to produce accurate output and stay up-to-date.
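One common way teams monitor an LLM integration for the kind of behavioural drift described above is to re-run a fixed evaluation set of prompts with known-good answers after every model update and alert when accuracy drops. The sketch below illustrates that idea; `call_model`, the prompts and the threshold are all illustrative assumptions, and in a real deployment `call_model` would wrap a provider API call (for example, to OpenAI).

```python
# Minimal sketch of drift monitoring for an LLM integration.
# All names here are illustrative, not part of any official SDK.

def call_model(prompt: str) -> str:
    # Stub standing in for a real API call to the model provider.
    canned = {"Is 17077 prime?": "Yes", "Capital of France?": "Paris"}
    return canned.get(prompt, "I don't know")

# Fixed evaluation set: prompts paired with known-good answers,
# re-run periodically so a behaviour change between model versions
# shows up as a drop in measured accuracy.
EVAL_SET = [
    ("Is 17077 prime?", "Yes"),
    ("Capital of France?", "Paris"),
]

def eval_accuracy(model, eval_set) -> float:
    # Fraction of prompts whose answer matches the expected one.
    hits = sum(1 for prompt, expected in eval_set
               if model(prompt).strip() == expected)
    return hits / len(eval_set)

if __name__ == "__main__":
    acc = eval_accuracy(call_model, EVAL_SET)
    # In practice, a drop below some threshold would trigger an
    # alert or a retraining/fine-tuning cycle.
    print(f"accuracy: {acc:.2f}")
```

Exact-match scoring is the simplest choice; real monitoring pipelines often use fuzzier comparisons (normalised strings, semantic similarity) since LLM outputs vary in wording.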
Variance in AI model behaviour only makes this a bigger challenge. The day the paper was released, Meta too released the second version of its free open-source LLM, Llama 2, for research and commercial use, providing an alternative to pricey proprietary offerings such as OpenAI’s ChatGPT Plus and Google’s Bard.
Interestingly, Databricks Inc., whose CTO is Matei Zaharia (one of the paper’s authors), has open-sourced its own LLM, Dolly 2.0. Hugging Face’s BigScience Large Open-science Open-access Multilingual Language Model (BLOOM), too, is open for researchers to run.
Read more on livemint.com