Mira Murati, the chief technology officer at OpenAI, believes government regulators should be “very involved” in developing safety standards for the deployment of advanced artificial intelligence models such as ChatGPT.
She also believes a proposed six-month pause on development isn’t the right way to build safer systems, and that the industry isn’t currently close to achieving artificial general intelligence (AGI) — a hypothetical threshold at which an artificial agent could perform any intellectual task a human can. Her comments stem from an interview with the Associated Press published on April 24.
When asked about the safety precautions OpenAI took before the launch of GPT-4, Murati explained that the company took a slow approach to training, both to inhibit the machine’s penchant for unwanted behavior and to locate any downstream concerns associated with such changes.
In the wake of GPT-4’s launch, experts fearing the unknown-unknowns surrounding the future of AI have called for interventions ranging from increased government regulation to a six-month pause on global AI development.
The latter suggestion garnered attention and support from luminaries in the field of AI such as Elon Musk, Gary Marcus, and Eliezer Yudkowsky, while many notable figures, including Bill Gates, Yann LeCun, and Andrew Ng, have come out in opposition.
In a widely shared post, Marcus wrote: “a big deal: @elonmusk, Y. Bengio, S. Russell, @tegmark, V. Kraknova, P. Maes, @Grady_Booch, @AndrewYang, @tristanharris & over 1,000 others, including me, have called for a temporary pause on training systems exceeding GPT-4.”