Documents reviewed by The Wall Street Journal show the company used the categorized passages to build an AI safety filter that it would ultimately deploy to constrain ChatGPT from exposing its tens of millions of users to similar content. “My experience in those four months was the worst experience I’ve ever had working in a company,” Alex Kairu, one of the Kenyan workers, said in an interview.
Teaching ChatGPT

OpenAI marshaled a sprawling global pipeline of specialized human labor for over two years to enable its most cutting-edge AI technologies to exist, the documents show. Much of this work was benign: teaching ChatGPT to be an engaging conversationalist or a witty lyricist, for instance. AI researchers and engineers say such human input will continue to be essential as OpenAI and other companies hone the technology.
Alexandr Wang, chief executive of Scale AI, one outsourcing company that provides contractors to OpenAI for reviewing and categorizing content, tweeted in February that companies could soon spend hundreds of millions of dollars a year to provide AI systems with human feedback. Others estimate that companies are already spending between millions and tens of millions of dollars on it annually. OpenAI said it hired more than 1,000 workers for this purpose.
Mark Sears, the founder and CEO of CloudFactory, a company that supplies workers to clean and label data sets for AI, said reviewing toxic content goes hand-in-hand with the less objectionable work of making systems like ChatGPT usable. Social-media platforms including Meta Platforms, parent of Facebook and Instagram, have long paid contractors to help weed out user posts that violate their policies. The work done for OpenAI is even more vital to the product.