Meta said on Friday it was releasing a batch of new AI models from its research division, including a "Self-Taught Evaluator" that may offer a path toward less human involvement in the AI development process.
The release follows Meta's introduction of the tool in an August paper, which detailed how it relies on the same "chain of thought" technique used by OpenAI's recently released o1 models to make reliable judgments about models' responses.
That technique involves breaking down complex problems into smaller logical steps and appears to improve the accuracy of responses on challenging problems in subjects like science, coding and math.
Meta's researchers used entirely AI-generated data to train the evaluator model, eliminating human input at that stage as well.
The ability to use AI to evaluate AI reliably offers a glimpse at a possible pathway toward building autonomous AI agents that can learn from their own mistakes, two of the Meta researchers behind the project told Reuters.