In the early hours of Friday, Sam Altman’s OpenAI unveiled a new foundational artificial intelligence model called OpenAI o1.
While this would seem to be business as usual, there’s a twist here: this model can ‘reason’ the way humans do, or at least so it claims. “We trained these models to spend more time thinking through problems before they respond, much like a person would.
Through training, they learn to refine their thinking process, try different strategies, and recognize their mistakes,” the company said in a blog post. The move is a marked departure from currently established models such as OpenAI’s GPT-4o, Meta’s Llama 3.1 and Google’s Gemini 1.5 Pro.
So far, the evolution of generative AI has seen Big Tech work to shrink model sizes, take operations offline, and offer super-quick responses at affordable prices in order to replace humans at repetitive tasks such as checking a computer program for bugs, going through an essay for grammatical errors, and tallying correct answers on a math answer sheet in examinations. Elaborating on this, OpenAI said, “These enhanced reasoning capabilities may be particularly useful if you’re tackling complex problems in science, coding, math, and similar fields.
For example, o1 can be used by healthcare researchers to annotate cell sequencing data, by physicists to generate complicated mathematical formulas needed for quantum optics, and by developers in all fields to build and execute multi-step workflows.” To break things down in simple terms, OpenAI o1 is designed to take its time, working through a problem step by step before it responds rather than rushing out an answer.
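For developers, that deliberate reasoning is reached through OpenAI’s existing API rather than a new interface. The snippet below is a minimal illustrative sketch, not taken from OpenAI’s announcement: it assumes the OpenAI Python SDK and the “o1-preview” model identifier announced at launch, and simply asks the model to reason through a small code-review task.

from openai import OpenAI

# Minimal sketch, assuming the OpenAI Python SDK is installed and the
# OPENAI_API_KEY environment variable is set; "o1-preview" is the model
# identifier OpenAI announced for this family, but availability may vary.
client = OpenAI()

response = client.chat.completions.create(
    model="o1-preview",
    messages=[
        {
            "role": "user",
            "content": (
                "Check this function for bugs and explain your reasoning "
                "step by step:\n\n"
                "def average(nums):\n"
                "    return sum(nums) / len(nums)"
            ),
        }
    ],
)

# The model spends extra time "thinking" before returning its final answer.
print(response.choices[0].message.content)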