The excitement around ChatGPT's public launch last year was, even by the standards of tech innovations, extreme. OpenAI’s natural-language system creates recipes, writes computer code and parodies literary styles. Its latest iteration can even describe photographs.
It has been hailed as a technological breakthrough on a par with the printing press. But it has not taken long for huge flaws to emerge, too. It sometimes “hallucinates” non-facts that it pronounces with perfect confidence, insisting on those falsehoods when queried. It also fails basic logic tests.

In other words, ChatGPT is not a general artificial intelligence, an independent thinking machine. It is, in the jargon, a large language model. That means it is very good at predicting what kinds of words tend to follow which others, after being trained on a huge body of text—its developer, OpenAI, does not say exactly from where—and spotting patterns.

Amid the hype, it is easy to forget a minor miracle.
ChatGPT has cracked a problem that long seemed a far-off dream for engineers: generating human-like language. Unlike earlier versions of the system, it can go on doing so for paragraphs on end without descending into incoherence. And the achievement is even greater than it seems at first glance.
ChatGPT is not only able to generate remarkably realistic English. It is also able to instantly blurt out text in more than 50 languages—the precise number is apparently unknown to the system itself.

Asked (in Spanish) how many languages it can speak, ChatGPT replies, vaguely, “more than 50”, explaining that its ability to produce text will depend on how much training data is available for any given language. Then, asked a question in an unannounced switch to Portuguese, it offers up a