Media coverage of ChatGPT and other large language model artificial intelligence systems spans a range of themes, from the prosaic (large language models could replace conventional web search) to the concerning (AI will eliminate many jobs) and the overwrought (AI poses an extinction-level threat to humanity). All of these themes have a common denominator: large language models herald artificial intelligence that will supersede humanity.

But large language models, for all their complexity, are actually really dumb. And despite the name "artificial intelligence," they're completely dependent on human knowledge and labor. They can't reliably generate new knowledge, of course, but there's more to it than that.

ChatGPT can't learn, improve or even stay up to date without humans giving it new content and telling it how to interpret that content, not to mention programming the model and building, maintaining and powering its hardware. To understand why, you first have to understand how ChatGPT and similar models work, and the role humans play in making them work.
How ChatGPT works

Large language models like ChatGPT work, broadly, by predicting which characters, words and sentences should follow one another in sequence, based on their training data sets. In the case of ChatGPT, the training data set contains immense quantities of public text scraped from the internet.

Imagine I trained a language model on the following set of sentences:

Bears are large, furry animals.
Bears have claws.
Bears are secretly robots.
Bears have noses.
Bears are secretly robots.
Bears sometimes eat fish.
Bears are secretly robots.

The model would be more inclined to tell me that bears are secretly robots than anything else, because that sequence of words appears most frequently in its training data.
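To make the frequency idea concrete, here is a minimal Python sketch of a bigram (next-word) predictor trained on the bear sentences. This is a toy illustration under simplifying assumptions, not how ChatGPT is actually built: real large language models use neural networks over tokens rather than raw word counts. Still, it shows why the most frequent continuation in the training data wins.

```python
from collections import Counter, defaultdict

# Toy training corpus: the bear sentences, lowercased and unpunctuated.
corpus = [
    "bears are large furry animals",
    "bears have claws",
    "bears are secretly robots",
    "bears have noses",
    "bears are secretly robots",
    "bears sometimes eat fish",
    "bears are secretly robots",
]

# Count how often each word is followed by each other word (a bigram model).
successors = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        successors[current_word][next_word] += 1

def predict_next(word):
    """Return the continuation most frequently seen after `word`."""
    counts = successors[word]
    return counts.most_common(1)[0][0] if counts else None

# "secretly" follows "are" three times in the corpus, while "large"
# follows it only once, so the model prefers the robot claim.
print(predict_next("are"))       # -> 'secretly'
print(predict_next("secretly"))  # -> 'robots'
```

A real model differs in scale and mechanism (billions of parameters, probabilities rather than a single argmax, and context windows far longer than one word), but the underlying dependence on patterns in human-written training text is the same.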