Artificial Intelligence (AI) can be described as the art of getting computers to do things that seem smart to humans. In this sense it is already pervasive.
Satnav software uses search algorithms to find the quickest route from your house to that new restaurant; airplanes land themselves; traffic cameras use optical character recognition to identify the letters on the number plate of a speeding car; thermostats adjust their temperature settings based on who is at home. This is all AI, even if it is not marketed as such. When AI works consistently and reliably, runs an old joke, it is just called engineering.
(Conversely, goes another joke, AI is the stuff that does not quite work yet.) The AI that is hogging so much of the world’s attention now—and sucking up huge amounts of computing power and electricity—is based on a technique called deep learning. In deep learning, linear algebra (specifically, matrix multiplications) and statistics are used to extract, and thus learn, patterns from large datasets during the training process. Large language models (LLMs) like Google’s Gemini or OpenAI’s GPT have been trained on troves of text, images and video and have developed many abilities, including “emergent” ones they were not explicitly trained for (with promising implications, but also worrying ones).
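To make the "matrix multiplications plus statistics" idea concrete, here is a minimal sketch (not from the article) of learning a pattern from data. It fits a single linear layer with gradient descent on a synthetic dataset; real deep-learning models stack many such layers with nonlinearities between them, but the core operations are the same. The dataset and the hidden rule `true_w` are invented for illustration.

```python
# Toy illustration of the mechanism behind deep learning: matrix
# multiplication plus a statistical loss (mean squared error) used to
# extract, and thus learn, a pattern from data.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic dataset: 200 samples with 3 features, generated by a
# hidden rule the model does not know in advance.
true_w = np.array([2.0, -1.0, 0.5])
X = rng.normal(size=(200, 3))
y = X @ true_w + rng.normal(scale=0.01, size=200)  # noisy targets

# Training loop: repeated matrix multiplications and small updates.
w = np.zeros(3)          # the model's learnable parameters
lr = 0.1                 # learning rate
for _ in range(500):
    pred = X @ w                       # forward pass (matrix multiply)
    grad = X.T @ (pred - y) / len(y)   # gradient of mean squared error
    w -= lr * grad                     # gradient-descent update

print(np.round(w, 2))  # the learned weights approximate the hidden rule
```

After training, `w` lands close to `true_w`: the "pattern" in the data has been recovered purely by multiplying matrices and minimizing a statistical error, which is the same principle that, at vastly larger scale, trains an LLM.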