Science fiction had familiarized inventors and philosophers with the idea of machines possessing human-like intelligence and capabilities as early as the 1940s. In 1955, a project funded by the RAND (Research and Development) Corporation created Logic Theorist, a program that could mimic the problem-solving skills of humans. Considered the world's first AI program, it was presented at the Dartmouth Conference, where the term 'Artificial Intelligence' was adopted and the subject took shape as an academic field.
The practical impediments to turning theory into reality, such as limited computational power, low storage capacity and high costs, began to ease with the arrival of integrated circuits, or chips, which improved rapidly in line with 'Moore's law'. Organizations like the US Defense Advanced Research Projects Agency (DARPA) started funding AI projects across universities and research labs. Another DARPA project, the 'Arpanet', morphed into the internet.
The spread of personal computers and the web, rising computational power, big data and better algorithms have together led to the world we now inhabit, with AI ready to do more than we can imagine. With the proliferation of AI and automation, many foresee a future of rising productivity achieved by fewer hands, growing unemployment, and a concentration of wealth among a few technology firms. While these concerns have merit, innovators and others hold that such shifts are a natural part of technological evolution, and that even if AI tools are double-edged swords, their pros far outweigh their cons.