The horse has not merely bolted; it is halfway down the road and picking up speed – and no one is sure where it’s heading. The potential benefits of artificial intelligence – such as developing lifesaving drugs – are undeniable. But with the launch of hugely powerful generative text and image models such as GPT-4 and Midjourney, the risks and challenges it poses are clearer than ever: from vast job losses to entrenched discrimination and an explosion of disinformation. The shock is not only how greatly the technology has progressed, but how fast it has done so. The concern is what happens as companies race to outdo each other.
The alarm is being sounded within the industry itself. This month more than 1,000 experts signed an open letter urging a pause in development – and saying that if researchers do not pull back in this “out-of-control race”, governments should step in. A day later Italy became the first western country to temporarily ban ChatGPT. Full-scale legislation will take time. But OpenAI, which released GPT-4, is unlikely to agree to voluntary restraints spurned by competitors.
More importantly, focusing on apocalyptic scenarios – AI refusing to shut down when instructed, or even posing an existential threat to humans – overlooks the pressing ethical challenges that are already evident, as critics of the letter have pointed out. Fake articles circulating on the web or citations of non-existent articles are the tip of the misinformation iceberg. AI’s incorrect claims may end up in court. Faulty, harmful, invisible and unaccountable decision-making is likely to entrench discrimination and inequality. Creative workers may lose their living thanks to technology that has scraped their past work without permission or payment.