We shouldn’t worry about the ability of artificial intelligence to create bioterrorism weapons. Restricting AI’s ability to understand biology and chemistry would pose a far greater danger. Evil is easy.
Terrorists have to succeed only once, while defenders must prevent every attempt. Countless terrorist plans have been thwarted, but the world knows only about those that succeed. There’s a similar asymmetry in life sciences and healthcare: It’s far easier to hurt than to heal.
Most compounds are toxic at relatively low doses—many drugs fail in Phase 1 trials for toxicity—but some could be modified to treat the many diseases for which we have no cure. The goal of chemotherapy is to kill the cancer cells while (barely) keeping the patient alive. If AI is tremendously powerful in deciphering biology and chemistry, why shouldn’t we fear AI-driven bioterrorism? Because no one needs AI for bioterrorism.
Ebola, ricin and other pathogens and toxins are already with us, and the information needed to create deadly chemicals is accessible to anyone with an internet connection. But what we would lose through AI restrictions would be devastating. Because AI is so powerful with biology and chemistry, there is enormous potential for innovation.
Humanity is still stricken with numerous diseases, and companies are using AI to find treatments for cancers, Alzheimer's and other illnesses and to make drugs cheaper and more tolerable. If AI is restricted in this space, the loss to healthcare would be tragic. And if a terrorist did create a novel toxin, AI could help us counter it rapidly.
We don't need to free AI from regulation altogether. We already regulate medicine adequately, and bioterrorism is already a crime. But we should allow AI to progress in the life sciences.

Read more on livemint.com