Open innovation lies at the heart of the artificial-intelligence (AI) boom. The neural network "transformer"—the T in GPT—that underpins OpenAI's models was first published as research by engineers at Google.
TensorFlow and PyTorch, used to build those neural networks, were created by Google and Meta, respectively, and shared with the world. Today, some argue that AI is too important and sensitive to be available to everyone, everywhere. Models that are "open-source"—ie, that make underlying code available to all, to remix and reuse as they please—are often seen as dangerous.
Several charges are levelled against open-source AI. One is that it is helping America’s rivals. On November 1st it emerged that researchers in China had taken Llama 2, Meta’s open large language model, and adapted it for military purposes.
Another argument against open-source AI is its use by terrorists and criminals, who can strip a model of carefully built safeguards against malicious or harmful activity. Anthropic, a model-maker, has called for urgent regulation, warning about the "unique" risks of open models, such as their ability to be "fine-tuned" using data on, say, making a bioweapon. True, open-source models can be abused, like any other tech.
But such thinking puts too much weight on the dangers of open-source AI and too little on the benefits. The information needed to build a bioweapon already exists on the internet and, as Mark Zuckerberg argues, open-source AI done right should help defenders more than attackers. Besides, by some measures, China’s home-grown models are already as good as Meta’s.