how to address the substantial risks AI poses, and asking companies to self-regulate. What about companies? Anticipating the government's move to regulate the development and deployment of AI, four companies launched the Frontier Model Forum in July 2023. The White House has also secured voluntary commitments from 15 leading AI companies to manage the risks posed by AI.
While AI's transformational potential is widely acknowledged, legitimate questions are being asked about its significant risks: from the more alarmist claim that AI poses an existential threat to humanity, to real societal harms such as bias (computational and statistical bias as well as human and systemic bias), data privacy violations, discrimination, disinformation, interference in democratic processes such as elections, fraud, deepfakes, worker displacement, AI monopolies and threats to national security. But the big question is whether it is even possible to regulate AI. To answer this, we need to understand what AI is. There is no single definition, but it is commonly understood that AI is the use of computer systems to perform tasks associated with intelligent beings.
Much of the talk about AI is really about the AI model: data and logic. The algorithm can operate with varying levels of autonomy and produces probabilities, predictions or decisions as outputs. While most risks emerge in familiar ways and can be characterised and addressed, the risks posed by AI systems are unique. The algorithms may 'think' and improve through repeated exposure to massive amounts of data, and once that data is internalised, they are capable of making decisions autonomously.
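For readers who want a concrete picture of "data plus logic", here is a minimal, purely illustrative sketch in plain Python (all names and numbers are hypothetical, not drawn from the article): an algorithm fits its parameters to example data, and the resulting model then produces probabilities for new inputs on its own.

    import math

    def train(examples, labels, epochs=200, lr=0.1):
        """Fit a one-feature logistic model to (value, label) training data."""
        w, b = 0.0, 0.0
        for _ in range(epochs):
            for x, y in zip(examples, labels):
                p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # current predicted probability
                w += lr * (y - p) * x                      # nudge parameters toward the data
                b += lr * (y - p)
        return w, b

    def predict_proba(w, b, x):
        """Given the learned parameters, output a probability for a new input."""
        return 1.0 / (1.0 + math.exp(-(w * x + b)))

    # Toy data: inputs above roughly 5 are labelled 1, the rest 0.
    X = [1.0, 2.0, 3.0, 6.0, 7.0, 8.0]
    y = [0, 0, 0, 1, 1, 1]

    w, b = train(X, y)
    print(predict_proba(w, b, 4.5))  # the model now scores inputs it has never seen
    print(predict_proba(w, b, 9.0))

The point of the sketch is only this: once the parameters are learned from data, the system can generate outputs for new cases without a human specifying each rule, which is the property that makes AI systems harder to regulate than conventional software.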