Narendra Modi has been targeted as well, as he revealed some days ago while warning of the dangers of deepfakes; a fake video depicted him performing the garba, a folk dance. That he raised the issue at this week's virtual G20 summit is a sign of how seriously these risks are being taken at the topmost levels. What was once an almost academic worry among the few who kept up with technology must now become a concern of the masses.
With general elections not too far away, it is crucial that this happens fast. As AI is a double-edged sword, with much to offer that’s novel even as it endangers us in new ways, it is a clear candidate for a strict set of safety rules. Left to itself, the AI market cannot be relied upon to restrain misuse, at least not in good time.
Policymakers are seized of the matter. A Union minister recently hinted at legislation underway to regulate AI. The Bletchley Declaration, signed jointly by many countries in the UK earlier this month, noted risks "stemming from the capability to manipulate content or generate deceptive content," and called for global action to address them.
Yet, it will be a while before we get a regulatory framework in place that can contain the menace of deepfakes. Not only does their proliferation tell a story, but so does the speed of their ascent up the learning curve ('deep learning' is what generative AI tools do). While we have faced fake news before, we are now at an inflection point.
So life-like is the output of some tools now in use that we can no longer be sure whether anything seen on a screen is real. For those who spot themselves, to their horror, in such clips, some of which are sexually explicit, it is unclear what remedial action could help undo the damage. What goes online tends to stay online.