Rashmika Mandanna was recently the subject – or rather the victim – of a video that went viral. The actor’s head was morphed onto the body of another woman, Zara Patel, so convincingly that the video looked entirely authentic. This sort of image manipulation is now quite easy, and it is even possible to add dialogue in any desired voice to create what’s known as a deepfake.
There are websites that let users upload images or videos to swap heads or voices. Swapping other body parts in a convincing fashion is slightly more complicated, but it is also possible. Minister of State Rajeev Chandrasekhar reacted to the viral Mandanna deepfake by pointing out that, under the IT Rules, 2023, platforms have a legal obligation to remove fake or misleading content, or at least to label it.
However, that presupposes that the platform realises the content is fake or misleading, which is often not easy. The “deep” in “deepfake” comes from “deep learning”, a method of training AI models that uses many-layered neural networks to learn patterns from large amounts of data.
The easiest form of the manipulation is swapping somebody’s head in a still image. Swapping heads – and voices – in videos is more complicated but, as we have seen, also possible. Entire bodies can be swapped, too.
Several open-source models that can be used to create deepfakes are shared by communities on websites such as GitHub and Reddit. There’s nothing inherently good or evil about the technology – it all comes down to the way it is used. Consider Siri, Alexa, navigation apps, or computer-generated dubbing of YouTube videos.
The voices are those of individuals who provide an initial sample of their voice. Such a sample can be recorded in any language; a model learns the speaker’s vocal characteristics from it and can then reproduce that voice speaking the desired words, even in another language.
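To give a rough sense of how accessible this dubbing workflow has become, here is a minimal sketch using the open-source Coqui TTS package and its multilingual XTTS v2 model – both are assumptions for illustration, not tools named in this article, and the file names, sample text and language support are placeholders that depend on the model version used.

    # A minimal sketch of consent-based voice cloning for dubbing one's own voice,
    # assuming the open-source Coqui TTS package (pip install TTS) and its
    # multilingual XTTS v2 model. File names and text are illustrative only.
    from TTS.api import TTS

    # Load the pretrained multilingual model (weights are downloaded on first run).
    tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

    # A few seconds of the speaker's own voice, recorded in any language,
    # serve as the reference sample.
    reference_sample = "my_voice_sample.wav"

    # Synthesise the same voice saying new words in another language
    # (assuming the installed model version supports that language code).
    tts.tts_to_file(
        text="Yeh video ab Hindi mein uplabdh hai.",  # illustrative line
        speaker_wav=reference_sample,
        language="hi",
        file_path="dubbed_output.wav",
    )

The same handful of lines that let a creator dub their own videos can, of course, be pointed at someone else’s voice sample – which is precisely why the technology itself is neutral and the question of consent is what matters.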