Meta's Oversight Board says the company failed to take down an AI-generated intimate image of an Indian female public figure that violated its policies until the board got involved
LONDON — Meta's policies on non-consensual deepfake images need updating, including wording that's “not sufficiently clear,” the company's oversight panel said Thursday in a decision on cases involving AI-generated explicit depictions of two famous women.
The quasi-independent Oversight Board said that in one of the cases, the social media giant failed to take down the deepfake intimate image of a famous Indian woman, whom it didn't identify, until the board itself got involved.
Deepfake nude images of women, including celebrities such as Taylor Swift, have proliferated on social media because the technology used to make them has become more accessible and easier to use. Online platforms have been facing pressure to do more to tackle the problem.
The board, which Meta set up in 2020 to serve as a referee for content on its platforms including Facebook and Instagram, has spent months reviewing the two cases involving AI-generated images depicting famous women, one Indian and one American. The board did not identify either woman, describing each only as a “female public figure.”
Meta said it welcomed the board's recommendations and is reviewing them.
One case involved an “AI-manipulated image” posted on Instagram depicting a nude Indian woman shown from the back with her face visible, resembling a “female public figure.” The board said a user reported the image as pornography, but the report wasn't reviewed within a 48-hour deadline, so it was automatically closed. The user then appealed to Meta, but that appeal was also automatically closed.