Several leading global news and publishing organizations, including Agence France-Presse (AFP), European Pressphoto Agency, Getty Images, and others, have signed an open letter urging policymakers and industry leaders to establish a regulatory framework for generative AI models in order to preserve public trust in media and protect the integrity of content.
The letter, entitled "PRESERVING PUBLIC TRUST IN MEDIA THROUGH UNIFIED AI REGULATION AND PRACTICES," outlines specific principles for the responsible growth of AI models and raises concerns about the potential risks if appropriate regulations are not implemented swiftly.
Proposed Regulations
Among the proposed principles for regulation, the letter emphasizes:
Transparency: The disclosure of training sets used in the creation of generative AI models, enabling scrutiny of potential biases or misinformation.
Intellectual Property Protection: Safeguarding the rights of content creators, whose work is often utilized without compensation in training AI models.
Collective Negotiation: Allowing media companies to collectively negotiate with AI model developers over the use of proprietary intellectual property.
Identification of AI-Generated Content: Mandating clear, specific, and consistent labeling of AI-generated outputs and interactions.
Misinformation Control: Implementing measures to restrict bias, misinformation, and abuse of AI services.
Concerns and Risks
The letter details potential hazards if regulations are not promptly put in place, including the erosion of public trust in media, the violation of intellectual property rights, and the undermining of traditional media business models.