The secretary-general of Amnesty International, Agnès Callamard, released a statement on Nov. 27 in response to three European Union member states pushing back on regulating artificial intelligence (AI) models.
France, Germany and Italy reached an agreement that includes not imposing stringent regulations on AI foundation models, a core component of the EU’s forthcoming AI Act.
The agreement came after the EU received multiple petitions from tech industry players urging regulators not to over-regulate the nascent industry.
However, Callamard said the region has an opportunity to show “international leadership” with robust regulation of AI, and member states “must not undermine the AI Act by bowing to the tech industry’s claims that adoption of the AI Act will lead to heavy-handed regulation that would curb innovation.”
She said this rhetoric from the tech industry highlights the “concentration of power” from a small group of tech companies who want to be in charge of the “AI rulebook.”
Amnesty International has been a member of a coalition of civil society organizations led by the European Digital Rights Network advocating for EU AI laws with human rights protections at the forefront.
Callamard said human rights abuse by AI is “well documented” and “states are using unregulated AI systems to assess welfare claims, monitor public spaces, or determine someone’s likelihood of committing a crime.”
Recently, France, Germany and Italy also joined a new set of guidelines developed by 15 countries and major tech companies, including OpenAI and Anthropic, that suggest cybersecurity practices for AI developers.