artificial intelligence (AI) looked set for a dramatic climax on Wednesday, as lawmakers enter what some hope will be the final round of discussions on the landmark legislation.
What is decided could become the blueprint for other governments as countries seek to craft rules for their own AI industries.
Ahead of the meeting, lawmakers and governments could not agree on key issues, including the regulation of fast-growing generative AI and its use by law enforcement.
Here's what we know:
How did ChatGPT derail the AI Act?
The main issue is that the first draft of the law was written in early 2021, almost two years before the launch of OpenAI's ChatGPT, one of the fastest-growing software applications in history.
Lawmakers have scrambled to write regulations even as companies like Microsoft-backed OpenAI continue to discover new uses for their technology.
OpenAI chief executive Sam Altman and computer scientists have also raised the alarm about the dangers of creating powerful, highly intelligent machines that could threaten humanity.
Back in 2021, lawmakers focused on specific use cases, regulating AI tools based on the task they were designed to perform and categorising them by risk, from minimal to high.
Using AI in a number of settings, such as aviation, education, and biometric surveillance, was deemed high risk, either as an extension of existing product-safety laws or because it posed a potential threat to human rights.
The arrival of ChatGPT in November