Over two days of talks between world leaders, business executives and researchers, tech CEOs such as Elon Musk and OpenAI's Sam Altman rubbed shoulders with the likes of U.S. Vice President Kamala Harris and European Commission chief Ursula von der Leyen to discuss the future regulation of AI.
Leaders from 28 nations — including China — signed the Bletchley Declaration, a joint statement acknowledging the technology's risks; the U.S. and Britain both announced plans to launch their own AI safety institutes; and two more summits were announced to take place in South Korea and France next year.
But while some consensus was reached on the need to regulate AI, disagreements remain over exactly how that should happen — and who will lead such efforts.
Risks around rapidly developing AI have been an increasingly high priority for policymakers since Microsoft-backed OpenAI released ChatGPT to the public last year.
The chatbot's unprecedented ability to respond to prompts with human-like fluency has led some experts to call for a pause in the development of such systems, warning they could gain autonomy and threaten humanity.
Sunak talked of being "privileged and excited" to host Tesla founder Musk, but European lawmakers warned of too much technology and data being held by a small number of companies in one country, the United States.
"Having just one single country with all of the technologies, all of the private companies, all the devices, all