The CEO of the company behind ChatGPT has raised the possibility of withdrawing from the European Union if the firm cannot comply with the bloc's forthcoming law on artificial intelligence (AI).
The EU's proposed legislation, expected to be the first of its kind to regulate AI, may require generative AI companies to disclose the copyrighted material used to train the systems that generate their text and images.
“The current draft of the EU AI Act would be over-regulating,” OpenAI chief executive Sam Altman said, according to Reuters.
“But we have heard it’s going to get pulled back.”
AI companies have been accused by people in the creative industries of using the work of artists, musicians, and actors to train systems that imitate their creations.
However, according to Time magazine, Mr Altman is concerned that meeting certain safety and transparency requirements in the AI Act would be technically difficult for his company.
At an event at University College London, Mr Altman expressed optimism that AI could create more jobs and reduce inequality.
He also met Prime Minister Rishi Sunak and the heads of AI companies DeepMind and Anthropic to discuss the technology's risks, including disinformation, national security, and potential “existential threats,” as well as the voluntary measures and regulation needed to manage them.
Some experts have expressed apprehension about the possibility of super-intelligent AI systems posing a threat to the existence of humanity.
But Mr Sunak said AI could “positively transform humanity” and “deliver better outcomes for the British public, with emerging opportunities in a range of areas to improve public services”.
At the G7 summit in Hiroshima, the leaders of the United States, United Kingdom, Germany, France, Italy, Japan, and Canada agreed that developing “trustworthy” AI should be a collaborative international effort.
Ahead of any EU legislation taking effect, the European Commission plans to draw up an AI pact with Alphabet, Google's parent company.
Thierry Breton, the EU's industry chief, stressed the importance of international cooperation in regulating AI, and met Google CEO Sundar Pichai in Brussels to discuss the matter.
“Sundar and I agreed that we cannot afford to wait until AI regulation actually becomes applicable – and to work together with all AI developers to already develop an AI pact on a voluntary basis ahead of the legal deadline,” Mr Breton said.
Tim O'Reilly, the Silicon Valley veteran, author and founder of O'Reilly Media, said the best starting point would be mandating transparency and building regulatory institutions to enforce accountability.
“AI fearmongering, when combined with its regulatory complexity, could lead to analysis paralysis,” he said.
“Companies creating advanced AI must work together to formulate a comprehensive set of metrics that can be reported regularly and consistently to regulators and the public, as well as a process for updating those metrics as new best practices emerge.”