The White House invited the leaders of Google, Microsoft, OpenAI and Anthropic to a meeting Thursday on the risks associated with artificial intelligence (AI), but regulation of this major technology remains primarily the companies’ responsibility for now.
“Our goal is to have a frank discussion about the current and near-term risks we perceive in AI development,” reads the invitation seen by AFP on Tuesday.
The administration also wants to consider “steps and other ways we can work together to mitigate those risks,” while ensuring the American people benefit from advances in AI.
Satya Nadella (Microsoft), Sundar Pichai (Google), Sam Altman (OpenAI) and Dario Amodei (Anthropic) confirmed their participation, the White House said. They will meet with several members of the government, including US Vice President Kamala Harris.
Artificial intelligence has been a big part of everyday life for years, from social media recommendation algorithms to recruiting software and many cutting-edge home appliances.
But this winter the runaway success of ChatGPT, OpenAI’s generative AI interface developed largely with Microsoft funding, has sparked a race for ever-better systems, trained on mountains of data and capable of generating increasingly complex code, text and images.
Their proliferation is creating new levels of excitement and anxiety.
Especially since Sam Altman, the boss of OpenAI, teased the coming advent of so-called “general” AI, programs that would be “generally smarter than humans.”
The risks range from algorithmic discrimination and the automation of human tasks to intellectual property theft and sophisticated disinformation on a large scale.
“Language models that can generate images, sound and video would be a dream come true for those who want to destroy democracies,” said David Harris, a professor at the University of California, Berkeley, who specializes in public policy and AI.
The White House released a “Blueprint for an AI Bill of Rights” in late 2022, which lists general principles such as protections against dangerous or faulty systems. The National Institute of Standards and Technology (NIST), a federal agency, has developed a “framework for managing risks” related to AI.
And President Joe Biden recently said “clearly” that companies “need to make sure their products are safe before they make them available to the general public,” the invitation said.
But “these guidelines and announcements do not compel the companies concerned to do anything,” emphasized Harris, a former director of research on responsible AI at Meta.
The companies themselves are calling for more regulation, he noted, but Facebook has long “openly called” for better regulation of personal data privacy “while paying lobbyists to fight the bills.”
The AI giants do not deny that risks exist, but they fear that overly restrictive laws will stifle innovation.
“I’m sure AI is going to be used by malicious actors, and yes, it will cause harm,” Microsoft chief economist Michael Schwarz said Wednesday during a panel discussion at the World Economic Forum in Geneva, according to Bloomberg.
But he urged legislators not to rush and, when there is “real harm,” to ensure that “the benefits of the regulation outweigh the cost to society.”
Lina Khan, chair of the US Federal Trade Commission (FTC), the American consumer protection agency, compared the current moment to the advent of large digital platforms in the 2000s.
In an op-ed published in The New York Times on Wednesday, she argued that their business model, built on consumer data, was treated as “inevitable” at the expense of users’ “security.”
“The authorities have a responsibility to ensure that history does not repeat itself,” asserted the legal scholar, who is known for her hostility toward Big Tech.
On the other side of the Atlantic, Europe hopes to lead the way again with planned regulation of AI, as it did with its law on personal data.