OpenAI, Microsoft, Alphabet’s Google and Anthropic are launching a forum to support safe and responsible development of large machine-learning models, top industry leaders in artificial intelligence said on Wednesday.
The group will focus on coordinating safety research and articulating best practices for what are called “frontier AI models,” those that exceed the capabilities of the most advanced existing models.
Industry leaders have warned that these highly capable foundation models could possess dangerous capabilities sufficient to pose severe risks to public safety.
Generative AI models, such as the one behind the chatbot ChatGPT, draw on large amounts of data to rapidly generate responses in the form of prose, poetry and images.
While the use cases for such models are plentiful, government bodies including the European Union and industry leaders including OpenAI CEO Sam Altman have said appropriate guardrails are necessary to tackle the risks posed by AI.
“Companies creating AI technology have a responsibility to ensure that it is safe, secure and remains under human control,” Microsoft President Brad Smith said in a statement.
The industry body, called the Frontier Model Forum, will also work with policymakers and academics, facilitating information sharing among companies and governments.
It will not engage in lobbying, an OpenAI spokesperson said, but will focus early on developing and sharing a public library of benchmarks and technical evaluations for frontier AI models.
In the coming months, the forum will create an advisory board, arrange funding through a working group and establish an executive board to lead its efforts.
“This is urgent work and this forum is well-positioned to act quickly to advance the state of AI safety,” said Anna Makanju, Vice President of Global Affairs at OpenAI.
(Reporting by Chavi Mehta in Bengaluru and Jeffrey Dastin in Palo Alto, Calif.; Editing by Arun Koyyur)