United States: US artificial intelligence leaders OpenAI, Microsoft, Google, and Anthropic are launching an industry body, the Frontier Model Forum. It will focus on “frontier AI models”, those that exceed the capabilities of the most advanced existing models, with the aims of minimising AI risks and supporting industry standards.
Such models could take AI to the next level, but they could also develop dangerous capabilities that pose severe risks to public safety. The companies have committed to sharing best practices with one another, with lawmakers, and with researchers.
Generative AI models, like ChatGPT, process vast amounts of data at high speed to generate responses in the form of text, poetry, and images, as per the statement.
“Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control. This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity,” Microsoft president Mr. Brad Smith stated.
The companies have agreed to develop “robust technical mechanisms,” such as watermarking systems, to ensure users know when content is AI-generated rather than human-made. According to the leaders, the Forum intends to support the development of applications that address challenges such as climate change, cancer prevention, and cyber threats.
“Advanced AI technologies have the potential to profoundly benefit society, and the ability to achieve this potential requires oversight and governance. It is vital that AI companies, especially those working on the most powerful models, align on common ground and advance thoughtful and adaptable safety practices,” OpenAI vice president of global affairs, Ms. Anna Makanju, said.