London, UK: Prominent technology experts from around the globe have warned that artificial intelligence (AI) should be recognized as a societal-scale risk and treated with the same priority as pandemics and nuclear war.
The Center for AI Safety released a statement, signed by numerous executives and academics, warning of the risks posed by AI and calling for greater attention to its regulation. The statement comes as concerns about AI's potential impact on humanity continue to grow.
Leaders from around the world, including those at the helm of OpenAI, the company behind ChatGPT, have joined industry experts in calling for the regulation of artificial intelligence. Their motivation is existential: they fear that unchecked AI development could disrupt job markets, jeopardize public health on a large scale, and enable the weaponization of disinformation, discrimination, and impersonation.
Concerns that AI could worsen existing existential risks, such as engineered pandemics and military arms races, were among the key factors that led one signatory, Osborne, to add his name to the statement, which also highlights the novel existential threats posed by AI itself.
These calls for curbing potential threats come in the wake of the remarkable success of ChatGPT, introduced in November 2022. Since then, the language model has been adopted by millions of users and has advanced rapidly, surpassing the expectations of even the most knowledgeable industry experts.