California: Alphabet, Google’s parent company, has updated its AI principles, dropping its pledge not to use artificial intelligence to develop weapons or surveillance tools.
The revision eliminates a clause that ruled out AI applications “likely to cause harm,” signalling a shift in the company’s approach amid evolving technology and global security concerns.
In a blog post, Google senior vice president James Manyika and DeepMind CEO Demis Hassabis defended the changes, stressing the need for businesses and democratic governments to collaborate on AI that “supports national security.”
They argued that AI should be developed under democratic values such as freedom, equality, and human rights while ensuring it promotes global growth and public safety.
The executives noted that since Google’s original AI principles were introduced in 2018, AI has transformed from a niche research area into a general-purpose technology used by billions. They likened its impact to that of mobile phones and the internet, emphasizing the need for updated guiding principles.
Google’s revised stance came just ahead of Alphabet’s financial report, which went on to show weaker-than-expected earnings despite a 10% rise in digital advertising revenue, boosted by US election spending.
The company announced a $75 billion investment in AI for 2025, 29% more than analysts anticipated, focusing on AI research, infrastructure, and applications such as AI-powered search.
Google’s AI platform, Gemini, now features prominently in search results and integrates with Pixel smartphones, underscoring the company’s aggressive push into AI-driven services.
This shift marks a departure from Google’s past commitments. The company’s founders, Sergey Brin and Larry Page, originally adopted the motto “Don’t be evil,” which later changed to “Do the right thing” when Alphabet was formed in 2015.
Google has previously faced internal backlash over AI’s military applications.
In 2018, the company chose not to renew its contract for “Project Maven,” a Pentagon AI initiative, after thousands of employees signed a petition and some resigned in protest. Staff feared the project could lead to AI being used for lethal purposes.
With its revised AI principles, Google is now signalling a more flexible stance, aligning itself with national security priorities while maintaining a broader commitment to ethical AI development.