Elon Musk is gearing up to challenge tech giants Google and Microsoft with an ambitious new project: building a supercomputer dubbed the ‘Gigafactory of compute.’ Musk, owner and CEO of the artificial intelligence startup xAI, revealed his plans to investors, aiming to have the supercomputer operational by fall 2025, according to a report.
Musk’s xAI, known for developing the Grok AI chatbot, will require 100,000 specialized semiconductors to train and run the next version of Grok, as reported by The Information. Musk has expressed his intention to combine these chips into a massive computing system.
“To make the chatbot smarter, he’s recently told investors xAI plans to string all these chips into a single, massive computer—or what he’s calling a ‘gigafactory of compute’,” the report stated, noting that xAI may collaborate with Oracle to build this supercomputer. In a presentation to investors in May, Musk committed to having the supercomputer ready by fall 2025, taking personal responsibility for its timely delivery.
The supercomputer will feature Nvidia’s flagship H100 GPUs, forming connected groups of these powerful chips. Musk indicated that the completed system would be at least four times larger than the biggest GPU clusters currently in existence. Nvidia’s H100 GPUs are the leading AI chips used in data centers today.
Musk founded xAI last year as a competitor to Alphabet’s Google and to Microsoft-backed OpenAI, which he co-founded with Sam Altman. Earlier this year, Musk revealed that training the Grok 2 model required about 20,000 Nvidia H100 GPUs, and that future models like Grok 3 will need 100,000 Nvidia H100 chips.