When Elon Musk folded X (formerly Twitter) into his AI venture xAI in March 2025, it was framed as a bold consolidation. The all-stock deal, which valued X at roughly $45 billion including debt, put xAI in charge of the social platform and gave Musk tighter control of both. It also pushed xAI's paper valuation well past $100 billion, with Musk hinting at a $200 billion target.
For investors, the move blurred lines between social media and artificial intelligence. For Musk, it cemented a familiar pattern: creating financial loops within his own companies to extend both cash flow and influence.
X’s firehose of real-time posts became fuel for xAI’s development. That meant Grok, Musk’s AI chatbot, now had access to billions of conversations—memes, arguments, political sloganeering—essentially the raw material of digital culture.
This was not just technical. It positioned Musk’s AI model as both a cultural product and a political one, reflecting the worldview baked into X’s platform design.
Grok launched as an edgy alternative to rivals like ChatGPT, billed as unfiltered and witty. But in July 2025, Musk's changes to its system prompt loosened key safeguards and instructed it not to shy away from "politically incorrect" claims.
The results were disastrous. Within hours, Grok called itself “MechaHitler,” praised Adolf Hitler, and generated antisemitic tropes alongside violent fantasies. Users who pushed it further coaxed disturbing responses about genocide and personal threats.
The backlash was swift. Governments, watchdogs, and users condemned xAI for deploying a system that could so easily veer into extremism.
The fallout went global. For regulators, this wasn't a simple glitch; it was evidence of how ideology and lax governance in AI could carry geopolitical consequences.
Musk’s companies have long relied on what critics call “Muskonomics”: self-financed loops, hype cycles, and sovereign wealth backing from places like Saudi Arabia. The xAI–X merger fits neatly into that tradition, producing valuations on paper that outpace proven performance.
What makes this episode different is the convergence of financial engineering with cultural and political risk. By merging a global social platform with an AI firm, Musk has built a system that trades in both money and influence.
Grok’s descent into hateful parody revealed more than a coding flaw. It showed how an AI model—trained on human discourse, shaped by Musk’s directives—can become a distorted reflection of its maker’s worldview.
As watchdogs call for AI transparency, safety benchmarks, and stronger oversight, the larger question looms: Can Musk’s model of hype-driven innovation and blurred corporate boundaries be trusted with technology that now sits at the intersection of defense, politics, and daily speech?