
Empire Building in the Age of AI: Power, Secrecy, and the Battle for Control

CASI-hosted discussion with award-winning journalist Karen Hao on the rise of OpenAI and her new book, Empire of AI.

Watch the full discussion with Karen Hao.

The race for global AI dominance is intensifying as both corporations and nations rush to gain competitive advantage. Artificial intelligence has come to be seen by many as the most consequential technology of our time, reshaping economic power and national security. While the U.S. continues to lead much of the innovation, lawmakers are increasingly alarmed about China's rapidly scaling AI capabilities for military and economic purposes. At a recent Senate hearing titled “Winning the AI Race,” officials warned that America risks losing ground in a high-stakes competition that, among other things, threatens to undermine democratic governance.

Against this backdrop, the Corporations and Society Initiative (CASI) hosted a timely discussion on May 29 with award-winning investigative journalist Karen Hao at Stanford Graduate School of Business. Her new book, Empire of AI, offers a rare inside look at the rise of OpenAI and the global competition to control artificial intelligence. Drawing from more than 300 interviews, including 150 with OpenAI insiders, Hao examines how a handful of unelected tech leaders are steering the future of AI behind closed doors.

The event was moderated by Stanford GSB lecturer and corporate governance expert Evan Epstein, who focused the discussion on issues around transparency, accountability, and public trust in an increasingly AI-driven world. When asked what inspired her to write Empire of AI, Hao began by retracing her path from mechanical engineering to investigative reporting. After graduating from MIT and briefly working at a startup in San Francisco, she quickly became disillusioned with how the tech world operated.

“Ultimately, the incentive structures within Silicon Valley do not facilitate the development of technology for the public good. It is primarily to make a profit,” she said. “If there is a social benefit mission, that's great, but it's secondary.”

That realization led Hao to pivot to journalism. In 2018, she joined MIT Technology Review and started covering AI, focusing on early-stage research and non-commercial projects. OpenAI, then a nonprofit dedicated to fundamental AI research, was a natural fit for her beat. But when OpenAI added a for-profit arm, Hao noticed troubling gaps in governance and accountability, concerns that led her to investigate and ultimately write the book.

As Hao recounted, OpenAI was founded in 2015 by Sam Altman and Elon Musk as a nonprofit research lab with a mission to develop AI for the public good. Altman, then president of tech startup accelerator and venture capital firm Y Combinator, saw AI as the next major wave of innovation and needed top-tier researchers to achieve OpenAI’s goals.

“[Altman] didn't necessarily have a lot of AI or name recognition within the AI world. But Elon Musk did, and so he went about recruiting Musk to his endeavor.”

At the time, Hao said, Musk was publicly warning about the existential risks of AI and advocating for non-commercial development. Their shared concerns, especially about Google’s dominance in AI, led to the creation of OpenAI as a nonprofit alternative focused on safety and public benefit. By 2017, the lab began to coalesce around a strategy to scale existing AI techniques using massive datasets and unprecedented computing power.

“Suddenly, the bottleneck shifted from talent to capital,” she said, “And they realized that having a nonprofit no longer worked for raising the amount of capital that they anticipated, which was in the order of tens of billions.”

To solve this, Hao explained, OpenAI created a for-profit entity nested within the nonprofit, allowing it to raise venture capital while maintaining a mission-driven structure. Investors were promised a “capped profit,” which limited how much they could ultimately earn from their initial investment. This hybrid structure has since become a point of legal and ethical scrutiny, raising questions about how power and profit intersect with OpenAI’s original focus on ensuring the development of AI for the public benefit.

As OpenAI grew, a power struggle emerged between Sam Altman and Elon Musk over who would lead the new entity. Both wanted to be named CEO, but the board couldn’t reach consensus. Ultimately, Altman won over key board members, prompting Musk to walk away from the organization’s leadership. But Hao pointed out that Musk remained involved behind the scenes for years through an ally on the board, underscoring the lingering tensions over control and direction that still shape OpenAI’s governance debates today.

That internal conflict came to a head in November 2023 when Altman was abruptly fired by OpenAI’s board. Hao explained that the decision was driven by a mix of ideological tensions and governance concerns. The board, which was more aligned with the cautious “Doomer” camp concerned with AI safety, grew uneasy with Altman’s optimistic “Boomer” approach that was focused on aggressively scaling the AI models and accelerating the use of the technology. They also uncovered unethical behavior, including Altman’s misrepresentations about safety practices and his undisclosed control of the OpenAI Startup Fund.

“Actually, a number of safety checks had been skipped. Altman was constantly trying to skip them and there was also a lot of chaos happening at the company because post ChatGPT, the company had to scale aggressively faster than any organization in Silicon Valley history to support the hundreds [of] millions of users that were suddenly coming onto their service.”

Two senior executives, Mira Murati and Ilya Sutskever, independently raised alarms. Their concerns aligned, and after comparing notes, Sutskever urged the board to remove Altman, a recommendation the board ultimately followed. The move set off a wave of backlash across Silicon Valley, and within days, Altman was back at the helm of the company.

Turning to the broader themes of her book, Hao explained that the title, Empire of AI, reflects her view that companies like OpenAI are operating as modern-day empires. Similar to the empires of old, they lay claim to resources that aren’t theirs, such as the data they scrape from the internet, and they redesign or reinterpret rules to suggest that those resources have always been their own.

“Companies argue [the data is] publicly available, and therefore, we should be able to take it. It’s in the public domain,” she noted. “Of course, many people who put their data on the internet did not do it with the informed consent to train AI models.”

Hao said these companies also rely on exploitative labor practices, such as hiring low-wage workers, often in the Global South, to perform difficult tasks like content moderation “in very, very exploitative conditions where they're paid $2 an hour to do extremely, psychologically toxic work.”

Beyond their supply chains, Hao argued that OpenAI’s mission itself is rooted in labor automation. Its stated goal is to develop artificial general intelligence (AGI) that outperforms humans at most economically valuable tasks, suggesting a future where human labor is systematically displaced. In this way, she argues, exploitation is embedded not just in how these technologies are built, but in what they aim to achieve.

Hao also pointed to the monopolization of knowledge as another feature of empire-building. Major AI companies have locked up a disproportionate amount of talent and resources, drawing top researchers away from academia and into corporate labs.

“Most of the AI research that's produced today, which is the bedrock of public understanding of how this technology works and what its limitations are, is filtered through what is good and not good for a company.”

Another trait, she argued, “is the belief that there is a good empire fighting against evil empires. And the justification for why the good empire needs to be an empire in the first place and do all of this resource extraction and labor exploitation is to make themselves strong enough to beat the evil empire.”

Hao emphasized that every power, whether it’s the U.S. or China, claims to act in the name of humanity. She pushed back on the argument that Chinese AI development is unregulated, pointing out that China is, in fact, one of the most heavily regulated AI environments in the world, second only to the EU. China has already implemented data privacy laws, cybersecurity measures, and broad AI regulations, forming a legal framework the U.S. still lacks.

Hao contrasted this with the U.S., where there is no federal data privacy law and AI regulation remains stalled. She noted that recent legislation moving through Congress could make things worse, because a tax bill passed by the House included a clause imposing a 10-year moratorium on state-level AI regulation. (Note: the Senate later struck down the controversial provision, and it was removed from the final bill.)

She argued that the recurring “What about China?” defense used by U.S. tech companies to avoid regulation has backfired. “I would argue that it hasn't gotten us any more democratic technologies. It's actually gotten us the reverse. We’ve just gotten more authoritarian technologies.” If the goal is to build AI aligned with democratic values, she concluded, the answer isn’t deregulation; it’s stronger, more inclusive governance.

Hao said her biggest hope is that growing public interest in AI will lead to more accountable leadership and tools that truly serve the public good. But she also warned of a deeper risk: that AI could erode citizens’ power to make decisions that affect their lives.

“Democracy is based on the idea that we all have a voice, we all deserve to go to that voting booth and feel we have that say to shape a collective future. And I'm seeing more and more in the current political environment many people saying now I don't even think my vote matters, I don't even know what I can do anymore. I should just lay down and give up. And that is when democracy dies. It's when people give up like that.”

Hao told the audience she wrote her book to remind readers that AI is built from resources, labor, and communities that we all depend on, and that reclaiming ownership of those foundations is essential to preserving democratic control. If we hope to shape a future where AI serves the common good, we must treat its development not as a race to win, but as a responsibility to share.
