
Power to Truth: AI Narratives, Public Trust, and the New Tech Empire

Investigative journalist and author Karen Hao joins Anat Admati to expose the hidden costs of AI and what’s at risk when the truth is obscured.

Key Highlights:

  • The term “artificial intelligence” was invented as a funding pitch, not a scientific definition, and continues to shape public perception in misleading ways.
  • Dominant narratives of AI as either a solution for humanity or existential threat obscure the technology’s real-world impacts and serve those in power.
  • The current model of AI development demands massive computing power, fresh water, and low-wage labor, generating hidden social and environmental costs.
  • Corporate funding and data access are reshaping academic research, narrowing the scope of inquiry and limiting public accountability.
  • Governments and tech firms are working in tandem to expand AI’s global reach, often under the banner of geopolitical competition.

 

Who controls the story of artificial intelligence and how do they benefit?  As part of the Power to Truth series, Stanford Professor and co-Faculty Director of the Corporations and Society Initiative (CASI) Anat Admati welcomed journalist Karen Hao to discuss her new book, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI. The conversation centered on the narratives shaping public understanding of artificial intelligence and the powerful actors behind them. Admati opened by framing “truth” as a grounded interpretation of facts shaped by real-world forces. She asked Hao to begin by discussing the origins of the term “AI” and how its use is evolving.

Hao explained that the phrase “artificial intelligence” was coined in 1956 by Dartmouth professor John McCarthy, who spent much of his career at Stanford. The term was not precisely defined in scientific terms but was aimed instead at attracting funding and attention for a research project.

“It evokes this idea of intelligence, which sounds inherently good,” she said. “We want more intelligence in the world. But one of the challenges of pegging a field to this concept of human intelligence is that there isn't actually scientific consensus around where human intelligence comes from, so to create a field whose mission is to recreate human intelligence is very slippery.”

Over the decades, debates have raged over what the goals of AI should be and who it should serve. Today, Hao argued, the dominant model of AI development largely benefits its creators in Silicon Valley, consolidating wealth and influence under the guise of innovation. Her book frames these companies not just as tech firms, but as emerging empires.

Admati asked Hao to explain in more detail how AI has been propelled into a position of immense cultural and political power.

Hao identified two extremes shaping the public imagination: one envisions AI as a path to utopia, the other sees it as an existential threat to humanity. These “Boomer” and “Doomer” narratives, as they’re called, both hinge on the idea of artificial general intelligence (AGI), a notion that, like “AI” itself, serves more as a marketing tool than a well-defined scientific concept.

Hao explained that AGI, often described as the recreation of human intelligence, has been strategically positioned as the ultimate goal of AI research, one that OpenAI, led by Sam Altman, is aggressively pursuing.

“Ultimately, both of these narratives paint the technology as supremely powerful and therefore one that must be controlled very tightly by a small group of people,” she explained. “And it just so happens, the people that should be controlling the technology are the people espousing those narratives.”

Hao emphasized that real-world harms are already unfolding. The pursuit of AGI by OpenAI and others in Silicon Valley has fueled a “scale at all costs” model that demands vast resources: massive datasets, energy-intensive supercomputers, and significant environmental consequences.

“If we want to understand the impact that AI will have on us currently and on the future and all of us, we shouldn't be wrapped up in these theoretical narratives about boomers versus doomers.”  Instead, she said, we should be “observing these harms and shoring them up so that we can get to a place where AI is more beneficial.”

To demystify what powers AI systems, Admati asked Hao to describe what’s inside the so-called “black box.” At its core, Hao explained, modern AI relies on a technique called deep learning, a method in which neural networks, loosely modeled on the human brain, are fed vast amounts of data to statistically generate patterns and predictions. These models can produce text, classify images, or simulate conversation, all based on complex patterns and statistical relationships that emerge from the data they consume.
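The pattern-finding Hao describes can be illustrated with a deliberately tiny sketch: a single artificial “neuron” that adjusts its parameters by gradient descent until its predictions statistically match the examples it is fed. This is an illustrative toy, not how production-scale models are built, but the core mechanic of learning from data is the same. All names and numbers here are invented for the example.

```python
import random

# Toy illustration of "learning from data": one artificial neuron
# tunes a weight and a bias by gradient descent so its predictions
# match the (x, y) examples it consumes. Deep-learning systems stack
# millions of such units, but the underlying principle is similar.

def train_neuron(examples, lr=0.01, epochs=500):
    """Fit y ~ w*x + b to (x, y) pairs by minimizing squared error."""
    random.seed(0)  # make the sketch reproducible
    w, b = random.uniform(-1, 1), 0.0
    for _ in range(epochs):
        for x, y in examples:
            pred = w * x + b
            err = pred - y
            # Step down the gradient of the squared error
            w -= lr * err * x
            b -= lr * err
    return w, b

# Data drawn from the hidden pattern y = 2x + 1; the neuron,
# never told the rule, recovers it from examples alone.
data = [(x, 2 * x + 1) for x in range(-5, 6)]
w, b = train_neuron(data)
print(round(w, 2), round(b, 2))
```

Scaled up, the same loop runs over internet-sized datasets on tens of thousands of chips, which is the leap Hao describes next.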

What set OpenAI apart, Hao noted, was its decision to scale this process of learning from data to unprecedented levels. Instead of training models on modest datasets using a few chips, the company began training on data drawn from the entire internet, using tens of thousands of chips across massive supercomputers running non-stop for months. This extreme approach to scaling, she warned, has enormous environmental implications.

As Hao explained, a recent McKinsey report projected that if current trends continue, “we would need two to six times the entire energy demand of California to be added to the global grid in five years to keep up with that consumption.”

Beyond energy demands, Hao emphasized the broader environmental and human costs of large-scale AI development. As data centers expand to support ever-larger models, they increasingly rely on fossil fuels and consume enormous quantities of fresh water, often sourced from drinking water supplies. She noted that a recent Bloomberg report found that two-thirds of new data centers are being built in water-scarce regions, placing additional strain on already vulnerable communities.

Hao also pointed to the often-overlooked human labor behind AI systems. To ensure that models don’t produce harmful or toxic content, companies rely on very low-paid contract workers to perform content moderation, data labeling, and cleanup—tasks that can be psychologically damaging due, among other things, to the horrific text and images these workers must review.

“And just like in the era of social media workers who did content moderation, these workers became deeply traumatized. They suffered from PTSD, and so these AI companies are repeating those harms. They purposely hide those harms, too, because they want AI to feel like magic.”

Hao underscored how carefully crafted language shapes the public’s perception of AI. Terms like “the cloud,” she noted, create a sense of weightlessness and abstraction, when in reality they refer to massive, industrial-scale data centers with damaging environmental footprints. This disconnect is no accident.

“AI is very much a narrative game. The vocabulary that they choose is very precise in how they want to convey and project certain values and certain ideas.”

Admati raised potential concerns about the growing entanglement between academic institutions and the tech industry, noting how corporate funding has become deeply embedded in research across fields, from AI to sustainability. While these partnerships might seem beneficial on the surface, she observed, they often promote feel-good narratives that align with the funder’s interests and discourage critical scrutiny. Even researchers who remain in academia, she added, often rely on industry data and collaborations, which can subtly shape the questions they ask and the stories they tell.

Hao acknowledged this dynamic, emphasizing the importance of independently funded research in offering a clearer, more nuanced view of AI’s real-world effects. Over the past decade, she said, the migration of top academic talent into corporate labs, or their increasing reliance on corporate grants, has skewed scientific inquiry in harmful ways. When research agendas and outcomes are shaped by the companies building the technology, the public is left with a distorted understanding of both AI’s promise and its risks.

As Hao put it, “It's as if all of climate science were done by Exxon, either climate scientists worked for Exxon or were funded by Exxon. Obviously, you would end up with a very different picture of what the climate crisis is.”

The conversation turned to the political forces accelerating AI’s rise and the symbiotic relationship between governments and tech giants. Hao said we are currently in an “Empire era” for both Silicon Valley and the U.S. government, with each leveraging the other to expand their global influence.

“OpenAI just announced, a couple of weeks ago, this program, “OpenAI for Countries,” with the whole premise being that they want to be the hardware and software bedrock of other countries’ AI development, and they are using the U.S. government to help facilitate those connections.”

This outreach is often justified through geopolitical narratives, particularly the need to counter China and promote “democratic AI.” But Hao challenged that framing, arguing that OpenAI’s operations are anything but democratic.

“They don't actually develop these technologies with any kind of participation from the broader public.”

Admati questioned whether those most affected by these technologies, especially marginalized communities, have any real power or access to decision-making spaces. Too often, she noted, the people at “the bottom” are excluded from the conversations that most directly impact their lives.

Hao agreed that it’s a persistent challenge but pushed back on the idea that power is entirely out of reach. Citing an example from her book, she described a group of water activists from a poor neighborhood in Chile who successfully resisted Google’s attempt to extract local freshwater for a data center.

“They remembered that that freshwater was actually theirs. They owned it, they should be able to control the terms under which it gets accessed. And so, they rejected what Google wanted to do.”

The residents escalated their protest through national and international channels and ultimately forced both the tech giant and the Chilean government to negotiate. Their story, Hao argued, is a reminder that ordinary people can confront concentrated power, if they recognize the legitimacy of their own voice.

Admati emphasized the need for greater public scrutiny of the special benefits and subsidies that are given to some corporations, particularly in the U.S., where companies frequently secure tax breaks and other forms of support under the banner of economic growth.  Too often, she noted, the real consequences for housing, education, and infrastructure are ignored until the damage is done. “If you want to give power to truth,” she said, “you have to pay attention, seek the truth, and speak it.”

Hao echoed the sentiment, urging communities to stay alert and act early, “before they learn the hard way what can happen when no one’s paying attention.”
