Truth Tellers: The Journalists Exposing AI's Hidden Cost to Society

Meet the investigative journalists exposing artificial intelligence's dark side: Karen Hao, Hera Rizwan, and Rana Foroohar. Discover how these reporters document AI's labor exploitation, environmental impact, and threats to democracy with rigorous, groundbreaking journalism.


While tech executives celebrate artificial intelligence breakthroughs and venture capitalists count their billions, a determined group of journalists is doing something far more important: telling the stories nobody else dares to tell. These investigative reporters are exposing the environmental devastation, labor exploitation, security threats, and democratic risks lurking beneath the glossy facade of the AI revolution.

Their work is rewriting the narrative around artificial intelligence, forcing the public and policymakers to confront uncomfortable truths that Silicon Valley would prefer remained hidden.

The scale of this investigation is staggering. Data centers powering AI consume unprecedented amounts of water and electricity. Workers in developing nations who label training data endure psychological trauma for wages that haven't risen in years.

Generative AI produces convincing falsehoods that undermine elections and democratic institutions. Yet these stories rarely make it into mainstream headlines. It takes journalists willing to dig deeper, spend months in the field, and risk their access to industry sources to bring these realities to light.

The journalists exposing AI's dark side represent a rare combination of technical competence, moral clarity, and journalistic rigor. They understand artificial intelligence at a technical level, yet communicate its implications in language ordinary people comprehend.

They balance criticism with fairness, giving companies opportunities to respond while refusing to soft-pedal documented harms. Their work demonstrates that AI criticism need not be technophobic or alarmist. Instead, it can be rigorous, factual, and ultimately more valuable than cheerleading.


Karen Hao: The First to Profile OpenAI, Now Exposing the "Empire of AI"

Karen Hao, an award-winning journalist covering the impacts of artificial intelligence on society, was the first journalist to profile OpenAI and previously served as senior artificial intelligence editor at MIT Technology Review. Her journey from insider to outsider offers a masterclass in how serious journalism about AI should actually work.

Hao's breakthrough came when she investigated how Facebook's internal teams attempted to combat misinformation using machine learning, only to meet constant resistance from leadership focused on engagement growth.

Facebook executives including Mike Schroepfer and Yann LeCun immediately criticized the piece on Twitter, while AI ethics researchers Timnit Gebru and Margaret Mitchell publicly supported Hao's reporting and called for deeper reform.

This experience set the tone for Hao's career trajectory. Rather than accepting the role of technology stenographer that many business journalists adopt, she began asking harder questions about how AI gets built, who profits, and who bears the costs. Her work expanded from technical reporting into the environmental and labor impacts of artificial intelligence infrastructure.

In May 2025, Hao released "Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI," a New York Times bestseller that represents the most comprehensive investigation of OpenAI's inner workings to date. The book drew on interviews with over 250 people, including more than 90 current and former OpenAI employees and executives.

What emerged from this reporting was not a hagiography but a chilling portrait of how the pursuit of artificial general intelligence is deforming judgment, exploiting labor, and consuming resources at scales that defy previous experience.

Hao currently writes for publications including The Atlantic and leads the Pulitzer Center's AI Spotlight Series, a program training thousands of journalists around the world on how to cover AI.

Through the AI Spotlight Series, Hao is not just reporting on AI's impact; she's building institutional capacity for other journalists to do the same, recognizing that exposing artificial intelligence's dangers requires a global network of informed reporters.

Hera Rizwan: Documenting AI's Threats to Democracy and Free Expression

While Hao investigates American tech companies from within the system, Hera Rizwan, a reporter for India's Decode, has published several stories exposing how AI can be used to sow confusion in the country's electorate. Rizwan's work highlights how artificial intelligence threats manifest differently depending on where journalists operate and what political context they navigate.

India's press environment has become increasingly hostile to independent journalism, yet Rizwan persists in documenting how AI accelerates threats to democratic institutions.

Rizwan has stated that AI is "definitely reshaping the ecosystem we operate in" and that "the worrying part is that it's putting a lot of power in the hands of big tech platforms and governments, many of whom aren't exactly champions of press freedom."

Her investigations have exposed AI-generated voice cloning used in elections across multiple countries, deepfake content spread through social media, and surveillance systems powered by artificial intelligence that enable government control.

Financial Times investigations found that AI-generated voice-cloning tools had been used in the 2024 US Democratic presidential primary (to falsely simulate then-President Joe Biden telling people not to vote), as well as in the 2023 elections in Sudan, Ethiopia, Nigeria, and India.

Her reporting demonstrates that AI risks aren't abstract concerns for future policymakers to sort out. They're immediate threats reshaping how democracy functions across continents.

Rana Foroohar: Following the Money and the Consequences

Rana Foroohar is an American author, business columnist, and associate editor at the Financial Times who also serves as CNN's global economic analyst. Her approach to AI journalism emphasizes the economic consequences and power dynamics beneath the technology's surface.

Rather than getting lost in technical capability debates, Foroohar asks practical questions. Who profits from artificial intelligence development? Who bears the costs? What does widespread automation mean for workers already struggling with wage stagnation? How can technology built with inadequate labor protections be considered progress?

Foroohar's analysis positions AI within larger economic trends that have hollowed out the middle class, reduced worker bargaining power, and concentrated wealth among technology executives and venture capitalists.

Her columns regularly challenge the assumption that artificial intelligence advancement inherently serves human flourishing. Instead, she documents how corporations deploy AI while simultaneously extracting value from workers and communities.

Her approach proves that AI criticism needn't come from technophobes or Luddites. Some of the sharpest critics are economists and business journalists with a deep understanding of markets, capital allocation, and corporate incentives.


The Pulitzer Center's AI Accountability Network: Scaling Investigative Capacity

Individual journalists matter profoundly, but institutional support multiplies impact. Through the AI Accountability Fellowships, the Pulitzer Center provides journalists with financial support, a community of peers, mentorship, and training to pursue ten-month, in-depth reporting projects that interrogate how AI systems are funded, built, and deployed by corporations, governments, and other powerful actors.

Over the first three cohorts, the Network has supported 27 Fellows reporting in 22 countries on themes crucial to equity and human rights, including the environmental impact of data centers, how gig workers unknowingly train artificial intelligence systems used for repression, and how governments use AI in social welfare.

This network approach addresses a critical problem: if only a handful of journalists globally possess expertise in AI investigation, coverage remains sparse and fragmented.

What Makes These Journalists Different

Several characteristics distinguish this cohort from typical technology reporters. First, they refuse to accept corporate narratives at face value. They dig into the actual supply chains, interview workers at all levels, and follow money trails to understand real-world consequences.

Second, they combine technical literacy with humanistic inquiry. They can explain how transformers work in language ordinary readers comprehend, but they focus that explanation toward understanding impact rather than celebrating innovation.

Third, they demonstrate intellectual humility about AI's trajectory. Rather than making confident predictions about artificial general intelligence timelines, they document present harms while acknowledging uncertainty about the future. This balanced approach builds credibility with readers who might otherwise dismiss AI criticism as alarmism.

Fourth, they work across borders and in collaboration with others. Hao partners with Chilean activists and journalists. Rizwan documents global surveillance and electoral manipulation. Foroohar connects labor movements across continents. They recognize that artificial intelligence's implications transcend national boundaries and require international journalism networks to document properly.


The Resistance They Face

Exposing AI's dark side carries costs. Companies use legal threats, reputation attacks, and access denial to discourage critical coverage. Workers who talk to journalists risk retaliation.

Newsrooms increasingly prioritize cost-cutting over investigation, making in-depth AI reporting harder to sustain financially. Journalists face pressure from both industry and some policy circles to either celebrate AI or dismiss it entirely, leaving little room for nuanced critique.

Despite these obstacles, journalists like Hao, Rizwan, and Foroohar persist because the stakes demand it. Artificial intelligence is reshaping labor, democracy, privacy, and environmental sustainability. Leaving documentation of these changes exclusively to corporate communications departments and government officials would be journalistic abdication.


The Path Forward

The journalists exposing AI's dark side are performing essential democratic work. They're creating the record of this moment in history. Future policymakers, researchers, and citizens will look back at this coverage to understand how corporations built artificial intelligence, what choices companies made when profits conflicted with ethics, and what communities endured during this transformation.

The most important aspect of their work may not be individual stories but the broader cultural shift they're creating. Each investigation that documents AI bias makes the next corporate pledge to ethical AI less credible.

Each exposure of labor exploitation makes the next startup's claims about responsible AI trickier to sell. Each demonstration that AI-generated deepfakes threaten democracy makes election officials take security more seriously.

These journalists remind us that technology doesn't determine history. Human choices do. Artificial intelligence doesn't inevitably harm workers, exploit communities, or undermine democracy. But without journalists holding power accountable and documenting the choices being made right now, those harms become easier to inflict and deny.

That's why Karen Hao, Hera Rizwan, Rana Foroohar, and countless other investigative reporters deserve recognition and support. They're doing what journalism does best: telling truths the powerful would prefer remained hidden.


Fast Facts: Journalists Exposing the Dark Side of AI Explained

Who are the leading journalists investigating artificial intelligence's harms?

Karen Hao, a former MIT Technology Review senior AI editor, was the first journalist to profile OpenAI. Hera Rizwan at India's Decode exposes AI's electoral threats. Rana Foroohar at Financial Times analyzes artificial intelligence's economic consequences. These reporters combine technical expertise with investigative rigor, documenting labor exploitation, environmental damage, and democratic risks.

How do these journalists expose AI's dark side effectively?

The Pulitzer Center's AI Accountability Network supports journalists reporting in 22 countries on how AI systems impact communities. They conduct extensive interviews, follow supply chains globally, and prioritize human impact over corporate narratives. Hao's book drew on 250+ interviews to investigate artificial intelligence development thoroughly.

What obstacles do these AI investigative journalists face?

Newsrooms lack funding for sustained artificial intelligence investigations while corporations use legal threats and reputation attacks to discourage critical coverage. Workers fear retaliation for speaking to reporters. Journalists struggle to balance technical accuracy with accessibility. Supporting the AI Accountability Network helps journalists overcome resource limitations and pursue deep investigations into artificial intelligence's implications.