AI Anchors & Deepfake Dilemmas


What generative AI means for storytelling, newsroom ethics, and public trust


In Japan, a moment of technological bravado turned into a reflection of society’s unease about the future of information. In 2018, NHK, Japan’s public broadcaster, introduced a collaboration with “Kizuna AI”, an AI-generated news anchor designed to deliver updates seamlessly in multiple languages. The debut was streamed live to millions, and the reaction was swift and divided. Some viewers marvelled at the innovation, seeing it as a step forward; others felt unsettled, questioning whether they were watching genuine journalism or a synthetic facsimile. “It felt surreal; I was watching a robot instead of a person,” said one viewer on social media. “It made me wonder what the future holds for real news anchors.”


This incident exemplifies a larger phenomenon: the gradual integration of artificial intelligence into newsrooms around the world—raising profound questions about authenticity, trust, and the very foundation of journalism.


Artificial intelligence is no longer a speculative tool; it is actively reshaping how news is produced and delivered across continents. In South Korea, major broadcasters such as KBS and MBC have piloted AI newsreaders capable of delivering stories in multiple languages around the clock, without the fatigue, and arguably the bias, that human anchors sometimes bring. These systems rapidly analyse enormous datasets and generate scripts, allowing faster coverage of breaking news.


In Russia, the state-run news channel Rossiya-24 has introduced a virtual presenter that delivers the news with a natural voice and facial expressions resembling those of a human anchor. Meanwhile, in Europe, outlets such as Deutsche Welle have begun experimenting with AI-assisted translation and summarisation tools, making content accessible to a broader audience.


These initiatives are driven by practical benefits: lower costs, faster coverage (especially during crises) and more personalised news. Yet the ethical landscape is murky. As Dr. Sarah Thompson, media analyst at The Guardian, remarks, “While AI can be a powerful tool, it risks depersonalising news, eroding verification standards and raising questions about accountability, especially when AI-generated content goes viral and causes harm.”


Deepfakes and the Misinformation Menace


The public’s response to AI in journalism is varied and complex. Some admire the technological progress and efficiency of virtual anchors, appreciating the novelty and convenience. Many others, however, express discomfort, distrust, even disdain. “It’s unsettling,” says one social media user, a teacher from London. “I don’t feel like I’m getting honest news if I can’t see a real person behind it. There’s something about authenticity that’s missing.”


According to a 2024 survey by the Pew Research Center, trust in traditional media is declining globally, and the rise of AI and deepfake content is only deepening scepticism. The survey found that nearly 60% of respondents in the US believe that intentionally manipulated media will become harder to detect in the coming years, threatening a “crisis of credibility” for the news industry.


Deepfake technology, which uses AI to fabricate highly realistic videos and images, has emerged as perhaps the greatest threat to the integrity of information. In early 2024, a manipulated video of a prominent political leader making inflammatory remarks went viral, causing widespread outrage before fact-checkers conclusively debunked it. The damage extended beyond social media, influencing public opinion and political discourse.


Professor Hany Farid, a leading computer science and digital forensics expert at UC Berkeley, warns, “Deepfakes can be nearly impossible to distinguish from authentic footage. The risk is that once society loses faith in visual evidence, it becomes exceedingly difficult to trust any media source.”


A recent BBC investigation highlighted how deepfakes are increasingly being used to spread disinformation, sway elections and manipulate stock markets. As the technology becomes more accessible, the challenge is not just technical detection but societal resilience: how to safeguard truth amid a flood of synthetic content.


The rising use of AI and deepfake technology prompts urgent questions about ethics, responsibility and standards. When a news organisation publishes AI-generated content, transparency becomes critical: audiences need to know whether they are watching a human or an artificial creation.


Renowned media ethicist Robert Dreher, a professor at the University of Texas and author of Mass Media and Its Ethical Dilemmas, stated in a 2022 interview with The Atlantic, “Transparency about the use of AI in newsrooms is not optional—it’s essential. If audiences are kept in the dark, trust erodes rapidly, and the integrity of journalism is compromised.”


His comments highlight the importance of clear disclosure to maintain credibility in a landscape saturated with synthetic content.


Some outlets are already putting the technology to constructive use. The Associated Press, for example, uses AI to generate routine financial reports and sports summaries, freeing journalists to focus on investigative stories and nuanced reporting. Similarly, the BBC has developed AI-assisted tools to help verify video content, curbing the spread of deepfake misinformation.
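

To make the mechanism concrete: machine-written briefs of this kind are typically template-driven, slotting verified structured data into pre-written sentence patterns rather than letting a language model write freely. The short Python sketch below illustrates the general technique only; it is not the Associated Press’s actual system, and the company name and figures are hypothetical.

# Illustrative sketch of template-driven automated reporting.
# Not AP's real pipeline; the company and all figures are hypothetical.

def earnings_brief(company: str, quarter: str, eps: float,
                   eps_expected: float, revenue_bn: float) -> str:
    """Render a one-paragraph earnings summary from structured data."""
    verdict = "beat" if eps >= eps_expected else "missed"
    margin = abs(eps - eps_expected)
    return (
        f"{company} reported {quarter} earnings of ${eps:.2f} per share, "
        f"which {verdict} analyst expectations of ${eps_expected:.2f} "
        f"by ${margin:.2f}. Revenue for the quarter came to "
        f"${revenue_bn:.1f} billion."
    )

if __name__ == "__main__":
    # Hypothetical figures for demonstration only.
    print(earnings_brief("Acme Corp", "Q3", eps=1.42,
                         eps_expected=1.35, revenue_bn=9.8))

Because every sentence is assembled from verified data fields, systems like this avoid the fabrication risks that make free-form generative AI far more contentious in the newsroom.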


Building Resilience: Education and Regulation


The path forward requires a combination of technological innovation, legislative action and public media literacy. Educating audiences about the existence of synthetic media, and training them to recognise the hallmarks of manipulation, are crucial steps.


Regulators are also stepping in. The European Union has proposed legislation requiring platforms to label deepfake videos and take responsibility for combating misinformation. Meanwhile, tech companies like Meta and Google are developing detection algorithms designed to fight AI-generated disinformation.
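

To picture how such labelling might work in practice, consider a platform that attaches a machine-readable disclosure to synthetic media and signs it, so that downstream services can verify both the label and that the file has not been altered since it was labelled. The Python sketch below is a toy scheme for illustration only; it is not the EU’s specification or any platform’s real implementation, and the key and media bytes are placeholders (production systems rely on public-key infrastructure and emerging provenance standards such as C2PA).

# Toy sketch of tamper-evident "synthetic media" labelling.
# Not a real platform's scheme; the key and content are placeholders.

import hashlib
import hmac
import json

SECRET_KEY = b"platform-signing-key"  # hypothetical; real systems use PKI

def label_synthetic(media: bytes, generator: str) -> dict:
    """Attach a signed disclosure label to a media payload."""
    label = {
        "synthetic": True,
        "generator": generator,
        "sha256": hashlib.sha256(media).hexdigest(),
    }
    payload = json.dumps(label, sort_keys=True).encode()
    label["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return label

def verify_label(media: bytes, label: dict) -> bool:
    """Check the signature and that the media matches the labelled hash."""
    claimed = {k: v for k, v in label.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, label.get("signature", ""))
            and claimed["sha256"] == hashlib.sha256(media).hexdigest())

if __name__ == "__main__":
    video = b"...synthetic video bytes..."        # placeholder content
    tag = label_synthetic(video, "example-model-v2")
    print(verify_label(video, tag))               # True: label intact
    print(verify_label(b"edited bytes", tag))     # False: file changed

The design point is that detection and disclosure are complementary: a signed label travels with the file and survives honest redistribution, while detection algorithms remain necessary for content that was never labelled at all.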


Scholars echo this ambivalence. “When I see a digital face delivering the news, I wonder if I can trust it,” said David Bromwich, a media scholar and professor at Yale University, in a 2023 interview with The New York Times. “There’s a human element missing, and that raises questions about authenticity and integrity.”


Embracing Vigilance and Ethical Progress


The rise of AI-driven news, virtual anchors and deepfakes reveals the necessity of a cultural shift in how we consume and trust information. As Dr. Emily Carter of The Times notes, “We need to develop a more sceptical, discerning public media literacy. Recognising that not everything we see or hear is real is now as essential as understanding the effects of climate change.”


This technological revolution forces us to confront uncomfortable truths: that much of what we may have taken for granted about the authenticity of news is shifting beneath our feet. Trust, once a given, now requires active safeguarding.


The Japanese AI anchor incident exemplifies both the promise and peril of this new era. As AI continues to infiltrate every facet of media, it is increasingly vital for society—audiences, journalists, policymakers, and technologists—to work in concert. Only through transparency, regulation, and education can we hope to harness AI’s potential for good without sacrificing the integrity of public discourse.


The real story isn’t just about the technology itself; it’s about what kind of society we choose to build around it. Will we accept AI’s marvels with open eyes, or fall prey to its manipulations? The choice is ours—but the stakes could not be higher.

