
Can you believe the documentary you’re watching?

A combination of technological developments and market forces is undermining the trust between viewer and film-maker. What’s at stake is history itself.

Like a surging viral outbreak, artificial intelligence-generated video has suddenly become inescapable. It’s infiltrated our social feeds and wormed its way into political discourse. But documentarians have been bracing for impact since before most of us even knew what the technology could do.


Documentaries fundamentally traffic in issues of truth, transparency and trust. If they use so-called synthetic materials but present them as if they’re “real”, it’s not just a betrayal of the tacit contract between film-maker and audience. The implications are far broader and far more serious: a century of shared history is in jeopardy.


Whispers that documentarians were using materials created with generative AI started surfacing several years ago. “Roadrunner”, the 2021 film about Anthony Bourdain, set off controversy when it failed to disclose that several lines of his “voice-over” had been generated with software trained on existing samples. That one made the news. But in other productions, sometimes cloaked by non-disclosure agreements, more was going on than many audiences knew.


In 2023, a group of documentary producers formed the Archival Producers Alliance and published an open letter to their industry calling for greater transparency, listing ways generative AI had been used without disclosure.


You might be shocked by what they pointed out: artificially created historical voices, which lead audiences “to believe they are hearing authentic primary sources when they are not”; “AI-generated ‘historical’ images”; “fake newspaper articles”; and “non-existent historical artefacts.” In other words, you may have watched a documentary in the past few years and thought what you were seeing was real — but it wasn’t.


Members of the Archival Producers Alliance, seasoned producers and directors, saw exactly how damaging this trend could be, not just for documentaries but also for the shared public record. In 2024, they announced a comprehensive set of best practices. The guiding principles are transparency and trust: The audience should know when any AI tool is used, whether to enhance a damaged photo, sharpen an audio clip, or create a voice from written text and recordings.


But this may be even more important: The guidelines are concerned with protecting history itself — with not “muddying the historical record.” If enhanced or generated material is used, the alliance suggests, the disclosure should be made on screen, not saved for the credits. Why? Every film that’s streamed can be sliced, diced and clipped. Any piece then becomes part of “the archive” — discoverable online and divorced from the larger context of the film, making it easy for someone to assume the clip is real.


That’s scary enough. But believe it or not, AI raises an even bigger issue for documentaries — and, in turn, for all of us.


In September, a clip surfaced in which a black garbage bag was apparently tossed from a White House window. The White House press office said a contractor was doing routine maintenance. But President Donald Trump falsely declared the clip AI-generated anyway, and made some revealing remarks.


“One of the problems we have with AI, it’s both good and bad,” he said. “If something happens, really bad, just blame AI. But also they create things. You know, it works both ways.” That phenomenon has a name: “liar’s dividend”, a term coined by two law professors in 2019. The idea is simple. We’re becoming more aware of how easy it is to create convincing fake videos, which means people who claim real videos are fake are becoming more persuasive, too. If they’re caught on video but claim the video is AI-generated, we’re more likely to believe them. Or at least we might feel pangs of doubt.


With the release of OpenAI’s video generator Sora 2 in September, the world irrevocably changed. Once the software is widely available, it will be possible for anyone to make a video of pretty much anything, and fast.


Most people can understand the obvious consequence of this earthshaking moment. Every video is now Schrödinger’s video: it’s both real and not real. We can take the liar’s dividend one frightening step further. In this brave new world, no claim that a video is real will ever be fully persuasive.


Documentarians have been fighting back. The newly formed Trust in Archives Initiative, for instance, is working on ways to authenticate and protect genuine archival materials. The Coalition for Content Provenance and Authenticity is developing an open technical standard that can certify the source of online content, and representatives from tech giants Google, Amazon, Meta and OpenAI are on the steering committee. Organisations like Witness are working with people documenting human rights violations, equipping them with resources for collecting video that is harder to discredit and helping authenticate material in real time.


The current market for documentaries is dismal. The demand from streaming platforms for movies about crimes, cults and celebrities leads to slapdash aesthetics and rushed timelines. In that market, the temptation to use AI-generated footage is high, and it’s often rewarded by viewers. In a real sense, we are part of the problem. Just going to the theatre to see a documentary, or paying a few bucks to digitally rent one with high artistic standards, can go a long way toward reviving the market.


This sounds bleak, because it is. But that doesn’t mean it’s entirely hopeless. Our shared history deserves protection, and we all need to be part of the solution. Perhaps documentarians are the ones best suited to help us rethink what trust, transparency and authenticity really look like when we can’t believe our eyes.

Alissa Wilkinson


The author has been covering non-fiction film-making for 15 years. She writes the Documentary Lens column for NYT.

