Tuesday, March 10, 2026 | Ramadan 20, 1447 H
Editor-in-Chief: Abdullah bin Salim al Shueili

Fake AI satellite imagery spurs war disinformation


The satellite image posted by an Iranian news outlet looked real: a devastated US base in Qatar. But it was an AI-generated fake, underscoring the accelerating threat of tech-enabled disinformation during wartime.


The rise of generative AI has turbocharged the ability of state actors and propagandists to fabricate convincing satellite imagery during major conflicts, a trend that researchers warn carries real-world security implications.


As the US-Israeli war against Iran rages, the Tehran Times, an English-language daily, posted on X a "before vs after" image it claimed showed "completely destroyed" US radar equipment at a base in Qatar.


In fact, it was an AI-manipulated version of a Google Earth image from last year of a US base in Bahrain, researchers said.


The subtle visual giveaways included a row of cars parked in identical positions in both the authentic satellite photo and the manipulated image. Yet the manipulated photo garnered millions of views as it spread across social media in multiple languages, illustrating how users are increasingly failing to distinguish reality from fiction on platforms saturated with AI-generated visuals.


Brady Africk, an open-source intelligence researcher, noted an "increase in manipulated satellite imagery" appearing on social media in the wake of major events, including the Middle East war.


"Many of these manipulated images have the hallmarks of imperfect AI-generation: odd angles, blurred details, and hallucinated features that don't align with reality," Africk said.


"Others appear to be images manipulated manually, often by superimposing indicators of damage or another change on a satellite image that had no such details to begin with," he said.


Information warfare analyst Tal Hagin flagged another AI-generated satellite image purporting to show that Israeli-US jets had targeted the painted silhouette of an aircraft on the ground in Iran, while Tehran seemingly moved real planes elsewhere. The telltale clues included gibberish coordinates embedded in the fake image, which spread across sites including Instagram, Threads and X.


In that image, AFP detected a SynthID, an invisible watermark meant to identify images created using Google AI.

The fabricated satellite images follow the emergence of imposter OSINT (open-source intelligence) accounts on social media that appear to undermine the work of credible digital investigators.


"Due to the fog of war, it can be very difficult to determine the success of an adversary's strikes. OSINT came as a solution, using public satellite imagery to circumvent the censorship" inside countries like Iran, Hagin said.


"But it's now being preyed upon by disinformation agents," he added.


Reports of fake satellite imagery created or edited using AI also followed the Russia-Ukraine conflict and the four-day war between India and Pakistan last year.


"Manipulated satellite imagery, like other forms of misinformation, can have real-world impacts when people act on the information they come across without verifying its authenticity," Africk said.


"This can have effects that range from influencing public opinion on a major issue, like whether or not a country should engage in conflict, to impacting financial markets."


In the age of AI, authentic high-resolution satellite imagery collected in real time can give decision-makers vital clues to assess security threats and debunk falsehoods from unverified sources.


During a recent militant attack on Niamey airport in Niger, satellite intelligence company Vantor said it detected images circulating online purporting to show the main civilian terminal on fire.


The company's own satellite imagery helped confirm that the photos were fake, almost certainly generated using AI, Vantor's Tomi Maxted said.


"When a satellite image is presented as visual evidence in the context of war, it can easily influence how people interpret events," Bo Zhao, from the University of Washington, said.


As AI-generated imagery grows increasingly convincing, it is "important for the public to approach such visual content with caution and critical awareness," Zhao said.

