Opinion

The new face of fraud: How AI is outsmarting human trust

Fraud has always evolved alongside technology, but never as quickly, or as quietly, as it does today. Across the Middle East, a new form of digital deception is emerging, powered not by criminals hiding behind screens but by artificial intelligence capable of mimicking the people we trust most.
Deepfake voices, synthetic videos and AI-generated messages have begun to blur the line between what is real and what is artificially constructed. And as our businesses and institutions accelerate their digital transformation, the threat is growing faster than our awareness.
The most unsettling aspect of AI-driven fraud is how easily it manipulates human psychology. We naturally tend to trust familiar voices, faces, and communication styles. When a message seems to come from someone we know, our instinct is to respond, not to question.
This instinct, once a strength in traditional communication, has now become a weakness. A few seconds of recorded speech are enough for AI to clone a voice almost perfectly. From a handful of photos, it can generate a realistic video that looks authentic to the untrained eye. In this new environment, our senses are no longer dependable guardians.
This vulnerability is particularly acute in a region where digital adoption is accelerating. The Middle East has embraced artificial intelligence in banking, government services, education, energy and commerce faster than many other regions.
However, every new development creates a fresh opportunity for exploitation. A sophisticated AI-generated message instructing an employee to transfer funds, release confidential information or grant system access can cause severe damage before anyone realises the request did not come from a human.
While cybersecurity experts often lead discussions about digital threats, internal auditors and governance professionals are quietly becoming an essential line of defence. AI-enabled fraud targets not only systems but also processes, controls and behaviours.
Weak approval chains, informal communication habits, outdated verification procedures and siloed decision-making create vulnerabilities through which AI-driven deception can go unnoticed. Many internal audit functions still rely on tools designed for a different era, even as the threats they face evolve at an unprecedented pace.
What makes AI fraud especially dangerous is that the failure point is rarely technological. Most breaches happen because a person trusts what they hear or see. A voice message sounds urgent. A request feels real. A video looks legitimate. AI’s strength is its ability to exploit moments of pressure, distraction or routine.
This means that strengthening our defences is not just about tightening systems but also about changing culture: creating workplaces where verification is encouraged, questions are welcomed and employees feel confident enough to pause before acting.
To prepare for this new reality, organisations must rethink their risk strategies. Protecting against AI-enabled fraud involves more than just firewalls or software. It requires solid governance frameworks that specify how AI is used, who oversees it and how risks are escalated. It also involves internal auditors who understand digital behaviours and can anticipate potential misuses of technology. Additionally, it calls for leadership willing to invest in awareness at all levels of the organisation, not just at the top.
Artificial intelligence is not the enemy. It is a powerful tool that can transform economies and improve lives.
But like every tool, it can be misused. The real challenge is whether our systems, our controls and our people are equipped to recognise deception in an age when deception can be manufactured in seconds.
The future of fraud is already here, and it is speaking to us in familiar voices. The question is no longer whether AI will reshape the landscape of digital crime; it is whether we are prepared to question what we once accepted without doubt.

Dr Suaad Jassem

The writer is Assistant Professor of Accounting and Auditing, College of Banking and Financial Studies.