
Machines mirror humanity: The inherent bias of Artificial Intelligence


Artificial Intelligence was once hailed as the dawn of impartiality, a tool capable of removing human prejudice from decision-making. From global corporations to national policies, AI promised to bring logic where emotion once ruled.


Today, however, decisions both minor and major are made by machines: increasingly, algorithms decide who gets a loan, who gets promoted, and which patient receives care. One reality we all need to acknowledge and accept is that AI inherits our biases. The term “inherent bias” does not imply that machines have opinions; it means bias is built into their foundations.


Every AI model is trained on data generated by humans, reflecting our behaviour, inequalities, and historical prejudices. In essence, the machine learns from us, and we, as a society, are not neutral. Global studies have shown that facial recognition systems often misidentify women and people of colour at far higher rates than others.


This is not deceit or malice but an imbalance. Male faces with lighter skin tones dominate the data used to train these algorithms, and when such a system is applied to millions of users, that disparity quietly hardens into prejudice.


Similarly, hiring algorithms trained on decades of corporate data may unconsciously favour male-coded resumes, credit-scoring systems may downgrade applicants from specific neighbourhoods, and in healthcare, AI trained largely on Western datasets may misdiagnose patients from other ethnic backgrounds.


These systems discriminate based on inheritance, not intent. AI's inherent bias is not a technical flaw; it's a societal mirror. These systems reflect humanity, magnifying both our strengths and our shortcomings.


The solution lies not in abandoning AI, but in designing it consciously. It means implementing AI ethics boards, conducting regular bias audits, and ensuring that developers, policymakers, and users understand not only what the AI predicts, but why.
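To make the idea of a bias audit concrete, here is a minimal sketch in Python of one common check: comparing a model's approval rates across demographic groups. The data, group labels and the "four-fifths" threshold are illustrative assumptions, not figures from this article or from any real system.

from collections import defaultdict

# Hypothetical audit log of (applicant_group, model_decision) pairs,
# where 1 means the model approved the application and 0 means it did not.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

# Count approvals and totals for each group.
totals, approvals = defaultdict(int), defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    approvals[group] += decision

# Approval (selection) rate per group.
rates = {group: approvals[group] / totals[group] for group in totals}

# Disparate-impact ratio: lowest approval rate divided by the highest.
# The widely cited "four-fifths rule" flags ratios below 0.8 for human review.
ratio = min(rates.values()) / max(rates.values())

for group, rate in rates.items():
    print(f"{group}: approval rate {rate:.2f}")
print(f"disparate-impact ratio: {ratio:.2f} -> {'needs review' if ratio < 0.8 else 'ok'}")

A real audit would, of course, run checks like this regularly, across many metrics and far larger datasets, and feed the results to an ethics board rather than to a print statement.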


Global authorities, including the European Union, are enacting legislation to categorise and oversee "high-risk AI systems." The ultimate objective is to develop responsible algorithms rather than flawless ones. The real test will be whether societies choose to acknowledge and address bias, which will always exist in some form.


From a regional perspective, Oman, under Vision 2040, is rapidly embracing artificial intelligence to power its healthcare systems, logistics networks, and innovative governance. As a foundation for that future, the National AI and Advanced Technologies Strategy places a strong emphasis on innovation, digital skills, and local data sovereignty.


However, as AI becomes a significant factor in national decision-making, the debate now centres on how Oman should use it responsibly rather than whether it should be used at all. The resulting models may not accurately represent Omani reality if their training data comes from regions with different social norms or demographics.


A recruiting system may overlook regionally appropriate qualifications; an educational AI might misread cultural learning patterns; and a forecasting algorithm might overestimate a local company's success. To guard against this, Oman must ensure that every AI project is grounded in local data, ethics, and governance. AI should be trained not only to perform effectively but to do so with respect, attentive to cultural nuance and upholding equity and transparency. While developing policies in line with their own national and cultural values, Oman and its GCC partners can build on these international initiatives.


"If AI is the mirror of our civilisation, then fairness in algorithms begins with fairness in society itself."

Dr Mythili Kolluru

K V Ch Madhu Sudhana Rao


The writer is an AI researcher based in the USA

