Wednesday, February 25, 2026 | Ramadan 7, 1447 H
EDITOR IN CHIEF: ABDULLAH BIN SALIM AL SHUEILI

How AI is reshaping corporate governance and D&O risk


Once the citadel of human judgment and discretion, corporate governance has rapidly evolved to absorb extensive inputs from artificial intelligence tools. AI today resembles a gold rush, with governance and regulation still playing catch-up. Unsurprisingly, this imbalance is driving a steady expansion of Directors’ and Officers’ (D&O) liability exposure.


At the heart of AI lies a fundamental and uncomfortable paradox: the more powerful and complex AI systems become, the less transparent they are to human oversight. Functionally, this opacity makes it difficult to trace, explain, or correct problematic outcomes. Ethically, it raises a more profound question — how can organisations place trust in systems whose decision-making logic is not fully understood, even by their creators?


Recent high-profile cases illustrate how AI deployments can fall foul of regulators and public scrutiny.


One such example is the Apple Card controversy, where allegations surfaced that algorithms used to set credit limits disproportionately disadvantaged women. Apple co-founder Steve Wozniak publicly complained about the issue, prompting regulatory attention. Goldman Sachs, the issuing bank, stated that its credit decisions were based solely on creditworthiness and not on prohibited factors such as gender or race. While investigations highlighted the complexity of algorithmic credit models rather than deliberate discrimination, the episode underscored the reputational and governance risks associated with opaque AI systems.


Another widely cited case is COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), an algorithm used in several US jurisdictions to assess recidivism risk. Investigations and academic studies have criticised the tool for exhibiting racial bias, particularly in flagging Black defendants as higher risk at disproportionately higher rates. Although COMPAS developers have contested some of these findings, the controversy demonstrated how algorithmic outputs could materially influence life-altering decisions — and expose institutions to legal and ethical challenges.


Bias can creep into AI models from multiple sources: historical datasets that reflect entrenched socio-economic inequalities; under-representation of disadvantaged groups in AI development teams; and flawed testing or validation protocols. As organisations integrate AI into core operations and products, directors and officers must navigate an increasingly complex risk environment — one that is already testing the limits of existing insurance frameworks.


To mitigate AI-related D&O liability, corporate governance structures must evolve. Key measures include:


Bias audits: Routine, mandatory and independent assessments to detect and correct distortions in AI models.


Transparency mandates: Clear insight into decision logic, particularly for consequential outcomes such as loan approvals, recruitment, or legal assessments.


Accountability hierarchies: Defined governance roles, including AI compliance or ethics officers, to manage oversight and respond to failures.


Legal alignment: Adherence to emerging regulatory regimes, such as the EU AI Act, to ensure both functional and ethical compliance.


Hybrid decision models: Retaining meaningful human judgment at critical AI decision points.
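To make the bias-audit measure above concrete, here is a minimal sketch of one widely used screening metric, the disparate impact ratio (the "four-fifths rule" from US employment-discrimination practice). The approval records, group labels and the 0.8 threshold below are illustrative assumptions, not drawn from any of the cases discussed:

```python
# Illustrative bias-audit metric: the disparate impact ratio, comparing
# favourable-outcome rates between a protected group and a reference group.
# All data here is hypothetical.

def selection_rates(outcomes):
    """Map each group to its rate of favourable outcomes.

    `outcomes` is a list of (group, favourable: bool) pairs.
    """
    totals, favourable = {}, {}
    for group, ok in outcomes:
        totals[group] = totals.get(group, 0) + 1
        favourable[group] = favourable.get(group, 0) + (1 if ok else 0)
    return {g: favourable[g] / totals[g] for g in totals}

def disparate_impact_ratio(outcomes, protected, reference):
    """Ratio of the protected group's selection rate to the reference group's.

    Ratios below roughly 0.8 are commonly flagged for further review
    under the "four-fifths rule".
    """
    rates = selection_rates(outcomes)
    return rates[protected] / rates[reference]

# Hypothetical credit-approval records: (group, approved?)
records = (
    [("A", True)] * 60 + [("A", False)] * 40 +   # group A: 60% approved
    [("B", True)] * 40 + [("B", False)] * 60     # group B: 40% approved
)
ratio = disparate_impact_ratio(records, protected="B", reference="A")
print(round(ratio, 2))  # 0.40 / 0.60 ≈ 0.67, below the 0.8 screening threshold
```

A ratio alone does not prove discrimination — as the Apple Card episode showed, apparent disparities can stem from legitimate creditworthiness factors — but routinely computing such metrics gives boards an auditable, documented record of oversight.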


Beyond bias, boards must also confront the growing risk of ‘AI washing’ — the exaggeration or misrepresentation of AI capabilities to investors and stakeholders. A notable case involves Albert Saniger, founder and former CEO of Nate Inc, who is facing enforcement action in the US for allegedly making materially false and misleading statements about the company’s AI capabilities in order to raise funds. The case sends a clear signal: misrepresentation of AI use or sophistication can trigger regulatory action with direct consequences for directors and officers.


Another emerging concern is aggregation risk. Traditionally associated with insurers covering multiple properties in a single geographic area, aggregation risk in the AI era may arise when a common algorithm, model, or data dependency simultaneously affects multiple companies, sectors, or jurisdictions. Just as cyber incidents have demonstrated the potential for systemic, cross-border impact, AI failures could generate cascading claims across industries — an exposure boards must now factor into their risk assessments.


As AI-enabled corporate practices continue to evolve, so too will governance expectations and liability exposure. For insurers, this translates into heightened underwriting scrutiny and, inevitably, rising D&O premiums. For boards, the message is clear: AI governance is no longer a future concern — it is a present fiduciary responsibility.

