

Among the many rich sessions of the 8th Astana Finance Days 2025 (#AFD2025), none struck a deeper chord than the panel titled “Regulating the Future – How SupTech and AI Are Reshaping Financial Markets” on September 5. This session did not merely offer speculative futures — it directly confronted the growing imperative to govern the tools we are increasingly adopting to govern others: artificial intelligence (AI) and supervisory technology (SupTech).
As financial regulators around the world explore the potential of AI, the narrative often defaults to generative AI models — tools designed primarily for linguistic processing, not legal judgement. This disconnect can lead to both missed opportunities and regulatory hazards. What became abundantly clear during the session, especially from the interventions of Prof Wardrobe and legal adviser David Simpson, is that the future of AI in regulation depends not on simply using AI, but on governing its use with integrity, precision and institutional self-awareness.
Prof Wardrobe made a powerful point: the foundation of effective AI-based supervision lies not in the model itself, but in the architecture of structured data. Most AI today digests text, PDFs and scanned documents — formats organised linguistically rather than logically. The result is often inconsistent, erratic and unpredictable output. In regulatory contexts, where decisions must be anchored in legality, rationality and proportional discretion, such instability is unacceptable and naturally erodes trust and confidence.
Wardrobe stressed that structured data — systematically categorised, relationally coherent and machine-readable — is not a technical luxury but a regulatory necessity. Without it, predictive models falter. Risk detection becomes noisy. Transparency is diminished. In contrast, structured data unlocks the capacity of AI to support, rather than replace, the human discretion and legal reasoning at the heart of regulatory work.
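The contrast Wardrobe draws can be made concrete. Below is a minimal, purely illustrative sketch in Python of the same enforcement notice in two forms: the free text a language model would ingest, and a structured, machine-readable record. All field names and values are hypothetical assumptions for illustration, not any real regulatory schema.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical example: the notice as unstructured prose — organised
# linguistically, so a model must guess at entities, amounts and dates.
free_text = (
    "Notice 2025/114: Acme Ltd was fined 50,000 on 5 September 2025 "
    "for late filing of its annual return."
)

# The same notice as a structured record — organised logically, with
# typed, categorised, relationally coherent fields. (Illustrative only.)
@dataclass(frozen=True)
class EnforcementNotice:
    notice_id: str   # stable, machine-resolvable identifier
    entity: str      # the regulated firm
    breach: str      # categorised breach type, not free prose
    penalty: float   # amount in a declared currency unit
    issued: date     # unambiguous, comparable date

notice = EnforcementNotice(
    notice_id="2025/114",
    entity="Acme Ltd",
    breach="late_filing",
    penalty=50_000.0,
    issued=date(2025, 9, 5),
)

# A structured record supports deterministic queries that free text
# cannot guarantee — the predictability regulatory work demands.
assert notice.breach == "late_filing"
assert notice.issued.year == 2025
```

The point of the sketch is not the code itself but the design choice it embodies: once every field is typed and categorised, risk detection and predictive models operate on stable inputs rather than on probabilistic readings of prose.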
David Simpson, speaking from his legal vantage point, added a critical dimension. He reminded us that any regulatory system powered by AI must remain reviewable in court. Courts examine not merely the outcome of regulatory decisions, but the legality of the process, the rationality of the logic and the preservation of discretionary authority. Algorithms, when unchecked, risk fossilising decision-making processes — undermining the regulator’s ability to apply judgement on a case-by-case basis. This is especially dangerous when discretion is a legal obligation, not just an operational choice.
From these insights, two key governance challenges emerge for AI in regulation.
First, data governance, where regulators must not only structure their own data pipelines but also invest in a full transformation of regulatory processes into structured, machine-actionable formats. Every inspection, licence, enforcement notice and corporate filing must be part of an integrated data ecosystem. AI must be taught not just how language works, but how regulation thinks.
Second, legal accountability. AI systems must be designed and deployed with built-in legal auditability. This includes ensuring that AI-enabled decisions are explainable, reversible and contextualised within existing legal norms. AI must never be allowed to dilute the regulator’s responsibility, nor to automate away public trust.
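What "built-in legal auditability" might look like in practice can be sketched as a decision record that preserves the model's suggestion, its rationale, and the human officer's final, accountable call. This is a hypothetical Python illustration; the class, field and method names are assumptions, not any deployed system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

# Hypothetical sketch: an AI-assisted decision stays explainable,
# reversible and attributable by recording both the machine's
# suggestion and the human reviewer's sign-off.
@dataclass
class DecisionRecord:
    case_id: str
    ai_suggestion: str             # what the model proposed
    ai_rationale: str              # the stated basis for the proposal
    reviewer: Optional[str] = None
    final_decision: Optional[str] = None
    overridden: bool = False
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def review(self, reviewer: str, decision: str) -> None:
        # Discretion is preserved: the officer may accept or override
        # the suggestion, but must always sign off personally.
        self.reviewer = reviewer
        self.final_decision = decision
        self.overridden = decision != self.ai_suggestion

record = DecisionRecord(
    case_id="LIC-0042",
    ai_suggestion="refuse",
    ai_rationale="incomplete capital-adequacy filings",
)
record.review(reviewer="J. Officer", decision="approve_with_conditions")

assert record.overridden            # the override itself is audit evidence
assert record.reviewer is not None  # no decision without an accountable human
```

Because the record retains the suggestion, the rationale and the override flag, a court reviewing the decision can examine the legality of the process and confirm that discretionary authority was actually exercised, not automated away.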
The session’s overarching message was clear: SupTech and AI can indeed catalyse a new era of regulatory precision, speed and adaptability — but only if embedded within robust legal, ethical and institutional governance frameworks. The fusion of structured data, institutional clarity and legal oversight is not optional. It is the bedrock of responsible AI in public decision-making.
#AFD2025 reminded us that the future is not just something we regulate — it is something we must govern with wisdom. And when it comes to AI, governance is not an afterthought. It is the software of trust.