

In 1983, a US report titled A Nation at Risk sparked a revolution in education. Today, a new challenge looms: generative AI. Unesco estimates 244 million children worldwide lack access to schooling, while those in classrooms face a paradoxical disruptor — AI tools like ChatGPT, which 67 per cent of university students now use for assignments (Stanford University, 2023).
Yet history reassures us: education adapts. When calculators entered schools, critics warned of collapsing math skills. Instead, they became tools for deeper exploration. Similarly, generative AI need not undermine learning, provided we act decisively. Every semester, I discuss with my students the exciting challenge of how we can benefit from emerging generative AI tools, such as ChatGPT and DeepSeek, to improve problem-solving and critical-thinking skills instead of treating them as the place to go for quick and easy answers. Here are five risks AI poses to education, paired with proven solutions and success stories from Oman and beyond.
The first risk is short-cutting critical thinking. Generative AI’s instant answers risk replacing deep analysis with quick fixes. At King’s College London, professors partnered with students to design AI-generated simulations for medical training, letting them analyse synthetic patient data and practise diagnoses in risk-free environments; a 2023 study showed a 32-per cent improvement in diagnostic accuracy compared to traditional methods. In Oman, educators noticed students increasingly relying on tools like ChatGPT to draft essays without engaging with core concepts. Here, the solution is to integrate AI as a collaborative tool. Oman’s Ministry of Education partnered with Microsoft in 2022 to pilot “AI Co-Lab”, a programme training teachers to design assignments in which students debate AI-generated content. For example, high schoolers in Muscat use ChatGPT to draft essays on climate change, then work in teams to identify biases, gaps and inaccuracies in the text. After one year, students in the programme scored 28 per cent higher on critical thinking assessments than peers in traditional classrooms, proving AI can fuel, not replace, intellectual excellence.
The second risk is outdated accreditation systems. Traditional grading often fails to measure AI-augmented skills like prompt engineering or ethical AI use. What is the solution? Modernise accreditation frameworks. Saudi Arabia’s Education and Training Evaluation Commission (ETEC) now requires universities to embed AI literacy into degree programmes. At King Saud University, students earn micro-credentials for mastering AI tools in research, with employers like NEOM and Aramco prioritising these badges in hiring.
Another risk is the lecture-versus-AI knowledge gap: why attend lectures when AI delivers facts faster? One solution could be to prioritise hands-on learning. MIT’s “Introduction to AI” course reduced lecture hours by 50 per cent, assigning students to build AI tools that address real-world issues, such as optimising solar grids in rural India. Post-course surveys noted a 27-per cent rise in conceptual understanding.
Similarly, the UAE’s Mohammed bin Zayed University of Artificial Intelligence (MBZUAI) slashed lecture hours by 40 per cent, redirecting time to AI-driven projects. Students in Abu Dhabi recently designed a ChatGPT-powered tutor for Arabic grammar, which reduced errors in primary school writing assignments by 33 per cent during trials.
A risk I face every semester is that teachers are left behind. Many of us educators lack the training to harness AI effectively. What I hope educational ecosystems will do is invest more in teacher empowerment. For example, Finland’s “AI Educator Grants” award RO 5,000 to teachers developing AI-enhanced lesson plans. One recipient created a ChatGPT-powered debate coach for Helsinki High School, improving students’ argumentation skills by 35 per cent in six months. Similarly, Qatar’s “TeachTech” initiative upskills educators through AI workshops led by Carnegie Mellon University. In Doha, physics teachers now use AI to simulate complex experiments, cutting lab preparation time by 50 per cent while boosting student engagement.
The fifth and final risk is policy lag. Bureaucratic delays leave schools unprepared for AI’s pace. Here, there is no better solution than fostering agile leadership. Singapore’s SkillsFuture initiative partnered with Google to launch AI “sandboxes” in 30 schools, where teachers test tools like Gemini for personalised feedback. Early data shows a 22-per cent reduction in administrative tasks, freeing educators to mentor. Bahrain’s National AI Strategy allocates 20 per cent of its education budget to AI pilot programmes. At the University of Bahrain, administrators and tech firms co-developed an AI plagiarism detector tailored to Arabic texts, reducing cheating incidents by 45 per cent in 2023.
The GCC’s proactive stance offers a blueprint. Oman’s AI Co-Lab, Saudi Arabia’s credential reforms and Bahrain’s policy agility prove that AI can elevate education when met with creativity, not fear. As the global edtech market surges towards RO 156 billion by 2025 (HolonIQ), schools worldwide must learn from these pioneers. The lesson? AI is not a threat; it is the next chapter in education’s endless story of reinvention.