
Ethical questions remain as universities embrace AI



An anecdote in a leading journal has recently created a buzz in higher education. A student 'asked for a little help' in writing a paper due the next morning, and within seconds a finished paper appeared on her screen. The student panicked, unsure what to do next. Could she submit the paper as it was? Even if she edited it, would it be ethical to submit a paper generated from her prompts? The prompts were hers, but the language was not.


In a world increasingly driven by Artificial Intelligence (AI), such ethical questions are becoming urgent. What counts as an original work? If a student feeds continuous prompts to an AI platform to improve and customise an essay, a poem or an engineering project, can the final work be considered original?


These are increasingly difficult but necessary questions to ask in higher education.


Policies around AI in higher education suggest that students need to be introduced to such technologies and trained to use them responsibly. But that is easier said than done. AI tools can help students brainstorm ideas or create an outline, but they can also be used to generate an entire paper without genuine understanding. When AI is used to solve problems or write code, the line between assistance and automated production blurs.


An interesting example points to issues of critical thinking. When an AI platform was asked how to ensure that cheese doesn't fall off a pizza, the quickest answer it offered was glue. The answer may be literally correct but is contextually wrong. An automated response cannot be accepted without the kind of judgment that basic human education and intervention provide.


Privacy of data is another crucial aspect of AI usage of which we are not sufficiently aware. The new information a platform generates comes from our questions and prompts; AI is developing faster because we are using it more. Who, then, owns this data? The clearest answer is that it is owned by the companies that host such platforms. Our original work goes into a dataset that will churn out the same material in new ways. Again, institutions of higher education do not, so far, have official policies on protecting such data.


If not managed properly, this data could be misused, leading to breaches of privacy or biased decision-making. Research has already shown that algorithms tend to show us what we already sympathise with in terms of our personal leanings. This makes critical thinking even more difficult, as we must go out of our way to confront opposing ideas. Again, universities need to educate learners about such biases and show them ways of countering them.


Responsible implementation of AI lies at the core of teaching and learning. While blanket use of AI technologies may be tempting, it is not beneficial for young learners. It preempts the critical thinking and original problem-solving that remain at the core of higher education.


Crafting thoughtfully curated policies that focus on ethical literacy and train teachers and students in the responsible use of AI is the only way forward.

