
Making AI safe for humanity


The meeting to discuss the impact of artificial intelligence (AI) on our lives could not be timelier. This amazing technology may well change the world, but it will not automatically make it a better place.


Around the world, action is being taken to understand how automation and robot technology will influence human beings.


We already know that it can do good things – particularly in improving health care and allowing machines to take over thousands of boring, mindless, and repetitive tasks.


But in the hands of the wrong people – criminals, dictators, or people who want to make war not peace – this technology can be a force for evil.


That is why at the beginning of November the first-ever global AI Safety Summit was held in Britain, bringing together world leaders, tech executives, and AI experts from 26 countries, including the United States and China, as well as the European Union. They aimed to make artificial intelligence safer.


A few days later, 16 media support groups from 20 countries, including the Ethical Journalism Network, published our Paris Charter on AI and Journalism – the first of its kind – and presented it to another governmental meeting, the Paris Peace Forum. They aim to make AI a force for truth-telling, not a vehicle for malicious lies or hate speech.


The whirlwind arrival of OpenAI and its ChatGPT has led to an entrepreneurial explosion that dwarfs even the dotcom boom at the end of the last century.


In less than a year, we have had billion-dollar investments in artificial intelligence and warnings that humanity is on the brink of a technology-led catastrophe.


There is good reason to be worried, particularly for people in media and education who strive for truth-telling, which is at the heart of building public trust. But that trust is under threat like never before.


AI exacerbates what is already an existential moment for everyone’s right to access reliable and trustworthy information.


Whether we are engaged in sharing knowledge or research, or in investigating and interrogating the events and realities of human society, we provide intelligence and fact-based information that is vital for the public we serve.


We also know that technological innovation does not inherently lead to progress. For it to truly benefit humanity, it must be steered by human values, ethics, and respect for all.


But what are those ethics?


Everyone has an interest in a public information space that reflects the core values of society and democracy. These values are accuracy and truth-telling; independence rather than biased propaganda; impartial and inclusive information that reflects all shades of opinion; humanity that shows respect for other people and does no harm; and transparency, which ensures we are accountable for the information we publish.


Of course, these values are not intended to infringe people’s rights to free expression. Everyone has the right to say things that they believe in and with which we might not agree.


People even have the right to say things that other people might find offensive, but no one should engage in hateful speech or do harm to vulnerable groups, like children, migrants, or minority groups in society.


When it comes to the use of technology, we must guard against it being used to harm others. Educators and journalists, for example, use information for public purposes. We share an ambition to eradicate ignorance, poverty, and fear through the empowering strength of truth-telling. That is part of the social role we play.


This is not a marginal benefit for humanity but gets to the heart of what we mean by being human. Nowhere is this more evident than in how we tell the stories of human suffering, particularly in times of war.


In coverage of conflicts in both Ukraine and Palestine, for example, we see how advanced technology is used to distort information, spread malicious lies, and to minimize the brutal realities of what is happening and what has happened.


Even worse, AI is being used by the military, as we see in Palestine, to target communities and to commit what many people believe are appalling war crimes.


People in journalism try to counter this propaganda and barbarism by reporting on the spot, but they pay a heavy price. More than 50 journalists, many of them targeted, have already been killed in the Gaza conflict, the heaviest media death toll in any conflict since records began more than 30 years ago.


Journalism at its best can focus on the terrible consequences of war and provide context to help us better understand the roots of violence. Media can tell the stories, often poignant and moving, of human suffering respectfully and sensitively by sharing the reality of loss as an ordeal for us all to bear.


So how do we ensure that technology works for all of us and for peaceful progress?


Many people, even in the technology industry, say we need new rules and regulations to control how we use AI, but governments are divided about how to do that.


Some, like the governments in the United States and many of those who attended the first AI Safety Summit, favour a light-touch approach, keeping legal interference to a minimum and relying on the industry itself to exercise more self-control.


Others are already imposing bans on the use of the technology, but this may not deal properly with the potential problems; and if such bans are merely an excuse to double down on existing limitations on fundamental human rights, they will limit free speech for everyone.


Some governments – like those in the European Union – are trying to classify the different uses of AI by the degree of risk we face. This may be a better approach, but it will only work if it is based on settled principles, such as those indicated above.


The charter that has been agreed for journalism, for example, outlines ten points for the introduction and use of AI in news media. It calls for AI to be guided by ethics, transparency, and the centrality of human oversight.


There should be enforcement of rules demanding disclosure about how systems are trained, how they operate, and how they are monitored. Certainly, AI is an important technology.


It is as important, for example, as the technologies that gave us cars and airplanes. And just as we introduced important safeguards in the past – seat belts and safety devices in cars, black box flight recorders in planes, industrial laws to reduce accidents at work – we will need an equivalent body of rules to protect us from the dangers of robotic automation in our lives.


Above all, we need to continue the debate – on how we push back against the relentless pursuit of profit and the greed, self-interest, and self-regard of techno-optimists.


We also need to recognise that the debate about AI and its use is currently under the control of a charmed circle of elites from the political and corporate centres of power in the richest countries of the world.


However, this is not a debate for only the global north. We need to make the debate inclusive, diverse, and relevant to all the countries of the world.


By building bridges with others, working together around principles that we share, and putting people at the heart of everything we do we can ensure that artificial intelligence is properly recognised as a public good that can be made safe for all of humanity.


Excerpts from the keynote speech by the writer at the World Innovation Summit for Education (WISE11) in Doha, Qatar, recently.

