Wednesday, April 24, 2024 | Shawwal 14, 1445 H
EDITOR IN CHIEF: ABDULLAH BIN SALIM AL SHUEILI

The key to responsible AI development

Generative AI will change the world, whether we like it or not. At this pivotal moment in the technology’s development, public and private stakeholders must do everything in their power to ensure that the process leads to fair, equitable, and sustainable outcomes.

In recent months, the development of artificial intelligence (AI) has accelerated considerably, with generative AI systems such as ChatGPT and Midjourney rapidly transforming a wide range of professional activities and creative processes. The window of opportunity for guiding the development of this powerful technology in ways that minimise the risks and maximise the benefits is closing fast.


AI-based capabilities exist along a continuum, with generative AI systems such as GPT-4 (the latest version of ChatGPT) falling within the most advanced category. Given that such systems hold the greatest promise and can lead to the most treacherous pitfalls, they merit particularly close scrutiny by public and private stakeholders.


Virtually all technological advances have had both positive and negative effects on society. On one hand, they have bolstered economic productivity and income growth, expanded access to information and communication technologies, extended human lifespans, and improved overall well-being. On the other hand, they have led to worker displacement, wage stagnation, greater inequality, and increasing concentration of resources among individuals and corporations.


AI is no different. Generative AI systems open up abundant opportunities in areas such as product design, content creation, drug discovery and health care, personalised education, and energy optimisation. At the same time, they may prove highly disruptive, and even harmful, to our economies and societies.


The risks already posed by advanced AI, and those that are reasonably foreseeable, are considerable. Beyond widespread reorientation of labour markets, large-language-model systems can increase the spread of disinformation and perpetuate harmful biases. Generative AI also threatens to exacerbate economic inequality. Such systems may even pose existential risks to humankind.


For some, this is a reason to tap the brakes on AI research. Last month, more than 1,000 AI technologists, from Elon Musk to Steve Wozniak, signed an open letter recommending that AI labs “immediately pause” the training of systems more powerful than GPT-4 for at least six months. During this pause, they argue, a set of shared safety protocols – “rigorously audited and overseen by independent outside experts” – should be devised and implemented.


The open letter, and the heated debate it has triggered, underscores the urgent need for stakeholders to engage in a wide-ranging good-faith process aimed at aligning on robust shared guidelines for developing and deploying advanced AI. Such an effort must account for issues like automation and job displacement, the digital divide, and the concentration of control over technological assets and resources, such as data and computing power. And a top priority must be to work continuously to eliminate systemic biases in AI training, so that systems like ChatGPT do not end up reproducing or even exacerbating them.


Proposals for AI and digital-services governance are already emerging, including in the US and the European Union. Organisations like the World Economic Forum (WEF) are also making contributions. In 2021, the WEF launched the Global Coalition for Digital Safety, which aims to unite stakeholders in tackling harmful content online and facilitate the exchange of best practices for regulating online safety. The WEF subsequently created the Digital Trust Initiative, to ensure that advanced technologies like AI are developed with the public’s best interests in mind.


Now, the WEF is calling for urgent public-private co-operation to address the challenges that have accompanied the emergence of generative AI and to build consensus on the next steps for developing and deploying the technology. To facilitate progress, the WEF, in partnership with AI Commons – a nonprofit organisation supported by AI practitioners, academia, and NGOs focused on the common good – will hold a global summit on generative AI in San Francisco on April 26-28.


Stakeholders will discuss the technology’s impact on business, society, and the planet, and work together to devise ways to mitigate negative externalities and deliver safer, more sustainable, and more equitable outcomes.


Generative AI will change the world, whether we like it or not. At this pivotal moment in the technology’s development, a co-operative approach is essential if we are to ensure that the process is aligned with our shared interests and values.


Klaus Schwab, Founder and Executive Chairman of the World Economic Forum, is co-author of The Great Narrative: For a Better Future (Forum Publishing, 2022).


Cathy Li is Head of AI, Data and Metaverse and a member of the Executive Committee at the World Economic Forum.

