
Disruptive impact of Artificial Intelligence in defence

Stefano Virgilli


Advances in artificial intelligence (AI), deep learning, and robotics are enabling new military capabilities that will have a disruptive impact on military strategies. The effects of these capabilities will be felt across the spectrum of military requirements, from intelligence, surveillance, and reconnaissance to offence/defence balances, and even to nuclear weapons systems themselves.
Artificial Intelligence is becoming a critical part of modern warfare. Compared with conventional systems, military systems equipped with AI are capable of handling larger volumes of data more efficiently.
An analysis by Markets and Markets indicates that the market for artificial intelligence in the military is expected to reach $18.82 billion by 2025, growing at a CAGR of 14.75 per cent from 2017 to 2025. The US and China have emerged as leaders in the field of AI. By 2020, China aims to have made major strides in AI technologies such as big data and autonomous intelligent systems, with an estimated value of more than 150 billion yuan.
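As a rough sanity check, the quoted projection follows the standard compound annual growth rate formula, and the implied 2017 base value can be back-calculated from the figures above. The short Python sketch below illustrates this; the 2017 figure it prints is an inference from the quoted numbers, not a value stated in the Markets and Markets report.

# A minimal sketch of the compound annual growth rate (CAGR) relationship:
#   future_value = present_value * (1 + cagr) ** years
# The 2025 figure and the 14.75 per cent CAGR come from the article; the
# implied 2017 base value is a back-calculation, not a number in the source.

cagr = 0.1475           # 14.75 per cent annual growth
value_2025 = 18.82      # projected market size, billions of US dollars
years = 2025 - 2017     # eight-year forecast window

implied_2017_base = value_2025 / (1 + cagr) ** years
print(f"Implied 2017 market size: ${implied_2017_base:.2f} billion")
# Prints roughly $6.26 billion under these assumptions.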
In 2017, Chinese President Xi Jinping explicitly called for the acceleration of military AI research to better prepare China for future warfare against a major adversary such as the US. China’s approach to AI has been heavily influenced by its assessment of US military initiatives, in particular the Pentagon’s Third Offset Strategy.
Military institutions, policymakers, and intelligence agencies are feeling the competitive pressure to expand the use of military (and broader national security) applications of artificial intelligence. Both the US and Chinese governments have released long-term strategies to lead the world in the development and employment of artificial intelligence.
From a military perspective, there are several reasons for the interest in autonomous weapons. The first is speed: warfare is becoming faster paced, and computers are generally better than humans at making rapid decisions, especially when large volumes of information must be processed.
The second reason why companies and armed forces are interested in autonomy and artificial intelligence is that it enables new military capabilities, most notably drone swarms.
There is no way a human could control every single drone in such a swarm. The drones need to be able to communicate with and react to one another, so they need sensors and decision-making algorithms, which again brings us back to AI and autonomy.
Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but they do not include cruise missiles or remotely piloted drones, for which humans make all targeting decisions.
Artificial Intelligence technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high. Autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.
Some technology experts are nervous about the accelerating drive toward weapons systems that use AI to make key attack decisions. In Geneva, representatives of 120 United Nations member countries began discussing a potential ban on lethal AI-infused weaponry at a forum organised by a United Nations group known as the Convention on Certain Conventional Weapons (CCW) Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS).
Google has released a set of principles to guide its work in artificial intelligence, making good on a promise it made last year following controversy over its involvement in a US Department of Defense drone project.
The document, titled “Artificial Intelligence at Google: our principles,” does not directly reference this work, but it makes clear that the company will not develop AI for use in weaponry. It also outlines a number of broad guidelines for AI, touching on issues like bias, privacy, and human oversight.
Delegates at the United Nations have debated whether to consider banning killer robots, more formally known as lethal autonomous weapons systems (LAWS).
Those who would actually be responsible for designing LAWS, the AI and robotics researchers and developers, have spent years calling on the UN to negotiate a treaty banning them.
More specifically, nearly 4,000 AI and robotics researchers called for a ban on LAWS in 2015; in 2017, 137 CEOs of AI companies asked the UN to ban LAWS; and in 2018, 240 AI-related organisations and nearly 3,100 individuals took that call a step further and pledged not to be involved in LAWS development.
I believe that AI has great potential to benefit humanity in many ways and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.


