Thursday, March 28, 2024 | Ramadan 17, 1445 H
EDITOR IN CHIEF- ABDULLAH BIN SALIM AL SHUEILI

Artificial intelligence prone to misuse by hackers


FRANKFURT: Rapid advances in artificial intelligence are raising risks that malicious users will soon exploit the technology to mount automated hacking attacks, cause driverless car crashes or turn commercial drones into targeted weapons, a new report warns.


The study, published on Wednesday by 25 technical and public policy researchers from Cambridge, Oxford and Yale universities along with privacy and military experts, sounded the alarm for the potential misuse of AI by rogue states, criminals and lone-wolf attackers.


The researchers said the malicious use of AI poses imminent threats to digital, physical and political security by allowing for large-scale, finely targeted, highly efficient attacks.


The study focuses on plausible developments within five years.


“We all agree there are a lot of positive applications of AI,” Miles Brundage, a research fellow at Oxford’s Future of Humanity Institute, said. “There was a gap in the literature around the issue of malicious use.”


Artificial intelligence, or AI, involves using computers to perform tasks normally requiring human intelligence, such as taking decisions or recognising text, speech or visual images.


It is seen as a powerful force for unlocking all manner of technical possibilities, but it has also become the focus of strident debate over whether the massive automation it enables could result in widespread unemployment and other social dislocations.


The 98-page paper cautions that the cost of attacks may be lowered by the use of AI to complete tasks that would otherwise require human labour and expertise.


New attacks may arise that would be impractical for humans alone to develop or which exploit the vulnerabilities of AI systems themselves.


It reviews a growing body of academic research about the security risks posed by AI and calls on governments and policy and technical experts to collaborate and defuse these dangers.


The researchers detail AI's power to generate synthetic images, text and audio that impersonate others online in order to sway public opinion, noting the risk that authoritarian regimes could deploy such technology.


The report makes a series of recommendations including regulating AI as a dual-use military/commercial technology.


It also asks questions about whether academics and others should rein in what they publish or disclose about new developments in AI until other experts in the field have a chance to study and react to potential dangers they might pose.


“We ultimately ended up with a lot more questions than answers,” Brundage said.


The common practice, for example, of “phishing” — sending e-mails seeded with malware or designed to finagle valuable personal data — could become far more dangerous, the report detailed.


Currently, attempts at phishing are either generic but transparent — such as scammers asking for bank details to deposit an unexpected windfall — or personalised but labour intensive — gleaning personal data to gain someone's confidence, known as "spear phishing".

In the political sphere, unscrupulous or autocratic leaders can already use advanced technology to sift through mountains of data collected from omnipresent surveillance networks to spy on their own people.


“Dictators could more quickly identify people who might be planning to subvert a regime, locate them, and put them in prison before they act,” the report said.

Likewise, targeted propaganda along with cheap, highly believable fake videos have become powerful tools for manipulating public opinion “on previously unimaginable scales”.

An indictment handed down by US special counsel Robert Mueller last week detailed a vast operation to sow social division in the United States and influence the 2016 presidential election, in which so-called “troll farms” manipulated thousands of social network bots, especially on Facebook and Twitter.


Another danger zone on the horizon is the proliferation of drones and robots that could be repurposed to crash autonomous vehicles, deliver missiles, or threaten critical infrastructure to gain ransom.


— Agencies

