AI synthetic media tech enters perilous phase

A green wireframe model covers an actor's lower face during the creation of a synthetic facial reanimation video, known alternatively as a deepfake. -- Reuters
LONDON: 'Do you want to see yourself acting in a movie or on TV?' said the description for one app on online stores, offering users the chance to create AI-generated synthetic media, also known as deepfakes.

'Do you want to see your best friend, colleague, or boss dancing?' it added. 'Have you ever wondered how would you look if your face swapped with your friend's or a celebrity's?'

The same app was advertised differently on dozens of adult sites: 'Make deepfake clip in a sec,' the ads said. 'Deepfake anyone.'

How such increasingly sophisticated technology is used is one of the dilemmas facing makers of synthetic media software, in which machine learning is used to digitally model faces from images and then swap them into video as seamlessly as possible.

The technology, barely four years old, may be at a pivotal point, according to interviews with companies, researchers, policymakers and campaigners.

It's now advanced enough that general viewers would struggle to distinguish many fake videos from reality, the experts said, and has proliferated to the extent that it's available to almost anyone who has a smartphone, with no specialism needed.

'Once the entry point is so low that it requires no effort at all, and an unsophisticated person can create a very sophisticated deepfake video - that's the inflection point,' said Adam Dodge, an attorney and the founder of online safety company EndTab.

'That's where we start to get into trouble.'

With the tech genie out of the bottle, many online safety campaigners, researchers and software developers say the key is ensuring consent from those being simulated, though this is easier said than done. Some advocate taking a tougher approach when it comes to synthetic video, given the risk of abuse.

Non-consensual deepfake videos accounted for 96% of a sample of more than 14,000 deepfake videos posted online, according to a 2019 report by Sensity, a company that detects and monitors synthetic media. It added that the number of deepfake videos online was roughly doubling every six months.

'The vast, vast majority of harm caused by deepfakes right now is a form of gendered digital violence,' said Henry Ajder, one of the study's authors and the head of policy and partnerships at AI company Metaphysic, adding that his research indicated that millions of women had been targeted worldwide.

AD NETWORK AXES APP

ExoClick, the online advertising network used by the 'Make deepfake clip in a sec' app, told Reuters it was not familiar with this kind of AI face-swapping software. It said it had suspended the app from taking out adverts and would not promote face-swap technology in an irresponsible way.

'This is a product type that is new to us,' said Bryan McDonald, ad compliance chief at ExoClick, which, like other large ad networks, offers clients a dashboard of sites they can customise themselves to decide where to place adverts.

'After a review of the marketing material, we ruled the wording used on the marketing material is not acceptable. We are sure the vast majority of users of such apps use them for entertainment with no bad intentions, but we further acknowledge it could also be used for malicious purposes.'

Apple said it didn't have any specific rules about deepfake apps but that its broader guidelines prohibited apps that include content that was defamatory, discriminatory or likely to humiliate, intimidate or harm anyone.

Policymakers looking to regulate deepfake technology are making patchy progress, faced with new technical and ethical snarls.

Laws specifically aimed at online abuse using deepfake technology have been passed in some jurisdictions, including China, South Korea, and California. -- Reuters