Words by Miranda Dunne

‘My friends, I wish to rise above this divide and endorse my worthy opponent, the Right Honourable Jeremy Corbyn, to be Prime Minister of our United Kingdom,’ enthused Boris Johnson ahead of the 2019 general election. ‘Only he, not I, can make Britain great again.’ He went on: ‘Alas, why should you believe me? Much like Odysseus in his encounter with the cyclops Polyphemus, I, too, am nobody. I am a fake. A “deepfake”, to be precise.’

This video, entirely fake yet highly convincing, was produced as part of a project between AI think tank Future Advocacy and UK-based artist Bill Posters to raise public awareness of the threat deepfakes pose to democracy and trust. In a companion video, a digitally manipulated Jeremy Corbyn urged his supporters to put ‘people before privilege and back Boris Johnson to continue as our Prime Minister.’

Other high-profile demonstrations of deepfake technology include Channel 4’s recent Alternative Christmas Message from the Queen, and a fake Obama in 2018 urging viewers to ‘Stay woke, bitches.’ But such funny or otherwise harmless projects are vastly outnumbered by the deepfakes circulating online – at least 14,000 at the last count. Little wonder, then, that experts have ranked deepfakes as ‘the most serious AI crime threat’.

‘Deepfake’ typically refers to a video in which one person’s facial likeness has been synthesised over another’s, usually via machine learning. However, it is often used as an umbrella term for a range of manipulated media, from ‘cheapfake’ videos to manipulated audio, the latter of which was used to scam a CEO out of $243,000 in 2019. OpenAI, the Elon Musk-backed company, even built a text-generation system trained on eight million web pages. It was dubbed ‘deepfakes for text’ over fears it could be used to impersonate people and fabricate falsified news at overwhelming scale, and its creators initially deemed it too dangerous to release in full.
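For the curious, the classic face-swap technique behind the original deepfakes trains a single encoder on face images of both people, alongside one decoder per identity; the swap then amounts to decoding one person’s frames with the other person’s decoder. The sketch below illustrates that idea in Python using PyTorch – the layer sizes, class name and toy usage are illustrative assumptions for demonstration, not any real tool’s code.

```python
# A minimal, illustrative sketch (assumed architecture, not any real tool's
# code) of the shared-encoder autoencoder idea behind classic face swaps.
import torch
import torch.nn as nn

class FaceSwapAutoencoder(nn.Module):
    """One encoder learns features common to both faces (pose, expression,
    lighting); each decoder learns to reconstruct one specific identity."""
    def __init__(self):
        super().__init__()
        # Shared encoder: compresses any aligned 64x64 face crop to a latent code.
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64 * 3, 512),
            nn.ReLU(),
        )
        # One decoder per identity, each trained only on that person's faces.
        self.decoder_a = nn.Sequential(nn.Linear(512, 64 * 64 * 3), nn.Sigmoid())
        self.decoder_b = nn.Sequential(nn.Linear(512, 64 * 64 * 3), nn.Sigmoid())

    def forward(self, face, identity):
        latent = self.encoder(face)
        decoder = self.decoder_a if identity == "a" else self.decoder_b
        return decoder(latent).view(-1, 3, 64, 64)

# The swap itself: encode a frame of person A, then decode it with B's
# decoder, yielding B's likeness wearing A's pose and expression.
model = FaceSwapAutoencoder()
frame_of_a = torch.rand(1, 3, 64, 64)  # stand-in for an aligned face crop
fake_b = model(frame_of_a, identity="b")
print(fake_b.shape)  # torch.Size([1, 3, 64, 64])
```

Real tools add convolutional layers, face alignment and blending back into the original frame, but the shared-encoder, two-decoder structure is the core trick.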

The term deepfake itself was coined in 2017 by a Reddit user who created a forum where users shared fake pornographic videos featuring non-consenting individuals. The techniques behind it, however, had existed for years prior. Notably, 1994’s ‘Forrest Gump’ manipulated archival footage of JFK to create his interaction with Gump, and when Paul Walker died during the production of ‘Furious 7’, the filmmakers shot Walker’s brother in his remaining scenes and digitally superimposed the late actor’s face.

The difference now is that, thanks to machine learning, the technology is increasingly accessible to amateurs while legislation lags behind. This raises myriad questions about trust in video as a form of evidence.

The most obvious threat posed by deepfakes is as an extension of ‘fake news’: corrupt actors can create deepfakes to manipulate public opinion, with serious implications for democracy. But the threat lies not only in the deepfakes themselves. Their very existence gives corrupt actors a route to dismiss legitimate footage as fake, a mutation of Trump’s ‘fake news’ retort. This is the ‘liar’s dividend’: what happens if a politician is caught on camera in an act of corruption and the video documenting it is dismissed as a deepfake?

This is not hypothetical. In Gabon in 2018, cries of ‘deepfake’ spread amidst public speculation about the whereabouts of President Ali Bongo, who had not been seen in public for several months. When the government eventually released a video of Bongo giving the traditional New Year’s address, an article claiming the footage was a deepfake began to circulate; days later, the military launched an attempted coup, citing the President’s strange appearance in the video. Forensic analysis found no evidence of manipulation. Whilst the political climate of a country cannot be explained away by deepfakes, this remains a stark example of the legitimacy crisis they pose.

The threat posed by deepfakes is also a highly gendered one, as an extension of image-based abuse: 96% of deepfakes are non-consensual pornographic videos, and the targets are almost exclusively women. Researchers such as Dr Aislinn O’Connell have called on the UK government to introduce specific legislation classing deepfaked image-based abuse as a crime.

In April 2018, Indian investigative journalist Rana Ayyub was targeted with a deepfaked sex video. The attack came after Ayyub spoke out about the rape and murder of an eight-year-old girl, a case in which local members of the Bharatiya Janata Party (BJP) had shown support for the accused. A BJP source sent the deepfake to her, and it went on to be shared more than 40,000 times. She was doxxed and barraged with harassment and abuse, with men frequently threatening her with rape and death. Speaking of the incident, she said:

‘It was devastating. I just couldn’t show my face. You can call yourself a journalist, you can call yourself a feminist but in that moment, I just couldn’t see through the humiliation.’

‘I used to be very opinionated, now I’m much more cautious about what I post online. I’ve self-censored quite a bit out of necessity.’

The abuse directed at Ayyub shows that deepfakes do not exist in a vacuum; they are best understood as a symptom of pre-existing societal sicknesses, and sexual abuse via deepfakes draws clear parallels with attitudes towards physical sexual assault. In Australia in 2012, for example, a 17-year-old had her likeness stolen by perpetrators who used it to generate pornographic imagery, prompting her to begin a successful campaign against image-based abuse in Australia. When she contacted some of her abusers to ask why they were targeting her, they responded: ‘What do you expect when you post all these images of yourself?’ and ‘We’re just men being men, what did you expect?’

Sensity, an Amsterdam-based company committed to countering deepfakes, uses its detection system to provide risk ratings for public figures who may be targeted by video manipulation. In 2019, it identified 14,678 deepfake videos, and it has given US President Joe Biden a risk rating of 100.

Despite these efforts, malicious deepfakers may still hold a huge advantage: companies such as Sensity need vast quantities of deepfake videos to train their detection systems, whereas a corrupt actor needs to generate and release only one convincing video to cause serious damage, as we saw with Rana Ayyub. To help close this gap, members of the ‘Silicon Six’, including Facebook and Google, collaborated in 2019 on the ‘Deepfake Detection Challenge’, releasing a dataset of deepfakes made with paid actors so that researchers can use it as training data for their models. Whatever qualms many rightly have with big tech, it is undeniable that the private sector has a huge role to play here; it is difficult to see how a national government alone could tackle the problem.

Moreover, government-led solutions should be approached with enormous care, because the issue of deepfakes is also one of free speech. Yes, deepfake technology has been and will continue to be used for corrupt ends, as with any new technology. But it has also been used for political satire of powerful figures, and a flat-out ban on public use of the technology would therefore be an attack on free speech.

Deepfakes need to be treated with the urgency they warrant, but without alarmism – as with any new technology, malicious uses are par for the course. I would echo Philip Howard, professor of internet studies at Oxford, who, speaking at CogX 2019, argued: ‘It’s not just cybercriminals that do the most politically poisonous stuff. It’s our existing political structures that produce the poisonous stuff.’ We saw earlier how deepfake pornography parallels attitudes towards physical sexual abuse, and how, in Gabon, deepfakes were invoked in, but not solely responsible for, the attempted coup.

Nina Schick, author of ‘Deepfakes and the Infocalypse’, argues that people are, overall, still conditioned to believe that video is incorruptible. For better or for worse, this will change.
