2020 is a big year for democracy and technology. The US Presidential Election is scheduled for 3 November 2020. One of the biggest and certainly most-watched exercises in democracy will be a prime target for malicious actors of all sorts: attempts to misinform the populace about candidates and policies, as well as efforts to subvert the democratic process itself, are a genuine risk.
The 2016 election process was sullied by various issues, not least the leak of almost 20,000 emails from the Democratic National Committee (DNC) by a Russian state-sponsored group known as Fancy Bear. The information that was made public included suggestions that the DNC’s leadership was actively seeking to undermine Bernie Sanders’ presidential campaign and discussing ways to advance the nomination of Hillary Clinton. Donald Trump would use this leak time and again in debates against “crooked” Hillary – and we know how that worked out.
But such leaks are a classic tactic of political espionage: find information; steal information; use information to leverage opponents. The threat in 2020, however, is far more sophisticated.
Social media platforms are among the most hotly contested forums for both legitimate and illegitimate political campaigning. Recently, YouTube, Facebook and Twitter have all decided to clamp down on synthetic and manipulated media, of which ‘deepfakes’ are a significant part. These images, audio recordings and videos are produced with the help of machine learning and artificial intelligence to create convincing simulacra of famous faces and voices saying things that may go against their true beliefs or hamper them in an election.
Deepfakes, it should be noted, are a far more widespread problem than just politics: celebrities have had their faces superimposed onto adult performers in pornographic films, and experts have noted that the attack surface is broad, encompassing the military, law enforcement, insurance and commerce. In politics, though, deepfakes can be used to change people’s views of a particular candidate, alter voters’ opinions on policies and parties, and potentially swing an election in favour of one party or another – they have been called “the next chapter in the fake news era.”
In its Community Guidelines, YouTube already has a ban on media that has been manipulated. However, in a blog post entitled ‘How YouTube supports elections’, posted at the beginning of February, the platform made a more explicit effort to outline the ways in which it is combatting content related to the US election that is deemed to be misleading, such as digitally manipulated videos aimed at spreading disinformation: evidently, deepfakes are one of the major drivers behind this.
On 6 January, Facebook banned AI-manipulated deepfake videos. Misleading video content posted to the platform will be removed if “It has been edited or synthesised [to] mislead someone into thinking that a subject of the video said words that they did not actually say” or if “It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.” To date, no content has been removed as a result of these rules. It should be noted that this also covers Instagram, which is owned by Facebook.
For its part, Twitter has announced a ban on synthetic or manipulated media that is “likely to cause harm.” This move, it claims, is not a response to the 2020 US elections but is based on the results of a survey of its users conducted towards the end of 2019. Despite these claims, it is not a huge logical jump to imagine that the moves by YouTube and Facebook to curb the dissemination of manipulated media on their platforms affected Twitter’s decision-making process.
Lawmakers, too, are waking up to the challenges. In the US, California recently introduced two laws explicitly targeting deepfakes: one of these makes it illegal to distribute deepfakes depicting politicians within two months of an election. The measure has been roundly criticised by free speech campaigners and is unlikely to deter those individuals and groups intent on pushing this sort of content. After all, if a state or group wants to disrupt an election in a foreign country, the fact that its actions are illegal in the target nation is beside the point.
As much as these deepfakes are a potent threat to the democratic process, sometimes threat actors take a more direct route. In early February, the FBI released a Private Industry Notification indicating that a voter registration and information website had been targeted in a potential Distributed Denial of Service (DDoS) attack. The attacks took place over the course of at least a month, with overload attempts coming at intervals of approximately two hours. According to the FBI’s notification, the attacks ultimately proved unsuccessful because the website had put in place measures to combat exactly the type of assault that was attempted. The site has not been named.
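The FBI notification does not describe the site’s defences in any detail, so the sketch below is purely a hypothetical illustration of one common class of countermeasure: per-client rate limiting, which throttles any single source that floods a service with requests. Every name and threshold in it is an assumption chosen for the example, not a description of the targeted website’s actual setup.

```python
# Hypothetical illustration only: a simple token-bucket rate limiter of the
# kind commonly placed in front of public-facing sites to blunt request floods.
# All names and thresholds are assumptions, not details of the targeted site.
import time
from collections import defaultdict


class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec  # tokens replenished per second
        self.burst = burst        # maximum bucket size per client
        self.buckets = defaultdict(lambda: (burst, time.monotonic()))

    def allow(self, client_ip: str) -> bool:
        tokens, last_seen = self.buckets[client_ip]
        now = time.monotonic()
        # Top the bucket back up for the time elapsed since the last request.
        tokens = min(self.burst, tokens + (now - last_seen) * self.rate)
        if tokens >= 1:
            self.buckets[client_ip] = (tokens - 1, now)
            return True   # request served
        self.buckets[client_ip] = (tokens, now)
        return False      # request throttled (e.g. answered with HTTP 429)


limiter = TokenBucket(rate_per_sec=5, burst=20)
if not limiter.allow("203.0.113.7"):
    print("Too many requests - throttled")
```

In practice, throttling of this sort usually sits behind upstream protections such as traffic scrubbing or a content delivery network, the layer most likely to absorb an attack sustained over weeks.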
Occasionally, however, it is simply the fault of the technology. On 3 February, the state of Iowa held its caucus, the first nominating contest in the Democratic primaries to choose the party’s candidate to run against Donald Trump in the 2020 presidential election. Due to issues with the app used to report the results of the caucus, there was a three-day delay in the reporting of all votes cast. Shadow Inc., the app’s developer, apologised in a series of tweets on 4 February, but this was not enough to prevent another state from dropping the use of Shadow’s technology: Nevada had been due to deploy a similar app, also developed by Shadow, in its caucuses in late February.
Elsewhere, a security audit by MIT researchers of the Voatz voting app revealed several bugs that could allow a threat actor to change, stop, or expose the way in which certain individuals had voted. The app was used for online voting during the US midterm elections in 2018 and is scheduled to be used again in the presidential election later this year. Several issues stemmed from the fact that third-party services are used to run crucial aspects of the Voatz app. The researchers were only able to assess the app itself and claim that it is likely there are more issues in the Voatz backend.
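The MIT paper is the authoritative account of the specific flaws, so the sketch below is only a generic, hypothetical illustration of the kind of integrity check such audits look for: a keyed hash (HMAC) over a recorded ballot, so that any later alteration of the record is detectable. It does not reflect Voatz’s actual protocol, and the field names, key handling and values are assumptions made purely for the example.

```python
# Hypothetical illustration, not Voatz's actual protocol: a keyed hash (HMAC)
# over a recorded ballot so that any later alteration of the record is detectable.
import hashlib
import hmac
import json

# Assumption for illustration: in a real system this key would be held by the
# election authority and never stored alongside the records it protects.
SECRET_KEY = b"demo-key-held-by-the-election-authority"


def seal_ballot(ballot: dict) -> dict:
    payload = json.dumps(ballot, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"ballot": ballot, "tag": tag}


def verify_ballot(sealed: dict) -> bool:
    payload = json.dumps(sealed["ballot"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information through timing differences.
    return hmac.compare_digest(expected, sealed["tag"])


record = seal_ballot({"voter_id": "anon-42", "choice": "Candidate A"})
record["ballot"]["choice"] = "Candidate B"  # simulated tampering in storage
print(verify_ballot(record))                # False: the alteration is detected
```

End-to-end verifiable voting schemes go considerably further than this, letting voters confirm that their own ballots were recorded as cast without having to trust the server at all.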
For its part, Voatz acknowledged the vulnerabilities but claimed the researchers were attempting to “disrupt the election process, to sow doubt in the security of our election infrastructure, and to spread fear and confusion.” The company has a history of this sort of defensiveness: in 2018, when a security researcher conducted a dynamic analysis of the Voatz backend, the company reported him to the FBI, claiming that this had been a hacking attempt. Rather than taking such an aggressive approach to the disclosure of bugs in its system, Voatz might be better served by taking on board findings such as these and building a better, more secure application for voters in the US.
Whether through disinformation campaigns, direct attacks on voter-facing websites, or mistakes in the technology that open the door to fraud, threat actors will be presented with numerous avenues to affect not just the presidential election in the US in 2020, but the democratic process in countries around the world in years to come. Legislation, the beginnings of which are starting to trickle through, is only one way to combat this threat. Another is education: voters, and online citizens in general, should be acutely aware that what they are being presented with may not be the truth. The learning curve may be steep, but climbing it is entirely necessary to maintain trust in the democratic process.