Disinformation and the digital wild west

02-12-2019

The modern digital age offers huge potential – the sharing of new technologies, the promotion of economic growth and the ability to communicate instantly. But whilst products of this age, such as social media, may help to advance today’s society, there is rapidly growing concern about the accuracy and reliability of information propagated online.

Currently, there are very limited regulatory measures in place, both in the UK and worldwide, that address the prevalence and threat of online disinformation (better known as ‘fake news’). In an attempt to tackle harmful online content in the UK, including disinformation, the Department for Digital, Culture, Media & Sport and the Home Office published their Online Harms White Paper[1] (“White Paper”) earlier this year, setting out proposals for new legislative measures for 2020. Please see our earlier article for an overview of the White Paper’s key proposals.

Whilst the implementation of these measures to prevent certain online harms will undoubtedly be a welcome step in the right direction, the spread of disinformation is a unique threat – one with the potential to target any issue, whether for personal, commercial or political purposes, and the ability to reach any audience on a global scale. It is therefore vital to consider the threat of disinformation, and any future legislative measures for the UK, with this in mind.

In this article, we take a closer look at the threat of disinformation, how it is being spread in light of new technologies, and the current response from both online platforms and the UK Government in an attempt to tame the digital wild west.

What is disinformation?

Disinformation, or ‘fake news’, is false or manipulated information created or shared with the intention to mislead or deceive audiences, most commonly to cause harm or to gain personal, commercial or political advantage. The important distinction between disinformation and misinformation is that misinformation is spread in error and without malice, rather than with any intention to mislead or deceive.

Using disinformation to achieve a particular objective is nothing new. In 32 BC, Octavian (who would later become known as the youthful, dynamic ‘Augustus’) notably waged a successful campaign of disinformation and propaganda against Mark Antony in the final war of the Roman Republic. In the 20th century, the spread of disinformation and propaganda was a central strategy during both WWI and WWII. But whilst disinformation has almost always been present in society, the rise of technology and the ubiquity of social media have seen its methods, speed and overall influence completely evolve. By 2017, ‘fake news’ had been named word of the year[2].

How is disinformation spread online?

There are currently 4.3 billion people in the world with access to the internet[3], and disinformation is spread in a variety of forms, from private instant messaging apps and online blogs through to ‘news’ websites and, most commonly, social media. The reach of social media platforms in particular is enormous – between Facebook and Twitter alone there are currently close to 2.5 billion users worldwide.

The vast majority of online platforms are run as businesses, and the bottom line for most businesses is profit. In the case of social media companies, their business models rely heavily on revenue generated from the sale of advertisements[4]. In essence, content that increases profit will always be a key priority, which is of course a concern when the content advertised is either insufficiently regulated or simply inaccurate.

A key feature behind the popularity of social media platforms is personalisation, with content tailored to a user’s interests. To achieve this level of personalisation, social media platforms use algorithms (sequences of instructions) to personalise news and other content for their users, selecting content based on factors such as a user’s past online activity, social connections and location[5]. Though this content may not necessarily comprise ‘disinformation’, a personalised newsfeed generated through algorithms can have a polarising effect on a user’s interpretation and understanding of an issue, with an ability to promote highly partisan news whilst stifling reasoned debate, objectivity and the common ground.
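
By way of illustration only, the toy Python snippet below shows how a feed-ranking algorithm might combine the factors described above (past activity, social connections and location) into a single score. Every name, weight and data point here is hypothetical; real platforms’ ranking systems are proprietary and vastly more complex.

from dataclasses import dataclass

@dataclass
class Post:
    topic: str
    author_is_connection: bool  # is the poster in the user's network?
    distance_km: float          # distance between poster and user

def personalisation_score(post: Post, topic_affinity: dict) -> float:
    """Score one post for one user; higher scores surface earlier in the feed."""
    score = topic_affinity.get(post.topic, 0.0)  # past online activity
    if post.author_is_connection:                # social connections
        score += 0.5
    if post.distance_km < 50:                    # location
        score += 0.2
    return score

# A user whose history is dominated by partisan content is shown more of it.
affinity = {"partisan politics": 0.9, "local news": 0.1}
posts = [Post("partisan politics", False, 400.0), Post("local news", True, 10.0)]
feed = sorted(posts, key=lambda p: personalisation_score(p, affinity), reverse=True)

Even in this toy version the feedback loop is visible: the topics a user has engaged with most are precisely the ones the ranking pushes back to the top of the feed.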

In addition to human activity, one of the most dangerous threats driving the spread of disinformation on social media is artificial intelligence, in particular bots. Through algorithms, bots simulate human behaviour, executing tasks such as ‘liking’ and ‘sharing’ content anonymously and repetitively to propagate disinformation across the online community. Disinformation spread through bots can serve some of the most sinister strategies: inciting hate, tearing down reputations and threatening political campaigns. The 2016 US presidential election is one example of bot interference on social media. According to data provided by Twitter to the House Intelligence Committee Minority, between just 1 September and 15 November 2016, more than 36,000 automated Russian-linked bot accounts posted 1.4 million US election-related tweets. These tweets received approximately 288 million views[6]. Though the true influence of this activity is hard to quantify accurately, what is clear is that this type of intervention on social media is a major threat to both society and the very fabric of democracy.
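
The repetitive, high-volume behaviour described above is also what makes bot activity detectable, at least in principle. The Python fragment below is a deliberately naive sketch of one such heuristic – flagging accounts whose posting rate or repetition exceeds plausible human norms. The function name and thresholds are hypothetical; real bot detection combines many more signals, such as account age, network structure and content analysis.

def looks_automated(timestamps, texts,
                    max_posts_per_hour=30.0, max_duplicate_ratio=0.5):
    """Flag an account whose posting rate or repetition exceeds human norms.

    timestamps: posting times in epoch seconds; texts: the posts' contents.
    """
    if len(timestamps) < 2:
        return False
    hours = max((max(timestamps) - min(timestamps)) / 3600.0, 1e-9)
    posts_per_hour = len(timestamps) / hours
    duplicate_ratio = 1.0 - len(set(texts)) / len(texts)  # share of repeated posts
    return posts_per_hour > max_posts_per_hour or duplicate_ratio > max_duplicate_ratio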

As machine learning continues to advance, more sophisticated tools are evolving which look set to take the threat of disinformation into a new chapter. The creation of highly realistic and difficult-to-detect digital manipulations of audio and video, known as ‘deep fakes’, is one such tool. Deep fakes, through the use of face replacement, face re-enactment and/or speech synthesis, depict real people doing or saying things they never actually said or did. Although not currently as prevalent online as bots, deep fakes are highly complex and more difficult to identify, and will in turn become even harder to crack down on.

How are online platforms responding?

Unsurprisingly, social media companies are at the forefront of criticism. In response, tech giants such as Facebook and Twitter have implemented certain self-regulatory mechanisms that seek to block and report harmful content. For example, Facebook is enlisting fact-checkers to identify stories on its platform that may constitute fake news, whilst taking measures to cut off advertising revenue from fake news websites. Twitter, meanwhile, has taken further steps to crack down on bots, and most recently confirmed last month that all political advertising will be banned from its platform worldwide[7].

Though these measures may be a positive step in the right direction, there is an argument that not enough is being done and that the self-regulated regime is not working. The platforms’ response is that they are merely ‘platforms’ hosting the content, and there remains a reluctance to accept any weightier label, such as ‘publisher’. Whichever side of the fence one sits on in this debate, what is clear is that we were not (at least originally) prepared for the tidal wave of disinformation, and its consequences, that has swept across the online community.

What is the UK doing about it? 

Disinformation is undoubtedly a global threat, and the fight against it must be a coordinated worldwide effort involving governments, online platforms and individuals. In the UK, the Government is currently considering legislation for 2020 that focuses on harmful and illegal content on a broader scale, from terrorism and modern slavery to disinformation and cyberbullying. The proposed regulatory framework will seek to place a new statutory duty of care on online platforms and, importantly, introduce an independent regulator to oversee activity and enforcement. But will it be adequate and sufficiently sophisticated to address the unique threat of disinformation? We will find out in 2020.



[1] “Online Harms White Paper”, Department for Digital, Culture, Media & Sport, 26 June 2019, available at: https://www.gov.uk/government/consultations/online-harms-white-paper/online-harms-white-paper.

[2] “'Cuffing season' and 'Corbynmania' are named Words of the Year by Collins Dictionary”, The Telegraph, 2 November 2017, available at: https://www.telegraph.co.uk/news/2017/11/02/cuffing-season-corbynmania-named-words-year-collins-dictionary/.

[3] “Global digital population as of July 2019 (in millions)”, Statista, July 2019, available at: https://www.statista.com/statistics/617136/digital-population-worldwide/.

[4] “Disinformation and ‘fake news’: Final Report”, House of Commons Digital, Culture, Media and Sport Committee, 18 February 2019, available at: https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/1791/1791.pdf.

[5] “Disinformation and ‘fake news’”, Digital, Culture, Media and Sport Committee, available at: https://www.parliament.uk/business/committees/committees-a-z/commons-select/digital-culture-media-and-sport-committee/inquiries/parliament-2017/fake-news-17-19/.

[6] “Exposing Russia’s Effort to Sow Discord Online: The Internet Research Agency and Advertisements”, U.S. House of Representatives, Permanent Select Committee on Intelligence, available at: https://intelligence.house.gov/social-media-content/.

[7] “Twitter to ban all political advertising”, BBC, 31 October 2019, available at: https://www.bbc.co.uk/news/world-us-canada-50243306.
