At least since the 2016 U.S. election, the issue of “fake news” has been frequently debated in public and in the news. The strategic, targeted distribution of misinformation to undermine political opponents peaked in the conspiracy theory known as “Pizzagate”.
Originating from leaked emails, the story suggested that former presidential candidate Hillary Clinton, along with other high-level Democrats, ran a child trafficking ring out of a pizzeria in Washington[1]. Despite the absurdity of these claims and the lack of any credible evidence, the owner received multiple death threats and the restaurant was attacked by a man with an assault rifle[2]. Luckily, nobody was injured.
The hunger for likes
Though admittedly an extreme case, this is only one of many fake news stories shared on social media and echoed among like-minded users. Even though multiple psychological studies emphasize the human tendency to believe information that supports prior beliefs, it remains astonishing that even the most outlandish fakes find believers and are frequently shared. This phenomenon is fueled by many users’ hunger for likes and reach, which more extreme content seems to deliver.
These dynamics have given prominence to the recent focus on “fake news”, but looking at the latest technological developments, the future might hold even direr prospects.
Modern computer software like Photoshop© has allowed realistic manipulation of images for many years. While some faked photos have famously traveled the internet, I would argue that people have developed a healthy, critical attitude towards digital images, knowing they can no longer simply trust their own eyes. Increasing processing power and novel algorithms are beginning to enable trained users to alter not only photos but also voice recordings and video material. While not yet perfect, given enough training data these technologies can rearrange and even create new audio and video material that is hard to distinguish from the original.
Thinking a few years ahead, it is not hard to imagine that these methods will keep improving until fakes are ultimately indistinguishable from real footage.
This will allow the creation of fake content about individuals, using their own voice and presented in a realistic video, without their knowledge. While this will certainly trigger a cat-and-mouse game between people creating fake material and others trying to expose it through digital forensics, it will always be easier to create a fake than to detect one. One might hope that people develop a similar skepticism towards videos and voice recordings as most now have towards images. In any case, the line between what is real and what is fake will inevitably become blurrier as technology advances.
Type 2 error
Currently, the discussion about fake news focuses on the spread of what is literally fake news: information that is not true, like Pizzagate. Borrowing from the language and ideas of statistics, people believing the Pizzagate conspiracy make what is called a Type 1 error: they believe a story to be true even though there is nothing to it.
I, however, would like to draw attention to the second type of error, which has so far received less attention. A Type 2 error occurs when someone does not believe a story even though it is actually true; in other words, declaring something fake news even though it is real. A few recent cases highlight this problem.
For instance, in 2015 a real video surfaced of the former Greek Minister of Finance Yanis Varoufakis showing Germany the middle finger. In the name of satire, however, a German comedian falsely claimed to have created the gesture: he presented a video of the Minister merely raising a clenched fist, declared it to be the original, and said his team had added the middle finger digitally[3]. This “Varoufake” controversy circulated in the media until an official clarification confirmed that the video with the raised middle finger was real footage. Resolving the confusion took several days, a long time given the current speed of information on social media.
A more recent example involves Prince Andrew and a sex scandal[4]. Confronted with the accusation of an inappropriate relationship with Virginia Giuffre, who was underage at the time, he claimed not to remember ever meeting her. Responding to a photo showing him with his arm around her, he argued that there was no way to prove the image’s authenticity and suggested it could have been faked.
Fakes affecting social media and public opinion
While fakes in famous cases might ultimately be identified by experts or the courts, social media and public opinion will hardly remain unaffected. The mere possibility of faked image, audio, or video evidence might undermine the credibility of real incriminating material and help perpetrators spread doubt about the authenticity of evidence against them.
In 2012, a shaky video surfaced in which Republican candidate Mitt Romney declared 47% of the nation government-dependent, adding that his job was not to “worry about these people”. In 2016, a hot microphone caught Donald Trump bragging about sexual assault before stepping off a bus. In the latter case, Trump suggested on numerous occasions that the audio might be fake,[5] creating doubt at least among some voters, and ultimately won the election.
An increase in such “Type 2 fake news” issues might be even more problematic than the currently discussed Type 1 problems.
If, due to technological progress, the public can no longer trust their senses to separate truth from fake, the democratic process is certainly in danger. And if at some point even experts struggle to clearly establish the authenticity of evidence, the issue might spread into our courts and the legal system.
When I teach my students about the different error types in statistics, the lecture generally concludes with the lesson that the probabilities of making the two errors are connected: being more skeptical reduces Type 1 errors but increases the probability of Type 2 errors.
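This trade-off can be sketched in a small simulation. Assuming, purely for illustration, that fake stories tend to come with weaker evidence than true ones, a reader who only believes stories whose evidence clears some threshold faces exactly this link: raising the threshold (more skepticism) lowers the Type 1 rate but raises the Type 2 rate. All distributions and numbers below are hypothetical.

```python
import random

random.seed(42)

# Hypothetical evidence scores: fake stories cluster lower, true stories higher.
fake_scores = [random.gauss(0.3, 0.15) for _ in range(10_000)]
true_scores = [random.gauss(0.7, 0.15) for _ in range(10_000)]

def error_rates(threshold):
    """A reader believes a story iff its evidence score reaches the threshold.

    Type 1 error: believing a fake story.
    Type 2 error: rejecting a true story.
    """
    type1 = sum(s >= threshold for s in fake_scores) / len(fake_scores)
    type2 = sum(s < threshold for s in true_scores) / len(true_scores)
    return type1, type2

for t in (0.3, 0.5, 0.7):
    t1, t2 = error_rates(t)
    print(f"threshold={t:.1f}  Type 1: {t1:.2%}  Type 2: {t2:.2%}")
```

Running this shows the two rates moving in opposite directions as the threshold rises: a more skeptical reader believes fewer fakes but also dismisses more true stories.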
Despite this link, it is ex ante not clear which error causes more harm, and we should be careful that our current emphasis on “fake news” as a Type 1 problem does not inadvertently create so much skepticism that it leaves us with many more Type 2 errors. “Pizzagate” is an example of the former; climate change denial of the latter.
Last year, the Seminar on Fake News – Digital Transformation Platform took place at Copenhagen Business School. The organizers highlighted: “The problem of Fake News and other problematic online content is one of our times’ most pressing challenges — it is widely believed to have played a major role in the election of Trump and the current situation with Brexit.”
Jan Michael Bauer is Associate Professor at Copenhagen Business School and part of the Consumer & Behavioural Insights Group at CBS Sustainability. His research interests are in the fields of sustainability, consumer behavior and decision-making.