This is an outtake from CIFS’ recent members’ report Future Media: Key Trends and Technologies. The article looks at the rise of networked truth, and some of the measures that can be taken to combat the rampant spread of misinformation online.
Truth in a networked future
It is difficult to imagine we will ever return to a world where most of our news, information and entertainment was provided by a few big, trusted media institutions, and where individuals were mostly passive recipients of media. Yet this was the way of things for most of recorded history, from the dawn of writing up until the spread of the internet.
In a span of just a few decades, the global media landscape has changed drastically. Much of our consumption of news, information and entertainment has moved online, and individuals have gone from being passive media consumers to active prosumers. Changes in the digital media landscape are happening at breakneck speed, and a fast-growing share of our media consumption is happening on social platforms. In 2016, 45 percent of Americans aged 50 or older reported getting news from social media sites. One year later, the figure had already risen by 10 percentage points. The 2018 Reuters Digital News Report showed that 40 percent of respondents in 38 countries use Facebook for news, and that 87 percent of respondents find their news online (including on social media). The media we consume on these platforms is determined by our previous habits or our peers’ recommendations, and as a result, our identities, tastes and political beliefs are increasingly formed through online networks. In some ways, universally used social media such as Facebook have become monopoly platforms for social life.
The rise of social platforms for sharing knowledge and information has empowered ordinary citizens and led to an explosive growth in amateur knowledge, and the diminishing role of experts as gatekeepers of knowledge. A 2017 Google report found that 67 percent of millennials use YouTube to find tutorials to help them learn new skills. The same study found that 91 percent of mobile users search for how-to content online when working on a project, and that ‘how-to’ searches on YouTube have been growing 70 percent year over year.
On the flip side, this trend has also led to the undermining of the legitimate gatekeepers of truth: academics, scientists and others who speak from a position of authority and whose information and advice we used to trust almost unconditionally. According to the Weill Cornell Department of Healthcare Policy and Research in the US, more than 75 percent of people trusted their doctor’s advice in 1966; in 2018, only 34 percent did. RAND Corporation describes the diminishing role of facts and analysis in public life in a 2018 report titled Truth Decay. The report lists the increasing relative volume and resulting influence of opinion and personal experience over fact as one of the primary drivers for this development.
While online discussion on social platforms is free and open in theory, it is heavily reliant on the non-transparent workings of the algorithms that curate our experience. As we have seen in the last few years, this has made public dialogue vulnerable to political and scientific misinformation, which can spread like wildfire among like-minded peers. One outcome of the sharing and communication of information becoming frictionless – meaning that the filters or barriers that usually exist between sender and receiver disappear – is that fringe groups like anti-vaxxers, flat-earthers, 5G scaremongers, political conspiracy theorists and troll bots have become staples of social media and the internet, and by extension, of public discourse. In this new environment, it is more difficult for individuals to navigate the maelstrom of information and misinformation. This information overload leads many to pick and choose from the available information and piece together their own individual truths.
A recent report by Oxford University looked into the phenomenon of ‘Computational Propaganda’, a term used to denote “the use of algorithms, automation, and human curation to purposefully distribute misleading information over social media networks.” The research project tracked online misinformation on social media and found that a lot of so-called “junk news and automated accounts” could be traced to programmers and businesses in Germany, Poland and the United States. Further, the study found that no less than 45 percent of Twitter activity in Russia is managed by highly automated accounts, and that a significant portion of the political conversation over Twitter in Poland is produced by a handful of right-wing and nationalist accounts. Ironically, the free and open structure of the internet has led to a centralisation of misinformation designed to shape and control public discourse.
What will the shift from broadcast to networked truth mean in the long term? In 2017, Pew and Elon University conducted a research project where they asked more than 1,000 media experts the following question: “In the next 10 years, will trusted methods emerge to block false narratives and allow the most accurate information to prevail in the overall information ecosystem? Or will the quality and veracity of information online deteriorate due to the spread of unreliable, sometimes even dangerous, socially destabilizing ideas?”
The results showed uncertainty about the future, as respondents were divided almost equally between the positive and negative sides of the question: 51 percent of the respondents believed that the information environment will not improve, while 49 percent believed it will. The 51 percent with a negative outlook believed that efforts to correct the situation will be stifled by bad actors, who will continue to use social media to appeal to the lowest common denominator: “selfish, tribal, gullible, and greedy information consumers who will believe whatever they are told.” To these respondents, technology will cause more problems than it will solve, as it will allow users to be bombarded with even more misleading information. One expert even referred to our present time as a “nuclear winter of misinformation”. The 49 percent with a positive outlook believed that we will find solutions to our current problems with misinformation, and they expressed a belief that technology, which can be used to spread misinformation, can also do much to combat it.
Both the optimists and pessimists agreed that there is no quick fix to the challenges posed, and that technology alone cannot provide the solution to the situation it has helped create. What’s needed, they believe, is a renewed focus on objective, accurate information fostered in all levels of education, and greater support for quality journalism. Similarly, a 2018 report by the EU Commission’s High Level Expert Group on Fake News and Online Disinformation recommended five steps to counter disinformation and fake news in the future: enhancing transparency of online news through better data sharing; promoting media and information literacy to help users navigate the digital media environment; developing tools to empower users and journalists to tackle disinformation; safeguarding the diversity and sustainability of the European news media ecosystem; and promoting continued research on the impact of disinformation in Europe.
One thing is clear: in a future of networked truth, the need for trusted and balanced channels of information is greater than ever. Some countries have already taken measures to achieve this. In Norway, the fact-checking site Faktisk.no has been established for the purpose of preventing the spread of fake news and misleading information. In other countries, the measures have been more extensive. In France, for example, a law was passed in 2018 which allows authorities to remove fake content and block sites that publish it. Singapore also recently enacted harsh laws punishing those who spread fake news with lengthy prison sentences or hefty fines.
Assuming the role of ‘fact-checker’ may help alleviate some of the problems caused by the rise of networked truth, but it is also a reactive position to take. Lies spread faster than facts – much faster, in fact. A recent study published in Science magazine monitored about 126,000 rumours spread on Twitter between 2006 and 2017. It found that false news cascades reached between 1,000 and 100,000 people, whereas the truth rarely reached more than 1,000. Fact-checking, while important and no doubt beneficial, is treating the symptoms, not engaging with the root cause. In the long term, proactive measures that focus on fostering information, news and media literacy will likely have a more significant impact.
Looking closer at individual media users, a central question for the future is the extent to which the need for trusted and more transparent sources of information will outweigh the desire for more convenient products and services. The horizontalisation and hyper-personalisation of digital ecosystems, which happen when digital giants leverage their vast insights into individual consumer behaviours across platforms, mean that citizens must often trade transparency for convenience. Unless a different model gains ground – for instance, one where citizens have complete control over the data they allow platforms to access, and the situations in which they allow it – it remains an open question whether fostering information literacy will have the desired effect, or whether it will be overshadowed by the temptation of highly personalised offerings.