Rebeka F Guarda, Marcia P Ohlson, Anderson V Romanini, Daniel Martínez-Ávila. Education for Information. Volume 34, Issue 3, 2018.
Based on recent political events, such as Brexit (UK) and the election of Donald Trump (USA), it has become clear that political marketing has been using ‘Big Data’ intensively. Information gathered from social media networks is organized into digital environments and has the power to determine the outcome of elections, plebiscites and popular consultations. New advertising and persuasion mechanisms have been created that undermine the reliability of the traditional mass media communication familiar to the general audience. Consequently, ‘fake news’ and ‘alternative facts’ have emerged along with the notion of ‘post-truth’, which describes a state of affairs in which public opinion has been contaminated by these strategies. Drawing on the pragmatic-semiotic concepts developed by Peirce, such as belief, mental habits, controlled action, final opinion, truth, and reality, we argue that the ‘global village’ (McLuhan, 2008) may be at a dangerous fork in the road. Peirce’s ‘scientific method’ consists of (1) the formulation of hypotheses, (2) the deduction of their consequences, and (3) the design of experiments, and it aims to test our beliefs against results that are critically evaluated by communities of researchers. This fork in the road, which rapidly evolves as a dystopia built and reaffirmed by the spread of disinformation on social networks, points towards a ‘post-reality’ that can represent an illusory and brief comfort zone for those who live in it, but may also represent a tragedy with no turning back for our entire civilization.
The spread of disinformation is a topic that has gained increasing visibility worldwide, especially after indications that this type of practice may have influenced the outcome of political events such as the 2016 U.S. elections and the Brexit referendum, which signaled the withdrawal of Britain from the European Union. In 2016, this discussion entered the international public sphere after The Economist published an article entitled “Art of the Lie” (2016), which focused on the term ‘post-truth’ and blamed the internet and social media for the dissemination of lies told by politicians such as Donald Trump. A few months later, Oxford Dictionaries selected ‘post-truth’ as the word of the year, defining it as an adjective “relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief” (Oxford Dictionaries, 2016).
Little by little, untrue information that seemed to be created and spread in order to obtain advantages started being called ‘fake news’ by mass media communication businesses and researchers alike. Buzzfeed media editor Craig Silverman, one of the people responsible for spreading the term, used it for the first time on Twitter in 2014 (Silverman, 2017). In January 2017, at his first presidential press conference since Election Day, Donald Trump refused to take questions from CNN reporter Jim Acosta, stating that he would not answer questions from CNN because it works with ‘fake news’ (CNN, 2017). The establishment and popularization of this term in the public sphere, regarding the grounds on which journalistic information is based, gained even more ground when U.S. Counselor to the President Kellyanne Conway used the term “alternative facts” during a “Meet the Press” interview on January 22, 2017, in which she defended White House Press Secretary Sean Spicer’s false statement about the attendance numbers at Donald Trump’s inauguration (Swaine, 2017).
‘Post-truth’ can be understood as the by-product of a phenomenon addressed in the psychology literature, ‘confirmation bias’, that is, the tendency to assess information selectively: only evidence that supports an initial belief or hypothesis is accepted. “Confirmation bias, as the term is typically used in the psychology literature, connotes the seeking or interpreting of evidence in ways that are partial to existing beliefs, expectations, or a hypothesis in hand” (Nickerson, 1998, p. 175). From the systemic point of view contained in pragmatic communication theory (Watzlawick, 1968) and directly related to the psychology of communication, ‘confirmation bias’ produces positive feedback in a predisposition for disagreement and conflict, leading communication agents into a spiral that gravitates towards a violent clash of opinions (ibid., pp. 28-32).
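As a rough, hypothetical illustration of this positive feedback (a toy model of our own, not one drawn from the literature cited above), consider an agent that simply ignores any evidence contradicting its current leaning. Even when the evidence stream is in fact evenly split, the agent's belief drifts away from neutrality:

```python
import random

def biased_update(belief, evidence):
    """Confirmation bias, caricatured: evidence is incorporated only
    when it points the same way as the current belief."""
    same_side = (evidence >= 0.5) == (belief >= 0.5)
    if not same_side:
        return belief           # disconfirming evidence is discarded
    return belief + 0.1 * (evidence - belief)  # small step toward it

random.seed(42)
belief = 0.55                    # a slight initial leaning
for _ in range(500):
    belief = biased_update(belief, random.random())  # 50/50 evidence

print(round(belief, 2))          # ends up well above the neutral 0.5
```

If the `same_side` filter is removed, the same loop converges back toward the evidence mean of 0.5, which is the point of the sketch: it is the filtering step, not the evidence itself, that produces the drift.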
Disinformation and Public Opinion
Since mid-2017, the academic debate concerning disinformation and ‘fake news’ has been growing quickly. However, many researchers highlight that this topic is not entirely new, since lies and manipulation have always existed, especially those deliberately spread to reach specific goals in public opinion. Examples are common in the literature and are mainly related to war propaganda and to inflamed debates between rival political groups, especially during election periods or social uprisings. More often than not, disinformation and the promotion of ignorance, fundamentalism and prejudice through inaccurate information have been pointed out as among the main strategies for social domination or for the demobilization of protests in the face of injustices and attacks on fundamental human rights.
Assuredly, studies about the ideological use of mass media communication are a fundamental part of the critical theories inspired by Marx in the 20th century. Adorno, Bakhtin, Gramsci and Baudrillard have all written about the role of mass media communication and its strategies of (mis)representing social reality and the social production of meaning to ensure the maintenance of the status quo and the reproduction of the forms of domination of one class over another. In his book “Marxism and the Philosophy of Language”, first published in 1929 and one of the best-known works of the so-called “Bakhtin Circle”, Voloshinov (1973) emphasized that linguistic signs are dynamic entities that are capable of simultaneously reflecting and refracting an individual according to the ideological perspective that conditions social dialogues:
Every sign, as we know, is constructed between socially organized persons in the processes of their interaction. Therefore, the forms of signs are conditioned above all by the social organization of the participants involved and also by the immediate conditions of their interaction. When those forms change, so does the sign. And it should be one of the tasks of the study of ideologies to trace the social life of the verbal sign. Only so approached can the problem of the relationship between sign and existence find its concrete expression; only then will the process of the causal shaping of the sign by existence stand out as a process of genuine existence-to-sign transit, of genuine dialectical refraction of existence in the sign. (p. 21, author’s emphasis)

Author of important concepts such as dialogism and the chronotope, the semiotician and philosopher Mikhail Bakhtin gathered around him a group of scholars that became known as “the Bakhtin Circle”, of which Valentin Voloshinov was part. In this article, we attribute the authorship of “Marxism and the Philosophy of Language” to Voloshinov, following the edition published in 1973 by Seminar Press. The authorship of this book is controversial, however, since it has also been attributed to Bakhtin in a few editions, as well as in the biography “Mikhail Bakhtin”, originally published in 1984 by Katerina Clark and Michael Holquist.

In the 1980s, Baudrillard (1994) exposed the set of simulacra and simulations produced by the colonization of our representations through advertising strategies. According to him, human experience itself has become a simulation, since signs have become stronger than reality. In this context, the so-called simulacra are simulations that are disconnected from reality, i.e., they represent elements that do not exist and yet, paradoxically, become models for reality.
Even in functionalist studies of mass communication research, the excess of information has been criticized because of its impacts on public opinion concerning socially relevant topics. According to this school of thought, excessive information generates confusion and passiveness among the population instead of engagement for social change. Lazarsfeld and Merton (1948) coined the term ‘narcotizing dysfunction’ to describe the effect that the overwhelming flow of information produced by mass media had on individuals, making them passive in their social activism. According to this hypothesis, because of the amount of diverse information available, individuals spend more time trying to understand current issues and less time actually conducting socially organized activities. Lazarsfeld and Merton (ibid.) stated that action strategies may be discussed but are rarely implemented by individuals who face this kind of information overflow. In other words, people unconsciously substitute knowledge for action. Although information and political messages have multiplied throughout traditional and online media, political participation continues to drop. People pay increasing attention to media, but overexposure to media messages can confuse the audience, thus discouraging them from getting involved in the political process.
These theories, however, were incapable of foreseeing the fast-paced technological progress that started with the invention of the World Wide Web. Advancements such as social networking on digital platforms, mobile devices connected to geolocation, and the storage of huge amounts of data in ‘clouds’ (networks of computers and memory units capable of storing and processing enormous amounts of digital information) have exponentially increased the social importance of mass media in the digital era. McLuhan (2008) was surely the 20th-century scholar who came closest to anticipating our current reality, as he wrote about the perceptive/cognitive convergence of media and the ‘global village’ that has emerged with it. According to the Canadian researcher, a society globalized by the flow of mediatized information would be sustained by new types of logic, based entirely on orality (in contrast to the linear logic that sustains written communication), on abrasive and emotive communication between participants, and on the redefinition of the meaning and importance of privacy.
New Technologies, Big Data and Filter Bubble
Following McLuhan’s train of thought regarding media, Manovich (2001, pp. 218-219) proposed that the invention of the database represents the birth of a new cultural genre, which associates data and allows spectators to access fragmented information interactively through the customization of content filters. If the information contained in databases represents past information, collected and classified according to distinct logics, the patterns extracted from those databases can provide future possibilities specific to the personal context of each individual user or to the social context of a certain community.
Before moving on, it is important to point out the differences between previous methods for collecting and storing huge volumes of data, such as those conducted by banks and censuses, and those used in the current ‘Big Data’ phenomenon. Boyd and Crawford (2012) have defined ‘Big Data’ as a socio-technical phenomenon and explained that it “is less about data that is big than it is about the capacity to search, aggregate, and cross-reference large data sets” (p. 663). Contrary to the common belief that large data sets would be enough to provide unbiased and truthful information, these authors alert us to the need to reflect critically on the origins of that data, the means of access to it, the interests involved, and the biases related to it. In addition, they state that social media users “are not necessarily aware of the multiple uses, profits, and other gains that come from information they have posted” (ibid., p. 672). In this perspective, Kitchin (2013, p. 262) indicates that ‘Big Data’ is “huge in volume (…), high in velocity (…), diverse in variety (…), exhaustive in scope (…), fine-grained in resolution (…) and uniquely indexical in identification, relational in nature (…), and flexible (…)”. In an article discussing the epistemological implications of the data revolution, Kitchin (2014) argues that ‘Big Data’ and new data analytics are reconfiguring the way research is conducted, as they favor the correlation of data and hinder hypothesis testing. More than a matter of type, quantity or speed, ‘Big Data’ is associated with a new analysis methodology enabled by the development of high-performance computers and artificial intelligence technologies that are capable of detecting patterns and constructing predictive models (ibid., p. 2).
Just and Latzer (2016) explain that the increasing flood of digital data created the demand for automated algorithmic selection in order to deal with the massive amounts of collected data. In this sense, ‘Big Data’ and the ‘algorithmic selection’ process are co-evolving: the first one is “a new economic asset class”, while the second is “a new method of extracting economic and social value from big data” (ibid., p. 240). Automated algorithmic selection has been applied in the social sphere in many different ways, which led Just and Latzer to argue that the algorithms themselves need to be evaluated as institutions and as key actors, since “they influence not only what we think about but also how we think about it and consequently how we act, thereby co-shaping the construction of individuals’ realities, structurally similar but essentially different to mass media.” (Ibid., p. 254).
Pariser (2011) argues that the abundant flow of data circulating on the internet and the algorithms used by companies like Google and Facebook lead to what he calls “personalization”. Based on the data collected from users, websites and social media, algorithms create predictions about who their users are and what they would like to do, and thus select the information each user receives. This process changes the way information circulates on the internet and leads to the ‘filter bubble’, an invisible mechanism that provides individuals only with information in line with their preferences, connecting people who have similar opinions and distancing people who think differently (ibid., p. 9). Among the consequences: “(…) personalized filters limit what we are exposed to and therefore affect the way we think and learn. They can upset the delicate cognitive balance that help us to make good decisions and come up with new ideas” (ibid., p. 83).
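The “personalization” mechanism described above can be caricatured in a few lines of code. The sketch below is a hypothetical toy ranker of our own (not any platform's actual algorithm): it orders a feed by how often the user has previously clicked on each topic, so topics the user already favors crowd out everything else.

```python
from collections import Counter

def personalize(feed, click_history, k=3):
    """Toy 'filter bubble': rank items by how often the user has
    clicked on their topic before, and show only the top k."""
    counts = Counter(click_history)
    ranked = sorted(feed, key=lambda item: counts[item["topic"]],
                    reverse=True)
    return ranked[:k]

feed = [
    {"title": "A", "topic": "sports"},
    {"title": "B", "topic": "politics"},
    {"title": "C", "topic": "politics"},
    {"title": "D", "topic": "science"},
    {"title": "E", "topic": "sports"},
]
history = ["politics", "politics", "sports"]   # the user's past clicks

for item in personalize(feed, history):
    print(item["title"])   # politics items first; "science" never shown
```

If every click on this filtered feed were appended back to `history`, each subsequent ranking would be even more skewed; it is this feedback loop, rather than any single ranking, that inflates the bubble.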
Participatory Lies on Web 2.0
Considering that our current context is influenced by ‘post-truth’, manipulated content has gained increasing space for growth, since emotions and personal beliefs have become more important than verified facts in the shaping of public opinion. However, sharing content irresponsibly, that is, giving voice only to what is in accordance with our own opinions and beliefs without bothering to check the accuracy of the information, explains only part of the problem. Disinformation is a more complex phenomenon. Beyond the increased potential for dissemination promoted by information and communication technologies and social media, a “democratization” of content creation is also underway. Jenkins (2009) defines this as the ‘participatory culture of Web 2.0’ and states that this type of content creation, previously restricted to mass media communication businesses, can now be done (and is done) by any organization or individual interested in spreading ideas or ideologies. A common strategy for disseminating disinformation is the use of websites or social media profiles with titles similar to those of existing and supposedly trustworthy media outlets. This clearly illustrates the parasitic strategies that feed on the reputation of traditional mass media communication businesses.
In this new scenario, the academic community has been searching for a deeper comprehension of the origins and implications of the informational chaos that has reached global proportions with the aid of new technologies. Over the last two years, the term ‘fake news’ has been in the spotlight. For instance, in an article discussing the 2016 U.S. elections from the standpoint of the growth of online news and social media platforms, Allcott and Gentzkow (2017), while studying news articles with political implications, defined fake news as “news articles that are intentionally and verifiably false, and could mislead readers” (p. 213). However, because ‘fake news’ has a vague definition and has been repeatedly used by politicians interested in disqualifying news coverage, researchers and journalists have searched for new terms capable of describing more precisely and critically the processes of creating and sharing untrue information.
Many Shades of Mis-and-Disinformation
Floridi (2010), Professor of Philosophy and Ethics of Information at the University of Oxford, has tackled the contemporary ethical aspects of information in what he calls the ‘infosphere’, the new realm of digital information that has superseded the old analogue configurations of human culture. He defines mis- and disinformation as structured (digital) data that is simultaneously, although in different ways, semantic (meaningful), factual and untrue; misinformation being unintentionally untrue and disinformation being intentionally untrue.
When it comes to digital media, simple definitions seem to collapse. Wardle and Derakhshan (2017) warn us that ‘information pollution’ is not limited to the news and that the term ‘fake news’ is inadequate to describe the complexity of the phenomenon. Thus, they have proposed a new conceptual framework for examining the so-called ‘information disorder’:
Dis-information. Information that is false and deliberately created to harm a person, social group, organization or country.
Mis-information. Information that is false, but not created with the intention of causing harm.
Mal-information. Information that is based on reality, used to inflict harm on a person, organization or country. (2017, p. 20, authors’ emphasis)
Wardle and Derakhshan (2017) added that information disorder should also be examined in terms of the actors, messages and interpreters involved, as well as the diversity of forms, motivations and modes of dissemination. The nuances of controversial online content can be observed through the seven types of mis- and disinformation that Wardle outlined in her article “Fake News. It’s Complicated” (Wardle, 2017), which reveal the complexity of the phenomenon: “1) satire or parody (no intention to cause harm but has the potential to fool); 2) misleading content (misleading use of information to frame an issue or individual); 3) imposter content (when genuine sources are impersonated); 4) fabricated content (new content is 100% false, designed to deceive and do harm); 5) false connection (when headlines, visuals or captions don’t support the content); 6) false context (when genuine content is shared with false contextual information); and 7) manipulated content (when genuine information or imagery is manipulated to deceive)”.
Previously, these threats came in the form of text containing false information or, at best, an altered, old or out-of-context photo. Nowadays, we deal with extensive tampering that uses artificial intelligence and other techniques previously mastered only by cinema and its special effects. In a similar vein, Chesney and Citron (2018) argued that ‘deep fake’ technologies have expanded the potential distortion of reality with techniques such as machine learning. According to these authors, sophisticated technologies keep making falsifications more realistic and more profound. At the same time, falsifications have become more difficult to detect, which can lead to problems such as new forms of exploitation, sabotage and threats to democracy. These characteristics are synchronized with the consolidation of the so-called ‘Web 3.0’, which has artificial intelligence, augmented reality, and the broadcasting of high-definition, real-time information as some of its pillars.
Disinformation Meets Big Data
The most significant case at the intersection of ‘Big Data’ and disinformation was the 2016 U.S. presidential election, which involved claims of stolen data and of Russian interference aimed at favoring the Republican candidate Donald Trump. In 2017, Google, Facebook and Twitter admitted that Russian operators had bought and used their services to spread false information and promote polarization within American society (Isaac & Wakabayashi, 2017). A study conducted by journalist Jonathan Albright and published in The Washington Post revealed that posts made by only six of the 470 Russian Facebook accounts controlled by a Russian troll farm were shared more than 340 million times and generated more than 19.1 million interactions (Timberg, 2017).
A new scandal arose in March 2018, when The New York Times revealed that Cambridge Analytica, one of the companies responsible for Trump’s campaign, had used data improperly obtained from millions of Facebook users to map out psychological profiles and craft personalized messages capable of influencing voters’ behavior (Rosenberg et al., 2018). The company collected this data through an alleged personality test, without revealing that the information gathered would be used for election purposes. According to an article published in The Guardian, the company used additional information obtained from geolocation to send messages and monitor their effectiveness on platforms such as Facebook, YouTube and Twitter (Lewis & Hilder, 2018). This leads us to the hypothesis that false news is a phenomenon associated not only with communicational, social and political aspects, but also with an economic element. The creation of deliberately manipulated information has become a new industry, which operates through the financial compensation of its content creators on social media, e.g. ‘click factories’. Many websites and social media pages that promote manipulated content rely on ‘click bait’ for their financing and/or profit.
Researchers have not yet reached a consensus regarding the true impact that false information had on the results of the 2016 U.S. elections, that is, whether it actually determined electors’ votes. However, we can already infer that the increasingly quick and sophisticated techniques used for creating and disseminating disinformation are a threat to be considered in any electoral process. Moreover, people who share information that supports their beliefs are not the only ones who contribute to the dissemination of this type of article. Web robots, also known as bots, and armies of fake profiles have an enormous impact on the promotion of untrue information, showing that the level of technical sophistication that such content has reached is capable of confusing even the most skeptical readers and the most qualified experts.
Furthermore, disinformation frequently produces extremist and polarized discourses that are strengthened in social media by the ‘bubble effect’, leading specific groups of people to protest in public spaces. The reasons for these protests can vary, and they frequently promote prejudice and hatred against minorities in artistic, political and even violent forms, such as the murders that occurred in India after fake news spread through WhatsApp (a popular messaging app). In this context, the ideological use of digital media takes the ‘narcotizing dysfunction’ to complete aporia and enables new forms of domination and exploitation, involving mainly cognition and the amount of time social media users spend in digital environments. Thus, a door leading to ‘post-reality’ is opened.
The Semiotic-Cognitive Stance
All these kinds of mis- and disinformation, which thrive with little or no constraint on digital social media, seem to be producing a dystopic representation of reality that fuels the realm of post-truth. This phenomenon, unprecedented in our civilization, combines the ‘confirmation bias’ (which makes it easier and faster to disseminate ‘fake news’ that repeats age-old prejudices and misconceptions) with the ‘narcotizing dysfunction’ (which hinders the establishment of new beliefs and habits capable of dealing with often urgent, last-minute subjects). Semiotic refraction seems to be replacing reflection, resulting in the emergence of ‘post-reality’, a type of parallel universe embedded in real life, i.e., a simulacrum that seems more concrete than reality itself. For instance, some climate change deniers, fed by fake news, continue to behave in an unruly manner and against the best practices suggested by the scientific community; some even burn fossil fuel unnecessarily as a way to express their biased views. ‘Post-reality’ may well be the final and ultimate trap of our species, since we have already reached turning points in many fields, such as nuclear weapon escalation and global warming.
In the philosophical definitions mentioned above, Floridi applies truth values only to symbols, the only class of sign that can be semantic. The problem with Floridi’s definition, from a semiotic and pragmatic point of view, is that attributes such as “true” or “untrue” applied to a symbol are only a matter of belief. On the other hand, the North-American philosopher Peirce defined pragmatism as a method to clarify ideas. He describes four ways of fixing belief concerning the truth of symbols such as words, concepts, ideas, propositions and arguments (Peirce, 1877). Three of them are non-scientific and contribute to the positive feedback of the ‘confirmation bias’: (1) the ‘a priori method’ fixes beliefs by selecting only information that fits nicely into a rational system previously accepted as true and, in this sense, comes close to a coherence theory of truth; (2) the ‘method of tenacity’ fixes beliefs as one comes up with a hypothesis and holds on to it, even against all contrary evidence; (3) the ‘method of authority’ works as one uncritically accepts the opinions of another person, group or institution based solely on their reputation and status. These methods can arise individually or can be mixed in different intensities whenever disinformation generates false beliefs that circulate in social media.
The fourth method, referred to as the ‘scientific method’, is based on experience and on the precise concatenation of three kinds of rational arguments: abduction (or hypothesis), deduction and induction. It works as follows: once a novelty appears before one’s eyes and produces curiosity and doubt, the person must make an effort to formulate the best possible conjecture based on previous knowledge. Once the individual has reached an abductive hypothesis, he or she must deductively extract its possible consequences and, finally, proceed to test the findings in the real world. This method works better than the others because it humbly assumes its own fallibility, knowing that the first argumentative step is to elaborate a conjecture that must be reformulated whenever the empirical test fails, until a stable belief is reached. Additionally, it depends on a community of inquirers in continuous dialogue as they search for a true belief about a given matter. In fact, Peirce advocates a logical type of socialism and states that inquiry, as a normative purpose, should be pursued by all members of a society, especially when vital matters arise.
The consequence of the ‘scientific method’ is that truth would be the final opinion, grounded in the ultimate mental habit developed by an ideal community of inquirers as they gather information through experience and share it in communicative exchange. This implies that we might never hold an ultimate true belief, but there is always hope that we can get as close to it as possible if sufficient efforts and resources are dedicated to the process. The ‘scientific method’ may be slow and cognitively demanding, but it is the only way to separate information from disinformation, because it grounds the relevant symbols in reality and in the building of a trustworthy social opinion. Peirce suggests that our social beliefs can be strengthened or weakened depending on how they perform when confronted with reality, in accordance with his pragmatic maxim: the meaning of an idea, symbol or concept lies in its general consequences, translated into social dispositions to act accordingly whenever needed. Chance is precisely the ratio between the success and failure of a belief when it is applied to experience, and the relation between chance and belief is logarithmic:
Any quantity which varies with chance might, therefore, it would seem, serve as a thermometer for the proper intensity of belief. Among all such quantities there is one which is peculiarly appropriate. When there is a very great chance, the feeling of belief ought to be very intense. Absolute certainty, or an infinite chance, can never be attained by mortals, and this may be represented appropriately by an infinite belief. As the chance diminishes the feeling of believing should diminish, until an even chance is reached, where it should completely vanish and not incline either toward or away from the proposition. When the chance becomes less, then a contrary belief should spring up and should increase in intensity as the chance diminishes, and as the chance almost vanishes (which it can never quite do) the contrary belief should tend toward an infinite intensity. Now, there is one quantity which, more simply than any other, fulfills these conditions; it is the logarithm of the chance. (CP 2.676, author’s emphasis)

References to the Collected Papers of C. S. Peirce (Peirce, 1958) are given in the text as decimal numbers referring to volume and paragraph; e.g., ‘2.276’ refers to Volume 2, paragraph 276.
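On our reading of this passage (an interpretation of ours, not Peirce’s own notation), the ‘chance’ of a proposition is its odds p/(1-p), and the proper intensity of belief is the logarithm of those odds: zero at an even chance, growing without bound as the chance becomes overwhelming, and turning into an equally intense contrary belief as it vanishes.

```python
import math

def belief_intensity(p):
    """Log of the 'chance' (odds) of a proposition with probability p,
    0 < p < 1 -- our reading of CP 2.676, not Peirce's own notation."""
    return math.log(p / (1 - p))

print(belief_intensity(0.5))                           # even chance: belief vanishes (0.0)
print(belief_intensity(0.99) > belief_intensity(0.9))  # belief grows with the chance
# a low chance yields an equally intense *contrary* belief
print(math.isclose(belief_intensity(0.1), -belief_intensity(0.9)))
```

This log-odds reading matches each condition in the quote: infinite belief only at an unattainable certainty, zero belief at an even chance, and a symmetric contrary belief as the chance diminishes.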
All four of Peirce’s methods try to establish mental habits that are capable of grounding our beliefs and putting us in a state of inclination to act according to them, but only the ‘scientific method’ stimulates democratic and responsible actions capable of dealing with socially complex subjects. Unfortunately, it is also the method most susceptible to the so-called ‘tragedy of the commons’, since it depends on the continuous engagement of all people involved in a given matter, and it is the method that most people are inclined to abandon whenever their comforting beliefs are challenged by the hard facts of experience. In addition, even when people do engage in public debate concerning difficult and complex issues for a period of time (usually when the subject is set on mass and social media), the flow of information can be so overwhelming that the ‘narcotizing dysfunction’ can hinder the population from taking genuine, decisive courses of action to change their realities.
Globalized corporations, such as Google and Facebook, which direct the digital flow of information according to their specific algorithms, can cause a number of effects:
- Individuals gather more disinformation than truth when navigating on digital social media.
- Our disinformed beliefs are positively reinforced by a dystopian universe of chances represented in digital media.
- ‘Authority’, ‘tenacity’ and ‘a priori’ methods are given prominence over the ‘scientific method’, while fake authorities, hate speech, and persuasive reassurance based on disinformation proliferate.
- We share our beliefs during social exchange within a limited and biased community, such as in the ‘bubbles’ of social media.
- We develop social mental habits and types of rational actions based on dystopic representations induced by ‘Big Data’ strategies.
When all of these phenomena coincide, we may call it a ‘perfect disinformation storm’. As they continue to intensify, we note that our civilization seems to be at a dangerous fork in the road, in which beliefs and their corresponding actions are no longer grounded in reality, but in a ‘post-reality’ that emerges from disinformation and its actual consequences.
The effects of building dystopic realities in the current context of global informational disorder can become even greater and more harmful in developing countries, such as Brazil and India. The penetration of electronic devices and digital media platforms is deep in these countries, which contrasts with their populations’ low levels of education. This new scenario demands preparation from governments and educational institutions in order to promote learning and stimulate citizens to develop a more critical view of the available content. Instead of enabling the democratization of content and technologies, databases are being used in strategies for manipulating information and have provided even more powerful weapons to certain social groups, contributing to the concentration of power and increased inequality (O’Neil, 2016).
Our civilization, as a whole, seems to be affected by this state of affairs at a critical moment of our history. ‘Post-reality’ is the semiotic hell at whose front door Dante Alighieri’s warning should hang: “Lasciate ogni speranza, voi ch’entrate” (“Abandon all hope, ye who enter here”; Inferno, Canto III, line 9).
Since digital information first made its appearance in the late 1940s, its mathematical and physical aspects began to stand out as computers, transmission infrastructures, and programming languages were put at the service of a fast-growing infosphere (Floridi, 2016). Only when it became clear that databases had become a new cultural genre (allowing fast media convergence) and that ‘Web 2.0’ had revealed the new era of participatory culture did scientists and philosophers begin to fully understand the cognitive aspects of the digital era anticipated by McLuhan. If information lost its meaning at the hands of engineers concerned solely with the best syntax for performance and efficiency, it is now time for the semantic and pragmatic aspects of information to become the center of our concerns. It has become clear that the massive quantity of information available on digital devices hinders our search for truth, since these flows are managed by specific logics (algorithms, data analysis techniques and platform policies) and by the ‘bubble effect’, which prevents users from having contact with varied content. Furthermore, the deliberate manipulation of information with the intention of obtaining advantages is led by social agents who are economically and politically very powerful. From this perspective, the challenge of searching for the truth requires combined efforts from governments, academia, the press and civil society to build common understandings that guide the debate, while facing the difficulty of distinguishing true information from fake information. The philosophy of information, and especially its ethical consequences, has become a necessary if not urgent matter. Several types of mis- and disinformation are spreading around the web through social media, and the power of these strategies to undermine traditional democracy has been demonstrated as they have manipulated the formation of public opinion regarding very sensitive subjects.
It has become clear that a cognitive and semiotic research approach is needed to understand how digital information concerning specific social subjects is turned into meaning. Peirce states that belief is a dynamic mental habit shared by a community of interpreters. This encourages us to accept that predicates such as “true” and “false” are normative, can only be applied in the long run, after cautious investigation by a community of inquirers, and are always provisional and open to further review. Reality, in pragmatic philosophy, is the object of the final representation built by an ideal community. If ‘fake news’, disinformation and ‘post-truth’ continue to grow in our societies, we will most likely see a corresponding ‘post-reality’ being represented and shared, which would lead to an even more deteriorated situation for our ethical considerations.