«Network science», a new discipline whose goal is to understand the properties of social aggregations and the ways in which they transform:
Science: The science of fake news
The rise of fake news highlights the erosion of long-standing institutional bulwarks against misinformation in the internet age. Concern over the problem is global. However, much remains unknown regarding the vulnerabilities of individuals, institutions, and society to manipulations by malicious actors. A new system of safeguards is needed. Below, we discuss extant social and computer science research regarding belief in fake news and the mechanisms by which it spreads. Fake news has a long history, but we focus on unanswered scientific questions raised by the proliferation of its most recent, politically oriented incarnation.
Science: The spread of true and false news online
We investigated the differential diffusion of all of the verified true and false news stories distributed on Twitter from 2006 to 2017. The data comprise ~126,000 stories tweeted by ~3 million people more than 4.5 million times. We classified news as true or false using information from six independent fact-checking organizations that exhibited 95 to 98% agreement on the classifications. Falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information, and the effects were more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends, or financial information. We found that false news was more novel than true news, which suggests that people were more likely to share novel information. Whereas false stories inspired fear, disgust, and surprise in replies, true stories inspired anticipation, sadness, joy, and trust. Contrary to conventional wisdom, robots accelerated the spread of true and false news at the same rate, implying that false news spreads more than the truth because humans, not robots, are more likely to spread it.
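To make the diffusion measures concrete, here is a minimal sketch (not the authors' code) of how the cascade size, depth, and maximum breadth compared in the study can be computed from a single retweet tree; the edge-list format and the example cascade are assumptions.

```python
# A minimal sketch of the cascade metrics the paper compares: size, depth,
# and max-breadth of a retweet cascade. Edge format (parent, child) and the
# example data are hypothetical, not the study's dataset.
import networkx as nx

def cascade_metrics(edges, root):
    """Compute size, depth, and max-breadth of a retweet cascade tree."""
    tree = nx.DiGraph(edges)
    depths = nx.shortest_path_length(tree, source=root)  # node -> hop distance
    size = tree.number_of_nodes()
    depth = max(depths.values())
    # breadth at level d = number of nodes exactly d hops from the root
    levels = {}
    for node, d in depths.items():
        levels[d] = levels.get(d, 0) + 1
    max_breadth = max(levels.values())
    return size, depth, max_breadth

# Hypothetical cascade: user 0 tweets; 1-3 retweet 0; 4 retweets 1; 5 retweets 4
print(cascade_metrics([(0, 1), (0, 2), (0, 3), (1, 4), (4, 5)], root=0))
# -> (6, 3, 3)
```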
Nature: ‘News’ spreads faster and more widely when it’s false
The role of false stories in Donald Trump’s surprise 2016 election victory or the UK’s Brexit vote, for example, is subject to intense debate. Part of the answer hinges on understanding how fake news travels, say Sinan Aral and his team at MIT, whose study was published in Science on 8 March. They classified news spread on Twitter as “true” or “false”, according to cross-checks using information from six established fact-checking sources. In this way, they investigated the dissemination of 126,000 news items among 3 million Twitter users between 2006 and 2017, using data supplied by the company. Their analysis showed that news stories deemed to be true, with 95–98% agreement among fact-checkers, spread more slowly than false stories, and reached fewer people. Even the most popular true news stories rarely reached more than 1,000 people, whereas the top 1% of false news stories reached between 1,000 and 100,000 people. False news that reached 1,500 people did so six times faster than did true stories. And falsehoods were 70% more likely to be retweeted than truths, according to a model of the data.
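The 70% figure is an odds ratio from a statistical model of retweeting. As a hedged illustration of the arithmetic only: in a logistic model, the odds ratio for a "false" indicator variable is exp(beta). The coefficient below is made up.

```python
# Illustration of where a "70% more likely to be retweeted" figure can come
# from: the odds ratio implied by a fitted logistic-regression coefficient.
# The coefficient value here is hypothetical, not the study's estimate.
import math

beta_false = 0.53  # hypothetical coefficient on the is_false indicator
odds_ratio = math.exp(beta_false)
print(f"odds ratio ~ {odds_ratio:.2f}")  # ~1.70, i.e. ~70% higher odds
```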
Nature: The biggest pandemic risk? Viral misinformation
A hundred years ago this month, the death rate from the 1918 influenza was at its peak. An estimated 500 million people were infected over the course of the pandemic; between 50 million and 100 million died, around 3% of the global population at the time. A century on, advances in vaccines have made massive outbreaks of flu — and measles, rubella, diphtheria and polio — rare. But people still discount their risks of disease. Few realize that flu and its complications caused an estimated 80,000 deaths in the United States alone this past winter, mainly in the elderly and infirm. Of the 183 children whose deaths were confirmed as flu-related, 80% had not been vaccinated that season, according to the US Centers for Disease Control and Prevention. I predict that the next major outbreak — whether of a highly fatal strain of influenza or something else — will not be due to a lack of preventive technologies. Instead, emotional contagion, digitally enabled, could erode trust in vaccines so much as to render them moot. The deluge of conflicting information, misinformation and manipulated information on social media should be recognized as a global public-health threat.
PNAS: The spreading of misinformation online
The wide availability of user-provided content in online social media facilitates the aggregation of people around common interests, worldviews, and narratives. However, the World Wide Web is a fruitful environment for the massive diffusion of unverified rumors. In this work, using a massive quantitative analysis of Facebook, we show that information related to distinct narratives—conspiracy theories and scientific news—generates homogeneous and polarized communities (i.e., echo chambers) having similar information consumption patterns. Then, we derive a data-driven percolation model of rumor spreading that demonstrates that homogeneity and polarization are the main determinants for predicting cascades’ size.
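The paper's percolation model is fitted to Facebook data; the sketch below is only a simplified illustration of the underlying idea that homogeneity drives cascade size: transmission along an edge is more likely when its endpoints share a polarization. The graph, polarization values, and transmission rule are all assumptions.

```python
# A simplified percolation-style rumor spread (not the paper's fitted model):
# each edge transmits with probability base_p scaled by the opinion
# homogeneity of its endpoints, with polarization pol in {-1, +1}.
import random
import networkx as nx

def spread(graph, seed_node, base_p=0.5, seed=42):
    """Return the final cascade size starting from seed_node."""
    rng = random.Random(seed)
    infected, frontier = {seed_node}, [seed_node]
    while frontier:
        node = frontier.pop()
        for nb in graph.neighbors(node):
            if nb in infected:
                continue
            # homogeneity is 1 for like-minded endpoints, 0 for opposed ones
            h = (1 + graph.nodes[node]["pol"] * graph.nodes[nb]["pol"]) / 2
            if rng.random() < base_p * h:
                infected.add(nb)
                frontier.append(nb)
    return len(infected)

g = nx.erdos_renyi_graph(200, 0.05, seed=1)
rng = random.Random(0)
for n in g:
    g.nodes[n]["pol"] = rng.choice([-1.0, 1.0])  # two polarized camps
print(spread(g, seed_node=0))
```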
PLOS ONE: Science vs Conspiracy: Collective Narratives in the Age of Misinformation
The wide availability of user-provided content on online social media facilitates the aggregation of people around shared beliefs, interests, worldviews, and narratives. In spite of the enthusiastic rhetoric about so-called collective intelligence, unsubstantiated rumors and conspiracy theories—e.g., chemtrails, reptilians or the Illuminati—are pervasive in online social networks (OSNs). In this work we study, on a sample of 1.2 million individuals, how information related to very distinct narratives—i.e., mainstream scientific news and conspiracy news—is consumed and shapes communities on Facebook. Our results show that polarized communities emerge around the two distinct types of content, and that habitual consumers of conspiracy news turn out to be more focused on and self-contained within their specific content. To test potential biases induced by continued exposure to unsubstantiated rumors, we conclude our analysis by measuring how users respond to 4,709 troll posts—i.e., parodistic and sarcastic imitations of conspiracy theories. We find that 77.92% of likes and 80.86% of comments on these posts come from users who usually interact with conspiracy stories.
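As a hedged sketch of the kind of measurement behind the 77.92% figure, the snippet below computes the share of likes on troll posts that come from users whose like history is dominated by conspiracy content. The data structures and the 0.95 polarization threshold are assumptions, not the paper's exact definitions.

```python
# Measure how many troll-post likes come from conspiracy-polarized users.
# User histories, the troll-like list, and the threshold are hypothetical.
def polarization(user_likes):
    """Fraction of a user's likes that fall on conspiracy posts."""
    conspiracy = sum(1 for cat in user_likes if cat == "conspiracy")
    return conspiracy / len(user_likes)

users = {  # hypothetical like histories: user -> categories of liked posts
    "u1": ["conspiracy"] * 19 + ["science"],
    "u2": ["science"] * 20,
    "u3": ["conspiracy"] * 10,
}
troll_likes = ["u1", "u1", "u3", "u2", "u1"]  # hypothetical likes on troll posts
from_conspiracy = sum(1 for u in troll_likes if polarization(users[u]) >= 0.95)
print(f"{100 * from_conspiracy / len(troll_likes):.1f}% of troll likes "
      "come from conspiracy-polarized users")
```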
Proc. 2nd Workshop on Data Science for Good: Some Like it Hoax: Automated Fake News Detection in Social Networks
In recent years, the reliability of information on the Internet has emerged as a crucial issue of modern society. Social network sites (SNSs) have revolutionized the way in which information is spread by allowing users to freely share content. As a consequence, SNSs are also increasingly used as vectors for the diffusion of misinformation and hoaxes. The amount of disseminated information and the rapidity of its diffusion make it practically impossible to assess reliability in a timely manner, highlighting the need for automatic hoax detection systems.
As a contribution towards this objective, we show that Facebook posts can be classified with high accuracy as hoaxes or non-hoaxes on the basis of the users who "liked" them. We present two classification techniques, one based on logistic regression, the other on a novel adaptation of boolean crowdsourcing algorithms. On a dataset consisting of 15,500 Facebook posts and 909,236 users, we obtain classification accuracies exceeding 99% even when the training set contains less than 1% of the posts. We further show that our techniques are robust: they work even when we restrict our attention to the users who like both hoax and non-hoax posts. These results suggest that mapping the diffusion pattern of information can be a useful component of automatic hoax detection systems.
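A minimal sketch of the logistic-regression variant of this approach: each post is encoded as a sparse binary vector over the users who liked it, and a classifier is trained on labeled posts. The toy matrix below stands in for the real 15,500-post dataset.

```python
# Classify posts as hoax / non-hoax from the users who liked them.
# Rows = posts, columns = users; X[i, j] = 1 if user j liked post i.
# The toy data and labels are made up for illustration.
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.linear_model import LogisticRegression

X = csr_matrix(np.array([
    [1, 1, 0, 0],   # hoax posts tend to be liked by users 0 and 1
    [1, 1, 1, 0],
    [0, 0, 1, 1],   # non-hoax posts tend to be liked by users 2 and 3
    [0, 0, 0, 1],
]))
y = np.array([1, 1, 0, 0])  # 1 = hoax, 0 = non-hoax

clf = LogisticRegression().fit(X, y)
new_post = csr_matrix([[1, 0, 1, 0]])  # liked by users 0 and 2
print(clf.predict(new_post), clf.predict_proba(new_post))
```

The sparse encoding matters at this scale: with 909,236 users, a dense post-by-user matrix would be impractical, while each post actually touches only a handful of columns.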
Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining: Leveraging the Crowd to Detect and Reduce the Spread of Fake News and Misinformation
Online social networking sites are experimenting with the following crowd-powered procedure to reduce the spread of fake news and misinformation: whenever a user is exposed to a story through her feed, she can flag the story as misinformation and, if the story receives enough flags, it is sent to a trusted third party for fact checking. If this party identifies the story as misinformation, it is marked as disputed. However, given the uncertain number of exposures, the high cost of fact checking, and the trade-off between flags and exposures, the above-mentioned procedure requires careful reasoning and smart algorithms which, to the best of our knowledge, do not exist to date. In this paper, we first introduce a flexible representation of the above procedure using the framework of marked temporal point processes. Then, we develop a scalable online algorithm, CURB, to select which stories to send for fact checking and when to do so to efficiently reduce the spread of misinformation with provable guarantees. In doing so, we need to solve a novel stochastic optimal control problem for stochastic differential equations with jumps, which is of independent interest. Experiments on two real-world datasets gathered from Twitter and Weibo show that our algorithm may be able to effectively reduce the spread of fake news and misinformation.
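CURB itself solves a stochastic optimal control problem over marked temporal point processes; the snippet below is only a much-simplified heuristic in the same spirit, ranking stories for fact-checking by their flag-to-exposure ratio under a fixed budget. The threshold, data, and ranking rule are assumptions, not the paper's algorithm.

```python
# Simplified stand-in for flag-driven fact-check selection: send the stories
# with the highest flag rate, ignoring stories with too few exposures to
# estimate a rate. All numbers here are hypothetical.
import heapq

def pick_stories(stats, budget, min_exposures=100):
    """stats: {story_id: (flags, exposures)}; return the `budget` stories
    with the highest flags/exposures ratio."""
    rates = [
        (flags / exposures, story)
        for story, (flags, exposures) in stats.items()
        if exposures >= min_exposures
    ]
    return [story for _, story in heapq.nlargest(budget, rates)]

stats = {"a": (40, 1000), "b": (5, 2000), "c": (9, 150), "d": (3, 50)}
print(pick_stories(stats, budget=2))  # -> ['c', 'a']
```

The minimum-exposure filter is one crude answer to the exposure/flag trade-off the abstract mentions: a story flagged 3 times out of 50 exposures may just be noisy, not more suspicious than one flagged 40 times out of 1,000.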
ArXiv: Studying Fake News via Network Analysis: Detection and Mitigation
Social media for news consumption is becoming increasingly popular due to its easy access, fast dissemination, and low cost. However, social media also enable the wide propagation of "fake news", i.e., news with intentionally false information. Fake news on social media poses significant negative societal effects and also presents unique challenges. To tackle these challenges, many existing works exploit various features, from a network perspective, to detect and mitigate fake news. In essence, the news dissemination ecosystem on social media involves three dimensions: a content dimension, a social dimension, and a temporal dimension. In this chapter, we review network properties for studying fake news, introduce popular network types, and show how these networks can be used to detect and mitigate fake news on social media.
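As a hedged sketch of the network perspective this chapter reviews, the snippet below extracts a few structural features from a news item's diffusion edges that a downstream classifier could consume; the feature choices and data are illustrative, not the chapter's specific feature set.

```python
# Extract simple structural features from one news item's repost network.
# Edges (sharer, resharer) and the chosen features are hypothetical.
import networkx as nx

def diffusion_features(edges):
    g = nx.DiGraph(edges)
    und = g.to_undirected()
    return {
        "n_users": g.number_of_nodes(),
        "n_reposts": g.number_of_edges(),
        "n_components": nx.number_connected_components(und),
        "max_out_degree": max(d for _, d in g.out_degree()),
    }

print(diffusion_features([("a", "b"), ("a", "c"), ("b", "d"), ("e", "f")]))
# -> {'n_users': 6, 'n_reposts': 4, 'n_components': 2, 'max_out_degree': 2}
```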
New Media & Society: The agenda-setting power of fake news: A big data analysis of the online media landscape from 2014 to 2016
This study examines the agenda-setting power of fake news, and of the fact-checkers who fight it, through a computational look at the online mediascape from 2014 to 2016. Although our study confirms that content from fake news websites is increasing, these sites do not exert excessive power. Instead, fake news has an intricately entwined relationship with online partisan media, both responding to and setting its issue agenda. In 2016, partisan media appeared to be especially susceptible to the agendas of fake news, perhaps due to the election. Emerging news media are also responsive to the agendas of fake news, but to a lesser degree. Fake news coverage itself is diverging and becoming more autonomous topically. While fact-checkers are autonomous in their selection of issues to cover, they were not influential in determining the agenda of news media overall, and their influence appears to be declining, illustrating the difficulties fact-checkers face in disseminating their corrections.
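Agenda-setting influence of this kind is commonly measured with lagged relationships between outlets' issue-attention time series. The snippet below shows one simple version of that idea, a lagged correlation; it is an illustration, not necessarily this study's method, and the series are made up.

```python
# Does partisan coverage of an issue follow fake-news coverage by one day?
# A lagged correlation between two daily attention series (both hypothetical).
import numpy as np

fake_news_attention = np.array([1, 3, 7, 9, 6, 4, 2, 1, 1, 2], dtype=float)
partisan_attention = np.array([0, 1, 2, 6, 9, 7, 4, 2, 1, 1], dtype=float)

lag = 1  # shift partisan series back by one day
r = np.corrcoef(fake_news_attention[:-lag], partisan_attention[lag:])[0, 1]
print(f"lag-{lag} correlation: {r:.2f}")
```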
Albert-László Barabási: Network Science: Spreading Phenomena
This chapter describes how the tools of network science can help understand the Web's structure, development and weaknesses. The Web is an information network, in which the nodes are documents (at the time of writing over one trillion of them), connected by links. Other well-known network structures include the Internet, a physical network where the nodes are routers and the links are physical connections, and organizations, where the nodes are people and the links represent communications.
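All three examples share the same minimal abstraction: a set of nodes joined by links. A sketch of that representation as an adjacency list, with hypothetical page names:

```python
# The Web as an information network: nodes are documents, links are
# hyperlinks, stored as an adjacency list. The pages are hypothetical.
web = {
    "page_a.html": ["page_b.html", "page_c.html"],
    "page_b.html": ["page_c.html"],
    "page_c.html": [],
}
n_nodes = len(web)
n_links = sum(len(targets) for targets in web.values())
print(n_nodes, n_links)  # 3 nodes, 3 links
```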
New England Journal of Medicine: The Spread of Obesity in a Large Social Network over 32 Years
Discernible clusters of obese persons (body-mass index [the weight in kilograms divided by the square of the height in meters], ≥30) were present in the network at all time points, and the clusters extended to three degrees of separation. These clusters did not appear to be solely attributable to the selective formation of social ties among obese persons. A person's chances of becoming obese increased by 57% (95% confidence interval [CI], 6 to 123) if he or she had a friend who became obese in a given interval. Among pairs of adult siblings, if one sibling became obese, the chance that the other would become obese increased by 40% (95% CI, 21 to 60). If one spouse became obese, the likelihood that the other spouse would become obese increased by 37% (95% CI, 7 to 73). These effects were not seen among neighbors in the immediate geographic location. Persons of the same sex had relatively greater influence on each other than those of the opposite sex. The spread of smoking cessation did not account for the spread of obesity in the network.
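The bracketed definition translates directly into code: BMI is weight in kilograms divided by the square of height in meters, with the study's obesity cutoff at BMI ≥ 30. The example values below are arbitrary.

```python
# BMI = weight_kg / height_m**2, obesity cutoff BMI >= 30 (per the study).
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

w, h = 95.0, 1.75  # arbitrary example values
print(f"BMI = {bmi(w, h):.1f}, obese: {bmi(w, h) >= 30}")  # BMI = 31.0, obese: True
```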
Philosophical Transactions of the Royal Society A: Analysis of large-scale social and information networks
The growth of the Web has required us to think about the design of information systems in which large-scale computational and social feedback effects are simultaneously at work. At the same time, the data generated by Web-scale systems—recording the ways in which millions of participants create content, link information, form groups and communicate with one another—have made it possible to evaluate long-standing theories of social interaction, and to formulate new theories based on what we observe. These developments have created a new level of interaction between computing and the social sciences, enriching the perspectives of both of these disciplines. We discuss some of the observations, theories and conclusions that have grown from the study of Web-scale social interaction, focusing on issues including the mechanisms by which people join groups, the ways in which different groups are linked together in social networks and the interplay of positive and negative interactions in these networks.
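The "interplay of positive and negative interactions" alludes to structural balance theory, under which a triangle of signed ties is balanced when the product of its edge signs is positive. A minimal check on a hypothetical signed network:

```python
# Structural balance: a triangle is balanced iff the product of its three
# edge signs is positive (e.g., "the enemy of my enemy is my friend").
# The signed ties below are hypothetical.
from itertools import combinations

signs = {("a", "b"): +1, ("b", "c"): -1, ("a", "c"): -1}

def edge_sign(u, v):
    return signs.get((u, v)) or signs.get((v, u))

nodes = {n for edge in signs for n in edge}
for u, v, w in combinations(sorted(nodes), 3):
    product = edge_sign(u, v) * edge_sign(v, w) * edge_sign(u, w)
    print((u, v, w), "balanced" if product > 0 else "unbalanced")
```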
ArXiv: The spread of fake news by social bots
The massive spread of fake news has been identified as a major global risk and has been alleged to influence elections and threaten democracies. Communication, cognitive, social, and computer scientists are engaged in efforts to study the complex causes for the viral diffusion of digital misinformation and to develop solutions, while search and social media platforms are beginning to deploy countermeasures. However, to date, these efforts have been mainly informed by anecdotal evidence rather than systematic data. Here we analyze 14 million messages spreading 400 thousand claims on Twitter during and following the 2016 U.S. presidential campaign and election. We find evidence that social bots play a key role in the spread of fake news. Accounts that actively spread misinformation are significantly more likely to be bots. Automated accounts are particularly active in the early spreading phases of viral claims, and tend to target influential users. Humans are vulnerable to this manipulation, retweeting bots that post false news. Successful sources of false and biased claims are heavily supported by social bots. These results suggest that curbing social bots may be an effective strategy for mitigating the spread of online misinformation.
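A hedged sketch of the kind of comparison behind "accounts that actively spread misinformation are significantly more likely to be bots": comparing bot-score distributions across two groups of accounts with a rank test. The scores below are made up; the study relied on an automated bot-detection classifier.

```python
# Compare bot scores of misinformation spreaders vs. other accounts with a
# one-sided Mann-Whitney U test. All scores here are hypothetical.
from scipy.stats import mannwhitneyu

misinfo_spreaders = [0.81, 0.74, 0.92, 0.66, 0.88]  # hypothetical bot scores
other_accounts = [0.12, 0.33, 0.25, 0.41, 0.19]
stat, p = mannwhitneyu(misinfo_spreaders, other_accounts, alternative="greater")
print(f"U = {stat}, p = {p:.4f}")
```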
ArXiv: Fake news propagate differently from real news even at early stages of spreading
Social media can be a double-edged sword for modern communications: either a convenient channel for exchanging ideas or an unexpected conduit circulating fake news through a large population. Existing studies of fake news focus either on theoretical modelling of propagation or on identification methods based on black-box machine learning, neglecting the possibility of identifying fake news using only the structural features of its propagation compared to those of real news, and in particular the ability to identify fake news at early stages of propagation. Here we track large databases of fake news and real news on both Twitter in Japan and its counterpart Weibo in China, and accumulate their complete traces of re-posting. Both media consistently reveal that fake news spreads distinctively, even at early stages, in a structure that resembles multiple broadcasters, while real news circulates from a dominant source. A novel predictability feature emerges from this difference in propagation networks, offering new paths for the early detection of fake news in social media. Instead of commonly used features such as texts or users, our finding demonstrates collective structural signals that could be useful for filtering out fake news at early stages of its propagation.
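As a hedged proxy for the structural contrast described here, the snippet below measures how dominant the root is in a cascade: most re-posts attach directly to the source in a "dominant source" pattern, while a "multiple broadcasters" pattern spreads through secondary hubs. This measure is an illustration, not the paper's feature set, and both example cascades are made up.

```python
# Root dominance: the fraction of re-posts attached directly to the source.
# High values suggest a single dominant source; low values suggest spread
# through multiple secondary broadcasters. Example cascades are hypothetical.
import networkx as nx

def root_dominance(edges, root):
    tree = nx.DiGraph(edges)
    return tree.out_degree(root) / tree.number_of_edges()

real_like = [(0, i) for i in range(1, 9)] + [(1, 9), (2, 10)]  # one hub
fake_like = [(0, 1), (0, 2), (1, 3), (1, 4), (2, 5), (3, 6), (4, 7), (5, 8)]
print(root_dominance(real_like, 0), root_dominance(fake_like, 0))
# the real-news-like cascade scores higher (0.8 vs 0.25)
```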