Main

In early 2020, the World Health Organization (WHO) declared a worldwide ‘infodemic’. An infodemic is characterized by an overabundance of information, particularly false and misleading information1. Although researchers have debated the effect of fake news on the outcomes of major societal events, such as political elections2,3, the spread of misinformation has much clearer potential to cause direct and notable harm to public health, especially during a pandemic. For example, research across different countries has shown that the endorsement of COVID-19 misinformation is robustly associated with people being less likely to follow public-health guidance4,5,6,7 and having reduced intentions to get vaccinated4,5 and to recommend the vaccine to others4. Experimental evidence has found that exposure to misinformation about vaccination resulted in about a 6-percentage-point decrease in the intention to get vaccinated among those who said that they would otherwise “definitely accept a vaccine”, undermining the potential for herd immunity8. Analyses of social-network data estimate that, without intervention, anti-vaccination content on social platforms such as Facebook will dominate discourse in the next decade9. Other research finds that exposure to misinformation about COVID-19 has been linked to the ingestion of harmful substances10 and an increased propensity to engage in violent behaviors11. Of course, misinformation was a threat to public health long before the pandemic. The debunked link between the MMR vaccine and autism was associated with a significant drop in vaccination coverage in the United Kingdom12; for many decades, Listerine manufacturers falsely claimed that their mouthwash cured the common cold13; misinformation about tobacco products has influenced attitudes toward smoking14; and, in 2014, Ebola clinics were attacked in Liberia because of the false belief that the virus was part of a government conspiracy15.

Given the unprecedented scale and pace at which misinformation can now travel online, research has increasingly relied on models from epidemiology to understand the spread of fake news16,17,18. In these models, the key focus is on the reproduction number (R0)—in other words, the number of individuals who will start posting fake news (that is, secondary cases) following contact with someone who is already posting misinformation (the infectious individual). It is therefore helpful to think of misinformation as a viral pathogen that can infect its host, spreading rapidly from one individual to another within a given network, without the need for physical contact. One benefit of this epidemiological approach lies in the fact that early detection systems could be designed to identify, for example, superspreaders, which would allow for the timely deployment of interventions to curb the spread of viral misinformation18.
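To make the analogy concrete, the expected number of new ‘posters’ per sharing generation can be approximated with a simple branching process. The sketch below is purely illustrative (the function name and all parameter values are assumptions, not taken from the cited models): an R0 above 1 compounds across generations, whereas an R0 below 1 dies out.

```python
# Illustrative branching-process view of R0 for misinformation sharing.
# Each 'infectious' poster is assumed to generate r0 new posters per generation.

def expected_new_posters(r0: float, generations: int, seed_posters: float = 1.0) -> list:
    """Expected number of new posters in each successive sharing generation."""
    counts = [seed_posters]
    for _ in range(generations):
        counts.append(counts[-1] * r0)  # each current poster 'infects' r0 others on average
    return counts

print(expected_new_posters(r0=1.5, generations=5))  # 1.0, 1.5, 2.25, ... grows each generation
print(expected_new_posters(r0=0.8, generations=5))  # 1.0, 0.8, 0.64, ... shrinks toward zero
```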

This Review will provide readers with a conceptual overview of recent literature on misinformation, along with a research agenda (Box 1) that covers three major theoretical dimensions aligned with the viral analogy: susceptibility, spread, and immunization. What makes individuals susceptible to misinformation in the first place? Why and how does it spread? And what can we do to boost public immunity?

Before reviewing the extant literature to help answer these questions, it is worth briefly discussing what the term ‘misinformation’ means, because inconsistent definitions affect not only the conceptualization of research designs but also the nature and validity of key outcome measures19. Indeed, misinformation has been referred to as an ‘umbrella category of symptoms’20 not only because definitions vary, but also because the behavioral consequences for public health might differ depending on the type of misinformation. The term ‘fake news’ in particular is often regarded as problematic because it insufficiently describes the full spectrum of misinformation21 and has become a politicized rhetorical device in itself22. Box 2 provides a more detailed discussion of the problems associated with different scholarly definitions of misinformation23, but for the purpose of this Review, I will simply define misinformation in its broadest possible sense: ‘false or misleading information masquerading as legitimate news,’ regardless of intent24. Although disinformation is often differentiated from misinformation insofar as it involves a clear intention to deceive or harm other people, intent can be difficult to establish, so in this Review my treatment of misinformation will cover both intentional and unintentional forms of misinformation.

Susceptibility

Although people use many cognitive heuristics to make judgments about the veracity of a claim (for example, perceived source credibility)25, one particularly prominent finding that helps explain why people are susceptible to misinformation is known as the ‘illusory truth’ effect: repeated claims are more likely to be judged as true than non-repeated (or novel) claims26. Given that many falsehoods are often repeated by the popular media, politicians, and social-media influencers, the relevance of illusory truth has increased substantially. For example, the conspiracy theory that the coronavirus was bio-engineered in a military laboratory in Wuhan, China, and the false claim that “COVID-19 is no worse than the flu” have been repeated many times in the media27. The primary cognitive mechanism responsible for the fact that people are more likely to think that repeated claims are true is known as processing fluency: the more a claim is repeated, the more familiar it becomes and the easier it is to process28. In other words, the brain uses fluency as a signal for truth. Importantly, research shows that (1) prior exposure to fake news increases its perceived accuracy29; (2) illusory truth can occur for both plausible and implausible claims30; (3) prior knowledge does not necessarily protect people against illusory truth31; and (4) illusory truth does not appear to be moderated by thinking styles such as analytical versus intuitive reasoning32.

Although illusory truth can affect everyone, research has noted that some people are still more susceptible to misinformation than others. For example, a common finding is that older individuals are more susceptible to fake news33,34, potentially owing to factors such as cognitive decline and greater digital illiteracy35, although there are exceptions: in the context of COVID-19, older individuals appear less likely to endorse misinformation4. Those with a more extreme and right-wing political orientation have also consistently been shown to be more susceptible to misinformation3,4,33,36,37, even when the misinformation in question is non-political38,39. Yet the link between ideology and misinformation susceptibility is not always consistent across different cultures4,37. Other factors, such as greater numeracy skills4 and cognitive and analytic thinking styles36,40,41, have consistently been found to correlate negatively with misinformation susceptibility, although other scholars have identified partisanship as a potential moderating factor42,43,44. In fact, these individual differences have given rise to two competing overarching theoretical explanations45,46 for why people are susceptible to misinformation. The first theory is often referred to as the classical ‘inattention’ account; the second is often dubbed the ‘identity-protective’ or ‘motivated cognition’ account. I will discuss emerging evidence for both theories in turn.

The inattention account

The inattention or ‘classical reasoning’ account argues that people are committed to sharing accurate content, but the context of social media simply distracts them from making news-sharing decisions based on a preference for accuracy45. For example, people are often bombarded with news content online, much of which is emotionally charged and political; this, coupled with the limited time and resources people have to think about the veracity of a piece of news, might significantly interfere with their ability to accurately reflect on such content. The inattention account is based on a ‘classical’ reasoning perspective insofar as it draws on dual-process theories of human cognition, which suggest that people rely on two qualitatively different processes of reasoning47. These processes are often referred to as System 1, which is predominantly automatic, associative, and intuitive, and System 2, which is more reflective, analytical, and deliberate. A canonical example is the Cognitive Reflection Test (CRT), a series of puzzles in which the intuitive or first answer that comes to mind is often wrong, so a correct answer requires people to pause and reflect more carefully. The basic point is that activating more analytical System 2-type reasoning can override erroneous System 1-type intuitions. Evidence for the inattention account comes from the fact that people who score higher on the CRT36,41, who deliberate more48, who have greater numeracy skills4, and who have higher knowledge and education37,49 are consistently better able to discern between true and false news, regardless of whether the content is politically congruent36. In addition, experimental interventions that ‘prime’ people to think more analytically or consider the accuracy of news content50,51 have been shown to improve the quality of people’s news-sharing decisions and decrease acceptance of conspiracy theories52.

The motivated reasoning account

In stark contrast to the inattention account stands the theory of (politically) motivated reasoning53,54,55, which posits that information deficits or lack of reflective reasoning are not the primary drivers of susceptibility to misinformation. Motivated reasoning occurs when someone starts their reasoning process with a pre-determined goal (for example, someone might want to believe that vaccines are unsafe because that belief is shared by their family members), so they interpret new (mis)information in service of reaching that goal53. The motivated account therefore argues that the commitments people have to their affinity groups are what lead them to selectively endorse media content that reinforces deeply held political, religious, or social identities56,57. There are several variants of the politically motivated reasoning account, but the basic premise is that people pay attention not just to the accuracy of a piece of news content but also to the goals that such information may serve. For example, a fake news story could be viewed as much more plausible when it happens to offer positive information about someone’s political group, or equally when it offers negative information about a political opponent42,57,58. A more extreme and scientifically contentious version of this model, also known as the ‘motivated numeracy’59 account, suggests that more reflective and analytical System 2 reasoning abilities do not help people make more accurate assessments but are in fact frequently hijacked in service of identity-based reasoning. Evidence for this claim comes from the fact that partisans with the highest numeracy and education levels tend to be the most polarized on contested scientific issues, such as climate change60 or stem-cell research61. Experimental work has also shown that when people were asked to make causal inferences about a data problem, such as the benefits of a new skin-rash treatment, those with greater numeracy skills performed better when the problem was non-political. By contrast, people became more polarized and less accurate when the same data were presented as results from a new study on gun control59. These patterns were more pronounced among those with higher numeracy skills. Other research has found that politically conservative individuals are much more likely to (mistakenly) judge misinformation as true when the information is presented as coming from a conservative source than when that same information is presented as coming from a liberal source, and vice versa for politically liberal individuals, highlighting the key role of politics in truth discernment62.

Susceptibility: limitations and future research

It is worth mentioning that both accounts face significant critiques and limitations. For example, independent replications of interventions designed to nudge accuracy have revealed mixed findings63, and questions have been raised about the conceptualization of partisan bias in these studies43, including the possibility that the intervention effects are moderated by people’s political identities44. In turn, the motivated numeracy account has faced several failed and mixed replications64,65,66. For example, one large nationally representative study in the United States showed that, although polarization on global warming was indeed greatest among the most highly educated partisans at baseline, this effect was neutralized and even reversed by an experimental intervention that induced accuracy motivations by highlighting the scientific consensus on global warming66. These findings have pointed to a much larger confound in the motivated-reasoning literature: partisan bias could simply be due to selective exposure rather than motivated reasoning66,67,68, because the role of politics is confounded with people’s prior beliefs66. Although people are polarized on many issues, this does not mean that they are unwilling to update their (misinformed) beliefs in line with the evidence. Moreover, people might refuse to update their beliefs not because of a motivation to reject the information (because it is incongruent with their political worldview) but simply because they find the information not credible, either because they discount the source or because they question the veracity of the content itself, for reasons that appear legitimate to them. This ‘equivalence paradox’69 makes it difficult to causally disentangle accuracy-based from motivation-based preferences.

Future research should therefore not only carefully manipulate people’s motivations in the processing of (mis)information that is politically (dis)concordant, but also offer a more integrated theoretical account of susceptibility to misinformation. For example, it is likely that for political fake news, identity-motivations are going to be more salient; however, for misinformation that tackles non-politicized issues (such as falsehoods about cures for the common cold), knowledge deficits, inattention, or confusion might be more likely to play a role. Of course, it is possible for public-health issues—such as COVID-19—to become politicized relatively quickly, in which case the prominence of motivational goals in driving susceptibility to misinformation might increase. Accuracy and motivational goals are also frequently in conflict. For example, people might understand that a news story is unlikely to be true, but if the misinformation promotes the goals of their social group, they might be more inclined to forgo their desire for accuracy in favor of a motivation to conform with the norms of their community56,57. In other words, in any given context, the importance people assign to accuracy versus social goals is going to determine how and when they are going to update their beliefs in light of misinformation. There is much to be gained by advancing more contextual theories that focus on the interplay between accuracy and socio-political goals in explaining why people are susceptible to misinformation.

Spread

Measuring the infodemic

To return to the viral analogy, researchers have adopted models from epidemiology, such as the susceptible–infected–recovered (SIR) model, to measure and quantify the spread of misinformation in online social networks17,70. In this context, R0 often represents the number of individuals who will start posting fake news following contact with someone who is already ‘infected’. When R0 exceeds 1, there is potential for exponential, infodemic-like growth; when R0 is lower than 1, the spread will eventually fizzle out. Analyses of social-media platforms have shown that all have the potential to drive infodemic-like spread, but some are more capable than others17. For example, research on Twitter has found that false news is about 70% more likely to be shared than true news, and it takes true news 6 times longer than false stories to reach 1,500 people71. Although fake news can thus spread faster and deeper than true news, it is important to emphasize that these findings are based on a relatively narrow definition of fact-checked news (see Box 2 and ref. 70), and more recent research has pointed out that these estimates are likely platform-dependent72. Importantly, several studies have now shown that fake news typically represents a small part of people’s overall media diet and that the spread of misinformation on social media is highly skewed, such that a small number of accounts, known as ‘supersharers’ and ‘superconsumers’, are responsible for the majority of the content that is shared and consumed3,24,73. Although much of this work has come from the political domain, very similar findings have emerged in the context of the COVID-19 pandemic, during which ‘superspreaders’ on Twitter and Facebook exerted a majority of the influence on those platforms74. A major issue is the existence of echo chambers, in which the flow of information is often systematically biased toward like-minded others72,75,76. Although the prevalence of echo chambers is debated77, the existence of such polarized clusters has been shown to aid the virality of misinformation75,78,79 and impede the spread of corrections76.
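For readers less familiar with the SIR formalism, the minimal discrete-time sketch below illustrates the threshold behavior described above. It is a generic textbook-style SIR model rather than any of the cited analyses, and the transmission (beta) and recovery (gamma) rates are arbitrary assumptions; in the basic SIR model, R0 corresponds to beta/gamma.

```python
# Minimal discrete-time SIR sketch of infodemic-style spread (illustrative only).
# S: susceptible fraction, I: currently sharing ('infected'), R: stopped sharing ('recovered').

def simulate_sir(beta: float, gamma: float, steps: int = 200, i0: float = 0.01) -> float:
    s, i, r = 1.0 - i0, i0, 0.0
    peak = i
    for _ in range(steps):
        new_shares = beta * s * i   # new sharers from contact between sharers and susceptibles
        stopped = gamma * i         # sharers who stop posting at this step
        s, i, r = s - new_shares, i + new_shares - stopped, r + stopped
        peak = max(peak, i)
    return peak                     # peak fraction of the network actively sharing

print(simulate_sir(beta=0.4, gamma=0.2))  # R0 = 2.0: the sharing wave grows before burning out
print(simulate_sir(beta=0.1, gamma=0.2))  # R0 = 0.5: the spread fizzles out from the start
```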

Exposure does not equal infection

Importantly, exposure estimates based on social-media data often do not seem to line up with people’s self-reported experiences. Different polls show that over a third of people self-report frequent, if not daily, exposure to misinformation80. Of course, the validity of people’s self-reported experiences can be variable, but the discrepancy raises questions about the accuracy of exposure estimates, which are often based on limited public data and can be sensitive to model assumptions. Moreover, a crucial factor to consider here is that exposure does not equal persuasion (or ‘infection’). For example, research in the context of COVID-19 headlines shows that people’s judgments of headline veracity had little impact on their sharing intentions45. People may thus choose to share misinformation for reasons other than accuracy. For example, one recent study81 found that people often share content that appears ‘interesting if true’: although people rate fake news as less accurate than real news, they also rate it as ‘more interesting if true’ and are thus willing to share it.

Spread: limitations and future research

More generally, the body of research on ‘spreading’ faces significant limitations, including critical gaps in knowledge. There is skepticism about the rate at which people exposed to misinformation actually come to believe it, because research on media and persuasion effects has shown that it is difficult to persuade people using traditional advertisements82. But existing research has often used contrived laboratory designs that may not sufficiently represent the environment in which people make news-sharing decisions. For example, studies often test one-off exposures to a single message rather than persuasion as a function of repeated exposure to misinformation from diverse social and traditional media sources. Accordingly, we need a better understanding of the frequency and intensity with which exposure to misinformation ultimately leads to persuasion. Most studies also rely on publicly available data that people have shared or clicked on, but people may be exposed to, and influenced by, much more information while scrolling through their social-media feeds45. Moreover, fake news is often conceptualized as a list of URLs that have been fact-checked as true or false, but this type of fake news represents only a small segment of misinformation; people may be much more likely to encounter content that is misleading or manipulative without being overtly false (see Box 2). Finally, micro-targeting efforts have significantly enhanced the ability of misinformation producers to identify and target subpopulations of individuals who are most susceptible to persuasion83. In short, more research is needed before precise and valid conclusions can be drawn about either population-level exposure or the probability that exposure to misinformation leads to infection (that is, persuasion).

Immunization

A rapidly emerging body of research has started to evaluate the possibility of ‘immunizing’ the public against misinformation at a cognitive level. I will categorize these efforts by whether their application is primarily prophylactic (preventative) or therapeutic (post-exposure), also known as ‘prebunking’ and ‘debunking,’ respectively.

Therapeutic treatments: fact-checking and debunking

The traditional approach to countering misinformation involves correcting a myth or falsehood after people have already been exposed to or persuaded by it. For example, debunking misinformation about autism interventions has been shown to be effective in reducing support for non-empirically supported treatments, such as dieting84. Exposure to court-ordered corrective advertisements from the tobacco industry on the link between smoking and disease can increase knowledge and reduce misperceptions about smoking85. In one randomized controlled trial, a video debunking several myths about vaccination effectively reduced influential misperceptions, such as the false belief that vaccines cause autism or that they reduce the strength of the natural immune system86. Meta-analyses have consistently found that fact-checking and debunking interventions can be effective87,88, including in the context of countering health misinformation on social media89. However, not all medical misperceptions are equally amenable to correction90. In fact, these same analyses note that the effectiveness of interventions is significantly attenuated by (1) the quality of the debunk, (2) the passing of time, and (3) prior beliefs and ideologies. For example, the aforementioned studies on autism84 and corrective smoking advertisements85 showed no remaining effect after 1-week and 6-week follow-ups, respectively. When designing corrections, simply labeling information as false or incorrect is generally not sufficient, because correcting a myth by means of a simple retraction leaves a gap in people’s understanding of why the information is false and what is true instead. Accordingly, the recommendation for practitioners is often to craft much more detailed debunking materials88. Reviews of the literature91,92 have indicated that best practice in designing debunking messages involves (1) leading with the truth, (2) appealing to scientific consensus and authoritative expert sources, (3) ensuring that the correction is easily accessible and not more complex than the initial misinformation, (4) clearly explaining why the misinformation is wrong, and (5) providing a coherent alternative causal explanation (Fig. 1). Although there is generally a lack of comparative research, some recent studies have shown that optimizing debunking messages according to these guidelines enhances their efficacy when compared with alternative or business-as-usual debunking methods84.

Fig. 1: Best-practice recommendations for effectively debunking misinformation91,92.

An effective debunking message should open with the facts and present them in a simple and memorable fashion. The audience should then be warned about the myth (do not repeat the myth more than once). The manipulation technique used to mislead people should subsequently be identified and exposed. End by repeating the facts and emphasizing the correct explanation.

Debunking: limitations and future research

Despite these advances, significant concerns have been raised about the application of such post hoc ‘therapeutic’ corrections, most notably the risk of a correction backfiring, such that people end up believing the myth more strongly as a result of the correction. This backfire effect can occur along two potential dimensions92,93: one concerns psychological reactance against the correction itself (the ‘worldview’ backfire effect), whereas the other concerns the repetition of false information (the ‘familiarity’ backfire effect). Although early research suggested that, for example, corrections of myths surrounding the flu and MMR vaccines can cause already concerned individuals to become even more hesitant about vaccination decisions94,95, more recent studies have failed to find evidence for such worldview backfire effects93,96. In fact, while evidence of backfire remains widely cited, recent replications have failed to reproduce such effects when correcting misinformation about vaccinations specifically97. Thus, although the effect likely exists, it occurs less frequently and less intensely than previously thought. Worldview backfire concerns can also be minimized by designing debunking messages in a way that coheres rather than conflicts with the recipients’ worldviews92. Nonetheless, because debunking forces a rhetorical frame in which the misinformation needs to be repeated in order to correct it (that is, rebutting someone else’s claim), there is a risk that such repetition enhances familiarity with the myth while people subsequently fail to encode the correction in long-term memory. Although research clearly shows that people are more likely to believe repeated (mis)information than non-repeated (mis)information26, recent work has found that the risk of ironically strengthening a myth as part of a debunking effort is relatively low93, especially when the debunking message features the correction prominently relative to the misinformation. The consensus is therefore that, although practitioners should be aware of these backfire concerns, they should not prevent the issuing of corrections, given the infrequent nature of these side effects91,93.

Having said this, there are two other notable problems with therapeutic approaches that limit their efficacy. The first is that retrospective corrections do not reach the same number of people as the original misinformation. For example, estimates reveal that only about 40% of smokers were exposed to the tobacco industry’s court-ordered corrections98. A related concern is that, after being exposed, people continue to make inferences on the basis of falsehoods, even when they acknowledge a correction. This phenomenon is known as the ‘continued influence of misinformation’92, and meta-analyses have found robust evidence of continued influence effects in a wide range of contexts88,99.

Prophylactic treatments: inoculation theory

Accordingly, researchers have recently begun to explore prophylactic or pre-emptive approaches to countering misinformation, that is, intervening before an individual has been exposed to misinformation or has reached ‘infectious’ status. Although prebunking is a more general term for interventions that pre-emptively remind people to ‘think before they post’51, such reminders in and of themselves do not equip people with any new skills to identify and resist misinformation. The most common framework for preventing unwanted persuasion is psychological inoculation theory100,101 (Fig. 2). Inoculation theory follows the biomedical analogy and posits that, just as vaccines trigger the production of antibodies to help confer immunity against future infection, the same can be achieved with information. By pre-emptively forewarning and exposing people to severely weakened doses of misinformation (coupled with strong refutations), people can cultivate cognitive resistance against future misinformation102. Inoculation operates via two mechanisms: (1) motivational threat (a desire to defend oneself from manipulation attacks) and (2) refutational pre-emption or prebunking (pre-exposure to a weakened example of the attack). For example, research has found that inoculating people against conspiratorial arguments about vaccination before (but not after) exposure to a conspiracy theory effectively raised vaccination intentions103. Several recent reviews102,104 and meta-analyses105 have pointed to the efficacy of psychological inoculation as a robust strategy for conferring immunity to persuasion by misinformation, including many applications in the health domain106, such as inoculating people against misinformation about the use of mammography in breast-cancer screening107.

Fig. 2: The process of psychological inoculation against misinformation.

Psychological inoculation consists of two core components: (1) forewarning people that they may be misled by misinformation (to activate the psychological ‘immune system’), and (2) prebunking the misinformation (tactic) by exposing people to a severely weakened dose of it coupled with strong counters and refutations (to generate the cognitive ‘antibodies’). Once people have gained ‘immunity’ they can then vicariously spread the inoculation to others via offline and online interactions.

Several recent advances, in particular, are worth noting. The first is that the field has moved from ‘narrow-spectrum’ or ‘fact-based’ inoculation to ‘broad-spectrum’ or ‘technique-based’ immunization102,108. The reasoning behind this shift is that, although it is possible to synthesize a severely weakened dose from existing misinformation (and to subsequently refute that weakened example with strong counterarguments), it is difficult to scale the vaccine if this process has to be repeated anew for every piece of misinformation. Instead, scholars have started to identify the common building blocks of misinformation more generally38,109, including techniques such as impersonating fake experts and doctors, manipulating people’s emotions with fear appeals, and the use of conspiracy theories. Research has found that people can be inoculated against these underlying strategies and, as a result, become relatively more immune to a whole range of misinformation that makes use of these tactics38,102. This process is sometimes referred to as cross-protection insofar as inoculating people against one strain offers protection against related and different strains of the same misinformation tactic.

A second advance concerns the application of active versus passive inoculation. Whereas the traditional inoculation process is passive insofar as people pre-emptively receive the specific refutations from the experimenter, active inoculation encourages people to generate their own ‘antibodies’. Perhaps the best-known examples of active inoculation are popular gamified interventions such as Bad News38 and GoViral!110, in which players step into the shoes of a misinformation producer and are exposed, in a simulated social-media environment, to weakened doses of common strategies used to spread misinformation. As part of this process, players actively generate their own media content and unveil the techniques of manipulation. Research has found that resistance to deception occurs when people (1) recognize their own vulnerability to being persuaded and (2) perceive undue intent to manipulate their opinion111,112. These games therefore aim to expose people’s vulnerability, motivating an individual’s desire to protect themselves against misinformation through pre-exposure to weakened doses. Randomized controlled trials have found that active inoculation games help people identify misinformation38,110,113,114, boost confidence in people’s truth-discernment abilities110,113, and reduce self-reported sharing of misinformation110,115. Yet, as with many biological vaccines, research has found that psychological immunity wanes over time but can be maintained for several months with regular ‘booster’ shots that re-engage people with the inoculation process114. A benefit of this line of research is that these gamified interventions have been evaluated and scaled across millions of people as part of the WHO’s ‘Stop The Spread’ campaign and the United Nations’ ‘Verified’ initiative in collaboration with the UK government110,116.

Prebunking: limitations and future research

A potential limitation is that, although misinformation tropes are often repeated throughout history (consider the similarities between the myth that the cowpox vaccine would turn people into human–cow hybrids and the conspiracy theory that COVID-19 vaccines alter human DNA), inoculation does require at least some advance knowledge of which misinformation (tactic) people might be exposed to in the future91. In addition, as healthcare workers are being trained to combat misinformation117, it is important to avoid confusion in terminology when using psychological inoculation to counter vaccine hesitancy. For example, the approach can be implemented without making explicit reference to the vaccination analogy, focusing instead on the value of ‘prebunking’ and helping people unveil the techniques of manipulation.

Several other important open questions remain. For example, analogous to recent advances in experimental medicine on therapeutic vaccines (which can still boost immune responses after infection), research has found that inoculation can still protect people from misinformation even when they have already been exposed to it108,112,118. This makes conceptual sense insofar as it may take repeated exposure or a significant amount of time for misinformation to fully persuade people or become integrated with prior attitudes. Yet it remains conceptually unclear at which point therapeutic inoculation transitions into traditional debunking. Moreover, although some comparisons of active versus passive inoculation approaches exist105,110, the evidence base for active forms of inoculation remains relatively small. Similarly, although head-to-head studies comparing prebunking with debunking suggest that prevention is indeed better than cure103, more comparative research is needed. Research also finds that people can vicariously pass on the inoculation interpersonally or on social media, a process known as ‘post-inoculation talk’104, which alludes to the possibility of herd immunity in online communities110; however, no social-network simulations currently exist that evaluate the potential of inoculative approaches. Current studies are also based on self-reported sharing of misinformation. Future research will need to evaluate the extent to which inoculation can scale across the population and influence objective news-sharing behavior on social media.
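As a purely hypothetical illustration of the kind of social-network simulation called for above (none yet exist in the literature), the sketch below shows how one might begin to probe ‘herd immunity’ from inoculation in a toy network: misinformation spreads through random contacts, and an inoculated fraction of users shares it with sharply reduced probability. Every parameter, the random-contact structure, and the assumed inoculation ‘efficacy’ are invented for the sake of the example.

```python
import random

# Toy thought experiment: spread of one piece of misinformation through random contacts,
# where an 'inoculated' fraction of users shares it with sharply reduced probability.
# All numbers below are illustrative assumptions, not empirical estimates.

def simulate_spread(n=10_000, contacts_per_sharer=10, p_share=0.2,
                    inoculated_fraction=0.0, inoculation_efficacy=0.9,
                    n_seeds=5, seed=42):
    rng = random.Random(seed)
    inoculated = set(rng.sample(range(n), int(n * inoculated_fraction)))
    shared = set(range(n_seeds))          # a handful of accounts seed the misinformation
    frontier = list(shared)
    while frontier:
        next_frontier = []
        for _sharer in frontier:
            for _ in range(contacts_per_sharer):          # random users who see the post
                contact = rng.randrange(n)
                if contact in shared:
                    continue
                p = p_share * (1 - inoculation_efficacy) if contact in inoculated else p_share
                if rng.random() < p:
                    shared.add(contact)
                    next_frontier.append(contact)
        frontier = next_frontier
    return len(shared) / n                                # fraction of the network that reshared

print(simulate_spread(inoculated_fraction=0.0))   # no inoculation: a large cascade typically unfolds
print(simulate_spread(inoculated_fraction=0.75))  # 75% inoculated: the cascade typically stays small
```

In this toy setup, raising the inoculated fraction pushes the effective reproduction number of the rumour below 1, which is the intuition behind ‘herd immunity’ in online communities; an actual evaluation would require realistic network structure and empirically grounded sharing probabilities.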

Conclusion

The spread of misinformation has undermined public-health efforts, from vaccination uptake to public compliance with health-protective behaviors. Research finds that although people are sometimes duped by misinformation because they are distracted on social media and are not paying sufficient attention to accuracy cues, the politicized nature of many public-health challenges suggests that people also believe in and share misinformation because doing so reinforces important socio-political beliefs and identity structures. A more integrated framework is needed that is sensitive to context and can account for varying susceptibility to misinformation on the basis of how people prioritize accuracy and social motives when forming judgments about the veracity of news media. Although ‘exposure’ does not equal ‘infection,’ misinformation can spread fast in online networks, and its virality is often aided by the existence of political echo chambers. Importantly, however, the bulk of misinformation on social media often originates from influential accounts and superspreaders. Therapeutic and prophylactic approaches to countering misinformation have both demonstrated some success, but given the continued influence of misinformation after exposure, there is much value in preventative approaches, and more research is needed on how to best combine debunking and prebunking efforts. Further research is also encouraged to outline the benefits and potential challenges to applying the epidemiological model to understand the psychology behind the spread of misinformation. A major challenge for the field moving forward will be clearly defining how misinformation is measured and conceptualized, as well as the need for standardized psychometric instruments that allow for better comparisons of outcomes across studies.