The Future of Truth and Misinformation Online

Experts are evenly split on whether the coming decade will see a reduction in false and misleading narratives online. Those forecasting improvement place their hopes in technological fixes and in societal solutions. Others think the dark side of human nature is aided more than stifled by technology.

Table of contents

  • About this canvassing of experts
  • Theme 1: The information environment will not improve: The problem is human nature
  • Theme 2: The information environment will not improve because technology will create new challenges that can’t or won’t be countered effectively and at scale
  • Theme 3: The information environment will improve because technology will help label, filter or ban misinformation and thus upgrade the public’s ability to judge the quality and veracity of content
  • Theme 4: The information environment will improve, because people will adjust and make things better
  • Theme 5: Tech can’t win the battle. The public must fund and support the production of objective, accurate information. It must also elevate information literacy to be a primary goal of education
  • Acknowledgments


In late 2016, Oxford Dictionaries selected “post-truth” as the word of the year, defining it as “relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief.”

The 2016 Brexit vote in the United Kingdom and the tumultuous U.S. presidential election highlighted how the digital age has affected news and cultural narratives. New information platforms feed the ancient instinct people have to find information that syncs with their perspectives: A 2016 study that analyzed 376 million Facebook users’ interactions with over 900 news outlets found that people tend to seek information that aligns with their views.

This makes many people vulnerable to accepting and acting on misinformation. For instance, after fake news stories in June 2017 falsely reported that Ethereum founder Vitalik Buterin had died in a car crash, the cryptocurrency’s market value was reported to have dropped by $4 billion.

Misinformation is not like a plumbing problem you fix. It is a social condition, like crime, that you must constantly monitor and adjust to. Tom Rosenstiel

When BBC Future Now interviewed a panel of 50 experts in early 2017 about the “grand challenges we face in the 21st century,” many named the breakdown of trusted information sources. “The major new challenge in reporting news is the new shape of truth,” said Kevin Kelly, co-founder of Wired magazine. “Truth is no longer dictated by authorities, but is networked by peers. For every fact there is a counterfact, and all these counterfacts and facts look identical online, which is confusing to most people.”

Americans worry about that: A Pew Research Center study conducted just after the 2016 election found 64% of adults believe fake news stories cause a great deal of confusion and 23% said they had shared fabricated political stories themselves – sometimes by mistake and sometimes intentionally.

The question arises, then: What will happen to the online information environment in the coming decade? In summer 2017, Pew Research Center and Elon University’s Imagining the Internet Center conducted a large canvassing of technologists, scholars, practitioners, strategic thinkers and others, asking them to react to this framing of the issue:

The rise of “fake news” and the proliferation of doctored narratives that are spread by humans and bots online are challenging publishers and platforms. Those trying to stop the spread of false information are working to design technical and human systems that can weed it out and minimize the ways in which bots and other schemes spread lies and misinformation.

The question:  In the next 10 years, will trusted methods emerge to block false narratives and allow the most accurate information to prevail in the overall information ecosystem? Or will the quality and veracity of information online deteriorate due to the spread of unreliable, sometimes even dangerous, socially destabilizing ideas?

Respondents were then asked to choose one of the following answer options:

The information environment will improve – In the next 10 years, on balance, the information environment will be IMPROVED by changes that reduce the spread of lies and other misinformation online.

The information environment will NOT improve – In the next 10 years, on balance, the information environment will NOT BE improved by changes designed to reduce the spread of lies and other misinformation online.

Some 1,116 responded to this nonscientific canvassing: 51% chose the option that the information environment will not improve, and 49% said the information environment will improve. (See “About this canvassing of experts” for details about this sample.) Participants were next asked to explain their answers. This report concentrates on these follow-up responses.

Their reasoning revealed a wide range of opinions about the nature of these threats and the most likely solutions required to resolve them. But the overarching and competing themes were clear: Those who do not think things will improve felt that humans mostly shape technology advances to their own, not-fully-noble purposes and that bad actors with bad motives will thwart the best efforts of technology innovators to remedy today’s problems.

And those who are most hopeful believed that technological fixes can be implemented to bring out the better angels guiding human nature.

More specifically, the 51% of these experts who expect things will not improve generally cited two reasons:

The fake news ecosystem preys on some of our deepest human instincts: Respondents said humans’ primal quest for success and power – their “survival” instinct – will continue to degrade the online information environment in the next decade. They predicted that manipulative actors will use new digital tools to take advantage of humans’ inbred preference for comfort and convenience and their craving for the answers they find in reinforcing echo chambers.

Our brains are not wired to contend with the pace of technological change: These respondents said the rising speed, reach and efficiencies of the internet and emerging online applications will magnify these human tendencies and that technology-based solutions will not be able to overcome them. They predicted a future information landscape in which fake information crowds out reliable information. Some even foresaw a world in which widespread information scams and mass manipulation cause broad swathes of the public to simply give up on being informed participants in civic life.

The 49% of these experts who expect things to improve generally inverted that reasoning:

Technology can help fix these problems: These more hopeful experts said the rising speed, reach and efficiencies of the internet, apps and platforms can be harnessed to rein in fake news and misinformation campaigns. Some predicted better methods will arise to create and promote trusted, fact-based news sources.

It is also human nature to come together and fix problems: The hopeful experts in this canvassing took the view that people have always adapted to change and that this current wave of challenges will also be overcome. They noted that misinformation and bad actors have always existed but have eventually been marginalized by smart people and processes. They expect well-meaning actors will work together to find ways to enhance the information environment. They also believe better information literacy among citizens will enable people to judge the veracity of the content they encounter and eventually raise the tone of discourse.

The majority of participants in this canvassing wrote detailed elaborations on their views. Some chose to have their names connected to their answers; others opted to respond anonymously. These findings do not represent all possible points of view, but they do reveal a wide range of striking observations.

Respondents collectively articulated several major themes tied to those insights, which are explained in the sections below. Several longer additional sets of responses tied to these themes follow that summary.

The following section presents an overview of the themes found among the written responses, including a small selection of representative quotes supporting each point. Some comments are lightly edited for style or length.


Theme 1: The information environment will not improve: The problem is human nature

Most respondents who expect the environment to worsen said human nature is at fault. For instance, Christian H. Huitema , former president of the Internet Architecture Board, commented, “The quality of information will not improve in the coming years, because technology can’t improve human nature all that much.”

These experts predicted that the problem of misinformation will be amplified because the worst side of human nature is magnified by bad actors using advanced online tools at internet speed on a vast scale.

The quality of information will not improve in the coming years, because technology can’t improve human nature all that much. Christian H. Huitema

Tom Rosenstiel , author, director of the American Press Institute and senior fellow at the Brookings Institution, commented, “Whatever changes platform companies make, and whatever innovations fact checkers and other journalists put in place, those who want to deceive will adapt to them. Misinformation is not like a plumbing problem you fix. It is a social condition, like crime, that you must constantly monitor and adjust to. Since as far back as the era of radio and before, as Winston Churchill said, ‘A lie can go around the world before the truth gets its pants on.’”

Michael J. Oghia , an author, editor and journalist based in Europe, said he expects a worsening of the information environment due to five things: “1) The spread of misinformation and hate; 2) Inflammation, sociocultural conflict and violence; 3) The breakdown of socially accepted/agreed-upon knowledge and what constitutes ‘fact.’ 4) A new digital divide of those subscribed (and ultimately controlled) by misinformation and those who are ‘enlightened’ by information based on reason, logic, scientific inquiry and critical thinking. 5) Further divides between communities, so that as we are more connected we are farther apart. And many others.”

Leah Lievrouw , professor in the department of information studies at the University of California, Los Angeles, observed, “So many players and interests see online information as a uniquely powerful shaper of individual action and public opinion in ways that serve their economic or political interests (marketing, politics, education, scientific controversies, community identity and solidarity, behavioral ‘nudging,’ etc.). These very diverse players would likely oppose (or try to subvert) technological or policy interventions or other attempts to insure the quality, and especially the disinterestedness, of information.”

Subtheme: More people = more problems. The internet’s continuous growth and accelerating innovation allow more people and artificial intelligence (AI) to create and instantly spread manipulative narratives

While propaganda and the manipulation of the public via falsehoods are tactics as old as the human race, many of these experts predicted that the speed, reach and low cost of online communication, plus continuously emerging innovations, will magnify the threat level significantly. A professor at a Washington, D.C.-area university said, “It is nearly impossible to implement solutions at scale – the attack surface is too large to be defended successfully.”

Jerry Michalski, futurist and founder of REX, replied, “The trustworthiness of our information environment will decrease over the next decade because: 1) It is inexpensive and easy for bad actors to act badly; 2) Potential technical solutions based on strong ID and public voting (for example) won’t quite solve the problem; and 3) real solutions based on actual trusted relationships will take time to evolve – likely more than a decade.”

It is nearly impossible to implement solutions at scale – the attack surface is too large to be defended successfully. Anonymous professor

An institute director and university professor said, “The internet is the 21st century’s threat of a ‘nuclear winter,’ and there’s no equivalent international framework for nonproliferation or disarmament. The public can grasp the destructive power of nuclear weapons in a way they will never understand the utterly corrosive power of the internet to civilized society, when there is no reliable mechanism for sorting out what people can believe to be true or false.”

Bob Frankston , internet pioneer and software innovator, said, “I always thought that ‘Mein Kampf’ could be countered with enough information. Now I feel that people will tend to look for confirmation of their biases and the radical transparency will not shine a cleansing light.”

David Harries , associate executive director for Foresight Canada, replied, “More and more, history is being written, rewritten and corrected, because more and more people have the ways and means to do so. Therefore there is ever more information that competes for attention, for credibility and for influence. The competition will complicate and intensify the search for veracity. Of course, many are less interested in veracity than in winning the competition.”

Glenn Edens , CTO for technology reserve at PARC, a Xerox company, commented, “Misinformation is a two-way street. Producers have an easy publishing platform to reach wide audiences and those audiences are flocking to the sources. The audiences typically are looking for information that fits their belief systems, so it is a really tough problem.”

Subtheme: Humans are by nature selfish, tribal, gullible convenience seekers who put the most trust in that which seems familiar

The respondents who supported this view noted that people’s actions – from consciously malevolent and power-seeking behaviors to seemingly more benign acts undertaken for comfort or convenience – will work to undermine a healthy information environment.

People on systems like Facebook are increasingly forming into ‘echo chambers’ of those who think alike. They will keep unfriending those who don’t, and passing on rumors and fake news that agrees with their point of view. Starr Roxanne Hiltz

An executive consultant based in North America wrote, “It comes down to motivation: There is no market for the truth. The public isn’t motivated to seek out verified, vetted information. They are happy hearing what confirms their views. And people can gain more creating fake information (both monetary and in notoriety) than they can keeping it from occurring.”

Serge Marelli , an IT professional who works on and with the Net, wrote, “As a group, humans are ‘stupid.’ It is ‘group mind’ or a ‘group phenomenon’ or, as George Carlin said, ‘Never underestimate the power of stupid people in large groups.’ Then, you have Kierkegaard, who said, ‘People demand freedom of speech as a compensation for the freedom of thought which they seldom use.’ And finally, Euripides said, ‘Talk sense to a fool and he calls you foolish.’”

Starr Roxanne Hiltz, distinguished professor of information systems and co-author of the visionary 1970s book “The Network Nation,” replied, “People on systems like Facebook are increasingly forming into ‘echo chambers’ of those who think alike. They will keep unfriending those who don’t, and passing on rumors and fake news that agrees with their point of view. When the president of the U.S. frequently attacks the traditional media and anybody who does not agree with his ‘alternative facts,’ it is not good news for an uptick in reliable and trustworthy facts circulating in social media.”

Nigel Cameron , a technology and futures editor and president of the Center for Policy on Emerging Technologies, said, “Human nature is not EVER going to change (though it may, of course, be manipulated). And the political environment is bad.”

Ian O’Byrne , assistant professor at the College of Charleston, replied, “Human nature will take over as the salacious is often sexier than facts. There are multiple information streams, public and private, that spread this information online. We can also not trust the businesses and industries that develop and facilitate these digital texts and tools to make changes that will significantly improve the situation.”

Greg Swanson , media consultant with ITZonTarget, noted, “The sorting of reliable versus fake news requires a trusted referee. It seems unlikely that government can play a meaningful role as this referee. We are too polarized. And we have come to see the television news teams as representing divergent points of view, and, depending on your politics, the network that does not represent your views is guilty of ‘fake news.’ It is hard to imagine a fair referee that would be universally trusted.”

There were also those among these expert respondents who said inequities, perceived and real, are at the root of much of the misinformation being produced.

A professor at MIT observed, “I see this as a problem with a socioeconomic cure: Greater equity and justice will achieve much more than a bot war over facts. Controlling ‘noise’ is less a technological problem than a human problem, a problem of belief, of ideology. Profound levels of ungrounded beliefs about things both sacred and profane existed before the branding of ‘fake news.’ Belief systems – not ‘truths’ – help to cement identities, forge relationships, explain the unexplainable.”

Julian Sefton-Green , professor of new media education at Deakin University in Australia, said, “The information environment is an extension of social and political tensions. It is impossible to make the information environment a rational, disinterested space; it will always be susceptible to pressure.”

A respondent affiliated with Harvard University’s Berkman Klein Center for Internet & Society wrote, “The democratization of publication and consumption that the networked sphere represents is too expansive for there to be any meaningful improvement possible in terms of controlling or labeling information. People will continue to cosset their own cognitive biases.”

Subtheme: In existing economic, political and social systems, the powerful corporate and government leaders most able to improve the information environment profit most when it is in turmoil

A large number of respondents said the most highly motivated actors, including those in the worlds of business and politics, generally have little interest in “fixing” the proliferation of misinformation. They predicted those players will be a key driver in the worsening of the information environment in the coming years and in the lack of any serious attempts to effectively mitigate the problem.

Scott Shamp , a dean at Florida State University, commented, “Too many groups gain power through the proliferation of inaccurate or misleading information. When there is value in misinformation, it will rule.”

Big political players have just learned how to play this game. I don’t think they will put much effort into eliminating it. Zbigniew Łukasiak


Stephen Downes , researcher with the National Research Council of Canada, wrote, “Things will not improve. There is too much incentive to spread disinformation, fake news, malware and the rest. Governments and organizations are major actors in this space.”

An anonymous respondent said, “Actors can benefit socially, economically, politically by manipulating the information environment. As long as these incentives exist, actors will find a way to exploit them. These benefits are not amenable to technological resolution as they are social, political and cultural in nature. Solving this problem will require larger changes in society.”

Seth Finkelstein , consulting programmer and winner of the Electronic Freedom Foundation’s Pioneer Award, commented, “Virtually all the structural incentives to spread misinformation seem to be getting worse.”

A data scientist based in Europe wrote, “The information environment is built on the top of telecommunication infrastructures and services developed following the free-market ideology, where ‘truth’ or ‘fact’ are only useful as long as they can be commodified as market products.”

Zbigniew Łukasiak , a business leader based in Europe, wrote, “Big political players have just learned how to play this game. I don’t think they will put much effort into eliminating it.”

A vice president for public policy at one of the world’s foremost entertainment and media companies commented, “The small number of dominant online platforms do not have the skills or ethical center in place to build responsible systems, technical or procedural. They eschew accountability for the impact of their inventions on society and have not developed any of the principles or practices that can deal with the complex issues. They are like biomedical or nuclear technology firms absent any ethics rules or ethics training or philosophy. Worse, their active philosophy is that assessing and responding to likely or potential negative impacts of their inventions is both not theirs to do and even shouldn’t be done.”

Patricia Aufderheide , professor of communications and founder of the Center for Media and Social Impact at American University, said, “Major interests are not invested enough in reliability to create new business models and political and regulatory standards needed for the shift. … Overall there are powerful forces, including corporate investment in surveillance-based business models, that create many incentives for unreliability, ‘invisible handshake’ agreements with governments that militate against changing surveillance models, international espionage at a governmental and corporate level in conjunction with mediocre cryptography and poor use of white hat hackers, poor educational standards in major industrial countries such as the U.S., and fundamental weaknesses in the U.S. political/electoral system that encourage exploitation of unreliability. It would be wonderful to believe otherwise, and I hope that other commentators will be able to convince me otherwise.”

James Schlaffer , an assistant professor of economics, commented, “Information is curated by people who have taken a step away from the objectivity that was the watchword of journalism. Conflict sells, especially to the opposition party, therefore the opposition news agency will be incentivized to push a narrative and agenda. Any safeguards will appear as a way to further control narrative and propagandize the population.”

Subtheme: Human tendencies and infoglut drive people apart and make it harder for them to agree on “common knowledge.” That makes healthy debate difficult and destabilizes trust. The fading of news media contributes to the problem

Many respondents expressed concerns about how people’s struggles to find and apply accurate information contribute to a larger social and political problem: There is a growing deficit in commonly accepted facts or some sort of cultural “common ground.” Why has this happened? They cited several reasons:

  • Online echo chambers or silos divide people into separate camps, at times even inciting them to express anger and hatred at a volume not seen in previous communications forms.
  • Information overload crushes people’s attention spans. Their coping mechanism is to turn to entertainment or other lighter fare.
  • High-quality journalism has been decimated due to changes in the attention economy.

They said these factors and others make it difficult for many people in the digital age to create and come to share the type of “common knowledge” that undergirds better and more-responsive public policy. A share of respondents said a lack of commonly shared knowledge leads many in society to doubt the reliability of everything, causing them to simply drop out of civic participation, depleting the number of active and informed citizens.

Jamais Cascio , distinguished fellow at the Institute for the Future, noted, “The power and diversity of very low-cost technologies allowing unsophisticated users to create believable ‘alternative facts’ is increasing rapidly. It’s important to note that the goal of these tools is not necessarily to create consistent and believable alternative facts, but to create plausible levels of doubt in actual facts. The crisis we face about ‘truth’ and reliable facts is predicated less on the ability to get people to believe the *wrong* thing as it is on the ability to get people to *doubt* the right thing. The success of Donald Trump will be a flaming signal that this strategy works, alongside the variety of technologies now in development (and early deployment) that can exacerbate this problem. In short, it’s a successful strategy, made simpler by more powerful information technologies.”

Philip J. Nickel , lecturer at Eindhoven University of Technology in the Netherlands, said, “The decline of traditional news media and the persistence of closed social networks will not change in the next 10 years. These are the main causes of the deterioration of a public domain of shared facts as the basis for discourse and political debate.”

Kenneth Sherrill , professor emeritus of political science at Hunter College, City University of New York, predicted, “Disseminating false rumors and reports will become easier. The proliferation of sources will increase the number of people who don’t know who or what they trust. These people will drop out of the normal flow of information. Participation will decline as more and more citizens become unwilling/unable to figure out which information sources are reliable.”

The crisis we face about ‘truth’ and reliable facts is predicated less on the ability to get people to believe the *wrong* thing as it is on the ability to get people to *doubt* the right thing. Jamais Cascio

What is truth? What is a fact? Who gets to decide? And can most people agree to trust anything as “common knowledge”? A number of respondents challenged the idea that any individuals, groups or technology systems could or should “rate” information as credible, factual, true or not.

An anonymous respondent observed, “Whatever is devised will not be seen as impartial; some things are not black and white; for other situations, facts brought up to come to a conclusion are different than other facts used by others in a situation. Each can have real facts, but it is the facts that are gathered that matter in coming to a conclusion; who will determine what facts will be considered or what is even considered a fact?”

A research assistant at MIT noted, “‘Fake’ and ‘true’ are not as binary as we would like, and – combined with an increasingly connected and complex digital society – it’s a challenge to manage the complexity of social media without prescribing a narrative as ‘truth.’”

An internet pioneer and longtime leader at ICANN said, “There is little prospect of a forcing factor that will emerge that will improve the ‘truthfulness’ of information in the internet.”

A vice president for stakeholder engagement said, “Trust networks are best established with physical and unstructured interaction, discussion and observation. Technology is reducing opportunities for such interactions and disrupting human discourse, while giving the ‘feeling’ that we are communicating more than ever.”

Subtheme: A small segment of society will find, use and perhaps pay a premium for information from reliable sources. Outside of this group “chaos will reign” and a worsening digital divide will develop

Some respondents predicted that a larger digital divide will form. Those who pursue more-accurate information and rely on better-informed sources will separate from those who are not selective enough or who do not invest either the time or the money in doing so.

There will be a sort of ‘gold standard’ set of sources, and there will be the fringe. Anonymous respondent.

Alejandro Pisanty , a professor at UNAM, the National University of Mexico, and longtime internet policy leader, observed, “Overall, at least a part of society will value trusted information and find ways to keep a set of curated, quality information resources. This will use a combination of organizational and technological tools but above all, will require a sharpened sense of good judgment and access to diverse, including rivalrous, sources. Outside this, chaos will reign.”

Alexander Halavais , associate professor of social technologies at Arizona State University, said, “As there is value in accurate information, the availability of such information will continue to grow. However, when consumers are not directly paying for such accuracy, it will certainly mean a greater degree of misinformation in the public sphere. That means the continuing bifurcation of haves and have-nots, when it comes to trusted news and information.”

An anonymous editor and publisher commented, “Sadly, many Americans will not pay attention to ANY content from existing or evolving sources. It’ll be the continuing dumbing down of the masses, although the ‘upper’ cadres (educated/thoughtful) will read/see/know, and continue to battle.”

An anonymous respondent said, “There will be a sort of ‘gold standard’ set of sources, and there will be the fringe.”

Many who see little hope for improvement of the information environment said technology will not save society from distortions, half-truths, lies and weaponized narratives. An anonymous business leader argued, “It is too easy to create fake facts, too labor-intensive to check and too easy to fool checking algorithms.” And this response of an anonymous research scientist based in North America echoed the view of many participants in this canvassing: “We will develop technologies to help identify false and distorted information, BUT they won’t be good enough.”

In the arms race between those who want to falsify information and those who want to produce accurate information, the former will always have an advantage. David Conrad

Paul N. Edwards , Perry Fellow in International Security at Stanford University, commented, “Many excellent methods will be developed to improve the information environment, but the history of online systems shows that bad actors can and will always find ways around them.”

Vian Bakir , professor in political communication and journalism at Bangor University in Wales, commented, “It won’t improve because of 1) the evolving nature of technology – emergent media always catches out those who wish to control it, at least in the initial phase of emergence; 2) online social media and search engine business models favour misinformation spreading; 3) well-resourced propagandists exploit this mix.”

Many who expect things will not improve in the next decade said that “white hat” efforts will never keep up with “black hat” advances in information wars. A user-experience and interaction designer said, “As existing channels become more regulated, new unregulated channels will continue to emerge.”

Subtheme: Those generally acting for themselves and not the public good have the advantage, and they are likely to stay ahead in the information wars

Many of those who expect no improvement of the information environment said those who wish to spread misinformation are highly motivated to use innovative tricks to stay ahead of the methods meant to stop them. They said certain actors in government and business, as well as other individuals with propaganda agendas, are highly driven to make technology work in their favor in the spread of misinformation, and there will continue to be more of them.

There are a lot of rich and unethical people, politicians, non-state actors and state actors who are strongly incentivized to get fake information out there to serve their selfish purposes. Jason Hong

A number of respondents referred to this as an “arms race.” David Sarokin of Sarokin Consulting and author of “Missed Information,” said, “There will be an arms race between reliable and unreliable information.” And David Conrad , a chief technology officer, replied, “In the arms race between those who want to falsify information and those who want to produce accurate information, the former will always have an advantage.”

Jim Hendler , professor of computing sciences at Rensselaer Polytechnic Institute, commented, “The information environment will continue to change but the pressures of politics, advertising and stock-return-based capitalism rewards those who find ways to manipulate the system, so it will be a constant battle between those aiming for ‘objectiveness’ and those trying to manipulate the system.”

John Markoff , retired journalist and former technology reporter for The New York Times, said, “I am extremely skeptical about improvements related to verification without a solution to the challenge of anonymity on the internet. I also don’t believe there will be a solution to the anonymity problem in the near future.”

Scott Spangler , principal data scientist at IBM Watson Health, said technologies now exist that make fake information almost impossible to discern and flag, filter or block. He wrote, “Machine learning and sophisticated statistical techniques will be used to accurately simulate real information content and make fake information almost indistinguishable from the real thing.”

Jason Hong , associate professor at the School of Computer Science at Carnegie Mellon University, said, “Some fake information will be detectable and blockable, but the vast majority won’t. The problem is that it’s *still* very hard for computer systems to analyze text, find assertions made in the text and crosscheck them. There’s also the issue of subtle nuances or differences of opinion or interpretation. Lastly, the incentives are all wrong. There are a lot of rich and unethical people, politicians, non-state actors and state actors who are strongly incentivized to get fake information out there to serve their selfish purposes.”

A research professor of robotics at Carnegie Mellon University observed, “Defensive innovation is always behind offensive innovation. Those wanting to spread misinformation will always be able to find ways to circumvent whatever controls are put in place.”

A research scientist for the Computer Science and Artificial Intelligence Laboratory at MIT said, “Problems will get worse faster than solutions can address, but that only means solutions are more needed than ever.”

Subtheme: Weaponized narratives and other false content will be magnified by social media, online filter bubbles and AI

Some respondents expect a dramatic rise in the manipulation of the information environment by nation-states, by individual political actors and by groups wishing to spread propaganda. Their purpose is to raise fears that serve their agendas, create or deepen silos and echo chambers, divide people and set them upon each other, and paralyze or confuse public understanding of the political, social and economic landscape.

We live in an era where most people get their ‘news’ via social media and it is very easy to spread fake news. … Given that there is freedom of speech, I wonder how the situation can ever improve. Anonymous project leader for a science institute

This has been referred to as the weaponization of public narratives. Social media platforms such as Facebook, Reddit and Twitter appear to be prime battlegrounds. Bots are often employed, and AI is expected to be implemented heavily in the information wars to magnify the speed and impact of messaging.

A leading internet pioneer who has worked with the FCC, the UN’s International Telecommunication Union (ITU), the General Electric Co. (GE) and other major technology organizations commented, “The ‘internet-as-weapon’ paradigm has emerged.”

Dean Willis , consultant for Softarmor Systems, commented, “Governments and political groups have now discovered the power of targeted misinformation coupled to personalized understanding of the targets. Messages can now be tailored with devastating accuracy. We’re doomed to living in targeted information bubbles.”

An anonymous survey participant noted, “Misinformation will play a major role in conflicts between nations and within competing parties within nation states.”

danah boyd , principal researcher at Microsoft Research and founder of Data & Society, wrote, “What’s at stake right now around information is epistemological in nature. Furthermore, information is a source of power and thus a source of contemporary warfare.”

Peter Lunenfeld , a professor at UCLA, commented, “For the foreseeable future, the economics of networks and the networks of economics are going to privilege the dissemination of unvetted, unverified and often weaponized information. Where there is a capitalistic incentive to provide content to consumers, and those networks of distribution originate in a huge variety of transnational and even extra-national economies and political systems, the ability to ‘control’ veracity will be far outstripped by the capability and willingness to supply any kind of content to any kind of user.”

These experts noted that the public has turned to social media – especially Facebook – to get its “news.” They said the public’s craving for quick reads and tabloid-style sensationalism is what makes social media the field of choice for manipulative narratives, which are often packaged to appear like news headlines. They noted that the public’s move away from more-traditional mainstream news outlets, which had some ethical standards, to consumption of social newsfeeds has weakened mainstream media organizations, making them lower-budget operations that have been forced to compete for attention by offering up clickbait headlines of their own.

An emeritus professor of communication for a U.S. Ivy League university noted, “We have lost an important social function in the press. It is being replaced by social media, where there are few if any moral or ethical guidelines or constraints on the performance of informational roles.”

A project leader for a science institute commented, “We live in an era where most people get their ‘news’ via social media and it is very easy to spread fake news. The existence of clickbait sites makes it easy for conspiracy theories to be rapidly spread by people who do not bother to read entire articles, nor look for trusted sources. Given that there is freedom of speech, I wonder how the situation can ever improve. Most users just read the headline, comment and share without digesting the entire article or thinking critically about its content (if they read it at all).”

Subtheme: The most-effective tech solutions to misinformation will endanger people’s dwindling privacy options, and they are likely to limit free speech and remove the ability for people to be anonymous online

The rise of new and highly varied voices with differing agendas and motivations might generally be considered to be a good thing. But some of these experts said the recent major successes by misinformation manipulators have created a threatening environment in which many in the public are encouraging platform providers and governments to expand surveillance. Among the technological solutions for “cleaning up” the information environment are those that work to clearly identify entities operating online and employ algorithms to detect misinformation. Some of these experts expect that such systems will act to identify perceived misbehaviors and label, block, filter or remove some online content and even ban some posters from further posting.

Increased censorship and mass surveillance will tend to create official ‘truths’ in various parts of the world. Retired professor

An educator commented, “Creating ‘a reliable, trusted, unhackable verification system’ would produce a system for filtering and hence structuring of content. This will end up being a censored information reality.”

An eLearning specialist observed, “Any system deeming itself to have the ability to ‘judge’ information as valid or invalid is inherently biased.” And a professor and researcher noted, “In an open society, there is no prior determination of what information is genuine or fake.”

In fact, a share of the respondents predicted that the online information environment will not improve in the next decade because any requirement for authenticated identities would take away the public’s highly valued free-speech rights and allow major powers to control the information environment.

A distinguished professor emeritus of political science at a U.S. university wrote, “Misinformation will continue to thrive because of the long (and valuable) tradition of freedom of expression. Censorship will be rejected.” An anonymous respondent wrote, “There is always a fight between ‘truth’ and free speech. But because the internet cannot be regulated, free speech will continue to dominate, meaning the information environment will not improve.”

But another share of respondents said that is precisely why authenticated identities – which are already operating in some places, including China – will become a larger part of information systems. A professor at a major U.S. university replied, “Surveillance technologies and financial incentives will generate greater surveillance.” A retired university professor predicted, “Increased censorship and mass surveillance will tend to create official ‘truths’ in various parts of the world. In the United States, corporate filtering of information will impose the views of the economic elite.”

The executive director of a major global privacy advocacy organization argued removing civil liberties in order to stop misinformation will not be effective, saying, “‘Problematic’ actors will be able to game the devised systems while others will be over-regulated.”

Several other respondents also cited this as a major flaw of this potential remedy. They argued against it for several reasons, including the fact that it enables even broader government and corporate surveillance and control over more of the public.

Emmanuel Edet , head of legal services at the National Information Technology Development Agency of Nigeria, observed, “The information environment will improve but at a cost to privacy.”

James LaRue , director of the Office for Intellectual Freedom of the American Library Association, commented, “Information systems incentivize getting attention. Lying is a powerful way to do that. To stop that requires high surveillance – which means government oversight which has its own incentives not to tell the truth.”

Tom Valovic , contributor to The Technoskeptic magazine and author of “Digital Mythologies,” said encouraging platforms to exercise algorithmic controls is not optimal. He wrote: “Artificial intelligence that will supplant human judgment is being pursued aggressively by entities in the Silicon Valley and elsewhere. Algorithmic solutions to replacing human judgment are subject to hidden bias and will ultimately fail to accomplish this goal. They will only continue the centralization of power in a small number of companies that control the flow of information.”

Most of the respondents who gave hopeful answers about the future of truth online said they believe technology will be implemented to improve the information environment. They noted their faith was grounded in history, arguing that humans have always found ways to innovate to overcome problems. Most of these experts do not expect there will be a perfect system – but they expect advances. A number said information platform corporations such as Google and Facebook will begin to efficiently police the environment to embed moral and ethical thinking in the structure of their platforms. They hope this will simultaneously enable the screening of content while still protecting rights such as free speech.

If there is a great amount of pressure from the industry to solve this problem (which there is), then methodologies will be developed and progress will be made … In other words, if there’s a will, there’s a way. Adam Lella

Larry Diamond , senior fellow at the Hoover Institution and the Freeman Spogli Institute (FSI) at Stanford University, said, “I am hopeful that the principal digital information platforms will take creative initiatives to privilege more authoritative and credible sources and to call out and demote information sources that appear to be propaganda and manipulation engines, whether human or robotic. In fact, the companies are already beginning to take steps in this direction.”

An associate professor at a U.S. university wrote, “I do not see us giving up on seeking truth.” And a researcher based in Europe said, “Technologies will appear that solve the trust issues and reward logic.”

Adam Lella, senior analyst for marketing insights at comScore Inc., replied, “There have been numerous other industry-related issues in the past (e.g., viewability, invalid traffic detection, cross-platform measurement) that were seemingly impossible to solve, and yet major progress was made in the past few years. If there is a great amount of pressure from the industry to solve this problem (which there is), then methodologies will be developed and progress will be made to help mitigate this issue in the long run. In other words, if there’s a will, there’s a way.”

Subtheme: Likely tech-based solutions include adjustments to algorithmic filters, browsers, apps and plug-ins and the implementation of “trust ratings”

Many respondents who hope for improvement in the information environment mentioned ways in which new technological solutions might be implemented.

Bart Knijnenburg , researcher on decision-making and recommender systems and assistant professor of computer science at Clemson University, said, “Two developments will help improve the information environment: 1) News will move to a subscription model (like music, movies, etc.) and subscription providers will have a vested interest in culling down false narratives; 2) Algorithms that filter news will learn to discern the quality of a news item and not just tailor to ‘virality’ or political leaning.”

In order to reduce the spread of fake news, we must deincentivize it financially. Amber Case

Laurel Felt, lecturer at the University of Southern California, said, “There will be mechanisms for flagging suspicious content and providers and then apps and plugins for people to see the ‘trust rating’ for a piece of content, an outlet or even an IP address. Perhaps people can even install filters so that, when they’re doing searches, hits that don’t meet a certain trust threshold will not appear on the list.”
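
The kind of user-installed filter Felt describes could, in its simplest form, look something like the sketch below. This is purely illustrative: the trust-rating table, the default score for unknown sources and the threshold value are invented placeholders, not an existing service or dataset.

    # Illustrative sketch of a user-side "trust threshold" filter for search hits.
    # The ratings, default score and threshold are hypothetical placeholders.

    TRUST_RATINGS = {
        "example-newswire.com": 0.9,      # assumed well-vetted outlet
        "state-propaganda.example": 0.2,  # assumed manipulation engine
        "random-blog.example": 0.5,
    }
    DEFAULT_RATING = 0.4   # assumed score when a source has no rating yet
    TRUST_THRESHOLD = 0.6  # the user-chosen cutoff Felt calls a "trust threshold"

    def trust_rating(url: str) -> float:
        """Look up the assumed trust rating for a hit's source domain."""
        domain = url.split("//")[-1].split("/")[0]
        return TRUST_RATINGS.get(domain, DEFAULT_RATING)

    def filter_hits(hits: list) -> list:
        """Drop search hits whose source falls below the trust threshold."""
        return [hit for hit in hits if trust_rating(hit["url"]) >= TRUST_THRESHOLD]

    if __name__ == "__main__":
        hits = [
            {"title": "Budget analysis", "url": "https://example-newswire.com/budget"},
            {"title": "Shocking claim!", "url": "https://state-propaganda.example/claim"},
        ]
        for hit in filter_hits(hits):
            print(hit["title"], "-", hit["url"])

In practice the hard part is the ratings table itself (who assigns the scores and how they stay current), which is exactly the refereeing problem other respondents raise.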

A longtime U.S. government researcher and administrator in communications and technology sciences said, “The intelligence, defense and related U.S. agencies are very actively working on this problem and results are promising.”

Amber Case , research fellow at Harvard University’s Berkman Klein Center for Internet & Society, suggested withholding ad revenue until veracity has been established. She wrote, “Right now, there is an incentive to spread fake news. It is profitable to do so, profit made by creating an article that causes enough outrage that advertising money will follow. … In order to reduce the spread of fake news, we must deincentivize it financially. If an article bursts into collective consciousness and is later proven to be fake, the sites that control or host that content could refuse to distribute advertising revenue to the entity that created or published it. This would require a system of delayed advertising revenue distribution where ad funds are held until the article is proven as accurate or not. A lot of fake news is created by a few people, and removing their incentive could stop much of the news postings.”
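
A minimal sketch of the delayed-revenue idea Case outlines appears below, assuming a hypothetical escrow that accrues ad funds per article and releases them only after a verification decision; the class, method names and amounts are invented for illustration, not a real ad-platform API.

    # Minimal sketch of "delayed advertising revenue distribution": ad funds for
    # an article are held until it is judged accurate or fake. All names and
    # amounts are hypothetical.

    from dataclasses import dataclass, field

    @dataclass
    class AdRevenueEscrow:
        held: dict = field(default_factory=dict)      # article_id -> funds awaiting a ruling
        released: dict = field(default_factory=dict)  # article_id -> funds paid out

        def accrue(self, article_id: str, amount: float) -> None:
            """Hold ad revenue instead of paying the publisher immediately."""
            self.held[article_id] = self.held.get(article_id, 0.0) + amount

        def resolve(self, article_id: str, verified_accurate: bool) -> float:
            """Release held funds if the article checks out; withhold them if not."""
            amount = self.held.pop(article_id, 0.0)
            if not verified_accurate:
                return 0.0  # revenue never reaches the creator of the fake story
            self.released[article_id] = self.released.get(article_id, 0.0) + amount
            return amount

    escrow = AdRevenueEscrow()
    escrow.accrue("article-123", 250.0)   # revenue accrues while the piece is unverified
    paid = escrow.resolve("article-123", verified_accurate=True)
    print(f"released to publisher: ${paid:.2f}")

The open question, as Case implies, is who performs the verification and how long funds can reasonably be held before a ruling.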

Andrea Matwyshyn , a professor of law at Northeastern University who researches innovation and law, particularly information security, observed, “Software liability law will finally begin to evolve. Market makers will increasingly incorporate security quality as a factor relevant to corporate valuation. The legal climate for security research will continue to improve, as its connection to national security becomes increasingly obvious. These changes will drive significant corporate and public sector improvements in security during the next decade.”

Larry Keeley , founder of innovation consultancy Doblin, predicted technology will be improved but people will remain the same, writing, “Capabilities adapted from both bibliometric analytics and good auditing practices will make this a solvable problem. However, non-certified, compelling-but-untrue information will also proliferate. So the new divide will be between the people who want their information to be real vs. those who simply want it to feel important. Remember that quote from Roger Ailes: ‘People don’t want to BE informed, they want to FEEL informed.’ Sigh.”

Anonymous survey participants also responded:  

  • “Filters and algorithms will improve to both verify raw data, separate ‘overlays’ and to correct for a feedback loop.”
  • “Semantic technologies will be able to cross-verify statements, much like meta-analysis.”
  • “The credibility history of each individual will be used to filter incoming information.”
  • “The veracity of information will be linked to how much the source is perceived as trustworthy – we may, for instance, develop a trust index and trust will become more easily verified using artificial-intelligence-driven technologies.”
  • “The work being done on things like verifiable identity and information sharing through loose federation will improve things somewhat (but not completely). That is to say, things will become better but not necessarily good.”
  • “AI, blockchain, crowdsourcing and other technologies will further enhance our ability to filter and qualify the veracity of information.”
  • “There will be new visual cues developed to help news consumers distinguish between trusted news sources and others.”

Subtheme: Regulatory remedies could include software liability law, required identities, unbundling of social networks like Facebook

A number of respondents believe there will be policy remedies that move beyond whatever technical innovations emerge in the next decade. They offered a range of suggestions, from regulatory reforms applied to the platforms that aid misinformation merchants to legal penalties applied to wrongdoers. Some think the threat of regulatory reform via government agencies may force the issue of required identities and the abolition of anonymity protections for platform users.

Sonia Livingstone , professor of social psychology at the London School of Economics and Political Science, replied, “The ‘wild west’ state of the internet will not be permitted to continue by those with power, as we are already seeing with increased national pressure on providers/companies by a range of means from law and regulation to moral and consumer pressures.”

Willie Currie, a longtime expert in global communications diffusion, wrote, “The apparent success of fake news on platforms like Facebook will have to be dealt with on a regulatory basis as it is clear that technically minded people will only look for technical fixes and may have incentives not to look very hard, so self-regulation is unlikely to succeed. The excuse that the scale of posts on social media platforms makes human intervention impossible will not be a defense. Regulatory options may include unbundling social networks like Facebook into smaller entities. Legal options include reversing the notion that providers of content services over the internet are mere conduits without responsibility for the content. These regulatory and legal options may not be politically possible to effect within the U.S., but they are certainly possible in Europe and elsewhere, especially if fake news is shown to have an impact on European elections.”

Sally Wentworth , vice president of global policy development at the Internet Society, warned against too much dependence upon information platform providers in shaping solutions to improve the information environment. She wrote: “It’s encouraging to see some of the big platforms beginning to deploy internet solutions to some of the issues around online extremism, violence and fake news. And yet, it feels like as a society, we are outsourcing this function to private entities that exist, ultimately, to make a profit and not necessarily for a social good. How much power are we turning over to them to govern our social discourse? Do we know where that might eventually lead? On the one hand, it’s good that the big players are finally stepping up and taking responsibility. But governments, users and society are being too quick to turn all of the responsibility over to internet platforms. Who holds them accountable for the decisions they make on behalf of all of us? Do we even know what those decisions are?”

A professor and chair in a department of educational theory, policy and administration commented, “Some of this work can be done in private markets. Being banned from social media is one obvious one. In terms of criminal law, I think the important thing is to have penalties/regulations be domain-specific. Speech can be regulated in certain venues, but obviously not in all. Federal (and perhaps even international) guidelines would be useful. Without a framework for regulation, I can’t imagine penalties.”

Many of those who expect the information environment to improve anticipate that information literacy training and other forms of assistance will help people become more sophisticated consumers. They expect that users will gravitate toward more reliable information – and that knowledge providers will respond in kind.

When the television became popular, people also believed everything on TV was true. It’s how people choose to react and access to information and news that’s important, not the mechanisms that distribute them. Irene Wu

Frank Kaufmann , founder and director of several international projects for peace activism and media and information, commented, “The quality of news will improve, because things always improve.” And Barry Wellman , virtual communities expert and co-director of the NetLab Network, said, “Software and people are becoming more sophisticated.”

One hopeful respondent said a change in economic incentives can bring about desired change. Tom Wolzien, chairman of The Video Call Center and Wolzien LLC, said, “The market will not clean up the bad material, but will shift focus and economic rewards toward the reliable. Information consumers, fed up with false narratives, will increasingly shift toward more-trusted sources, resulting in revenue flowing toward those more trusted sources and away from the junk. This does not mean that all people will subscribe to either scientific or journalistic method (or both), but they will gravitate toward material from the sources and institutions they find trustworthy, and those institutions will, themselves, demand methods of verification beyond those they use today.”

A retired public official and internet pioneer predicted, “1) Education for veracity will become an indispensable element of secondary school. 2) Information providers will become legally responsible for their content. 3) A few trusted sources will continue to dominate the internet.”

Irene Wu , adjunct professor of communications, culture and technology at Georgetown University, said, “Information will improve because people will learn better how to deal with masses of digital information. Right now, many people naively believe what they read on social media. When the television became popular, people also believed everything on TV was true. It’s how people choose to react and access to information and news that’s important, not the mechanisms that distribute them.”

Charlie Firestone , executive director at the Aspen Institute Communications and Society Program, commented, “In the future, tagging, labeling, peer recommendations, new literacies (media, digital) and similar methods will enable people to sift through information better to find and rely on factual information. In addition, there will be a reaction to the prevalence of false information so that people are more willing to act to assure their information will be accurate.”

Howard Rheingold , pioneer researcher of virtual communities, longtime professor and author of “Net Smart: How to Thrive Online,” noted, “As I wrote in ‘Net Smart’ in 2012, some combination of education, algorithmic and social systems can help improve the signal-to-noise ratio online – with the caveat that misinformation/disinformation versus verified information is likely to be a continuing arms race. In 2012, Facebook, Google and others had no incentive to pay attention to the problem. After the 2016 election, the issue of fake information has been spotlighted.”

Subtheme: Misinformation has always been with us and people have found ways to lessen its impact. The problems will become more manageable as people become more adept at sorting through material

Many respondents agree that misinformation will persist as the online realm expands and more people are connected in more ways. Still, the more hopeful among these experts argue that progress is inevitable as people and organizations find coping mechanisms. They say history validates this. Furthermore, they said technologists will play an important role in helping filter out misinformation and modeling new digital literacy practices for users.

We were in this position before, when printing presses broke the existing system of information management. A new system emerged and I believe we have the motivation and capability to do it again. Jonathan Grudin

Mark Bunting , visiting academic at Oxford Internet Institute, a senior digital strategy and public policy advisor with 16 years of experience at the BBC and as a digital consultant, wrote, “Our information environment has been immeasurably improved by the democratisation of the means of publication since the creation of the web nearly 25 years ago. We are now seeing the downsides of that transformation, with bad actors manipulating the new freedoms for antisocial purposes, but techniques for managing and mitigating those harms will improve, creating potential for freer, but well-governed, information environments in the 2020s.”

Jonathan Grudin , principal design researcher at Microsoft, said, “We were in this position before, when printing presses broke the existing system of information management. A new system emerged and I believe we have the motivation and capability to do it again. It will again involve information channeling more than misinformation suppression; contradictory claims have always existed in print, but have been manageable and often healthy.”

Judith Donath, fellow at Harvard University’s Berkman Klein Center for Internet & Society and founder of the Sociable Media Group at the MIT Media Lab, wrote, “‘Fake news’ is not new. The Weekly World News had a circulation of over a million for its mostly fictional news stories that were printed and sold in a format closely resembling a newspaper. Many readers recognized it as entertainment, but not all. More subtly, its presence on the newsstand reminded everyone that anything can be printed.”

Joshua Hatch , president of the Online News Association, noted, “I’m slightly optimistic because there are more people who care about doing the right thing than there are people who are trying to ruin the system. Things will improve because people – individually and collectively – will make it so.”

Many of these respondents said the leaders and engineers of the major information platform companies will play a significant role. Some said they expect some other systematic and social changes will alter things.

John Wilbanks , chief commons officer at Sage Bionetworks, replied, “I’m an optimist, so take this with a grain of salt, but I think as people born into the internet age move into positions of authority they’ll be better able to distill and discern fake news than those of us who remember an age of trusted gatekeepers. They’ll be part of the immune system. It’s not that the environment will get better, it’s that those younger will be better fitted to survive it.”

Danny Rogers , founder and CEO of Terbium Labs, replied, “Things always improve. Not monotonically, and not without effort, but fundamentally, I still believe that the efforts to improve the information environment will ultimately outweigh efforts to devolve it.”

Bryan Alexander , futurist and president of Bryan Alexander Consulting, replied, “Growing digital literacy and the use of automated systems will tip the balance towards a better information environment.”

A number of these respondents said information platform corporations such as Google and Facebook will begin to efficiently police the environment through various technological enhancements. They expressed faith in the inventiveness of these organizations and suggested the people of these companies will implement technology to embed moral and ethical thinking in the structure and business practices of their platforms, enabling the screening of content while still protecting rights such as free speech.

Patrick Lambe , principal consultant at Straits Knowledge, commented, “All largescale human systems are adaptive. When faced with novel predatory phenomena, counter-forces emerge to balance or defeat them. We are at the beginning of a largescale negative impact from the undermining of a social sense of reliable fact. Counter-forces are already emerging. The presence of largescale ‘landlords’ controlling significant sections of the ecosystem (e.g., Google, Facebook) aids in this counter-response.”

A professor in technology law at a West-Coast-based U.S. university said, “Intermediaries such as Facebook and Google will develop more-robust systems to reward legitimate producers and punish purveyors of fake news.”

A longtime director for Google commented, “Companies like Google and Facebook are investing heavily in coming up with usable solutions. Like email spam, this problem can never entirely be eliminated, but it can be managed.”

Sandro Hawke , technical staff at the World Wide Web Consortium, predicted, “Things are going to get worse before they get better, but humans have the basic tools to solve this problem, so chances are good that we will. The biggest risk, as with many things, is that narrow self-interest stops people from effectively collaborating.”

Anonymous respondents shared these remarks:

  • “Accurate facts are essential, particularly within a democracy, so this will be a high, shared value worthy of investment and government support, as well as private-sector initiatives.”
  • “We are only at the beginning of drastic technological and societal changes. We will learn and develop strategies to deal with problems like fake news.”
  • “There is a long record of innovation taking place to solve problems. Yes, sometimes innovation leads to abuses, but further innovation tends to solve those problems.”
  • “Consumers have risen up in the past to block the bullshit, fake ads, fake investment scams, etc., and they will again with regard to fake news.”
  • “As we understand more about digital misinformation we will design better tools, policies and opportunities for collective action.”
  • “Now that it is on the agenda, smart researchers and technologists will develop solutions.”
  • “The increased awareness of the issue will lead to/force new solutions and regulation that will improve the situation in the long-term even if there are bound to be missteps such as flawed regulation and solutions along the way.”

Subtheme: Crowdsourcing will work to highlight verified facts and block those who propagate lies and propaganda. Some also have hopes for distributed ledgers (blockchain)

A number of these experts said solutions such as tagging, flagging or other labeling of questionable content will continue to expand and be of further use in the future in tackling the propagation of misinformation.

The future will attach credibility to the source of any information. The more a given source is attributed to ‘fake news,’ the lower it will sit in the credibility tree. Anonymous engineer

J. Nathan Matias , a postdoctoral researcher at Princeton University and previously a visiting scholar at MIT’s Center for Civic Media, wrote, “Through ethnography and largescale social experiments, I have been encouraged to see volunteer communities with tens of millions of people work together to successfully manage the risks from inaccurate news.”

A researcher of online harassment working for a major internet information platform commented, “If there are nonprofits keeping technology in line, such as an ACLU-esque initiative, to monitor misinformation and then partner with spaces like Facebook to deal with this kind of news spam, then yes, the information environment will improve. We also need to move away from clickbaity-like articles, and not algorithmically rely on popularity but on information.”

An engineer based in North America replied, “The future will attach credibility to the source of any information. The more a given source is attributed to ‘fake news,’ the lower it will sit in the credibility tree.”

Micah Altman , director of research for the Program on Information Science at MIT, commented, “Technological advances are creating forces pulling in two directions: It is increasingly easy to create real-looking fake information; and it is increasingly easy to crowdsource the collection and verification of information. In the longer term, I’m optimistic that the second force will dominate – as transaction cost-reduction appears to be relatively in favor of crowds versus concentrated institutions.”


Some predicted that digital distributed ledger technologies, known as blockchain, may provide some answers. A longtime technology editor and columnist based in Europe commented, “The blockchain approach used for Bitcoin, etc., could be used to distribute content. DECENT is an early example.” And an anonymous respondent from Harvard University’s Berkman Klein Center for Internet & Society said, “They will be cryptographically verified, with concepts.”

A professor of media and communication based in Europe said, “Right now, reliable and trusted verification systems are not yet available; they may become technically available in the future but the arms race between corporations and hackers is never ending. Blockchain technology may be an option, but every technological system needs to be built on trust, and as long as there is no globally governed trust system that is open and transparent, there will be no reliable verification systems.”

There was common agreement among many respondents – whether they said they expect to see improvements in the information environment in the next decade or not – that the problem of misinformation requires significant attention. A share of these respondents urged action in two areas: A bolstering of the public-serving press and an expansive, comprehensive, ongoing information literacy education effort for people of all ages.

We can’t machine-learn our way out of this disaster, which is actually a perfect storm of poor civics knowledge and poor information literacy. Mike DeVito

A sociologist doing research on technology and civic engagement at MIT said, “Though likely to get worse before it gets better, the 2016-2017 information ecosystem problems represent a watershed moment and call to action for citizens, policymakers, journalists, designers and philanthropists who must work together to address the issues at the heart of misinformation.”

Michael Zimmer, associate professor and privacy and information ethics scholar at the University of Wisconsin-Milwaukee, commented, “This is a social problem that cannot be solved via technology.”

Subtheme: Funding and support must be directed to the restoration of a well-fortified, ethical and trusted public press

Many respondents noted that while the digital age has amplified countless information sources, it has hurt the reach and influence of the traditional news organizations. These are the bedrock institutions much of the public has relied upon for objective, verified, reliable information – information undergirded by ethical standards and a general goal of serving the common good. These respondents said the information environment can’t be improved without more well-staffed, financially stable, independent news organizations. They believe that material can rise above misinformation and create a base of “common knowledge” the public can share and act on.

This is a wake-up call to the news industry, policy makers and journalists to refine the system of news production. Rich Ling

Susan Hares , a pioneer with the National Science Foundation Network (NSFNET) and longtime internet engineering strategist, now a consultant, said, “Society simply needs to decide that the ‘press’ no longer provides unbiased information, and it must pay for unbiased and verified information.”

Christopher Jencks , a professor emeritus at Harvard University, said, “Reducing ‘fake news’ requires a profession whose members share a commitment to getting it right. That, in turn, requires a source of money to pay such professional journalists. Advertising used to provide newspapers with money to pay such people. That money is drying up, and it seems unlikely to be replaced within the next decade.”

Rich Ling , professor of media technology at the School of Communication and Information at Nanyang Technological University, said, “We have seen the consequences of fake news in the U.S. presidential election and Brexit. This is a wake-up call to the news industry, policy makers and journalists to refine the system of news production.”

Maja Vujovic , senior copywriter for the Comtrade Group, predicted, “The information environment will be increasingly perceived as a public good, making its reliability a universal need. Technological advancements and civil-awareness efforts will yield varied ways to continuously purge misinformation from it, to keep it reasonably reliable.”

An author and journalist based in North America said, “I believe this era could spawn a new one – a flight to quality in which time-starved citizens place high value on verified news sources.”

A professor of law at a major U.S. state university commented, “Things won’t get better until we realize that accurate news and information are a public good that require not-for-profit leadership and public subsidy.”

Marc Rotenberg , president of the Electronic Privacy Information Center, wrote, “The problem with online news is structural: There are too few gatekeepers, and the internet business model does not sustain quality journalism. The reason is simply that advertising revenue has been untethered from news production.”

With precarious funding and shrinking audiences, healthy journalism that serves the common good is losing its voice. Siva Vaidhyanathan , professor of media studies and director of the Center for Media and Citizenship at the University of Virginia, wrote, “There are no technological solutions that correct for the dominance of Facebook and Google in our lives. These incumbents are locked into monopoly power over our information ecosystem and as they drain advertising money from all other low-cost commercial media they impoverish the public sphere.”

Subtheme: Elevate information literacy: It must become a primary goal at all levels of education

Many of these experts said the flaws in human nature and still-undeveloped norms in the digital age are the key problems that make users susceptible to false, misleading and manipulative online narratives. One potential remedy these respondents suggested is a massive compulsory crusade to educate all in digital-age information literacy. Such an effort, some said, might prepare more people to be wise in what they view/read/believe and possibly even serve to upgrade the overall social norms of information sharing.

Information is only as reliable as the people who are receiving it. Julia Koller

Karen Mossberger , professor and director of the School of Public Affairs at Arizona State University, wrote, “The spread of fake news is not merely a problem of bots, but part of a larger problem of whether or not people exercise critical thinking and information-literacy skills. Perhaps the surge of fake news in the recent past will serve as a wake-up call to address these aspects of online skills in the media and to address these as fundamental educational competencies in our education system. Online information more generally has an almost limitless diversity of sources, with varied credibility. Technology is driving this issue, but the fix isn’t a technical one alone.”

Mike DeVito , graduate researcher at Northwestern University, wrote, “These are not technical problems; they are human problems that technology has simply helped scale, yet we keep attempting purely technological solutions. We can’t machine-learn our way out of this disaster, which is actually a perfect storm of poor civics knowledge and poor information literacy.”

Miguel Alcaine , International Telecommunication Union area representative for Central America, commented, “The boundaries between online and offline will continue to blur. We understand online and offline are different modalities of real life. There is and will be a market (public and private providers) for trusted information. There is and will be space for misinformation. The most important action societies can take to protect people is education, information and training.”

An early internet developer and security consultant commented, “Fake news is not a product of a flaw in the communications channel and cannot be fixed by a fix to the channel. It is due to a flaw in the human consumers of information and can be repaired only by education of those consumers.”

An anonymous respondent from Harvard University’s Berkman Klein Center for Internet & Society noted, “False information – intentionally or inadvertently so – is neither new nor the result of new technologies. It may now be easier to spread to more people more quickly, but the responsibility for sifting facts from fiction has always sat with the person receiving that information and always will.”

An internet pioneer and rights activist based in the Asia/Pacific region said, “We as a society are not investing enough in education worldwide. The environment will only improve if both sides of the communication channel are responsible. The reader and the producer of content, both have responsibilities.”

Deirdre Williams , retired internet activist, replied, “Human beings are losing their capability to question and to refuse. Young people are growing into a world where those skills are not being taught.”

Julia Koller , a learning solutions lead developer, replied, “Information is only as reliable as the people who are receiving it. If readers do not change or improve their ability to seek out and identify reliable information sources, the information environment will not improve.”

Ella Taylor-Smith , senior research fellow at the School of Computing at Edinburgh Napier University, noted, “As more people become more educated, especially as digital literacy becomes a popular and respected skill, people will favour (and even produce) better quality information.”

Constance Kampf, a researcher in computer science and mathematics, said, “The answer depends on socio-technical design – these trends of misinformation versus verifiable information were already present before the internet, and they are currently being amplified. The state and trends in education and place of critical thinking in curricula across the world will be the place to look to see whether or not the information environment will improve – cyberliteracy relies on basic information literacy, social literacy and technological literacy. For the environment to improve, we need substantial improvements in education systems across the world in relation to critical thinking, social literacy, information literacy, and cyberliteracy (see Laura Gurak’s book ‘Cyberliteracy’).”

Su Sonia Herring , an editor and translator, commented, “Misinformation and fake news will exist as long as humans do; they have existed ever since language was invented. Relying on algorithms and automated measures will result in various unwanted consequences. Unless we equip people with media literacy and critical-thinking skills, the spread of misinformation will prevail.”

Responses from additional key experts regarding the future of the information environment

This section features responses by several of the top analysts who participated in this canvassing. Following this wide-ranging set of comments is a much more expansive set of quotations directly tied to the five primary themes identified in this report.

Ignorance breeds frustration and ‘a growing fraction of the population has neither the skills nor the native intelligence to master growing complexity’

Mike Roberts , pioneer leader at ICANN and Internet Hall of Fame member, replied, “There are complex forces working both to improve the quality of information on the net, and to corrupt it. I believe the outrage resulting from recent events will, on balance, lead to a net improvement, but viewed with hindsight, the improvement may be viewed as inadequate. The other side of the complexity coin is ignorance. The average man or woman in America today has less knowledge of the underpinnings of his or her daily life than they did 50 or a hundred years ago. There has been a tremendous insertion of complex systems into many aspects of how we live in the decades since World War II, fueled by a tremendous growth in knowledge in general. Even among highly intelligent people, there is a significant growth in personal specialization in order to trim the boundaries of expected expertise to manageable levels. Among educated people, we have learned mechanisms for coping with complexity. We use what we know of statistics and probability to compartment uncertainty. We adopt ‘most likely’ scenarios for events of which we do not have detailed knowledge, and so on. A growing fraction of the population has neither the skills nor the native intelligence to master growing complexity, and in a competitive social environment, obligations to help our fellow humans go unmet. Educated or not, no one wants to be a dummy – all the wrong connotations. So ignorance breeds frustration, which breeds acting out, which breeds antisocial and pathological behavior, such as the disinformation, which was the subject of the survey, and many other undesirable second order effects. Issues of trustable information are certainly important, especially since the technological intelligentsia command a number of tools to combat untrustable info. But the underlying pathology won’t be tamed through technology alone. We need to replace ignorance and frustration with better life opportunities that restore confidence – a tall order and a tough agenda. Is there an immediate nexus between widespread ignorance and corrupted information sources? Yes, of course. In fact, there is a virtuous circle where acquisition of trustable information reduces ignorance, which leads to better use of better information, etc.”

The truth of news is murky and multifaceted

Judith Donath , fellow at Harvard University’s Berkman Klein Center for Internet & Society and founder of the Sociable Media Group at the MIT Media Lab, wrote, “Yes, trusted methods will emerge to block false narratives and allow accurate information to prevail, and, yes, the quality and veracity of information online will deteriorate due to the spread of unreliable, sometimes even dangerous, socially destabilizing ideas. Of course, the definition of ‘true’ is sometimes murky. Experimental scientists have many careful protocols in place to assure the veracity of their work, and the questions they ask have well-defined answers – and still there can be controversy about what is true, what work was free from outside influence. The truth of news stories is far murkier and multi-faceted. A story can be distorted, disproportional, meant to mislead – and still, strictly speaking, factually accurate. … But a pernicious harm of fake news is the doubt it sows about the reliability of all news. Donald Trump’s repeated ‘fake news’ smears of The New York Times, Washington Post, etc., are among his most destructive non-truths.”

“Algorithms weaponize rhetoric,” influencing on a mass scale

Susan Etlinger , industry analyst at Altimeter Research, said, “There are two main dynamics at play: One is the increasing sophistication and availability of machine learning algorithms and the other is human nature. We’ve known since the ancient Greeks and Romans that people are easily persuaded by rhetoric; that hasn’t changed much in two thousand years. Algorithms weaponize rhetoric, making it easier and faster to influence people on a mass scale. There are many people working on ways to protect the integrity and reliability of information, just as there are cybersecurity experts who are in a constant arms race with cybercriminals, but to put as much emphasis on ‘information’ (a public good) as ‘data’ (a personal asset) will require a pretty big cultural shift. I suspect this will play out differently in different parts of the world.”

There’s no technical solution for the fact that ‘news’ is a social bargain

Clay Shirky , vice provost for educational technology at New York University, replied, “‘News’ is not a stable category – it is a social bargain. There’s no technical solution for designing a system that prevents people from asserting that Obama is a Muslim but allows them to assert that Jesus loves you.”

‘Strong economic forces are incentivizing the creation and spread of fake news’

Amy Webb , author and founder of the Future Today Institute, wrote, “In an era of social, democratized media, we’ve adopted a strange attitude. We’re simultaneously skeptics and true believers. If a news story reaffirms what we already believe, it’s credible – but if it rails against our beliefs, it’s fake. We apply that same logic to experts and sources quoted in stories. With our limbic systems continuously engaged, we’re more likely to pay attention to stories that make us want to fight, take flight or fill our social media accounts with links. As a result, there are strong economic forces incentivizing the creation and spread of fake news. In the digital realm, attention is currency. It’s good for democracy to stop the spread of misinformation, but it’s bad for business. Unless significant measures are taken in the present – and unless all the companies in our digital information ecosystem use strategic foresight to map out the future – I don’t see how fake news could possibly be reduced by 2027.”

Propagandists exploit whatever communications channels are available

Ian Peter , internet pioneer, historian and activist, observed, “It is not in the interests of either the media or the internet giants who propagate information, nor of governments, to create a climate in which information cannot be manipulated for political, social or economic gain. Propaganda and the desire to distort truth for political and other ends have always been with us and will adapt to any form of new media which allows open communication and information flows.”

Expanding information outlets erode opportunities for a ‘common narrative’


‘Broken as it might be, the internet is still capable of routing around damage’

Paul Saffo , longtime Silicon-Valley-based technology forecaster, commented, “The information crisis happened in the shadows. Now that the issue is visible as a clear and urgent danger, activists and people who see a business opportunity will begin to focus on it. Broken as it might be, the internet is still capable of routing around damage.”

It will be impossible to distinguish between fake and real video, audio, photos

Marina Gorbis, executive director of the Institute for the Future, predicted, “It’s not going to be better or worse but very different. Already we are developing technologies that make it impossible to distinguish between fake and real video, fake and real photographs, etc. We will have to evolve new tools for authentication and verification. We will probably have to evolve both new social norms as well as regulatory mechanisms if we want to maintain the online environment as a source of information that many people can rely on.”

A ‘Cambrian explosion’ of techniques will arise to monitor the web and non-web sources

Stowe Boyd, futurist, publisher and editor-in-chief of Work Futures, said, “The rapid rise of AI will lead to a Cambrian explosion of techniques to monitor the web and non-web media sources and social networks and to rapidly identify and tag fake and misleading content.”

Well, there’s good news and bad news about the information future …

Jeff Jarvis , professor at the City University of New York’s Graduate School of Journalism, commented, “Reasons for hope: Much attention is being directed at manipulation and disinformation; the platforms may begin to recognize and favor quality; and we are still at the early stage of negotiating norms and mores around responsible civil conversation. Reasons for pessimism: Imploding trust in institutions; institutions that do not recognize the need to radically change to regain trust; and business models that favor volume over value.”

A fear of the imposition of pervasive censorship

Jim Warren , an internet pioneer and open-government/open-records/open-meetings advocate, said, “False and misleading information has always been part of all cultures (gossip, tabloids, etc.). Teaching judgment has always been the solution, and it always will be. I (still) trust the longstanding principle of free speech: The best cure for ‘offensive’ speech is MORE speech. The only major fear I have is of massive communications conglomerates imposing pervasive censorship.”

People have to take responsibility for finding reliable sources

Steven Miller , vice provost for research at Singapore Management University, wrote, “Even now, if one wants to find reliable sources, one has no problem doing that, so we do not lack reliable sources of news today. It is that there are all these other options, and people can choose to live in worlds where they ignore so-called reliable sources, or ignore a multiplicity of sources that can be compared, and focus on what they want to believe. That type of situation will continue. Five or 10 years from now, I expect there to continue to be many reliable sources of news, and a multiplicity of sources. Those who want to seek out reliable sources will have no problems doing so. Those who want to make sure they are getting a multiplicity of sources to see the range of inputs, and to sort through various types of inputs, will be able to do so, but I also expect that those who want to be in the game of influencing perceptions of reality and changing the perceptions of reality will also have ample means to do so. So the responsibility is with the person who is seeking the news and trying to get information on what is going on. We need more individuals who take responsibility for getting reliable sources.”

09 January 2024

How online misinformation exploits ‘information voids’ — and what to do about it



Google uses automated methods that rank search results on the basis of quality measures. But there could be additional approaches to preventing people falling into data voids of misinformation and disinformation. Credit: Aytac Unal/Anadolu Agency/Getty

This year, countries with a combined population of 4 billion — around half the world’s people — are holding elections, in what is being described as the biggest election year in recorded history. Some researchers are concerned that 2024 could also be one of the biggest years for the spreading of misinformation and disinformation. Both refer to misleading content, but disinformation is deliberately generated.

Vigorous debate and argument ahead of elections is foundational to democratic societies. Political parties have long competed for voter approval and subjected their differing policies to public scrutiny. But the difference now is that online search and social media enable claims and counterclaims to be made almost endlessly.

A study in Nature 1 last month highlights a previously underappreciated aspect of this phenomenon: the existence of data voids, information spaces that lack evidence, into which people searching to check the accuracy of controversial topics can easily fall. The paper suggests that media-literacy campaigns that emphasize ‘just searching’ for information online need to become smarter. It might no longer be enough for search providers to combat misinformation and disinformation by just using automated systems to deprioritize these sources. Indeed, genuine, lasting solutions to a problem that could be existential for democracies need to be a partnership between search-engine providers and sources of evidence-based knowledge.

The mechanics of how misinformation and disinformation spread have long been an active area of research. According to the ‘illusory truth effect’, people perceive something to be true the more they are exposed to it, regardless of its veracity. This phenomenon pre-dates the digital age 2,3 and now manifests itself through search engines and social media.


In their recent study 1 , Kevin Aslett, a political scientist at the University of Central Florida in Orlando, and his colleagues found that people who used Google Search to evaluate the accuracy of news stories — stories that the authors but not the participants knew to be inaccurate — ended up trusting those stories more. This is because their attempts to search for such news made them more likely to be shown sources that corroborated an inaccurate story.

In one experiment, participants used the search engine to verify claims that the US government engineered a famine by locking down during the COVID-19 pandemic. When they entered terms used in inaccurate news stories, such as 'engineered famine', to get information, they were more likely to find sources uncritically reporting an engineered famine. The results also held when participants used search terms to describe other unsubstantiated claims about SARS-CoV-2: for example, that it rarely spreads between asymptomatic people, or that it surges among people even after they are vaccinated.

Nature reached out to Google to discuss the findings, and to ask what more could be done to make the search engine recommend higher-quality information in its search results. Google’s algorithms rank news items by taking into account various measures of quality, such as how much a piece of content aligns with the consensus of expert sources on a topic. In this way, the search engine deprioritizes unsubstantiated news, as well as news sources carrying unsubstantiated news from its results. Furthermore, its search results carry content warnings. For example, ‘breaking news’ indicates that a story is likely to change and that readers should come back later when more sources are available. There is also an ‘about this result’ tab, which explains more about a news source — although users have to click on a different icon to access it.

Clearly, copying terms from inaccurate news stories into a search engine reinforces misinformation, making it a poor method for verifying accuracy. So, what more could be done to route people to better sources? Google does not manually remove content, or de-rank a search result; nor does it moderate or edit content, in the way that social-media sites and publishers do. Google is sticking to the view that, when it comes to ensuring quality results, the future is automated methods that rank results on the basis of quality measures. But there can be additional approaches to preventing people falling into data voids of misinformation and disinformation, as Google itself acknowledges and as Aslett and colleagues show.


Some type of human input, for example, might enhance internal fact-checking systems, especially on topics on which there might be a void of reliable information. How this can be done sensitively is an important research topic, not least because the end result should be not about censorship, but about protecting people from harm.

There’s also a body of literature on improving media literacy — including suggestions on more, or better education on discriminating between different sources in search results. Mike Caulfield, who studies media literacy and online verification skills at the University of Washington in Seattle, says that there is value in exposing a wider population to some of the skills taught in research methods. He recommends starting with influential people, giving them opportunities to improve their own media literacy, as a way to then influence others in their networks.

One point raised by Paul Crawshaw, a social scientist at Teesside University in Middlesbrough, UK, is that research-methods teaching on its own does not always have the desired impact. Students benefit more when they are learning about research methods while carrying out research projects. He also suggests that lessons could be learnt by studying the conduct and impact of health-literacy campaigns. In some cases, these can be less effective for people on lower incomes 4 , compared with those on higher incomes. Understanding that different population groups have different needs will also need to be factored into media-literacy campaigns, he argues. Research journals, such as Nature , also have a part to play in bridging data voids; it cannot just be the responsibility of search-engine providers. In other words, any response to misinformation and disinformation needs to be a partnership.

Clearly, there’s work to do. The need is urgent, because it’s possible that generative artificial-intelligence and large language models will propel misinformation to much greater heights. The often-mentioned phrase ‘search it online’ could end up increasing the prominence of inaccurate news instead of reducing it. In this super election year, people need to have the confidence to know that, if a piece of news comes from an untrustworthy source, the best choice might be to simply ignore it.

Nature 625, 215-216 (2024)

doi: https://doi.org/10.1038/d41586-024-00030-x

1. Aslett, K. et al. Nature https://doi.org/10.1038/s41586-023-06883-y (2023).
2. Hasher, L., Goldstein, D. & Toppino, T. J. Verbal Learn. Verbal Behav. 16, 107–112 (1977).
3. Brashier, N. M. & Marsh, E. J. Annu. Rev. Psychol. 71, 499–515 (2020).
4. Nutbeam, D. & Lloyd, J. E. Annu. Rev. Public Health 42, 159–173 (2021).



Online information, credibility and the “Google generation”: Research, tips, resources

2013 research review of academic literature as well as practical tips for instilling good online information-seeking habits in “digital natives.”



by John Wihbey, The Journalist's Resource, April 29, 2013


It’s been 15 years since Google was incorporated as a company, yet the public still seems to be feeling its way toward a more sophisticated understanding of how to harness online search technology to find richer, more accurate information.

Young people are often thought to be expert at navigating the digital world — they are “digital natives,” after all — but most research shows that when it comes to seeking credible information, they often fall short. Educators note that students sometimes have a hard time distinguishing between commercially influenced sites and peer-reviewed academic journals, for example. They also see many young people whose patience with the search process can quickly run thin.

What can educators do to solve this? How can this gap be bridged? And for media educators — who carry the greatest general burden for teaching the “art of verification” — what are the basic tools that every student should have? The beginning of any answer is a solid understanding of youth tendencies and generational norms that have evolved around online information. Below is a list of studies and reports that can be helpful to educators trying to understand and improve information-seeking habits.

The second step is giving young people the basic tools to perform online research in a smart way. Of course, various databases and major search engines — Yahoo!, Bing, Google — have different algorithms and tricks for finding things. Twitter, Facebook, YouTube and other platforms have their own internal search dynamics. At the most general level, success begins with taking a deliberate approach and articulating a search strategy.

The Google universe is the most obvious place to start. Dan Russell, Google’s “anthropologist of search,” updates his blog constantly with new tips and exercises that can challenge anyone who is looking to sharpen his or her search skills. The company also offers a variety of educational tools to help train students. Students should be familiar with how to use Google’s “Advanced Search” function and should know the nine basic operators that allow one to search Google in a targeted way:

[Graphic: Google search shortcuts (basic operators for targeted searches)]
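For readers who want something concrete to practice with, the short Python sketch below prints a handful of widely documented Google search operators next to sample queries. It is illustrative only: the operators are drawn from Google's publicly documented search help rather than from the graphic originally shown above, and the query strings are invented for demonstration.

```python
# Illustrative Google search operators with sample queries.
# Assumption: this is a generic list based on Google's public search help,
# not a reproduction of the "nine basic operators" graphic referenced above;
# the query strings themselves are made up for demonstration.
SEARCH_OPERATOR_EXAMPLES = {
    "exact phrase":      '"information literacy"',             # quotes match the phrase verbatim
    "exclude a term":    'credibility -advertising',           # a leading minus drops pages with that term
    "either term":       'misinformation OR disinformation',   # OR broadens the query
    "limit to a domain": 'fact-checking site:edu',             # restrict results to .edu sites
    "limit to a format": 'media literacy filetype:pdf',        # return only PDF documents
    "term in the title": 'intitle:"search strategy"',          # the phrase must appear in the page title
    "related sites":     'related:scholar.google.com',         # pages similar to a known site
    "numeric range":     'teen internet use survey 2010..2013',  # two dots search a range of numbers
    "wildcard":          '"how to * a source"',                # the asterisk stands in for unknown words
}

if __name__ == "__main__":
    for name, query in SEARCH_OPERATOR_EXAMPLES.items():
        print(f"{name:20} {query}")
```

Pasting any of the sample queries directly into the search box demonstrates the corresponding operator; the script is simply a convenient way to keep the examples in one runnable place.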

Microsoft’s Bing also offers its own tips for those doing searches.

At a more specific level, media students should probably have knowledge of good tools that relate to obtaining information about specific people and businesses. Barbara Gray of the New York Times and CUNY Graduate School of Journalism offers tips.

Students should also know that Google Scholar offers a higher grade of information in general, much of which is peer-reviewed (here are some other quality databases).

For greater preparation toward navigating the academic and research world, see “Research tip sheets: Lessons on reading studies, understanding data and methods.”

Below are studies and reports that can provide further insight into a variety of topics in this general area:

“How Teens Do Research in the Digital World” Purcell, Kristen; Rainie, Lee; Heaps, Alan; Buchanan, Judy; Friedrich, Linda; Jacklin, Amanda; Chen, Clara; Zickuhr, Kathryn. Pew Internet and American Life Project, November 2012.

Excerpt: “76% of teachers surveyed ‘strongly agree’ with the assertion that Internet search engines have conditioned students to expect to be able to find information quickly and easily. Large majorities also agree with the assertion that the amount of information available online today is overwhelming to most students (83%) and that today’s digital technologies discourage students from using a wide range of sources when conducting research (71%). Fewer teachers, but still a majority of this sample (60%), agree with the assertion that today’s technologies make it harder for students to find credible sources of information…. Only about one-quarter of teachers surveyed here rate their students ‘excellent’ or ‘very good.’ Indeed, in our focus groups, many teachers suggest that despite being raised in the ‘digital age,’ today’s students are surprisingly lacking in their online search skills. Students receive the lowest marks for ‘patience and determination in looking for information that is hard to find,’ with 43% of teachers rating their students ‘poor’ in this regard, and another 35% rating their students ‘fair.’ ”

“Learning Curve: How College Graduates Solve Information Problems Once They Join the Workplace” Head, Alison J. Project Information Literacy, October 2012.

Findings: College hires tend to give the quickest answer possible when asked to find information. They do so by using Web search engines and scanning the first few pages of results. Most employers were surprised that younger employees rarely use annual reports or phone calls to find answers to pressing questions. When recent graduates cannot find information online, many turn to a trusted co-worker for help with a quick answer. In other situations, they develop a trial-and-error method to solve information problems. Many employers sought recent college graduates who could make use of both online searches and traditional methods in information gathering, and present a synthesis of all information collected. Conversations with college graduates suggest that they perceive speed as a primary virtue in terms of completing professional tasks and requests from managers. They “wanted to prove to employers they were hyper-responsive and capable of solving information problems in an instant — a response they perceived employers wanted from them, based on their interviews and how dazzled some employers were with their computer proficiencies when they first joined the workplace.”

“Conceptual Relationship of Information Literacy and Media Literacy in Knowledge Societies” Lee, Alice; Lau, Jesus; Carbo, Toni; Gendina, Natalia. UNESCO, 2013.

Abstract: “Many novel literacy concepts have been put forward in response to the new social and technological environments. Some are independent and novel, such as digital literacy and information fluency, whereas others are compound concepts such as multiliteracies, transliteracy and media and information literacy (MIL). Recent studies have indicated that future society will comprise the semantic Web, Big Data, cloud computing, smart phones and apps, the Internet of things, artificial intelligence and various new gadgets. In short, it will be an information and communications technology (ICT)-based society. Given the complexity of the next society, this report adopts an integrated approach towards new literacy training by establishing a literacy framework of “21st Century Competencies.”

“Youth and Digital Media: From Credibility to Information Quality” Gasser, Urs; Cortesi, Sandra; Malik, Momin; Lee, Ashley. Berkman Center for Internet and Society, Harvard University, February 2012.

Abstract: “This paper seeks to map and explore what we know about the ways in which young users of age 18 and under search for information online, how they evaluate information, and how their related practices of content creation, levels of new literacies, general digital media usage, and social patterns affect these activities. A review of selected literature … highlights the importance of contextual and demographic factors both for search and evaluation. Looking at the phenomenon from an information-learning and educational perspective, the literature shows that youth develop competencies for personal goals that sometimes do not transfer to school, and are sometimes not appropriate for school. Thus far, educational initiatives to educate youth about search, evaluation, or creation have depended greatly on the local circumstances for their success or failure…. Key findings: (1) Search shapes the quality of information that youth experience online; (2) Youth use cues and heuristics to evaluate quality, especially visual and interactive elements; (3) Content creation and dissemination foster digital fluencies that can feed back into search and evaluation behaviors; (4) Information skills acquired through personal and social activities can benefit learning in the academic context.”

“Understanding the Online Information-Seeking Behaviors of Young People: The Role of Networks of Support” Eynon, Rebecca; Malmberg, L.-E. Journal of Computer Assisted Learning, December 2012, Vol. 28, Issue 6, 514-529. doi: 10.1111/j.1365-2729.2011.00460.x.

Abstract: “In this study we propose and test a model that adds to the existing literature by examining the ways in which parents, schools, and friends (what we call networks of support) effect young people’s online information behaviors, while at the same time taking into account young people’s individual characteristics, confidence and skills to use the Internet. Using path analysis, we demonstrate the significance of networks of support in understanding the uptake of online information seeking both directly and indirectly (through enhancing self-concept for learning and online skills). Young people who have better networks of support, particularly friends who are engaged in technology, are more likely to engage in online information seeking.”

“Trust Online: Young Adults’ Evaluation of Web Content” Hargittai, Eszter; Fullerton, Lindsay; Menchen-Trevino, Ericka; Thomas, Kristin Yates. International Journal of Communication, 2010, Vol. 4, 468-494.

Abstract: “Little of the work on online credibility assessment has considered how the information seeking process figures into the final evaluation of content people encounter. Using unique data about how a diverse group of young adults looks for and evaluates Web content, our paper makes contributions to existing literature by highlighting factors beyond site features in how users assess credibility. We find that the process by which users arrive at a site is an important component of how they judge the final destination. In particular, search context, branding and routines, and a reliance on those in one’s networks play important roles in online information-seeking and evaluation. We also discuss that users differ considerably in their skills when it comes to judging online content credibility.”

“Balancing Opportunities and Risks in Teenagers’ Use of the Internet: The Role of Online Skills and Internet Self-Efficacy” Livingstone, Sonia; Helsper, Ellen. New Media and Society, March 2010, Vol. 12, No. 2, 309-329. doi: 10.1177/1461444809342697.

Abstract: “Informed by research on media literacy, this article examines the role of selected measures of Internet literacy in relation to teenagers’ online experiences. Data from a national survey of teenagers in the U.K. ( N = 789) are analyzed to examine: first, the demographic factors that influence skills in using the Internet; and, second (the main focus of the study), to ask whether these skills make a difference to online opportunities and online risks. Consistent with research on the digital divide, path analysis showed the direct influence of age and socioeconomic status on young people’s access, the direct influence of age and access on their use of online opportunities, and the direct influence of gender on online risks. The importance of online skills was evident insofar as online access, use and skills were found to mediate relations between demographic variables and young people’s experience of online opportunities and risks. Further, an unexpected positive relationship between online opportunities and risks was found, with implications for policy interventions aimed at reducing the risks of Internet use.”

“Parental Education and Children’s Online Health Information Seeking: Beyond the Digital Divide Debate” Zhao, Shanyang. Social Science and Medicine, Vol. 69, Issue 10, November 2009, 1501-1505. http://dx.doi.org/10.1016/j.socscimed.2009.08.039.

Abstract: “Research has shown that increasing numbers of teenagers are going online to find health information, but it is unclear whether there are disparities in the prevalence of online health seeking among young Internet users associated with social and economic conditions. Existing literature on Internet uses by adults indicates that low-income, less-educated, and minority individuals are less likely to be online health seekers. Based on the analysis of data from the Pew Internet and American Life Project for the U.S., this study finds that teens of low-education parents are either as likely as or even more likely than teens of high-education parents to seek online health information. Multiple regression analysis shows that the higher engagement in health seeking by teens of low education parents is related to a lower prevalence of parental Internet use, suggesting that some of these teens may be seeking online health information on behalf of their low-education parents.”

“Digital Na(t)ives? Variation in Internet Skills and Uses among Members of the ‘Net Generation'” Hargittai, Eszter. Sociological Inquiry, February 2010, Vol. 80, No. 1, pp. 92-113. doi: 10.1111/j.1475-682X.2009.00317.x.

Abstract: “People who have grown up with digital media are often assumed to be universally savvy with information and communication technologies. Such assumptions are rarely grounded in empirical evidence, however. This article draws on unique data with information about a diverse group of young adults’ Internet uses and skills to suggest that even when controlling for Internet access and experiences, people differ in their online abilities and activities. Additionally, findings suggest that Internet know-how is not randomly distributed among the population, rather, higher levels of parental education, being a male, and being white or Asian American are associated with higher levels of Web-use skill. These user characteristics are also related to the extent to which young adults engage in diverse types of online activities. Moreover, skill itself is positively associated with types of uses. Overall, these findings suggest that even when controlling for basic Internet access, among a group of young adults, socioeconomic status is an important predictor of how people are incorporating the Web into their everyday lives with those from more privileged backgrounds using it in more informed ways for a larger number of activities.”

“Learning: Peering Backward and Looking Forward in the Digital Era” Weigel, Margaret; James, Carrie; Gardner, Howard. International Journal of Learning and Media, Vol. 1, Issue 1, 2009. doi: 10.1162/ijlm.2009.0005.

Excerpt: “The Internet’s potential for learning may be curtailed if youth lack key skills for navigating it, if they consistently engage with Internet resources in a shallow fashion, and/or if they limit their explorations to a narrow band of things they believe are worth knowing. Left to their own devices and without sufficient scaffolding, student investigations may turn out to be thoughtful and meaningful — or frustrating and fruitless. A successful informal learning practice depends upon an independent, constructivistically oriented learner who can identify, locate, process, and synthesize the information he or she is lacking. More specifically, a variety of cognitive limitations, along with features of current search engines, problematize the identification, depth, and assessment of online searches for the typical student.”


About The Author


John Wihbey

The Problem of Misinformation and Disinformation Online

  • First Online: 15 June 2022

  • Victoria L. Rubin


Chapter 1 frames the problem of deceptive, inaccurate, and misleading information in digital media content and information technologies as an infodemic. Mis- and disinformation proliferate online, yet the solution remains elusive, and many of us run the risk of being woefully misinformed in many aspects of our lives, including health, finances, and politics. Chapter 1 untangles key research concepts—infodemic, mis- and disinformation, deception, “fake news,” false news, and various types of digital “fakes.” A conceptual infodemiological framework, the Rubin (2019) Misinformation and Disinformation Triangle, posits three minimal interacting factors that cause the problem—susceptible hosts, virulent pathogens, and conducive environments. Disrupting the interactions of these factors requires greater effort in educating susceptible minds, detecting virulent fakes, and regulating toxic environments. Given the scale of the problem, technological assistance is inevitable. Human intelligence can and should be, at least in part, enhanced with an artificial one. We require systematic analyses that can reliably and accurately sift through large volumes of data. Such assistance comes from artificial intelligence (AI) applications that use natural language processing (NLP) and machine learning (ML). These fields are briefly introduced, and AI-enabled tasks for detecting various “fakes” are laid out. While AI can assist us, the ultimate decisions remain in our own minds. An immediate starting point is to verify suspicious information with simple digital literacy steps, as exemplified here. Societal interventions and countermeasures that help curtail the spread of mis- and disinformation online are discussed throughout this book.

There has never been, nor will there ever be, a technological innovation that moves us away from the essential problems of human nature. (Broussard, 2019, p. 8)
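The AI-assisted detection the chapter previews can be pictured with a small, hedged sketch. The snippet below is not the book's system; it is a minimal illustration of the kind of NLP/ML text-classification pipeline such detection tasks rest on, and the example texts and labels are hypothetical stand-ins for a real annotated corpus.

```python
# Minimal, illustrative sketch of an NLP/ML "credible vs. falsified" text classifier.
# This is NOT the system described in the chapter; the texts and labels below are
# hypothetical stand-ins for a real annotated corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Health agency approves vaccine after phase 3 trial results",    # credible (hypothetical)
    "Doctors hide miracle cure that reverses aging overnight",       # falsified (hypothetical)
    "Central bank holds interest rates steady, citing inflation",    # credible (hypothetical)
    "Celebrity reveals one weird trick banks don't want you to see", # falsified (hypothetical)
]
labels = ["credible", "falsified", "credible", "falsified"]

# TF-IDF turns each text into word-frequency features; logistic regression
# learns which word patterns correlate with each label.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["Miracle pill melts fat while you sleep, doctors stunned"]))
```

In practice, pipelines of this kind are trained on thousands of labeled items and evaluated on held-out data before any claim about their reliability can be made; the toy corpus above only shows the mechanics.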


See Canada’s National Observer ( 2019 ) “Five Step Guide: How to Spot Fake News” via https://www.nationalobserver.com/spot-fake-news (accessed on March 16, 2021).

By affordances here I mean the actions that are possible within a given environment: what users can do within the parameters of a particular technology, such as what the features of a cellphone allow you to do.

See, for example, www.nytimes.com in the US, www.bbc.co.uk in the UK, or www.cbc.ca in Canada.

This suite of working proof-of-concept applications is freely accessible on GitHub for anyone to download and experiment with.

Current trends and advances in (non-text based) creation and detection of deepfakes (Mirsky & Lee, 2021 ) and their surrounding controversy (Barari et al., 2021 ) may still be of interest to some readers, but are outside of my book’s scope.

AFP Espagne, AFP Hong Kong. (2021, April 12). Moderna boss did not say “vaccines change your DNA.” AFP Fact Check . Retrieved from https://factcheck.afp.com/moderna-boss-did-not-say-vaccines-change-your-dna

Barari, S., Lucas, C., & Munger, K. (2021, January 13). Political deepfakes are as credible as other fake media and (sometimes) real media. OSF Preprints. doi: https://doi.org/10.31219/osf.io/cdfh3

Broussard, M. (2019). Artificial unintelligence: how computers misunderstand the world . MIT Press. Retrieved from https://mitpress.mit.edu/books/artificial-unintelligence


Buller, D. B., & Burgoon, J. K. (1996). Interpersonal deception theory. Communication Theory, 6(3), 203–242.


Burfoot, C., & Baldwin, T. (2009). Automatic satire detection: Are you having a laugh? Proceedings of the ACL-IJCNLP 2009 Conference Short Papers (pp. 161–164). Association for Computational Linguistics.

Canada’s National Observer. (2019, July 19). How to spot fake news . National Observer. Retrieved from https://www.nationalobserver.com/spot-fake-news

Chen, Y., Conroy, N. J., & Rubin, V. L. (2015). News in an online world: the need for an automatic crap detector, Proceedings of the Association for Information Science and Technology, 52 (1), 1–4. https://doi.org/10.1002/pra2.2015.145052010081

Compton, J. R., & Benedetti, P. (2015). News, lies and videotape: the legitimation crisis in journalism . Rabble.ca. Retrieved from http://rabble.ca/news/2015/03/news-lies-and-videotape-legitimation-crisis-journalism

Conroy, N. J., Rubin, V. L., & Chen, Y. (2015). Automatic deception detection: methods for finding fake news. Proceedings of the Association for Information Science and Technology, 52 (1), 1–4. https://doi.org/10.1002/pra2.2015.145052010082

Cook, J., Lewandowsky, S., & Ecker, U. K. H. (2017). Neutralizing misinformation through inoculation: exposing misleading argumentation techniques reduces their influence. PLoS One, 12 (5), e0175799. https://doi.org/10.1371/journal.pone.0175799

Dogo, M. S., Deepak, P., & Jurek-Loughrey, A. (2020). Exploring thematic coherence in fake news. In I. Koprinska, M. Kamp, A. Appice, C. Loglisci, L. Antonie, A. Zimmermann, R. Guidotti, Ö. Özgöbek, R. P. Ribeiro, R. Gavaldà, J. Gama, L. Adilova, Y. Krishnamurthy, P. M. Ferreira, D. Malerba, I. Medeiros, M. Ceci, G. Manco, E. Masciari, et al. (Eds.), ECML PKDD 2020 workshops (pp. 571–580). Springer International Publishing. https://doi.org/10.1007/978-3-030-65965-3_40


Dyakon, T. (2020, December 14). Poynter’s MediaWise training significantly increases people’s ability to detect disinformation, new Stanford study finds . Poynter. Retrieved from https://www.poynter.org/news-release/2020/poynters-mediawise-training-significantly-increases-peoples-ability-to-detect-disinformation-new-stanford-study-finds/

Endsley, M. R. (2018). Combating information attacks in the age of the internet: new challenges for cognitive engineering. Human Factors: The Journal of the Human Factors and Ergonomics Society, 60 (8), 1081–1094. https://doi.org/10.1177/0018720818807357

Feng, S., Banerjee, R., & Choi, Y. (2012). Syntactic stylometry for deception detection. In 50th Annual meeting of the association for computational linguistics, ACL 2012—Proceedings of the conference (pp. 171–175).

Galasinski, D. (2000). The language of deception: a discourse analytical study . Sage Publications.


Glanville, D. (2021, January 6). EMA recommends COVID-19 vaccine Moderna for authorisation in the EU [text]. European Medicines Agency. Retrieved from https://www.ema.europa.eu/en/news/ema-recommends-covid-19-vaccine-moderna-authorisation-eu

Global Legal Research Directorate Staff, L. L. of C. (2019). Government Responses to Disinformation on Social Media Platforms [Web page]. Retrieved from https://www.loc.gov/law/help/social-media-disinformation/index.php

Granhag, P. A., Andersson, L. O., Strömwall, L. A., & Hartwig, M. (2004). Imprisoned knowledge: Criminals’ beliefs about deception. Legal and Criminological Psychology, 9(1), 103.

Hancock, J. T. (2012). Digital deception: when, where and how people lie online. In A. N. Joinson, K. McKenna, T. Postmes, & U.-D. Reips (Eds.), The Oxford handbook of internet psychology (p. 20). Oxford University Press. Retrieved from http://www.oxfordhandbooks.com/10.1093/oxfordhb/9780199561803.001.0001/oxfordhb-9780199561803-e-019

IFLA. (2021, February 18). How to spot fake news . The International Federation of Library Associations and Institutions. Retrieved from https://www.ifla.org/publications/node/11174

Jack, C. (2017). Lexicon of lies: Terms for problematic information . Data & Society Research Institute. Retrieved from https://datasociety.net/pubs/oh/DataAndSociety_LexiconofLies.pdf

Lazer, D. M. J., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F., Metzger, M. J., Nyhan, B., Pennycook, G., Rothschild, D., Schudson, M., Sloman, S. A., Sunstein, C. R., Thorson, E. A., Watts, D. J., & Zittrain, J. L. (2018). The science of fake news. Science, 359 (6380), 1094–1096. https://doi.org/10.1126/science.aao2998

Liddy, E. D., Paik, W., McKenna, M. E., & Li, M. (1999). Natural language information retrieval system and method (United States Patent No. US5963940A). Retrieved from https://patents.google.com/patent/US5963940A/en .

Liu, B. (2012). Sentiment analysis and opinion mining . Morgan & Claypool Publishers. Retrieved from http://www.cs.uic.edu/~liub/FBS/SentimentAnalysis-and-OpinionMining.html

Liu, X., Nourbakhsh, A., Li, Q., Fang, R., & Shah, S. (2015). Real-time rumor debunking on twitter . 1867–1870. doi: https://doi.org/10.1145/2806416.2806651 .

Loomba, S., de Figueiredo, A., Piatek, S. J., de Graaf, K., & Larson, H. J. (2021). Measuring the impact of COVID-19 vaccine misinformation on vaccination intent in the UK and USA. Nature Human Behaviour, 5 (3), 337–348. https://doi.org/10.1038/s41562-021-01056-1

Matt (2015). #ColumbianChemicals hoax: Trolling the Gulf coast for deceptive patterns . Retrieved from https://www.recordedfuture.com/columbianchemicals-hoax-analysis/

Mihalcea, R., & Strapparava, C. (2009). The lie detector: explorations in the automatic recognition of deceptive language. Proceedings of the ACL-IJCNLP 2009 Conference Short Papers (pp. 309–312). Association for Computational Linguistics.

Mirsky, Y., & Lee, W. (2021). The creation and detection of Deepfakes: a survey. ACM Computing Surveys, 54 (1)., 7:1–7:41. https://doi.org/10.1145/3425780

Mitchell, A., Gottfried, J., Stocking, G., Walker, M., & Fedeli, S. (2019, June 5). Many Americans say made-up news is a critical problem that needs to be fixed . Pew Research Center’s Journalism Project. Retrieved from https://www.journalism.org/2019/06/05/many-americans-say-made-up-news-is-a-critical-problem-that-needs-to-be-fixed/

Mitchell, A., Jurkowitz, M., Oliphant, J. B., & Shearer, E. (2020, December 15). Concerns about made-up election news are high, and both parties think it is mostly intended to hurt their side . Pew Research Center’s Journalism Project. Retrieved from https://www.journalism.org/2020/12/15/concerns-about-made-up-election-news-are-high-and-both-parties-think-it-is-mostly-intended-to-hurt-their-side/

Molina, M. D., Sundar, S. S., Le, T., & Lee, D. (2021). “Fake news” is not simply false information: a concept explication and taxonomy of online content. The American Behavioral Scientist (Beverly Hills), 65 (2), 180–212. https://doi.org/10.1177/0002764219878224

Montreal CTV News. (2015a, May 23). Quebec foreign correspondent suspended over allegations of false information . Montreal. Retrieved from https://montreal.ctvnews.ca/quebec-foreign-correspondent-suspended-over-allegations-of-false-information-1.2387746

Montreal CTV News. (2015b, May 29). Montreal journalist Francois Bugingo admits to falsifying stories . Montreal. Retrieved from https://montreal.ctvnews.ca/montreal-journalist-francois-bugingo-admits-to-falsifying-stories-1.2398611

Newall, M. (2020, December 30). More than 1 in 3 Americans believe a ‘deep state’ is working to undermine trump . Ipsos. Retrieved from https://www.ipsos.com/en-us/news-polls/npr-misinformation-123020

Pang, B., & Lee, L. (2008). Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval, 2 (1–2), 1–135. https://doi.org/10.1561/1500000001

Pew Research Center Report. (2019). Trends and facts on online news: state of the news media . Pew Research Center’s Journalism Project. Retrieved from https://www.journalism.org/fact-sheet/digital-news/

Reuters Staff. (2020, May 19). False claim: a COVID-19 vaccine will genetically modify humans . Reuters. Retrieved from https://www.reuters.com/article/uk-factcheck-covid-19-vaccine-modify-idUSKBN22U2BZ

Reuters Staff. (2021, January 13). Fact check: genetic materials from mRNA vaccines do not multiply in your body forever . Reuters. Retrieved from https://www.reuters.com/article/uk-factcheck-genetic-idUSKBN29I30V

Rosen, G. (2020, August 18). Community standards enforcement report . About Facebook. Retrieved from https://about.fb.com/news/2020/08/community-standards-enforcement-report-aug-2020/

Rubin, V. L. (2006). Identifying certainty in texts. Thesis, School of Information Studies, Syracuse University.

Rubin, V. L. (2010). On deception and deception detection: Content analysis of computer-mediated stated beliefs. Proceedings of the Association for Information Science and Technology, 32: 1–10. Retrieved from http://dl.acm.org/citation.cfm?id=1920377

Rubin, V. L. (2017). Deception detection and rumor debunking for social media. In Sloan, L. & Quan-Haase, A. (Eds.), The SAGE Handbook of Social Media Research Methods (pp. 342–364). London: SAGE. https://uk.sagepub.com/en-gb/eur/the-sage-handbook-of-social-media-research-methods/book245370

Rubin, V. L. (2019). Disinformation and misinformation triangle: a conceptual model for “fake news” epidemic, causal factors and interventions. Journal of Documentation , 75 (5), 1013-1034. https://doi.org/10.1108/JD-12-2018-0209

Rubin, V. L., & Chen, Y. (2012). Information manipulation classification theory for LIS and NLP. Proceedings of the American Society for Information Science and Technology, 49 (1), 1–5. https://doi.org/10.1002/meet.14504901353

Rubin, V. L., & Conroy, N. (2012a). Discerning truth from deception: human judgments and automation efforts. First Monday, 17 (3) Retrieved from http://firstmonday.org/ojs/index.php/fm/article/view/3933/3170

Rubin, V. L., & Conroy, N. (2012b). The art of creating an informative data collection for automated deception detection: A corpus of truths and lies. Proceedings of the American Society for Information Science and Technology, 49 , 1–11.

Rubin, V. L., & Lukoianova, T. (2014). Truth and deception at the rhetorical structure level. Journal of the Association for Information Science and Technology, 66 (5), 12. https://doi.org/10.1002/asi.23216

Rubin, V. L., Stanton, J. M., & Liddy, E. D. (2004). Discerning emotions in texts . AAAI Symposium on Exploring Attitude and Affect in Text, Stanford, CA. https://www.aaai.org/Papers/Symposia/Spring/2004/SS-04-07/SS04-07-023.pdf

Rubin, V. L., Liddy, E. D., & Kando, N. (2005). Certainty identification in texts: categorization model and manual tagging results. In J. G. Shanahan, Y. Qu, & J. Wiebe (Eds.), Computing attitude and affect in text: theory and applications (pp. 61–76). Springer-Verlag.

Rubin, V. L., Chen, Y., & Conroy, N. J. (2015). Deception detection for news: three types of fakes . Proceedings of the Association for Information Science and Technology, 52 (1), 1–4. https://doi.org/10.1002/pra2.2015.145052010083

Rubin, V. L., Conroy, N. J., Chen, Y., & Cornwell, S. (2016). Fake news or truth? Using Satirical Cues to Detect Potentially Misleading News. Proceedings of the Second Workshop on Computational Approaches to Deception Detection, 7–17, San Diego, California. Association for Computational Linguistics. http://aclweb.org/anthology/W/W16/W16-0800.pdf

Rubin, V. L., Brogly, C., Conroy, N., Chen, Y., Cornwell, S. E., & Asubiaro, T. V. (2019). A news verification browser for the detection of clickbait, satire, and falsified news. Journal of Open Source Software, 4 (35), 1208. https://doi.org/10.21105/joss.01208

Salton, G., & McGill, M. J. (1983). Introduction to modern information retrieval . McGraw-Hill.


Saurí, R., & Pustejovsky, J. (2012). Are you sure that this happened? Assessing the factuality degree of events in text. Computational Linguistics, 38 (2), 261–299. https://doi.org/10.1162/COLI_a_00096

Scholthof, K.-B. G. (2007). The disease triangle: pathogens, the environment and society. Nature Reviews Microbiology, 5 (2), 152–156. https://doi.org/10.1038/nrmicro1596

Shingler, B. (2015, May 23). Foreign correspondent suspended by media outlets after report he fabricated stories . CBC News. http://www.cbc.ca/news/canada/montreal/françois-bugingo-foreign-correspondent-suspended-by-media-outlets-1.3085118

Tandoc, E. C., Lim, Z. W., & Ling, R. (2018). Defining “Fake News”. Digital Journalism, 6(2), 137–153. https://doi.org/10.1080/21670811.2017.1360143

The Onion. (2015). FIFA frantically announces 2015 summer world cup in United States . Retrieved from http://www.theonion.com/article/fifa-frantically-announces-2015-summer-world-cup-u-50525

Topping, A. (2015, May 31). Ex-FIFA vice president Jack Warner swallows Onion spoof . The Guardian. Retrieved from http://www.theguardian.com/football/2015/may/31/ex-fifa-vice-president-jack-warner-swallows-onion-spoof

Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359 (6380), 1146–1151. https://doi.org/10.1126/science.aap9559

Vrij, A. (2000). Detecting lies and deceit . Wiley.

Wardle, C. (2019). Understanding information disorder . Retrieved from https://firstdraftnews.org/wp-content/uploads/2019/10/Information_Disorder_Digital_AW.pdf?x76701

Wardle, C., & Derakhshan, H. (2017). Information disorder: toward an interdisciplinary framework for research and policy making, Council of Europe DGI . EU DisinfoLab. Retrieved from https://www.disinfo.eu/academic-source/wardle-and-herakhshan-2017/

Wiebe, J., Wilson, T., & Cardie, C. (2005). Annotating expressions of opinions and emotions in language. Language Resources and Evaluation, 39(2), 165–210. https://doi.org/10.1007/s10579-005-7880-9

World Health Organization. (2021a). WHO public health research agenda for managing infodemics . World Health Organization. Retrieved from https://apps.who.int/iris/handle/10665/339192

World Health Organization. (2021b, March 16). Let’s flatten the infodemic curve . WHO Newsroom Spotlight. Retrieved from https://www.who.int/news-room/spotlight/let-s-flatten-the-infodemic-curve

World Health Organization, UN, UNICEF, UNDP, UNESCO, UNAIDS, ITU, UN Global Pulse, & IFRC. (2020, September 23). Managing the COVID-19 infodemic: promoting healthy behaviours and mitigating the harm from misinformation and disinformation. Joint statement. Retrieved from https://www.who.int/news/item/23-09-2020-managing-the-covid-19-infodemic-promoting-healthy-behaviours-and-mitigating-the-harm-from-misinformation-and-disinformation

Wu, K., Yang, S., & Zhu, K. Q. (2015). False rumors detection on Sina Weibo by propagation structures . IEEE International Conference on Data Engineering, ICDE, 651-662.

Zhang, H., Fan, Z., Zheng, J., & Liu, Q. (2012). An improving deception detection method in computer-mediated communication. Journal of Networks, 7(11). https://doi.org/10.4304/jnw.7.11.1811-1816

Zhou, L., & Zhang, D. (2008). Following linguistic footprints: automatic deception detection in online communication. Communications of the ACM, 51 (9), 119–122. https://doi.org/10.1145/1378727.1389972

Zhou, L., Burgoon, J. K., Nunamaker, J. F., & Twitchell, D. (2004). Automating linguistics-based cues for detecting deception in text-based asynchronous computer-mediated communications. Group Decision and Negotiation, 13 (1), 81–106.

Author information

Victoria L. Rubin, Western University, London, ON, Canada

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Rubin, V.L. (2022). The Problem of Misinformation and Disinformation Online. In: Misinformation and Disinformation. Springer, Cham. https://doi.org/10.1007/978-3-030-95656-1_1

Print ISBN: 978-3-030-95655-4. Online ISBN: 978-3-030-95656-1.


Stanford Graduate School of Education


Stanford scholars observe 'experts' to see how they evaluate the credibility of information online.

Researchers found that fact checkers "arrived at more warranted conclusions in a fraction of the time."

How do expert researchers go about assessing the credibility of information on the internet? Not as skillfully as you might guess – and those who are most effective use a tactic that others tend to overlook, according to scholars at Stanford Graduate School of Education.

A new report released recently by the Stanford History Education Group (SHEG) shows how three different groups of “expert” readers – fact checkers, historians and Stanford undergraduates – fared when tasked with evaluating information online.

The fact checkers proved to be fastest and most accurate, while historians and students were easily deceived by unreliable sources.

“Historians sleuth for a living,” said Professor Sam Wineburg, founder of SHEG, who co-authored the report with doctoral student Sarah McGrew. “Evaluating sources is absolutely essential to their professional practice. And Stanford students are our digital future. We expected them to be experts.”

The report’s authors identify an approach to online scrutiny that fact checkers used consistently but historians and college students did not: The fact checkers read laterally, meaning they would quickly scan a website in question but then open a series of additional browser tabs, seeking context and perspective from other sites.

In contrast, the authors write, historians and students read vertically, meaning they would stay within the original website in question to evaluate its reliability. These readers were often taken in by unreliable indicators such as a professional-looking name and logo, an array of scholarly references or a nonprofit URL.

When it comes to judging the credibility of information on the internet, Wineburg said, skepticism may be more useful than knowledge or old-fashioned research skills. “Very intelligent people were bamboozled by the ruses that are part of the toolkit of digital deception today,” he said.

Testing experts, not typical users

The new report builds on research that SHEG released last year, which found that students from middle school through college were easily duped by information online. In that study, SHEG scholars administered age-appropriate tests to 7,804 students from diverse economic and geographic backgrounds.

For the new report, the authors set out to identify the tactics of “skilled” – rather than typical – users. They recruited participants they expected to be skilled at evaluating information: professional fact checkers at highly regarded news outlets, PhD historians with full-time faculty positions at universities in California and Washington state, and Stanford undergraduates.

“It’s the opposite of a random sample,” Wineburg said. “We purposely sought out people who are experts, and we assumed that all three categories would be proficient.”

The study sample consisted of 10 historians, 10 fact checkers and 25 students. Each participant engaged in a variety of online searches while SHEG researchers observed and recorded what they did on-screen.

In one test, participants were asked to assess the reliability of information about bullying from the websites of two different groups: the American Academy of Pediatrics (AAP), the largest professional organization of pediatricians in the world, and the American College of Pediatricians (ACPeds), a much smaller advocacy group that characterizes homosexuality as a harmful lifestyle choice.

“It was extremely easy to see what [ACPeds] stood for,” Wineburg said – noting, for example, a blog post on the group’s site that called for adding the letter P for pedophile to the acronym LGBT. Study participants were asked to evaluate an article on the ACPeds website indicating that programs designed to reduce bullying against LGBT youth “amount to special treatment” and may “validat[e] individuals displaying temporary behaviors or orientations.”

Fact checkers easily identified the group’s position. Historians, however, largely expressed the belief that both pediatricians’ sites were reliable sources of information. Students overwhelmingly judged ACPeds’ site the more reliable one.

In another task, participants were asked to perform an open web search to determine who paid the legal fees on behalf of a group of students who sued the state of California over teacher tenure policies in Vergara v. California, a case that cost more than $1 million to prosecute. (A Silicon Valley entrepreneur financed the legal team, a fact not always mentioned in news reports about the lawsuit.) Again, the fact checkers came out well ahead of the historians and students, searching online sources more selectively and thoroughly than the others.

The tasks transcended partisan politics, Wineburg said, pointing out that advocates across the political spectrum promulgate questionable information online.

“These are tasks of modern citizenship,” he said. “If we’re interested in the future of democracy in our country, we have to be aware of who’s behind the information we’re consuming.”

Smarter way to navigate

The fact checkers’ tactic of reading laterally is similar to the idea of “taking bearings,” a concept associated with navigation. Applied to the world of internet research, it involves cautiously approaching the unfamiliar and looking around for a sense of direction. The fact checkers “understood the web as a maze filled with trap doors and blind alleys,” the authors wrote, “where things are not always what they seem.”

Wineburg and McGrew observed that even historians and students who did read laterally did not necessarily probe effectively: They failed to use quotation marks when searching for contiguous expressions, for instance, or clicked indiscriminately on links that ranked high in search results, not understanding how the order is influenced by search engine optimization. Fact checkers showed what the researchers called click restraint, reviewing search results more carefully before proceeding.
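The quotation-mark tactic mentioned above can be made concrete with a short sketch. It is not taken from the SHEG report; the search URL pattern and the sample phrase are assumptions chosen for illustration. Wrapping a phrase in quotes asks a search engine for contiguous matches rather than pages that merely contain the individual words.

```python
# Illustrative sketch only: constructing an exact-phrase search URL.
# Quoting a phrase requests contiguous matches instead of pages that
# merely contain the individual words somewhere on the page.
from urllib.parse import quote_plus

def exact_phrase_search_url(phrase: str) -> str:
    # "q" is the query parameter used by common search engines; the embedded
    # double quotes signal an exact-phrase match. URL pattern assumed for illustration.
    return "https://www.google.com/search?q=" + quote_plus(f'"{phrase}"')

# Hypothetical phrase a lateral reader might want to check verbatim:
print(exact_phrase_search_url("American College of Pediatricians"))
```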

The authors of the report say their findings point to the importance of redeveloping guidelines for users of all ages to learn how to assess credibility on the internet. Many schools and libraries offer checklists and other educational materials with largely outdated criteria, Wineburg said. “Their approaches fit the web circa 2001.”

In January SHEG will begin piloting new lesson plans at the college level in California, incorporating internet research strategies drawn from the fact checkers’ tactics. Wineburg sees it as one step toward updating a general education curriculum to reflect a new media landscape and the demands of civic engagement.

In the state’s 2016 election alone, he noted, voters were confronted with 17 ballot initiatives to consider. “If people spent 10 minutes researching each one, that would be an act of incredible civic duty,” he said. “The question is, how do we make those 10 minutes count?”



Oxford Handbook of Internet Psychology


19 Digital deception: Why, when and how people lie online

Jeffrey T. Hancock, Department of Communication, Cornell University.

  • Published: 18 September 2012

The prevalence of both deception and communication technology in our personal and professional lives has given rise to an important set of questions at the intersection of deception and technology, referred to as ‘digital deception’. These questions include issues concerned with deception and self-presentation, such as how the Internet can facilitate deception through the manipulation of identity. A second set of questions is concerned with how we produce lies. For example, do we lie more in our everyday conversations in some media than in others? Do we use different media to lie about different types of things, to different types of people? This article examines these questions by first elaborating on the notion of digital deception in the context of the literature on traditional forms of deception. It considers identity-based forms of deception online and the lies that are a frequent part of our everyday communications.

Deception is one of the most significant and pervasive social phenomena of our age (Miller and Stiff 1993). Some studies suggest that, on average, people tell one to two lies a day (DePaulo et al. 1996; Hancock et al. 2004a), and these lies range from the trivial to the more serious, including deception between friends and family, in the workplace and in power and politics. At the same time, information and communication technologies have pervaded almost all aspects of human communication and interaction, from everyday technologies that support interpersonal interactions, such as email and instant messaging, to more sophisticated systems that support organizational interactions.

Given the prevalence of both deception and communication technology in our personal and professional lives, an important set of questions has recently emerged at the intersection of deception and technology, or what we will refer to as ‘digital deception’. These questions include issues concerned with deception and self-presentation, such as how the Internet can facilitate deception through the manipulation of identity. A second set of questions is concerned with how we produce lies. For example, do we lie more in our everyday conversations in some media than in others? Do we use different media to lie about different types of things, to different types of people? Another type of question concerns our ability to detect deception across various media and in different online communication spaces. Are we worse at detecting lies in a text-based interaction than we are face-to-face (ftf)? What factors interact with communication media to affect our ability to catch a liar?

In the present chapter I examine these questions by first elaborating on the notion of digital deception in the context of the literature on traditional forms of deception. The chapter is then divided into two main sections, one concerned with identity-based forms of deception online, and the other focusing on the lies that are a frequent part of our everyday communications.

Digital deception defined

Deception has been studied in a wide variety of contexts (Ekman 2001), including organizational settings (Grazioli and Jarvenpaa 2003a; Schein 2004), forensic and criminal settings (Vrij 2000; Granhag and Stromwall 2004), power and politics (Ekman 1985; Galasinski 2000) and everyday communication (DePaulo et al. 1996; DePaulo and Kashy 1998; Hancock et al. 2004a, b). In the present chapter, we consider deception in the context of information and communication technology, or what I will call digital deception, which refers to the intentional control of information in a technologically mediated message to create a false belief in the receiver of the message. While this definition is an adaptation of Buller and Burgoon's (1996) conceptualization of deception, i.e., 'a message knowingly transmitted by a sender to foster a false belief or conclusion by the receiver' (1996: 205), its characteristics are consistent with most definitions of deception (for a review of the many issues associated with defining deception, see Bok 1978; Galasinski 2000). The first characteristic is that an act of deception must be intentional or deliberate. Messages that are unintentionally misleading are usually not considered deceptive, but instead are described as mistakes or errors (Burgoon and Buller 1994). Similarly, forms of speech in which the speaker does not mean what they say but intends for the addressee to detect this, such as irony or joking, are not considered deceptive. The second characteristic is that deception is designed to mislead or create a false belief in some target. That is, the deceiver's goal is to convince someone else to believe something that the deceiver believes to be false. These characteristics can be observed, for example, in Ekman's (2001: 41) definition—'deliberate choice to mislead a target without giving any notification of the intent to do so'—and in DePaulo et al.'s (2003: 74)—'a deliberate attempt to mislead others.'

Digital deception requires an additional characteristic, namely that the control or manipulation of information in a deception is enacted in a technologically mediated message . That is, the message must be conveyed in a medium other than the basic ftf setting. As such, digital deception involves any form of deceit that is transmitted via communication technology, such as the telephone, email, instant messaging, chat rooms, newsgroups, weblogs, listservs, multiplayer online video games etc.

Although a number of different typologies have been proposed for categorizing deception (for example, deception by omission vs. by commission, active vs. passive deception; see Robinson 1996; Galasinski 2000), for the purposes of discussing how the Internet and communication technologies may affect deception and its detection, I break digital deception down into two broad types: those based on a communicator's identity, and those based on the actual messages that comprise a communication. In particular, identity-based digital deception refers to deceit that flows from the false manipulation or display of a person's or organization's identity. For example, an email designed to look like it originated from someone in Africa who needs a partner to extricate vast sums of money (in order to trick the recipient into providing their bank information) is a case of identity-based digital deception. Message-based digital deception, in contrast, refers to deception that takes place in the communication between two or more interlocutors or agents. In particular, it refers to deception in which the information in the messages exchanged between interlocutors is manipulated or controlled to be deceptive. For example, when one friend calls another on his mobile phone to say that he will be late to their meeting because the traffic is bad (when in fact he simply left the office late), he is engaging in message-based digital deception. The two friends' identities are known to one another, but the information provided by the first friend has been manipulated to create a false belief in the second friend.

Clearly these identity-based and message-based forms of digital deception are not mutually exclusive. Indeed, the messages in a communication may serve to enhance a deception about identity and, when identity-based digital deception is enacted, the messages that make up the communication are more than likely to be deceptive as well. In the email example above, for instance, there are several possible relationships between identity- and message-based deceptions. The identity of the sender may be deceptive (i.e., the person is not really someone in Africa), but the message truthful (e.g., the person really does have access to money). Or, the identity of the sender may be accurate (i.e., the person really is in Africa) but the message deceptive (e.g., the person does not have access to money). Or, both the identity and the message could be false. As such, the distinction between identity-based and message-based deception is not set in stone, but is intended only as a pragmatic distinction that may help us consider how communication technologies may or may not affect deception.

Finally, it should be noted that the definition of digital deception described above includes a number of issues that are beyond the scope of this chapter. For example, the advent of sophisticated and relatively inexpensive digital editing software makes image-based digital deception, such as misleading editing or selection, an important issue (Messaris 1997; Galasinski 2000). Similarly, the very broad topic of information security, such as attacks on and vulnerabilities of information infrastructure (see Schneider 1999), and hacking and deceptive intrusion into information networks (see Stolfo et al. 2001), will not be discussed here. Instead, the focus will be on deception in our everyday mediated communication.

Identity-based digital deception

Perhaps the most obvious deception issue to consider is the set of affordances provided by information and communication technologies for manipulating or obscuring our identity. As Turkle (1995) observed, the relative anonymity and multiple modes of social interaction provided by the many forms of online communication conducted via the Internet give users unique opportunities to play with their identity and explore their sense of self. As many have now noted (e.g., Walther 1996; Berman and Bruckman 2001; Bargh et al. 2002; Spears et al. 2002; Walther and Parks 2002), because online communication typically involves text-based interaction or virtual representations of self (e.g., avatars), people can self-present in ways that they cannot in ftf encounters. Boys can be girls, the old can be young, ethnicity can be chosen, 15-year-olds can be stock analysts—and on the Internet no one knows you're a dog.

While this growing body of research has revealed some of the fascinating effects that the relative anonymity of the Internet can have on identity and social interaction, such as the enhancement of group effects (e.g., Postmes et al. 1999; Douglas and McGarty 2001) and the potential hyperpersonalization of interpersonal interactions (Walther 1996; Hancock and Dunham 2001a; Walther et al. 2001), the affordances of online communication for manipulating identity also have important implications for deception. In one of the first systematic investigations of identity-based deception in online contexts, Donath (1998) observed how different aspects of Usenet newsgroups (asynchronous text-based message exchange systems supporting a wide range of topical discussions) affected participants' sense of identity and their abilities to deceive or be deceived by their fellow community members.

Drawing on models of deception from biology (e.g., Zahavi 1993), Donath distinguished between assessment signals, which are costly displays directly related to an organism's characteristics (e.g., large horns on a stag), and conventional signals, which are low-cost displays that are only conventionally associated with a characteristic (e.g., a powerful-sounding mating call). In online communication, conventional signals include most of the information that is exchanged in messages, including what we say (e.g., that I'm very wealthy) and the nicknames we use to identify ourselves (e.g., 'richie-rich'). Assessment signals may be more difficult to come by online, but can include links to a person's 'real-world' identity, such as a phone number or an email address (e.g., emails ending in .ac.uk or .edu suggest that the person works at a university), or levels of knowledge that only an expert could display (e.g., highly technical information about a computer system).

Online, conventional signals are an easy target for deceptive identity manipulation, and Donath notes several types of deceptive identity manipulation in Usenet communities, including trolling, category deception and identity concealment. Trolling refers to an individual posing as a legitimate member of a community who posts messages intended to spark intense fights within the community. Category deception refers to deceptions that manipulate our perceptions of individuals as members of social groups, or categories, such as male vs. female, white vs. black, student vs. worker, hockey player vs. squash player. Online, gender deception is perhaps the most commonly discussed example of category deception (e.g., Turkle 1995; Berman and Bruckman 2001; Herring and Martinson 2004). Finally, identity concealment refers to hiding or omitting aspects of one's identity in order to shield it, such as using a pseudonym when posting.

Research by Whitty and her colleagues (Whitty and Gavin 2001 ; Whitty 2002 ) suggests that the notion of using deception to shield one's identity is important for many participants interacting in relatively anonymous online spaces, such as chat rooms. In particular, in one survey of chat room participants, women reported using deception to conceal their identity for safety reasons, such as avoiding harassment. Men, on the other hand, reported using identity deception in order to allow themselves, somewhat paradoxically, to be more expressive and to reveal secrets about themselves (Utz 2005 ). Indeed, a number of studies have suggested that self-disclosure and honesty tend to increase online when participants' identities are not manifest (e.g., Joinson 2001 ; Bargh et al.   2002 ).

More recently, however, the Internet has evolved from a virtual space for exchanging information, chatting with others and forming virtual communities into a massive venue for financial and business transactions, with estimates of revenue generated from online transactions in the billions, and an increasing number of businesses and individuals engaging in commerce online. As might be expected, more serious and criminal forms of deception are keeping pace with the increase in money flowing through the Internet (Grazioli and Jarvenpaa 2003b). Indeed, the Internet Fraud Complaint Center (IFCC 2003) reported almost fifty thousand incidents of fraud online, a threefold increase from the previous year; the majority involved fraudulent Internet auctions, but reports also included credit card fraud and identity theft, in which someone's personal information is stolen and used for the gain of another individual.

In their work on deception that takes place in business and consumer contexts, such as touting unsound investments for personal gain or making misleading claims about goods for sale at an auction site, Grazioli and Jarvenpaa (2003a, b) have identified seven common deception tactics. The first three tactics are concerned with obscuring the nature of the goods to be transacted, and include:

  • Masking—eliminating critical information regarding an item (e.g., failing to disclose that the publisher of a newsletter receives advertisement money from stocks the newsletter recommends)
  • Dazzling—obscuring critical information regarding an item (e.g., free trials that lead to automatic enrolment without making that clear to consumers)
  • Decoying—distracting the victim's attention from the transaction (e.g., offers of free products that require the revealing of highly detailed personal information).

The other four types of deception tactics involve manipulating information about the transaction itself, and include:

  • Mimicking—assuming someone else's identity or modifying the transaction so that it appears legitimate (e.g., the creation of a 'mirror' bank site virtually identical to the legitimate site, inducing users to disclose personal information such as account information)
  • Inventing—making up information about the transaction (e.g., Internet auctioneers who advertise merchandise that they do not have)
  • Relabelling—describing a transaction expressly to mislead (e.g., selling questionable investments over the Internet as sound financial opportunities)
  • Double play—convincing a victim that they are taking advantage of the deceiver (e.g., emails designed to look like internal memos sent by mistake and which appear to contain insider information).

As Grazioli and Jarvenpaa ( 2000 ) note, the Internet offers a highly flexible environment for identity-based forms of deception that can make it difficult for even technologically savvy users to detect deception.

While the Internet certainly offers a number of advantages to the deceiver that may not be available face-to-face, an important question is whether one is more likely to encounter identity-based deception online or in more traditional face-to-face social exchanges. While this question is difficult to address for obvious reasons, a recent report comparing identity fraud that took place online or ftf suggests that identity fraud is still much more likely to take place ftf, and that when it does occur online it tends to be much less costly than when it occurs offline (Javelin Strategy and Research 2005).

While this is only one report, it serves as a reminder that although Internet-based communication provides many features that may facilitate identity-based digital deception, and although this type of deception appears to be on the rise online, more traditional ftf forms of communication are certainly not immune to identity-related deception. Nonetheless, identity-based digital deception is an important area for future research, especially given reports that criminal entities, such as organized crime and terrorist organizations, are increasingly relying on information technologies to communicate (Knight 2004).

Message-based digital deception

Although we typically associate Internet-based communication with relatively anonymous communication spaces, such as chat rooms, newsgroups, online games, etc., most people's everyday use of communication technologies tends to be with people they know, such as an email to a colleague, an instant message with a friend, or text messaging on the phone with a spouse. In these instances, much like many of our ftf interactions, the identity of our interlocutors is known to us. How do communication technologies affect deception when identities are known? Let us consider first the production of digital deception.

Producing digital deception

Research suggests that deception in general is a fundamental and frequent part of everyday human communication, both in interpersonal settings and in work and organizational contexts (Camden et al. 1984; Lippard 1988; Metts 1989; DePaulo et al. 1996; Hancock et al. 2004a, b). Some research suggests that people tell an average of one to two lies a day (DePaulo et al. 1996; Hancock et al. 2004a, b), and these daily lies range from the trivial, such as a false opinion about someone's appearance, to more serious matters, such as deception in business and legal negotiations, power and politics, and workplace issues. Indeed, as noted above, some have argued that deception is one of the most pervasive social phenomena of our age (Miller and Stiff 1993).

How do communication technologies affect the frequency with which we produce lies? In particular, are we more likely to lie in some media than in others? Some have speculated that Internet-based communication is rife with deception. For example, Keyes ( 2004 : 198) argues that ‘electronic mail is a godsend. With email we needn't worry about so much as a quiver in our voice or a tremor in our pinkie when telling a lie. Email is a first rate deception-enabler’. While this may reflect a popular view of how communication technology might affect deception, theoretical approaches to media effects suggest several possible ways that media may affect lying behaviour.

Media Richness Theory (Daft and Lengel 1986; Daft et al. 1987), for example, assumes that users will choose rich media, which have multiple cue systems, immediate feedback, natural language and message personalization, for more equivocal and complex communication activities. Because lying can be considered a complex type of communication, media richness theory predicts that users should choose to lie most frequently in rich media, such as ftf, and least frequently in less rich media, such as email. In contrast, DePaulo et al. (1996) argued that because lying makes people uncomfortable, users should choose less rich media in order to maintain social distance between the liar and the target, an argument I refer to as the social distance hypothesis. According to this hypothesis, users should choose email most frequently for lying, followed in order by instant messaging, the telephone and finally ftf (see also Bradner and Mark 2002).

Note that both of these approaches assume that communication technology varies along only a single underlying dimension (i.e., richness or distance) that will influence deception, and they ignore other differences in media design that may have important implications for deception. In our feature-based model of media and deception (Hancock et al. 2004a, b), we proposed that at least three features of media are important for the act of deception: (1) the synchronicity of the medium (i.e., the degree to which messages are exchanged instantaneously and in real time), (2) the recordability of the medium (i.e., the degree to which the interaction is automatically documented), and (3) whether or not the speaker and listener are distributed (i.e., do not share the same physical space).

In particular, synchronous media should increase opportunities for deception because the majority of lies are unplanned and tend to emerge spontaneously from conversation (DePaulo et al.   1996 ). For example, if during a conversation a new friend says to another that his favorite movie is one that she hates, she is now presented with a decision to lie or not about her opinion of the movie. This type of emergent opportunity is less likely to arise when composing an email. Thus, media that are synchronous, such as ftf and telephone, and to a large degree instant messaging, should present more situations in which deception may be opportune.

The more recordable a medium, the less willing users should be to speak falsely. Email is perhaps the most recordable interpersonal medium we have ever developed, with copies being saved on multiple computers (including the target's). In contrast, ftf and telephone conversations are typically recordless. Although instant messaging (IM) conversations are logged for the duration of an exchange and can be easily saved, most people do not save their IM conversations. Of course, this may change as IM enters the workplace and companies begin automatically recording IM exchanges by employees. In order to avoid being caught, speakers may choose to lie more frequently in recordless media, such as ftf and the telephone, than in more recordable media, such as email and instant messaging.

Finally, media in which participants are not distributed (i.e., they are co-present) should constrain deception to some degree because they limit deception involving topics or issues that are contradicted by the physical setting (e.g., 'I'm working on the case report' when in fact the speaker is surfing news on the Web). In fact, software is now available for mobile phones that plays ambient noise consistent with a lie (e.g., the sounds of an office when in fact you are in a car). Because mediated interactions such as the phone, IM and email involve physically distributed participants, this constraint should be reduced relative to ftf interactions. Some support for this notion comes from a study by Bradner and Mark (2002), in which participants were more likely to deceive a partner when they believed their partner was in a distant city than when they believed the partner was in the same city.

According to our feature-based model, the more synchronous and distributed but less recordable a medium is, the more frequently lying should occur. As such, if these design features of communication media affect deception, then lying should occur most frequently on the telephone, followed by ftf and instant messaging, and least frequently via email.
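
To make this ordering concrete, here is a minimal sketch (in Python, not from the chapter) that assumes equal-weight, binary ratings of the three features and ranks four media by a simple additive 'deception opportunity' score; the ratings and the scoring rule are illustrative assumptions, not part of the published model.

```python
# Minimal sketch (illustrative only): ranking media by the three features of the
# feature-based model, assuming equal-weight, binary ratings for each feature.

from dataclasses import dataclass


@dataclass
class Medium:
    name: str
    synchronous: bool  # messages are exchanged in real time
    recordless: bool   # the interaction is NOT automatically documented
    distributed: bool  # speakers do not share the same physical space

    def deception_opportunity(self) -> int:
        # More synchronous, more distributed and less recordable -> more lying predicted.
        return int(self.synchronous) + int(self.recordless) + int(self.distributed)


media = [
    Medium("face-to-face", synchronous=True, recordless=True, distributed=False),
    Medium("telephone", synchronous=True, recordless=True, distributed=True),
    Medium("instant messaging", synchronous=True, recordless=False, distributed=True),
    Medium("email", synchronous=False, recordless=False, distributed=True),
]

for m in sorted(media, key=Medium.deception_opportunity, reverse=True):
    print(f"{m.name}: predicted deception opportunity = {m.deception_opportunity()}")
# Under these assumed ratings the telephone scores highest, face-to-face and
# instant messaging tie, and email scores lowest, mirroring the predicted ordering.
```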

To test the predictions flowing from the theories described above, we (Hancock et al. 2004a) conducted a diary study adapted from DePaulo et al.'s (1996) procedures. After a training session on how to record and code their own social interactions and deceptions, participants recorded all of their lies and social interactions for seven days. For each interaction, they recorded the medium in which the interaction took place (ftf, phone, IM or email) and whether or not they lied. The results suggested that participants lied most frequently on the telephone (37 per cent of social interactions), followed by ftf (27 per cent) and IM interactions (21 per cent), and that they lied least by email (14 per cent). These data are not consistent with either media richness theory or the social distance hypothesis, which predict that deception will vary linearly along a single dimension, such as richness or social distance. In contrast, the data are consistent with our feature-based model of deception, which predicted that deception production should be highest in synchronous, recordless and distributed media. The data also go against the conventional wisdom that the online world is rife with deception and subterfuge.

Although the features described in the feature-based model predicted overall rates of digital deception, lies are not homogeneous (DePaulo et al. 1996; Feldman et al. 2002). Deception, for example, can be about one's actions ('I'm in the library' when in fact the speaker is at the pub), feelings ('I love your shirt', said of a friend's ugly shirt), facts ('I'm an A student') and explanations ('I couldn't make it because my car broke down'). Do people select different types of media for different types of deception? The feature-based model of deception makes several predictions. First, lies about actions should be less likely to occur in non-distributed communicative settings, where the target of the lie can physically see the speaker. Because lies about feelings are most likely to arise in synchronous interactions (e.g., a friend asking whether you like their ugly shirt), lies about feelings were predicted to occur most frequently face-to-face and on the telephone. Lies about facts should be least likely to be told in recordable media that can later be reviewed, such as email. Finally, explanation-type lies were predicted to take place most frequently in asynchronous media, such as email, which provide the liar with more time to construct and plan their explanation than synchronous media.

People also lie differently to different types of people. For example, because people report valuing authenticity and trust in close relationships, people tend to lie less to close relationship partners, such as spouses, family and friends, than to casual relationship partners, such as acquaintances, colleagues and strangers (Metts 1989 ; Millar and Millar 1995 ; DePaulo and Kashy 1998 ). Lies to close and casual relationship targets also seem to differ qualitatively. In particular, lies told in close relationships tend to be more altruistic, in which the lie is told primarily to benefit the target (e.g., false compliments, pretend agreement) than self-serving, in which the lie benefits the liar, while lies in casual relationships tend to be more self-serving than altruistic.

In order to examine whether people used different media to lie about different things or to different people, we conducted another diary-based study in which we also assessed the content and target of the lie (Hancock et al. 2004b). While we saw the same pattern of deception frequency across media (i.e., the highest rate of deception on the phone, followed by ftf and IM, and the lowest rate in email), the data provided only mixed support for our predictions regarding deception content and target relationship. As predicted, asynchronous interactions (i.e., email) involved the fewest lies about feelings but the most explanation-based lies, which involve accounts of why some event or action occurred (for example, 'My dog ate my homework' as an explanation for why a student didn't complete the homework). Distributed media were predicted to involve more lies about actions, but this was only true for lies on the telephone. Finally, lies about facts did not differ across media. With respect to relationships, relative to ftf interactions, phone lies were most likely to be told to family and significant others. Instant messaging lies were most likely to be told to family. Finally, email lies were most likely to involve lies to higher-status individuals, such as a student's professor.

Carlson and George (2004; George and Carlson 2005) have taken a similar approach to examining how the features of a medium, including synchronicity, recordlessness and richness, may affect deception production. While synchronicity and recordlessness also appear in the feature-based model described above, Carlson and George (2004) argue that synchronicity may be preferred by deceivers for a somewhat different, and very good, reason, namely because it increases the deceiver's ability to assess and react to the receiver's behaviour. Richness is considered an advantage for deception for the same reason: increased richness should give the deceiver more control over whether the receiver perceives them as truthful. In this approach, however, richness is determined not only by the availability of cues and the speed of feedback, but also by the participant's experience with that medium (Carlson and Zmud 1999).

In two studies, Carlson and George ( 2004 ; George and Carlson 2005 ) provided a variety of scenarios to business managers that described situations in which they would be required to lie. In general, the results suggested that participants were most likely to choose synchronous and recordless media when they needed to lie, regardless of the severity of the situation. Although these data are generally consistent with the feature-based model, the results in these studies suggested that ftf tended to be the most frequent choice for deception, not the telephone. One possible reason for this difference may be the method employed, which does not control for the different baseline frequencies with which we interact in different media. That is, despite the wide range of communication technologies available to us, the majority of our interactions tend to be ftf. As such, we might expect ftf to be the place that people imagine they will lie most frequently in absolute terms simply because that is where most of their interactions take place.

Regardless of this methodological difference, when considered together, the data from these studies and the ones described above suggest that, contrary to some speculations (e.g., Keyes 2004), asynchronous and recordable media, such as email, are unlikely venues for lying in everyday communication. Instead, more synchronous and recordless forms of media, such as the telephone and ftf settings, appear to be where we lie most.

A final question concerning how technology might affect deception is whether our language use differs when we lie online compared to when we tell the truth. In groundbreaking work in this area, Zhou and colleagues (Zhou et al. 2004a, b; Zhou and Zhang 2004) use computer-assisted, automated analysis of linguistic cues to classify deceptive and non-deceptive text-based communication. In this approach, the language of deceptive and truthful participants' communication is subjected to an automated analysis along a number of linguistic dimensions, including word count, pronoun usage, expressivity, affect and non-immediacy (i.e., less self-reference), among others. For example, in one study examining asynchronous text-based exchanges, Zhou et al. (2004a) found that, compared to truth-tellers, liars used more words, were more expressive, non-immediate and informal, and made more typographical errors. In one of our studies (Hancock et al. in press a), we found similar patterns in synchronous online interaction (i.e., instant messaging), including increased word use and fewer self-references during deception. Perhaps even more interestingly, we also found that the language of the targets of lies, who were blind to the deception manipulation, also changed systematically depending on whether they were being lied to or told the truth. In particular, when being lied to, targets used shorter sentences and asked more questions. These data suggest the fascinating possibility that targets had an implicit awareness of, or suspicion about, the veracity of their partner, despite the fact that when asked whether they thought their partners were lying they performed at chance levels. While additional work is required in this novel line of research, these data suggest that how people use language online may change systematically according to whether or not they are being truthful. If this is the case, then the implications for deception detection online are substantial. We turn now to this issue: the detection of digital deception.
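
As a rough illustration of the kind of surface cues these automated analyses draw on, the sketch below (a toy example, not the tools used by Zhou and colleagues or in our studies) counts words, first-person singular pronouns (a crude proxy for self-reference) and question marks in a message; the cue set and word list are assumptions made for illustration only.

```python
# Toy sketch (illustrative only): a few surface linguistic cues of the kind used
# in automated analyses of deceptive text, computed for a single message.

import re

FIRST_PERSON_SINGULAR = {"i", "me", "my", "mine", "myself"}  # assumed word list


def linguistic_cues(message: str) -> dict:
    words = re.findall(r"[a-z']+", message.lower())
    word_count = len(words)
    self_refs = sum(1 for w in words if w in FIRST_PERSON_SINGULAR)
    return {
        # Liars in the studies cited tended to use more words overall.
        "word_count": word_count,
        # ...and fewer self-references (first-person singular pronouns).
        "self_reference_rate": self_refs / word_count if word_count else 0.0,
        # Targets of lies tended to ask more questions.
        "question_marks": message.count("?"),
    }


print(linguistic_cues("I was stuck in traffic, honestly. Why would I lie to you?"))
```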

Detecting digital deception

While an extensive literature has examined deception detection in ftf contexts (for reviews, see Zuckerman and Driver 1985; Vrij 2000; DePaulo et al. 2003), the question of how communication technologies affect deception detection has only begun to be addressed. Are we worse at detecting a lie in a text-based interaction than we are in a face-to-face exchange? How do factors that affect deception detection in ftf contexts, such as motivation, suspicion and non-verbal cues, interact with the effects of communication technology?

Although the extensive literature on ftf deception detection suggests that our accuracy at detecting deception tends to be around chance (Vrij 2000), there are a number of factors that appear to reliably influence an individual's ability to detect deceit, and these factors may have important implications in the context of digital deception. Perhaps the most intuitively obvious factor for digital deception is the reduction, in mediated communication, of the non-verbal cues that are associated with deception. Previous research suggests that there is a small set of reliable verbal, non-verbal and vocal cues to deception (for a review, see DePaulo et al. 2003). Perhaps the most important of these are 'leakage cues', which are non-strategic behaviours (usually non-verbal) that are assumed to betray the sender's deceptive intentions or feelings, such as a decrease in illustrators and body movements, and higher pitch (Ekman 2001).

Given that these types of leakage cues are eliminated in text-based CMC interactions, one might suppose that deception detection would be less accurate in CMC than in ftf interactions (Hollingshead 2000). However, the relationship between communication media and deception appears to be much more complex than a simple reduction of cues. In perhaps the first theoretical framework to consider systematically the detection of message-based digital deception, Carlson et al. (2004) draw on Interpersonal Deception Theory (Buller and Burgoon 1996) to identify a number of variables that may interact with the communication medium in the context of deception detection. These factors include (1) characteristics of the deceiver and receiver, and of their relationship, and (2) aspects of the communication event and the medium in which it takes place.

Characteristics of the deceiver and receiver that are considered relevant to success rates of deception detection include the motivation to lie or catch a lie, each individual's intrinsic abilities at deceiving or detecting deceit, aspects of the task and the various cognitions and affect that may arise from the discomfort associated with lying. Experience and familiarity are also assumed to play an important role in the model, including the relational experience between the deceiver and receiver, as well as both individuals' experience with the communication medium and context.

Aspects of the communication medium that are considered important include synchronicity, symbol variety (i.e., the number of different types of language elements and symbols available, including letters, basic symbols, fonts, etc.), cue multiplicity (i.e., the number of simultaneous information channels supported), tailorability (i.e., the ability to customize the message for the audience), reprocessability (i.e., the inverse of the recordlessness described above) and rehearsability (i.e., the degree to which the medium gives participants time to plan, edit and rehearse messages). In this model, the relationships between these variables and deception detection are not assumed to be simple or one-to-one. Instead, the model assumes a 'deceptive potential' that is derived from constellations of these media variables. In particular, Carlson et al. propose that media with higher levels of symbol variety, tailorability and rehearsability increase deceptive potential and reduce the likelihood of deception detection, while media with higher levels of cue multiplicity and reprocessability decrease deceptive potential.
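
Carlson et al. do not specify how deceptive potential should be computed, so the following sketch is purely hypothetical: it assumes a simple additive score in which symbol variety, tailorability and rehearsability raise a medium's deceptive potential while cue multiplicity and reprocessability lower it, with made-up ratings for two example media.

```python
# Hypothetical sketch only: Carlson et al. (2004) give no formula, so deceptive
# potential is assumed here to be a simple additive score over rough 0-1 ratings.


def deceptive_potential(symbol_variety: float, cue_multiplicity: float,
                        tailorability: float, reprocessability: float,
                        rehearsability: float) -> float:
    # Symbol variety, tailorability and rehearsability raise deceptive potential;
    # cue multiplicity and reprocessability lower it.
    return (symbol_variety + tailorability + rehearsability
            - cue_multiplicity - reprocessability)


# Made-up ratings for two media, purely for illustration.
email = deceptive_potential(symbol_variety=0.8, cue_multiplicity=0.2,
                            tailorability=0.9, reprocessability=0.9,
                            rehearsability=0.9)
video_chat = deceptive_potential(symbol_variety=0.5, cue_multiplicity=0.9,
                                 tailorability=0.4, reprocessability=0.3,
                                 rehearsability=0.2)
print(f"email: {email:+.1f}, video chat: {video_chat:+.1f}")
```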

An important underlying assumption of this model, derived from Interpersonal Deception Theory, is that deception is a strategic act that is part of an ongoing, interactive communication process, and that all of the factors described above interact in important and predictable ways. A number of the factors described in the model have begun to be examined in several recent studies of deception detection in online communication (Heinrich and Borkenau 1998; George and Carlson 1999; Hollingshead 2000; Horn 2001; Horn et al. 2002; Burgoon et al. 2003; George and Marrett 2004; Carlson and George 2004, Study 2; George et al. 2004; Hancock et al. in press b).

A survey of these studies suggests that, as Carlson et al. ( 2004 ) predict, the relationship between communication media and deception detection is not a simple one. Some studies, for example, have found more accurate deception detection in richer media (e.g., Heinrich and Borkenau 1998 ; Burgoon et al.   2003 ), others have found higher accuracy in less rich media (e.g., Horn et al.   2002 ), while still others have found no overall difference between media (Hollingshead 2000 ; George and Marrett 2004 ; Woodworth et al. 2005). Instead, it appears that a number of factors, such as those described above, interact with the communication medium to determine deception detection accuracy.

Hancock et al. (in press b), for example, examined the impact of motivation of the deceiver and the communication medium on deception detection. People who are highly motivated to get away with their deceptive behaviour tend to act differently than those who are less concerned with the outcome, and their non-verbal behaviour (e.g., increased behavioural rigidity) is more likely to give them away (DePaulo et al.   1983 ). The observation that highly motivated liars are more likely to be detected has been referred to as the motivational impairment effect (DePaulo and Kirkendol 1989 ).

Because CMC eliminates non-verbal cues, the motivational impairment effect should be attenuated for highly motivated liars interacting in CMC. In addition, Burgoon and her colleagues (Burgoon and Buller 1994; Buller and Burgoon 1996) argue that moderately motivated liars engage in strategic communication behaviours to enhance their credibility. If that is the case, then there are several aspects of the CMC environment that should be advantageous to a sufficiently motivated liar (Carlson et al. 2004): (1) CMC speakers have more time to plan and construct their utterances, and (2) CMC settings enable senders to carefully edit their messages before transmitting them to their partner, even in synchronous CMC, which affords speakers greater control over message generation and transmission (Hancock and Dunham 2001b). As such, CMC may not only attenuate the motivational impairment effect, but actually reverse it.

To test this possibility, Hancock et al. (in press b) examined deceptive and truthful interactions in ftf and CMC environments. Half of the senders were motivated to lie by being told that research has shown that successful liars tend to have better jobs, higher incomes and more success in finding a mate (see Forrest and Feldman 2000), while the other half were not. Deception detection accuracy did not differ across ftf and CMC conditions or across motivation levels. However, an interaction between communication environment and motivation was observed. Consistent with the motivational impairment effect, motivated liars in the ftf condition were detected more accurately than unmotivated liars. In contrast, motivated liars in the CMC condition were detected less accurately than unmotivated liars. In fact, a comparison across the four conditions in the study reveals that the highly motivated CMC liars were the most successful at deceiving their partners.

We refer to this observation as the Motivation Enhancement Effect , which has a number of important implications for digital deception. For example, investigators have warned of the increasing number of intrinsically highly motivated sexual offenders (particularly paedophiles) who have been using various online communication forums to lure potential victims (Mitchell et al.   2001 ). This is a particularly important development given the results of the present study, which suggest that highly motivated liars in CMC contexts are not detected very accurately.

As this study suggests, and as the Carlson et al. (2004) model predicts, the effect of communication technologies on how humans detect deception is complex. Another interesting line of detection research, however, involves computer-assisted detection of deception (Burgoon et al. 2003; Burgoon and Nunamaker 2004). As described above, research on automated textual analysis suggests that there are detectable differences in linguistic patterns across deceptive and non-deceptive text-based communication (e.g., Zhou et al. 2004a; Hancock et al. in press a). Can a tool be developed that exploits these differences to detect digital deception in real time, as an interaction unfolds? While the prospect of creating this type of tool is appealing, the task of automating the detection of a communication process as complex as digital deception is clearly a daunting one (Burgoon and Nunamaker 2004). Nonetheless, the research findings from the studies described above, which suggest that text-based cues (e.g., word quantity, pronoun use, etc.) have high diagnostic value for digital deception, together with the tremendous advances in computing power and statistical classification techniques, lay a foundation for the development of such a tool.
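
As a purely illustrative sketch of the kind of statistical classification such a tool might build on (not the system envisaged by Burgoon and Nunamaker, nor the analyses used in the studies above), the example below trains a toy text classifier on a handful of invented, labelled messages using scikit-learn; a real system would require large labelled corpora and the richer cue sets described earlier.

```python
# Toy sketch (illustrative only): a bag-of-words classifier for deceptive vs.
# truthful messages, standing in for the far richer cue sets and corpora that a
# real computer-assisted deception detection tool would need.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training data: message text paired with a deception label.
messages = [
    "I was at the library all evening, I promise.",
    "Traffic was terrible, that's why I'm late.",
    "I finished the report this morning and sent it to you.",
    "We met at the cafe at noon like we planned.",
]
labels = [1, 1, 0, 0]  # 1 = deceptive, 0 = truthful (toy labels)

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(messages, labels)

print(model.predict(["I was stuck in traffic, honestly."]))
```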

Conclusions

Given the degree to which information and communication technologies pervade many aspects of our lives, it is perhaps difficult to overestimate the impact such technologies may have on one of the oldest aspects of human life: deception. The present chapter provides an overview of the state of the art in the early stages of research on digital deception. Additional research is needed to examine systematically the wide variety of factors that the literature has identified as affecting deception face-to-face, including, among others, the motivation to detect deception, the relationship between deceiver and target, the type and magnitude of the deception, the role of suspicion (e.g., George and Marrett 2004) and experience with the medium.

Similarly, as new technologies are developed and employed, their features and affordances with respect to deception will need to be identified. For example, how do online dating sites, on which people post profiles of themselves, affect deception and its perception (Cornwell and Lundgren 2001 ; Ellison et al.   2004 )? How frequently do people lie in their profiles, and what kinds of lies are considered acceptable?

While further studies are needed, the research to date suggests that the questions posed at the beginning of this chapter concerning the intersection of deception and technology have complex answers, but the research also suggests that communication technologies do indeed affect how frequently we lie, about what and to whom. The data also suggest that deception detection will be as complicated, if not more so, online as it is face-to-face, although the potential for computer-assisted deception detection may create new avenues for this age-old issue.

Bargh, J. A., McKenna, K. Y. A. and Fitzsimons, G. J. ( 2002 ). Can you see the real me? The activation and expression of the ‘true self’ on the Internet.   Journal of Social Issues 58, 33–48.

Berman, J. and Bruckman, A. ( 2001 ). The Turing game: exploring identity in an online environment.   Convergence 7, 83–102.

Bradner, E. and Mark, G. (2002). Why distance matters: effects on cooperation, persuasion and deception. Proceedings of the 2002 ACM Conference on Computer Supported Cooperative Work (pp. 226–235). ACM Press: New York.

Buller, D. B. and J. K. Burgoon. ( 1996 ). Interpersonal deception theory.   Communication Theory 6, 203–242.

Burgoon, J. K. and Buller, D. B. ( 1994 ). Interpersonal deception: III. Effects of deceit on perceived communication and nonverbal behavior dynamics.   Journal of Nonverbal Behavior 18, 155–184.

Burgoon, J. K. and Nunamaker, J. F. ( 2004 ). Toward computer-aided support for the detection of deception.   Group Decision and Negotiation 13, 1–4.

Burgoon, J. K., Stoner, G. M., Bonito, J. A. and Dunbar, N. E. (2003). Trust and deception in mediated communication. Proceedings of the 36th Annual Hawaii International Conference on System Sciences (10 pages). IEEE Computer Society Press: Washington, D. C.

Bok, S. ( 1978 ). Lying: Moral choice in public and private life . New York: Pantheon.

Camden, C., Motley, M. T. and Wilson, A. ( 1984 ). White lies in interpersonal communication: a taxonomy and preliminary investigation of social motivations.   Western Journal of Speech Communication 48, 309–325.

Carlson, J. R. and George, J. F. ( 2004 ). Media appropriateness in the conduct and discovery of deceptive communication: the relative influence of richness and synchronicity.   Group Decision and Negotiation 13, 191–210.

Carlson, J. R., George, J. F., Burgoon, J. K., Adkins, M. and White, C. H. ( 2004 ). Deception in computer-mediated communication.   Group Decision and Negotiation 13, 5–28.

Carlson, J. R. and Zmud, R. W. ( 1999 ). Channel expansion theory and the experiential nature of media richness perceptions.   Academy of Management Journal 42(2), 153–170.

Cornwell, B. and Lundgren, D. C. ( 2001 ). Love on the Internet: involvement and misrepresentation in romantic relationships in cyberspace vs. realspace.   Computers in Human Behavior 17, 197–211.

Daft, R. L. and Lengel, R. H. ( 1986 ). Organizational information requirements: media richness and structural design.   Management Science 32(5), 554–571.

Daft, R. L., R. H. Lengel, and L. K. Trevino. ( 1987 ). Message equivocality, media selection, and manager performance: implications for information systems.   MIS Quarterly 11(3), 355–366.

DePaulo, B. M. and Kashy, D. A. ( 1998 ). Everyday lies in close and casual relationships.   Journal of Personality and Social Psychology 74, 63–79.

DePaulo, B. M. Kashy, D. A., Kirkendol, S. E., Wyer, M. M. and Epstein, J. A. ( 1996 ). Lying in everyday life.   Journal of Personality and Social Psychology 70, 979–995.

DePaulo, B. M. and Kirkendol, S. E. ( 1989 ). The motivational impairment effect in the communication of deception. In J. C. Yuille (ed.), Credibility assessment (pp. 51–70). Dordrecht, Netherlands: Kluwer Academic.

DePaulo, B. M., Lanier, K. and Davis, T. ( 1983 ). Detecting the deceit of the motivated liar.   Journal of Personality and Social Psychology 45, 1096–1103.

DePaulo, B. M., Lindsay, J. J., Malone, B. E., Muhlenbruck, L., Charlton, K. and Cooper, H. ( 2003 ). Cues to deception.   Psychological Bulletin 129, 74–118.

Donath, J. S. ( 1998 ). Identity and deception in the virtual community. In M. A. Smith and P. Kollock (eds) Communities in Cyberspace (pp. 29–59). New York: Routledge.

Douglas, K. M. and McGarty, C. ( 2001 ). Identifiability and self-presentation: computer-mediated communication and intergroup interaction.   British Journal of Social Psychology   40 , 399–416.

Ekman, P. ( 2001 ). Telling lies: Clues to deceit in the marketplace, politics and marriage . New York: W. W. Norton.

Ellison, N. B., Heino, R. D. and Gibbs, J. L. (2004). Truth in advertising? An explanation of self-presentation and disclosure in online personals. Paper presented at the Annual Convention of the International Communication Association, New Orleans, LA.

Feldman, R. S., Forrest, J. A. and Happ, B. R. ( 2002 ). Self-presentation and verbal deception: do self-presenters lie more?   Basic and Applied Social Psychology 24, 163–170.

Forrest, J. A. and Feldman, R. S. ( 2000 ). Detecting deception and judge's involvement: lower task involvement leads to better lie detection.   Personality and Social Psychology Bulletin 26, 118–125.

Galasinski, D. ( 2000 ). The language of deception. A discourse analytic study . Thousand Oaks, CA: Sage.

George, J. F. and J. R. Carlson. (1999). Group support systems and deceptive communication. Proceedings of the 32nd Annual Hawaii International Conference on System Sciences (10 pages). IEEE Computer Society Press: Washington, D. C.

George, J. F. and Carlson, J. R. (2005). Media selection for deceptive communication. Proceedings of the of the 38th Annual Hawaii International Conference on System Sciences (10 pages). IEEE Computer Society Press: Washington, D. C.

George, J. F. and Marrett, K. (2004). Inhibiting deception and its detection. Proceedings of the 34th Annual Hawaii International Conference on System Sciences (10 pages). IEEE Computer Society Press: Washington, D. C.

George, J. F., Marrett, K. and Tilley, P. (2004). Deception detection under varying electronic media and warning conditions. Proceedings of the 34th Annual Hawaii International Conference on System Sciences (10 pages). IEEE Computer Society Press: Washington, D. C.

Grazioli, S. and Jarvenpaa, S. ( 2000 ). Perils of Internet fraud: an empirical investigation of deception and trust with experienced Internet consumers.   IEEE Transactions on Systems, Man, and Cybernetics 3, 395–410.

Grazioli, S. and Jarvenpaa, S. ( 2003 a). Consumer and business deception on the Internet: content analysis of documentary evidence.   International Journal of Electronic Commerce 7, 93–118.

Grazioli, S. and Jarvenpaa, S. ( 2003 b). Deceived! Under target online.   Communications of the ACM 46, 196–205.

Hancock, J. T., Curry, L., Goorha, S., and Woodworth, M. (in press a). On Lying and Being Lied To: A Linguistic Analysis of Deception in Computer-Mediated Communication.   Discourse Processes .

Hancock, J. T. and Dunham, P. J. ( 2001 a). Impression formation in computer-mediated communication revisited: an analysis of the breadth and intensity of impressions.   Communication Research 28, 325–347.

Hancock, J. T. and Dunham, P. J. ( 2001 b). Language use in computer-mediated communication: the role of coordination devices.   Discourse Processes 31, 91–110.

Hancock, J. T., Thom-Santelli, J. and Ritchie, T. (2004a). Deception and design: The impact of communication technologies on lying behavior. Proceedings, Conference on Computer Human Interaction (pp. 130–136). New York, ACM.

Hancock, J. T., Thom-Santelli, J. and Ritchie, T. (2004b). What lies beneath: the effect of the communication medium on the production of deception. Presented at the Annual Meeting of the Society for Text and Discourse , Chicago, IL.

Hancock, J. T., Woodworth, M., and Goorha, S. ( in press b). See no evil: The effect of communication medium and motivation on deception detection.   Group Decision and Negotiation .

Heinrich, C. U. and Borkenau, P. ( 1998 ). Deception and deception detection: the role of cross-modal inconsistency.   Journal of Personality 66, 667–712.

Herring, S. C. and Martinson, A. ( 2004 ). Assessing gender authenticity in computer-mediated language use: evidence from an identity game.   Journal of Language and Social Psychology 23, 424–446.

Hollingshead, A. ( 2000 ). Truth and lying in computer-mediated groups. In M. A. Neale, E. A. Mannix, and T. Griffith (eds), Research in managing groups and teams, vol.3: Technology and teams (pp. 157–173). Greenwich, CT: JAI Press.

Horn, D. B. (2001). Is seeing believing? Detecting deception in technologically mediated communication. Extended Abstracts of CHI '01.

Horn, D. B., Olson, J. S. and Karasik, L. (2002). The effects of spatial and temporal video distortion on lie detection performance. Extended Abstracts of the CHI' 02 Conference on Human Factors in Computing Systems (pp. 714–715). ACM: New York.

Internet Fraud Complaint Center ( 2003 ). Internet Fraud Report . The National White Collar Crime Center. Washington, D. C.

Joinson, A. N. ( 2001 ). Self-disclosure in computer-mediated communication: the role of self-awareness and visual anonymity.   European Journal of Social Psychology 31, 177–192.

Keyes, R. ( 2004 ). The post-truth era: Dishonesty and deception in contemporary life.   New York: St. Martin's Press.

Knight, J. ( 2004 ). The truth about lying.   Nature 428, 692–694.

Messaris, P. ( 1997 ). Visual persuasion . Thousand Oaks, CA: Sage Publications, Inc.

Lippard, P. V. ( 1988 ). ‘Ask me no questions, Iʼll tell you no lies’: situational exigencies for interpersonal deception.   Western Journal of Speech Communication 52, 91–103.

Metts, S. ( 1989 ). An exploratory investigation of deception in close relationships.   Journal of Social and Personal Relationships 6, 159–179.

Millar, M. and Millar, K. ( 1995 ). Detection of deception in familiar and unfamiliar persons: the effects of information restriction.   Journal of Nonverbal Behavior 19, 69–84.

Miller, G. R. and Stiff, J. B. ( 1993 ). Deceptive communication: Sage series in interpersonal communication, vol. 14 . Thousand Oaks, CA: Sage Publications, Inc.

Mitchell, K. J., Finkelhor, D. and Wolak, J. ( 2001 ). Risk factors and impact of online solicitation of youth.   Journal of the American Medical Association 285, 3011–3014.

Postmes, T., Spears, R. and Lea, M. ( 1999 ). Social identity, group norms, and ‘deindividuation’: lessons from computer-mediated communication for social influence in the group. In N. Ellemers, R. Spears and B. Doosje (eds), Social identity: Context, commitment, content (pp. 164–183). Oxford: Blackwell.

Robinson, W. P. ( 1996 ). Deceit, delusion, and detection . Thousand Oaks, CA: Sage Publications Inc.

Schein, E. H. ( 2004 ). Learning when and how to lie: a neglected aspect of organizational and occupational socialization.   Human Relations 57, 259–273.

Schneider, F. B. (Ed.) ( 1999 ). Trust in cyberspace . Washington, DC: National Academy Press.

Spears, R., Postmes, T. and Lea, M. ( 2002 ). The power of influence and the influence of power in virtual groups: a SIDE look at CMC and the Internet.   The Journal of Social Issues. Special Issue: Social impact of the Internet 58, 91–108.

Stolfo, S. J., Lee, W., Chan, P. K., Fan, W. and Eskin, E. ( 2001 ). Data-mining based intrusion detectors: an overview of the Columbia IDS project.   SIGMOD Record 30, 5–14.

Turkle, S. ( 1995 ). Life on the screen: Identity in the age of the Internet . New York: Simon and Schuster.

Utz, S. ( 2005 ). Types of deception and underlying motivation: what people think. Social Science Computer Review 23, 49–56.

Vrij, A. ( 2000 ). Detecting lies and deceit: The psychology of lying and its implications for professional practice . Chichester: John Wiley and Sons.

Walther, J. B. ( 1996 ). Computer-mediated communication: impersonal, interpersonal, and hyperpersonal interaction.   Communication Research 23, 1–43.

Walther, J. B. and Parks, M. R. ( 2002 ). Cues filtered out, cues filtered in: computer-mediated communication and relationships. In M. L. Knapp and J. A. Daly (eds), Handbook of interpersonal communication , 3rd edn. (pp. 529–563). Thousand Oaks, CA: Sage.

Walther, J. B., Slovacek, C. and Tidwell, L. C. ( 2001 ). Is a picture worth a thousand words? Photographic images in long term and short term virtual teams.   Communication Research 28, 105–134.

Whitty, M. T. ( 2002 ). Liar, Liar! An examination of how open, supportive and honest people are in chat rooms.   Computers in Human Behavior 18(4), 343–352.

Whitty, M. and Gavin, J. ( 2001 ). Age/sex/location: uncovering the social cues in the development of online relationships.   CyberPsychology and Behavior 4(5), 623–630.

Zahavi, A. ( 1993 ). The fallacy of conventional signaling.   Philosophical Transactions of the Royal Society of London B 340, 227–230.

Zhou, L., Burgoon, J. K., Nunamaker, J. F. and Twitchell, D. ( 2004 a). Automating linguistics-based cues for detecting deception in text-based asynchronous computer-mediated communication.   Group Decision and Negotiation 13, 81–106.

Zhou, L., Burgoon, J. K., Twitchell, D., Qin, T. and Nunamaker, J. F. ( 2004 b). A comparison of classification methods for predicting deception in computer-mediated communication.   Journal of Management Information Systems 20, 139–165.

Zhou, L. and Zhang, D. (2004). Can online behavior unveil deceivers? An exploratory investigation of deception in instant messaging. Proceedings of the 37th Annual Hawaii International Conference on System Sciences (10 pages). IEEE Computer Society Press: Washington, D. C.

Zuckerman, M. and Driver, R. E. ( 1985 ). Telling lies: verbal and nonverbal correlates of deception. In A. W. Siegman and S. Feldstein (eds), Multichannel integrations of nonverbal behavior (pp. 129–147). Hillsdale, NJ: Erlbaum.


Research Article | Open Access | Peer-reviewed

Why do people spread false information online? The effects of message and viewer characteristics on self-reported likelihood of sharing social media disinformation

Tom Buchanan

Roles: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Writing – original draft, Writing – review & editing

E-mail: [email protected]

Affiliation: School of Social Sciences, University of Westminster, London, United Kingdom

  • Published: October 7, 2020
  • https://doi.org/10.1371/journal.pone.0239666

Abstract

Individuals who encounter false information on social media may actively spread it further, by sharing or otherwise engaging with it. Much of the spread of disinformation can thus be attributed to human action. Four studies (total N = 2,634) explored the effect of message attributes (authoritativeness of source, consensus indicators), viewer characteristics (digital literacy, personality, and demographic variables) and their interaction (consistency between message and recipient beliefs) on self-reported likelihood of spreading examples of disinformation. Participants also reported whether they had shared real-world disinformation in the past. Reported likelihood of sharing was influenced neither by the authoritativeness of the source of the material nor by indicators of how many other people had previously engaged with it. Participants’ level of digital literacy had little effect on their responses. The people reporting the greatest likelihood of sharing disinformation were those who thought it likely to be true, or who had pre-existing attitudes consistent with it. They were also likely to have previous familiarity with the materials. Across the four studies, personality (lower Agreeableness and Conscientiousness, higher Extraversion and Neuroticism) and demographic variables (male gender, lower age and lower education) were weakly and inconsistently associated with self-reported likelihood of sharing. These findings have implications for which strategies are more or less likely to work in countering disinformation on social media.

Citation: Buchanan T (2020) Why do people spread false information online? The effects of message and viewer characteristics on self-reported likelihood of sharing social media disinformation. PLoS ONE 15(10): e0239666. https://doi.org/10.1371/journal.pone.0239666

Editor: Jichang Zhao, Beihang University, CHINA

Received: June 3, 2020; Accepted: September 10, 2020; Published: October 7, 2020

Copyright: © 2020 Tom Buchanan. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: All files are available from the UK Data Service archive (doi: 10.5255/UKDA-SN-854297 ).

Funding: This work was funded by an award to TB from the Centre for Research and Evidence on Security Threats (ESRC Award: ES/N009614/1). https://crestresearch.ac.uk The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.

Introduction

Disinformation is currently a critically important problem in social media and beyond. Typically defined as “the deliberate creation and sharing of false and/or manipulated information that is intended to deceive and mislead audiences, either for the purposes of causing harm, or for political, personal or financial gain”, political disinformation has been characterized as a significant threat to democracy [1, p.10]. It forms part of a wider landscape of information operations conducted by governments and other entities [ 2 , 3 ]. Its intended effects include political influence, increasing group polarisation, reducing trust, and generally undermining civil society [ 4 ]. Effects are not limited to online processes. They regularly spill over into other parts of our lives. Experimental work has shown that exposure to disinformation can lead to attitude change [ 5 ], and there are many real-world examples of behaviours that have been directly attributed to disinformation, such as people attacking telecommunications masts in response to fake stories about ‘5G causing coronavirus’ [ 6 , 7 ]. Social media disinformation is very widely used as a tool of influence: computational propaganda has been described as a pervasive and ubiquitous part of modern everyday life [ 8 ].

How does social media disinformation spread?

Once disinformation has initially been seeded online by its creators, one of the ways in which it spreads is through the actions of individual social media users. Ordinary people may propagate the material to their own social networks through deliberate sharing–a core function of platforms such as Facebook and Twitter. Other interactions with it, such as ‘liking’, also trigger the algorithms of social media platforms to display it to other users. This is a phenomenon known as ‘organic reach’ [ 9 ]. It can lead to false information spreading exponentially. As an example, analysis of the activity of the Russian ‘Internet Research Agency’ (IRA) disinformation group in the USA between 2015 and 2017 concluded that over 30 million users shared and otherwise interacted with the IRA’s Facebook and Instagram posts, propagating them to their families and friends [ 4 ]. There is evidence that false material is spread widely and rapidly through social media due to such human behaviour [ 10 ].

Why do people spread social media disinformation?

When individuals share or interact with disinformation they see online, they have essentially been persuaded to do so by its originators. Influential models of social information processing suggest there are different routes to persuasion [e.g. 11 ]. Under some circumstances, we may carefully consider the information available. At other times, we make rapid decisions based on heuristics and peripheral cues. Sharing information on social media is likely to be spontaneous and rapid, rather than a considered action that people spend time deliberating over. For example, there are indications of people using the interaction features of Facebook in a relatively unthinking and automatic manner [ 12 ]. In such situations, a peripheral route to persuasion is likely to be important [ 13 ]. Individuals’ choices to share, like and so on will thus be guided primarily by heuristics or contextual cues [ 14 ].

Three potentially important heuristics in this context are consistency , consensus and authority [ 15 ]. These are not the only heuristics that might possibly influence whether we share false material. However, in each case there is suggestive empirical evidence, and apparent real-world attempts to leverage these phenomena, that make them worth considering.

Consistency.

Consistency is the extent to which sharing would be consistent with past behaviours or beliefs of the individual. For example, in the USA people with a history of voting Republican might be more likely to endorse and disseminate right-wing messaging [ 16 ]. There is a large body of work based on the idea that people prefer to behave in ways consistent with their attitudes [ 17 ]. Research has indicated that social media users consider headlines consistent with their pre-existing beliefs as more credible, even when explicitly flagged as being false [ 18 ]. In the context of disinformation, this could make it desirable to target audiences sympathetic to the message content.

Consensus.

Consensus is the extent to which people think their behaviour would be consistent with that of most other people. In the current context, it is possible that seeing that a message has already been shared widely might make people more likely to forward it on themselves. In marketing, this influence tactic is known as ‘social proof’ [ 19 ]. It is widely used in online commerce in attempts to persuade consumers to purchase goods or services (e.g. by displaying reviews or sales rankings). The feedback mechanisms of social networks can be manipulated to create an illusion of such social support, and this tactic seems to have been used in the aftermath of terror attacks in the UK [ 20 ].

Bot networks are used to spread low-credibility information on Twitter through automated means. Bots have been shown to be involved in the rapid spread of information, tweeting and retweeting messages many times [ 21 ]. Among humans who see the messages, the high retweet counts achieved through the bot networks might be interpreted as indicating that many other people agree with them. There is evidence which suggests that "each amount of sharing activity by likely bots tends to trigger a disproportionate amount of human engagement" [21, p.4]. Such bot activity could be an attempt to exploit the consensus effect.

It is relatively easy to manipulate the degree of consensus or social proof associated with an online post. Work by the NATO Strategic Communications Centre of Excellence [ 22 ] indicated that it was very easy to purchase high levels of false engagement for social media posts (e.g. sharing of posts by networks of fake accounts) and that there was a significant black market for social media manipulation. Thus, if boosting consensus effectively influences organic reach, then it could be a useful tool for both those seeding disinformation and those seeking to spread counter-messages.

Authority.

Authority is the extent to which the communication appears to come from a credible, trustworthy source [ 23 ]. Research participants have been found to report a greater likelihood of propagating a social media message if it came from a trustworthy source [ 24 ]. There is evidence of real-world attempts to exploit this effect. In 2018, Twitter identified fraudulent accounts that simulated those of US local newspapers [ 25 ], which may be trusted more than national media [ 26 ]. These may have been sleeper accounts established specifically for the purpose of building trust prior to later active use.

Factors influencing the spread of disinformation.

While there are likely to be a number of other variables that also influence the spread of disinformation, there are grounds for believing that consistency, consensus and authority may be important. Constructing or targeting disinformation messages in such a way as to maximise these three characteristics may be a way to increase their organic reach. There is real-world evidence of activity consistent with attempts to exploit them. If these effects do exist, they could also be exploited by initiatives to counter disinformation.

Who spreads social media disinformation?

Not all individuals who encounter untrue material online spread it further. In fact, the great majority do not. Research linking behavioural and survey data [ 16 ] found that less than 10% of participants shared articles from ‘fake news’ domains during the 2016 US presidential election campaign (though of course when extrapolated to the huge user base of social network platforms like Facebook, this is still a very large number of people).

The fact that only a minority of people actually propagate disinformation makes it important to consider what sets them apart from people who don’t spread untrue material further. This will help to inform interventions aimed at countering disinformation. For example, those most likely to be misled by disinformation, or to spread it further, could be targeted with counter-messaging. It is known that the originators of disinformation have already targeted specific demographic groups, in the same way as political campaigns micro-target messaging at those audience segments deemed most likely to be persuadable [ 27 ]. For example, it is believed that the ‘Internet Research Agency’ sought to segment Facebook and Instagram users based on race, ethnicity and identity by targeting their messaging to people recorded by the platforms as having certain interests for marketing purposes [ 4 ]. They targeted specific communications tailored to those segments (e.g. trying to undermine African Americans’ faith in political processes and suppress their voting in the US presidential election).

Digital media literacy.

Research has found that older adults, especially those aged over 65, were by far the most likely to spread material originally published by ‘fake news’ domains [ 16 ]. A key hypothesis advanced to explain this is that older adults have lower levels of digital media literacy, and are thus less likely to be able to distinguish between true and false information online. While definitions may vary, digital media literacy can be thought of as including “… the ability to interact with textual, sound, image, video and social medias … finding, manipulating and using such information” [28, p. 11] and being a “multidimensional concept that comprised technical, cognitive, motoric, sociological, and emotional aspects” [29, p.834]. Digital media literacy is widely regarded as an important variable mediating the spread and impact of disinformation [e.g. 1]. It is argued that many people lack the sophistication to detect a message as being untruthful, particularly when it appears to come from an authoritative or trusted source. Furthermore, people higher in digital media literacy may be more likely to engage in elaborated, rather than heuristic-driven, processing (cf. work on phishing susceptibility [ 30 ]), and thus be less susceptible to biases such as consistency, consensus and authority.

Educating people in digital media literacy is the foundation of many anti-disinformation initiatives. Examples include the ‘News Hero’ Facebook game developed by the NATO Strategic Communications Centre of Excellence ( https://www.stratcomcoe.org/news-hero ), government initiatives in Croatia and France [ 8 ] or the work of numerous fact-checking organisations. The effectiveness of such initiatives relies on two assumptions being met. The first is that lower digital media literacy really does reduce our capacity to identify disinformation. There is currently limited empirical evidence on this point, complicated by the fact that definitions of ‘digital literacy’ are varied and contested, and there are currently no widely accepted measurement tools [ 28 ]. The second is that the people sharing disinformation are doing so unwittingly, having been tricked into spreading it. However, it is possible that at least some people know the material is untrue, and they spread it anyway. Survey research [ 31 ] has found that believing a story was false was not necessarily a barrier to sharing it. People may act like this because they are sympathetic to a story’s intentions or message, or they are explicitly signalling their social identity or allegiance to some political group or movement. If people are deliberately forwarding information that they know is untrue, then raising their digital media literacy would be ineffective as a stratagem to counter disinformation. This makes it important to simultaneously consider users’ beliefs about the veracity of disinformation stories, to inform the design of countermeasures.

Personality.

It is also known that personality influences how people use social media [e.g. 32]. This makes it possible that personality variables will also influence interactions with disinformation. Indeed, previous research [ 24 ] found that people low on Agreeableness reported themselves as more likely to propagate a message. This is an important possibility to consider, because it raises the prospect that individuals could be targeted on the basis of their personality traits with either disinformation or counter-messaging. In a social media context, personality-based targeting of communications is feasible because personality characteristics can be detected from individuals’ social media footprints [ 33 , 34 ]. Large scale field experiments have shown that personality-targeted advertising on social media can influence user behaviour [ 35 ].

The question of which personality traits might be important is an open one. In the current study, personality was approached on an exploratory basis, with no specific hypotheses about effects or their directions. This is because there are a number of different and potentially rival effects that might operate. For example, higher levels of Conscientiousness may be associated with a greater likelihood of posting political material in social media [ 36 ] leading to a higher level of political disinformation being shared. However, people higher in Conscientiousness are likely to be more cautious [ 37 ] and pay more attention to details [ 38 ]. They might therefore also be more likely to check the veracity of the material they share, leading to a lower level of political disinformation being shared.

Research aims and hypotheses

The overall aim of this project was to establish whether contextual factors in the presentation of disinformation, or characteristics of the people seeing it, make it more likely that they extend its reach. The methodology adopted was scenario-based, with individuals being asked to rate their likelihood of sharing exemplar disinformation messages. A series of four studies was conducted, all using the same methodology. Multiple studies were used to establish whether the same effects were found across different social media platforms (Facebook in Study 1, Twitter in Study 2, Instagram in Study 3) and countries (Facebook with a UK sample in Study 1, Facebook with a US sample in Study 4). Data were also collected on whether participants had shared disinformation in the past. A number of distinct hypotheses were advanced:

H1: Individuals will report themselves as more likely to propagate messages from more authoritative compared to less authoritative sources.

H2: Individuals will report themselves as more likely to propagate messages showing a higher degree of consensus compared to those showing a lower degree of consensus.

H3: Individuals will report themselves as more likely to propagate messages consistent with their pre-existing beliefs compared to inconsistent messages.

H4: Individuals lower in digital literacy will report a higher likelihood of sharing false messages than individuals higher in digital literacy.

Other variables were included in the analysis on an exploratory basis with no specific hypotheses being advanced. In summary, this project asks why ordinary social media users share political disinformation messages they see online. It tests whether specific characteristics of messages or their recipients influence the likelihood of disinformation being further shared online. Understanding any such mechanisms will both increase our understanding of the phenomenon and inform the design of interventions seeking to reduce its impact.

Study 1 tested hypotheses 1–4 with a UK sample, using stimuli relevant to the UK. The study was completed online. Participants were members of research panels sourced through the research company Qualtrics.

Participants were asked to rate their likelihood of sharing three simulated Facebook posts. The study used an experimental design, manipulating levels of authoritativeness and consensus apparent in the stimuli. All manipulations were between, not within, participants. Consistency with pre-existing beliefs was not manipulated. Instead, the political orientation of the stimuli was held constant, and participants’ scores on conservative political orientation were used as an index of consistency between messages and participant beliefs. The effects of these variables on self-rated likelihood of sharing the stimuli, along with those of a number of other predictors, were assessed using multiple regression. The primary goal of the analysis was to identify variables that statistically significantly explained variance in the likelihood of sharing disinformation. The planned analysis was followed by supplementary and exploratory analyses. All analyses were conducted using SPSS v.25 for Mac. For all studies reported in this paper, ethical approval came from both the University of Westminster Research Ethics Committee (ETH1819-1420) and the Lancaster University Security Research Ethics Committee (BUCHANAN 2019 07 23). Consent was obtained, via an electronic form, from anonymous participants.

A short questionnaire was used to capture demographic information (gender; country of residence; education; age; occupational status; political orientation expressed as right, left or centre; frequency of Facebook use). Individual differences in personality, political orientation, and digital / new media literacy were measured using established validated questionnaires. Ecologically valid stimuli were used, with their presentation being modified across conditions to vary authoritativeness and consensus markers.

Personality was measured using a 41-item Five-Factor personality questionnaire [ 38 ] derived from the International Personality Item Pool [ 37 ]. The measure provides indices of Extraversion, Neuroticism, Openness to Experience, Agreeableness and Conscientiousness that correlate well with the domains of Costa and McCrae's [ 39 ] Five Factor Model.

Conservatism was measured using the 12-item Social and Economic Conservatism Scale (SECS) [ 40 ], which is designed to measure political orientation along a left-right, liberal-conservative continuum. It was developed and validated using a US sample. In pilot work for the current study, mean scores for individuals who reported voting for the Labour and Conservative parties in the 2017 UK general election were found to differ in the expected manner ( t (28) = -2.277, p = .031, d = 0.834). This provides evidence of its appropriateness for use in UK samples. While the measure provides indices of different aspects of conservatism, it also provides an overall conservatism score, which was used in this study.

Digital media literacy was measured using the 35-item New Media Literacy Scale (NMLS) [ 29 ]. This is a theory-based self-report measure of competences in using, critically interrogating, and creating digital media technologies and messaging. In pilot work with a UK sample, it was found to distinguish between individuals high or low in social media (Twitter) use, providing evidence of validity ( t (194) = -3.847, p < .001, d = .55). While the measure provides indices of different aspects of new media literacy, it also provides an overall score which was used in this study.
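As an illustration only, the kind of pilot validity check reported above (an independent-samples t-test plus Cohen's d) can be sketched in Python as follows. The scores are simulated stand-ins, not the actual pilot responses, and the group sizes are assumptions chosen only so the degrees of freedom match the reported test.

```python
# Sketch of an independent-samples t-test with Cohen's d, as used in the pilot
# validity checks above. The NMLS scores below are simulated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
low_use = rng.normal(120, 25, size=98)    # simulated NMLS totals, low Twitter use
high_use = rng.normal(134, 25, size=98)   # simulated NMLS totals, high Twitter use

t_stat, p_value = stats.ttest_ind(high_use, low_use)

# Cohen's d from the pooled standard deviation
n1, n2 = len(low_use), len(high_use)
pooled_var = ((n1 - 1) * low_use.var(ddof=1) + (n2 - 1) * high_use.var(ddof=1)) / (n1 + n2 - 2)
d = (high_use.mean() - low_use.mean()) / np.sqrt(pooled_var)
print(f"t({n1 + n2 - 2}) = {t_stat:.3f}, p = {p_value:.4f}, d = {d:.2f}")
```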

Participants were asked to rate their likelihood of sharing three genuine examples of ‘fake news’ that had been previously published online. An overall score for their likelihood of sharing the stimuli was obtained by summing the three ratings, creating a combined score. This was done, and a set of three stimuli was used, to reduce the likelihood that any effects found were peculiar to a specific story. The stimuli were sourced from the website Infowars.com (which in some cases had republished them from other sources). Infowars.com has been described [ 41 ] as a high-exposure site strongly associated with the distribution of ‘fake news’. Rather than full articles, excerpts (screenshots) were used that had the size and general appearance of what respondents might expect to see on social media sites. The excerpts were edited to remove any indicators of the source, metrics such as the numbers of shares, date, and author. All had a right-wing orientation (so that participant conservatism could be used as a proxy for consistency between the messages and existing beliefs). This was established in pilot work rating their political orientation and likelihood of being shared. The three stories were among seven rated by a UK sample ( N = 30) on an 11-point scale asking “To what extent do you think this post was designed to appeal to people with right wing (politically conservative) views?” anchored at “Very left wing oriented” and “Very right wing oriented”. All seven were rated statistically significantly above the politically-neutral midpoint of the scale. Of the three stimuli selected for use in this study, a one-sample t- test showed that the least right-wing was statistically significantly higher than the midpoint, ( t (39) = 4.385, p < .001, d = 0.70).

One of the stimuli was a picture of masked and hooded men titled “Censored video: watch Muslims attack men, women & children in England”. One was a picture of many people walking down a road, titled “Revealed: UN plan to flood America with 600 million migrants”, with accompanying text describing a plan to “flood America and Europe with hundreds of millions of migrants to maintain population levels”. The third was a picture of the Swedish flag titled “‘Child refugee’ with flagship Samsung phone and gold watch complains about Swedish benefits rules”, allegedly describing a 19 year-old refugee’s complaints.

The authoritativeness manipulation was achieved by pairing the stimuli with sources regarded as relatively high or low in authoritativeness. The source was shown above the stimulus being rated, in the same way as the avatar and username of someone who had posted a message would be on Facebook. The lower authoritativeness group were slight variants on real usernames that had previously retweeted either stories from Infowars.com or another story known to be untrue. The original avatars were used. The exemplars used in this study were named ‘Tigre’ (with an avatar of an indistinct picture of a female face), ‘jelly beans’ (a picture of some jelly beans) and ‘ChuckE’ (an indistinct picture of a male face). The higher authoritativeness group comprised actual fake accounts set up by the Internet Research Agency (IRA) group to resemble local news sources, selected from a list of suspended IRA accounts released by Twitter. The exemplars used in this study were ‘Los Angeles Daily’, ‘Chicago Daily News’ and ‘El Paso Top News’. Pilot work was conducted with a sample of UK participants ( N = 30) who each rated a selection of 9 usernames, including these 6, for the extent to which each was “likely to be an authoritative source—that is, likely to be a credible and reliable source of information”. A within-subjects t -test indicated that mean authoritativeness ratings for the ‘higher’ group were statistically significantly higher than the ‘lower’ group ( t (29) = -11.181, p < .001, d z = 2.04).

The consensus manipulation was achieved by pairing the stimuli with indicators of the number of shares and likes the story had. The indicators were shown below the stimulus being rated, in the same way as they normally would be on Facebook. In the low consensus conditions, low numbers of likes (1, 3, 2) and shares (2, 0, 2) were displayed. In the high consensus conditions, higher (but not unrealistic) numbers of likes (104K, 110K, 63K) and shares (65K, 78K, 95K) were displayed. The information was presented using the same graphical indicators as would be the case on Facebook, accompanied by the (inactive) icons for interacting with the post, in order to maximise ecological validity.

The study was conducted completely online, using materials hosted on the Qualtrics research platform. Participants initially saw an information page about the study, and on indicating their consent proceeded to the demographic items. They then completed the personality, conservatism and new media literacy scales. Each of these was presented on a separate page, except the NMLS which was split across three pages.

Participants were then asked to rate the three disinformation items. Participants were randomized to different combinations of source and story within their assigned condition. For example, Participant A might have seen Story 1 attributed to Source 1, Story 2 attributed to Source 2, and Story 3 attributed to Source 3; while Participant B saw Story 1 attributed to Source 2, Story 2 attributed to Source 1, and Story 3 attributed to Source 3. Each participant saw the same three stories paired with one combination of authoritativeness and consensus. There were 24 distinct sets of stimuli.
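The count of 24 sets follows from crossing the two authoritativeness levels, the two consensus levels, and the six possible pairings of the three stories with the three sources in the assigned authoritativeness condition. A minimal sketch of that counterbalancing logic is below; the exact scheme used to build the study's stimulus sets is an inference from the text, and the labels are illustrative.

```python
# Sketch of how 24 distinct stimulus sets can be enumerated:
# 2 authoritativeness levels x 2 consensus levels x 6 pairings of the three
# stories with the three sources in the assigned authoritativeness condition.
# This reconstructs the counterbalancing described in the text as an inference.
from itertools import permutations, product

stories = ["story_1", "story_2", "story_3"]
sources = {
    "low_authority": ["Tigre", "jelly beans", "ChuckE"],
    "high_authority": ["Los Angeles Daily", "Chicago Daily News", "El Paso Top News"],
}
consensus_levels = ["low_consensus", "high_consensus"]

stimulus_sets = []
for authority, consensus in product(sources, consensus_levels):
    for source_order in permutations(sources[authority]):
        pairing = list(zip(stories, source_order))
        stimulus_sets.append({"authority": authority, "consensus": consensus, "pairing": pairing})

print(len(stimulus_sets))  # 24
```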

Each participant saw an introductory paragraph stating “A friend of yours recently shared this on Facebook, commenting that they thought it was important and asking all their friends to share it:”. Below this was the combination of source, story, and consensus indicators, presented together in the same way as a genuine Facebook post would be. They then rated the likelihood of them sharing the post to their own public timeline, on an 11-point scale anchored at ‘Very Unlikely’ and ‘Very Likely’. This was repeated for the second and third stimuli, each on a separate page. Having rated each one, participants were then shown all three stimuli again, this time on the same page. They were asked to rate each one for “how likely do you think it is that the message is accurate and truthful” and “how likely do you think it is that you have seen it before today”, on 5-point scales anchored at ‘Not at all likely’ and ‘Very likely’.

After rating the stimuli, participants were asked two further questions: “Have you ever shared a political news story online that you later found out was made up?”, and “And have you ever shared a political news story online that you thought AT THE TIME was made up?”, with ‘yes’ or ‘no’ response options. This question format directly replicated that used in Pew Research Centre surveys dealing with disinformation [e.g. 31].

Finally, participants were given the opportunity once again to give or withdraw their consent for participation. They then proceeded to a debriefing page. It was only at the debriefing stage that they were told the stories they had seen were untrue: no information about whether the stimuli were true or false had been presented prior to that point.

Data screening and processing.

Prior to delivery of the sample, Qualtrics performed a series of quality checks and ‘data scrubbing’ procedures to remove and replace participants with response patterns suggesting inauthentic or inattentive responding. These included speeding checks and examination of response patterns. On delivery of the initial sample ( N = 688) further screening procedures were performed. Sixteen respondents were identified who had responded with the same scores to substantive sections of the questionnaire (‘straightlining’). These were removed, leaving N = 672. These checks and exclusions were carried out prior to any data analysis. Where participants had missing data on any variables, they were omitted only from analyses including those variables. Thus, N s vary slightly throughout the analyses.
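A minimal sketch of one way such a 'straightlining' check can be implemented is shown below, assuming a pandas DataFrame with one row per respondent and hypothetical column names for a block of scale items; the study's actual screening was performed partly by Qualtrics and is not reproduced here.

```python
# Sketch of a straightlining check: flag respondents who gave an identical
# response to every item in a substantive block. The file and column names are
# hypothetical; this does not reproduce the study's actual screening pipeline.
import pandas as pd

responses = pd.read_csv("responses.csv")                       # hypothetical data file
personality_items = [c for c in responses.columns if c.startswith("pers_")]

# A respondent 'straightlines' a block if all of their responses to it are identical.
straightliners = responses[personality_items].nunique(axis=1) == 1
print(f"Flagged {straightliners.sum()} straightlining respondents")

cleaned = responses.loc[~straightliners]                       # retained for analysis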

Participants.

The target sample size was planned to exceed N = 614, which would give 95% power to detect R 2 = .04 (a benchmark for the minimum effect size likely to have real-world importance in social science research [ 42 ]), in the planned multiple regression analysis with 11 predictors. Qualtrics was contracted to provide a sample of Facebook users that was broadly representative of the UK 2011 census population in terms of gender; the split between those who had post-secondary-school education and those who had not; and age profile (18+). Quotas were used to assemble a sample comprising approximately one third each self-describing as left-wing, centre and right-wing in their political orientation. Participant demographics are shown in Table 1 , column 1.

[Table 1: https://doi.org/10.1371/journal.pone.0239666.t001]

Descriptive statistics for participant characteristics (personality, conservatism, new media literacy and age) and their reactions to the stimuli (likelihood of sharing, belief the stories were likely to be true, and rating of likelihood that they had seen them before) are summarised in Table 2 . All scales had acceptable reliability. The main dependent variable, likelihood of sharing, had a very skewed distribution with a strong floor effect: 39.4% of the participants indicated they were ‘very unlikely’ to share any of the three stories they saw. This is consistent with findings on real-world sharing that indicate only a small proportion of social media users will actually share disinformation [e.g. 16], though it gives a dependent variable with less than ideal distributional properties.

[Table 2: https://doi.org/10.1371/journal.pone.0239666.t002]

To simultaneously test hypotheses 1–4, a multiple regression analysis was carried out. This evaluated the extent to which digital media literacy (NMLS), authority of the message source, consensus, belief in veracity of the messages, consistency with participant beliefs (operationalised as the total SECS conservatism scale score), age and personality (Extraversion, Conscientiousness, Agreeableness, Openness to Experience and Neuroticism) predicted self-rated likelihood of sharing the posts. This analysis is summarised in Table 3 . Checks were performed on whether the dataset met the assumptions required by the analysis (absence of collinearity, independence of residuals, homoscedasticity and normally distributed residuals). Despite the skewed distribution of the dependent variable, no significant issues were detected.

[Table 3: https://doi.org/10.1371/journal.pone.0239666.t003]
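The regression itself was run in SPSS. As a minimal, non-authoritative sketch, an equivalent analysis with the corresponding assumption checks (VIF for collinearity, Durbin-Watson for independence of residuals, Breusch-Pagan for heteroscedasticity) could be written in Python as follows; the data file and column names are hypothetical stand-ins for the study variables.

```python
# Sketch of a regression equivalent to the Study 1 analysis, plus assumption checks.
# The original analysis used SPSS; 'study1.csv' and all column names here are
# hypothetical placeholders, not the study's actual dataset.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.stattools import durbin_watson
from statsmodels.stats.diagnostic import het_breuschpagan

df = pd.read_csv("study1.csv")

predictors = ["nmls", "authority", "consensus", "belief_true", "conservatism", "age",
              "extraversion", "conscientiousness", "agreeableness", "openness", "neuroticism"]
data = df[predictors + ["likelihood_of_sharing"]].dropna()

X = sm.add_constant(data[predictors])
y = data["likelihood_of_sharing"]
model = sm.OLS(y, X).fit()
print(model.summary())

# Assumption checks: collinearity, independence of residuals, heteroscedasticity
vif = [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])]
dw = durbin_watson(model.resid)
bp_stat, bp_pvalue, _, _ = het_breuschpagan(model.resid, X)
print(vif, dw, bp_pvalue)
```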

However, exploratory analyses indicated that inclusion of other variables in the regression model might be warranted. It is well established that there are gender differences on a number of personality variables. Furthermore, in the current sample men and women differed in their level of conservatism ( M = 669.10, SD = 150.68 and M = 636.50, SD = 138.31 respectively; t (666) = 2.914, p = .004), their self-rated likelihood of sharing ( M = 10.41, SD = 8.33 and M = 7.60, SD = 6.38 respectively; t (589.60) = 4.928, p < .001; adjusted df used due to heterogeneity of variance, Levene’s F = 35.99, p < .001), and their belief that the stories were true ( M = 7.16, SD = 3.22 and M = 6.52, SD = 3.12 respectively; t (668) = 2.574, p = .010). Education level was found to correlate positively with NMLS scores ( r = .210, N = 651, p < .001). Level of Facebook use correlated significantly with age ( r = -.126, N = 669 , p = .001), education ( r = .082, N = 671, p = .034), NMLS ( r = .170, N = 652, p < .001), with likelihood of sharing ( r = .079, N = 672, p = .040), and with likelihood of having seen the stimuli before ( r = .107, N = 672, p = .006). Self-reported belief that respondents had seen the stories before also correlated significantly with likelihood of sharing ( r = .420, N = 672, p < .001), and a number of other predictor variables.

Accordingly, a further regression analysis was performed, including these additional predictors (gender, education, level of Facebook use, belief they had seen the stories before). Given inclusion of gender as a predictor variable, the two respondents who did not report their gender as either male or female were excluded from further analysis. The analysis, summarised in Table 4 , indicated that the model explained 43% of the variance in self-reported likelihood of sharing the three disinformation items. Neither the authoritativeness of the story source, nor consensus information associated with the stories, was a significant predictor.

[Table 4: https://doi.org/10.1371/journal.pone.0239666.t004]

Consistency of the items with participant attitudes (conservatism) was important, with a positive and statistically significant relationship between conservatism and likelihood of sharing. The only personality variable predicting sharing was Agreeableness, with less agreeable people giving higher ratings of likelihood of sharing. In terms of demographic characteristics, gender and education were statistically significant predictors, with men and less-educated people reporting a higher likelihood of sharing. Finally, people reported a greater likelihood of sharing the items if they believed they were likely to be true, and if they thought they had seen them before.

Participants had also been asked about their historical sharing of untrue political stories, both unknowing and deliberate. 102 out of 672 participants (15.2%) indicated that they had ever ‘shared a political news story online that they later found out was made up’, while 64 out of 672 indicated they had shared one that they ‘thought AT THE TIME was made up’ (9.5%). Predictors of whether or not people had shared untrue material under both sets of circumstances were examined using logistic regressions, with the same sets of participant-level predictors.
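As with the main regression, these logistic regressions were run in SPSS. A minimal sketch of one of them in Python, again with a hypothetical data file and column names (the binary outcome codes whether a respondent reported having unknowingly shared untrue material), might look like this:

```python
# Sketch of a logistic regression predicting self-reported past sharing of untrue
# material (1 = yes, 0 = no) from participant-level predictors. The data file and
# column names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("study1.csv")

predictors = ["nmls", "conservatism", "age", "gender_male", "education", "facebook_use",
              "extraversion", "conscientiousness", "agreeableness", "openness", "neuroticism"]
data = df[predictors + ["shared_unknowingly"]].dropna()

X = sm.add_constant(data[predictors])
y = data["shared_unknowingly"]

logit_model = sm.Logit(y, X).fit()
print(logit_model.summary())
print(np.exp(logit_model.params))  # coefficients expressed as odds ratios
```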

Having unknowingly shared untrue material ( Table 5 ) was significantly predicted by lower Conscientiousness, lower Agreeableness, and lower age. Having shared material known to be untrue at the time ( Table 6 ) was significantly predicted by lower Agreeableness and lower age.

[Table 5: https://doi.org/10.1371/journal.pone.0239666.t005]

[Table 6: https://doi.org/10.1371/journal.pone.0239666.t006]

The main analysis in this study ( Table 4 ) provided limited support for the hypotheses. Contrary to hypotheses 1, 2, and 4, neither consensus markers, authoritativeness of source, nor new media literacy were associated with self-rated likelihood of sharing the disinformation stories. However, in line with hypothesis 3, higher levels of conservatism were associated with higher likelihood of sharing disinformation. This finding supports the proposition that we are more likely to share things that are consistent with our pre-existing beliefs, as all the stimuli were right-wing in orientation. An alternative explanation might be that more conservative people are simply more likely to share disinformation. However, as well as lacking a solid rationale, this explanation is not supported by the fact that conservatism did not seem to be associated with self-reported historical sharing (Tables 5 and 6 ).

The strongest predictors of likelihood of sharing were belief that the stories were true, and likelihood of having seen them before. Belief in the truth of the stories provides further evidence for the role of consistency (hypothesis 3), in that we are more likely to share things we believe are true. The association with likely previous exposure to the materials is consistent with other recent research [ 43 , 44 ] that found that prior exposure to ‘fake news’ headlines led to higher belief in their accuracy and reduced belief that it would be unethical to share them.

Of the personality variables, only Agreeableness was a significant predictor, with less agreeable people rating themselves as more likely to share the stimuli. This is consistent with previous findings [ 24 ] that less agreeable people reported they were more likely to share a critical political message.

Lower education levels were associated with a higher self-reported likelihood of sharing. It is possible that less educated people may be more susceptible to online influence, given work finding that less educated people were more influenced by micro-targeted political advertising on Facebook [ 45 ].

Finally, gender was found to be an important variable, with men reporting a higher likelihood of sharing the disinformation messages than women. This was unanticipated: while there are a number of gender-related characteristics (e.g. personality traits) that were thought might be important, there were no a priori grounds to expect that gender itself would be a predictor variable.

Study 1 also examined predictors of reported historical sharing of false political information. Consistent with real-world data [ 16 ], and past representative surveys [e.g. 31], a minority of respondents reported such past sharing. Unknowingly sharing false political stories was predicted by low Conscientiousness, low Agreeableness, and lower age, while knowingly sharing false material was predicted only by lower Agreeableness and lower age. The effect of Agreeableness is consistent with the findings from the main analysis and from [ 24 ]. The finding that Conscientiousness influenced accidental, but not deliberate, sharing is consistent with the idea that less conscientious people are less likely to check the details or veracity of a story before sharing it. Clearly this tendency would not apply to deliberate sharing of falsehoods. The age effect is harder to explain, especially given evidence [ 16 ] that older people were more likely to share material from fake news sites. One possible explanation is that younger people are more active on social media, so would be more likely to share any kind of article. Another possibility is that they are more likely to engage in sharing humorous political memes, which could often be classed as false political stories.

Study 2 set out to repeat Study 1, but presented the materials as if they had been posted on Twitter rather than Facebook. The purpose of this was to test whether the observed effects applied across different platforms. Research participants have reported using ‘likes’ on Twitter in a more considered manner than on Facebook [ 12 ], raising the possibility that heuristics might be less important for this platform. The study was completed online, using paid respondents sourced from the Prolific research panel ( www.prolific.co ).

The methodology exactly replicated that of Study 1, except in the case of details noted below. The planned analysis was revised to include the expanded set of predictors eventually used in Study 1 (see Table 4 ).

Measures and materials were the same as used in Study 1. The key difference from Study 1 was in the presentation of the three stimuli, which were portrayed as having been posted to Twitter rather than Facebook. For the authoritativeness manipulation, the screen names of the sources were accompanied by @usernames, as is conventional on Twitter. For the consensus manipulation, ‘retweets’ were displayed rather than ‘shares’, and the appropriate icons for Twitter were used. Participants also indicated their level of Twitter, rather than Facebook, use.

The procedure replicated Study 1, save that in this case the NMLS was presented on a single page. Before participants saw each of the three disinformation items, the introductory paragraph stated “A friend of yours recently shared this on Twitter, commenting that they thought it was important and asking all their friends to retweet it:”, and they were asked to indicate the likelihood of them ‘retweeting’ rather than ‘sharing’ the post.

Data submissions were initially obtained from 709 participants. A series of checks were performed to ensure data quality, resulting in a number of responses being excluded. One individual declined consent. Eleven were judged to have responded inauthentically, with the same responses to all items in substantive sections of the questionnaire (‘straightlining’). Twenty were not active Twitter users: three individuals visited Twitter ‘not at all’ and seventeen ‘less often’ than every few weeks. Three participants responded unrealistically quickly, with response durations shorter than four minutes (the same value used as a speeding check by Qualtrics in Study 1). All of these respondents were removed, leaving N = 674. These checks and exclusions were carried out prior to any data analysis.

The target sample size was planned to exceed N = 614, as in Study 1. No attempt was made to recruit a demographically representative sample: instead, sampling quotas were used to ensure the sample was not homogenous with respect to education (pre-degree vs. undergraduate degree or above), age (under 40 vs. over 40) and political preference (left, centre or right wing orientation). Additionally, participants had to be UK nationals resident in the UK; active Twitter users; and not participants in prior studies related to this one. Each participant received a reward of £1.25. Participant demographics are shown in Table 1 (column 2). For the focal analysis in this study, the sample size conferred 94.6% power to detect R 2 = .04 in a multiple regression with 15 predictors (2-tailed, alpha = .05).
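The quoted power figure can be approximated from the noncentral F distribution, taking Cohen's f-squared as R-squared/(1 - R-squared) and the noncentrality parameter as f-squared times N. This is a sketch under one common convention; the software and settings used for the original calculation are not stated in the text.

```python
# Sketch of the power calculation for detecting R-squared = .04 in a multiple
# regression with 15 predictors, alpha = .05, N = 674 (the Study 2 figures).
# Uses the noncentral F distribution with noncentrality = f2 * N, one common
# convention; the original calculation's software and settings are not stated.
from scipy.stats import f, ncf

N, k, alpha, r2 = 674, 15, 0.05, 0.04
f2 = r2 / (1 - r2)                      # Cohen's f-squared
df_num, df_den = k, N - k - 1
noncentrality = f2 * N

f_crit = f.ppf(1 - alpha, df_num, df_den)
power = ncf.sf(f_crit, df_num, df_den, noncentrality)
print(f"Power = {power:.3f}")           # should land close to the reported 94.6%
```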

Descriptive statistics are summarised in Table 7 . All scales had acceptable reliability. The main dependent variable, likelihood of sharing, again had a very skewed distribution with a strong floor effect.

[Table 7: https://doi.org/10.1371/journal.pone.0239666.t007]

To simultaneously test hypotheses 1–4, a multiple regression analysis was carried out using the expanded predictor set from Study 1. Given inclusion of gender as a predictor variable, the three respondents who did not report their gender as either male or female were excluded from further analysis. The analysis, summarised in Table 8 , indicated that the model explained 46% of the variance in self-reported likelihood of sharing the three disinformation items. Neither the authoritativeness of the story source, nor consensus information associated with the stories, nor new media literacy, was a significant predictor. Consistency of the items with participant attitudes (conservatism) was important, with a positive and statistically significant relationship between conservatism and likelihood of sharing. No personality variable predicted ratings of likelihood of sharing. In terms of demographic characteristics, gender and education were statistically significant predictors, with men and less-educated people reporting a higher likelihood of sharing. Finally, people reported a greater likelihood of sharing the items if they believed they were likely to be true, and if they thought they had seen them before.

[Table 8: https://doi.org/10.1371/journal.pone.0239666.t008]

Participants had also been asked about their historical sharing of untrue political stories, both unknowing and deliberate. 102 out of 674 participants (15.1%) indicated that they had ever ‘shared a political news story online that they later found out was made up’, while 42 out of 674 indicated they had shared one that they ‘thought AT THE TIME was made up’ (6.2%). Predictors of whether or not people had shared untrue material under both sets of circumstances were examined using logistic regressions, with the same sets of participant-level predictors.

Having unknowingly shared untrue material ( Table 9 ) was significantly predicted by higher Extraversion and higher levels of Twitter use. Having shared material known to be untrue at the time ( Table 10 ) was significantly predicted by higher Neuroticism and being male.

[Table 9: https://doi.org/10.1371/journal.pone.0239666.t009]

[Table 10: https://doi.org/10.1371/journal.pone.0239666.t010]

For the main analysis, Study 2 replicates a number of key findings from Study 1. In particular, hypotheses 1, 2 and 4 were again unsupported by the results: consensus, authoritativeness, and new media literacy were not associated with self-rated likelihood of retweeting the disinformation stories. Evidence consistent with hypothesis 3 was again found, with higher levels of conservatism being associated with higher likelihood of retweeting. Again, the strongest predictor of likelihood of sharing was belief that the stories were true, while likelihood of having seen them before was again statistically significant. The only difference was in the role of personality: there was no association between Agreeableness (or any other personality variable) and likelihood of retweeting the material.

However, for self-reports of historical sharing of false political stories, the pattern of results was different. None of the previous results were replicated, and new predictors were observed for both unknowing and deliberate sharing. For unintentional sharing, the link with higher levels of Twitter use makes sense, as higher usage confers more opportunities to accidentally share untruths. Higher Extraversion has also been found to correlate with higher levels of social media use [ 32 ], so the same logic may apply for that variable. For intentional sharing, the finding that men were more likely to share false political information is similar to findings from Study 1. The link with higher Neuroticism is less easy to explain: one possibility is that more neurotic people are more likely to share falsehoods that would reduce the chances of an event they worry about (for example, spreading untruths about a political candidate whose election they fear).

Given that these questions asked about past behaviour in general, and were not tied to the Twitter stimuli used in this study, it is not clear why the pattern of results should have differed from those in Study 1. One possibility is that the sample characteristics were different (this sample was younger, better educated, and drawn from a different source). Another realistic possibility, especially given the typically low effect sizes and large samples tested, is that these are simply ‘crud’ correlations [ 46 ] rather than useful findings. Going forward, it is likely to be more informative to focus on results that replicate across multiple studies or conceptually similar analyses.

Study 3 set out to repeat Study 1, but presented the materials as if they had been posted on Instagram rather than Facebook. Instagram presents an interesting contrast, as the mechanisms of engagement with material are different (for example there is no native sharing mechanism). Nonetheless, it has been identified as an important theater for disinformation operations [ 47 ]. Study 3 therefore sought to establish whether the same factors affecting sharing on Facebook also affect engagement with false material on Instagram. The study was completed online, using paid respondents sourced from the Prolific research panel.

Measures and materials were the same as used in Study 1. The only difference from Study 1 was in the presentation of the three stimuli, which were portrayed as having been posted to Instagram rather than Facebook. For the consensus manipulation, ‘likes’ were used as the sole consensus indicator, and the appropriate icons for Instagram were used.

The procedure replicated Study 1, save that in this case the NMLS was presented on a single page. Before participants saw each of the three disinformation items, the introductory paragraph stated “Imagine that you saw this post on your Instagram feed:” and they were asked to indicate the probability of them ‘liking’ the post.

Data submissions were initially obtained from 692 participants. A series of checks were performed to ensure data quality, resulting in a number of responses being excluded. Four individuals declined consent. Twenty-one were judged to have responded inauthentically, with the same scores to substantive sections of the questionnaire (‘straightlining’). Five did not indicate they were located in the UK. Ten were not active Instagram users: three individuals visited Instagram ‘not at all’ and seven ‘less often’ than every few weeks. Two participants responded unrealistically quickly, with response durations shorter than four minutes (the same value used as a speeding check by Qualtrics in Study 1). All of these respondents were removed, leaving N = 650. These checks and exclusions were carried out prior to any data analysis.

The target sample size was planned to exceed N = 614, as in Study 1. No attempt was made to recruit a demographically representative sample: instead, sampling quotas were used to ensure the sample was not homogenous with respect to education (pre-degree vs. undergraduate degree or above) and political preference (left, centre or right-wing orientation). Sampling was not stratified by age, given that Instagram use is associated with younger ages, and the number of older Instagram users in the Prolific pool was limited at the time the study was carried out. Additionally, participants had to be UK nationals resident in the UK; active Instagram users; and not participants in prior studies related to this one. Each participant received a reward of £1.25. Participant demographics are shown in Table 1 (column 3). For the focal analysis in this study, the sample size conferred 93.6% power to detect R 2 = .04 in a multiple regression with 15 predictors (2-tailed, alpha = .05).

Descriptive statistics are summarised in Table 11 . All scales had acceptable reliability. The main dependent variable, probability of liking, again had a very skewed distribution with a strong floor effect.

[Table 11: https://doi.org/10.1371/journal.pone.0239666.t011]

To simultaneously test hypotheses 1–4, a multiple regression analysis was carried out using the expanded predictor set from Study 1. Given inclusion of gender as a predictor variable, the three respondents who did not report their gender as either male or female were excluded from further analysis. The analysis, summarised in Table 12 , indicated that the model explained 24% of the variance in self-reported likelihood of sharing the three disinformation items. Neither the authoritativeness of the story source, consensus information associated with the stories, nor consistency of the items with participant attitudes (conservatism) was a statistically significant predictor. Extraversion positively and Conscientiousness negatively predicted ratings of likelihood of sharing. In terms of demographic characteristics, men and younger participants reported a higher likelihood of sharing. Finally, people reported a greater likelihood of sharing the items if they believed they were likely to be true, and if they thought they had seen them before.

[Table 12: https://doi.org/10.1371/journal.pone.0239666.t012]

Participants had also been asked about their historical sharing of untrue political stories, both unknowing and deliberate. Eighty-five out of 650 participants (13.1%) who answered the question indicated that they had ever ‘shared a political news story online that they later found out was made up’, while 50 out of 650 indicated they had shared one that they ‘thought AT THE TIME was made up’ (7.7%). Predictors of whether or not people had shared untrue material under both sets of circumstances were examined using logistic regressions, with the same sets of participant-level predictors.

Having unknowingly shared untrue material ( Table 13 ) was significantly predicted by higher Extraversion, lower Conscientiousness and male gender. Having shared material known to be untrue at the time ( Table 14 ) was significantly predicted by higher New Media Literacy, higher Conservatism, and higher Neuroticism.

[Table 13: https://doi.org/10.1371/journal.pone.0239666.t013]

[Table 14: https://doi.org/10.1371/journal.pone.0239666.t014]

As in Studies 1 and 2, results were not consistent with hypotheses 1, 2 and 4: consensus, authoritativeness, and new media literacy were not associated with self-rated probability of liking the disinformation stories. In contrast to Studies 1 and 2, however, conservatism did not predict liking the stories. Belief that the stories were true was again the strongest predictor, while likelihood of having seen them before was again statistically significant. Among the personality variables, lower Agreeableness returned as a predictor of likely engagement with the stories, consistent with Study 1 but not Study 2. Lower age predicted likely engagement, a new finding, while being male predicted likely engagement, as found in both Study 1 and Study 2. Unlike in Studies 1 and 2, education had no effect.

With regard to historical accidental sharing, higher Extraversion was a predictor, as in Study 2, while lower Conscientiousness was a predictor, as in Study 1. Men were more likely to have shared accidentally. Deliberate historical sharing was predicted by higher levels of New Media Literacy. This is counter-intuitive and undermines the argument that people share things because they know no better. In fact, in the context of deliberate deception, motivated individuals higher in digital literacy may actually be better equipped to spread untruths. Conservatism was also a predictor here. This could again be a reflection of the consistency hypothesis, given that there are high levels of conservative-oriented disinformation circulating. Finally, as in Study 2, higher Neuroticism predicted deliberate historical sharing.

Study 4 set out to repeat Study 1, but with a US sample and using US-centric materials. The purpose of this was to test whether the observed effects applied across different countries. The study was completed online, using as participants members of research panels sourced through the research company Qualtrics.

Measures and materials were the same as used in Study 1. The only difference from Study 1 was in the contents of the three disinformation exemplars, which were designed to be relevant to a US rather than UK audience. Two of the stimuli were sourced from the website Infowars.com, while a third was a story described as untrue by the fact-checking website Politifact.com. In the same way as in Study 1, the right-wing focus of the stories was again established in pilot work, in which a US sample (N = 40) saw seven stories including these and rated their political orientation and likelihood of being shared. All were rated above the mid-point of an 11-point scale asking “To what extent do you think this post was designed to appeal to people with right wing (politically conservative) views?”, anchored at “Very left wing oriented” and “Very right wing oriented”. For the least right-wing of the three stories selected, a one-sample t-test comparing the mean rating with the midpoint of the scale showed it was statistically significantly higher, t(39) = 6.729, p < .001, d = 1.07. One of the stimuli, also used in Studies 1–3, was titled “Revealed: UN plan to flood America with 600 million migrants”. Another was titled “Flashback: Obama’s attack on internet freedom”, subtitled ‘Globalists, Deep State continually targeting America’s internet dominance’, featuring further anti-Obama, anti-China and anti-‘Big Tech’ sentiment, and an image of Barack Obama apparently drinking wine with a person of East Asian appearance. The third was text based and featured material titled “Surgeon who exposed Clinton foundation corruption in Haiti found dead in apartment with stab wound to the chest”.

The materials used to manipulate authoritativeness (Facebook usernames shown as sources of the stories) were the same as used in Studies 1–3. These were retained because pilot work indicated that the higher and lower sets differed in authoritativeness for US audiences in the same way as for UK audiences. A sample of 30 US participants again each rated a selection of 9 usernames, including these 6, for the extent to which each was “likely to be an authoritative source—that is, likely to be a credible and reliable source of information”. A within-subjects t-test indicated that mean authoritativeness ratings for the ‘higher’ group were statistically significantly higher than for the ‘lower’ group (t(29) = -9.355, p < .001, d_z = 1.70).
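Both pilot checks reported for Study 4 are standard t-tests: a one-sample test against the scale midpoint for the story ratings, and a within-subjects (paired) test with Cohen's d_z for the username ratings. A minimal sketch using simulated ratings, not the pilot data:

```r
set.seed(1)
# Simulated stand-in ratings from 30 pilot raters on a 1-10 scale (illustrative):
auth_higher <- pmin(pmax(round(rnorm(30, mean = 7.5, sd = 1.5)), 1), 10)
auth_lower  <- pmin(pmax(round(rnorm(30, mean = 5.0, sd = 1.5)), 1), 10)

# Within-subjects comparison of the 'higher' and 'lower' username sets:
t.test(auth_higher, auth_lower, paired = TRUE)
d_z <- mean(auth_higher - auth_lower) / sd(auth_higher - auth_lower)  # Cohen's d_z
d_z

# The story pilot used a one-sample test against the scale midpoint, e.g.
# t.test(right_wing_rating, mu = 6)   # midpoint of an 11-point scale coded 1-11 (assumption)
```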

The procedure replicated Study 1, save that in this case the NMLS was presented across two pages.

Prior to delivery of the sample, Qualtrics performed a series of quality checks and ‘data scrubbing’ procedures to remove and replace participants with response patterns suggesting inauthentic or inattentive responding. These included speeding checks and examination of response patterns. On delivery of the initial sample ( N = 660) further screening procedures were performed. Nine respondents were identified who had responded with the same scores to substantive sections of the questionnaire (‘straightlining’), and one who had not completed any of the personality items. Twelve respondents were not active Facebook users: Six reported using Facebook ‘not at all’ and a further six less often than ‘every few weeks’. All of these were removed, leaving N = 638. These checks and exclusions were carried out prior to any data analysis.
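For illustration, screening of this general kind could be expressed as a small helper function like the one below. The column names, response labels and the straightlining criterion (zero variance across the personality items) are assumptions made for the sketch, not the exact procedure used by Qualtrics or the authors.

```r
# Illustrative screening helper: 'personality_items' is a vector of column
# names, and 'facebook_use' a factor with hypothetical response labels.
screen_sample <- function(d, personality_items) {
  item_sd <- apply(d[, personality_items], 1, sd, na.rm = TRUE)
  straightliner  <- !is.na(item_sd) & item_sd == 0              # identical answers throughout
  no_personality <- rowSums(!is.na(d[, personality_items])) == 0  # completed none of the items
  non_user       <- d$facebook_use %in% c("not at all", "less than every few weeks")
  d[!(straightliner | no_personality | non_user), ]
}
```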

The target sample size was planned to exceed N = 614, as in Study 1. Qualtrics was contracted to provide a sample of active Facebook users that was broadly representative of the US population in terms of gender; education level; and age profile (18+). Sampling quotas were used to assemble a sample comprising approximately one third each self-describing as left-wing, centre and right-wing in their political orientation. Sampling errors on the part of Qualtrics led to over-recruitment of individuals aged 65 years, who make up 94 of the 160 individuals in the 60–69 age group. As a consequence, the 60–69 age group is itself over-represented in this sample compared to the broader US population. Participant demographics are shown in Table 1, column 4. For the focal analysis in this study, the sample size conferred 92.6% power to detect R² = .04 in a multiple regression with 15 predictors (2-tailed, alpha = .05).
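The reported power figure can be checked approximately with a standard F-test power calculation for multiple regression, for example using the pwr package. The sketch below is not the authors' own calculation, and the exact value depends on the final analysed N.

```r
library(pwr)                      # install.packages("pwr") if needed

n  <- 637                         # analysed sample (638 minus one exclusion)
u  <- 15                          # number of predictors in the focal regression
f2 <- 0.04 / (1 - 0.04)           # Cohen's f2 corresponding to R-squared = .04
pwr.f2.test(u = u, v = n - u - 1, f2 = f2, sig.level = 0.05)
# The 'power' entry should come out close to the 92.6% reported in the text.
```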

Descriptive statistics are summarised in Table 15 . All scales had acceptable reliability. The main dependent variable, likelihood of sharing, again had a very skewed distribution with a strong floor effect.

Table 15: https://doi.org/10.1371/journal.pone.0239666.t015

To simultaneously test hypotheses 1–4 a multiple regression analysis was carried out using the expanded predictor set from Study 1. Given inclusion of gender as a predictor variable, the one respondent who did not report their gender as either male or female was excluded from further analysis. The analysis, summarised in Table 16 , indicated that the model explained 56% of the variance in self-reported likelihood of sharing the three disinformation items. Neither the authoritativeness of the story source, consensus information associated with the stories, nor consistency of the items with participant attitudes (conservatism) was a statistically significant predictor. Extraversion positively predicted ratings of likelihood of sharing. In terms of demographic characteristics, age was a significant predictor, with younger people reporting a higher likelihood of sharing. Finally, people reported a greater likelihood of sharing the items if they believed they were likely to be true, and if they thought they had seen them before.

Table 16: https://doi.org/10.1371/journal.pone.0239666.t016

Participants had also been asked about their historical sharing of untrue political stories, both unknowing and deliberate. Of the 638 participants, 185 (29.0%) indicated that they had ever ‘shared a political news story online that they later found out was made up’, while 132 out of 638 indicated they had shared one that they ‘thought AT THE TIME was made up’ (20.7%). Predictors of whether or not people had shared untrue material under both sets of circumstances were examined using logistic regressions, with the same sets of participant-level predictors.

Having unknowingly shared untrue material ( Table 17 ) was significantly predicted by higher New Media Literacy, lower Conscientiousness, higher education, and higher levels of Facebook use. Having shared material known to be untrue at the time ( Table 18 ) was significantly predicted by higher Extraversion, lower Agreeableness, younger age, and higher levels of Facebook use.

Table 17: https://doi.org/10.1371/journal.pone.0239666.t017

Table 18: https://doi.org/10.1371/journal.pone.0239666.t018

Again, the pattern of results emerging from Study 4 had some similarities to, but also some differences from, Studies 1–3. Once again, hypotheses 1, 2 and 4 were unsupported by the results. Similarly to Study 3, but unlike Studies 1 and 2, conservatism (the proxy for consistency) did not predict sharing the stories. Belief that the stories were true, and likelihood of having seen them before, were the strongest predictors. Higher levels of Extraversion (a new finding) and lower age (as in Study 3) were associated with higher reported likelihood of sharing the stimuli.

For historical sharing, for the first time, and counterintuitively, new media literacy was associated with higher likelihood of having shared false material unknowingly. As in Studies 1 and 3, lower Conscientiousness was also important. Counterintuitively, higher education levels were associated with higher unintentional sharing, as were higher levels of Facebook use. For intentional sharing, higher Extraversion was a predictor, as were lower Agreeableness, younger age and higher levels of Facebook use.

General discussion

When interpreting the overall pattern of results from Studies 1–4, given the weakness of most of the associations, it is likely to be most useful to focus on relationships that are replicated across studies and to disregard ‘one-off’ findings. Tables 19–21 provide a summary of the statistically significant predictors in each of the studies. It is clear that two variables consistently predicted self-rated likelihood of sharing disinformation exemplars: belief that the stories were likely to be true, and likely prior familiarity with the stories. It is also clear that three key variables did not: markers of authority, markers of consensus and digital literacy.

Table 19: https://doi.org/10.1371/journal.pone.0239666.t019

Table 20: https://doi.org/10.1371/journal.pone.0239666.t020

Table 21: https://doi.org/10.1371/journal.pone.0239666.t021

Hypothesis 1 predicted that stories portrayed as coming from more authoritative sources were more likely to be shared. However, this was not observed in any of the four studies. One interpretation of this is that the manipulation failed. However, pilot work (see Study 1, Study 4) with comparable samples indicated that people did see the sources as differing in authoritativeness. The failure to find the predicted effect could also be due to use of simulated scenarios–though care was taken to ensure they resembled reality–or weaknesses in the methodology, such as the distributional properties of the dependent variables. However, consistent relationships between other predictors and the dependent variable were observed. Thus, the current studies provide no evidence that authoritativeness of a source influences sharing behaviour.

Hypothesis 2 predicted that stories portrayed as having a higher degree of consensus in audience reactions (i.e. high numbers of people had previously shared them) would be more likely to be shared. In fact, consensus markers had no effect on self-reported probability of sharing or liking the stories. Therefore, the current studies provide no evidence that indicators of ‘social proof’ influence participant reactions to the stimuli.

Hypothesis 3 was that people would be more likely to share materials consistent with their pre-existing beliefs. This was operationalised by measuring participants’ political orientation (overall level of conservatism) and using stimuli that were right-wing in their orientation. In Studies 1 and 2, more conservative people were more likely to share the materials. Further evidence for hypothesis 3 comes from the finding, across all studies, that level of belief the stories were “accurate and truthful” was the strongest predictor of likelihood of sharing. This is again in line with the consistency hypothesis: people are behaving in ways consistent with their beliefs. The finding from Study 3 that more conservative people were more likely to have historically shared material they knew to be untrue could also be in line with this hypothesis, given that a great many of the untrue political stories circulated online are conservative-oriented.

Hypothesis 4, that people lower in digital literacy would be more likely to engage with disinformation, was again not supported. As noted earlier, measurement of digital literacy is problematic. However, pilot work showed that the New Media Literacy Scale did differentiate between people with higher and lower levels of social media use in the expected manner, so it is likely to have a degree of validity. In Study 4, higher NMLS scores were associated with having unwittingly shared false material in the past, which is counterintuitive. However, this may reflect the fact that more digitally literate people are better able to recognise, in hindsight, that material they shared was false. Higher NMLS scores were also associated with deliberately sharing falsehoods in Study 3. This could be attributable to the greater ease with which digitally literate individuals can do such things, if motivated to do so.

A number of other variables were included on an exploratory basis, or for the purpose of controlling for possible confounds. Of these, the most important was participants’ ratings of the likelihood that they had seen the stimuli before. This variable was originally included in the design so that any familiarity effects could be controlled for when evaluating the effect of other variables. In fact, rated likelihood of having seen the materials before was the second strongest predictor of likelihood of sharing them: it was a predictor in all four studies, and in the Facebook studies (1 and 4) it was the second most important variable. This is consistent with work on prior exposure to false material online, where prior exposure to fake news headlines increased participants’ ratings of their accuracy [44]. Furthermore, it has been found that prior exposure to fake-news headlines reduced participants’ ratings of how unethical it was to share or publish the material, even when it was clearly marked as false [43]. Thus, repeated exposure to false material may increase our likelihood of sharing it. It is known that repeated exposure to statements increases people’s subjective ratings of their truth [48]. However, there must be more going on here, because the regression analyses indicated that the familiarity effect was independent of the level of belief that the material was true. When considered alongside work finding that amplification of content by bot networks led to greater levels of human sharing [21], the implication is that repeated actual exposure to the materials is what prompts people to share them, not metrics of consensus such as the number of likes or shares displayed beside an article.

Of the five dimensions of personality measured, four (Agreeableness, Extraversion, Neuroticism and Conscientiousness) were predictors of either current or historical sharing in one or more studies. Consistent with findings from [24], Studies 1 and 3 found that lower Agreeableness was associated with greater probability of sharing or liking the stories. It was also associated with accidental historical sharing in Study 1, and deliberate historical sharing in Studies 1 and 4. In contrast to this, past research on personality and social media behaviour indicates that more agreeable people are more likely to share information on social media: [49] reported that this effect was mediated by trust, while [32] found that higher Agreeableness was associated with higher levels of social media use in general. Given those findings, it is likely that the current results are specific to disinformation stimuli rather than social sharing in general. Agreeableness could potentially interact with the source of the information: more agreeable people might conceivably be more eager to please those close to them. However, while it is possible that Agreeableness interacted in some way with the framing of the material as having been shared by ‘a friend’ in Study 1, Study 3 had no such framing. More broadly, the nature of the stories may be important: disinformation items are typically critical or hostile in nature. This may mean they are more likely to be shared by disagreeable people, who may themselves be critical in their outlook and unconcerned about offending others. Furthermore, Agreeableness is associated with general trusting behaviour. It may be that disagreeable people are therefore more likely to endorse conspiracist material, or other items consistent with a lack of trust in politicians or other public figures.

Lower Conscientiousness was associated with accidental historical sharing of false political stories in Studies 1, 3 and 4. This is unsurprising, as less conscientious people would be less likely to check the veracity of a story before sharing it. The lack of an association with deliberate historical sharing reinforces this view.

Higher Extraversion was associated with probability of sharing in Study 4, with accidental historical sharing in Studies 2 and 3, and with deliberate historical sharing in Study 4. Higher Neuroticism was associated with deliberate historical sharing in Studies 2 and 3. All of these relationships may simply reflect a higher tendency on the part of extraverted and neurotic individuals to use social media more [32].

There are clearly some links between personality and sharing of disinformation. However, the relationships are weak and inconsistent across studies. It is possible that different traits affect different behaviours: for example, low Conscientiousness is associated with accidental but not deliberate sharing, while high Neuroticism is associated with deliberate but not accidental sharing. Thus, links between some personality traits and the spread of disinformation may be context- and motivation-specific, rather than reflecting blanket associations. However, lower Agreeableness, and to a lesser extent higher Extraversion, may predict an overall tendency to spread this kind of material.

Demographic variables were also measured and included in the analyses. Younger individuals rated themselves as more likely to engage with the disinformation stimuli in Studies 3 and 4, and were more likely to have shared untrue political stories in the past either accidentally (Study 1) or deliberately (Studies 1 and 4). This runs counter to findings that older adults were much more likely to have spread material from ‘fake news’ domains [ 16 ]. It is possible that the current findings simply reflect a tendency of younger people to be more active on social media.

People with lower levels of education reported a greater likelihood of sharing the disinformation stories in Studies 1 and 2. Counterintuitively, more educated people were more likely to have accidentally shared false material in the past (Study 4). One possible explanation is that more educated people are more likely to have realised that they had done this, so the effect in Study 4 reflects an influence on reporting of the behaviour rather than on the behaviour itself.

In each of Studies 1, 2 and 3, men reported a greater likelihood of sharing or liking the stimuli. Men were also more likely to have shared false material in the past unintentionally (Study 3) or deliberately (Study 2). Given its replicability, this would seem to be a genuine relationship, but one which is not easy to explain.

Finally, the level of use of particular platforms (Facebook, Twitter or Instagram) did not predict likelihood of sharing the stimuli in any study. Level of use of Twitter (Study 2) predicted accidental sharing of falsehoods, while Facebook use predicted both accidental and deliberate sharing (Study 4). For historical sharing, this may be attributable to a volume effect: the more you use the platforms, the more likely you are to do these things. It should be noted that the level of use metric lacked granularity and had a strong ceiling effect, with most people reporting the highest use level in each case.

In all four studies, a minority of respondents indicated that they had previously shared political disinformation they had encountered online, either by mistake or deliberately. The proportion who had done each varied across the four studies, likely as a function of the population sampled (13.1%–29.0% accidentally; 6.2%–20.7% deliberately), but the figures are of a similar magnitude to those reported elsewhere [31, 16]. Even if the proportion of social media users who deliberately share false information is just 6.2%, the lowest figure found here, that is still a very large number of people who are actively and knowingly spreading untruths.

The current results indicate that a number of variables predict onward sharing of disinformation. However, most of these relationships are very small. It has been argued that the minimum effect size for a predictor that would have real-world importance in social science data is β = .2 [42]. Considering the effect sizes for the predictors in Tables 4, 8, 12 and 16, only belief that the stories are true exceeds this benchmark in every study, while probability of having seen the stories before exceeded it in Studies 1 and 4. None of the other relationships reported exceeded the threshold. This has implications for the practical importance of these findings, in terms of informing interventions to counteract disinformation.

Practical implications

Some of the key conclusions in this set of studies arise from the failure to find evidence supporting an effect. Proceeding from such findings to a firm conclusion is a logically dangerous endeavour: absence of evidence is not, of course, evidence of absence. However, given the evidence from pilot studies that the manipulations were appropriate; the associations of the dependent measures with other variables; and the high levels of power to detect the specified effects, it is possible to say with some confidence that hypotheses 1, 2 and 4 are not supported by the current data. This means that the current project does not provide any evidence that interventions based on these would be of value.

This is particularly important for the findings around digital literacy. Raising digital media literacy is a common and appealing policy position for bodies concerned with disinformation (e.g. [1]). There is evidence from a number of trials that it can be effective in the populations studied. However, no support was found here for the idea that digital literacy has a role to play in the spread of disinformation. This could potentially be attributed to the methodology in this study. However, some participants (288 in total across all four studies) reported sharing false political stories that they knew at the time were made up. It is hard to see how raising digital literacy would reduce such deliberate deception. Trying to raise digital literacy across the population is therefore unlikely to ever be a complete solution.

There is evidence that consistency with pre-existing beliefs can be an important factor, especially in relation to beliefs that disinformation stories are accurate and truthful. This implies that interventions are likely to be most effective when targeted at individuals who already hold an opinion or belief, rather than trying to change people’s minds. While this would be more useful to those seeking to spread disinformation, it could also give insights into populations worth targeting with countermessages. Targeting on other variables–personality or demographic–is unlikely to be of value given the low effect sizes. While these variables (perhaps gender and Agreeableness in particular) most likely do play a role, their relative importance seems so low that the information is unlikely to be useful in practice.

Alongside other recent work [43, 44], the current findings suggest that repeated exposure to disinformation materials may increase our likelihood of sharing them, even if we do not believe them. The practical implication would be that to get a message spread online, one should repeat it many times (there is a clear parallel with the ‘repeat the lie often enough’ maxim regarding propaganda). Social proof (markers of consensus) seems unimportant based on the current findings, so there is no point in trying to manipulate the numbers displayed next to a post, as is sometimes done in online marketing. What might be more effective is to have the message posted many times (e.g. by bots) so that people have a greater chance of coming across it repeatedly. This would be true both for disinformation and for counter-messages.

Limitations

As a scenario-based study, the current work has a number of limitations. While it is ethically preferable to field experiments, it suffers from reduced ecological validity and reliance on self-reports rather than genuine behaviour. Questions could be asked, for example, about whether the authoritativeness and consensus manipulations were sufficiently salient to participants (even though they closely mirrored the presentation of this information in real-life settings). Beyond this, questions might be raised about the use of self-reported likelihood of sharing: does sharing intention reflect real sharing behaviour? In fact, there is evidence to suggest that it does, with recent work finding that self-reported willingness to share news headlines on social media paralleled the actual level of sharing of those materials on Twitter [ 50 ].

The scenarios presented were all selected to be right-wing in their orientation, whereas participants spanned the full range from left to right in their political attitudes. This means that consistency was only evaluated with respect to one pole of the right–left dimension. There are a number of other dimensions that have been used as wedge issues in real-world information operations: for example, support for or opposition to the Black Lives Matter movement, action on climate change, or Britain leaving the European Union. The current research only evaluated consistency between attitudes and a single issue. A better test of the consistency hypothesis would be to extend that to evaluation of consistency between attitudes and some of those other issues.

A key issue is the distributions of the main outcome variables, which were heavily skewed with strong floor effects. While they still had sufficient sensitivity to make the regression analyses meaningful, they also meant that any effects found were likely to be attenuated. It may thus be that the current findings underestimate the strength of some of the associations reported.

Another measurement issue is around the index of social media use (Facebook, Twitter, Instagram). As Table 1 shows, in three of the studies over 60% of respondents fall into the highest use category. Again, this weakens the sensitivity of evaluations of these variables as predictors of sharing disinformation.

In order to identify variables associated with sharing disinformation, this research programme took the approach of presenting individuals with examples of disinformation, then testing which of the measured variables was associated with self-reported likelihood of sharing. A shortcoming of this approach is that it does not permit us to evaluate whether the same variables are associated with sharing true information. An alternative design would be to show participants either true or false information, and examine whether the same constructs predict sharing both. This would enable identification of variables differentially impacting the sharing of disinformation but not true information. Complexity arises, however, from the fact that whether a story can be considered disinformation, misinformation, or true information, depends on the observer’s perspective. False material deliberately placed online would be categorized as disinformation. A social media user sharing it in full knowledge that it was untrue would be sharing disinformation. However, if they shared it believing it was actually true, then from an observer’s perspective this would be technically categorised as misinformation (defined as “the inadvertent sharing of false information” [1, p.10]). In fact, from the user’s perspective, it would be true information (because they believe it) even though an omniscient observer would know it was actually false. This points to the importance of further research into user motivations for sharing, which are likely to differ depending on whether or not they believe the material is true.

In three of the four studies (Studies 1, 2 and 4), the stimulus material was introduced as having been posted by a friend who wanted them to share it. This is likely to have boosted the rates of self-reported likelihood of sharing in those studies. Previous work has shown that people rate themselves as more likely to engage with potential disinformation stories posted by a friend, as opposed to a more distant acquaintance [24]. To be clear, this does not compromise the testing of hypotheses in those studies (given that the framing was the same for all participants, in all conditions). It is also a realistic representation of how we may encounter material like this in our social media feeds. However, it does introduce an additional difference between Studies 1, 2 and 4 when compared with Study 3. It would be desirable for further work to check whether the same effects are found when messages are framed as having been posted by people other than friends.

Finally, the time spent reading and reacting to the disinformation stimuli was not measured. It is possible that faster response times would be indicative of more use of heuristics rather than considered thought about the issues. This could profitably be examined, potentially in observational or simulation studies rather than using self-report methodology.

Future work

A number of priorities for future research arise from the current work. First, it is desirable to confirm these findings using real-world behavioural measures rather than simulations. While it is not ethically acceptable to run experimental studies posting false information on social media, it would be possible to do real-world observational work. For example, one could measure digital literacy in a sample of respondents, then do analyses of their past social media sharing behaviour.

Another priority revolves around those individuals who knowingly share false information. Why do they do this? Without understanding the motivations of this group, any interventions aimed at reducing the behaviour are unlikely to be successful. As well as being of academic interest, motivation for sharing false material has been flagged as a gap in our knowledge by key stakeholders [ 7 ].

The current work found that men were more likely to spread disinformation than women. At present, it is not clear why this was the case. Are there gender-linked individual differences that influence the behaviour? Could it be that the subject matter of disinformation stories is stereotypically more interesting to men, or that men think their social networks are more likely to be interested in or sympathetic to them?

While the focus in this paper has been on factors influencing the spread of untruths, it should be remembered that ‘fake news’ is only one element in online information operations. Other tactics and phenomena, such as selective or out-of-context presentation of true information, political memes, and deliberately polarising hyperpartisan communication, are also prevalent. Work is required to establish whether the findings of this project, which relate to disinformation, also apply to those other forms of computational propaganda. Related to this, it would be of value to establish whether the factors found here to influence sharing of untrue information also influence the sharing of true information. This would indicate whether there is anything different about disinformation, and also point to factors that might influence sharing of true information that is selectively presented in information operations.

The current work allows some conclusions to be drawn about the kind of people who are likely to further spread disinformation material they encounter on social media. Typically, these will be people who think the material is likely to be true, or who have beliefs consistent with it. They are likely to have previous familiarity with the materials. They are likely to be younger, male, and less educated. With respect to personality, it is possible that they will tend to be lower in Agreeableness and Conscientiousness, and higher in Extraversion and Neuroticism. With the exception of consistency and prior exposure, all of these effects are weak and may be inconsistent across different populations, platforms, and behaviours (deliberate vs. accidental sharing). The current findings do not suggest they are likely to be influenced by the source of the material they encounter, or by indicators of how many other people have previously engaged with it. No evidence was found that level of literacy regarding new digital media makes much difference to their behaviour. These findings have implications for how governments and other bodies should go about tackling the problem of disinformation in social media.

  • 1. House of Commons Digital, Culture, Media and Sport Committee. Disinformation and ‘fake news’: Final Report. 2019 Feb 2 [cited 18 Feb 2019]. Available from: https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/1791/1791.pdf .
  • 4. Howard PN, Ganesh B, Liotsiou D, Kelly J, François C. The IRA, Social Media and Political Polarization in the United States, 2012–2018. Working Paper 2018.2. 2018 [cited 20 December 2019]. Available from https://comprop.oii.ox.ac.uk/wp-content/uploads/sites/93/2018/12/IRA-Report-2018.pdf .
  • 9. Facebook. What’s the difference between organic, paid and post reach? 2019 [cited 31 July 2019]. Available from: https://www.facebook.com/help/285625061456389 .
  • 11. Petty RE, Cacioppo JT. The Elaboration Likelihood Model of Persuasion. In: Berkowitz L, editor. Advances in Experimental Social Psychology Volume 19. Academic Press; 1986. p. 123–205.
  • 15. Cialdini RB. Influence: The Psychology of Persuasion. New York: HarperCollins; 2009.
  • 17. Festinger L. A theory of cognitive dissonance. Stanford University Press; 1957.
  • 37. Goldberg LR. A broad-bandwidth, public domain, personality inventory measuring the lower-level facets of several five-factor models. In: Mervielde I., Deary I.J., De Fruyt F., Ostendorf F., editors. Personality Psychology in Europe Vol. 7. Tilburg, The Netherlands: Tilburg University Press; 1999. p. 7–28.
  • 39. Costa PT, McCrae RR. Revised NEO Personality Inventory (NEO PI-R) and NEO Five-Factor Inventory (NEO FFI): Professional Manual. Odessa, FL: Psychological Assessment Resources; 1992.
  • 45. Liberini F, Redoano M, Russo A, Cuevas A, Cuevas R. Politics in the Facebook Era. Evidence from the 2016 US Presidential Elections. CAGE Working Paper Series (389). 2018 [cited 17 Dec 2019]. Available from: https://warwick.ac.uk/fac/soc/economics/research/centres/cage/manage/publications/389-2018_redoano.pdf .
  • 47. DiResta R, Shaffer K, Ruppel B et al. The tactics & tropes of the Internet Research Agency. 2018 [cited 12 Dec 2018]. Available from: https://digitalcommons.unl.edu/cgi/viewcontent.cgi?article=1003&context=senatedocs .


Research note: Fighting misinformation or fighting for information?


A wealth of interventions have been devised to reduce belief in fake news or the tendency to share such news. By contrast, interventions aimed at increasing trust in reliable news sources have received less attention. In this article, we show that, given the very limited prevalence of misinformation (including fake news), interventions aimed at reducing acceptance or spread of such news are bound to have very small effects on the overall quality of the information environment, especially compared to interventions aimed at increasing trust in reliable news sources. To make this argument, we simulate the effect that such interventions have on a global information score, which increases when people accept reliable information and decreases when people accept misinformation.

Centre for Culture and Evolution, Brunel University London, UK

Reuters Institute for the Study of Journalism, University of Oxford, UK

Institut Jean Nicod, Département d’études cognitives, ENS, EHESS, PSL University, CNRS, France


Research Question

  • Given limited resources, should we focus our efforts on fighting the spread of misinformation or on supporting the acceptance of reliable information?

Essay Summary

  • To test the efficacy of various interventions aimed at improving the informational environment, we developed a model computing a global information score, which is the share of accepted pieces of reliable information minus the share of accepted pieces of misinformation.
  • Simulations show that, given that most of the news consumed by the public comes from reliable sources, small increases in acceptance of reliable information (e.g., 1%) improve the global information score more than bringing acceptance of misinformation to 0%. This outcome is robust for a wide range of parameters and is also observed if acceptance of misinformation decreases trust in reliable information or increases the supply of misinformation (within plausible limits).
  • Our results suggest that more efforts should be devoted to improving acceptance of reliable information, relative to fighting misinformation.
  • More elaborate simulations will allow for finer-grained comparisons of interventions targeting misinformation vs. interventions targeting reliable information, by considering their broad impact on the informational environment.

Implications

In psychological experiments, participants are approximately as likely to accept a piece of fake news as they are to reject a piece of true news (Altay et al., 2021a; Pennycook et al., 2020; Pennycook & Rand, 2021), suggesting that the acceptance of fake news and the rejection of true news are issues of similar amplitude. Such results, combined with the apparent harmfulness of some fake news, have led to a focus on fighting misinformation. However, studies concur that the base rate of online misinformation consumption in the United States and Europe is very low (~5%) (see Table 1). Most of the large-scale studies measuring the prevalence of online misinformation define misinformation at the source level: news shared by sources known to regularly share fake, deceptive, low-quality, or hyperpartisan news is considered to be online misinformation (see the ‘definition’ column in Table 1). In the United States, misinformation has been calculated to represent between 0.7% and 6% of people’s online news media diet (Altay et al., n.d.; Grinberg et al., 2019; Guess et al., 2018; Guess, Lerner, et al., 2020; Osmundsen et al., 2021), and 0.15% of their overall media diet (Allen et al., 2020). In France, misinformation has been calculated to represent between 4 and 5% of people’s online news diet (Altay et al., n.d.) and 0.16% of their total connected time (Cordonier & Brest, 2021). Misinformation has been calculated to represent approximately 1% of people’s online news diet in Germany (Altay et al., n.d.; Boberg et al., 2020), and 0.1% in the UK (Altay et al., n.d.). In Europe, during the 2019 EU Parliamentary election, less than 4% of the news content shared on Twitter came from unreliable sources (Marchal et al., 2019). Overall, these estimates suggest that online misinformation consumption is low in the global north, but this may not be the case in the global south (Narayanan et al., 2019). It is also worth noting that these estimates are limited to news sources, and do not include individuals’ own posts, group chats, memes, etc.


To illustrate our argument, we developed a model that estimates the efficacy of interventions aimed at increasing the acceptance of reliable news or decreasing the acceptance of misinformation. Our model shows that under a wide range of realistic parameters, given the rarity of misinformation, the effect of fighting misinformation is bound to be minuscule, compared to the effect of fighting for a greater acceptance of reliable information (for a similar approach see Appendix G of Guess, Lerner, et al., 2020). This doesn’t mean that we should dismantle efforts to fight misinformation, since the current equilibrium, with its low prevalence of misinformation, is the outcome of these efforts. Instead, we argue that, at the margin, more efforts should be dedicated to increasing trust in reliable sources of information rather than in fighting misinformation. Moreover, it is also crucial to check that interventions aimed at increasing skepticism towards misinformation do not also increase skepticism towards reliable news (Clayton et al., 2020). Note that our model does not compare the effect of existing interventions, but the effect that hypothetical interventions would have if they improved either rejection of misinformation or acceptance of reliable information.

Improving trust in sound sources, engagement with reliable information, or acceptance of high-quality news is a daunting task. Yet, some preliminary results suggest that this is possible. First, several studies have shown that transparency boxes providing some information about the journalists who covered a news story and explaining why and how the story was covered enhance the perceived credibility of the journalist, the story, and the news organization (Chen et al., 2019; Curry & Stroud, 2017; Johnson & St. John III, 2021; Masullo et al., 2021). Second, credibility labels informing users about the reliability of news sources have been shown to increase the news diet quality of the 10% of people with the poorest news diet (Aslett et al., n.d.), but overall, such labels have produced inconsistent, and often null, results (Kim et al., 2019; Kim & Dennis, 2019). Third, in one experiment, fact-checks combined with opinion pieces defending journalism increased trust in the media and people’s intention to consume news in the future (Pingree et al., 2018). Fourth, in another experiment, fact-checking tips about how to verify information online increased people’s acceptance of scientific information from reliable news sources they were not familiar with (Panizza et al., 2021). Finally, a digital literacy intervention increased people’s acceptance of news from high-prominence mainstream sources but reduced acceptance of news from low-prominence mainstream sources (Guess, Lerner, et al., 2020).

More broadly, interventions fostering critical thinking, inducing mistrust in misinformation, and reducing the sharing of misinformation (Cook et al., 2017; Epstein et al., 2021; Roozenbeek & van der Linden, 2019; Tully et al., 2020), could be adapted to foster trust in reliable sources and promote the sharing of reliable content.

We developed a simple model with two main parameters: the share of misinformation (the rest being reliable information) in the environment and the tendency of individuals to accept each type of information when they encounter it. Reliable information refers to news shared by sources that, most of the time, report news accurately, while misinformation refers to news shared by sources that are known to regularly share fake, deceptive, low-quality, or hyperpartisan news. With this broad definition, misinformation represents approximately 5% of people’s news diets, with the remaining 95% consisting of information from reliable sources (Allen et al., 2020; Cordonier & Brest, 2021; Grinberg et al., 2019; Guess et al., 2019; Guess, Nyhan, et al., 2020; Marchal et al., 2019). The rate at which people accept reliable information or misinformation when exposed to it is less clear. Here, we take as a starting point experiments in which participants are asked to ascertain the accuracy of true or fake news, suggesting that they accept approximately 60% of true news and 30% of fake news (Altay et al., 2021a; Pennycook et al., 2020; see Appendix A for more information). As shown below, the conclusions we draw from our models are robust to variations in these parameters (e.g., if people accept 90% of misinformation instead of 30%).

The goal of the model is to provide a broad picture of the informational environment, and a rough index of its quality. Although it has some clear limitations (discussed below), it captures the main elements of an informational environment: the prevalence of reliable information vs. misinformation, and people’s propensity to accept each type of information. While more elements could be included, such simple models are crucial to put the effects of any type of intervention in context.

In our model, exposure to news is drawn from a log-normal distribution, with few agents (i.e., the individuals simulated in the model) being exposed to many pieces of news (reliable and unreliable) and the majority being exposed to few pieces of news, mimicking the real-life skewed distribution of news consumption (e.g., Allen et al., 2020; Cordonier & Brest, 2021). Due to the low prevalence of misinformation, we compare extreme interventions that bring the acceptance rate of misinformation to zero (Figure 1, left panel) to a counterfactual situation in which no intervention took place (black dotted line) and to interventions that increase the acceptance rate of reliable information by between one and ten percentage points. We show that an intervention reducing the acceptance rate of misinformation from 30% to zero increases the overall information score as much as an intervention increasing acceptance of reliable information by one percentage point (i.e., from 60% to 61%).


The right panel of Figure 1 plots how much more efficient an intervention on reliable information is at improving the global information score, compared to an intervention reducing acceptance of misinformation to zero. The only situations in which the intervention on misinformation has an advantage are those in which the proportion of misinformation is (unrealistically) high and the improvement in the acceptance rate of reliable information is very low (i.e., at the bottom right corner of the plot). Overall, minute increases in acceptance of reliable information have a stronger effect than completely wiping out acceptance of misinformation. A one percentage point increase in reliable information acceptance has more effect than wiping out all misinformation for all realistic baselines of misinformation prevalence (i.e., 1 to 5%).

In these simulations, the baseline acceptance of misinformation was set to 30%. This percentage, however, was obtained in experiments using fake news specifically, and not items from the broader category of misinformation (including biased, misleading, deceptive, or hyperpartisan news). As a result, the acceptance of items from this broader category might be significantly higher than 30%. We conducted simulations in which the baseline acceptance rate of misinformation was raised to a (very unrealistic) 90%. Even with such a high baseline acceptance of misinformation, given the disproportionate frequency of reliable information with respect to misinformation, an intervention that brings the acceptance rate of misinformation to 0% would only be as effective as increasing belief in reliable information by 4% (for a prevalence of misinformation of 5%).

This basic model was extended in two ways. First, despite its low prevalence, online misinformation can have deleterious effects on society by eroding trust in reliable media (Tandoc et al., 2021; Van Duyn & Collier, 2019; although see Ognyanova et al., 2020). In the first extension of our model, we tested whether misinformation could have a deleterious effect on the information score by decreasing trust in reliable information. In this model, when agents accept misinformation, they then reject more reliable information, and when agents accept reliable information, they then reject more misinformation. In such scenarios, losses in the global information score are mostly caused by decreased acceptance of reliable information, not by increased acceptance of misinformation. Manipulating the relevant parameters shows that even when the deleterious effect of misinformation on reliable information acceptance is two orders of magnitude stronger than the effect of reliable information on rejection of misinformation, agents remain more likely to accept reliable information than misinformation (Figure 2, bottom left). Even in this situation, modest interventions that improve acceptance of reliable information (by 1%) are more effective than bringing acceptance of misinformation to zero (Figure 2, bottom right).


In the model so far, the relative proportion of misinformation and reliable information has been used as a fixed parameter. However, the acceptance of misinformation, or, respectively, reliable information, might increase its prevalence through, for example, algorithmic recommendations or social media sharing. In a second extension, accepting misinformation increases the prevalence of misinformation, and accepting reliable information increases the prevalence of reliable information. Similar to the results of the previous extension, given that reliable information is initially much more common, we find that misinformation becomes prevalent with respect to reliable information only when the effect of accepting misinformation on misinformation prevalence is two orders of magnitude larger than the effect of accepting reliable information on reliable information prevalence, which is highly unrealistic. This shows that a sharp increase in the prevalence of misinformation—which would invalidate our main results—requires unrealistic conditions. Moreover, the basic simulation shows that modest increases in the prevalence of misinformation do not challenge our main conclusion: even with a 10% prevalence of misinformation, improving the acceptance of reliable information by three percentage points is more effective than bringing acceptance of misinformation to zero.

The models described in this article deal with the prevalence and acceptance of misinformation and reliable information, not their potential real-life effects, which are difficult to estimate (although the importance of access to reliable information for sound political decision-making is well-established, see Gelman & King, 1993; Snyder & Strömberg, 2010). Our model doesn’t integrate the possibility that some pieces of misinformation could be extraordinarily damaging, such that even a tiny share of the population accepting misinformation could be hugely problematic. We do note, however, that since the prevalence of misinformation is very low, the negative effects of each individual piece of misinformation would have to be much greater than the positive effects of each individual piece of reliable information to compensate for their rarity. This appears unlikely for at least two reasons. First, every piece of misinformation could be countered by a piece of reliable information, making the benefits of accepting that piece of reliable information equal in absolute size to the costs of accepting the piece of misinformation. As a result, high costs of accepting misinformation would have to be mirrored in the model by high benefits of accepting reliable information. Second, some evidence suggests that much misinformation, even misinformation that might appear extremely damaging (such as COVID-19 related misinformation, or political fake news), mostly seem to have minimal effects (Allcott & Gentzkow, 2017; Altay et al., 2021b; Anderson, 2021; Carey et al., n.d.; Guess, Lockett, et al., 2020; Kim & Kim, 2019; Litman et al., 2020; Valensise et al., 2021; Watts & Rothschild, 2017).

Our model is clearly limited and preliminary. However, we hope that it demonstrates the importance of such modeling to get a broader picture of the potential impact of various interventions on the informational environment. Future research should refine the model, particularly in light of new data, but the main conclusion of our model is that interventions increasing the acceptance of reliable information are bound to have a greater effect than interventions on misinformation.

In the model, N agents (N = 1,000 for all results described here) are exposed to pieces of news for T time steps, where each time step represents a possible exposure. Exposure is different for each agent, and it is drawn from a log-normal distribution (rescaled between 0 and 1), meaning that the majority of agents will have a low probability of actually being exposed to a piece of news at each time step, and a few agents will have a high probability, mimicking the real-life skewed distribution of news consumption (e.g., Allen et al., 2020; Cordonier & Brest, 2021).

First, a main parameter of the model (C_m, ‘composition misinformation’) determines the probability that each piece of news will be either misinformation or reliable information. The baseline value of this parameter is 0.05, meaning that 5% of news is misinformation. Second, two other parameters control the probability, for each agent, of accepting reliable information (B_r, ‘believe reliable’) and of accepting misinformation (B_m, ‘believe misinformation’). These two values are extracted, for each agent, from a normal distribution truncated between 0 and 1, with standard deviation equal to 0.1 and with mean equal to the parameter values. The baseline values of these parameters are 0.6 and 0.3 for B_r and B_m respectively, so that agents tend to accept 60% of reliable information and 30% of misinformation. Finally, a global information score is calculated as the total number of pieces of reliable information accepted minus the total number of pieces of misinformation accepted, normalized by the overall amount of news (and then multiplied by 100 to make it more legible). A global information score of -100 would mean that all misinformation is accepted and no reliable information, and a global information score equal to 100 would mean that all reliable information is accepted and no misinformation.
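A minimal re-implementation sketch of this basic model is given below. It is not the authors' code (their R scripts are available at the OSF link given at the end of this section), and details such as how the log-normal exposure distribution is rescaled, how the truncated normal is sampled, and how the score is normalized are assumptions made for illustration.

```r
# Minimal sketch of the basic model; parameter names follow the text.
set.seed(1)

global_info_score <- function(N = 1000, T_steps = 1000,
                              C_m = 0.05,   # share of news that is misinformation
                              B_r = 0.60,   # mean acceptance of reliable news
                              B_m = 0.30) { # mean acceptance of misinformation
  # Per-agent exposure probability: log-normal, rescaled to lie between 0 and 1
  expo <- rlnorm(N, meanlog = 0, sdlog = 1)
  expo <- expo / max(expo)

  # Per-agent acceptance rates: normal(mean, sd = 0.1), clipped to [0, 1]
  # (a simple stand-in for the truncated normal described in the text)
  b_r <- pmin(pmax(rnorm(N, B_r, 0.1), 0), 1)
  b_m <- pmin(pmax(rnorm(N, B_m, 0.1), 0), 1)

  # News actually encountered by each agent over T_steps possible exposures
  n_news <- rbinom(N, T_steps, expo)
  n_mis  <- rbinom(N, n_news, C_m)   # of which misinformation...
  n_rel  <- n_news - n_mis           # ...the rest being reliable information

  accepted_rel <- rbinom(N, n_rel, b_r)
  accepted_mis <- rbinom(N, n_mis, b_m)

  # Global information score, ranging from -100 to 100
  100 * (sum(accepted_rel) - sum(accepted_mis)) / sum(n_news)
}

# Baseline and the two kinds of intervention discussed in the text
mean(replicate(10, global_info_score()))            # baseline
mean(replicate(10, global_info_score(B_m = 0)))     # acceptance of misinformation wiped out
mean(replicate(10, global_info_score(B_r = 0.61)))  # +1 point acceptance of reliable news
```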


In the main set of simulations, we first compare (see results in Figure 1, left panel) the global information score of our baseline situation (C_m = 0.05; B_m = 0.3; B_r = 0.6) with a drastic intervention that completely wipes out acceptance of misinformation (B_m = 0), and with small improvements in reliable information acceptance (B_r = 0.61, B_r = 0.62, B_r = 0.63, etc., up to B_r = 0.7). We then explore the same results for a larger set of parameters, including changing C_m from 0.01 to 0.1 in steps of 0.01, i.e., assuming that the proportion of misinformation can vary from 1 to 10% of total information. The results in Figure 1, right panel, show the difference between the global information score obtained with the parameters indicated in the plot (improvements in reliable information acceptance and composition of news) and the information score obtained with the drastic intervention on misinformation for the same composition of news. All results are based on 10 repetitions of simulations for each parameter combination, for T = 1,000. The two extensions of the model are described in Appendix B and Appendix C. All the code to run the simulations is written in R, and it is available at https://osf.io/sxbm4/ .
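Under the same assumptions, the parameter sweep could be organised as follows, reusing the global_info_score() helper from the sketch above; the resulting matrix of differences corresponds to the comparison plotted in the right panel of Figure 1.

```r
# Grid of parameter values mirroring the main simulations
cms <- seq(0.01, 0.10, by = 0.01)   # misinformation share: 1% to 10% of news
brs <- seq(0.61, 0.70, by = 0.01)   # improved acceptance of reliable news

score <- function(C_m, B_r = 0.60, B_m = 0.30)
  mean(replicate(10, global_info_score(C_m = C_m, B_r = B_r, B_m = B_m)))

# Drastic intervention: acceptance of misinformation brought to zero
wipe_scores <- sapply(cms, function(cm) score(C_m = cm, B_m = 0))

# Small improvements in acceptance of reliable information
reliable_scores <- sapply(brs, function(br)
  sapply(cms, function(cm) score(C_m = cm, B_r = br)))

# Rows index C_m, columns index B_r; positive values mean the reliable-news
# intervention yields a higher score than wiping out acceptance of misinformation
advantage <- sweep(reliable_scores, 1, wipe_scores)
round(advantage, 2)
```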

Cite this Essay

Acerbi, A., Altay, S., & Mercier, H. (2022). Research note: Fighting misinformation or fighting for information?. Harvard Kennedy School (HKS) Misinformation Review . https://doi.org/10.37016/mr-2020-87


Funding

This research was supported by the Agence nationale de la recherche, grants ANR-17-EURE-0017 FrontCog, ANR-10-IDEX-0001-02 PSL, and ANR-21-CE28-0016-01 to HM.

Competing Interests

The authors declare no conflicts of interest.

Ethics

No participants were recruited.

Copyright

This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided that the original author and source are properly credited.

Data Availability

All materials needed to replicate this study are available via the Harvard Dataverse: https://doi.org/10.7910/DVN/XGHDTJ

Alberto Acerbi and Sacha Altay contributed equally.

ScienceDaily

Students often do not question online information

Study examines students' ability to critically assess information from the internet and from social media.

The Internet and social media are among the most frequently used sources of information today. Students, too, often prefer online information rather than traditional teaching materials provided by universities. According to a study conducted by Johannes Gutenberg University Mainz (JGU) and Goethe University Frankfurt, students struggle to critically assess information from the Internet and are often influenced by unreliable sources. In this study, students from various disciplines such as medicine and economics took part in an online test, the Critical Online Reasoning Assessment (CORA). "Unfortunately, it is becoming evident that a large proportion of students are tempted to use irrelevant and unreliable information from the Internet when solving the CORA tasks," reported Professor Olga Zlatkin-Troitschanskaia from JGU. The study was carried out as part of the Rhine-Main Universities (RMU) alliance.

Critical evaluation of online information and online sources is particularly important today

Learning using the Internet offers many opportunities, but it also entails risks. It has become evident that not only "fake news" but also "fake science" with scientifically incorrect information is being spread on the Internet. This problem becomes particularly apparent in the context of controversially discussed social issues such as the current corona crisis, but it actually goes much deeper. "Having a critical attitude alone is not enough. Instead, Internet users need skills that enable them to distinguish reliable from incorrect and manipulative information. It is therefore particularly important for students to question and critically examine online information so they can build their own knowledge and expertise on reliable information," stated Zlatkin-Troitschanskaia.

To investigate how students deal with online information, Professor Olga Zlatkin-Troitschanskaia and her team have developed a new test based on the Civic Online Reasoning (COR) assessment developed by Stanford University. During the assessment, the test takers are presented with short tasks. They are asked to freely browse the Internet, focusing on relevant and reliable information that will help them to solve the tasks within the relatively short time frame of ten minutes, and to justify their solutions using arguments from the online information they used.

CORA testing requires complex and extensive analysis

The analysis of the results is based on the participants' responses to the tasks. In addition, their web search activity while solving the tasks is recorded to examine their strengths and weaknesses in dealing with online information in more detail. "We can see which websites the students accessed during their research and which information they used. Analyzing the entire process requires complex analyses and is very time-consuming," said Zlatkin-Troitschanskaia. The assessments have so far been carried out in two German federal states. To date, 160 students from different disciplines have been assessed; the majority of the participants studied medicine or economics and were in their first or second semester.

Critical online reasoning skills should be specifically promoted in higher education

The results are striking: almost all test participants had difficulties solving the tasks. On a scale of 0 to 2 points per task, the students scored only 0.75 points on average, with the results ranging from 0.50 to 1.38 points. "The majority of the students did not use any scientific sources at all," said Zlatkin-Troitschanskaia, pointing out that no domain-specific knowledge was required to solve the CORA tasks. "We are always testing new groups of students, and the assessment has also been continued as a longitudinal study. Since we first started conducting these assessments two years ago, the results are always similar: the students tend to achieve low scores." However, students in higher semesters perform slightly better than students in their first year of study. Critical online reasoning skills could therefore be promoted during the course of studies. In the United States, a significant increase in these kinds of skills was observed only a few weeks after implementing newly developed training approaches.

The study shows that most students do not succeed in correctly evaluating online sources in the given time and in using relevant information from reliable sources on the Internet to solve the tasks. "As we know from other studies, students are certainly able to adequately judge the reliability of well-known media portals and Internet sources. We could build on this fact and foster the skills required to critically evaluate new sources and online information and to use the Internet in a reflected manner to generate warranted knowledge," concluded Professor Olga Zlatkin-Troitschanskaia.

In research on this topic, skills related to critically dealing with online information and digital sources are regarded as an essential prerequisite for learning in the 21st century. However, there are still very few training approaches and assessments available for students to foster these skills, especially online. "The RMU study is still in the early stages of development. We have only just developed the first test of this kind in Germany," Zlatkin-Troitschanskaia pointed out. "We are currently in the process of developing teaching/learning materials and training courses and of testing their effectiveness. The analysis of the process data will be particularly useful when it comes to offering students targeted support in the future."


Story Source:

Materials provided by Johannes Gutenberg Universitaet Mainz.


The Internet as an Unreliable Source of Information - Essay Example

  • Subject: Information Technology
  • Type: Essay
  • Level: Undergraduate
  • Pages: 2 (500 words)
  • Downloads: 3
  • Author: schoencoleman

Extract of sample "The Internet as an Unreliable Source of Information"

Technology can help us find information faster than ever before. Because of its ease of access, the Internet is the main source of information for most people. Modern technology has progressed to the point where news and information are almost instantaneous. Most people no longer bother going to the library to read a book, or even buying a newspaper, because the same information is available online and can be accessed faster. Although the Internet contains a great deal of useful information, content can be manipulated and altered to fit someone's point of view, lies can be posted to discredit others or inflate someone's credibility, and consumers can get drawn into buying things they don't fully understand.

We cannot believe everything on the Internet because some information can be changed to suit someone's biased opinion. For example, Wikipedia.com is an unreliable source of information because anyone can go onto the website and change the information at any time. If such a website is treated as a reliable source of information, then we risk being led astray. It is difficult to believe anything that comes from that website, even if the information seems like common sense.

One group that may take advantage of this situation is hackers. These people hack websites for several reasons, and altering the information on a website is only one of them; other motives include money, revenge, or personal attacks. This is a major danger to the credibility of websites as sources of reliable information. Another reason the Internet cannot be trusted is that there are many lies online; it is hard to imagine how many are posted every day.

This does not just affect a website's credibility; it also gives the website a bad reputation, effectively marking it as untrustworthy. Why do people post lies on the Internet? That is a good question that cannot be fully answered. Some examples are people lying on dating websites in order to find a better partner, and businesses lying to grab people's attention. On dating websites, people do not post unflattering pictures of themselves because they want their profiles to look perfect.

When someone reads what a person wrote about him or herself on such a profile, the reader may think: "That is the type of person I have been looking for for a long time." What makes me laugh is that people lie about their height, income, and body shape. A study by OkCupid shows that, on average, people are two inches shorter in real life than they claim on their profiles, and the same study showed that people are twenty percent poorer than their dating profiles suggest.

There are many websites that want to grab people's attention, and they do so by lying. Once people are hooked on the product or service being offered, there is sometimes no way to back out of the deal. For example, hosting services on the Internet can be deceiving: when people search for hosting on search engines, they are usually looking for a price cheaper than what is currently being offered to them.



  • Brandeis Library
  • Research Guides

Evaluating Online Information

Quick guide.

  • Scholarly Works
  • Websites and Social Media

Related Guides

  • Evaluating Primary Sources
  • Evaluating Journals: Impact & More
  • Citing Sources

Citing Online Sources

Always document the online sources you use with a URL, the organization responsible for the site, the date you viewed it, and other identifying information. For more help with citing online sources and information, see our Citation Guide.

  • Citing Sources Guide

Introduction

The quality of information found online is extremely variable. Anyone can post data and information on the Internet and not all online sources are equally reliable, valuable, or accurate. It is important to carefully evaluate information found online before relying on it for your own research.

(Video by The Open University Library )

How To Use This Guide

Looking for general strategies for evaluating information? See below for an introduction to evaluating information effectively, and for some questions to keep in mind in any situation.

Looking for information on evaluating specific kinds of resources ? The tabs in the sidebar menu link to pages with some red flags to look out for and questions to consider for different types of resources. Try reading through before starting your research, so you know what to look out for when you start. You can also refer to this guide when you've got a source you aren't so sure about - we'll walk through it together.

If this guide doesn't have quite what you need, just ask a librarian! Come see us at the Research Help Desk in the Goldfarb Library, send us an email, or contact us via chat.

The most important thing is to know what questions to ask when reviewing a source. The more research you do, the more of a habit it becomes! Here's a handy mnemonic for remembering some of the most important questions to ask:

The CRAAP Test

C - Currency - Is the information in this source current, or has it become outdated?

R - Relevance - Is the information relevant to your research question or topic? And is this kind of source appropriate for your uses?

A - Authority - Who's the author and what are their qualifications? How do they know what they're telling you is accurate?

A - Accuracy - You might not know right away if the information is accurate - after all, that's why you're researching it - but there are some flags you can watch out for. Are the claims supported by evidence, and are sources cited? Are there editors or peer reviewers? Do any other sources support or verify the information?

P - Purpose - Why is this information out there? Does it appear impartial or biased? Are the authors or the publisher trying to present facts or to convince you of something?

(The CRAAP test was originally created by the Meriam Library at California State University, Chico.)

For a deeper dive into some good questions to ask - helpful to read through when beginning your research, so these considerations are fresh in your mind - check out the UC Berkeley Library's guide.

  • Last Updated: Aug 31, 2023 11:08 AM
  • URL: https://guides.library.brandeis.edu/evaluatinginfo

Online Information in Nepal: Deceiving and Unreliable Journalism

  • Category: Sociology
  • Topic: Fake News , Journalism

Pages: 3 (1233 words)


  • The Issue of Digital Disinformation in Nepal
  • An Overview of Nepal News Media: Its Challenges and Prospects
  • Summary Thoughts
