How AI Writing Tools Are Helping Students Fake Their Homework

Creativity could be on the way out

  • The increasing use of AI writing tools could help students cheat.
  • Teachers say software that helps generate text can be used to fake homework assignments. 
  • One teacher says content from programs that rewrite or paraphrase content sticks out like a "sore thumb" at the middle school level.

Getting good grades in school may soon be as much about artificial intelligence (AI) as about hard work.

Online software tools that help students write essays using AI have become so effective that some teachers worry the new technology is replacing creativity during homework assignments. Students are increasingly turning to these programs that can write entire paragraphs or essays with just a few prompts, often leaving teachers none the wiser. 

"As far as I can tell, it is currently not that easy to detect AI writing," Vincent Conitzer , a professor of computer science at Carnegie Mellon University , told Lifewire in an email interview. "These systems do not straightforwardly plagiarize existing text that can then be found. I am also not aware of any features of the writing that obviously signal that it came from AI."

Homework Helpers

The use of AI writing tools by students is on the rise, anecdotes suggest. Conitzer said he’s heard one philosophy professor say he would shift away from the use of essays in his classes due to concern over AI-generated reports.

Tools based on Large Language Models (LLMs), such as GPT-3/X, have seen tremendous improvement over the last few years, Robert Weißgraeber, the managing director of AX Semantics, an AI-powered, natural language generation (NLG) software company, said in an email interview. Users enter a short phrase or paragraph, and the tool extends that phrase or section into a lengthy article.
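
To make that workflow concrete, here is a minimal sketch of prompt-extension text generation using the small, openly available GPT-2 model through Hugging Face's transformers library. This is an illustration only; commercial essay tools are built on much larger proprietary models, but the interaction pattern is similar.

```python
# A minimal sketch of prompt-extension text generation. GPT-2 is used here
# only because it is small and openly available; the pattern is the same
# as in commercial tools: a short prompt goes in, a longer continuation
# comes out.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Free speech should sometimes be restricted to keep people safe because"
output = generator(prompt, max_new_tokens=100, do_sample=True, temperature=0.9)

print(output[0]["generated_text"])  # the prompt, extended into a longer passage
```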

Don't expect LLMs to replace real authors anytime soon, though, Weißgraeber said. GPT-3/X tools are just "stochastic parrots" that produce perfect-sounding text, "however when looked at in detail, they produce defects called 'hallucinations'—which means they are outputting things that cannot be deduced from the arguments built into the input, data, or the text itself. The perfect syntax and word choices can dazzle the reader, but when looked at closely, they actually produce semantic and pragmatic gibberish."

Catching AI Cheaters

AI-assisted writing programs are now so effective that it's hard to catch cheaters, experts say. Other than making students write in a supervised setting, perhaps the best way for teachers to avoid the use of AI writing is to come up with unusual topics that require common sense to write about, Conitzer said.

"For example, I just had GPT-3 write the beginning of two essays," he added. "The first was about whether free speech should sometimes be restricted to keep people safe, a generic essay topic about which you can find all kinds of writing online, and GPT-3 produced sensible text listing the pros and cons.

"The second was about what a teenager who was accidentally transported to the year 1000 but still has her phone in her pocket should do with her phone. GPT-3 recommended using it to call her friends and family and do research about the year 1000."

Erin Beers, a middle school language arts teacher in the Cincinnati area, told Lifewire in an email interview that content from programs that rewrite or paraphrase content sticks out like a "sore thumb" at the middle school level.

"I can usually spot fraudulent activity due to a student's use of complex sentence structure and an abundance of adjectives," Beers said. "Most 7th-grade writers simply don't write at that level."

Beers said she's against students using most AI writing programs, saying, "Anything that attempts to replicate creativity is likely limiting a writer's growth."

Weißgraeber recommends teachers not be fooled by smooth-looking prose that may have been generated by AI. "Look at the argumentation chains," he added. "Are all statements grounded in correlating facts and data that are also listed?"

Despairing teachers should take note, however: there's at least one upside to students using AI tools, Conitzer contends.

"In principle, students could learn quite a few things from AI writing," he said. "It often produces clear and well-structured prose that could serve as a good example, though the style is usually generic. Students could also learn more about AI from it, including how it sometimes still fails miserably at commonsense reasoning and how it reflects the human writing it was trained on."

Real or Fake College Essay

Which is tougher, writing a college admissions essay or guessing which college admissions essay prompts are real? Ask Me Another is playing this game because two hosts and three producers are soon going to be out of work and looking for something to do. Maybe... grad school!?

Heard on The Penultimate Puzzles

How to Buy More Time on an Overdue Assignment

Last Updated: March 28, 2024

This article was co-authored by Alexander Ruiz, M.Ed. Alexander Ruiz is an Educational Consultant and the Educational Director of Link Educational Institute, a tutoring business based in Claremont, California, that provides customizable educational plans, subject and test prep tutoring, and college application consulting. With over a decade and a half of experience in the education industry, Alexander coaches students to increase their self-awareness and emotional intelligence while achieving skills and the goal of higher education. He holds a BA in Psychology from Florida International University and an MA in Education from Georgia Southern University. This article has been viewed 265,538 times.

Deadlines sneak up fast. If you’re short on time, you can always request an extension from your professor—your request may be based on real or fictionalized reasons. Alternatively, you could submit a corrupted file (a file your professor can’t open) and make the extension appear like an unintentional, happy accident.

Asking Your Teacher for an Extension

Step 1 Talk to your instructor in person.

  • If you're in college or graduate school, drop by your professor’s office hours.
  • If you're in high school or middle school, ask to speak to your teacher after class or set up a time to meet with them.
  • If you're making up an excuse, your professor might be able to see right through your lie. It might be better to skip the face-to-face meeting and email them instead. [1]

Step 2 Explain the situation.

  • If you are struggling with depression and/or anxiety, don’t just say “I am overwhelmed.” Instead, explain how your mental health is affecting your ability to complete the assignment. “I’ve been struggling with depression since midterms. I’ve learned that when I feel depressed, I have a very hard time focusing on my assignments. It has been very difficult for me to sit down and complete the paper.”
  • “Due to my financial situation, I had to start working this semester. My work schedule and class schedule are very demanding. I am struggling to manage both.”
  • "My parents are both working overtime right now. I have been watching my little siblings for them. I am having a hard time balancing school and my responsibilities at home"
  • "I am training for a big competition. My practices are going way longer than expected and by the time I get home I am too exhausted to do my work." [2] X Research source

Step 3 Ask for an extension.

  • “May I have the weekend to complete the assignment?”
  • “Can I have three days to finish my paper?” [3]

Step 4 Accept the instructor's response.

  • If they say “yes,” thank them profusely and work hard to meet your new deadline.
  • If they say “no,” thank them for their time and start working on the assignment as soon as you can.
  • If your teacher says “yes” but attaches a grade penalty, accept the grade penalty, thank them for the extension, and work diligently to meet your new deadline. [4]

Finding an Excuse

Step 1 Blame technology.

  • If you have to print out your paper, experiencing “printer problems” may grant you a few extra hours to work on the assignment.
  • If you typically store all of your work on a USB drive, tell your teacher the thumb drive was stolen or misplaced. They may give you a few days to search for the missing drive. [5]

Step 2 Explain that you're overcommitted.

  • “I am taking the MCAT next month and have been studying for the test non-stop. As a result, the assignment for your class fell off my radar. May I have a few days to complete it?”
  • "I am taking the SAT on Saturday and I really need to study for my Latin subject test. Can I have a few more days on my project?"
  • “I have three papers due at the same time. I am struggling to devote attention to each assignment. May I please have an extension so I can produce a paper I am proud of?” [6]

Step 3 Fake an emergency.

  • Be prepared for your professor to ask for proof or to look into your situation. [7]

Turning In a Corrupted File

Step 1 Create a new Word document.

  • Professors and teachers are aware of this common trick. If you get caught, you may get a zero on the assignment and/or be sent to the school's administrators. Before you consider this method, explore all of your other options and check your school's policies on the matter.

Step 2 Insert filler text.

  • You can copy and paste text from the internet, your rough draft, or even use an old paper.

Step 3 Save and name the document.

  • Name the document as your professor requested.
  • Save the file to your desktop.
  • Click Save.

Step 4 Corrupt the file with a free online service (Mac and Windows).

  • Navigate to Corrupt-A-File.net.
  • Scroll down to “Select the file to corrupt” and select one of the following options: “From Your Device”, “From Dropbox”, or “From Google Drive”. If you saved the document on your desktop, click “From Your Device”.
  • Locate the file and click Select or Open.
  • Click Corrupt File. Once corrupted, you will receive the following message: “Your file was dutifully corrupted”.
  • Click on the download button (black, downward pointing arrow).
  • Rename the document (if desired), change the location (if desired), and click Save .

Step 5 Corrupt your file manually (Windows only).

  • Right-click on the document’s icon, hover over “Open with”, and select “Notepad”. A Notepad file will open. In addition to the filler text, you will see the document’s code (a jumble of letters, numbers, punctuation marks, etc.).
  • Delete a portion of the code. Do not delete it all!
  • Press Ctrl + S and click Save. [8]

Step 6 Attempt to open the document.

  • Mac users will see a “Convert File” dialog box.
  • Windows users will see the message “The document name or path is not valid”. [9]

Step 7 Submit the corrupted file online and start working on your real assignment.

  • If your professor or teacher discovers you intentionally corrupted the file, you may get in serious trouble. Ask for an extension or simply submit what you have completed before you try this method. If you are doing online school, be sure to send an email explaining why the assignment wasn't turned in on time; you can even make up the reason, like saying you were stressed and had so much work that you forgot about it.

Community Q&A

Community Answer

  • Your professor has the right to say “no” when you ask for an extension. Be prepared for this response.

Tips from our Readers

  • Although students see lying as the best possible way to get an extension, it’s really not! Only lie as a last resort or when the teacher absolutely won’t offer an extension.
  • Try not to lie to your professors if you can help it. You may be kicked out of school for violations of the academic honesty policy or have other consequences.

  • Do not submit several corrupted files to the same professor. They will catch on.

References

  1. http://www.complex.com/pop-culture/2013/09/how-to-get-an-extenstion-on-a-paper/ask-in-person
  2. http://www.complex.com/pop-culture/2013/09/how-to-get-an-extenstion-on-a-paper/plan-ahead
  3. http://www.complex.com/pop-culture/2013/09/how-to-get-an-extenstion-on-a-paper/dont-ask-for-a-long-extension
  4. http://www.ivoryresearch.com/how-to-get-an-assignment-essay-coursework-extension/
  5. https://www.youtube.com/watch?v=EgC-_9ZE5WA

About This Article

Alexander Ruiz, M.Ed.

If your assignment is overdue, you may be able to buy more time by asking for an extension. Talk to your teacher as soon as you can and go after class or during break when they’ll have time to listen to you. Explain specifically why you’ve fallen behind and ask if it’s possible to get an extension. For example, if you’ve been struggling with depression, you’ve had to work a job to help support your family, or you’ve had technical problems, your teacher might offer you some extra time to finish your assignment. Try not to take it personally if they say no, since the decision might be out of your teacher’s hands, and it might be unfair to other students. For more tips, including how to make a corrupted file to buy you time on your assignment, read on.

My fake homework

Quiz by Kathy Stanley

  • Q1 (30 s): What is the name of Natalie's dog? Options: Burrito, Dude, Rufus, Hannah
  • Q2 (30 s): Solve the following equation: 2x + 3 = 11. Options: x = 4, x = 2, x = 5, x = 3
  • Q3 (30 s): Which property is this: x + (y + z) = (x + y) + z? Options: Identity property of addition, Commutative property of addition, Associative property of addition, Distributive property

Arizona alleged ‘fake electors’ who backed Trump in 2020 indicted by grand jury

While former President Trump wasn't named, he was described in the indictment as an unindicted co-conspirator.

Brie Stimson

Eleven Republicans have been indicted by a grand jury in Arizona and charged with conspiracy, fraud and forgery for falsely claiming that former President Trump had won the state in 2020 over then-Democratic nominee Joe Biden. 

"I will not allow American democracy to be undermined," Arizona Attorney General Kris Mayes said in a Wednesday video announcing the indictments over the "fake elector scheme." 

She added, "The investigators and attorneys assigned to this case took the time to thoroughly piece together the details of the events that began nearly four years ago. They followed the facts where they led, and I’m very proud of the work they’ve done today." 

She added that the co-conspirators were "unwilling to accept" that Arizonans voted for President Biden in an election that was "free and fair" and "schemed to prevent the lawful transfer of the presidency."

Former Arizona Chairwoman Kelli Ward was among those charged Wednesday as a "fake elector" for Trump.  (Brandon Bell/Getty Images)

The defendants include former chair of the Arizona Republican Party Kelli Ward, sitting state Sens. Jake Hoffman and Anthony Kern and an unindicted co-conspirator described as "a former president of the United States who spread false claims of election fraud following the 2020 election," a clear reference to Trump. 

Former President Donald Trump was listed, without being named, as an unindicted co-conspirator. (Probe-Media for Fox News Digital)

In December 2020, the defendants wrote on a certificate sent to Congress that they were "duly elected and qualified" electors for Trump, claiming he had won the state.  

Seven others were indicted but had their names redacted, pending charges being served. 

Arizona State Sen. Anthony Kern was among the 11 alleged "fake electors" charged.  (Rebecca Noble/Getty Images)

Some outlets reported that former White House chief of staff Mark Meadows and Rudy Giuliani were also unindicted co-conspirators along with Trump.

George Terwilliger, a lawyer representing Meadows, told Fox News he had not yet seen the indictment.

"If Mr. Meadows is named in this indictment; it is a blatantly political and politicized accusation and will be contested and defeated," he said.

Alleged "fake electors" have also been charged in Georgia, Michigan and Nevada . 

Determinants of individuals’ belief in fake news: A scoping review

Kirill Bryanov

Laboratory for Social and Cognitive Informatics, National Research University Higher School of Economics, St. Petersburg, Russia

Victoria Vziatysheva

Associated Data

All relevant data are available within the paper. Search protocol is described in the text, and Table 3 contains information about all studies included in the review.

Proliferation of misinformation in digital news environments can harm society in a number of ways, but its dangers are most acute when citizens believe that false news is factually accurate. A recent wave of empirical research focuses on factors that explain why people fall for the so-called fake news. In this scoping review, we summarize the results of experimental studies that test different predictors of individuals’ belief in misinformation.

The review is based on a synthetic analysis of 26 scholarly articles. The authors developed and applied a search protocol to two academic databases, Scopus and Web of Science. The sample included experimental studies that test factors influencing users’ ability to recognize fake news, their likelihood to trust it or intention to engage with such content. Relying on scoping review methodology, the authors then collated and summarized the available evidence.

The study identifies three broad groups of factors contributing to individuals’ belief in fake news. Firstly, message characteristics—such as belief consistency and presentation cues—can drive people’s belief in misinformation. Secondly, susceptibility to fake news can be determined by individual factors including people’s cognitive styles, predispositions, and differences in news and information literacy. Finally, accuracy-promoting interventions such as warnings or nudges priming individuals to think about information veracity can impact judgements about fake news credibility. Evidence suggests that inoculation-type interventions can be both scalable and effective. We note that study results could be partly driven by design choices such as selection of stimuli and outcome measurement.

Conclusions

We call for expanding the scope and diversifying designs of empirical investigations of people’s susceptibility to false information online. We recommend examining digital platforms beyond Facebook, using more diverse formats of stimulus material and adding a comparative angle to fake news research.

Introduction

Deception is not a new phenomenon in mass communication: people had been exposed to political propaganda, strategic misinformation, and rumors long before much of public communication migrated to digital spaces [ 1 ]. In the information ecosystem centered around social media, however, digital deception took on renewed urgency, with the 2016 U.S. presidential election marking the tipping point where the gravity of the issue became a widespread concern [ 2 , 3 ]. A growing body of work documents the detrimental effects of online misinformation on political discourse and people’s societally significant attitudes and beliefs. Exposure to false information has been linked to outcomes such as diminished trust in mainstream media [ 4 ], fostering the feelings of inefficacy, alienation, and cynicism toward political candidates [ 5 ], as well as creating false memories of fabricated policy-relevant events [ 6 ] and anchoring individuals’ perceptions of unfamiliar topics [ 7 ].

According to some estimates, the spread of politically charged digital deception in the buildup to and following the 2016 election became a mass phenomenon: for example, Allcott and Gentzkow [ 1 ] estimated that the average US adult could have read and remembered at least one fake news article in the months around the election (but see Allen et al. [ 8 ] for an opposing claim regarding the scale of the fake news issue). Scholarly reflections upon this new reality sparked a wave of research concerned with a specific brand of false information, labelled fake news and most commonly conceptualized as non-factual messages resembling legitimate news content and created with an intention to deceive [ 3 , 9 ]. One research avenue that has seen a major uptick in the volume of published work is concerned with uncovering the factors driving people’s ability to discern fake from legitimate news. Indeed, in order for deceitful messages to exert the hypothesized societal effects—such as catalyzing political polarization [ 10 ], distorting public opinion [ 11 ], and promoting inaccurate beliefs [ 12 ]—the recipients have to believe that the claims these messages present are true [ 13 ]. Furthermore, research shows that the more people find false information encountered on social media credible, the more likely they are to amplify it by sharing [ 14 ]. The factors and mechanisms underlying individuals’ judgements of fake news’ accuracy and credibility thus become a central concern for both theory and practice.

While message credibility has been a longstanding matter of interest for scholars of communication [ 15 ], the post-2016 wave of scholarship can be viewed as distinct on account of its focus on particular news formats, contents, and mechanisms of spread that have been prevalent amid the recent fake news crisis [ 16 ]. Furthermore, unlike previous studies of message credibility, the recent work is increasingly taking a turn towards developing and testing potential solutions to the problem of digital misinformation, particularly in the form of interventions aimed at improving people’s accuracy judgements.

Some scholars argue that the recent rise of fake news is a manifestation of a broader ongoing epistemological shift, where significant numbers of online information consumers move away from the standards of evidence-based reasoning and pursuit of objective truth toward “alternative facts” and partisan simplism—a malaise often labelled as the state of “post-truth” [ 17 , 18 ]. Lewandowsky and colleagues identify large-scale trends such as declining social capital, rising economic inequality and political polarization, diminishing trust in science, and an increasingly fragmented media landscape as the processes underlying the shift toward the “post-truth.” In order to narrow the scope of this report, we specifically focus on the news media component of the larger “post-truth” puzzle. This leads us to consider only the studies that explore the effects of misinformation packaged in news-like formats, perforce leaving out investigations dealing with other forms of online deception–for example, messages coming from political figures and parties [ 19 ] or rumors [ 20 ].

The apparently vast amount and heterogeneity of recent empirical research output addressing the antecedents to people’s belief in fake news calls for integrative work summarizing and mapping the newly generated findings. We are aware of a single review article published to date synthesizing empirical findings on the factors of individuals’ susceptibility to believing fake news in political contexts, a narrative summary of a subset of relevant evidence [ 21 ]. In order to systematically survey the available literature in a way that permits both transparency and sufficient conceptual breadth, we employ a scoping review methodology, most commonly used in medical and public health research. This method prescribes specifying a research question, search strategy, and criteria for inclusion and exclusion, along with the general logic of charting and arranging the data, thus allowing for a transparent, replicable synthesis [ 22 ]. Because it is well-suited for identifying diverse subsets of evidence pertaining to a broad research question [ 23 ], scoping review methodology is particularly relevant to our study’s objectives. We begin our investigation by articulating the following research questions:

  • RQ1: What factors have been found to predict individuals’ belief in fake news and their capacity to discern between false and real news?
  • RQ2: What interventions have been found to reduce individuals’ belief in fake news and boost their capacity to discern between false and real news?

In the following sections, we specify our methodology and describe the findings using an inductively developed framework organized around groups of factors and dependent variables extracted from the data. Specifically, we approached the analysis without a preconceived categorization of the factors in mind. Following our assessment of the studies included in the sample, we divided them into three groups based on whether the antecedents of belief in fake news that they focus on 1) reside within the individual or 2) are related to the features of the message, source, or information environment or 3) represent interventions specifically designed to tackle the problem of online misinformation. We conclude with a discussion of the state of play in the research area under review, identify strengths and gaps in existing scholarship, and offer potential avenues for further advancing this body of knowledge.

Materials and methods

Our research pipeline has been developed in accordance with PRISMA guidelines for systematic scoping reviews [ 24 ] and contains the following steps: a) development of a review protocol; b) identification of the relevant studies; c) extraction and charting of the data from selected studies, elaboration of the emerging themes; d) collation and summarization of the results; e) assessment of the strengths and limitations of the body of literature, identification of potential paths for addressing the existing gaps and theory advancement.

Search strategy and protocol development

At the outset, we defined the target population of texts as English-language scholarly articles published in peer-reviewed journals between January 1, 2016 and November 1, 2020 and using experimental methodology to investigate the factors underlying individuals’ belief in false news. We selected this time frame with the intention to specifically capture the research output that emerged in response to the “post-truth” turn in the public and scholarly discourse that many observers link to the political events of 2016, most notably Donald Trump’s ascent to U.S. presidency [ 17 ]. Because we were primarily interested in causal evidence for the role of various antecedents to fake news credibility perceptions, we decided to focus on experimental studies. Our definition of experiment has been purposefully lax, since we acknowledged the possibility that not all relevant studies could employ rigorous experimental design with random assignment and a control group. For example, this would likely be the case for studies testing factors that are more easily measured than manipulated, such as individual psychological predispositions, as predictors of fake news susceptibility. We therefore included investigations where researchers varied at least one of the elements of news exposure: Either a hypothesized factor driving belief in fake news (both between or within subjects), or veracity of news used as a stimulus (within-subjects). Consequently, the studies included in our review presented both causal and correlational evidence.

Upon the initial screening of relevant texts already known to the authors or discovered through cross-referencing, it became apparent that proposed remedies and interventions enhancing news accuracy judgements should also be included into the scope of the review. In many cases practical solutions are presented alongside fake news believability factors, while in several instances testing such interventions is the reports’ primary concern. We began with developing the string of search terms informed by the language found in the titles of the already known relevant studies [ 14 , 25 – 27 ], then enhanced it with plausible synonymous terms drawn from the online service Thesaurus.com. As the initial version of this report went into peer review, we received reviewer feedback suggesting that some of the relevant studies, particularly on the topic of inoculation-based interventions, were left out. We modified our search query accordingly, adding three further inoculation-related terms. The ultimate query looked as follows:

(belie* OR discern* OR identif* OR credib* OR evaluat* OR assess* OR rating OR rate OR suspic* OR "thinking" OR accura* OR recogn* OR susceptib* OR malleab* OR trust* OR resist* OR immun* OR innocul*) AND (false* OR fake OR disinform* OR misinform*)

Based on our understanding that the relevant studies should fall within the scope of such disciplines as media and communication studies, political science, psychology, cognitive science, and information sciences, we identified two citation databases, Scopus and Web of Science, as the target corpora of scholarly texts. Web of Science and Scopus are consistently ranked among leading academic databases providing citation indexing [ 28 , 29 ]. Norris and Oppenheim [ 30 ] argue that in terms of record processing quality and depth of coverage these databases provide valid instruments for evaluating scholarly contributions in social sciences. Another possible alternative is Google Scholar, which also provides citation indexing and is often considered the largest academic database [ 31 ]. Yet, according to some appraisals, this database lacks quality control [ 32 ], transparency, and can contribute to parts of relevant evidence being overlooked when used in systematic reviews [ 33 ]. Thus, for the purposes of this paper, we chose WoS and Scopus as sources of data.

Relevance screening and inclusion/exclusion criteria

Using title search, our queries resulted in 1622 and 1074 publications in Scopus and Web of Science, respectively. The study selection process is demonstrated in Fig 1.

[Fig 1. Flow diagram of the study selection process (pone.0253717.g001).]

We began the search with crude title screening performed by the authors (KB and VV) on each database independently. At this stage, we mainly excluded obviously irrelevant articles (e.g. research reports mentioning false-positive biochemical test results) and those whose titles unambiguously indicated that the item was outside of our original scope, such as work in the field of machine learning on automated fake news detection. Both authors’ results were then cross-checked, and disagreements resolved. This stage narrowed our selection down to 109 potentially relevant Scopus articles and 76 WoS articles. Having removed duplicate items present in both databases, we arrived at the list of 117 unique articles retained for abstract review.

At the abstract screening stage, we excluded items that could be identified as utilizing non-experimental research designs. Furthermore, at this stage we determined that all articles that fit our intended scope include at least one of the following outcome variables: 1) perceived credibility, believability, or accuracy of false news messages and 2) a measure of the capacity to discern false from authentic news. Screening potentially eligible abstracts suggested that studies not addressing one of these two outcomes do not answer the research questions at the center of our study. Seventy articles were thus removed, leaving us with 45 articles for full-text review.

The remaining articles were read in full by the authors independently, and disagreements on whether specific items fit the inclusion criteria were resolved, resulting in the final sample of 26 articles (see Table 1 for the full list of included studies). Since our primary focus is on perceptions of false media content and corresponding interventions designed to improve news delivery and consumption practices, we only included the experiments that utilized a news-like format of the stimulus material. As a result, we forwent investigations focusing on online rumors, individual politicians’ social media posts, and other stimuli that were not meant to represent content produced by a news organization. We did not limit the range of platforms where the news articles were presented to participants, since many studies simulated the processes of news selection and consumption in high-choice environments such as social media feeds. We then charted the evidence contained therein according to a categorization based on the outcome and independent variables that the included studies investigate.

* Note: In study design statements, all factors are between-subjects unless stated otherwise.

Outcome variables

Having arranged the available evidence along a number of ad-hoc dimensions, including the primary independent variables/correlates and focal outcome variables, we opted for a presentation strategy that opens with a classification of study dependent variables. Our analysis revealed that the body of scholarly literature under review is characterized by a significant heterogeneity of outcome variables. The concepts central to our synthesis are operationalized and measured in a variety of ways across studies, which presents a major hindrance to comparability of their results. In addition, in the absence of established terminology these variables are often labelled differently even when they represent similar constructs.

In addition to several variations of the dependent variables that we used as one of the inclusion criteria, we discovered a range of additional DVs relevant to the issue of online misinformation that the studies under review explored. The resulting classification is presented in Table 2 below.

Note: A single study could yield several observations if it considered multiple outcome variables.

As visible from Table 2 , the majority of studies in our sample measured the degree to which participants identified news messages or headlines as credible, believable or accurate. This strategy was utilized in experiments that both exposed individuals to made-up messages only, and those where stimulus material included a combination of real and fake items. Studies of the former type examined the effects of message characteristics or presentation cues on perceived credibility of misinformation, while the latter stimulus format also enabled scholars to examine the factors driving the accuracy of people’s identification of news as real or fake. In most instances, these synthetic “media truth discernment” scores were constructed post-hoc by matching participants’ credibility responses to the known “ground truth” of messages that they were asked to assess. These individual discernment scores could then be matched with the respondent’s or message’s features to infer the sources of systematic variation in the aggregate judgement accuracy.
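
To illustrate that scoring logic, here is a minimal sketch; the item names, the 1-4 rating scale, and the difference-of-means formula are assumptions chosen for the example, as individual studies used their own scales and scoring rules.

```python
# Hypothetical example: a participant rates the accuracy of eight headlines
# on a 1-4 scale; the known "ground truth" marks which headlines are real.
ratings = {"h1": 4, "h2": 2, "h3": 3, "h4": 1, "h5": 4, "h6": 2, "h7": 3, "h8": 1}
is_real = {"h1": True, "h2": False, "h3": True, "h4": False,
           "h5": True, "h6": False, "h7": True, "h8": False}

real_ratings = [r for h, r in ratings.items() if is_real[h]]
fake_ratings = [r for h, r in ratings.items() if not is_real[h]]

# One common post-hoc operationalization: mean credibility of real items
# minus mean credibility of fake items. Higher scores mean better discernment;
# a score near zero means real and fake news were rated alike.
discernment = (sum(real_ratings) / len(real_ratings)
               - sum(fake_ratings) / len(fake_ratings))
print(f"discernment score: {discernment:.2f}")  # 2.00 for this toy participant
```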

Looking at credibility perceptions of real and false news separately also enabled scholars to determine whether the effects of factors or interventions were symmetric for both message types. In a media environment where the overwhelming majority of news is real after all [ 27 ], it is essential to ensure both that fake news is dismissed, and high-quality content is trusted.

Another outcome that several studies in our sample investigated is the self-reported likelihood to share the message on social media. Given that social platforms like Facebook are widely believed to be responsible for the rapid spread of deceitful political content in recent years [ 2 ], the determinants of sharing behavior are central to developing effective measures for limiting the reach of fake news. Moreover, in at least one study [ 34 ] researchers explicitly used sharing intent as a proxy for a news accuracy judgement in order to estimate perceived accuracy without priming participants’ thinking about veracity of information. This approach appears promising given that this as well as other studies reported sizable correlations between perceived accuracy and sharing intent [ 35 – 37 ], yet it is obviously limited as a host of considerations beyond credibility can inform the decision to share a news item on social media.
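
A toy sketch of that proxy check, correlating per-item accuracy ratings with sharing intent; all numbers and scales below are invented purely for illustration.

```python
# Invented example: Pearson correlation between perceived accuracy (1-4 scale)
# and self-reported sharing intent (1-5 scale) across a handful of news items.
from scipy.stats import pearsonr

perceived_accuracy = [4, 1, 3, 2, 4, 1, 2, 3]
sharing_intent = [5, 1, 4, 2, 4, 2, 1, 3]

r, p = pearsonr(perceived_accuracy, sharing_intent)
print(f"r = {r:.2f}, p = {p:.3f}")  # a sizable positive correlation on this toy data
```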

Having extracted and classified the dependent variables in the reviewed studies, we proceed to mapping our observations against the factors and correlates that were theorized to exert effects on them (see Table 3 ).

Note: Only outcome variables with more than one observation are included in the table.

A single study could yield several observations if it considered multiple independent and/or outcome variables.

We observed that the experimental studies in our sample measure or manipulate three types of factors hypothesized to influence individuals’ belief in fake news. The first category encompasses variables related to the news message, the way it is presented, or the features of the information environment where exposure to information occurs. In other words, these tests seek to answer the question: What kinds of fake news are people more likely to fall for? The second category takes a different approach and examines respondents’ individual traits predictive of their susceptibility to disinformation. Put simply, these tests address the broad question of who falls for fake news. Finally, the effects of measures specifically designed to combat the spread of fake news constitute a qualitatively distinct group. Granted, this is a necessarily simplified categorization, as factors do not always easily lend themselves to inclusion into one of these baskets. For example, the effect of a pro-attitudinal message can be seen as a combination of both a message-level factor (e.g. conservative-friendly wording of the headline) and an individual-level predisposition (recipient embracing politically conservative views). For presentation purposes, we base our narrative synthesis of the reviewed evidence on the following categorization: 1) Factors residing entirely outside of the individual recipient (message features, presentation cues, information environment); 2) Recipient’s individual features; 3) Interventions. For each category, we discuss theoretical frameworks that the authors employ and specific study designs.

A fundamental question at the core of many investigations that we reviewed is whether people are generally predisposed to believe fake news that they encounter online. Previous research suggests that individuals go about evaluating the veracity of falsehoods similarly to how they process true information [ 38 ]. Generally, most individuals tend to accept information that others communicate to them as accurate, provided that there are no salient markers suggesting otherwise [ 39 ].

Informed by these established notions, some of the authors whose work we reviewed expect to find the effects of “truth bias,” a tendency to accept all incoming claims at face value, including false ones. This, however, does not seem to be the case. No study under review reported the majority of respondents trusting most fake messages or perceiving false and real messages as equally credible. If anything, in some cases a “deception bias” emerges, where individuals’ credibility judgements are biased in the direction of rating both real and false news as fake. For example, Luo et al. [ 40 ] found that across two experiments where stimuli consisted of equal numbers of real and fake headlines participants were more likely to rate all headlines as fake, resulting in just 44.6% and 40% of headlines marked as real across two studies. Yet, it is possible that this effect is a product of the experimental setting where individuals are alerted to the possibility that some of the news is fake and prompted to scrutinize each message more thoroughly than they would while leisurely browsing their newsfeed at home.

The reviewed evidence of individuals’ overall credibility perceptions of fake news as compared to real news, as well as of people’s ability to tell one from another, is somewhat contradictory. Several studies that examined participants’ accuracy in discerning real from fake news report estimates that are either below or indistinguishable from random chance: Moravec et al. [ 41 ] report a mean detection rate of 43.9%, with only 17% of participants performing better than chance; in Luo et al. [ 40 ], detection accuracy is slightly better than chance (53.5%) in study 1 and statistically indistinguishable from chance in study 2 (49.2%). Encouragingly, the majority of other studies where respondents were exposed to both real and fake news items provide evidence suggesting that people’s average capacity to tell one from another is considerably greater than chance. In all studies reported in Pennycook and Rand [ 25 ], average perceived credibility of real headlines is above 2.5 on a four-point scale from 1 to 4, while average credibility of fake headlines is below 1.6. A similar distance—about one point on a four-point scale—marks the difference between real and fake news’ perceived credibility in experiments reported in Bronstein et al. [ 42 ]. In Bago et al. [ 43 ], participants rated less than 40% of fake headlines and more than 60% of real headlines as accurate. In Jones-Jang et al. [ 44 ], respondents correctly identified fake news 6.35 times out of 10.
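
As a hypothetical illustration of how such comparisons against chance can be made, the sketch below runs a two-sided binomial test on an invented detection count; the reviewed studies report their own tests and samples.

```python
# Illustrative check (not from the reviewed studies): is an observed
# detection rate distinguishable from the 50% expected by random guessing?
from scipy.stats import binomtest

n_items = 200    # hypothetical number of real/fake judgements
n_correct = 107  # hypothetical correct identifications (53.5%)

result = binomtest(n_correct, n_items, p=0.5)
print(f"rate = {n_correct / n_items:.1%}, p = {result.pvalue:.3f}")
```

With these toy numbers the rate is not statistically distinguishable from guessing, which underscores how sample size shapes whether a given detection rate "beats chance."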

Following the aggregate-level assessment, we proceed to describing three main groups of factors that researchers identify as sources of variation in perceived credibility of fake news.

Message-level and environmental factors

When apparent signs of authenticity or fakeness of a news item are not immediately available, individuals can rely on certain message characteristics when making a credibility judgement. Two major message-level factors stand out in this cluster of evidence as most frequently tested (see Table 3 ). Firstly, alignment of the message source, topic, or content with the respondent’s prior beliefs and ideological predispositions; secondly, social endorsement cues. Theoretical expectations within this approach are largely shaped by dual-process models of learning and information processing [ 58 , 59 ] borrowed from the field of psychology and adapted for online information environments. These theories emphasize how people’s information processing can occur through either the more conscious, analytic route or the intuitive, heuristic route. The general assumption traceable in nearly every theoretical argument is that consumers of digital news routinely face information overload and have to resort to fast and economical heuristic modes of processing [ 60 ], which leads to reliance on cues embedded in messages or the way they are presented. For example, some studies that examine the influence of online social heuristics on evaluations of fake news’ credibility build on Sundar’s [ 61 ] concept of bandwagon cues, or indicators of collective endorsement of online content as a sign of its quality. More generally, these studies continue the line of research investigating how perceived social consensus on certain issues, gauged from online information environments, contributes to opinion formation (e.g. Lewandowsky et al. [ 62 ]).

Exploring the interaction between message topic and bandwagon heuristics on perceived credibility of fake news headlines, Luo et al. [ 40 ] find that a high number of likes associated with the post modestly increases (by 0.34 points on a 7-point scale) perceived credibility of both real and fake news compared to few likes. Notably, this effect is observed for health and science headlines, but not for political ones. In contrast, Kluck et al. [ 35 ] fail to find the effect of the numeric indicator of Facebook post endorsement on perceived credibility. This discrepancy could be explained by differences in the design of these two studies: whereas in Luo et al. participants were exposed to multiple headlines, both real and fake, Kluck et al. only assessed perceived credibility of just one made-up news story. This may have led to the unique properties of this single news story contributing to the observed result. Kluck et al. further reveal that negative comments questioning the stimulus post’s authenticity do dampen both perceived credibility (by 0.21 standard deviations) and sharing intent. In a rare investigation of news evaluation on Instagram, Mena et al. [ 46 ] demonstrate that trusted endorsements by celebrities do increase credibility of a made-up non-political news post, while bandwagon endorsements do not. Again, this study relies on one fabricated news post as a stimulus. These discrepant results of social influence studies suggest that the likelihood of detecting such effects may be contingent on specific study design choices, particularly the format, veracity, and sampling of stimulus messages. Generalizability and comparability of the results generated in experiments that use only one message as a stimulus should be enhanced by replications that employ stimulus sampling techniques [ 63 ].

Following one of the most influential paradigms in political communication research—the motivated reasoning account postulating that people are more likely to pursue, consume, endorse and otherwise favor information that matches their preexisting beliefs or comes from an ideologically aligned source—most studies in our sample measure the ideological or political concordance of the experimental messages and most commonly use it in statistical models as covariates or hypothesized moderators. Where they are reported, the pattern of direct effects of ideological concordance largely conforms to expectations, as people tend to rate congenial messages as more credible. In Bago et al. [ 43 ], headline political concordance increased the likelihood of participants rating it as accurate (b = 0.21), which was still meager compared to the positive effect of the headline’s actual veracity (b = 1.56). In Kim, Moravec and Dennis [ 50 ], headline political concordance was a significant predictor of believability (b = 0.585 in study 1; b = 0.153 in study 2), but the magnitude of this effect was surpassed by that of low source ratings by experts (b = −0.784 in study 1; b = -0.365 in study 2). In turn, increased believability heightened the reported intent to read, like, and share the story. In the same study, both expert and user ratings of the source displayed alongside the message influenced its perceived believability in both directions. According to the results of the study by Kim and Dennis [ 14 ], increased relevance and pro-attitudinal directionality of the statement contained in the headline predicted increased believability and sharing intent. Similarly, Moravec et al. [ 41 ] argued that the confirmatory nature of the headline is the single most powerful predictor of belief in false but not true news headlines. Tsang [ 55 ] found sizable effects of the respondents’ stance on the Hong Kong extradition bill on perceived fakeness of a news story covering the topic in line with the motivated reasoning mechanism.

At the same time, the expectation that individuals will use the ideological leaning of the source as a credibility cue when faced with ambiguous messages lacking other credibility indicators was not supported by data. Relying on the data collected from almost 4000 Amazon Mechanical Turk workers, Clayton et al. [ 45 ] failed to detect the hypothesized influence of motivated reasoning, induced by the right or left-leaning mainstream news source label, on belief in a false statement presented in a news report.

Several studies tested the effects of factors beyond social endorsement and directional cues. Schaewitz et al. [ 13 ] looked at the effects of such message characteristics as source credibility, content inconsistencies, subjectivity, sensationalism, and the presence of manipulated images on message and source credibility appraisals, and found no association between these factors and focal outcome variables—against the background of the significant influence of personal-level factors such as the need for cognition. As already mentioned, Luo et al. [ 40 ] found that fake news detection accuracy can also vary by the topic, with respondents recording the highest accuracy rates in the context of political news—a finding that could be explained by users’ greater familiarity and knowledge of politics compared to science and health.

One study under review investigated the possibility that news credibility perceptions can be influenced not by the features of specific messages, but by characteristics of a broader information environment, for example, the prevalence of certain types of discourse. Testing the effects of exposure to the widespread elite rhetoric about “fake news,” van Duyn and Collier [ 26 ] discovered evidence that it can dampen believability of all news, damaging people’s ability to identify legitimate content in addition to reducing general media trust. These effects were sizable, with primed participants ascribing real articles on average 0.47 credibility points less than those who had not been exposed to politicians’ tweets about fake news, on a 3-point scale.

As this brief overview demonstrates, the message-level approaches to fake news susceptibility consider a patchwork of diverse factors, whose effects may vary depending on the measurement instruments, context, and operationalization of independent and outcome variables. Compared to individual-level factors, scholars espousing this paradigm tend to rely on more diverse experimental stimuli. In addition to headlines, they often employ story leads and full news reports, while the stimulus news stories cover a broader range of topics than just politics. At the same time, out of ten studies attributed to this category, five used either one or two variations of a single stimulus news post. This constitutes an apparent limitation to the generalizability of their findings. To generate evidence generalizable beyond specific messages and topics, future studies in this domain should rely on more diverse sets of stimuli.

Individual-level factors

This strand of research treats differences in people's individual cognitive styles, predispositions, and conditions as the main source of variation in fake news credibility judgements. Theoretically, these studies also rely largely on dual-process approaches to human cognition [64, 65]. Scholars embracing this approach explain some people's tendency to fall for fake news by their reliance, whether innate or momentary, on less analytical and more reflexive modes of thinking [37, 42]. Generally, they ascribe fake news susceptibility to a lack of reasoning rather than to directionally motivated reasoning.

Pennycook and Rand [25] employ an established measure of analytical thinking, the Cognitive Reflection Test (CRT), to demonstrate that respondents who are more prone to override intuitive thinking with further reflection are also better at discerning false from real news. This effect holds regardless of whether the headlines are ideologically concordant or discordant with individuals' views. Importantly, the authors also find that headline plausibility (understood as the extent to which a headline contains a statement that sounds outrageous or patently false to an average person) moderates the observed effect, suggesting that more analytical individuals can use extreme implausibility as a cue indicating a news item's fakeness.

In a 2020 study [37], Pennycook and Rand replicated the relationship between CRT scores and fake news discernment, in addition to testing novel measures—pseudo-profound bullshit receptivity (the tendency to ascribe profound meaning to randomly generated phrases) and the tendency to overclaim one's level of knowledge—as potential correlates of respondents' likelihood of accepting claims contained in false headlines. Pearson's r ranged from 0.30 to 0.39 in study 1 and from 0.20 to 0.26 in study 2 (all significant at p < 0.001), indicating modest yet significant correlations. All three measures were correlated with the perceived accuracy of fake news headlines as well as with each other, leading the authors to speculate that they are all connected to a common underlying trait that manifests as the propensity to uncritically accept various claims of low epistemic value. The researchers labelled this trait reflexive open-mindedness, as opposed to the reflective open-mindedness observed in more analytical individuals. In a similar vein, Bronstein et al. [42] added cognitive tendencies such as delusion-like ideation, dogmatism, and religious fundamentalism to the list of individual-level traits weakly associated with heightened belief in fake news, while analytical and open-minded thinking slightly decreased this belief.
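
As a hedged illustration of the correlational logic behind these findings, the sketch below simulates one such association and computes Pearson's r. Everything here (sample size, variable names, effect magnitude) is hypothetical; only the form of the analysis mirrors the studies described above.

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 500
# Hypothetical bullshit-receptivity scores (profundity ratings of random phrases)
bs_receptivity = rng.normal(size=n)
# Perceived accuracy of fake headlines, weakly driven by receptivity plus noise
fake_news_belief = 0.5 * bs_receptivity + rng.normal(scale=1.6, size=n)

r, p = pearsonr(bs_receptivity, fake_news_belief)
print(f"r = {r:.2f}, p = {p:.4g}")  # with these settings r lands near 0.3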

Schaewitz et al. [13] linked need for cognition, a classic concept from credibility research, to lower ratings of the credibility (in some models but not others) and accuracy of non-political fake news. This concept overlaps with analytical thinking as studied in Pennycook and Rand's experiments, yet it is distinct in that it captures the self-reported pleasure derived from (and not just the proneness to) performing cognitively effortful tasks.

Much like the studies reviewed above, experiments by Martel et al. [48] and Bago et al. [43] challenged the motivated reasoning argument as applied to fake news detection, focusing instead on the classical reasoning explanation: the more analytic the reasoning, the higher the likelihood of accurately detecting false headlines. In contrast to the above accounts, both studies investigate momentary conditions, rather than stable cognitive features, as sources of variation in fake news detection accuracy. In Martel et al. [48], increased emotionality (both as the current mental state at the time of task completion and as an induced mode of information processing) was strongly associated with increased belief in fake news, with induced emotional processing resulting in a 10% increase in the believability of false headlines. Fernández-López and Perea [49] reached similar conclusions about the role of emotion drawing on a sample of Spanish residents.

Bago et al. [43] relied on the two-response approach to test the effect of increased time for deliberation on the perceived accuracy of real and false headlines. Participants first rated headlines under time constraints and additional cognitive load, then rated the same items again with no time limit or concurrent task; the final responses indicated significantly lower perceived accuracy of fake (but not real) headlines, whether ideologically concordant or discordant. The effect of heightened deliberation (b = 0.36) was larger than the effect of headline political concordance (b = -0.21). These findings lend additional support to the argument that decision conditions favoring more measured, analytical modes of cognitive processing are also more likely to yield higher rates of fake news discernment.
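
The core contrast in the two-response paradigm is within-subject: each participant rates the same item twice under different conditions, and the paired difference estimates the effect of deliberation. A minimal sketch of that comparison, on simulated data with a hypothetical rating scale and effect size, might look as follows.

import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(3)
n = 300
# Hurried first ratings of fake headlines (hypothetical 1-4 accuracy scale)
first = rng.normal(2.1, 0.6, n)
# In this simulation, deliberation lowers ratings of fake items on average
final = first - rng.normal(0.25, 0.4, n)

t, p = ttest_rel(first, final)
print(f"mean drop = {(first - final).mean():.2f}, t = {t:.1f}, p = {p:.2g}")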

Pennycook et al. [47] provide evidence for the illusory truth effect—the increased likelihood of rating previously seen statements as true, regardless of their actual veracity—in the context of fake news. In their experiments, a single exposure to either a fake or real news headline slightly yet consistently (by 0.09 or 0.11 points on a 4-point scale) increased the likelihood of rating it as true on a second encounter, regardless of political concordance, and this effect persisted for as long as a week.

It is not always how individuals process messages, but sometimes how competent they are with respect to the information environment, that affects their ability to resist misinformation. Amazeen and Bucy [57] introduce a measure of procedural news knowledge (PNK)—working knowledge of how news media organizations operate—as a predictor of the ability to identify fake news and other online messages that can be viewed as deliberately deceptive (such as native advertising). In their analysis, a one standard deviation decrease in PNK increased the perceived accuracy of fabricated news headlines by 0.19 standard deviations. Interestingly, Jones-Jang et al. [44] find a significant correlation between information literacy (but not media and news literacies) and the identification of fake news stories.
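
Effects like the PNK estimate above are expressed in standard deviation units, which a standardized (z-scored) regression makes explicit. The sketch below shows how such a coefficient is obtained and read; the data and scales are simulated and hypothetical.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1000
pnk = rng.normal(50, 10, n)  # hypothetical procedural news knowledge scale
# Perceived accuracy of fabricated headlines declines modestly with PNK
accuracy = 3.0 - 0.02 * pnk + rng.normal(0, 0.9, n)

z = lambda x: (x - x.mean()) / x.std()  # z-scoring puts both variables in SD units
fit = sm.OLS(z(accuracy), sm.add_constant(z(pnk))).fit()
print(fit.params)  # slope near -0.2: one SD more PNK, about 0.2 SD lower rated accuracy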

Taken together, the evidence reviewed in this section provides robust support for the idea that analytic processing is associated with more accurate discernment of fake news. Yet it has to be noted that the generalizability of these findings could be constrained by the stimulus selection strategy that many of these studies share. All experiments reviewed above, excluding Schaewitz et al. [13] and Fernández-López and Perea [49], rely on stimulus material constructed from equal shares of real mainstream news headlines and real fake news headlines sourced from fact-checking websites like Snopes.com. As these statements are intensely political and often blatantly untrue, the sheer implausibility of some of the headlines can offer a "fakeness" cue easily picked up by more analytical—or simply politically knowledgeable—individuals, a proposition tested by Pennycook and Rand [25]. While such stimuli preserve the authenticity of the information environment around the 2016 U.S. presidential election, it is unclear what these findings can tell us about the reasons behind people's belief in fake news that is less egregiously "fake" and therefore does not carry a conspicuous mark of falsehood.

Accuracy-promoting interventions

The normative foundation of much of the research investigating the reasons behind people's vulnerability to misinformation is the need to develop measures limiting its negative effects on individuals and society. Two major approaches to countering fake news and its negative effects can be distinguished in the literature under review. The first, often labelled inoculation, aims to preemptively alert individuals to the dangers of online deception and equip them with the tools to combat it [44, 56]. The second tackles specific questionable news stories or sources by labelling them in a way that triggers increased scrutiny by information consumers [51, 54]. The key difference between the two is that inoculation-based strategies are designed to work preemptively, while labels and flags are most commonly presented to information consumers alongside the message itself.

Some of the most promising inoculation interventions are those designed to enhance various aspects of media and information literacy. Recent studies have demonstrated that preventive techniques—like exposing people to anti-conspiracy arguments [66] or explaining deception strategies [67]—can help neutralize the harmful effects of misinformation before exposure. Grounded in the idea that a lack of adequate knowledge and skills makes news consumers less critical and thus more susceptible to fake news [68], such measures aim to make online deception-related considerations salient in the minds of large swaths of users and to equip them with basic techniques for spotting false news.

In a cross-national study involving respondents from the United States and India, Guess et al. [52] find that exposing users to a set of simple guidelines for detecting misinformation, modelled after similar Facebook guidelines (e.g., "Be skeptical of headlines," "Watch for unusual formatting"), improves the fake news discernment rate by 26% in the U.S. sample and by 19% in the Indian sample, regardless of whether the headlines are politically concordant or discordant. These effects persist several weeks post-exposure. Interestingly, the effect may be caused not so much by participants heeding the instructions as by simply priming them to think about accuracy. Testing the effects of accuracy priming in the context of COVID-19 misinformation, Pennycook et al. [34] reveal that inattention to accuracy considerations is rampant: people asked whether they would share false stories appear to rarely consider their veracity unless prompted to do so. Yet asking them to rate the accuracy of a single unrelated headline before the task dramatically improved discernment and reduced the likelihood of sharing false stories: the difference in sharing likelihood of true relative to false headlines was 2.8 times higher in the treatment group than in the control group.
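
To unpack the "2.8 times" contrast, sharing discernment can be computed as the gap between willingness to share true versus false headlines and then compared across conditions. The cell means below are invented for illustration; only the structure of the computation follows the contrast reported in the study.

def sharing_discernment(share_true: float, share_false: float) -> float:
    """Difference in mean willingness to share true vs. false headlines."""
    return share_true - share_false

# Hypothetical cell means (proportions of headlines participants would share)
control = sharing_discernment(share_true=0.40, share_false=0.33)  # gap of 0.07
primed = sharing_discernment(share_true=0.42, share_false=0.22)   # gap of 0.20

print(f"discernment ratio: {primed / control:.1f}x")  # about 2.9x with these made-up means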

On a more general note, the latter finding suggests that the results of all experiments that include false news discernment tasks could be biased toward greater accuracy simply by virtue of priming participants to think about news veracity, compared to their usual state of mind when browsing online news. Lutzke et al. [36] reached similar conclusions when priming critical thinking in the context of climate change news, which diminished trust in, and intentions to share, falsehoods even among climate change doubters.

A study by Roozenbeek and van der Linden [56] demonstrated the capacity of a scalable inoculation intervention, in the format of a choice-based online game, to confer resistance against several common misinformation strategies. Over an average of 15 minutes of gameplay, users were tasked with choosing the most efficient ways of misinforming the audience in a series of hypothetical scenarios. Post-gameplay credibility scores of fake news items embedded in the game were significantly lower than pre-test scores in a one-way repeated-measures analysis, F(5, 13559) = 980.65, Wilks' Λ = 0.73, p < 0.001, η² = 0.27. These findings were replicated in a between-subjects design with a control group by Basol et al. [69], although that study was not included in our sample based on formal criteria.
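
As a side note on the effect size: the reported η² is consistent with the common convention of computing a multivariate effect size as one minus Wilks' lambda. The snippet below is offered only as an arithmetic sanity check under that assumed convention, not as the authors' own computation.

# Multivariate effect size from Wilks' lambda (one common convention)
wilks_lambda = 0.73
eta_squared = 1 - wilks_lambda
print(eta_squared)  # 0.27, matching the value reported above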

Fact-checking is arguably the most publicly visible real-world measure used to combat online misinformation. Studies in our sample present mixed evidence on the effectiveness of fact-checking interventions in reducing the credibility of misinformation. Using different formats of fact-checking warnings before exposing participants to a set of verifiably fake news stories, Morris et al. [53] demonstrated that the effects of such measures can be limited and contingent on respondents' ideology (liberals tend to be more responsive to fact-checking warnings than conservatives). Encouragingly, Clayton et al. [51] found that labels indicating that a particular false story has been either disputed or rated false do decrease belief in the story, regardless of partisanship: a "Disputed" tag placed next to the story headline decreased believability by 10%, while a "Rated false" tag decreased it by 13%. At the same time, in line with van Duyn and Collier [26], they showed that general warnings not tied to particular messages are less effective and can reduce belief in real news. Finally, Garrett and Poulsen [54], comparing the effects of three types of Facebook flags (a fact-checking warning, a peer warning, and a humorous label), found that only self-identification of the source as humorous reduces both belief and sharing intent. The discrepant conclusions these three studies reach are unsurprising given the differences in the format and meaning of the warnings they test.

In sum, the findings in this section suggest that general warnings and non-specific rhetoric about "fake news" should be employed with caution, so as to avoid outcomes opposite to those desired. Recent advances in scholarship on the backfire effect of misinformation corrections have called the empirical soundness of this phenomenon into question [70, 71]. However, multiple earlier studies across several issue contexts documented specific instances where attitude-challenging corrections were linked to compounding misperceptions rather than rectifying them [72, 73]. Designers of accuracy-promoting interventions should at least be aware of the possibility that such effects could follow.

Overall, while the evidence on the effects of labelling and flagging specific social media messages and sources remains inconclusive, it appears that priming users to think about the accuracy of online news is a scalable and cheap way to improve rates of fake news detection. Gamified inoculation strategies also hold potential to reach mass audiences while preemptively familiarizing users with the threat of online deception.

Discussion

We have applied a scoping review methodology to map the existing evidence on the effects of various antecedents of people's belief in false news, predominantly in the context of social media. The research landscape presents a complex picture, suggesting that the focal phenomenon is driven by the interplay of cognitive, psychological, and environmental factors, as well as the characteristics of a specific message.

Overall, the evidence under review speaks to the fact that people, on average, are not entirely gullible and can detect deceitful messages reasonably well. While there has been no evidence to support the notion of a "truth bias," i.e., a propensity to accept most incoming messages as true, the results of some studies in our sample suggest that under certain conditions the opposite—a scenario that can be labelled "deception bias"—can be at work. This is consistent with recent theoretical and empirical accounts suggesting that a large share of online information consumers today approach news content with skepticism [74, 75]. In this regard, the problem with fake news could be not only that people fall for it, but also that it erodes trust in legitimate news.

At the same time, given the scarcity of attention and cognitive resources, individuals often rely on simple rules of thumb to make efficient credibility judgements. Depending on many contextual variables, such heuristics can be triggered by bandwagon and celebrity endorsements, topic relevance, or presentation format. In many cases, messages’ concordance with prior beliefs remains a predictor of increased credibility perceptions.

There is also consistent evidence supporting the notion that certain cognitive styles and predilections are associated with the ability to discern real from fake headlines. The overarching concept of reflexive open-mindedness captures an array of related constructs that predict the propensity to accept claims of questionable epistemic value, a category of which fake news is representative. Yet, while many of the studies focusing on individual-level factors demonstrate that the effects of cognitive styles and mental states are robust across both politically concordant and discordant headlines, the overall effects of belief consistency remain powerful. For example, in Pennycook and Rand [25], politically concordant items were rated as significantly more accurate than politically discordant items overall (this analysis served as a manipulation check). This suggests that individuals may not necessarily be engaging in motivated reasoning, yet may still use belief consistency as a credibility cue.

The line of research concerned with accuracy-improving interventions reveals the limited efficacy of general warnings and Facebook-style tags. Available evidence suggests that simple inoculation interventions embedded in news interfaces to prime critical thinking, as well as exposure to news literacy guidelines, can induce more reliable improvements while avoiding normatively undesirable effects.

Conclusions and future research

The review highlighted a number of blind spots in the existing experimental research on fake news perceptions. Since this literature has to a large extent emerged in response to particular societal developments, the scope of investigations and study design choices bear many contextual similarities. The sample is heavily skewed toward U.S. news and news consumers, with the majority of studies using a limited set of politically charged falsehoods as stimulus material. While this approach enhances the external validity of the studies, it also limits the universe of experimental fake news to a rather narrow subset of this sprawling genre. Future studies should transcend the boundaries of the "fake news canon" and look beyond Snopes and PolitiFact for stimulus material in order to investigate the effects of already established factors on the perceived credibility of misinformation that is not political or has not yet been debunked by major fact-checking organizations.

Similarly, the overwhelming majority of experiments under review seek to replicate the environment in which many information consumers encountered fake news during and after the misinformation crisis of 2016, to which end they present stimulus news items in the format of Facebook posts. As a result, there is currently a paucity of studies examining the other rapidly emerging venues for political speech and fake news propagation: Instagram, messenger services like WhatsApp, and video platforms like YouTube and TikTok.

The comparative aspect of fake news perceptions, too, is conspicuously understudied. The only truly comparative study in our sample [52] uncovered meaningful differences in effect sizes and decay time between the U.S. and Indian samples. More comparative research is needed to establish whether the determinants of fake news credibility are robust across various national political and media systems.

Two methodological concerns also stand out. First, the dominant approach to constructing experimental stimuli rests on the assumption that the bulk of news consumption on social media occurs at the level of headline exposure—i.e., users process news and make sharing decisions based largely on headlines. While there are strong reasons to believe this is true for some news consumers, others might engage with news content more thoroughly, which can yield differences from effects observed at the headline level. Future studies could benefit from accounting for this potential divergence. For example, researchers could borrow the logic of Arceneaux and Johnson [76] and introduce an element of choice, enabling comparisons between those who only skim headlines and those who prefer to click through and read articles.

Finally, the results of most existing fake news studies could be systematically biased by the mere presence of a credibility assessment task. As Kim and Dennis [14] argue, browsing social media feeds is normally associated with a hedonic mindset, which is less conducive to critical assessment of information than a utilitarian mindset. This is corroborated by Pennycook et al. [34], who show that people who are not primed to think about accuracy are significantly more likely to share false news, and that a small credibility rating task produces a massive improvement in discernment. Asking respondents to rate the credibility of treatment news items could work similarly, distorting the estimates relative to respondents' "real" accuracy rates. In this light, future research should incorporate indirect measures of perceived fake and real news accuracy that capture the focal construct without priming respondents to think about the credibility and veracity of information.

Limitations

The necessary conceptual and temporal boundaries that frame this review can also be viewed as its limitations. By focusing on a specific type of online misinformation—fake news—we intentionally excluded other varieties of deceitful messages that can be influential in the public sphere, such as rumors, hoaxes, and conspiracy theories. This focus on a relatively recent species of misinformation led us to apply specific criteria to the stimulus material, as well as to limit the search to the period beginning in 2016. Since belief in both fake news and adjacent genres of misinformation could be driven by the same mechanisms, focusing on fake news alone could leave out some potentially relevant evidence.

Another limitation relates to our methodological criteria. We selected studies for review based on experimental design, yet evidence of how people interact with misinformation can also be generated from questionnaires, behavioral data analysis, or qualitative inquiry. For example, recent non-experimental studies reveal certain demographic characteristics, political attitudes, and media use habits associated with increased susceptibility to fake news [77, 78]. Finally, our focus on articles published in peer-reviewed scholarly journals means that potentially relevant evidence published in formats more oriented toward practitioners and policymakers could be overlooked. Future systematic reviews could present a more comprehensive view of the research area by expanding their focus beyond exclusively "news-like" online misinformation formats, relaxing methodological criteria, and diversifying the range of data sources.

Funding Statement

The research was supported by Russian Science Foundation Grant № 19-18-00206 (2019–2021) at the National Research University Higher School of Economics (funder website: https://grant.rscf.ru/enexp/). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Data Availability


Decision Letter 0

24 Mar 2021

PONE-D-21-00513

Determinants of individuals’ belief in fake news: A scoping review

Dear Dr. Bryanov,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

==============================

The expert reviewers gave generally favorable opinions on the manuscript. However, revision is needed, especially to better justify the inclusion/exclusion of sources, the structure, and some methodological and discussion choices.

Please submit your revised manuscript by May 08 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see:  http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Stefano Triberti, Ph.D.

Academic Editor

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please ensure that you refer to Figure 1 in your text as, if accepted, production will need this reference to link the reader to the figure.

3. We note you have included a table to which you do not refer in the text of your manuscript. Please ensure that you refer to Table 2 in your text; if accepted, production will need this reference to link the reader to the Table.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Partly

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: N/A

Reviewer #2: N/A

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #2: Yes

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This is a very useful paper. It is also clear and well written. A pleasure to review.

I’m not familiar with the methodologies used to do systematic reviews, thus I cannot properly evaluate the rigor of their methodology. However, from what I understood, their methodology is sound and robust.

I have two minor comments and two personal comments. But the manuscript can be published as is.

Minor comments:

P24: “In sum, findings in this section suggest that the general warnings and non-specific rhetoric of “fake news” should be employed with caution so as to avoid the possible backfire effect.” ---> What do the authors mean by backfire effect? It is used in the literature to mean a lot of different things.

P26: “The line of research concerned with accuracy-improving interventions reveals limited efficiency of general warnings and Facebook-style tags and suggests that simple interventions embedded in news interfaces to prime critical thinking and exposure to news literacy guidelines can induce more reliable improvements while avoiding normatively undesirable backfire effects.” ---> Add commas (e.g. after “tags”) otherwise there is no space to breathe.

Personal comments:

The “truth bias” is, I believe, psychologically implausible (Sperber, D., Clément, F., Heintz, C., Mascaro, O., Mercier, H., Origgi, G., & Wilson, D. (2010). Epistemic vigilance. Mind & Language, 25(4), 359-393). However, it is, for some reason, very influential in the literature on misinformation. The authors’ result regarding the absence of truth bias could be mentioned in the discussion. Indeed, this implausible assumption could influence the way researchers design their experiments and frame their results.

On the other hand, the “deception bias” makes sense in light of what we know about trust in the digital age: the problem is not that people trust fake news sources too much but rather that they don’t trust good sources enough (e.g. Fletcher, R., & Nielsen, R. K. (2019). Generalised scepticism: how people navigate news on social media. Information, Communication & Society, 22(12), 1751-1769 ; Altay, S., Hacquin, AS. & Mercier, H. (2020) Why do so Few People Share Fake News? It Hurts Their Reputation. New Media & Society; Pennycook, G., & Rand, D. G. (2019). Fighting misinformation on social media using crowdsourced judgments of news source quality. Proceedings of the National Academy of Sciences, 116(7), 2521-2526.; Mercier, H. (2020). Not born yesterday: The science of who we trust and what we believe.).

Reviewer #2: Review of PONE-D-21-00513

Reviewers: Stephan Lewandowsky and Muhsin Yesilada

Overall, the paper has clear importance; identifying the determinants of fake news beliefs can have useful implications for targeted interventions. The authors mapped out factors that could affect the outcome variables set out in the included studies. These factors were: message-level factors, individual-level factors, and intervention & ecological factors (although it was hard to determine how they identified these three factors).

The research team decided to use precedents to guide their scoping review (such as the PRISMA review guidelines). Still, there were major issues with the presentation of evidence. For example, certain aspects of the methodology would have benefited from being in the results or even discussion section. There was also a lack of comprehensive coverage of certain research areas (such as fake news interventions). These issues are described below.

Overall, the submission requires major revision before a favourable opinion can be given.

Major Points:

1. The number of studies included in the scoping review that investigate fake news interventions was limited. The intervention section highlights the prominence of inoculation-based research in this context; however, we noticed some studies that could have been included but that were not (e.g., Basol, Roozenbeek, & van der Linden, 2020; Roozenbeek & van der Linden, 2019). These studies should be included to provide a comprehensive overview of the research area.

2. The discussion needs further organization into subsections to make it clear to the reader where to locate information. There is no real conclusion subsection which makes it difficult to tie together the report's implications and findings. Also, much of the methods section (page 7-9) seems to assess and evaluate the included studies' methodological decisions rather than describing the review's methodology. Although this information is valuable, it is perhaps better suited for the results or discussion section.

3. It is not entirely clear how the research team identified the three-factor groups (message-level factors, individual-level factors, and intervention factors). Were these groups based on a precedent, or is there consensus in the research area that factors can be categorized into these three groups? It is important to have this information to justify the methodology and determine if any potentially important factors were missed. Also, the intervention factor group is paired with ecological factors - it is not entirely clear what the research team means by "ecological factors".

4. Table three, which sets out the key methodological aspects and results of the included studies, needs more procedural information. At present, it is not easy to interpret how the included studies might have arrived at their results. This information could provide a more comprehensive summary of the research area, particularly for people who might want to know more about the commonalities amongst procedures across studies.

Detailed Comments

[page]:[para]:[line]

2:1:3 The end of this sentence requires a citation. A study or report that has mapped out propaganda, misinformation, and deception in the public sphere over time would be relevant to cite here.

2:1:6-7 The authors state, "a lack of consensus over the scale and persuasiveness of the phenomenon". However, it is not clear what they are referring to. We assume they are discussing the persuasiveness of misinformation in general. It is also unclear how the references they cited support such a claim.

3:1:1-5 The authors refer to a "massive spread of politically charged deception". To a certain extent, the word massive is subjective and does not point to the problem's true extent. With that in mind, some statistics or figures would be helpful.

3:1:7 The authors note the "hypothesized societal effects" of deceitful messages; however, they do not explain what these societal effects might be. There are some studies out there that have investigated the causal effects of misinformation. These studies might be a good idea to cite to set the scene for the study. See below for a selection:

Schaub, M., & Morisi, D. (2020). Voter mobilisation in the echo chamber: Broadband internet and the rise of populism in europe. European Journal of Political Research, 59(4), 752–773. https://doi.org/10.1111/1475-6765.12373

Bursztyn, L., Egorov, G., Enikolopov, R., & Petrova, M. (2019). Social media and xenophobia: evidence from Russia (No. w26567). National Bureau of Economic Research.

Allcott, H., Braghieri, L., Eichmeyer, S., & Gentzkow, M. (2020). The welfare effects of social media. American Economic Review, 110(3), 629–76. DOI: 10.1257/aer.20190658

3:2:4 Citation needed to support this claim.

3:2:6 The authors refer to a "focal issue", but it is not entirely clear what the focal issue is.

4:1:5 The sentence starts with "because", but because of what? Consider re-writing for clarity.

4:3:2 The authors use the term "inductively developed framework", it would be good if the authors described what this means.

5 The eligibility criteria would benefit from being placed in a table. At present, the criteria are embedded in the text, and it does not make it easy for the reader to identify the information.

5:1:2 The time frame for the included studies should be explained. We assume the time frame for the included studies starts in 2016 to coincide with the initial Trump presidential campaign, but this is an assumption. Further clarification would help.

5:1:7-12 This sentence is far too long; consider rewording or breaking it down into several sentences.

5:1:12-14 The search started from studies already known to the researchers; cite this here for clarity.

5:2 This sentence is too long and would benefit from shortening. Also, the paragraph states that the trio of databases would most likely yield the most comprehensive results - but why? This needs to be clearly explained.

6:3:4 The authors do not explain why they chose these outcome variables.

8:1:1 The authors wrote, "As visible from the table", but do not state which table they refer to.

10:1 This paragraph would be a good place to explain how the factor groups were identified.

10:2:1 Avoid using rhetorical questions.

11:1:3-5 This sentence is wordy and unclear; consider re-writing.

11:2:10-12 It is unclear what these numbers mean concerning the scale.

16:1:3 The authors note that "two major message level factors stand out"; however, it is not explained why these two stand out in particular.

17:1:1-5 This sentence is too long, consider rewording or breaking it down into several sentences.

17:1:7-10 This sentence is not clear on its own. Another sentence is needed to explain why these methodological differences lead to differing results.

17:1:16-18 It is argued that the differences in findings might be down to different study design choices - however, this needs more unpacking. The sentence alone does not explain why.

17:1:1-5 sentence is too long; consider rewording.

19:2:2 The authors use the terms "vary dramatically"; however, it is unclear what this means exactly; some figures or further quantification would be handy here.

19:2:8 The authors note an apparent limitation, but further explanation is needed to determine why it is a limitation.

19:3:3-4 Citation needed.

19:3:4-6 Citation needed.

20:2:5-7 The authors state that the correlations are statistically significant but do not provide an indicator of significance.

23:1:2 Citation needed.

23:2:2-5 Citation needed.

23:3:1 What were the guidelines?

23:3:11-12 "2.8 times less people were willing to share fake news following the treatment than before the treatment." - It is not clear what this statistic means and how it was identified.

24:3:9-10 What are the flags? We assume they are materials in a study but this is not entirely clear.

25:2:1-3 The authors discuss avoiding backfire effects, but research surrounding backfire effects is complicated. The current understanding is that backfire effects are not nearly as much of a concern as once thought - these recent findings should be reflected in this paragraph. (e.g., see Swire et al., 2020, DOI: 10.1016/j.jarmac.2020.06.006).

27:2:2 What is meant by a "common decision environment"? A definition here would be useful.

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1:  Yes:  Sacha Altay

Reviewer #2:  Yes:  Muhsin Yesilada and Stephan Lewandowsky

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Author response to Decision Letter 0

14 May 2021

Dear Drs. Altay, Lewandowsky, and Yesilada,

We are extremely grateful for your insightful and generously detailed feedback to our work. Based on your comments and suggestions, we have introduced some major changes to our report’s structure and evidence presentation. We believe that what resulted from this collaborative effort is a significantly improved manuscript. Our detailed responses to your comments, in the order we have received them, are listed in the table that can be found in an enclosed file entitled Response to Reviewers. We hope that you will find these responses sufficient.

The authors.

Submitted filename: Response to reviewers.docx

Decision Letter 1

PONE-D-21-00513R1

Some minor modifications have been suggested by the previous Reviewers. I believe these could be addressed in a short time to improve the completeness of the manuscript.

Please submit your revised manuscript by Jul 16 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

2. Is the manuscript technically sound, and do the data support the conclusions?

3. Has the statistical analysis been performed appropriately and rigorously?

4. Have the authors made all data underlying the findings in their manuscript fully available?

Reviewer #1: No

5. Is the manuscript presented in an intelligible fashion and written in standard English?

6. Review Comments to the Author

Reviewer #1: The authors did a great job at addressing my (minor) comments, I am now satisfied with the manuscript.

It's too bad that the very small scale of the fake news problem is not mentioned in the introduction (e.g. Allen et al. 2020) but the article is, I believe, good enough to be published as is.

Finally I would like to thank the authors for their work, it's a very useful paper!

Reviewer #2: Review of MS PONE-D-21-00513-R1

by Bryanov & Vziatysheva

Reviewer: Stephan Lewandowsky

Summary and Overall Recommendation

The paper is clearly important; identifying the determinants of fake news beliefs can have implications for successful interventions. The authors mapped out factors that could affect the outcome variables set out in the included studies. These factors were: message-level factors, individual-level factors, and intervention & ecological factors (although it was hard to determine how they identified these three factors).

I reviewed the paper at the previous round (together with a PhD student whom I did not consult at this round to save time). Our judgment was positive in principle, but we requested major revisions, in particular relating to (1) the small number of studies; (2) clarity of the discussion; (3) the three factors being identified; and (4) expansion of the main Table (Table 3 in the original submission).

The revision has addressed these points and I found the manuscript to be much improved and (nearly) ready for publication, subject to the minor comments below.

Detailed comments

165 I am not entirely clear why “cognitive science research on false memory recognition” would be “obviously irrelevant”?

173 Does “non-experimental” mean the authors excluded correlational studies? I would have thought most individual-differences research may involve correlational studies that do not include an experimental intervention. Perhaps the authors mean “non-empirical”? If they did exclude correlational studies I would be curious to know why.

23 The authors may be interested in DOI 10.3758/s13421-019-00948-y as another demonstration of social influences (although it is only indirectly related to fake news because the study compares pro- and anti-science blog posts).

387 Insert paragraph break before “As this…”.

392-394 This sentence is ungrammatical.

504 “the Indian sample” pops out of nowhere—this deserves a bit more explanation. Why India? What can be learned from this?

579 “messages” should be plural?

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

Reviewer #2:  Yes:  Stephan Lewandowsky

Author response to Decision Letter 1

Dear Drs. Altay and Lewandowsky,

We are grateful for your continued contributions to the improvement of our work. The revised manuscript addresses each comment you have raised in the latest round of review. Further details on the changes we have made can be found in the table appended to the enclosed file, titled Response to reviewers. We hope that you will deem the resulting manuscript fit for publication.

Decision Letter 2

11 Jun 2021

PONE-D-21-00513R2

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Additional Editor Comments (optional):

Acceptance letter

16 Jun 2021

Determinants of individuals’ belief in fake news: A scoping review

Dear Dr. Bryanov:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

PLOS ONE Editorial Office Staff

on behalf of

Dr. Stefano Triberti


Decoder Ring

Making real music for a fake band.

The Broadway show Stereophonic on how to construct the sound of the ’70s.


Episode Notes

Pop culture is full of fictional bands singing songs purpose-made to capture a moment, a sound. This music doesn’t organically emerge from a scene or genre, hoping to find an audience. Instead it fulfills an assignment: it needs to be 1960s folk music, 1970s guitar rock, 1980s hair metal, 1990s gangsta rap, and on and on.

In this episode, we’re going to use Stereophonic, which just opened on Broadway, as a kind of case study in how to construct songs like this. The playwright David Adjmi and his collaborator Will Butler, formerly of the band Arcade Fire, walk us through how they did it: how they made music that needs to capture the past but wants to speak to the present; that has to work dramatically but hopes to stand on its own; that must be plausible but aspires to be something even more.

The band in Stereophonic includes Sarah Pidgeon, Tom Pecinka, Juliana Canfield, Will Brill, and Chris Stack. Stereophonic is now playing on Broadway—and the cast album will be out May 10.

Thank you to Daniel Aukin, Marie Bshara, and Blake Zidell and Nate Sloan.

This episode was produced by Max Freedman and edited by Evan Chung, who produce the show with Katie Shepherd. Derek John is Executive Producer. Merritt Jacob is Senior Technical Director.

If you haven’t yet, please subscribe and rate our feed in Apple Podcasts or wherever you get your podcasts. And even better, tell your friends.

If you’re a fan of the show, please sign up for Slate Plus. Members get to listen to Decoder Ring and all other Slate podcasts without any ads and have total access to Slate’s website. Your support is also crucial to our work. Go to Slate.com/decoderplus to join Slate Plus today.

About the Show

In each episode, host Willa Paskin takes a cultural question, object, or habit; examines its history; and tries to figure out what it means and why it matters.

Willa Paskin is Slate’s television critic.

  • @WillaPaskin on Twitter



Opinion | Divide and conquer: The government’s propaganda of fear and fake news


“Nothing is real,” observed John Lennon, and that’s especially true of politics.

Much like the fabricated universe in Peter Weir’s 1998 film The Truman Show , in which a man’s life is the basis for an elaborately staged television show aimed at selling products and procuring ratings, the political scene in the United States has devolved over the years into a carefully calibrated exercise in how to manipulate, polarize, propagandize and control a population.

Take the media circus that is the Donald Trump hush money trial, which panders to the public’s voracious appetite for titillating, soap opera drama, keeping the citizenry distracted, diverted and divided.

This is the magic of the reality TV programming that passes for politics today.

Everything becomes entertainment fodder.

As long as we are distracted, entertained, occasionally outraged, always polarized but largely uninvolved and content to remain in the viewer’s seat, we’ll never manage to present a unified front against tyranny (or government corruption and ineptitude) in any form.

Studies suggest that the more reality TV people watch—and I would posit that it’s all reality TV, entertainment news included—the more difficult it becomes to distinguish between what is real and what is carefully crafted farce. 

“We the people” are watching a lot of TV.

On average, Americans spend five hours a day watching television. By the time we reach age 65, we’re watching more than 50 hours of television a week, and that number increases as we get older. And reality TV programming consistently captures the largest percentage of TV watchers every season by an almost 2-to-1 ratio.

This doesn’t bode well for a citizenry’s ability to sift through masterfully produced propaganda and think critically about the issues of the day.

Yet look behind the spectacles, the reality TV theatrics, the sleight-of-hand distractions and diversions, and the stomach-churning, nail-biting drama that is politics today, and you will find there is a method to the madness.

We have become guinea pigs in a ruthlessly calculated, carefully orchestrated, chillingly cold-blooded experiment in how to control a population and advance a political agenda without much opposition from the citizenry.

This is how you persuade a populace to voluntarily march in lockstep with a police state and police themselves (and each other): by ratcheting up the fear-factor, meted out one carefully calibrated crisis at a time, and teaching them to distrust any who diverge from the norm through elaborate propaganda campaigns.

Unsurprisingly, one of the biggest propagandists today is the U.S. government.

Add the government’s inclination to monitor online activity and police so-called “disinformation,” and you have the makings of a restructuring of reality straight out of Orwell’s 1984, where the Ministry of Truth polices speech and ensures that facts conform to whatever version of reality the government propagandists embrace.

This “policing of the mind” is exactly the danger author Jim Keith warned about when he predicted that “information and communication sources are gradually being linked together into a single computerized network, providing an opportunity for unheralded control of what will be broadcast, what will be said, and ultimately what will be thought.”

You may not hear much about the government’s role in producing, planting and peddling propaganda-driven fake news—often with the help of the corporate news media—because the powers-that-be don’t want us skeptical of the government’s message or its corporate accomplices in the mainstream media.

Yet when social media giants collude with the government to censor so-called disinformation, while the mainstream news media, which is supposed to act as a bulwark against government propaganda, instead serves as the mouthpiece of the world’s largest corporation (the U.S. government), the Deep State has grown dangerously out of control.

This has been in the works for a long time.

Veteran journalist Carl Bernstein, in his expansive 1977 Rolling Stone piece “The CIA and the Media,” reported on Operation Mockingbird, a CIA campaign started in the 1950s to plant intelligence reports among reporters at more than 25 major newspapers and wire agencies, who would then regurgitate them for a public oblivious to the fact that they were being fed government propaganda.

In some instances, as Bernstein showed, members of the media also served as extensions of the surveillance state, with reporters actually carrying out assignments for the CIA. Executives with CBS, the New York Times and Time magazine also worked closely with the CIA to vet the news.

If it was happening then, you can bet it’s still happening today, only this collusion has been reclassified, renamed and hidden behind layers of government secrecy, obfuscation and spin.

In its article “How the American government is trying to control what you think,” the Washington Post points out that “Government agencies historically have made a habit of crossing the blurry line between informing the public and propagandizing.”

This is mind-control in its most sinister form.

The end goal of these mind-control campaigns—packaged in the guise of the greater good—is to see how far the American people will allow the government to go in re-shaping the country in the image of a totalitarian police state.

The government’s fear-mongering is a key element in its mind-control programming.

A populace that stops thinking for itself is a populace that is easily led, easily manipulated and easily controlled, whether through propaganda, brainwashing, mind control, or just plain fear-mongering.

This unseen mechanism of society that manipulates us through fear into compliance is what American theorist Edward L. Bernays referred to as “an invisible government which is the true ruling power of our country.”

To this invisible government of rulers who operate behind the scenes—the architects of the Deep State—we are mere puppets on a string, to be brainwashed, manipulated and controlled.

Yet as I make clear in my book Battlefield America: The War on the American People and in its fictional counterpart The Erik Blair Diaries, it’s time to change the channel, tune out the reality TV show, and push back against the real menace of the police state.

If not, if we continue to sit back and lose ourselves in political programming, we will remain a captive audience to a farce that grows more absurd by the minute.

Constitutional attorney and author John W. Whitehead is founder and president of The Rutherford Institute. His latest books, The Erik Blair Diaries and Battlefield America: The War on the American People, are available at www.amazon.com. Whitehead can be contacted at [email protected]. Nisha Whitehead is the Executive Director of The Rutherford Institute. Information about The Rutherford Institute is available at www.rutherford.org.


Decoder Ring: Making Real Music for a Fake Band (Slate Daily Feed)

Pop culture is full of fictional bands singing songs purpose-made to capture a moment, a sound. This music doesn’t organically emerge from a scene or genre, hoping to find an audience. Instead it fulfills an assignment: it needs to be 1960s folk music, 1970s guitar rock, 80s hair metal, 90s gangsta rap, and on and on. In this episode, we’re going to use ‘Stereophonic,’ which just opened on Broadway, as a kind of case study in how to construct songs like this. The playwright David Adjmi and his collaborator, Will Butler, formerly of the band Arcade Fire, will walk us through how they did it: how they made music that needs to capture the past but wants to speak to the present; that has to work dramatically but hopes to stand on its own; that must be plausible but aspires to be something even more.

The band in Stereophonic includes Sarah Pidgeon, Tom Pecinka, Juliana Canfield, Will Brill, and Chris Stack. Stereophonic is now playing on Broadway, and the cast album will be out May 10. Thank you to Daniel Aukin, Marie Bshara, Blake Zidell, and Nate Sloan. This episode was produced by Max Freedman and edited by Evan Chung, who produce the show with Katie Shepherd. Derek John is Executive Producer. Merritt Jacob is Senior Technical Director.


Assignment in Pakistan

assignment work //0,3,0,2,0,4,5,7,4,1,7//

Assignment writing work Part Time/Full Time Daily payments

handwriting assignment work

Assignment work

Assignment writing available in cheapest rate

Hand written assignment work

Handwritten Assignment Writer

Need Online Assignment Workers

Online Assignment Work Available On low Rates With Good Quality Work

Daily Democrat


Sponsored Content

Sponsored Content | Bio Complete 3 Reviews – Does It Work? Real Official Gundry MD Website Claims or Fake Customer Results?


The news and editorial staff of the Daily Democrat had no role in this post’s preparation. This is a paid advertisement and does not necessarily reflect the official policy or position of the Daily Democrat, its employees, or subsidiaries.



Thursday Briefing

Signs suggest that Israel will likely invade Rafah.

By Daniel E. Slotnik

People walking on the rubble of buildings in Gaza.

Israel’s invasion of Rafah seems inevitable

After weeks of delays, negotiations and distractions, Israel appeared to hint this week that its assault on Rafah was all but inevitable.

Israeli warplanes bombed Rafah this week, and an Israeli official said that if an invasion began, civilians would be relocated to a safe zone on the Mediterranean coast.

Israel insists that invading Rafah, a southern city where more than a million Gazans are sheltering, is necessary to eliminate the militants hiding in a network of tunnels there and to ensure the release of hostages. The U.S. has pushed back against Israel conducting a major military operation there without a detailed plan to protect civilians.

At the moment, Hamas is bottled up in southern Gaza, heavy fighting has mostly subsided, a cease-fire remains possible and delaying helps placate American officials who are against the invasion. Some analysts have even suggested that Israel may never invade Rafah and that the threat is merely leverage against Hamas during negotiations.

But most officials and analysts said that an assault on the city was not a matter of if, but of when.

UNRWA: Germany said it would resume funding for the main U.N. agency aiding Gazans after an independent review found that Israel had not provided evidence that members of the agency were tied to Hamas.

Hostage: Hamas released a video showing Hersh Goldberg-Polin, an Israeli American dual citizen held hostage by the group. Goldberg-Polin was badly wounded during the Oct. 7 attack on Israel.

Biden said weapons will flow to Ukraine ‘within hours’

President Biden signed a long-delayed $95.3 billion aid package for Ukraine, Israel and Taiwan yesterday, saying that weapons would be bound for Ukraine “within hours.”

“It’s going to make the world safer,” Biden said. “And it continues America’s leadership in the world.”

Senator Mitch McConnell, a Republican from Kentucky who is the chamber’s minority leader, was crucial to helping the aid pass after months of congressional gridlock because of isolationist Republicans.

The package also includes $1 billion in humanitarian aid for Gaza and $15 billion in military aid for Israel. Biden reiterated that his commitment to Israel was “ironclad,” even though there is a growing backlash to American support for Israeli forces in Gaza.

More Ukraine news:

Here’s how the aid could help Ukraine.

The U.S. last week secretly shipped a new long-range missile system to Ukraine.

Life continues in Kharkiv, Ukraine’s second-largest city, despite a devastating bombing campaign.

Arizona charged Trump allies in an election interference case

Rudy Giuliani, Mark Meadows, Boris Epshteyn and several others who advised Donald Trump during the 2020 election, as well as the fake electors who tried to keep him in power after he lost, were indicted in Arizona yesterday.

The indictment includes conspiracy, fraud and forgery charges, related to alleged attempts by those charged to overturn the 2020 election results. Trump, who is on trial in Manhattan, was named as an unindicted co-conspirator.

MORE TOP NEWS

London: Several runaway military horses galloped through the streets yesterday, turning an ordinary rush hour into a spectacle.

U.S.: Tense scenes of protest against the war in Gaza continued at universities across the country. Mike Johnson, the House speaker, said during a visit to Columbia University in New York that “there is an appropriate time for the National Guard” to end protests. Israel’s prime minister, Benjamin Netanyahu, described pro-Palestinian protesters as “antisemitic mobs.”

Myanmar: The country’s military junta recaptured the town of Myawaddy, an important trading hub on the border with Thailand, reversing a victory by resistance soldiers.

Diplomacy: Antony Blinken, the U.S. secretary of state, began a three-day trip to China as tensions over trade, territorial disputes and national security threaten to derail relations.

NATO: About 90,000 NATO troops have been training in Europe this spring to prepare for a clash between Russia and the West that they hope will never come.

Spain: Pedro Sánchez, Spain’s prime minister, said that he was considering resigning after a judge opened an investigation into whether his wife, Begoña Gomez, had abused her position.

TikTok: U.S. lawmakers and aides spent nearly a year secretly drafting and bulletproofing a bill to force the sale of the social media company.

Meta: Profits for Facebook and Instagram’s parent company more than doubled in the first quarter of 2024. The company said it planned to spend billions of dollars more on its artificial intelligence efforts.

British Soccer

Premier League: Luton Town F.C. jumped into the highest level of English soccer, revamping its town’s image.

Undefeated: The Queens Park Ladies, an under-12 girls’ soccer team in Bournemouth, England, won all 22 of its matches, all against boys’ teams.

MORNING READ

A growing crew of virtual-assistant apps combine artificial intelligence and human labor to help parents manage their family lives.

These apps are styled like cutesy helpmates, with names like Yohana, Ohai and Milo. My colleague Amanda Hess tried Yohana, finding “that the busywork I might delegate to a machine is actually more human, and valuable, than I realized.”

CONVERSATION STARTERS

A pricey painting: “Portrait of Fräulein Lieser,” a long-lost 1917 painting by Gustav Klimt, sold yesterday for 35 million euros to an unnamed buyer.

Meta style: Silver chains. Shearling jackets. Mark Zuckerberg, the company’s chief executive, has ditched his gray T-shirts for clothes that suggest “the kinder, gentler face of technology,” our chief fashion critic writes.

Eternal scaffolding: New York’s protective barriers can stand over sidewalks for years.

SPORTS NEWS

The UEFA Youth League: So much more than a teenage Champions League.

Proposal to change tennis: The Grand Slams present their premium tour plan.

Formula 1 safety cars: How they work and ideas for improvements.

ARTS AND IDEAS

Debating ‘Tortured Poets’

“The Tortured Poets Department,” Taylor Swift’s lengthy excavation of her recent relationships, is not as universally loved as some of her other albums. Our pop music critics discussed the album’s sound, themes and reception.

“This feels like an album designed for her top 5 percent of listeners — the ride-or-dies who will defend her every move and pore over her every lyrical clue,” said Lindsay Zoladz, who reviewed the album. “Everyone else seems either puzzled or underwhelmed by it as a whole. But Swift is someone who thrives off feeling underestimated and misunderstood, so maybe the mixed reception of this album will be the creative rocket fuel that launches her into her next era.”

Here’s the rest of our critics’ conversation.

RECOMMENDATIONS

Cook: A simple miso roasted salmon can be both sustenance and self-care.

Read: “The Paris Novel,” by Ruth Reichl, is a rich ode to the city’s food scene.

Consider: Some nasal sprays can lead to dependence.

Listen: Check out some of the best songs from members of this year’s Rock & Roll Hall of Fame class.

Exercise: These earbuds won’t fall out during your workout.

Play the Spelling Bee. And here are today’s Mini Crossword and Wordle. You can find all our puzzles here.

That’s it for today’s briefing. Thank you for spending part of your morning with us, and see you tomorrow. — Dan

You can reach Dan and the team at [email protected].

Daniel E. Slotnik is a general assignment reporter on the Metro desk and a 2020 New York Times reporting fellow.


Mystery shopping, (fake) checks, and gift cards


If you’re looking for a new job, getting paid to shop might sound like a dream. Companies hire mystery shoppers to try products or services and share experiences about things like buying or returning something, or their overall customer experience. But while some mystery shopping jobs are legitimate, many aren’t. So how do you spot the scams?

In many mystery shopping scams, a scammer pretending to be from a well-known company “hires” you to be a mystery shopper. They send you a check (it’s fake) and say to deposit it to buy gift cards from the store and keep the rest as pay. Then, they ask you to give them the numbers on the back of the cards. But it’s all a scam. The scammer gets the money you put on the gift card while the bank will want you to pay back whatever you spent.

If you’re considering a mystery shopping job, here are ways to spot and avoid scams:

  • Research the job first. Search online for the name of the company or person who’s hiring you, plus words like “review,” “complaint,” or “scam.” See what others are saying.
  • Never agree to deposit a check to buy gift cards and send the numbers back as part of a mystery shopper job — or any job. Only scammers will say to do that. It can take weeks for a bank to figure out that the check is fake. By that time, you’re stuck repaying the money to the bank.
  • Don’t believe guarantees that you’ll make lots of money. Only scammers make these guarantees. Mystery shopping jobs are usually part-time or occasional work — not something to replace a full-time job.

Learn more about mystery shopping scams and other job scams at ftc.gov/jobscams. If you spot a scam, tell the FTC at ReportFraud.ftc.gov.

Reader comments

Found very educating.

Thank you for informative articles like these. It shows your commitment to "we the people"

Thanks for your info!

Great information. Thanks.

There is always great information provided, and all consumers need to be aware of the scams.

Thank you for the update! I was scammed by a car wrap company a couple months ago. I knew better but Still Fell for it! If it's too good to be true, it's FAKE! If you are to deposit a check and return a portion, it's FAKE. RUN DON'T ATTEMPT IT & SAVE YOURSELF AND YOUR ACCOUNT!

Thank you so much for the information! Very valuable information indeed!

My husband fell for the ol’ deposit-this-check scam. As he sat in the drive-thru at our bank, a number of police cars surrounded him! He was escorted into our bank and questioned. What an embarrassing nightmare. He, of course, was found to be innocent but gullible. Ha!

Another clue is that the company that is pretending to hire you will have a website that shows no evidence that they are also looking for clients. Clients are the companies that pay for the mystery shop.

No REAL mystery shopping company sends you checks to cash to BUY anything. I've been mystery shopping for over 5 years. If you mystery shop a restaurant or grocery store, you use your OWN money to make the purchase. If your report is acceptable, with the proper receipt, the company will reimburse you and pay you your shopping fee with whatever payment method and on whatever pay schedule they use.

Thank you for this information!

I accidentally applied for a marketing job on a “fake website.” The scammers said the assignment was for Walmart, and they sent me the check before any service had been performed. The check was drawn on a salon's account. The instructions were to send a gift card and “keep the rest.” I contacted the company the check was drawn on and strongly suggested they close that checking account.

I am a senior and feel we are easy targets. This information helps me stay on top of the latest scams and many of the old ones. I share your website with all my friends!


COMMENTS

  1. Assignment Work Real or Fake

    Most welcome on our YouTube channel #TechLecturer. I am Engr Ali Raza, owner of Tech Lecturer. Qualification: I am El...

  2. Assignment Work Reality Fake Or Real?

    In this video I am telling you the truth about this work; stay tuned for original work and fake work updates. #AssignmentReality #WorkTruths #RealVsFake #AssignmentFa...

  3. assignment work real or fake / work from home

    Assignment work real or fake? Please don't trust assignment work registration or investment methods.

  4. 17 Common Job Scams and How To Protect Yourself

    Scammers use a variety of strategies to trick people into sharing personal information. Here are 17 common job scams to avoid: 1. Fake job listings. Fake job listings come in various forms. Though job sites have measures in place to verify legitimate employers, scammers sometimes manage to get their listings posted.

  5. Don't Get Hooked on "Fake Work"

    "Being busy does not always mean real work. The object of all work is production or accomplishment and to either of these ends there must be forethought, system, planning, intelligence, and honest purpose, as well as perspiration. Seeming to do is not doing." —Thomas Alva EdisonAs Gaylan Nielson (2022) wrote:"We define Fake Work as work that isn't directly linked to strategies. This ...

  6. How to Find Trustworthy Sources for School Assignments

    Research the website: Look up the company that owns the website and see how well-known and trusted it is for the information you're citing. You'll want to use sites that are: Well-known and well-respected. Credible. Check media coverage: Look for a Media or Press page on the website.

  7. Cut Out the Fake Work and Focus on Projects that Really Matter

    Peterson and Nielson define real work as work that is critical to and aligned with the key goals of an organization—whether we're talking about your entire company or your one-man wolf pack ...

  8. How AI Writing Tools Are Helping Students Fake Their Homework

    How AI Writing Tools Are Helping Students Fake Their Homework. The increasing use of AI writing tools could help students cheat. Teachers say software that helps generate text can be used to fake homework assignments. One teacher says content from programs that rewrite or paraphrase content sticks out like a "sore thumb" at the middle school level.

  9. Assignment Desk? Scam? Or legit? : r/cinematography

    100% legit. They've been around for years and actually merged with/were bought by another company (that has staff shooters) a few years ago. I do some work for "both" companies (they still maintain separate business identities, but share resources). I've been hired through Assignment Desk before.

  10. PDF: The Foundational Principles of Fake Work and Real Work—and Knowing the ...

    Fake Work: A Road to Nowhere. 1. A basic understanding of the foundations of the Fake Work and Real Work discussion is critical. Before we get into our formula for success, we must ensure that we discuss the enemy and understand it. This chapter lays the cornerstones so you will understand how you ultimately thwart it. Real Work and Fake Work ...

  11. Real or Fake College Essay : NPR

    Which is tougher, writing a college admissions essay or guessing which college admissions essay prompts are real? Ask Me Another is playing this game because two hosts and three producers are soon ...

  12. 3 Ways to Buy More Time on an Overdue Assignment

    Explain the situation to your teacher (real or fake) and hope that he or she grants you an extension. If you have to print out your paper, experiencing "printer problems" may grant you a few extra hours to work on the assignment. If you typically store all of your work on a USB Drive, tell your teacher the thumb drive was stolen or misplaced.

  13. Fact of Assignment Work

    Fact of Assignment Work | daily 100RS | Real or Fake | complete Review | Dietitian Irfan #assignment #onlineassignmentwork #onlineassignmenthelp #onlinework #writeandearnmoney...

  14. My fake homework

    Automatically assign follow-up activities based on students' scores, assign as homework, share a link with colleagues, or print as a bubble sheet. Quiz your students on My fake homework practice problems using our fun classroom quiz game Quizalize and personalize your teaching.

  15. Arizona alleged 'fake electors' who backed Trump in 2020 indicted by

    A grand jury has indicted 11 alleged "fake electors" who backed former President Trump falsely as having won the state of Arizona in 2020, charging them with conspiracy, fraud and forgery.

  16. Determinants of individuals' belief in fake news: A scoping review

    Random assignment to either 1) One-response condition with no constraints or 2) Two-response condition with time and cognitive load constraints → Exposure to stimulus: 16 true and false headlines presented as Facebook posts (real from mainstream sources; fake from Snopes.com, ideologically counterbalanced), time limit and additional memory ...

  17. Is Internshala Legit? : r/developersIndia

    Yes you should. I did around 4 assignments before I got a web dev internship. Even if you feel it's a waste of time, you might learn something from doing the assignments. ayush031198 (2 yr. ago): Don't waste your time there; companies will give you some task, will most probably take it, but tell you to go f urself.

  18. Lesson 5: Spotting fake news (PSHE education)

    This lesson focuses on the NewsWise value: truthful. To identify fake news and its consequences. Explain what fake news is and why it is created. Identify what questions to ask and what checks to make to decide whether a news report is fake or real. Infer how a fake news story may affect someone's emotions and behaviour.

  19. Renew Reviews (Real or Fake Salt Water Trick?) What are Users Saying

    According to Renew reviews, its working mechanism and natural ingredients allow this supplement to promote weight loss, higher energy levels, and even better mood within a few weeks. Ingredients ...

  20. AI Homework Assignment Generator

    A homework assignment is a task assigned by educators as an extension of classroom work typically intended for students to complete outside of class. Written exercises, reading and comprehension activities, research projects, and problem-solving exercises are a few examples of homework varieties. However, the primary goal remains the same: to ...

  21. How Stereophonic Made Real Music for a Fake Band

    Making Real Music for a Fake Band. ... Instead it fulfills an assignment: it needs to be 1960s folk music, 1970s guitar rock, 80s hair metal, 90s gangsta rap, and on and on. ... that has to work ...

  22. Assignment work

    Assalam o alaikum, welcome to my YouTube channel "Islamic Content." Here you will see the latest Islamic videos; hope you like them. Assignment work is not real at all ...

  23. Divide and conquer: The government's propaganda of fear and fake news

    The government's fear-mongering is a key element in its mind-control programming. It's a simple enough formula. National crises, global pandemics, reported terrorist attacks, and sporadic ...

  24. Decoder Ring: Making Real Music for a Fake Band Slate Daily Feed

    Pop culture is full of fictional bands singing songs purpose-made to capture a moment, a sound. This music doesn't organically emerge from a scene or genre, hoping to find an audience. Instead it fulfills an assignment: it needs to be 1960s folk music, 1970s guitar rock, 80s hair metal, 90s gangsta…

  25. Assignment

    Handwriting Assignment work. Find the best Assignment in Pakistan. OLX Pakistan offers online local classified ads for Assignment. Post your classified ad for free in various categories like mobiles, tablets, cars, bikes, laptops, electronics, birds, houses, furniture, clothes, dresses for sale in Pakistan.

  26. Assignment work from home is real or Fake in Pakistan

    This video is shared to discuss the following queries: assignment work from home in Pakistan; assignment work real or fake; how to do assignment work; assignment wo...

  27. Bio Complete 3 Reviews

    Sponsored Content. A healthy gut microbiome can help support digestive health and overall wellness. That's precisely what Gundry MD Bio Complete 3 is designed to do. This dietary supplement ...

  28. Thursday Briefing

    Biden said weapons will flow to Ukraine 'within hours'. President Biden signed a long-delayed $95.3 billion aid package for Ukraine, Israel and Taiwan yesterday, saying that weapons would be ...

  29. Mystery shopping, (fake) checks, and gift cards

    Never agree to deposit a check to buy gift cards and send the numbers back as part of a mystery shopper job — or any job. Only scammers will say to do that. It can take weeks for a bank to figure out that the check is fake. By that time, you're stuck repaying the money to the bank. Don't believe guarantees that you'll make lots of money.

  30. earn 5000 daily assignment work

    earn 5000 daily assignment work - assignment job - assignment home - assignment work real or fake 💼 App Link: https://shorturl.at/cevL4 Discover the truth a...