

3. Fact or Opinion


Thinking about the reason an author created a source can be helpful to you because that reason was what dictated the kind of information he/she chose to include. Depending on that purpose, the author may have chosen to include factual, analytical, and objective information. Or, instead, it may have suited his/her purpose to include information that was subjective and therefore less factual and analytical. The author’s reason for producing the source also determined whether he or she included more than one perspective or just his/her own.

Authors typically want to do at least one of the following:

  • Inform and educate
  • Persuade
  • Sell services or products
  • Entertain

Combined Purposes

Sometimes authors have a combination of purposes, as when a marketer decides he can sell more smartphones with an informative sales video that also entertains us. The same is true when a singer writes and performs a song that entertains us but that she intends to make available for sale. Other examples of authors having multiple purposes occur in most scholarly writing.

In those cases, authors certainly want to inform and educate their audiences. But they also want to persuade their audiences that what they are reporting and/or postulating is a true description of a situation, event, or phenomenon or a valid argument that their audience must take a particular action. In this blend of scholarly authors’ purposes, the intent to educate and inform is considered to trump the intent to persuade.

Why Intent Matters

Authors’ intent usually matters in how useful their information can be to your research project, depending on which information need you are trying to meet. For instance, when you’re looking for sources that will help you actually decide your answer to your research question or evidence for your answer that you will share with your audience, you will want the author’s main purpose to have been to inform or educate his/her audience. That’s because, with that intent, he/she is likely to have used:

  • Facts where possible.
  • Multiple perspectives instead of just his/her own.
  • Little subjective information.
  • Seemingly unbiased, objective language that cites where he/she got the information.

The reason you want that kind of source when trying to answer your research question or explaining that answer is that all of those characteristics will lend credibility to the argument you are making with your project. Both you and your audience will simply find it easier to believe—will have more confidence in the argument being made—when you include those types of sources.

Sources whose authors intend only to persuade others won’t meet your information need for an answer to your research question or evidence with which to convince your audience. That’s because they don’t always confine themselves to facts. Instead, they tell us their opinions without backing them up with evidence. If you used those sources, your readers would notice and would be less likely to believe your argument.

Fact vs. Opinion vs. Objective vs. Subjective

Need to brush up on the differences between fact, objective information, subjective information, and opinion?

Fact – Facts are useful to inform or make an argument.

  • The United States was established in 1776.
  • The pH levels in acids are lower than the pH levels in alkalines.
  • Beethoven had a reputation as a virtuoso pianist.

Opinion – Opinions are useful to persuade, but careful readers and listeners will notice and demand evidence to back them up.

  • That was a good movie.
  • Strawberries taste better than blueberries.
  • Timothee Chalamet is the sexiest actor alive.
  • The death penalty is wrong.
  • Beethoven’s reputation as a virtuoso pianist is overrated.

Objective – Objective information reflects a research finding or multiple perspectives that are not biased.

  • “Several studies show that an active lifestyle reduces the risk of heart disease and diabetes.”
  • “Studies from the Brown University Medical School show that twenty-somethings eat 25 percent more fast-food meals at this age than they did as teenagers.”

Subjective – Subjective information presents one person or organization’s perspective or interpretation. Subjective information can be meant to distort, or it can reflect educated and informed thinking. All opinions are subjective, but some are backed up with facts more than others.

  • “The simple truth is this: As human beings, we were meant to move.”
  • “In their thirties, women should stock up on calcium to ensure strong, dense bones and to ward off osteoporosis later in life.” *

*In this quote, it’s mostly the “should” that makes it subjective. The objective version of that quote would read something like: “Studies have shown that women who begin taking calcium in their 30s show stronger bone density and fewer repercussions of osteoporosis than women who did not take calcium at all.” But perhaps there are other data showing complications from taking calcium. That’s why drawing the conclusion that requires a “should” makes the statement subjective.

Choosing & Using Sources: A Guide to Academic Research Copyright © 2015 by Teaching & Learning, Ohio State University Libraries is licensed under a Creative Commons Attribution 4.0 International License , except where otherwise noted.



J Indian Assoc Pediatr Surg. 2019;24(1)

Formulation of Research Question – Stepwise Approach

Simmi K. Ratan

Department of Pediatric Surgery, Maulana Azad Medical College, New Delhi, India

Department of Community Medicine, North Delhi Municipal Corporation Medical College, New Delhi, India

Department of Pediatric Surgery, Batra Hospital and Research Centre, New Delhi, India

Formulation of a research question (RQ) is an essentiality before starting any research. It aims to explore an existing uncertainty in an area of concern and points to a need for deliberate investigation. It is, therefore, pertinent to formulate a good RQ. The present paper aims to discuss the process of formulation of a RQ with a stepwise approach. The characteristics of a good RQ are expressed by the acronym “FINERMAPS,” expanded as feasible, interesting, novel, ethical, relevant, manageable, appropriate, potential value, publishability, and systematic. A RQ can address different formats depending on the aspect to be evaluated. Based on this, there can be different types of RQ, such as those based on the existence of a phenomenon, description and classification, composition, relationship, comparative, and causality. To develop a RQ, one needs to begin by identifying the subject of interest and then do preliminary research on that subject. The researcher then defines what still needs to be known in that particular subject and assesses the implied questions. After narrowing the focus and scope of the research subject, the researcher frames a RQ and then evaluates it. Thus, moving from conception to formulation of a RQ is a very systematic process that has to be performed meticulously, as research guided by such a question can have a wider impact in the field of social and health research by leading to the formulation of policies for the benefit of the larger population.

INTRODUCTION

A good research question (RQ) forms the backbone of a good research study, which in turn is vital in unraveling the mysteries of nature and giving insight into a problem.[ 1 , 2 , 3 , 4 ] The RQ identifies the problem to be studied and guides the methodology. It leads to the building up of an appropriate hypothesis (Hs). Hence, the RQ aims to explore an existing uncertainty in an area of concern and points to a need for deliberate investigation. A good RQ helps support a focused, arguable thesis and the construction of a logical argument. Hence, formulation of a good RQ is undoubtedly one of the first critical steps in the research process, especially in the field of social and health research, where research aims at the systematic generation of knowledge that can be used to promote, restore, maintain, and/or protect the health of individuals and populations.[ 1 , 3 , 4 ] Basically, research can be classified as action, applied, basic, clinical, empirical, administrative, theoretical, qualitative, or quantitative, depending on its purpose.[ 2 ]

Research plays an important role in developing clinical practices and instituting new health policies. Hence, there is a need for a logical scientific approach as research has an important goal of generating new claims.[ 1 ]

CHARACTERISTICS OF GOOD RESEARCH QUESTION

“The most successful research topics are narrowly focused and carefully defined but are important parts of a broad-ranging, complex problem.”

A good RQ is an asset as it:

  • Details the problem statement
  • Further describes and refines the issue under study
  • Adds focus to the problem statement
  • Guides data collection and analysis
  • Sets context of research.

Hence, while writing a RQ, it is important to see if it is relevant to the existing time frame and conditions. For example, the impact of the “odd-even” vehicle formula in decreasing the level of particulate air pollution in various districts of Delhi.

A good RQ is represented by the acronym FINERMAPS,[ 5 ] expanded as:

  • Feasible
  • Interesting
  • Novel
  • Ethical
  • Relevant
  • Manageable
  • Appropriate
  • Potential value and publishability
  • Systematic.

Feasible (F): Feasibility means that the study is within the ability of the investigator to carry out. It should be backed by an appropriate number of subjects and methodology as well as the time and funds to reach the conclusions. One needs to be realistic about the scope and scale of the project. One has to have access to the people, gadgets, documents, statistics, etc. One should be able to relate the concepts of the RQ to the observations, phenomena, indicators, or variables that one can access. One should be clear that the collection of data and the proceedings of the project can be completed within the limited time and resources available to the investigator. Sometimes, a RQ appears feasible, but when fieldwork or the study gets started, it proves otherwise. In this situation, it is important to write up the problems honestly and to reflect on what has been learned. One should discuss with more experienced colleagues or the supervisor so as to develop a contingency plan that anticipates possible problems while working on the RQ and identifies possible solutions.

Interesting (I): It is essential that one has a real, grounded interest in one’s RQ and can explore this and back it up with academic and intellectual debate. This interest will motivate one to keep going with the RQ.

Novel (N): The question should not simply copy questions investigated by other workers but should have scope to be investigated. It may aim at confirming or refuting already established findings, establishing new facts, or finding new aspects of established facts. It should show the imagination of the researcher. Above all, the question has to be simple and clear. The complexity of a question can frequently hide unclear thoughts and lead to a confused research process. A very elaborate RQ, or a question which is not differentiated into different parts, may hide concepts that are contradictory or not relevant. The question needs to be clear and thought through. Having one key question with several subcomponents will guide your research.

Ethical (E): This is the foremost requirement of any RQ, and it is mandatory to get clearance from the appropriate authorities before starting research on the question. Further, the RQ should be such that it minimizes the risk of harm to the participants, protects their privacy, maintains their confidentiality, and provides the participants the right to withdraw from the research. It should also guide in avoiding deceptive practices in research.

Relevant (R): The question should be of academic and intellectual interest to people in the field you have chosen to study. The question preferably should arise from issues raised in the current situation, the literature, or practice. It should establish a clear purpose for the research in relation to the chosen field. For example, filling a gap in knowledge, analyzing academic assumptions or professional practice, monitoring a development in practice, comparing different approaches, or testing theories within a specific population make for relevant RQs.

Manageable (M): This has a similar essence to feasibility but mainly means that the research that follows can be managed by the researcher.

Appropriate (A): RQ should be appropriate logically and scientifically for the community and institution.

Potential value and publishability (P): The study can make significant health impact in clinical and community practices. Therefore, research should aim for significant economic impact to reduce unnecessary or excessive costs. Furthermore, the proposed study should exist within a clinical, consumer, or policy-making context that is amenable to evidence-based change. Above all, a good RQ must address a topic that has clear implications for resolving important dilemmas in health and health-care decisions made by one or more stakeholder groups.

Systematic (S): Research is structured, with specified steps to be taken in a specified sequence in accordance with a well-defined set of rules, though this does not rule out creative thinking.

Example of RQ: Would the topical skin application of oil as a skin barrier reduce hypothermia in preterm infants? This question fulfills the criteria of a good RQ, that is, feasible, interesting, novel, ethical, and relevant.

Types of research question

A RQ can address different formats depending on the aspect to be evaluated.[ 6 ] For example:

  • Existence: This is designed to uphold the existence of a particular phenomenon or to rule out rival explanation, for example, can neonates perceive pain?
  • Description and classification: This type of question encompasses statement of uniqueness, for example, what are characteristics and types of neuropathic bladders?
  • Composition: It calls for breakdown of whole into components, for example, what are stages of reflux nephropathy?
  • Relationship: Evaluates the relation between variables, for example, the association between tumor rupture and recurrence rates in Wilms’ tumor
  • Descriptive–comparative: The researcher is expected to ensure that all is the same between groups except the issue in question, for example, are germ cell tumors occurring in gonads more aggressive than those occurring in extragonadal sites?
  • Causality: Does deletion of p53 lead to a worse outcome in patients with neuroblastoma?
  • Causality–comparative: Such questions frequently aim to compare the effect of two rival treatments, for example, does adding surgical resection improve survival in children with neuroblastoma compared with chemotherapy alone?
  • Causality–comparative interactions: Does immunotherapy lead to a better survival outcome than chemotherapy in neuroblastoma Stage IV-S in the setting of an adverse genetic profile but not in its absence? (Does X cause more change in Y than Z does under certain conditions and not under other conditions?)

How to develop a research question

  • Begin by identifying a broader subject of interest that lends itself to investigation, for example, hormone levels in hypospadias
  • Do preliminary research on the general topic to find out what research has already been done and what literature already exists.[ 7 ] Therefore, one should begin with “information gaps” (What do you already know about the problem? For example, studies with results on testosterone levels in hypospadias)
  • What do you still need to know? (e.g., levels of other reproductive hormones in hypospadias)
  • What are the implied questions: The need to know about a problem will lead to a few implied questions. Each general question should lead to more specific questions (e.g., how hormone levels differ in isolated hypospadias with respect to those in the normal population)
  • Narrow the scope and focus of research (e.g., assessment of reproductive hormone levels in isolated hypospadias and in hypospadias with associated anomalies)
  • Is RQ clear? With so much research available on any given topic, RQs must be as clear as possible in order to be effective in helping the writer direct his or her research
  • Is the RQ focused? RQs must be specific enough to be well covered in the space available
  • Is the RQ complex? RQs should not be answerable with a simple “yes” or “no” or by easily found facts. They should, instead, require both research and analysis on the part of the writer
  • Is the RQ one that is of interest to the researcher and potentially useful to others? Is it a new issue or problem that needs to be solved, or is it attempting to shed light on a previously researched topic?
  • Is the RQ researchable? Consider the available time frame and the required resources. Is the methodology to conduct the research feasible?
  • Is the RQ measurable and will the process produce data that can be supported or contradicted?
  • Is the RQ too broad or too narrow?
  • Create Hs: After formulating the RQ, think about where the research is likely to progress. What kind of argument is likely to be made/supported? What would it mean if the research disputed the planned argument? At this step, one can well be on the way to having a focus for the research and the construction of a thesis. The Hs consists of more specific predictions about the nature and direction of the relationship between two variables. It is a predictive statement about the outcome of the research and dictates the method and design of the research[ 1 ]
  • Understand the implications of your research: This is important for application: whether one manages to fill a gap in knowledge and how the results of the research have practical implications, for example, to develop health policies or improve educational policies.[ 1 , 8 ]

Brainstorm/Concept map for formulating research question

  • First, identify what types of studies have been done in the past
  • Is there a unique area that is yet to be investigated or is there a particular question that may be worth replicating?
  • Begin to narrow the topic by asking open-ended “how” and “why” questions
  • Evaluate the question
  • Develop a Hypothesis (Hs)
  • Write down the RQ.

Writing down the research question

  • State the question in your own words
  • Write down the RQ as completely as possible.

For example: Evaluation of the reproductive hormonal profile in children presenting with isolated hypospadias.

  • Divide your question into concepts. Narrow to two or three concepts (reproductive hormonal profile, isolated hypospadias, comparison with normal/non-isolated hypospadias – implied)
  • Specify the population to be studied (children with isolated hypospadias)
  • Refer to the exposure or intervention to be investigated, if any
  • Reflect the outcome of interest (hormonal profile); a brief illustrative sketch of these elements follows this list.
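
As an illustrative sketch only (not part of the paper), the elements listed above can be kept explicit in a simple structured record; the field names below are hypothetical:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ResearchQuestion:
        """Checklist of the elements discussed above (illustrative only)."""
        population: str                          # who will be studied
        exposure_or_intervention: Optional[str]  # exposure or intervention, if any
        comparison: Optional[str]                # comparison group, if implied
        outcome: str                             # outcome of interest

    rq = ResearchQuestion(
        population="children presenting with isolated hypospadias",
        exposure_or_intervention=None,  # purely observational: no intervention
        comparison="normal/non-isolated hypospadias (implied)",
        outcome="reproductive hormonal profile",
    )
    print(rq)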

Another example of a research question

Would the topical skin application of oil as a skin barrier reduce hypothermia in preterm infants? Apart from fulfilling the criteria of a good RQ, that is, feasible, interesting, novel, ethical, and relevant, it also details the intervention (topical skin application of oil), the rationale for the intervention (as a skin barrier), the population to be studied (preterm infants), and the outcome (reduced hypothermia).

Other important points to heed while framing a research question

  • Make reference to a population when a relationship is expected among a certain type of subjects
  • RQs and Hs should be made as specific as possible
  • Avoid words or terms that do not add to the meaning of RQs and Hs
  • Stick to what will be studied, not implications
  • Name the variables in the order in which they occur/will be measured
  • Avoid the words “significant” and “prove”
  • Avoid using two different terms to refer to the same variable.

Some of the other problems and their possible solutions have been discussed in Table 1 .

Table 1. Potential problems and solutions while framing a research question (table not reproduced here).

GOING BEYOND FORMULATION OF RESEARCH QUESTION – THE PATH AHEAD

Once the RQ is formulated, a Hs can be developed. The Hs is the transformation of a RQ into an operational analog.[ 1 ] It is a statement of the prediction one makes about the phenomenon to be examined.[ 4 ] More often, for a case–control trial, a null Hs is generated, which is later accepted or refuted.
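
As an illustrative aside (not from the paper), the kind of null Hs generated for a case–control design is typically examined with a test of association; the counts below are made up:

    from scipy.stats import chi2_contingency

    # Hypothetical 2x2 case-control table:
    # rows = exposed / unexposed, columns = cases / controls
    table = [[30, 10],
             [20, 40]]

    chi2, p_value, dof, expected = chi2_contingency(table)

    # If p_value falls below the chosen significance level (e.g., 0.05), the null Hs
    # of "no association between exposure and disease" is rejected; otherwise it is retained.
    print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")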

A strong Hs should have the following characteristics:

  • Gives insight into a RQ
  • Is testable and measurable by the proposed experiments
  • Has a logical basis
  • Follows the most likely outcome, not the exceptional outcome.

EXAMPLES OF RESEARCH QUESTION AND HYPOTHESIS

Research question-1

  • Does a reduced gap between the two segments of the esophagus in patients with esophageal atresia reduce the mortality and morbidity of such patients?

Hypothesis-1

  • A reduced gap between the two segments of the esophagus in patients with esophageal atresia reduces the mortality and morbidity of such patients
  • In pediatric patients with esophageal atresia, a gap of <2 cm between the two segments of the esophagus and proper mobilization of the proximal pouch reduce the morbidity and mortality among such patients.

Research question-2

  • Does application of mitomycin C improve the outcome in patients with corrosive esophageal strictures?

Hypothesis-2

In patients aged 2–9 years with corrosive esophageal strictures, 3–4 applications of mitomycin C at a dosage of 0.4 mg/ml for 5 min over a period of 6 months improve the outcome in terms of symptomatic and radiological relief. Some other examples of good and bad RQs are shown in Table 2.

Table 2. Examples of a few bad (left-hand column) and a few good (right-hand column) research questions (table not reproduced here).

RESEARCH QUESTION AND STUDY DESIGN

The RQ determines the study design. For example, a question aimed at finding the incidence of a disease in a population will lead to conducting a survey, while finding risk factors for a disease will need a case–control study or a cohort study. A RQ may also culminate in a clinical trial,[ 9 , 10 ] for example, the effect of administration of folic acid tablets in the perinatal period on decreasing the incidence of neural tube defects. Accordingly, the Hs is framed.

Appropriate statistical calculations are instituted to generate the sample size. The subject inclusion and exclusion criteria and the time frame of the research are carefully defined, as are the detailed subject information sheet and pro forma. The research is then set off. A few examples of research methodology guided by the RQ:

  • Incidence of anorectal malformations among adolescent females (hospital-based survey)
  • Risk factors for the development of spontaneous pneumoperitoneum in pediatric patients (case–control design and cohort study)
  • Effect of technique of extramucosal ureteric reimplantation without the creation of submucosal tunnel for the preservation of upper tract in bladder exstrophy (clinical trial).

The results of the research are then available for wider application to health and social life.

CONCLUSION

A good RQ needs a thorough literature search and deep insight into the specific area/problem to be investigated. A RQ has to be focused yet simple. Research guided by such a question can have a wider impact in the field of social and health research by leading to the formulation of policies for the benefit of the larger population.

Financial support and sponsorship

Conflicts of interest

There are no conflicts of interest.

REFERENCES


Research Basics


Research is formalized curiosity. It is poking and prying with a purpose. - Zora Neale Hurston

A good working definition of research might be:

Research is the deliberate, purposeful, and systematic gathering of data, information, facts, and/or opinions for the advancement of personal, societal, or overall human knowledge.

Based on this definition, we all do research all the time. Most of this research is casual research. Asking friends what they think of different restaurants, looking up reviews of various products online, learning more about celebrities; these are all research.

Formal research includes the type of research most people think of when they hear the term “research”: scientists in white coats working in a fully equipped laboratory. But formal research is a much broader category than just this. Most people will never do laboratory research after graduating from college, but almost everybody will have to do some sort of formal research at some point in their careers.

So What Do We Mean By “Formal Research?”

Casual research is inward facing: it’s done to satisfy our own curiosity or meet our own needs, whether that’s choosing a reliable car or figuring out what to watch on TV. Formal research is outward facing. While it may satisfy our own curiosity, it’s primarily intended to be shared in order to achieve some purpose. That purpose could be anything: finding a cure for cancer, securing funding for a new business, improving some process at your workplace, proving the latest theory in quantum physics, or even just getting a good grade in your Humanities 200 class.

What sets formal research apart from casual research is the documentation of where you gathered your information from. This is done in the form of “citations” and “bibliographies.” Citing sources is covered in the section "Citing Your Sources."

Formal research also follows certain common patterns depending on what the research is trying to show or prove. These are covered in the section “Types of Research.”



Types of Evidence

Why is it useful to recognize the type of evidence an argument uses?

  • This can help you avoid getting “lost” in the words; if you’re reading actively and recognizing what type of evidence you’re looking at, then you’re more likely to stay focused.
  • Different types of evidence are often associated with specific types of assumptions or flaws, so if a question presents a classic evidence structure, you may be able to find the answer more quickly.

Common Evidence Types

Examples as evidence

  • [Paola is the best athlete in the state.] After all, Paola has won medals in 8 different Olympic sports.
  • Paola beat last year's decathlon state champion on Saturday, so [she is the best athlete in the state].

What others say

  • [Paola is the best athlete in the state.] We know this because the most highly-acclaimed sports magazine has named her as such.
  • Because the population voted Paola the Best Athlete in the state in a landslide, [it would be absurd to claim that anyone else is the best athlete in the state].

Using the past

  • [Paola is the best athlete in the state.] She must be, since she won the state championships last year, two years ago, three years ago, and four years ago.
  • [Paola is the best athlete in the state], because she won the most athletic awards. Look at Jude, who's currently the Best Chef in the State because he won the most cooking awards.

Generalizing from a Sample

  • [Paola is the best athlete in the state], because she won every local tournament in every spring sport.

Common Rebuttal Structures

Counterexamples; alternate possibilities

Other Types of Argument Structures

Conditional

  • Penguins win → Flyers make big mistake
  • Flyers make big mistake → coach tired
  • Friday → coach is not tired
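
One way to chain these conditionals (a standard contrapositive reading, not spelled out on the original page):

  • Friday → coach is not tired
  • coach is not tired → Flyers do not make a big mistake (contrapositive of the second statement)
  • Flyers do not make a big mistake → Penguins do not win (contrapositive of the first statement)
  • Therefore: Friday → Penguins do not win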

Causation based on correlation


Chapter 4 Theories in Scientific Research

As we know from previous chapters, science is knowledge represented as a collection of “theories” derived using the scientific method. In this chapter, we will examine what a theory is, why we need theories in research, what the building blocks of a theory are, how to evaluate theories, and how to apply theories in research, and we will also present illustrative examples of five theories frequently used in social science research.

Theories are explanations of a natural or social behavior, event, or phenomenon. More formally, a scientific theory is a system of constructs (concepts) and propositions (relationships between those constructs) that collectively presents a logical, systematic, and coherent explanation of a phenomenon of interest within some assumptions and boundary conditions (Bacharach 1989). [1]

Theories should explain why things happen, rather than just describe or predict. Note that it is possible to predict events or behaviors using a set of predictors, without necessarily explaining why such events are taking place. For instance, market analysts predict fluctuations in the stock market based on market announcements, earnings reports of major companies, and new data from the Federal Reserve and other agencies, based on previously observed correlations . Prediction requires only correlations. In contrast, explanations require causations , or understanding of cause-effect relationships. Establishing causation requires three conditions: (1) correlations between two constructs, (2) temporal precedence (the cause must precede the effect in time), and (3) rejection of alternative hypotheses (through testing). Scientific theories are different from theological, philosophical, or other explanations in that scientific theories can be empirically tested using scientific methods.
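
As a hedged, illustrative aside (not part of the original chapter), the short simulation below shows why correlation alone supports prediction but not explanation: two variables end up strongly correlated only because both are driven by a common confounder.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # A confounding variable z drives both x and y; by construction, x does NOT cause y.
    z = rng.normal(size=n)
    x = z + 0.5 * rng.normal(size=n)
    y = z + 0.5 * rng.normal(size=n)

    # Strong correlation: good enough for prediction (knowing x helps guess y)...
    print("corr(x, y) =", round(float(np.corrcoef(x, y)[0, 1]), 2))  # roughly 0.8

    # ...but not for explanation: y was generated without any reference to x, so an
    # intervention that changed only x would leave y untouched. Establishing causation
    # would additionally require temporal precedence and ruling out confounders like z.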

Explanations can be idiographic or nomothetic. Idiographic explanations are those that explain a single situation or event in idiosyncratic detail. For example, you did poorly on an exam because: (1) you forgot that you had an exam on that day, (2) you arrived late to the exam due to a traffic jam, (3) you panicked midway through the exam, (4) you had to work late the previous evening and could not study for the exam, or even (5) your dog ate your textbook. The explanations may be detailed, accurate, and valid, but they may not apply to other similar situations, even involving the same person, and are hence not generalizable. In contrast, nomothetic explanations seek to explain a class of situations or events rather than a specific situation or event. For example, students who do poorly in exams do so because they did not spend adequate time preparing for exams or because they suffer from nervousness, attention-deficit, or some other medical disorder. Because nomothetic explanations are designed to be generalizable across situations, events, or people, they tend to be less precise, less complete, and less detailed. However, they explain economically, using only a few explanatory variables. Because theories are also intended to serve as generalized explanations for patterns of events, behaviors, or phenomena, theoretical explanations are generally nomothetic in nature.

While understanding theories, it is also important to understand what theory is not. Theory is not data, facts, typologies, taxonomies, or empirical findings. A collection of facts is not a theory, just as a pile of stones is not a house. Likewise, a collection of constructs (e.g., a typology of constructs) is not a theory, because theories must go well beyond constructs to include propositions, explanations, and boundary conditions. Data, facts, and findings operate at the empirical or observational level, while theories operate at a conceptual level and are based on logic rather than observations.

There are many benefits to using theories in research. First, theories provide the underlying logic of the occurrence of a natural or social phenomenon by explaining what the key drivers and key outcomes of the target phenomenon are and why, and what underlying processes are responsible for driving that phenomenon. Second, they aid in sense-making by helping us synthesize prior empirical findings within a theoretical framework and reconcile contradictory findings by discovering contingent factors influencing the relationship between two constructs in different studies. Third, theories provide guidance for future research by helping identify constructs and relationships that are worthy of further research. Fourth, theories can contribute to cumulative knowledge building by bridging gaps between other theories and by causing existing theories to be reevaluated in a new light.

However, theories can also have their own share of limitations. As simplified explanations of reality, theories may not always provide adequate explanations of the phenomenon of interest based on a limited set of constructs and relationships. Theories are designed to be simple and parsimonious explanations, while reality may be significantly more complex. Furthermore, theories may impose blinders or limit researchers’ “range of vision,” causing them to miss out on important concepts that are not defined by the theory.

Building Blocks of a Theory

David Whetten (1989) suggests that there are four building blocks of a theory: constructs, propositions, logic, and boundary conditions/assumptions. Constructs capture the “what” of theories (i.e., what concepts are important for explaining a phenomenon), propositions capture the “how” (i.e., how are these concepts related to each other), logic represents the “why” (i.e., why are these concepts related), and boundary conditions/assumptions examine the “who, when, and where” (i.e., under what circumstances will these concepts and relationships work). Though constructs and propositions were previously discussed in Chapter 2, we describe them again here for the sake of completeness.

Constructs are abstract concepts specified at a high level of abstraction that are chosen specifically to explain the phenomenon of interest. Recall from Chapter 2 that constructs may be unidimensional (i.e., embody a single concept), such as weight or age, or multi-dimensional (i.e., embody multiple underlying concepts), such as personality or culture. While some constructs, such as age, education, and firm size, are easy to understand, others, such as creativity, prejudice, and organizational agility, may be more complex and abstruse, and still others, such as trust, attitude, and learning, may represent temporal tendencies rather than steady states. Nevertheless, all constructs must have a clear and unambiguous operational definition that specifies exactly how the construct will be measured and at what level of analysis (individual, group, organizational, etc.). Measurable representations of abstract constructs are called variables. For instance, intelligence quotient (IQ score) is a variable that is purported to measure an abstract construct called intelligence. As noted earlier, scientific research proceeds along two planes: a theoretical plane and an empirical plane. Constructs are conceptualized at the theoretical plane, while variables are operationalized and measured at the empirical (observational) plane. Furthermore, variables may be independent, dependent, mediating, or moderating, as discussed in Chapter 2. The distinction between constructs (conceptualized at the theoretical level) and variables (measured at the empirical level) is shown in Figure 4.1.

Figure 4.1. Distinction between theoretical and empirical concepts: on the theoretical plane, construct A is related to construct B by a proposition; on the empirical plane, the corresponding independent variable is related to the dependent variable by a hypothesis.

Propositions are associations postulated between constructs based on deductive logic. Propositions are stated in declarative form and should ideally indicate a cause-effect relationship (e.g., if X occurs, then Y will follow). Note that propositions may be conjectural but MUST be testable, and should be rejected if they are not supported by empirical observations. However, like constructs, propositions are stated at the theoretical level, and they can only be tested by examining the corresponding relationship between measurable variables of those constructs. The empirical formulation of propositions, stated as relationships between variables, is called hypotheses . The distinction between propositions (formulated at the theoretical level) and hypotheses (tested at the empirical level) is depicted in Figure 4.1.

The third building block of a theory is the logic that provides the basis for justifying the propositions as postulated. Logic acts like a “glue” that connects the theoretical constructs and provides meaning and relevance to the relationships between these constructs. Logic also represents the “explanation” that lies at the core of a theory. Without logic, propositions will be ad hoc, arbitrary, and meaningless, and cannot be tied into a cohesive “system of propositions” that is the heart of any theory.

Finally, all theories are constrained by assumptions about values, time, and space, and boundary conditions that govern where the theory can be applied and where it cannot be applied. For example, many economic theories assume that human beings are rational (or boundedly rational) and employ utility maximization based on cost and benefit expectations as a way of understanding human behavior. In contrast, political science theories assume that people are more political than rational, and try to position themselves in their professional or personal environment in a way that maximizes their power and control over others. Given the nature of their underlying assumptions, economic and political theories are not directly comparable, and researchers should not use economic theories if their objective is to understand the power structure or its evolution in an organization. Likewise, theories may have implicit cultural assumptions (e.g., whether they apply to individualistic or collective cultures), temporal assumptions (e.g., whether they apply to early stages or later stages of human behavior), and spatial assumptions (e.g., whether they apply to certain localities but not to others). If a theory is to be properly used or tested, all of its implicit assumptions that form the boundaries of that theory must be properly understood. Unfortunately, theorists rarely state their implicit assumptions clearly, which leads to frequent misapplications of theories to problem situations in research.

Attributes of a Good Theory

Theories are simplified and often partial explanations of complex social reality. As such, there can be good explanations or poor explanations, and consequently, there can be good theories or poor theories. How can we evaluate the “goodness” of a given theory? Different criteria have been proposed by different researchers, the more important of which are listed below:

  • Logical consistency : Are the theoretical constructs, propositions, boundary conditions, and assumptions logically consistent with each other? If some of these “building blocks” of a theory are inconsistent with each other (e.g., a theory assumes rationality, but some constructs represent non-rational concepts), then the theory is a poor theory.
  • Explanatory power : How much does a given theory explain (or predict) reality? Good theories obviously explain the target phenomenon better than rival theories, as often measured by the variance explained (R-squared) in regression equations; a brief computational sketch follows this list.
  • Falsifiability : British philosopher Karl Popper stated in the 1940s that for theories to be valid, they must be falsifiable. Falsifiability ensures that the theory is potentially disprovable if empirical data do not match the theoretical propositions, which allows researchers to test it empirically. In other words, theories cannot be theories unless they are empirically testable. Tautological statements, such as “a day with high temperatures is a hot day,” are not empirically testable because a hot day is defined (and measured) as a day with high temperatures, and hence such statements cannot be viewed as theoretical propositions. Falsifiability requires the presence of rival explanations, ensures that the constructs are adequately measurable, and so forth. However, note that saying that a theory is falsifiable is not the same as saying that a theory should be falsified. If a theory is indeed falsified based on empirical evidence, then it was probably a poor theory to begin with!
  • Parsimony : Parsimony examines how much of a phenomenon is explained with how few variables. The concept is attributed to the 14th-century English logician Father William of Ockham (and hence called “Ockham’s razor” or “Occam’s razor”), which states that among competing explanations that sufficiently explain the observed evidence, the simplest theory (i.e., one that uses the smallest number of variables or makes the fewest assumptions) is the best. Explanation of a complex social phenomenon can always be increased by adding more and more constructs. However, such an approach defeats the purpose of having a theory, since theories are intended to be “simplified” and generalizable explanations of reality. Parsimony relates to the degrees of freedom in a given theory. Parsimonious theories have higher degrees of freedom, which allow them to be more easily generalized to other contexts, settings, and populations.
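
As a brief, hedged sketch (not from the chapter), R-squared can be computed directly as the share of variance in the outcome that a fitted model accounts for; the data below are simulated:

    import numpy as np

    # Simulated data: y depends linearly on x plus noise.
    rng = np.random.default_rng(1)
    x = rng.uniform(0, 10, size=200)
    y = 2.0 * x + 1.0 + rng.normal(scale=2.0, size=200)

    # Fit a simple linear regression and compute R-squared = 1 - SS_residual / SS_total.
    slope, intercept = np.polyfit(x, y, deg=1)
    y_hat = slope * x + intercept
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r_squared = 1.0 - ss_res / ss_tot
    print(f"R-squared = {r_squared:.2f}")  # fraction of variance "explained" by the model

On this criterion, a rival theory of the same outcome would be judged by whether its corresponding model attains a higher value on the same data.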

Approaches to Theorizing

How do researchers build theories? Steinfeld and Fulk (1990) [2] recommend four such approaches. The first approach is to build theories inductively based on observed patterns of events or behaviors. Such an approach is often called “grounded theory building”, because the theory is grounded in empirical observations. This technique is heavily dependent on the observational and interpretive abilities of the researcher, and the resulting theory may be subjective and non-confirmable. Furthermore, observing certain patterns of events will not necessarily make a theory, unless the researcher is able to provide consistent explanations for the observed patterns. We will discuss the grounded theory approach in a later chapter on qualitative research.

The second approach to theory building is to conduct a bottom-up conceptual analysis to identify different sets of predictors relevant to the phenomenon of interest using a predefined framework. One such framework may be a simple input-process-output framework, where the researcher may look for different categories of inputs, such as individual, organizational, and/or technological factors potentially related to the phenomenon of interest (the output), and describe the underlying processes that link these factors to the target phenomenon. This is also an inductive approach that relies heavily on the inductive abilities of the researcher, and interpretation may be biased by researcher’s prior knowledge of the phenomenon being studied.

The third approach to theorizing is to extend or modify existing theories to explain a new context, such as by extending theories of individual learning to explain organizational learning. While making such an extension, certain concepts, propositions, and/or boundary conditions of the old theory may be retained and others modified to fit the new context. This deductive approach leverages the rich inventory of social science theories developed by prior theoreticians, and is an efficient way of building new theories by building on existing ones.

The fourth approach is to apply existing theories in entirely new contexts by drawing upon the structural similarities between the two contexts. This approach relies on reasoning by analogy, and is probably the most creative way of theorizing using a deductive approach. For instance, Markus (1987) [3] used analogic similarities between a nuclear explosion and uncontrolled growth of networks or network-based businesses to propose a critical mass theory of network growth. Just as a nuclear explosion requires a critical mass of radioactive material to sustain a nuclear explosion, Markus suggested that a network requires a critical mass of users to sustain its growth, and without such critical mass, users may leave the network, causing an eventual demise of the network.

Examples of Social Science Theories

In this section, we present brief overviews of a few illustrative theories from different social science disciplines. These theories explain different types of social behaviors, using a set of constructs, propositions, boundary conditions, assumptions, and underlying logic. Note that the following represents just a simplistic introduction to these theories; readers are advised to consult the original sources of these theories for more details and insights on each theory.

Agency Theory. Agency theory (also called principal-agent theory), a classic theory in the organizational economics literature, was originally proposed by Ross (1973) [4] to explain two-party relationships (such as those between an employer and its employees, between organizational executives and shareholders, and between buyers and sellers) whose goals are not congruent with each other. The goal of agency theory is to specify optimal contracts and the conditions under which such contracts may help minimize the effect of goal incongruence. The core assumptions of this theory are that human beings are self-interested individuals, boundedly rational, and risk-averse, and the theory can be applied at the individual or organizational level.

The two parties in this theory are the principal and the agent; the principal employs the agent to perform certain tasks on its behalf. While the principal’s goal is quick and effective completion of the assigned task, the agent’s goal may be working at its own pace, avoiding risks, and seeking self-interest (such as personal pay) over corporate interests. Hence, the goal incongruence. Compounding the nature of the problem may be information asymmetry problems caused by the principal’s inability to adequately observe the agent’s behavior or accurately evaluate the agent’s skill sets. Such asymmetry may lead to agency problems where the agent may not put forth the effort needed to get the task done (the moral hazard problem) or may misrepresent its expertise or skills to get the job but not perform as expected (the adverse selection problem). Typical contracts that are behavior-based, such as a monthly salary, cannot overcome these problems. Hence, agency theory recommends using outcome-based contracts, such as commissions or a fee payable upon task completion, or mixed contracts that combine behavior-based and outcome-based incentives. Employee stock option plans are an example of an outcome-based contract, while employee pay is a behavior-based contract. Agency theory also recommends tools that principals may employ to improve the efficacy of behavior-based contracts, such as investing in monitoring mechanisms (such as hiring supervisors) to counter the information asymmetry caused by moral hazard, designing renewable contracts contingent on the agent’s performance (performance assessment makes the contract partially outcome-based), or by improving the structure of the assigned task to make it more programmable and therefore more observable.

Theory of Planned Behavior. Postulated by Ajzen (1991) [5] , the theory of planned behavior (TPB) is a generalized theory of human behavior in the social psychology literature that can be used to study a wide range of individual behaviors. It presumes that individual behavior represents conscious reasoned choice, and is shaped by cognitive thinking and social pressures. The theory postulates that behaviors are based on one’s intention regarding that behavior, which in turn is a function of the person’s attitude toward the behavior, subjective norm regarding that behavior, and perception of control over that behavior (see Figure 4.2). Attitude is defined as the individual’s overall positive or negative feelings about performing the behavior in question, which may be assessed as a summation of one’s beliefs regarding the different consequences of that behavior, weighted by the desirability of those consequences.

Subjective norm refers to one’s perception of whether people important to that person expect the person to perform the intended behavior, and is represented as a weighted combination of the expected norms of different referent groups such as friends, colleagues, or supervisors at work. Behavioral control is one’s perception of internal or external controls constraining the behavior in question. Internal controls may include the person’s ability to perform the intended behavior (self-efficacy), while external control refers to the availability of external resources needed to perform that behavior (facilitating conditions). TPB also suggests that sometimes people may intend to perform a given behavior but lack the resources needed to do so, and therefore posits that behavioral control can have a direct effect on behavior, in addition to the indirect effect mediated by intention.
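
A common expectancy-value formalization of these constructs (standard in the TPB literature, though not written out in this chapter; the weights w are estimated empirically) is:

    A   = \sum_i b_i e_i           % attitude: belief b_i about consequence i, weighted by its evaluation e_i
    SN  = \sum_j n_j m_j           % subjective norm: normative belief n_j of referent j, weighted by motivation to comply m_j
    PBC = \sum_k c_k p_k           % perceived behavioral control: control belief c_k weighted by its perceived power p_k
    BI  = w_1 A + w_2 SN + w_3 PBC % behavioral intention as a weighted function of the three antecedents
    B   \approx f(BI, PBC)         % behavior: driven by intention, plus a direct effect of PBC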

TPB is an extension of an earlier theory called the theory of reasoned action, which included attitude and subjective norm as key drivers of intention, but not behavioral control. The latter construct was added by Ajzen in TPB to account for circumstances when people may have incomplete control over their own behaviors (such as not having high-speed Internet access for web surfing).

Flowchart of the theory of planned behavior: beliefs about consequences lead to attitude, normative beliefs lead to subjective norm, and control beliefs lead to perceived behavioral control; attitude, subjective norm, and behavioral control together shape intention, which leads to behavior.

Figure 4.2. Theory of planned behavior

Innovation diffusion theory. Innovation diffusion theory (IDT) is a seminal theory in the communications literature that explains how innovations are adopted within a population of potential adopters. The concept was first studied by French sociologist Gabriel Tarde, but the theory was developed by Everett Rogers in 1962 based on observations of 508 diffusion studies. The four key elements in this theory are: innovation, communication channels, time, and social system. Innovations may include new technologies, new practices, or new ideas, and adopters may be individuals or organizations. At the macro (population) level, IDT views innovation diffusion as a process of communication where people in a social system learn about a new innovation and its potential benefits through communication channels (such as mass media or prior adopters) and are persuaded to adopt it. Diffusion is a temporal process; the diffusion process starts off slow among a few early adopters, then picks up speed as the innovation is adopted by the mainstream population, and finally slows down as the adopter population reaches saturation. The cumulative adoption pattern is therefore an S-shaped curve, as shown in Figure 4.3, and the adopter distribution represents a normal distribution. Adopters are not all identical, and they can be classified into innovators, early adopters, early majority, late majority, and laggards based on the time of their adoption. The rate of diffusion also depends on characteristics of the social system such as the presence of opinion leaders (experts whose opinions are valued by others) and change agents (people who influence others’ behaviors).

At the micro (adopter) level, Rogers (1995) [6] suggests that innovation adoption is a process consisting of five stages: (1) knowledge: when adopters first learn about an innovation from mass-media or interpersonal channels, (2) persuasion: when they are persuaded by prior adopters to try the innovation, (3) decision: their decision to accept or reject the innovation, (4) implementation: their initial utilization of the innovation, and (5) confirmation: their decision to continue using it to its fullest potential (see Figure 4.4). Five innovation characteristics are presumed to shape adopters’ innovation adoption decisions: (1) relative advantage: the expected benefits of an innovation relative to prior innovations, (2) compatibility: the extent to which the innovation fits with the adopter’s work habits, beliefs, and values, (3) complexity: the extent to which the innovation is difficult to learn and use, (4) trialability: the extent to which the innovation can be tested on a trial basis, and (5) observability: the extent to which the results of using the innovation can be clearly observed. The last two characteristics have since been dropped from many innovation studies. Complexity is negatively correlated to innovation adoption, while the other four factors are positively correlated. Innovation adoption also depends on personal factors such as the adopter’s risk-taking propensity, education level, cosmopolitanism, and communication influence. Early adopters are venturesome, well educated, and rely more on mass media for information about the innovation, while later adopters rely more on interpersonal sources (such as friends and family) as their primary source of information. IDT has been criticized for having a “pro-innovation bias,” that is, for presuming that all innovations are beneficial and will be eventually diffused across the entire population, and because it does not allow for inefficient innovations such as fads or fashions to die off quickly without being adopted by the entire population or being replaced by better innovations.

S-shaped cumulative diffusion curve shown against the traditional bell-shaped adopter distribution, with 2.5% innovators, 13.5% early adopters, 34% early majority, 34% late majority, and 16% laggards.

Figure 4.3. S-shaped diffusion curve

Innovation adoption process: knowledge, then persuasion, then decision, then implementation, then confirmation.

Figure 4.4. Innovation adoption process.
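The S-shaped cumulative adoption pattern in Figure 4.3 can be reproduced with a simple logistic curve, and the adopter categories follow from cutoffs on the roughly normal distribution of adoption times. The growth rate, midpoint, and time horizon in this sketch are arbitrary illustration values; only the category percentages restate Rogers' standard figures.

# Sketch of the S-shaped diffusion curve using a logistic function, and the
# classic adopter categories derived from a normal adoption-time distribution.
# Growth rate, midpoint, and horizon are arbitrary illustration values.
import math

def cumulative_adoption(t, midpoint=10, rate=0.6):
    """Fraction of the population that has adopted by time t (logistic S-curve)."""
    return 1 / (1 + math.exp(-rate * (t - midpoint)))

for t in range(0, 21, 4):
    share = cumulative_adoption(t)
    bar = "#" * int(40 * share)      # crude text plot of the S-curve
    print(f"t={t:2d} {share:5.2f} {bar}")

# Rogers' adopter categories are defined by cutoffs on the (roughly normal)
# distribution of adoption times: mean - 2 sd, mean - 1 sd, mean, mean + 1 sd.
categories = {"innovators": 0.025, "early adopters": 0.135, "early majority": 0.34,
              "late majority": 0.34, "laggards": 0.16}
print(categories)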

Elaboration Likelihood Model. Developed by Petty and Cacioppo (1986) [7], the elaboration likelihood model (ELM) is a dual-process theory of attitude formation or change in the psychology literature. It explains how individuals can be influenced to change their attitude toward a certain object, event, or behavior and the relative efficacy of such change strategies. The ELM posits that one’s attitude may be shaped by two “routes” of influence, the central route and the peripheral route, which differ in the amount of thoughtful information processing or “elaboration” required of people (see Figure 4.5). The central route requires a person to think about issue-related arguments in an informational message and carefully scrutinize the merits and relevance of those arguments, before forming an informed judgment about the target object. In the peripheral route, subjects rely on external “cues” such as number of prior users, endorsements from experts, or likeability of the endorser, rather than on the quality of arguments, in framing their attitude towards the target object. The latter route is less cognitively demanding, and the routes of attitude change are typically operationalized in the ELM using the argument quality and peripheral cues constructs respectively.

Argument quality (central route), motivation and ability (elaboration likelihood) and source credibility (peripheral route) all lead to attitude change

Figure 4.5. Elaboration likelihood model

Whether people will be influenced by the central or peripheral routes depends upon their ability and motivation to elaborate the central merits of an argument. This combination of ability and motivation to elaborate is called elaboration likelihood. People in a state of high elaboration likelihood (high ability and high motivation) are more likely to thoughtfully process the information presented and are therefore more influenced by argument quality, while those in the low elaboration likelihood state are more motivated by peripheral cues. Elaboration likelihood is a situational characteristic and not a personal trait. For instance, a doctor may employ the central route for diagnosing and treating a medical ailment (by virtue of his or her expertise in the subject), but may rely on peripheral cues from auto mechanics to understand the problems with his or her car. As such, the theory has widespread implications about how to enact attitude change toward new products or ideas and even social change.
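One common way to summarize the ELM's moderation logic is to let elaboration likelihood act as a weight on the two routes. The linear form and the scores below are a deliberate simplification for illustration, not Petty and Cacioppo's formal model.

# Illustrative simplification of the ELM's moderation logic: elaboration
# likelihood (0..1) weights how much attitude change comes from argument
# quality (central route) versus peripheral cues (peripheral route).
# The linear form and scores are assumptions for illustration only.

def attitude_change(argument_quality, peripheral_cues, elaboration_likelihood):
    w = elaboration_likelihood          # high ability + high motivation -> near 1
    return w * argument_quality + (1 - w) * peripheral_cues

strong_args, weak_args = 0.9, 0.2
flashy_cues = 0.8

print("expert, motivated reader :", attitude_change(strong_args, flashy_cues, 0.9))
print("distracted casual reader :", attitude_change(strong_args, flashy_cues, 0.1))
print("weak args, casual reader :", attitude_change(weak_args, flashy_cues, 0.1))

The same strong arguments move the motivated, able reader far more than the distracted one, for whom the peripheral cue dominates.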

General Deterrence Theory. Two utilitarian philosophers of the eighteenth century, Cesare Beccaria and Jeremy Bentham, formulated General Deterrence Theory (GDT) as both an explanation of crime and a method for reducing it. GDT examines why certain individuals engage in deviant, anti-social, or criminal behaviors. This theory holds that people are fundamentally rational (for both conforming and deviant behaviors), and that they freely choose deviant behaviors based on a rational cost-benefit calculation. Because people naturally choose utility-maximizing behaviors, deviant choices that engender personal gain or pleasure can be controlled by increasing the costs of such behaviors in the form of punishments (countermeasures) as well as increasing the probability of apprehension. Swiftness, severity, and certainty of punishments are the key constructs in GDT.

While classical positivist research in criminology seeks generalized causes of criminal behaviors, such as poverty, lack of education, and psychological conditions, and recommends strategies to rehabilitate criminals, such as providing them job training and medical treatment, GDT focuses on the criminal decision-making process and the situational factors that influence that process. Hence, a criminal’s personal situation (such as his personal values, his affluence, and his need for money) and the environmental context (such as how well protected the target is, how efficient the local police are, and how likely criminals are to be apprehended) play key roles in this decision-making process. The focus of GDT is not how to rehabilitate criminals and avert future criminal behaviors, but how to make criminal activities less attractive and therefore prevent crimes. To that end, GDT holds that crimes can be prevented by “target hardening,” such as installing deadbolts and building self-defense skills; by legal deterrents, such as eliminating parole for certain crimes, “three strikes” laws (mandatory incarceration for three offenses, even if the offenses are minor and not worth imprisonment), and the death penalty; by increasing the chances of apprehension through neighborhood watch programs, special task forces on drug- or gang-related crimes, and increased police patrols; and by educational programs such as highly visible notices (“Trespassers will be prosecuted”). This theory has interesting implications not only for traditional crimes, but also for contemporary white-collar crimes such as insider trading, software piracy, and illegal sharing of music.
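GDT's rational cost-benefit calculation reduces to a simple expected-value comparison. The gains, probabilities of apprehension, and punishment costs below are hypothetical numbers chosen only to show how raising certainty or severity can flip the decision.

# Hypothetical expected-value sketch of general deterrence theory: a rational
# offender compares the expected gain from a deviant act with the expected
# cost, which rises with the certainty and severity of punishment.

def expected_net_gain(gain, p_apprehension, punishment_cost):
    """Expected payoff of the deviant act; deterrence works when this is <= 0."""
    return gain - p_apprehension * punishment_cost

# Raising certainty (p_apprehension) or severity (punishment_cost) flips the decision.
scenarios = [
    ("weak enforcement", 100, 0.05, 500),
    ("more patrols    ", 100, 0.40, 500),
    ("harsher penalty ", 100, 0.05, 5000),
]
for label, gain, p, cost in scenarios:
    net = expected_net_gain(gain, p, cost)
    print(label, "expected net gain:", net, "-> deterred" if net <= 0 else "-> not deterred")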

[1] Bacharach, S. B. (1989). “Organizational Theories: Some Criteria for Evaluation,” Academy of Management Review (14:4), 496-515.

[2] Steinfield, C.W. and Fulk, J. (1990). “The Theory Imperative,” in Organizations and Communications Technology, J. Fulk and C. W. Steinfield (eds.), Newbury Park, CA: Sage Publications.

[3] Markus, M. L. (1987). “Toward a ‘Critical Mass’ Theory of Interactive Media: Universal Access, Interdependence, and Diffusion,” Communication Research (14:5), 491-511.

[4] Ross, S. A. (1973). “The Economic Theory of Agency: The Principal’s Problem,” American Economic Review (63:2), 134-139.

[5] Ajzen, I. (1991). “The Theory of Planned Behavior,” Organizational Behavior and Human Decision Processes (50), 179-211.

[6] Rogers, E. (1962). Diffusion of Innovations. New York: The Free Press. Other editions 1983, 1996, 2005.

[7] Petty, R. E., and Cacioppo, J. T. (1986). Communication and Persuasion: Central and Peripheral Routes to Attitude Change. New York: Springer-Verlag.

  • Social Science Research: Principles, Methods, and Practices. Authored by: Anol Bhattacherjee. Provided by: University of South Florida. Located at: http://scholarcommons.usf.edu/oa_textbooks/3/. License: CC BY-NC-SA: Attribution-NonCommercial-ShareAlike

Fact-Checking Essentials

Why Fact-Check?

Fact-checking ensures that a story is as accurate and clear as possible before it is published or broadcast. It improves the accuracy and credibility of a publication/program and helps to eliminate errors. As a fact-checker, you must be detail oriented and committed to making sure every fact in the story is accurate.

What Is a Fact?

A fact is a statement that can be verified. A statement of opinion is not a fact. As a fact-checker, you are working with content that is written, not researching new material. Therefore, you must read the document and identify and extract all content in need of fact checking.

How Do You Fact-Check?

The first step is to read through the entire document. Next, read the document again, this time highlighting, underlining, or marking all facts that can be verified, including phrasing and word choices such as “always” and “exactly” (a small script sketched after the list below can help flag such items). The following are common places to start when fact-checking:

  • Always ask yourself, “Who would know this?” to find the best resource.
  • Always ask, “Does this make sense?”
  • Check assertions about scientific theories and evidence. Sometimes, the easiest way to do this will be to contact scientists in the field (see below); other times, the information will be well-established in the literature.
  • Confirm statistics.
  • Check all proper names, titles, product names, place names, locations, etc.
  • Check terms used. Are they commonplace and agreed upon in the scientific community? Do they need clarification?
  • Check declarative statements, for example, “…this is a big deal,” “the area is huge,” “always,” “exactly,” etc. The reason it is a “big deal” (or how “huge” the area is) should be explained in the text. If it isn’t, find out why: Is it a big deal because of money, time, or comparison with something else?
  • Be particularly cautious of facts stated absolutely.
  • Verify any numbers used in the article.
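As a mechanical aid to the marking-up step described above, a short script can flag numbers, absolute words, and capitalized terms for manual verification. The word list and patterns here are illustrative assumptions rather than any standard, and the flags are prompts for a human fact-checker, not verdicts.

# Sketch of a helper that flags checkable items for a human fact-checker:
# numbers, absolute words like "always" and "exactly", and capitalized terms.
# The patterns and word list are illustrative assumptions, not a standard.
import re

ABSOLUTES = {"always", "never", "exactly", "all", "none", "every", "huge"}

def flag_checkable(text):
    """Return (kind, item) pairs a fact-checker should verify by hand."""
    flags = [("number", m.group()) for m in re.finditer(r"\d+(?:[.,]\d+)*%?", text)]
    words = re.findall(r"[A-Za-z][A-Za-z']*", text)
    flags += [("absolute", w) for w in words if w.lower() in ABSOLUTES]
    # Capitalized words after the first are often names, places, or product names.
    flags += [("proper noun?", w) for w in words[1:] if w[0].isupper()]
    return flags

sample = "The LHC magnet, weighing exactly 27 tonnes, was lowered on 28 November."
for kind, item in flag_checkable(sample):
    print(f"{kind:13s} {item}")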

When you begin the process of verifying the facts in the document, you must choose quality resources. Working with primary sources when available (including contacting a source via telephone, reading original publications, accessing government Web sites) and quality secondary sources when necessary will ensure the legitimacy, authenticity, and validity of the information at hand. Statements of common sense or common knowledge ("the sky is blue") do not need to be verified. A source's personal opinions or experiences, or the personal opinions of the writer, do not need verification. If a fact has you stumped, move on to the next one and come back. You may not be able to confirm all facts in an article (if you can't, make sure you indicate that a fact has not been confirmed).

For additional information about working with statistics and scientific journals, refer to the "How to Deal with Statistics" and "Science Journal Resource List" tip sheets on the Science Literacy Project Web site.

Using the Internet to Fact-Check

Many facts can be easily verified, for example, a URL (does it work?) or the spelling of a famous name or place. For facts that need additional verification, the Internet can be a powerful tool, but it must be used with caution. In this day of electronic information, many studies and much primary research are available online. Whenever possible, find the primary source (for example, if the document cites a study, try to find the study itself). Reputable Web sites (.gov or government-run Web sites, .edu or university Web sites, publisher Web sites) are more reliable than Web sites run by nonprofessionals (Wikipedia, hobbyist Web sites, outdated or unmaintained Web sites). A note about Wikipedia and Google: Wikipedia should NEVER be your only source to confirm a fact. A Google search is a good place to start, but you must look at the results with a careful eye and select the best site for verifying the fact in question.

For example, what Web site would you use to check the following statement: "The first replacement magnet for Sector 3-4 of the Large Hadron Collider (LHC) underwent its final preparations before being lowered into the tunnel on 28 November." If you Google "LHC," the first result is a Wikipedia page ( http://en.wikipedia.org/wiki/Large_Hadron_Collider ). However, a better place to start is the LHC home page ( http://lhc.web.cern.ch/lhc/ ), which is run by CERN, the European Organization for Nuclear Research. CERN issued a press release on December 1, 2008 about Sector 3-4. As of December 29, 2008, the Wikipedia LHC page did NOT include this updated information about Sector 3-4, making it a dead end for confirming the fact you are checking.

Government and university Web sites also tend to be updated on a regular basis, whereas hobbyist sites are often not updated at all. The International Astronomical Union (IAU) General Assembly introduced the category of dwarf planets and recategorized Pluto in August 2006; however, many Web pages continue to list Pluto as a planet. IAU.org and NASA.gov maintain up-to-date lists of Solar System objects. Other sites have not been updated to reflect the new definition of planet [for example, as of January 22, 2009, The Solar System ( http://www.solarviews.com/eng/solarsys.htm ) lists Pluto as the ninth planet].

In 2004, Google launched Google Scholar. Google Scholar is "a free service that helps users search scholarly literature such as peer-reviewed papers, theses, books, preprints, abstracts and technical reports." This can be a powerful tool for fact-finding, but should be used with the same caution as any other search engine. Not every search result is from a peer-reviewed, reputable source, so consider the content carefully. Google is one of many search engines. Science-specialized search engines can also be helpful (Google "Science Search Engines" or visit the Yahoo Science Directory at http://dir.yahoo.com/Science/ ).
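Domain type is one of the quick signals mentioned above (.gov and .edu sites versus hobbyist pages or Wikipedia alone), and a few lines of code can triage a list of candidate links before the real evaluation begins. The tier labels below are rough assumptions based on that guidance; a check like this supports, but never replaces, human judgment.

# Rough sketch: sort fact-checking leads by domain type, following the rule of
# thumb above that .gov and .edu sites tend to be more reliable starting points
# and that Wikipedia should never be the only source.
# The tier assignments are assumptions for illustration, not a fixed standard.
from urllib.parse import urlparse

def domain_tier(url):
    host = urlparse(url).hostname or ""
    if host.endswith(".gov") or host.endswith(".edu"):
        return "institutional source"
    if host.endswith("wikipedia.org"):
        return "never the only source"
    return "evaluate case by case"

for url in ["https://lhc.web.cern.ch/lhc/",
            "https://en.wikipedia.org/wiki/Large_Hadron_Collider",
            "https://www.nasa.gov/"]:
    print(domain_tier(url).ljust(25), url)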

Using the Internet at a Glance:

  • Whenever possible, find the primary source of the fact.
  • Use reputable Web sites rather than Web sites run by nonprofessionals.
  • Do not use Wikipedia as your only source.
  • When using Google, look at the results with a careful eye and select the best site for verifying the fact in question.
  • Always cite the date you accessed a Web page.
  • Useful Web tools: Google Scholar, SCIRUS, GoPubMed

Using E-mail and the Telephone to Fact-Check

It is often necessary to contact a source directly to verify facts in a document. Whether contacting a source via email or telephone, always remember that you are representing all those involved, including authors, editors, and publishers. Be polite and professional. When contacting a source via email, it is important to NEVER send the actual script. Sources may become self-conscious of how they are portrayed in the article, so it is important to check facts without revealing more context than necessary. If possible, paraphrase the fact(s) into simple, concise statements that are easy to verify (bulleted lists work well). Your email should include deadlines for the information and instructions for indicating corrections in addition to the requests for verification.

As with email, when contacting a source via telephone, it is important not to read directly from a script. It is often helpful to have a separate list of statements and facts ready when you call the source, rather than working from the main document. Always introduce yourself and let the source know why you are calling (to verify facts for a publication, radio program, etc.). If a source becomes upset about how a direct quotation seems to portray the speaker, be calm and courteous, but do not offer to change the article. Focus on the content (the fact you are trying to verify) of the statement, rather than the context (how that fact fits into the script as a whole).

Although you are only verifying facts, not doing original research, it might be helpful to refer to the "How to Talk to a Scientist" tip sheet on the Science Literacy Project Web site.

Using E-mail and the Telephone at a Glance:

  • Be polite and professional; you are representing all those involved, not just you.
  • Never send the actual script to a source.
  • If possible, paraphrase the fact(s) into simple, concise statements that are easy to verify.
  • Include the following in your email: Requests for verification, deadlines, and instructions for corrections.
  • If a source becomes upset about how a direct quotation seems to portray the speaker, be calm and courteous, but do not offer to change the article. Focus on the content (the fact you are trying to verify) of the statement, rather than the context (how that fact fits into the script as a whole).

Additional Fact-Check Resources

  • Libraries/bookstores
  • Subject databases (e.g., LexisNexis, available at many libraries)
  • Dictionaries/encyclopedias
  • Review articles/scholarly publications/press releases
  • Overviews and information provided by the writer or producer
  • Science Literacy Project Tip Sheets
  • Science Literacy Project Science Journalism Resources

Implementing Changes

When verifications and corrections come back, it is up to the fact-checker to confirm if a fact has been verified. Indicate all facts that have been verified (you will need to check with the author to see how they want you to indicate this on the document). Always include detailed notes about how and when you verified a fact (did you use a Web search? Telephone conversation with researcher? When did you access the government Web site?). So what do you do if you find a discrepancy? When you find an inaccuracy, cross it out in the text and indicate what is accurate. Don’t just find a problem; solve it (for example, don't just indicate that a number is wrong, find the correct number). A fact-checker's comments should be few and concise. Remember, you are not adding additional information to the document; you are verifying what is already there.

Fact-checking is an essential part of science journalism, so it is important to stay organized and meet your deadlines. Your careful attention to detail and verification of facts will help get accurate information about sometimes complicated topics out to the public!


2.3: Fact or Opinion


Teaching & Learning and University Libraries

Silhouette of a head with a question mark in the center

Thinking about the reason an author produced a source can be helpful to you because that reason was what dictated the kind of information he/she chose to include. Depending on that purpose, the author may have chosen to include factual, analytical, and objective information. Or, instead, it may have suited his/her purpose to include information that was subjective and therefore less factual and analytical. The author’s reason for producing the source also determined whether he or she included more than one perspective or just his/her own.

Authors typically want to do at least one of the following:

  • Inform and educate
  • Sell services or products

Combined Purposes

Sometimes authors have a combination of purposes, as when a marketer decides he can sell more smart phones with an informative sales video that also entertains us. The same is true when a singer writes and performs a song that entertains us but that she intends to make available for sale. Other examples of authors having multiple purposes occur in most scholarly writing.

In those cases, authors certainly want to inform and educate their audiences. But they also want to persuade their audiences that what they are reporting and/or postulating is a true description of a situation, event, or phenomenon or a valid argument that their audience must take a particular action. In this blend of scholarly authors’ purposes, the intent to educate and inform is more useful to you as a researcher than the intent to persuade.

Why Intent Matters

Authors’ intent usually matters in how useful their information can be to your research project, depending on which information need you are trying to meet. For instance, when you’re looking for sources that will help you to craft a response to your research question or to locate evidence to support the claim that you will share with your audience, you will want the author’s main purpose to have been to inform or educate his/her audience. That’s because, with that intent, he/she is likely to have used:

  • Facts where possible.
  • Multiple perspectives instead of just his/her own.
  • Little subjective information.
  • Seemingly unbiased, objective language that cites where he/she got the information.

The reason you want that kind of resource when trying to respond to your research question is that all of those characteristics will lend credibility to the argument you are making with your project. Both you and your audience will simply find it easier to believe—will have more confidence in the argument being made—when you include those types of sources.

Sources whose authors intend only to persuade others won’t meet your information need for a credible response to your research question or as evidence with which to convince your audience. That’s because they don’t always confine themselves to facts. Instead, they tell us their opinions without backing them up with evidence. If you used those sources, your readers would notice and question the credibility of your argument.

Fact vs. Opinion vs. Objective vs. Subjective

Do you need to brush up on the differences between fact, objective information, subjective information, and opinion?

Fact – Facts are useful to inform or make an argument.

  • The United States was established in 1776.
  • The pH levels in acids are lower than pH levels in alkalines.
  • Beethoven had a reputation as a virtuoso pianist.

Opinion – Opinions are useful to persuade, but careful readers and listeners will notice and demand evidence to back them up.

  • That was a good movie.
  • Strawberries taste better than blueberries.
  • Ryan Gosling is the sexiest actor alive.
  • The death penalty is wrong.
  • Beethoven’s reputation as a virtuoso pianist is overrated.

Objective – Objective information reflects a research finding or multiple perspectives that are not biased.

  • “Several studies show that an active lifestyle reduces the risk of heart disease and diabetes.”
  • “Studies from the Brown University Medical School show that twenty-somethings eat 25 percent more fast-food meals at this age than they did as teenagers.”

Subjective – Subjective information presents one person or organization’s perspective or interpretation. Subjective information can be meant to distort, or it can reflect educated and informed thinking. All opinions are subjective, but some are backed up with facts more than others.

  • “The simple truth is this: As human beings, we were meant to move.”
  • “In their thirties, women should stock up on calcium to ensure strong, dense bones and to ward off osteoporosis later in life.”*

*In this quote, it’s mostly the “should” that makes it subjective. The objective version of the last quote would read: “Studies have shown that women who begin taking calcium in their 30s show stronger bone density and fewer repercussions of osteoporosis than women who did not take calcium at all.” But perhaps there are other data showing complications from taking calcium. That’s why drawing the conclusion that requires a “should” makes the statement subjective.

How well can you tell factual from opinion statements? (Pew Research Center)

Quiz: Fact, Opinion, Objective, or Subjective?

Complete the quiz in Canvas .

Science and the scientific method: Definitions and examples

Here's a look at the foundation of doing science — the scientific method.


The scientific method


Science is a systematic and logical approach to discovering how things in the universe work. It is also the body of knowledge accumulated through the discoveries about all the things in the universe. 

The word "science" is derived from the Latin word "scientia," which means knowledge based on demonstrable and reproducible data, according to the Merriam-Webster dictionary . True to this definition, science aims for measurable results through testing and analysis, a process known as the scientific method. Science is based on fact, not opinion or preferences. The process of science is designed to challenge ideas through research. One important aspect of the scientific process is that it focuses only on the natural world, according to the University of California, Berkeley . Anything that is considered supernatural, or beyond physical reality, does not fit into the definition of science.

When conducting research, scientists use the scientific method to collect measurable, empirical evidence in an experiment related to a hypothesis (often in the form of an if/then statement) that is designed to support or contradict a scientific theory .

"As a field biologist, my favorite part of the scientific method is being in the field collecting the data," Jaime Tanner, a professor of biology at Marlboro College, told Live Science. "But what really makes that fun is knowing that you are trying to answer an interesting question. So the first step in identifying questions and generating possible answers (hypotheses) is also very important and is a creative process. Then once you collect the data you analyze it to see if your hypothesis is supported or not."

Here's an illustration showing the steps in the scientific method.

The steps of the scientific method go something like this, according to Highline College :

  • Make an observation or observations.
  • Form a hypothesis — a tentative description of what's been observed, and make predictions based on that hypothesis.
  • Test the hypothesis and predictions in an experiment that can be reproduced.
  • Analyze the data and draw conclusions; accept or reject the hypothesis or modify the hypothesis if necessary.
  • Reproduce the experiment until there are no discrepancies between observations and theory. "Replication of methods and results is my favorite step in the scientific method," Moshe Pritsker, a former post-doctoral researcher at Harvard Medical School and CEO of JoVE, told Live Science. "The reproducibility of published experiments is the foundation of science. No reproducibility — no science."

Some key underpinnings of the scientific method (a toy simulated example follows this list):

  • The hypothesis must be testable and falsifiable, according to North Carolina State University . Falsifiable means that there must be a possible negative answer to the hypothesis.
  • Research must involve deductive reasoning and inductive reasoning . Deductive reasoning is the process of using true premises to reach a logical true conclusion while inductive reasoning uses observations to infer an explanation for those observations.
  • An experiment should include an independent variable (which the experimenter changes) and a dependent variable (which is measured and changes in response to the independent variable), according to the University of California, Santa Barbara.
  • An experiment should include an experimental group and a control group. The control group is what the experimental group is compared against, according to Britannica .
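A toy simulated experiment can make the terms above concrete: the independent variable is what the experimenter changes, the dependent variable is what is measured, and the control group provides the comparison. All numbers are randomly generated for illustration and do not come from any real study.

# Toy simulated experiment illustrating the terms above: the independent
# variable is whether a plant gets fertilizer (changed by the experimenter),
# the dependent variable is measured growth, and the control group receives
# no fertilizer. All numbers are randomly generated for illustration.
import random

random.seed(42)

def grow(fertilized):
    base = random.gauss(10, 2)                 # cm of growth without treatment
    return base + (3 if fertilized else 0)     # assumed treatment effect

control      = [grow(False) for _ in range(30)]   # control group: no fertilizer
experimental = [grow(True) for _ in range(30)]    # experimental group: fertilizer applied

mean = lambda xs: sum(xs) / len(xs)
print("control mean growth     :", round(mean(control), 2))
print("experimental mean growth:", round(mean(experimental), 2))
print("difference (dependent variable):", round(mean(experimental) - mean(control), 2))
# The hypothesis "fertilizer increases growth" is falsifiable: a difference
# near zero (or negative) across replications would count against it.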

The process of generating and testing a hypothesis forms the backbone of the scientific method. When an idea has been confirmed over many experiments, it can be called a scientific theory. While a theory provides an explanation for a phenomenon, a scientific law provides a description of a phenomenon, according to The University of Waikato . One example would be the law of conservation of energy, which is the first law of thermodynamics that says that energy can neither be created nor destroyed. 

A law describes an observed phenomenon, but it doesn't explain why the phenomenon exists or what causes it. "In science, laws are a starting place," said Peter Coppinger, an associate professor of biology and biomedical engineering at the Rose-Hulman Institute of Technology. "From there, scientists can then ask the questions, 'Why and how?'"

Laws are generally considered to be without exception, though some laws have been modified over time after further testing found discrepancies. For instance, Newton's laws of motion describe everything we've observed in the macroscopic world, but they break down at the subatomic level.

This does not mean theories are not meaningful. For a hypothesis to become a theory, scientists must conduct rigorous testing, typically across multiple disciplines by separate groups of scientists. Saying something is "just a theory" confuses the scientific definition of "theory" with the layperson's definition. To most people a theory is a hunch. In science, a theory is the framework for observations and facts, Tanner told Live Science.

This Copernican heliocentric solar system, from 1708, shows the orbit of the moon around the Earth, and the orbits of the Earth and planets round the sun, including Jupiter and its moons, all surrounded by the 12 signs of the zodiac.

The earliest evidence of science can be found as far back as records exist. Early tablets contain numerals and information about the solar system , which were derived by using careful observation, prediction and testing of those predictions. Science became decidedly more "scientific" over time, however.

1200s: Robert Grosseteste developed the framework for the proper methods of modern scientific experimentation, according to the Stanford Encyclopedia of Philosophy. His works included the principle that an inquiry must be based on measurable evidence that is confirmed through testing.

1400s: Leonardo da Vinci began his notebooks in pursuit of evidence that the human body is microcosmic. The artist, scientist and mathematician also gathered information about optics and hydrodynamics.

1500s: Nicolaus Copernicus advanced the understanding of the solar system with his discovery of heliocentrism. This is a model in which Earth and the other planets revolve around the sun, which is the center of the solar system.

1600s: Johannes Kepler built upon those observations with his laws of planetary motion. Galileo Galilei improved on a new invention, the telescope, and used it to study the sun and planets. The 1600s also saw advancements in the study of physics as Isaac Newton developed his laws of motion.

1700s: Benjamin Franklin discovered that lightning is electrical. He also contributed to the study of oceanography and meteorology. The understanding of chemistry also evolved during this century as Antoine Lavoisier, dubbed the father of modern chemistry , developed the law of conservation of mass.

1800s: Milestones included Alessandro Volta’s discoveries regarding electrochemical series, which led to the invention of the battery. John Dalton also introduced atomic theory, which stated that all matter is composed of atoms that combine to form molecules. The basis of modern study of genetics advanced as Gregor Mendel unveiled his laws of inheritance. Later in the century, Wilhelm Conrad Röntgen discovered X-rays, while Georg Ohm’s law provided the basis for understanding how to harness electrical charges.

1900s: The discoveries of Albert Einstein, who is best known for his theory of relativity, dominated the beginning of the 20th century. Einstein’s theory of relativity is actually two separate theories. His special theory of relativity, which he outlined in a 1905 paper, “On the Electrodynamics of Moving Bodies,” concluded that time must change according to the speed of a moving object relative to the frame of reference of an observer. His second theory of general relativity, which he published as “The Foundation of the General Theory of Relativity,” advanced the idea that matter causes space to curve.

In 1952, Jonas Salk developed the polio vaccine , which reduced the incidence of polio in the United States by nearly 90%, according to Britannica . The following year, James D. Watson and Francis Crick discovered the structure of DNA , which is a double helix formed by base pairs attached to a sugar-phosphate backbone, according to the National Human Genome Research Institute .

2000s: The 21st century saw the first draft of the human genome completed, leading to a greater understanding of DNA. This advanced the study of genetics, its role in human biology and its use as a predictor of diseases and other disorders, according to the National Human Genome Research Institute .

  • This video from City University of New York delves into the basics of what defines science.
  • Learn about what makes science science in this book excerpt from Washington State University .
  • This resource from the University of Michigan — Flint explains how to design your own scientific study.

Merriam-Webster Dictionary, Scientia. 2022. https://www.merriam-webster.com/dictionary/scientia

University of California, Berkeley, "Understanding Science: An Overview." 2022. https://undsci.berkeley.edu/article/0_0_0/intro_01

Highline College, "Scientific method." July 12, 2015. https://people.highline.edu/iglozman/classes/astronotes/scimeth.htm  

North Carolina State University, "Science Scripts." https://projects.ncsu.edu/project/bio183de/Black/science/science_scripts.html  

University of California, Santa Barbara. "What is an Independent variable?" October 31, 2017. http://scienceline.ucsb.edu/getkey.php?key=6045

Encyclopedia Britannica, "Control group." May 14, 2020. https://www.britannica.com/science/control-group  

The University of Waikato, "Scientific Hypothesis, Theories and Laws." https://sci.waikato.ac.nz/evolution/Theories.shtml  

Stanford Encyclopedia of Philosophy, Robert Grosseteste. May 3, 2019. https://plato.stanford.edu/entries/grosseteste/  

Encyclopedia Britannica, "Jonas Salk." October 21, 2021. https://www.britannica.com/ biography /Jonas-Salk

National Human Genome Research Institute, "Phosphate Backbone." https://www.genome.gov/genetics-glossary/Phosphate-Backbone

National Human Genome Research Institute, "What is the Human Genome Project?" https://www.genome.gov/human-genome-project/What  

‌ Live Science contributor Ashley Hamer updated this article on Jan. 16, 2022.



11.4 Strategies for Gathering Reliable Information

Learning Objectives

  • Distinguish between primary and secondary sources.
  • Identify strategies for locating relevant print and electronic resources efficiently.
  • Identify instances when it is appropriate to use human sources, such as interviews or eyewitness testimony.
  • Identify criteria for evaluating research resources.
  • Understand why many electronic resources are not reliable.

Now that you have planned your research project, you are ready to begin the research. This phase can be both exciting and challenging. As you read this section, you will learn ways to locate sources efficiently, so you have enough time to read the sources, take notes, and think about how to use the information.

Of course, the technological advances of the past few decades—particularly the rise of online media—mean that, as a twenty-first-century student, you have countless sources of information available at your fingertips. But how can you tell whether a source is reliable? This section will discuss strategies for evaluating sources critically so that you can be a media-savvy researcher.

In this section, you will locate and evaluate resources for your paper and begin taking notes. As you read, begin gathering print and electronic resources, identify at least eight to ten sources by the time you finish the chapter, and begin taking notes on your research findings.

Locating Useful Resources

When you chose a paper topic and determined your research questions, you conducted preliminary research to stimulate your thinking. Your research proposal included some general ideas for how to go about your research—for instance, interviewing an expert in the field or analyzing the content of popular magazines. You may even have identified a few potential sources. Now it is time to conduct a more focused, systematic search for informative primary and secondary sources.

Using Primary and Secondary Sources

Writers classify research resources in two categories: primary sources and secondary sources. Primary sources are direct, firsthand sources of information or data. For example, if you were writing a paper about the First Amendment right to freedom of speech, the text of the First Amendment in the Bill of Rights would be a primary source.

Other primary sources include the following:

  • Research articles
  • Literary texts
  • Historical documents such as diaries or letters
  • Autobiographies or other personal accounts

Secondary sources discuss, interpret, analyze, consolidate, or otherwise rework information from primary sources. In researching a paper about the First Amendment, you might read articles about legal cases that involved First Amendment rights, or editorials expressing commentary on the First Amendment. These sources would be considered secondary sources because they are one step removed from the primary source of information.

The following are examples of secondary sources:

  • Magazine articles
  • Biographical books
  • Literary and scientific reviews
  • Television documentaries

Your topic and purpose determine whether you must cite both primary and secondary sources in your paper. Ask yourself which sources are most likely to provide the information that will answer your research questions. If you are writing a research paper about reality television shows, you will need to use some reality shows as a primary source, but secondary sources, such as a reviewer’s critique, are also important. If you are writing about the health effects of nicotine, you will probably want to read the published results of scientific studies, but secondary sources, such as magazine articles discussing the outcome of a recent study, may also be helpful.

Once you have thought about what kinds of sources are most likely to help you answer your research questions, you may begin your search for print and electronic resources. The challenge here is to conduct your search efficiently. Writers use strategies to help them find the sources that are most relevant and reliable while steering clear of sources that will not be useful.

Finding Print Resources

Print resources include a vast array of documents and publications. Regardless of your topic, you will consult some print resources as part of your research. (You will use electronic sources as well, but it is not wise to limit yourself to electronic sources only, because some potentially useful sources may be available only in print form.) Table 11.1 “Library Print Resources” lists different types of print resources available at public and university libraries.

Table 11.1 Library Print Resources

Some of these resources are also widely available in electronic format. In addition to the resources noted in the table, library holdings may include primary texts such as historical documents, letters, and diaries.

Writing at Work

Businesses, government organizations, and nonprofit organizations produce published materials that range from brief advertisements and brochures to lengthy, detailed reports. In many cases, producing these publications requires research. A corporation’s annual report may include research about economic or industry trends. A charitable organization may use information from research in materials sent to potential donors.

Regardless of the industry you work in, you may be asked to assist in developing materials for publication. Often, incorporating research in these documents can make them more effective in informing or persuading readers.

As you gather information, strive for a balance of accessible, easy-to-read sources and more specialized, challenging sources. Relying solely on lightweight books and articles written for a general audience will drastically limit the range of useful, substantial information. On the other hand, restricting oneself to dense, scholarly works could make the process of researching extremely time-consuming and frustrating.

Make a list of five types of print resources you could use to find information about your research topic. Include at least one primary source. Be as specific as possible—if you have a particular resource or type of resource in mind, describe it.

To find print resources efficiently, first identify the major concepts and terms you will use to conduct your search—that is, your keywords . These, along with the research questions you identified in Chapter 11 “Writing from Research: What Will I Learn?” , Section 11.2 “Steps in Developing a Research Proposal” , will help you find sources using any of the following methods:

  • Using the library’s online catalog or card catalog
  • Using periodicals indexes and databases
  • Consulting a reference librarian

You probably already have some keywords in mind based on your preliminary research and writing. Another way to identify useful keywords is to visit the Library of Congress’s website at http://id.loc.gov/authorities . This site allows you to search for a topic and see the related subject headings used by the Library of Congress, including broader terms, narrower terms, and related terms. Other libraries use these terms to classify materials. Knowing the most-used terms will help you speed up your keyword search.

Jorge used the Library of Congress site to identify general terms he could use to find resources about low-carb dieting. His search helped him identify potentially useful keywords and related topics, such as carbohydrates in human nutrition, glycemic index, and carbohydrates—metabolism. These terms helped Jorge refine his search.

Knowing the right keywords can sometimes make all the difference in conducting a successful search. If you have trouble finding sources on a topic, consult a librarian to see whether you need to modify your search terms.

Visit the Library of Congress’s website at http://id.loc.gov/authorities and conduct searches on a few terms related to your topic.

  • Review your search results and identify six to eight additional terms you might use when you conduct your research.
  • Print out the search results or save the results to your research folder on your computer or portable storage device.

Using Periodicals, Indexes, and Databases

Library catalogs can help you locate book-length sources, as well as some types of nonprint holdings, such as CDs, DVDs, and audio books. To locate shorter sources, such as magazine and journal articles, you will need to use a periodical index or an online periodical database . These tools index the articles that appear in newspapers, magazines, and journals. Like catalogs, they provide publication information about an article and often allow users to access a summary or even the full text of the article.

Print indexes may be available in the periodicals section of your library. Increasingly, libraries use online databases that users can access through the library website. A single library may provide access to multiple periodical databases. These can range from general news databases to specialized databases. Table 11.2 “Commonly Used Indexes and Databases” describes some commonly used indexes and databases.

Table 11.2 Commonly Used Indexes and Databases

Reading Popular and Scholarly Periodicals

When you search for periodicals, be sure to distinguish among different types. Mass-market publications, such as newspapers and popular magazines, differ from scholarly publications in their accessibility, audience, and purpose.

Newspapers and magazines are written for a broader audience than scholarly journals. Their content is usually quite accessible and easy to read. Trade magazines that target readers within a particular industry may presume the reader has background knowledge, but these publications are still reader-friendly for a broader audience. Their purpose is to inform and, often, to entertain or persuade readers as well.

Scholarly or academic journals are written for a much smaller and more expert audience. The creators of these publications assume that most of their readers are already familiar with the main topic of the journal. The target audience is also highly educated. Informing is the primary purpose of a scholarly journal. While a journal article may advance an agenda or advocate a position, the content will still be presented in an objective style and formal tone. Entertaining readers with breezy comments and splashy graphics is not a priority.

Because of these differences, scholarly journals are more challenging to read. That doesn’t mean you should avoid them. On the contrary, they can provide in-depth information unavailable elsewhere. Because knowledgeable professionals carefully review the content before publication, scholarly journals are far more reliable than much of the information available in popular media. Seek out academic journals along with other resources. Just be prepared to spend a little more time processing the information.

Periodicals databases are not just for students writing research papers. They also provide a valuable service to workers in various fields. The owner of a small business might use a database such as Business Source Premiere to find articles on management, finance, or trends within a particular industry. Health care professionals might consult databases such as MedLine to research a particular disease or medication. Regardless of what career path you plan to pursue, periodicals databases can be a useful tool for researching specific topics and identifying periodicals that will help you keep up with the latest news in your industry.

Consulting a Reference Librarian

Sifting through library stacks and database search results to find the information you need can be like trying to find a needle in a haystack. If you are not sure how you should begin your search, or if it is yielding too many or too few results, you are not alone. Many students find this process challenging, although it does get easier with experience. One way to learn better search strategies is to consult a reference librarian.

Reference librarians are intimately familiar with the systems libraries use to organize and classify information. They can help you locate a particular book in the library stacks, steer you toward useful reference works, and provide tips on how to use databases and other electronic research tools. Take the time to see what resources you can find on your own, but if you encounter difficulties, ask for help. Many university librarians hold virtual office hours and are available for online chatting.

Visit your library’s website or consult with a reference librarian to determine what periodicals indexes or databases would be useful for your research. Depending on your topic, you may rely on a general news index, a specialized index for a particular subject area, or both. Search the catalog for your topic and related keywords. Print out or bookmark your search results.

  • Identify at least one to two relevant periodicals, indexes, or databases.
  • Conduct a keyword search to find potentially relevant articles on your topic.
  • Save your search results. If the index you are using provides article summaries, read these to determine how useful the articles are likely to be.
  • Identify at least three to five articles to review more closely. If the full article is available online, set aside time to read it. If not, plan to visit your library within the next few days to locate the articles you need.

One way to refine your keyword search is to use Boolean operators. These operators allow you to combine keywords, find variations on a word, and otherwise expand or limit your results. Here are some of the ways you can use Boolean operators; a short sketch after the list shows the same logic applied to a small set of article titles:

  • Combine keywords with and or + to limit results to citations that include both keywords—for example, diet + nutrition.
  • Combine keywords with not or – to search for the first word without the second. This can help you eliminate irrelevant results based on words that are similar to your search term. For example, searching for obesity not childhood locates materials on obesity but excludes materials on childhood obesity.
  • Enclose a phrase in quotation marks to search for an exact phrase, such as “morbid obesity.”
  • Use parentheses to direct the order of operations in a search string. For example, since Type II diabetes is also known as adult-onset diabetes, you could search (Type II or adult-onset) and diabetes to limit your search results to articles on this form of the disease.
  • Use a wildcard symbol such as #, ?, or $ after a word to search for variations on a term. For instance, you might type diabet# to search for information on diabetes and diabetics. The specific symbol used varies with different databases.
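The sketch below applies the same Boolean logic to a handful of made-up article titles instead of a live database; real databases parse operators, phrases, and wildcards with their own syntax, so treat this only as a way to see what each operator does.

# Sketch of the Boolean logic described above, applied to a small list of
# made-up article titles instead of a live database.

articles = [
    "Low-carb diet and nutrition outcomes in adults",
    "Childhood obesity and school lunch programs",
    "Morbid obesity: surgical options",
    "Adult-onset diabetes and dietary carbohydrates",
    "Diabetic retinopathy screening guidelines",
]

def matches(title, all_of=(), none_of=(), phrase=None, prefix=None):
    """AND terms, NOT terms, an exact phrase, and a wildcard-style prefix."""
    t = title.lower()
    return (all(term in t for term in all_of)
            and not any(term in t for term in none_of)
            and (phrase is None or phrase in t)
            and (prefix is None or any(w.startswith(prefix) for w in t.split())))

# diet AND nutrition
print([a for a in articles if matches(a, all_of=("diet", "nutrition"))])
# obesity NOT childhood
print([a for a in articles if matches(a, all_of=("obesity",), none_of=("childhood",))])
# exact phrase "morbid obesity"
print([a for a in articles if matches(a, phrase="morbid obesity")])
# diabet# wildcard
print([a for a in articles if matches(a, prefix="diabet")])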

Finding and Using Electronic Resources

With the expansion of technology and media over the past few decades, a wealth of information is available to you in electronic format. Some types of resources, such as a television documentary, may only be available electronically. Other resources—for instance, many newspapers and magazines—may be available in both print and electronic form. The following are some of the electronic sources you might consult:

  • Online databases
  • Popular web search engines
  • Websites maintained by businesses, universities, nonprofit organizations, or government agencies
  • Newspapers, magazines, and journals published on the web
  • Audio books
  • Industry blogs
  • Radio and television programs and other audio and video recordings
  • Online discussion groups

The techniques you use to locate print resources can also help you find electronic resources efficiently. Libraries usually include CD-ROMs, audio books, and audio and video recordings among their holdings. You can locate these materials in the catalog using a keyword search. The same Boolean operators used to refine database searches can help you filter your results in popular search engines.

Using Internet Search Engines Efficiently

When faced with the challenge of writing a research paper, some students rely on popular search engines as their first source of information. Typing a keyword or phrase into a search engine instantly pulls up links to dozens, hundreds, or even thousands of related websites—what could be easier? Unfortunately, despite its apparent convenience, this research strategy has the following drawbacks to consider:

  • Results do not always appear in order of reliability. The first few hits that appear in search results may include sites whose content is not always reliable, such as online encyclopedias that can be edited by any user. Because websites are created by third parties, the search engine cannot tell you which sites have accurate information.
  • Results may be too numerous for you to use. The amount of information available on the web is far greater than the amount of information housed within a particular library or database. Realistically, if your web search pulls up thousands of hits, you will not be able to visit every site—and the most useful sites may be buried deep within your search results.
  • Search engines are not connected to the results of the search. Search engines find websites that people visit often and list the results in order of popularity, which means the engine itself is not the source of any information you find. When you cite a source found through a search engine, cite the source itself, not the search engine.

A general web search can provide a helpful overview of a topic and may pull up genuinely useful resources. To get the most out of a search engine, however, use strategies to make your search more efficient. Use multiple keywords and Boolean operators to limit your results. Click on the Advanced Search link on the homepage to find additional options for streamlining your search. Depending on the specific search engine you use, the following options may be available:

  • Limit results to websites that have been updated within a particular time frame.
  • Limit results by language or country.
  • Limit results to scholarly works available online.
  • Limit results by file type.
  • Limit results to a particular domain type, such as .edu (school and university sites) or .gov (government sites). Filtering out commercial sites in this way often leads to more objective results. (A brief example of typed search operators follows this list.)
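
Many major search engines also recognize a few typed operators, such as site: for domain and filetype: for file type, that apply some of these limits directly from the search box; exact syntax varies by engine, so treat the following Python sketch as illustrative only. The build_query helper and the sample topic are invented for this example:

    # Assemble an advanced web query; site: and filetype: are operators many
    # search engines recognize, but check your engine's documentation.
    def build_query(keywords, phrase=None, domain=None, filetype=None):
        parts = list(keywords)
        if phrase:
            parts.append(f'"{phrase}"')           # exact phrase
        if domain:
            parts.append(f"site:{domain}")        # limit to a domain type
        if filetype:
            parts.append(f"filetype:{filetype}")  # limit by file type
        return " ".join(parts)

    # Example: government sites discussing low-carbohydrate diets, PDFs only
    print(build_query(["low-carbohydrate", "diet"], domain="gov", filetype="pdf"))
    # -> low-carbohydrate diet site:gov filetype:pdf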

Use the Bookmarks or Favorites feature of your web browser to save and organize sites that look promising.

Using Other Information Sources: Interviews

With so many print and electronic media readily available, it is easy to overlook another valuable information resource: other people. Consider whether you could use a person or group as a primary source. For instance, you might interview a professor who has expertise in a particular subject, a worker within a particular industry, or a representative from a political organization. Interviews can be a great way to get firsthand information.

To get the most out of an interview, you will need to plan ahead. Contact your subject early in the research process and explain your purpose for requesting an interview. Prepare detailed questions. Open-ended questions, rather than questions with simple yes-or-no answers, are more likely to lead to an in-depth discussion. Schedule a time to meet, and be sure to obtain your subject’s permission to record the interview. Take careful notes and be ready to ask follow-up questions based on what you learn.

If scheduling an in-person meeting is difficult, consider arranging a telephone interview or asking your subject to respond to your questions via e-mail. Recognize that any of these formats takes time and effort. Be prompt and courteous, avoid going over the allotted interview time, and be flexible if your subject needs to reschedule.

Evaluating Research Resources

As you gather sources, you will need to examine them with a critical eye. Smart researchers continually ask themselves two questions: “Is this source relevant to my purpose?” and “Is this source reliable?” The first question will help you avoid wasting valuable time reading sources that stray too far from your specific topic and research questions. The second question will help you find accurate, trustworthy sources.

Determining Whether a Source Is Relevant

At this point in your research process, you may have identified dozens of potential sources. It is easy for writers to get so caught up in checking out books and printing out articles that they forget to ask themselves how they will use these resources in their research. Now is a good time to get a little ruthless. Reading and taking notes takes time and energy, so you will want to focus on the most relevant sources.

To weed through your stack of books and articles, skim their contents. Read quickly with your research questions and subtopics in mind. Table 11.3 “Tips for Skimming Books and Articles” explains how to skim to get a quick sense of what topics are covered. If a book or article is not especially relevant, put it aside. You can always come back to it later if you need to.

Table 11.3 Tips for Skimming Books and Articles

Determining Whether a Source Is Reliable

All information sources are not created equal. Sources can vary greatly in terms of how carefully they are researched, written, edited, and reviewed for accuracy. Common sense will help you identify obviously questionable sources, such as tabloids that feature tales of alien abductions, or personal websites with glaring typos. Sometimes, however, a source’s reliability—or lack of it—is not so obvious. For more information about source reliability, see Chapter 12, “Writing a Research Paper.”

To evaluate your research sources, you will use critical thinking skills consciously and deliberately. You will consider criteria such as the type of source, its intended purpose and audience, the author’s (or authors’) qualifications, the publication’s reputation, any indications of bias or hidden agendas, how current the source is, and the overall quality of the writing, thinking, and design.

Evaluating Types of Sources

The different types of sources you will consult are written for distinct purposes and with different audiences in mind. This accounts for other differences, such as the following:

  • How thoroughly the writers cover a given topic
  • How carefully the writers research and document facts
  • How editors review the work
  • What biases or agendas affect the content

A journal article written for an academic audience for the purpose of expanding scholarship in a given field will take an approach quite different from a magazine feature written to inform a general audience. Textbooks, hard news articles, and websites approach a subject from different angles as well. To some extent, the type of source provides clues about its overall depth and reliability. Table 11.4 “Source Rankings” ranks different source types.

Table 11.4 Source Rankings

Free online encyclopedias and wikis may seem like a great source of information. They usually appear among the first few results of a web search. They cover thousands of topics, and many articles use an informal, straightforward writing style. Unfortunately, these sites have no control system for researching, writing, and reviewing articles. Instead, they rely on a community of users to police themselves. At best, these sites can be a starting point for finding other, more trustworthy sources. Never use them as final sources.

Evaluating Credibility and Reputability

Even when you are using a type of source that is generally reliable, you will still need to evaluate the author’s credibility and the publication itself on an individual basis. To examine the author’s credibility —that is, how much you can believe of what the author has to say—examine his or her credentials. What career experience or academic study shows that the author has the expertise to write about this topic?

Keep in mind that expertise in one field is no guarantee of expertise in another, unrelated area. For instance, an author may have an advanced degree in physiology, but this credential is not a valid qualification for writing about psychology. Check credentials carefully.

Just as important as the author’s credibility is the publication’s overall reputability. Reputability refers to a source’s standing and reputation as a respectable, reliable source of information. An established and well-known newspaper, such as the New York Times or the Wall Street Journal, is more reputable than a college newspaper put out by comparatively inexperienced students. A website that is maintained by a well-known, respected organization and regularly updated is more reputable than one created by an unknown author or group.

If you are using articles from scholarly journals, you can check databases that keep count of how many times each article has been cited in other articles. This can be a rough indication of the article’s quality or, at the very least, of its influence and reputation among other scholars.

Checking for Biases and Hidden Agendas

Whenever you consult a source, always think carefully about the author’s or authors’ purpose in presenting the information. Few sources present facts completely objectively. In some cases, the source’s content and tone are significantly influenced by biases or hidden agendas.

Bias refers to favoritism or prejudice toward a particular person or group. For instance, an author may be biased against a certain political party and present information in a way that subtly—or not so subtly—makes that party look bad. Bias can lead an author to present facts selectively, edit quotations to misrepresent someone’s words, and distort information.

Hidden agendas are goals that are not immediately obvious but influence how an author presents the facts. For instance, an article about the role of beef in a healthy diet would be questionable if it were written by a representative of the beef industry—or by the president of an animal-rights organization. In both cases, the author would likely have a hidden agenda.

As Jorge conducted his research, he read several research studies in which scientists found significant benefits to following a low-carbohydrate diet. He also noticed that many studies were sponsored by a foundation associated with the author of a popular series of low-carbohydrate diet books. Jorge read these studies with a critical eye, knowing that a hidden agenda might be shaping the researchers’ conclusions.

Using Current Sources

Be sure to seek out sources that are current, or up to date. Depending on the topic, sources may become outdated relatively soon after publication, or they may remain useful for years. For instance, online social networking sites have evolved rapidly over the past few years. An article published in 2002 about this topic will not provide current information. On the other hand, a research paper on elementary education practices might refer to studies published decades ago by influential child psychologists.

When using websites for research, check to see when the site was last updated. Many sites publish this information on the homepage, and some, such as news sites, are updated daily or weekly. Many nonfunctioning links are a sign that a website is not regularly updated. Do not be afraid to ask your professor for suggestions if you find that many of your most relevant sources are not especially reliable—or that the most reliable sources are not relevant.

Evaluating Overall Quality by Asking Questions

When you evaluate a source, you will consider the criteria previously discussed as well as your overall impressions of its quality. Read carefully, and notice how well the author presents and supports his or her statements. Stay actively engaged—do not simply accept an author’s words as truth. Ask questions to determine each source’s value. Checklist 11.1 lists ten questions to ask yourself as a critical reader.

Checklist 11.1

Source Evaluation

  • Is the type of source appropriate for my purpose? Is it a high-quality source or one that needs to be looked at more critically?
  • Can I establish that the author is credible and the publication is reputable?
  • Does the author support ideas with specific facts and details that are carefully documented? Is the source of the author’s information clear? (When you use secondary sources, look for sources that are not too removed from primary research.)
  • Does the source include any factual errors or instances of faulty logic?
  • Does the author leave out any information that I would expect to see in a discussion of this topic?
  • Do the author’s conclusions logically follow from the evidence that is presented? Can I see how the author got from one point to another?
  • Is the writing clear and organized, and is it free from errors, clichés, and empty buzzwords? Is the tone objective, balanced, and reasonable? (Be on the lookout for extreme, emotionally charged language.)
  • Are there any obvious biases or agendas? Based on what I know about the author, are there likely to be any hidden agendas?
  • Are graphics informative, useful, and easy to understand? Are websites organized, easy to navigate, and free of clutter like flashing ads and unnecessary sound effects?
  • Is the source contradicted by information found in other sources? (If so, it is possible that your sources are presenting similar information but taking different perspectives, which requires you to think carefully about which sources you find more convincing and why. Be suspicious, however, of any source that presents facts that you cannot confirm elsewhere.)

The critical thinking skills you use to evaluate research sources as a student are equally valuable when you conduct research on the job. If you follow certain periodicals or websites, you have probably identified publications that consistently provide reliable information. Reading blogs and online discussion groups is a great way to identify new trends and hot topics in a particular field, but these sources should not be used for substantial research.

Use a search engine to conduct a web search on your topic. Refer to the tips provided earlier to help you streamline your search. Evaluate your search results critically based on the criteria you have learned. Identify and bookmark one or more websites that are reliable, reputable, and likely to be useful in your research.

Managing Source Information

As you determine which sources you will rely on most, it is important to establish a system for keeping track of your sources and taking notes. There are several ways to go about it, and no one system is necessarily superior. What matters is that you keep materials in order; record bibliographical information you will need later; and take detailed, organized notes.

Keeping Track of Your Sources

Think ahead to a moment a few weeks from now, when you’ve written your research paper and are almost ready to submit it for a grade. There is just one task left—writing your list of sources.

As you begin typing your list, you realize you need to include the publication information for a book you cited frequently. Unfortunately, you already returned it to the library several days ago. You do not remember the URLs for some of the websites you used or the dates you accessed them—information that also must be included in your bibliography. With a sinking feeling, you realize that finding this information and preparing your bibliography will require hours of work.

This stressful scenario can be avoided. Taking time to organize source information now will ensure that you are not scrambling to find it at the last minute. Throughout your research, record bibliographical information for each source as soon as you begin using it. You may use pen-and-paper methods, such as a notebook or note cards, or maintain an electronic list. (If you prefer the latter option, many office software packages include separate programs for recording bibliographic information.)

Table 11.5 “Details for Commonly Used Source Types” shows the specific details you should record for commonly used source types. Use these details to develop a working bibliography—a preliminary list of sources that you will later use to develop the references section of your paper. You may wish to record information using the formatting system of the American Psychological Association (APA) or the Modern Language Association (MLA), which will save a step later on. (For more information on APA and MLA formatting, see Chapter 13, “APA and MLA Documentation and Formatting.”)

Table 11.5 Details for Commonly Used Source Types

Your research may involve less common types of sources not listed in Table 11.5 “Details for Commonly Used Source Types.” For additional information on citing different sources, see Chapter 13, “APA and MLA Documentation and Formatting.”

Create a working bibliography using the format that is most convenient for you. List at least five sources you plan to use. Continue to add sources to your working bibliography throughout the research process.

To make your working bibliography even more complete, you may wish to record additional details, such as a book’s call number or contact information for a person you interviewed. That way, if you need to locate a source again, you have all the information you need right at your fingertips. You may also wish to assign each source a code number to use when taking notes (1, 2, 3, or a similar system).
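
If you keep your working bibliography electronically, a simple, consistent record structure helps. The following Python sketch shows one possible format, not a prescribed one; the field names reflect the details discussed above (author, title, publication information, URL and access date, call number, and a source code number), and the sample entry is invented:

    from dataclasses import dataclass
    from typing import Optional

    # One entry in a working bibliography; record whatever your citation
    # style (APA or MLA) will require later, plus retrieval details.
    @dataclass
    class SourceRecord:
        code: int                            # short number used in your notes
        author: str
        title: str
        publication: str                     # publisher, journal, or site name
        year: int
        url: Optional[str] = None            # for web sources
        date_accessed: Optional[str] = None  # also needed for web sources
        call_number: Optional[str] = None    # to find a library book again
        notes: str = ""                      # e.g., interviewee contact info

    working_bibliography = [
        SourceRecord(code=1, author="Example, A.", title="An Invented Book Title",
                     publication="Example Press", year=2010,
                     call_number="RC000 .E9 2010"),
    ]
    print(working_bibliography[0].title)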

Taking Notes Efficiently

Good researchers stay focused and organized as they gather information from sources. Before you begin taking notes, take a moment to step back and think about your goal as a researcher—to find information that will help you answer your research question. When you write your paper, you will present your conclusions about the topic supported by research. That goal will determine what information you record and how you organize it.

Writers sometimes get caught up in taking extensive notes, so much so that they lose sight of how their notes relate to the questions and ideas they started out with. Remember that you do not need to write down every detail from your reading. Focus on finding and recording details that will help you answer your research questions. The following strategies will help you take notes efficiently.

Use Headings to Organize Ideas

Whether you use old-fashioned index cards or organize your notes using word-processing software, record just one major point from each source at a time, and use a heading to summarize the information covered. Keep all your notes in one file, digital or otherwise. Doing so will help you identify connections among different pieces of information. It will also help you make connections between your notes and the research questions and subtopics you identified earlier.

Know When to Summarize, Paraphrase, or Directly Quote a Source

Your notes will fall under three categories—summary notes, paraphrased information, and direct quotations from your sources. Effective researchers make choices about which type of notes is most appropriate for their purpose.

  • Summary notes sum up the main ideas in a source in a few sentences or a short paragraph. A summary is considerably shorter than the original text and captures only the major ideas. Use summary notes when you do not need to record specific details but you intend to refer to broad concepts the author discusses.
  • Paraphrased notes restate a fact or idea from a source using your own words and sentence structure.
  • Direct quotations use the exact wording used by the original source and enclose the quoted material in quotation marks. It is a good strategy to copy direct quotations when an author expresses an idea in an especially lively or memorable way. However, do not rely exclusively on direct quotations in your note taking.

Most of your notes should be paraphrased from the original source. Paraphrasing as you take notes is usually a better strategy than copying direct quotations, because it forces you to think through the information in your source and understand it well enough to restate it. In short, it helps you stay engaged with the material instead of simply copying and pasting. This kind of synthesis will also help you later when you begin planning and drafting your paper. (For detailed guidelines on summarizing, paraphrasing, and quoting, see Chapter 11, “Writing from Research: What Will I Learn?”, Section 11.6, “Writing from Research: End-of-Chapter Exercises.”)

Maintain Complete, Accurate Notes

Regardless of the format used, any notes you take should include enough information to help you organize ideas and locate them instantly in the original text if you need to review them. Make sure your notes include the following elements:

  • Heading summing up the main topic covered
  • Author’s name, a source code, or an abbreviated source title
  • Page number
  • Full URL of any pages buried deep in a website

Throughout the process of taking notes, be scrupulous about making sure you have correctly attributed each idea to its source. Always include source information so you know exactly which ideas came from which sources. Use quotation marks to set off any words or phrases taken directly from the original text. If you add your own responses and ideas, make sure they are distinct from ideas you quoted or paraphrased.

Finally, make sure your notes accurately reflect the content of the original text. Make sure quoted material is copied verbatim. If you omit words from a quotation, use ellipses to show the omission and make sure the omission does not change the author’s meaning. Paraphrase ideas carefully, and check your paraphrased notes against the original text to make sure that you have restated the author’s ideas accurately in your own words.
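
As a minimal illustration, the note elements above can be combined into one simple record per note. This Python sketch is an invented format, not a required one; the note_type field corresponds to the three categories discussed earlier (summary, paraphrase, or direct quotation), and the example note is invented:

    from dataclasses import dataclass

    # One research note; the fields mirror the elements listed above.
    @dataclass
    class Note:
        heading: str           # sums up the main topic covered
        source: str            # author's name, source code, or short title
        page: str              # page number, or full URL for a buried web page
        note_type: str         # "summary", "paraphrase", or "quotation"
        text: str              # put quotation marks around any exact wording
        my_response: str = ""  # keep your own ideas distinct from the source's

    example = Note(
        heading="Low-carbohydrate diets and short-term weight loss",
        source="Source 1",
        page="p. 42",
        note_type="paraphrase",
        text="The study reported greater six-month weight loss in the "
             "low-carbohydrate group than in the low-fat group.",
        my_response="Check whether later studies confirm this result.",
    )
    print(example.heading)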

Use a System That Works for You

There are several formats you can use to take notes. No technique is necessarily better than the others—it is more important to choose a format you are comfortable using. Choosing the format that works best for you will ensure your notes are organized, complete, and accurate. Consider implementing one of these formats when you begin taking notes:

  • Use index cards. This traditional format involves writing each note on a separate index card. Because it takes more time than copying and pasting into an electronic document, it encourages you to be selective in choosing which ideas to record. Recording notes on separate cards also makes it easy to organize your notes later according to major topics. Some writers color-code their cards to make them even more organized.
  • Use note-taking software. Word-processing and office software packages often include different types of note-taking software. Although you may need to set aside some time to learn the software, this method combines the speed of typing with the same degree of organization associated with handwritten note cards.
  • Maintain a research notebook. Instead of using index cards or electronic note cards, you may wish to keep a notebook or electronic folder, allotting a few pages (or one file) for each of your sources. This method makes it easy to create a separate column or section of the document where you add your responses to the information you encounter in your research.
  • Annotate your sources. This method involves making handwritten notes in the margins of sources that you have printed or photocopied. If using electronic sources, you can make comments within the source document. For example, you might add comment boxes to a PDF version of an article. This method works best for experienced researchers who have already thought a great deal about the topic because it can be difficult to organize your notes later when starting your draft.

Choose one of the methods from the list to use for taking notes. Continue gathering sources and taking notes. In the next section, you will learn strategies for organizing and synthesizing the information you have found.

Key Takeaways

  • A writer’s use of primary and secondary sources is determined by the topic and purpose of the research. Sources used may include print sources, such as books and journals; electronic sources, such as websites and articles retrieved from databases; and human sources of information, such as interviews.
  • Strategies that help writers locate sources efficiently include conducting effective keyword searches, understanding how to use online catalogs and databases, using strategies to narrow web search results, and consulting reference librarians.
  • Writers evaluate sources based on how relevant they are to the research question and how reliable their content is.
  • Skimming sources can help writers determine their relevance efficiently.
  • Writers evaluate a source’s reliability by asking questions about the type of source (including its audience and purpose), the author’s credibility, the publication’s reputability, the source’s currency, and the overall quality of the writing, research, logic, and design in the source.
  • In their notes, effective writers record organized, complete, accurate information. This includes bibliographic information about each source as well as summarized, paraphrased, or quoted information from the source.

Writing for Success Copyright © 2015 by University of Minnesota is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.

National Academies Press: OpenBook

Responsible Science: Ensuring the Integrity of the Research Process: Volume I (1992)

Chapter 2: Scientific Principles and Research Practices

Until the past decade, scientists, research institutions, and government agencies relied solely on a system of self-regulation based on shared ethical principles and generally accepted research practices to ensure integrity in the research process. Among the very basic principles that guide scientists, as well as many other scholars, are those expressed as respect for the integrity of knowledge, collegiality, honesty, objectivity, and openness. These principles are at work in the fundamental elements of the scientific method, such as formulating a hypothesis, designing an experiment to test the hypothesis, and collecting and interpreting data. In addition, more particular principles characteristic of specific scientific disciplines influence the methods of observation; the acquisition, storage, management, and sharing of data; the communication of scientific knowledge and information; and the training of younger scientists. 1 How these principles are applied varies considerably among the several scientific disciplines, different research organizations, and individual investigators.

The basic and particular principles that guide scientific research practices exist primarily in an unwritten code of ethics. Although some have proposed that these principles should be written down and formalized, 2 the principles and traditions of science are, for the most part, conveyed to successive generations of scientists through example, discussion, and informal education. As was pointed out in an early Academy report on responsible conduct of research in the health sciences, “a variety of informal and formal practices and procedures currently exist in the academic research environment to assure and maintain the high quality of research conduct” (IOM, 1989a, p. 18).

Physicist Richard Feynman invoked the informal approach to communicating the basic principles of science in his 1974 commencement address at the California Institute of Technology (Feynman, 1985):

[There is an] idea that we all hope you have learned in studying science in school—we never explicitly say what this is, but just hope that you catch on by all the examples of scientific investigation. It's a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty—a kind of leaning over backwards. For example, if you're doing an experiment, you should report everything that you think might make it invalid—not only what you think is right about it; other causes that could possibly explain your results; and things you thought of that you've eliminated by some other experiment, and how they worked—to make sure the other fellow can tell they have been eliminated.

Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can—if you know anything at all wrong, or possibly wrong—to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. In summary, the idea is to try to give all the information to help others to judge the value of your contribution, not just the information that leads to judgment in one particular direction or another. (pp. 311-312)

Many scholars have noted the implicit nature and informal character of the processes that often guide scientific research practices and inference. 3 Research in well-established fields of scientific knowledge, guided by commonly accepted theoretical paradigms and experimental methods, involves few disagreements about what is recognized as sound scientific evidence. Even in a revolutionary scientific field like molecular biology, students and trainees have learned the basic principles governing judgments made in such standardized procedures as cloning a new gene and determining its sequence.

In evaluating practices that guide research endeavors, it is important to consider the individual character of scientific fields. Research fields that yield highly replicable results, such as ordinary organic chemical structures, are quite different from fields such as cellular immunology, which are in a much earlier stage of development and accumulate much erroneous or uninterpretable material before the pieces fit together coherently. When a research field is too new or too fragmented to support consensual paradigms or established methods, different scientific practices can emerge.

A well-established discipline can also experience profound changes during periods of new conceptual insights. In these moments, when scientists must cope with shifting concepts, the matter of what counts as scientific evidence can be subject to dispute. Historian Jan Sapp has described the complex interplay between theory and observation that characterizes the operation of scientific judgment in the selection of research data during revolutionary periods of paradigmatic shift (Sapp, 1990, p. 113):

What “liberties” scientists are allowed in selecting positive data and omitting conflicting or “messy” data from their reports is not defined by any timeless method. It is a matter of negotiation. It is learned, acquired socially; scientists make judgments about what fellow scientists might expect in order to be convincing. What counts as good evidence may be more or less well-defined after a new discipline or specialty is formed; however, at revolutionary stages in science, when new theories and techniques are being put forward, when standards have yet to be negotiated, scientists are less certain as to what others may require of them to be deemed competent and convincing.

Explicit statements of the values and traditions that guide research practice have evolved through the disciplines and have been given in textbooks on scientific methodologies. 4 In the past few decades, many scientific and engineering societies representing individual disciplines have also adopted codes of ethics (see Volume II of this report for examples), 5 and more recently, a few research institutions have developed guidelines for the conduct of research (see Chapter 6 ).

But the responsibilities of the research community and research institutions in assuring individual compliance with scientific principles, traditions, and codes of ethics are not well defined. In recent years, the absence of formal statements by research institutions of the principles that should guide research conducted by their members has prompted criticism that scientists and their institutions lack a clearly identifiable means to ensure the integrity of the research process.

FACTORS AFFECTING THE DEVELOPMENT OF RESEARCH PRACTICES

In all of science, but with unequal emphasis in the several disciplines, inquiry proceeds based on observation and experimentation, the exercising of informed judgment, and the development of theory. Research practices are influenced by a variety of factors, including:

  • The general norms of science;
  • The nature of particular scientific disciplines and the traditions of organizing a specific body of scientific knowledge;
  • The example of individual scientists, particularly those who hold positions of authority or respect based on scientific achievements;
  • The policies and procedures of research institutions and funding agencies; and
  • Socially determined expectations.

The first three factors have been important in the evolution of modern science. The latter two have acquired more importance in recent times.

Norms of Science

As members of a professional group, scientists share a set of common values, aspirations, training, and work experiences. 6 Scientists are distinguished from other groups by their beliefs about the kinds of relationships that should exist among them, about the obligations incurred by members of their profession, and about their role in society. A set of general norms is embedded in the methods and the disciplines of science; these norms guide individual scientists in the organization and performance of their research efforts, and they also provide a basis for nonscientists to understand and evaluate the performance of scientists.

But there is uncertainty about the extent to which individual scientists adhere to such norms. Most social scientists conclude that all behavior is influenced to some degree by norms that reflect socially or morally supported patterns of preference when alternative courses of action are possible. However, perfect conformity with any relevant set of norms is always lacking for a variety of reasons: the existence of competing norms, constraints, and obstacles in organizational or group settings, and personality factors. The strength of these influences, and the circumstances that may affect them, are not well understood.

In a classic statement of the importance of scientific norms, Robert Merton specified four norms as essential for the effective functioning of science: communism (by which Merton meant the communal sharing of ideas and findings), universalism, disinterestedness, and organized skepticism (Merton, 1973). Neither Merton nor other sociologists of science have provided solid empirical evidence for the degree of influence of these norms in a representative sample of scientists. In opposition to Merton, a British sociologist of science, Michael Mulkay, has argued that these norms are “ideological” covers for self-interested behavior that reflects status and politics (Mulkay, 1975). And the British physicist and sociologist of science John Ziman, in an article synthesizing critiques of Merton's formulation, has specified a set of structural factors in the bureaucratic and corporate research environment that impede the realization of that particular set of norms: the proprietary nature of research, the local importance and funding of research, the authoritarian role of the research manager, commissioned research, and the required expertise in understanding how to use modern instruments (Ziman, 1990).

It is clear that the specific influence of norms on the development of scientific research practices is simply not known and that further study of key determinants is required, both theoretically and empirically. Commonsense views, ideologies, and anecdotes will not support a conclusive appraisal.

Individual Scientific Disciplines

Science comprises individual disciplines that reflect historical developments and the organization of natural and social phenomena for study. Social scientists may have methods for recording research data that differ from the methods of biologists, and scientists who depend on complex instrumentation may have authorship practices different from those of scientists who work in small groups or carry out field studies. Even within a discipline, experimentalists engage in research practices that differ from the procedures followed by theorists.

Disciplines are the “building blocks of science,” and they “designate the theories, problems, procedures, and solutions that are prescribed, proscribed, permitted, and preferred” (Zuckerman, 1988a, p. 520). The disciplines have traditionally provided the vital connections between scientific knowledge and its social organization. Scientific societies and scientific journals, some of which have tens of thousands of members and readers, and the peer review processes used by journals and research sponsors are visible forms of the social organization of the disciplines.

The power of the disciplines to shape research practices and standards is derived from their ability to provide a common frame of reference in evaluating the significance of new discoveries and theories in science. It is the members of a discipline, for example, who determine what is “good biology” or “good physics” by examining the implications of new research results. The disciplines' abilities to influence research standards are affected by the subjective quality of peer review and the extent to which factors other than disciplinary quality may affect judgments about scientific achievements. Disciplinary departments rely primarily on informal social and professional controls to promote responsible behavior and to penalize deviant behavior. These controls, such as social ostracism, the denial of letters of support for future employment, and the withholding of research resources, can deter and penalize unprofessional behavior within research institutions. 7

Many scientific societies representing individual disciplines have adopted explicit standards in the form of codes of ethics or guidelines governing, for example, the editorial practices of their journals and other publications. 8 Many societies have also established procedures for enforcing their standards. In the past decade, the societies' codes of ethics—which historically have been exhortations to uphold high standards of professional behavior—have incorporated specific guidelines relevant to authorship practices, data management, training and mentoring, conflict of interest, reporting research findings, treatment of confidential or proprietary information, and addressing error or misconduct.

The Role of Individual Scientists and Research Teams

The methods by which individual scientists and students are socialized in the principles and traditions of science are poorly understood. The principles of science and the practices of the disciplines are transmitted by scientists in classroom settings and, perhaps more importantly, in research groups and teams. The social setting of the research group is a strong and valuable characteristic of American science and education. The dynamics of research groups can foster—or inhibit—innovation, creativity, education, and collaboration.

One author of a historical study of research groups in the chemical and biochemical sciences has observed that the laboratory director or group leader is the primary determinant of a group's practices (Fruton, 1990). Individuals in positions of authority are visible and are also influential in determining funding and other support for the career paths of their associates and students. Research directors and department chairs, by virtue of personal example, thus can reinforce, or weaken, the power of disciplinary standards and scientific norms to affect research practices.

To the extent that the behavior of senior scientists conforms with general expectations for appropriate scientific and disciplinary practice, the research system is coherent and mutually reinforcing. When the behavior of research directors or department chairs diverges from expectations for good practice, however, the expected norms of science become ambiguous, and their effects are thus weakened. Thus personal example and the perceived behavior of role models and leaders in the research community can be powerful stimuli in shaping the research practices of colleagues, associates, and students.

The role of individuals in influencing research practices can vary by research field, institution, or time. The standards and expectations for behavior exemplified by scientists who are highly regarded for their technical competence or creative insight may have greater influence than the standards of others. Individual and group behaviors may also be more influential in times of uncertainty and change in science, especially when new scientific theories, paradigms, or institutional relationships are being established.

Institutional Policies

Universities, independent institutes, and government and industrial research organizations create the environment in which research is done. As the recipients of federal funds and the institutional sponsors of research activities, administrative officers must comply with regulatory and legal requirements that accompany public support. They are required, for example, “to foster a research environment that discourages misconduct in all research and that deals forthrightly with possible misconduct” (DHHS, 1989a, p. 32451).

Academic institutions traditionally have relied on their faculty to ensure that appropriate scientific and disciplinary standards are maintained. A few universities and other research institutions have also adopted policies or guidelines to clarify the principles that their members are expected to observe in the conduct of scientific research. 9 In addition, as a result of several highly publicized incidents of misconduct in science and the subsequent enactment of governmental regulations, most major research institutions have now adopted policies and procedures for handling allegations of misconduct in science.

Institutional policies governing research practices can have a powerful effect on research practices if they are commensurate with the norms that apply to a wide spectrum of research investigators. In particular, the process of adopting and implementing strong institutional policies can sensitize the members of those institutions to the potential for ethical problems in their work. Institutional policies can establish explicit standards that institutional officers then have the power to enforce with sanctions and penalties.

Institutional policies are limited, however, in their ability to specify the details of every problematic situation, and they can weaken or displace individual professional judgment in such situations. Currently, academic institutions have very few formal policies and programs in specific areas such as authorship, communication and publication, and training and supervision.

Government Regulations and Policies

Government agencies have developed specific rules and procedures that directly affect research practices in areas such as laboratory safety, the treatment of human and animal research subjects, and the use of toxic or potentially hazardous substances in research.

But policies and procedures adopted by some government research agencies to address misconduct in science (see Chapter 5 ) represent a significant new regulatory development in the relationships between research institutions and government sponsors. The standards and criteria used to monitor institutional compliance with an increasing number of government regulations and policies affecting research practices have been a source of significant disagreement and tension within the research community.

In recent years, some government research agencies have also adopted policies and procedures for the treatment of research data and materials in their extramural research programs. For example, the National Science Foundation (NSF) has implemented a data-sharing policy through program management actions, including proposal review and award negotiations and conditions. The NSF policy acknowledges that grantee institutions will “keep principal rights to intellectual property conceived under NSF sponsorship” to encourage appropriate commercialization of the results of research (NSF, 1989b, p. 1). However, the NSF policy emphasizes “that retention of such rights does not reduce the responsibility of researchers and institutions to make results and supporting materials openly accessible” (p. 1).

In seeking to foster data sharing under federal grant awards, the government relies extensively on the scientific traditions of openness and sharing. Research agency officials have observed candidly that if the vast majority of scientists were not so committed to openness and dissemination, government policy might require more aggressive action. But the principles that have traditionally characterized scientific inquiry can be difficult to maintain. For example, NSF staff have commented, “Unless we can arrange real returns or incentives for the original investigator, either in financial support or in professional recognition, another researcher's request for sharing is likely to present itself as ‘hassle’—an unwelcome nuisance and diversion. Therefore, we should hardly be surprised if researchers display some reluctance to share in practice, however much they may declare and genuinely feel devotion to the ideal of open scientific communication” (NSF, 1989a, p. 4).

Social Attitudes and Expectations

Research scientists are part of a larger human society that has recently experienced profound changes in attitudes about ethics, morality, and accountability in business, the professions, and government. These attitudes have included greater skepticism of the authority of experts and broader expectations about the need for visible mechanisms to assure proper research practices, especially in areas that affect the public welfare. Social attitudes are also having a more direct influence on research practices as science achieves a more prominent and public role in society. In particular, concern about waste, fraud, and abuse involving government funds has emerged as a factor that now directly influences the practices of the research community.

Varying historical and conceptual perspectives also can affect expectations about standards of research practice. For example, some journalists have criticized several prominent scientists, such as Mendel, Newton, and Millikan, because they “cut corners in order to make their theories prevail” (Broad and Wade, 1982, p. 35). The criticism suggests that all scientists at all times, in all phases of their work, should be bound by identical standards.

Yet historical studies of the social context in which scientific knowledge has been attained suggest that modern criticism of early scientific work often imposes contemporary standards of objectivity and empiricism that have in fact been developed in an evolutionary manner. 10 Holton has argued, for example, that in selecting data for publication, Millikan exercised creative insight in excluding unreliable data resulting from experimental error. But such practices, by today's standards, would not be acceptable without reporting the justification for omission of recorded data.

In the early stages of pioneering studies, particularly when fundamental hypotheses are subject to change, scientists must be free to use creative judgment in deciding which data are truly significant. In such moments, the standards of proof may be quite different from those that apply at stages when confirmation and consensus are sought from peers. Scientists must consistently guard against self-deception, however, particularly when theoretical prejudices tend to overwhelm the skepticism and objectivity basic to experimental practices.

In discussing “the theory-ladenness of observations,” Sapp (1990) observed the fundamental paradox that can exist in determining the “appropriateness” of data selection in certain experiments done in the past: scientists often craft their experiments so that the scientific problems and research subjects conform closely with the theory that they expect to verify or refute. Thus, in some cases, their observations may come closer to theoretical expectations than what might be statistically proper.

This source of bias may be acceptable when it is influenced by scientific insight and judgment. But political, financial, or other sources of bias can corrupt the process of data selection. In situations where both kinds of influence exist, it is particularly important for scientists to be forthcoming about possible sources of bias in the interpretation of research results. The coupling of science to other social purposes in fostering economic growth and commercial technology requires renewed vigilance to maintain acceptable standards for disclosure and control of financial or competitive conflicts of interest and bias in the research environment. The failure to distinguish between appropriate and inappropriate sources of bias in research practices can lead to erosion of public trust in the autonomy of the research enterprise.

RESEARCH PRACTICES

In reviewing modern research practices for a range of disciplines, and analyzing factors that could affect the integrity of the research process, the panel focused on the following four areas:

  • Data handling—acquisition, management, and storage;
  • Communication and publication;
  • Correction of errors; and
  • Research training and mentorship.

Commonly understood practices operate in each area to promote responsible research conduct; nevertheless, some questionable research practices also occur. Some research institutions, scientific societies, and journals have established policies to discourage questionable practices, but there is not yet a consensus on how to treat violations of these policies. 11 Furthermore, there is concern that some questionable practices may be encouraged or stimulated by other institutional factors. For example, promotion or appointment policies that stress quantity rather than the quality of publications as a measure of productivity could contribute to questionable practices.

Data Handling

Acquisition and Management

Scientific experiments and measurements are transformed into research data. The term “research data” applies to many different forms of scientific information, including raw numbers and field notes, machine tapes and notebooks, edited and categorized observations, interpretations and analyses, derived reagents and vectors, and tables, charts, slides, and photographs.

Research data are the basis for reporting discoveries and experimental results. Scientists traditionally describe the methods used for an experiment, along with appropriate calibrations, instrument types, the number of repeated measurements, and particular conditions that may have led to the omission of some data in the reported version. Standard procedures, innovations for particular purposes, and judgments concerning the data are also reported. The general standard of practice is to provide information that is sufficiently complete so that another scientist can repeat or extend the experiment.

When a scientist communicates a set of results and a related piece of theory or interpretation in any form (at a meeting, in a journal article, or in a book), it is assumed that the research has been conducted as reported. It is a violation of the most fundamental aspect of the scientific research process to set forth measurements that have not, in fact, been performed (fabrication) or to ignore or change relevant data that contradict the reported findings (falsification).

On occasion, what is actually proper research practice may be confused with misconduct in science. Thus, for example, applying scientific judgment to refine data and to remove spurious results places special responsibility on the researcher to avoid misrepresentation of findings. Responsible practice requires that scientists disclose the basis for omitting or modifying data in their analyses of research results, especially when such omissions or modifications could alter the interpretation or significance of their work.

In the last decade, the methods by which research scientists handle, store, and provide access to research data have received increased scrutiny, owing to conflicts over ownership, such as those described by Nelkin (1984); advances in the methods and technologies that are used to collect, retain, and share data; and the costs of data storage. More specific concerns have involved the profitability associated with the patenting of science-based results in some fields and the need to verify independently the accuracy of research results used in public or private decision making. In resolving competing claims, the interests of individual scientists and research institutions may not always coincide: researchers may be willing to exchange scientific data of possible economic significance without regard for financial or institutional implications, whereas their institutions may wish to establish intellectual property rights and obligations prior to any disclosure.

The general norms of science emphasize the principle of openness. Scientists are generally expected to exchange research data as well as unique research materials that are essential to the replication or extension of reported findings. The 1985 report Sharing Research Data concluded that the general principle of data sharing is widely accepted, especially in the behavioral and social sciences (NRC, 1985). The report catalogued the benefits of data sharing, including maintaining the integrity of the research process by providing independent opportunities for verification, refutation, or refinement of original results and data; promoting new research and the development and testing of new theories; and encouraging appropriate use of empirical data in policy formulation and evaluation. The same report examined obstacles to data sharing, which include the criticism or competition that might be stimulated by data sharing; technical barriers that may impede the exchange of computer-readable data; lack of documentation of data sets; and the considerable costs of documentation, duplication, and transfer of data.

The exchange of research data and reagents is ideally governed by principles of collegiality and reciprocity: scientists often distribute reagents with the hope that the recipient will reciprocate in the future, and some give materials out freely with no stipulations attached. 12 Scientists who repeatedly or flagrantly deviate from the tradition of sharing become known to their peers and may suffer subtle forms of professional isolation. Such cases may be well known to senior research investigators, but they are not well documented.

Some scientists may share materials as part of a collaborative agreement in exchange for co-authorship on resulting publications. Some donors stipulate that the shared materials are not to be used for applications already being pursued by the donor's laboratory. Other stipulations include that the material not be passed on to third parties without prior authorization, that the material not be used for proprietary research, or that the donor receive prepublication copies of research publications derived from the material. In some instances, so-called materials transfer agreements are executed to specify the responsibilities of donor and recipient. As more academic research is being supported under proprietary agreements, researchers and institutions are experiencing the effects of these arrangements on research practices.

Governmental support for research studies may raise fundamental questions of ownership and rights of control, particularly when data are subsequently used in proprietary efforts, public policy decisions, or litigation. Some federal research agencies have adopted policies for data sharing to mitigate conflicts over issues of ownership and access (NIH, 1987; NSF, 1989b).

Many research investigators store primary data in the laboratories in which the data were initially derived, generally as electronic records or data sheets in laboratory notebooks. For most academic laboratories, local customary practice governs the storage (or discarding) of research data. Formal rules or guidelines concerning their disposition are rare.

Many laboratories customarily store primary data for a set period (often 3 to 5 years) after they are initially collected. Data that support publications are usually retained for a longer period than are those tangential to reported results. Some research laboratories serve as the proprietor of data and data books that are under the stewardship of the principal investigator. Others maintain that it is the responsibility of the individuals who collected the data to retain proprietorship, even if they leave the laboratory.

Concerns about misconduct in science have raised questions about the roles of research investigators and of institutions in maintaining and providing access to primary data. In some cases of alleged misconduct, the inability or unwillingness of an investigator to provide primary data or witnesses to support published reports sometimes has constituted a presumption that the experiments were not conducted as reported. 13 Furthermore, there is disagreement about the responsibilities of investigators to provide access to raw data, particularly when the reported results have been challenged by others. Many scientists believe that access should be restricted to peers and colleagues, usually following publication of research results, to reduce external demands on the time of the investigator. Others have suggested that raw data supporting research reports should be accessible to any critic or competitor, at any time, especially if the research is conducted with public funds. This topic, in particular, could benefit from further research and systematic discussion to clarify the rights and responsibilities of research investigators, institutions, and sponsors.

Institutional policies have been developed to guide data storage practices in some fields, often stimulated by desires to support the patenting of scientific results and to provide documentation for resolving disputes over patent claims. Laboratories concerned with patents usually have very strict rules concerning data storage and note keeping, often requiring that notes be recorded in an indelible form and be countersigned by an authorized person each day. A few universities have also considered the creation of central storage repositories for all primary data collected by their research investigators. Some government research institutions and industrial research centers maintain such repositories to safeguard the record of research developments for scientific, historical, proprietary, and national security interests.

In the academic environment, however, centralized research records raise complex problems of ownership, control, and access. Centralized data storage is costly in terms of money and space, and it presents logistical problems of cataloguing and retrieving data. There have been suggestions that some types of scientific data should be incorporated into centralized computerized data banks, a portion of which could be subject to periodic auditing or certification. 14 But much investigator-initiated research is not suitable for random data audits because of the exploratory nature of basic or discovery research. 15

Some scientific journals now require that full data for research papers be deposited in a centralized data bank before final publication. Policies and practices differ, but in some fields support is growing for compulsory deposit to enhance researchers' access to supporting data.

Issues Related to Advances in Information Technology

Advances in electronic and other information technologies have raised new questions about the customs and practices that influence the storage, ownership, and exchange of electronic data and software. A number of special issues, not addressed by the panel, are associated with computer modeling, simulation, and other approaches that are becoming more prevalent in the research environment. Computer technology can enhance research collaboration; it can also create new impediments to data sharing resulting from increased costs, the need for specialized equipment, or liabilities or uncertainties about responsibilities for faulty data, software, or computer-generated models.

Advances in computer technology may assist in maintaining and preserving accurate records of research data. Such records could help resolve questions about the timing or accuracy of specific research findings, especially when a principal investigator is not available or is uncooperative in responding to such questions. In principle, properly managed information technologies, utilizing advances in nonerasable optical disk systems, might reinforce openness in scientific research and make primary data more transparent to collaborators and research managers. For example, the so-called WORM (write once, read many) systems provide a high-density digital storage medium that supplies an ineradicable audit trail and historical record for all entered information (Haas, 1991).
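
WORM media enforce the write-once property in the storage hardware itself. Purely as a software analogy to that audit-trail idea (all names below are hypothetical, not drawn from the report), a minimal sketch of an append-only, hash-chained log in which any later alteration of an earlier entry is detectable:

```python
import hashlib
import json
import time

class AppendOnlyLog:
    """Toy analogy to a write-once audit trail: entries may be appended,
    and verify() detects any later tampering with earlier entries."""

    def __init__(self):
        self._entries = []

    def append(self, record):
        prev_hash = self._entries[-1]["hash"] if self._entries else "0" * 64
        payload = {"time": time.time(), "record": record, "prev": prev_hash}
        digest = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
        self._entries.append({**payload, "hash": digest})
        return digest

    def verify(self):
        prev = "0" * 64
        for entry in self._entries:
            payload = {"time": entry["time"], "record": entry["record"], "prev": prev}
            digest = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
            if entry["prev"] != prev or entry["hash"] != digest:
                return False
            prev = entry["hash"]
        return True

# Hypothetical usage: two laboratory data entries, then an integrity check.
log = AppendOnlyLog()
log.append({"experiment": "assay-01", "reading": 0.42})
log.append({"experiment": "assay-01", "reading": 0.45})
print(log.verify())  # True while the log is unaltered
```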

Advances in information technologies could thus provide an important benefit to research institutions that wish to emphasize greater access to and storage of primary research data. But the development of centralized information systems in the academic research environment raises difficult issues of ownership, control, and principle that reflect the decentralized character of university governance. Such systems are also a source of additional research expense, often borne by individual investigators. Moreover, if centralized systems are perceived by scientists as an inappropriate or ineffective form of management or oversight of individual research groups, they simply may not work in an academic environment.

Communication and Publication

Scientists communicate research results by a variety of formal and informal means. In earlier times, new findings and interpretations were communicated by letter, personal meeting, and publication. Today, computer networks and facsimile machines have supplemented letters and telephones in facilitating rapid exchange of results. Scientific meetings routinely include poster sessions and press conferences as well as formal presentations. Although research publications continue to document research findings, the appearance of electronic publications and other information technologies heralds change. In addition, incidents of plagiarism, the increasing number of authors per article in selected fields, and the methods by which publications are assessed in determining appointments and promotions have all increased concerns about the traditions and practices that have guided communication and publication.

Journal publication, traditionally an important means of sharing information and perspectives among scientists, is also a principal means of establishing a record of achievement in science. Evaluation of the accomplishments of individual scientists often involves not only the numbers of articles that have resulted from a selected research effort, but also the particular journals in which the articles have appeared. Journal submission dates are often important in establishing priority and intellectual property claims.

Authorship of original research reports is an important indicator of accomplishment, priority, and prestige within the scientific community. Questions of authorship in science are intimately connected with issues of credit and responsibility. Authorship practices are guided by disciplinary traditions, customary practices within research groups, and professional and journal standards and policies. 16 There is general acceptance of the principle that each named author has made a significant intellectual contribution to the paper, even though there remains substantial disagreement over the types of contributions that are judged to be significant.

A general rule is that an author must have participated sufficiently in the work to take responsibility for its content and vouch for its validity. Some journals have adopted more specific guidelines, suggesting that credit for authorship be contingent on substantial participation in one or more of the following categories: (1) conception and design of the experiment, (2) execution of the experiment and collection and storage of the supporting data, (3) analysis and interpretation of the primary data, and (4) preparation and revision of the manuscript. The extent of participation in these four activities required for authorship varies across journals, disciplines, and research groups. 17

“Honorary,” “gift,” or other forms of noncontributing authorship are problems with several dimensions. 18 Honorary authors reap an inflated list of publications incommensurate with their scientific contributions (Zen, 1988). Some scientists have requested or been given authorship as a form of recognition of their status or influence rather than their intellectual contribution. Some research leaders have a custom of including their own names in any paper issuing from their laboratory, although this practice is increasingly discouraged. Some students or junior staff encourage such “gift authorship” because they feel that the inclusion of prestigious names on their papers increases the chance of publication in well-known journals. In some cases, noncontributing authors have been listed without their consent, or even without their being told. In response to these practices, some journals now require all named authors to sign the letter that accompanies submission of the original article, to ensure that no author is named without consent.

“Specialized” authorship is another issue that has received increasing attention. In these cases, a co-author may claim responsibility for a specialized portion of the paper and may not even see or be able to defend the paper as a whole. 19 “Specialized” authorship may also result from demands that co-authorship be given as a condition of sharing a unique research reagent or selected data that do not constitute a major contribution—demands that many scientists believe are inappropriate. “Specialized” authorship may be appropriate in cross-disciplinary collaborations, in which each participant has made an important contribution that deserves recognition. However, the risks associated with the inabilities of co-authors to vouch for the integrity of an entire paper are great; scientists may unwittingly become associated with a discredited publication.

Another problem of lesser importance, except to the scientists involved, is the order of authors listed on a paper. The meaning of author order varies among and within disciplines. For example, in physics the ordering of authors is frequently alphabetical, whereas in the social sciences and other fields, the ordering reflects a descending order of contribution to the described research. Another practice, common in biology, is to list the senior author last.

Appropriate recognition for the contributions of junior investigators, postdoctoral fellows, and graduate students is sometimes a source of discontent and unease in the contemporary research environment. Junior researchers have raised concerns about treatment of their contributions when research papers are prepared and submitted, particularly if they are attempting to secure promotions or independent research funding or if they have left the original project. In some cases, well-meaning senior scientists may grant junior colleagues undeserved authorship or placement as a means of enhancing the junior colleague's reputation. In others, significant contributions may not receive appropriate recognition.

Authorship practices are further complicated by large-scale projects, especially those that involve specialized contributions. Mission teams for space probes, oceanographic expeditions, and projects in high-energy physics, for example, all involve large numbers of senior scientists who depend on the long-term functioning of complex equipment. Some questions about communication and publication that arise from large science projects such as the Superconducting Super Collider include: Who decides when an experiment is ready to be published? How is the spokesperson for the experiment determined? Who determines who can give talks on the experiment? How should credit for technical or hardware contributions be acknowledged?

Apart from plagiarism, problems of authorship and credit allocation usually do not involve misconduct in science. Although some forms of “gift authorship,” in which a designated author made no identifiable contribution to a paper, may be viewed as instances of falsification, authorship disputes more commonly involve unresolved differences of judgment and style. Many research groups have found that the best method of resolving authorship questions is to agree on a designation of authors at the outset of the project. The negotiation and decision process provides initial recognition of each member's effort, and it may prevent misunderstandings that can arise during the course of the project when individuals may be in transition to new efforts or may become preoccupied with other matters.

Plagiarism. Plagiarism is using the ideas or words of another person without giving appropriate credit. Plagiarism includes the unacknowledged use of text and ideas from published work, as well as the misuse of privileged information obtained through confidential review of research proposals and manuscripts.

As described in Honor in Science, plagiarism can take many forms: at one extreme is the exact replication of another's writing without appropriate attribution (Sigma Xi, 1986). At the other is the more subtle “borrowing” of ideas, terms, or paraphrases, as described by Martin et al., “so that the result is a mosaic of other people's ideas and words, the writer's sole contribution being the cement to hold the pieces together.” 20 The importance of recognition for one's intellectual abilities in science demands high standards of accuracy and diligence in ensuring appropriate recognition for the work of others.

The misuse of privileged information may be less clear-cut because it does not involve published work. But the general principles of the importance of giving credit to the accomplishments of others are the same. The use of ideas or information obtained from peer review is not acceptable because the reviewer is in a privileged position. Some organizations, such as the American Chemical Society, have adopted policies to address these concerns (ACS, 1986).

Additional Concerns. Other problems related to authorship include overspecialization, overemphasis on short-term projects, and the organization of research communication around the “least publishable unit.” In a research system that rewards quantity at the expense of quality and favors speed over attention to detail (the effects of “publish or perish”), scientists who wait until their research data are complete before releasing them for publication may be at a disadvantage. Some institutions, such as Harvard Medical School, have responded to these problems by limiting the number of publications reviewed for promotion. Others have placed greater emphasis on major contributions as the basis for evaluating research productivity.

As gatekeepers of scientific journals, editors are expected to use good judgment and fairness in selecting papers for publication. Although editors cannot be held responsible for the errors or inaccuracies of papers that may appear in their journals, editors have obligations to consider criticism and evidence that might contradict the claims of an author and to facilitate publication of critical letters, errata, or retractions. 21 Some institutions, including the National Library of Medicine and professional societies that represent editors of scientific journals, are exploring the development of standards relevant to these obligations (Bailar et al., 1990).

Should questions be raised about the integrity of a published work, the editor may request an author's institution to address the matter. Editors often request written assurances that research reported conforms to all appropriate guidelines involving human or animal subjects, materials of human origin, or recombinant DNA.

In theory, editors set standards of authorship for their journals. In practice, scientists in the specialty do. Editors may specify the terms of acknowledgment of contributors who fall short of authorship status, and make decisions regarding appropriate forms of disclosure of sources of bias or other potential conflicts of interest related to published articles. For example, the New England Journal of Medicine has established a category of prohibited contributions from authors engaged in for-profit ventures: the journal will not allow such persons to prepare review articles or editorial commentaries for publication. Editors can clarify and insist on the confidentiality of review and take appropriate actions against reviewers who violate it. Journals also may require or encourage their authors to deposit reagents and sequence and crystallographic data into appropriate databases or storage facilities. 22

Peer Review

Peer review is the process by which editors and journals seek to be advised by knowledgeable colleagues about the quality and suitability of a manuscript for publication in a journal. Peer review is also used by funding agencies to seek advice concerning the quality and promise of proposals for research support. The proliferation of research journals and the rewards associated with publication and with obtaining research grants have put substantial stress on the peer review system. Reviewers for journals or research agencies receive privileged information and must exert great care to avoid sharing such information with colleagues or allowing it to enter their own work prematurely.

Although the system of peer review is generally effective, it has been suggested that the quality of refereeing has declined, that self-interest has crept into the review process, and that some journal editors and reviewers exert inappropriate influence on the type of work they deem publishable. 23

Correction of Errors

At some level, all scientific reports, even those that mark profound advances, contain errors of fact or interpretation. In part, such errors reflect uncertainties intrinsic to the research process itself —a hypothesis is formulated, an experimental test is devised, and based on the interpretation of the results, the hypothesis is refined, revised, or discarded. Each step in this cycle is subject to error. For any given report, “correctness” is limited by the following:

  • The precision and accuracy of the measurements. These in turn depend on available technology, the use of proper statistical and analytical methods, and the skills of the investigator (a minimal numerical sketch of quantifying measurement precision follows this list).
  • Generality of the experimental system and approach. Studies must often be carried out using “model systems.” In biology, for example, a given phenomenon is examined in only one or a few among millions of organismal species.
  • Experimental design—a product of the background and expertise of the investigator.
  • Interpretation and speculation regarding the significance of the findings—judgments that depend on expert knowledge, experience, and the insightfulness and boldness of the investigator.
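
As a minimal numerical sketch of the first point above (the readings are invented, not drawn from any study), the precision of a repeated measurement is often summarized by the standard error of its mean:

```python
import statistics

# Hypothetical repeated readings of the same quantity.
readings = [9.81, 9.79, 9.83, 9.80, 9.82]

mean = statistics.mean(readings)
sem = statistics.stdev(readings) / len(readings) ** 0.5  # standard error of the mean

print(f"estimate = {mean:.3f} +/- {sem:.3f} (n = {len(readings)})")
```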

Viewed in this context, errors are an integral aspect of progress in attaining scientific knowledge. They are consequences of the fact that scientists seek fundamental truths about natural processes of vast complexity. In the best experimental systems, it is common that relatively few variables have been identified and that even fewer can be controlled experimentally. Even when important variables are accounted for, the interpretation of the experimental results may be incorrect and may lead to an erroneous conclusion. Such conclusions are sometimes overturned by the original investigator or by others when new insights from another study prompt a reexamination of older reported data. In addition, however, erroneous information can also reach the scientific literature as a consequence of misconduct in science.

What becomes of these errors or incorrect interpretations? Much has been made of the concept that science is “self-correcting”—that errors, whether honest or products of misconduct, will be exposed in future experiments because scientific truth is founded on the principle that results must be verifiable and reproducible. This implies that errors will generally not long confound the direction of thinking or experimentation in actively pursued areas of research. Clearly, published experiments are not routinely replicated precisely by independent investigators. However, each experiment is based on conclusions from prior studies; repeated failure of the experiment eventually calls into question those conclusions and leads to reevaluation of the measurements, generality, design, and interpretation of the earlier work.

Thus publication of a scientific report provides an opportunity for the community at large to critique and build on the substance of the report, and serves as one stage at which errors and misinterpretations can be detected and corrected. Each new finding is considered by the community in light of what is already known about the system investigated, and disagreements with established measurements and interpretations must be justified. For example, a particular interpretation of an electrical measurement of a material may implicitly predict the results of an optical experiment. If the reported optical results are in disagreement with the electrical interpretation, then the latter is unlikely to be correct—even though the measurements themselves were carefully and correctly performed. It is also possible, however, that the contradictory results are themselves incorrect, and this possibility will also be evaluated by the scientists working in the field. It is by this process of examination and reexamination that science advances.

The research endeavor can therefore be viewed as a two-tiered process: first, hypotheses are formulated, tested, and modified; second, results and conclusions are reevaluated in the course of additional study. In fact, the two tiers are interrelated, and the goals and traditions of science mandate major responsibilities in both areas for individual investigators. Importantly, the principle of self-correction does not diminish the responsibilities of the investigator in either area. The investigator has a fundamental responsibility to ensure that the reported results can be replicated in his or her laboratory. The scientific community in general adheres strongly to this principle, but practical constraints exist as a result of the availability of specialized instrumentation, research materials, and expert personnel. Other forces, such as competition, commercial interest, funding trends and availability, or pressure to publish may also erode the role of replication as a mechanism for fostering integrity in the research process. The panel is unaware of any quantitative studies of this issue.

The process of reevaluating prior findings is closely related to the formulation and testing of hypotheses. 24 Indeed, within an individual laboratory, the formulation/testing phase and the reevaluation phase are ideally ongoing interactive processes. In that setting, the precise replication of a prior result commonly serves as a crucial control in attempts to extend the original findings. It is not unusual that experimental flaws or errors of interpretation are revealed as the scope of an investigation deepens and broadens.

If new findings or significant questions emerge in the course of a reevaluation that affect the claims of a published report, the investigator is obliged to make public a correction of the erroneous result or to indicate the nature of the questions. Occasionally, this takes the form of a formal published retraction, especially in situations in which a central claim is found to be fundamentally incorrect or irreproducible. More commonly, a somewhat different version of the original experiment, or a revised interpretation of the original result, is published as part of a subsequent report that extends in other ways the initial work. Some concerns have been raised that such “revisions” can sometimes be so subtle and obscure as to be unrecognizable. Such behavior is, at best, a questionable research practice. Clearly, each scientist has a responsibility to foster an environment that encourages and demands rigorous evaluation and reevaluation of every key finding.

Much greater complexity is encountered when an investigator in one research group is unable to confirm the published findings of another. In such situations, precise replication of the original result is commonly not attempted because of the lack of identical reagents, differences in experimental protocols, diverse experimental goals, or differences in personnel. Under these circumstances, attempts to obtain the published result may simply be dropped if the central claim of the original study is not the major focus of the new study. Alternatively, the inability to obtain the original finding may be documented in a paper by the second investigator as part of a challenge to the original claim. In any case, such questions about a published finding usually provoke the initial investigator to attempt to reconfirm the original result, or to pursue additional studies that support and extend the original findings.

In accordance with established principles of science, scientists have the responsibility to replicate and reconfirm their results as a normal part of the research process. The cycles of theoretical and methodological formulation, testing, and reevaluation, both within and between laboratories, produce an ongoing process of revision and refinement that corrects errors and strengthens the fabric of research.

Research Training and Mentorship

The panel defined a mentor as that person directly responsible for the professional development of a research trainee. 25 Professional development includes both technical training, such as instruction in the methods of scientific research (e.g., research design, instrument use, and selection of research questions and data), and socialization in basic research practices (e.g., authorship practices and sharing of research data).

Positive Aspects of Mentorship

The relationship of the mentor and research trainee is usually characterized by extraordinary mutual commitment and personal involvement. A mentor, as a research advisor, is generally expected to supervise the work of the trainee and ensure that the trainee's research is completed in a sound, honest, and timely manner. The ideal mentor challenges the trainee, spurs the trainee to higher scientific achievement, and helps socialize the trainee into the community of scientists by demonstrating and discussing methods and practices that are not well understood.

Research mentors thus have complex and diverse roles. Many individuals excel in providing guidance and instruction as well as personal support, and some mentors are resourceful in providing funds and securing professional opportunities for their trainees. The mentoring relationship may also combine elements of other relationships, such as parenting, coaching, and guildmastering. One mentor has written that his “research group is like an extended family or small tribe, dependent on one another, but led by the mentor, who acts as their consultant, critic, judge, advisor, and scientific father” (Cram, 1989, p. 1). Another mentor described as “orphaned graduate students” trainees who had lost their mentors to death, job changes, or in other ways (Sindermann, 1987). Many students come to respect and admire their mentors, who act as role models for their younger colleagues.

Difficulties Associated with Mentorship

However, the mentoring relationship does not always function properly or even satisfactorily. Almost no literature exists that evaluates which problems are idiosyncratic and which are systemic. However, it is clear that traditional practices in the area of mentorship and training are under stress. In some research fields, for example, concerns are being raised about how the increasing size and diverse composition of research groups affect the quality of the relationship between trainee and mentor. As the size of research laboratories expands, the quality of the training environment is at risk (CGS, 1990a).

Large laboratories may provide valuable instrumentation and access to unique research skills and resources as well as an opportunity to work in pioneering fields of science. But as only one contribution to the efforts of a large research team, a graduate student's work may become highly specialized, leading to a narrowing of experience and greater dependency on senior personnel; in a period when the availability of funding may limit research opportunities, laboratory heads may find it necessary to balance research decisions for the good of the team against the individual educational interests of each trainee. Moreover, the demands of obtaining sufficient resources to maintain a laboratory in the contemporary research environment often separate faculty from their trainees. When laboratory heads fail to participate in the everyday workings of the laboratory—even for the most beneficent of reasons, such as finding funds to support young investigators—their inattention may harm their trainees' education.

Although the size of a research group can influence the quality of mentorship, the more important issues are the level of supervision received by trainees, the degree of independence that is appropriate for the trainees' experience and interests, and the allocation of credit for achievements that are accomplished by groups composed of individuals with different status. Certain studies involving large groups of 40 to 100 or more are commonly carried out by collaborative or hierarchical arrangements under a single investigator. These factors may affect the ability of research mentors to transmit the methods and ethical principles according to which research should be conducted.

Problems also arise when faculty members are not directly rewarded for their graduate teaching or training skills. Although faculty may receive indirect rewards from the contributions of well-trained graduate students to their own research as well as the satisfaction of seeing their students excelling elsewhere, these rewards may not be sufficiently significant in tenure or promotion decisions. When institutional policies fail to recognize and reward the value of good teaching and mentorship, the pressures to maintain stable funding for research teams in a competitive environment can overwhelm the time allocated to teaching and mentorship by a single investigator.

The increasing duration of the training period in many research fields is another source of concern, particularly when it prolongs the dependent status of the junior investigator. The formal period of graduate and postdoctoral training varies considerably among fields of study. In 1988, the median time to the doctorate from the baccalaureate degree was 6.5 years (NRC, 1989). The disciplinary median varied: 5.5 years in chemistry; 5.9 years in engineering; 7.1 years in health sciences and in earth, atmospheric, and marine sciences; and 9.0 years in anthropology and sociology. 26

Students, research associates, and faculty are currently raising various questions about the rights and obligations of trainees. Sexist behavior by some research directors and other senior scientists is a particular source of concern. Another significant concern is that research trainees may be subject to exploitation because of their subordinate status in the research laboratory, particularly when their income, access to research resources, and future recommendations are dependent on the goodwill of the mentor. Foreign students and postdoctoral fellows may be especially vulnerable, since their immigration status often depends on continuation of a research relationship with the selected mentor.

Inequalities between mentor and trainee can exacerbate ordinary conflicts such as the distribution of credit or blame for research error (NAS, 1989). When conflicts arise, the expectations and assumptions that govern authorship practices, ownership of intellectual property, and the giving of references and recommendations are exposed for professional—and even legal—scrutiny (Nelkin, 1984; Weil and Snapper, 1989).

Making Mentorship Better

Ideally, mentors and trainees should select each other with an eye toward scientific merit, intellectual and personal compatibility, and other relevant factors. But this situation operates only under conditions of freely available information and unconstrained choice —conditions that usually do not exist in academic research groups. The trainee may choose to work with a faculty member based solely on criteria of patronage, perceived influence, or ability to provide financial support.

Good mentors may be well known and highly regarded within their research communities and institutions. Unfortunately, individuals who exploit the mentorship relationship may be less visible. Poor mentorship practices may be self-correcting over time, if students can detect and avoid research groups characterized by disturbing practices. However, individual trainees who experience abusive relationships with a mentor may discover only too late that the practices that constitute the abuse were well known but were not disclosed to new initiates.

It is common practice for a graduate student to be supervised not only by an individual mentor but also by a committee that represents the graduate department or research field of the student. However, departmental oversight is rare for the postdoctoral research fellow. In order to foster good mentorship practices for all research trainees, many groups and institutions have taken steps to clarify the nature of individual and institutional responsibilities in the mentor–trainee relationship. 27

FINDINGS AND CONCLUSIONS

The self-regulatory system that characterizes the research process has evolved from a diverse set of principles, traditions, standards, and customs transmitted from senior scientists, research directors, and department chairs to younger scientists by example, discussion, and informal education. The principles of honesty, collegiality, respect for others, and commitment to dissemination, critical evaluation, and rigorous training are characteristic of all the sciences. Methods and techniques of experimentation, styles of communicating findings, the relationship between theory and experimentation, and laboratory groupings for research and for training vary with the particular scientific disciplines. Within those disciplines, practices combine the general with the specific. Ideally, research practices reflect the values of the wider research community and also embody the practical skills needed to conduct scientific research.

Practicing scientists are guided by the principles of science and the standard practices of their particular scientific discipline as well as their personal moral principles. But conflicts are inherent among these principles. For example, loyalty to one's group of colleagues can be in conflict with the need to correct or report an abuse of scientific practice on the part of a member of that group.

Because scientists and the achievements of science have earned the respect of society at large, the behavior of scientists must accord not only with the expectations of scientific colleagues, but also with those of a larger community. As science becomes more closely linked to economic and political objectives, the processes by which scientists formulate and adhere to responsible research practices will be subject to increasing public scrutiny. This is one reason for scientists and research institutions to clarify and strengthen the methods by which they foster responsible research practices.

Accordingly, the panel emphasizes the following conclusions:

The panel believes that the existing self-regulatory system in science is sound. But modifications are necessary to foster integrity in a changing research environment, to handle cases of misconduct in science, and to discourage questionable research practices.

Individual scientists have a fundamental responsibility to ensure that their results are reproducible, that their research is reported thoroughly enough so that results are reproducible, and that significant errors are corrected when they are recognized. Editors of scientific journals share these last two responsibilities.

Research mentors, laboratory directors, department heads, and senior faculty are responsible for defining, explaining, exemplifying, and requiring adherence to the value systems of their institutions. The neglect of sound training in a mentor's laboratory will over time compromise the integrity of the research process.

Administrative officials within the research institution also bear responsibility for ensuring that good scientific practices are observed in units of appropriate jurisdiction and that balanced reward systems appropriately recognize research quality, integrity, teaching, and mentorship. Adherence to scientific principles and disciplinary standards is at the root of a vital and productive research environment.

At present, scientific principles are passed on to trainees primarily by example and discussion, including training in customary practices. Most research institutions do not have explicit programs of instruction and discussion to foster responsible research practices, but the communication of values and traditions is critical to fostering responsible research practices and deterring misconduct in science.

Efforts to foster responsible research practices in areas such as data handling, communication and publication, and research training and mentorship deserve encouragement by the entire research community. Problems have also developed in these areas that require explicit attention and correction by scientists and their institutions. If not properly resolved, these problems may weaken the integrity of the research process.

1. See, for example, Kuyper (1991).

2. See, for example, the proposal by Pigman and Carmichael (1950).

3. See, for example, Holton (1988) and Ravetz (1971).

4. Several excellent books on experimental design and statistical methods are available. See, for example, Wilson (1952) and Beveridge (1957).

5. For a somewhat dated review of codes of ethics adopted by the scientific and engineering societies, see Chalk et al. (1981).

6. The discussion in this section is derived from Mark Frankel's background paper, “Professional Societies and Responsible Research Conduct,” included in Volume II of this report.

7. For a broader discussion on this point, see Zuckerman (1977).

8. For a full discussion of the roles of scientific societies in fostering responsible research practices, see the background paper prepared by Mark Frankel, “Professional Societies and Responsible Research Conduct,” in Volume II of this report.

9. Selected examples of academic research conduct policies and guidelines are included in Volume II of this report.

10. See, for example, Holton's response to the criticisms of Millikan in Chapter 12 of Thematic Origins of Scientific Thought (Holton, 1988). See also Holton (1978).

11. See, for example, responses to the Proceedings of the National Academy of Sciences action against Friedman: Hamilton (1990) and Abelson et al. (1990). See also the discussion in Bailar et al. (1990).

12. Much of the discussion in this section is derived from a background paper, “Reflections on the Current State of Data and Reagent Exchange Among Biomedical Researchers,” prepared by Robert Weinberg and included in Volume II of this report.

13. See, for example, Culliton (1990) and Bradshaw et al. (1990). For the impact of the inability to provide corroborating data or witnesses, also see Ross et al. (1989).

14. See, for example, Rennie (1989) and Cassidy and Shamoo (1989).

15. See, for example, the discussion on random data audits in Institute of Medicine (1989a), pp. 26-27.

16. For a full discussion of the practices and policies that govern authorship in the biological sciences, see Bailar et al. (1990).

17. Note that these general guidelines exclude the provision of reagents or facilities or the supervision of research as criteria for authorship.

18. A full discussion of problematic practices in authorship is included in Bailar et al. (1990). A controversial review of the responsibilities of co-authors is presented by Stewart and Feder (1987).

19. In the past, scientific papers often included a special note by a named researcher, not a co-author of the paper, who described, for example, a particular substance or procedure in a footnote or appendix. This practice seems to have been abandoned for reasons that are not well understood.

20. Martin et al. (1969), as cited in Sigma Xi (1986), p. 41.

21. Huth (1988) suggests a “notice of fraud or notice of suspected fraud” issued by the journal editor to call attention to the controversy (p. 38). Angell (1983) advocates closer coordination between institutions and editors when institutions have ascertained misconduct.

22. Such facilities include Cambridge Crystallographic Data Base, GenBank at Los Alamos National Laboratory, the American Type Culture Collection, and the Protein Data Bank at Brookhaven National Laboratory. Deposition is important for data that cannot be directly printed because of large volume.

23. For more complete discussions of peer review in the wider context, see, for example, Cole et al. (1977) and Chubin and Hackett (1990).

24. The strength of theories as sources of the formulation of scientific laws and predictive power varies among different fields of science. For example, theories derived from observations in the field of evolutionary biology lack a great deal of predictive power. The role of chance in mutation and natural selection is great, and the future directions that evolution may take are essentially impossible to predict. Theory has enormous power for clarifying understanding of how evolution has occurred and for making sense of detailed data, but its predictive power in this field is very limited. See, for example, Mayr (1982, 1988).

25. Much of the discussion on mentorship is derived from a background paper prepared for the panel by David Guston. A copy of the full paper, “Mentorship and the Research Training Experience,” is included in Volume II of this report.

26. Although the time to the doctorate is increasing, there is some evidence that the magnitude of the increase may be affected by the organization of the cohort chosen for study. In the humanities, the increased time to the doctorate is not as large if one chooses as an organizational base the year in which the baccalaureate was received by Ph.D. recipients, rather than the year in which the Ph.D. was completed; see Bowen et al. (1991).

27. Some universities have written guidelines for the supervision or mentorship of trainees as part of their institutional research policy guidelines (see, for example, the guidelines adopted by Harvard University and the University of Michigan that are included in Volume II of this report). Other groups or institutions have written “guidelines” (IOM, 1989a; NIH, 1990), “checklists” (CGS, 1990a), and statements of “areas of concern” and suggested “devices” (CGS, 1990c).

The guidelines often affirm the need for regular, personal interaction between the mentor and the trainee. They indicate that mentors may need to limit the size of their laboratories so that they are able to interact directly and frequently with all of their trainees. Although there are many ways to ensure responsible mentorship, methods that provide continuous feedback, whether through formal or informal mechanisms, are apt to be the most successful (CGS, 1990a). Departmental mentorship awards (comparable to teaching or research prizes) can recognize, encourage, and enhance the mentoring relationship. For other discussions on mentorship, see the paper by David Guston in Volume II of this report.

One group convened by the Institute of Medicine has suggested “that the university has a responsibility to ensure that the size of a research unit does not outstrip the mentor's ability to maintain adequate supervision” (IOM, 1989a, p. 85). Others have noted that although it may be desirable to limit the number of trainees assigned to a senior investigator, there is insufficient information at this time to suggest that numbers alone significantly affect the quality of research supervision (IOM, 1989a, p. 33).




12 Fact or Opinion


Thinking about the reason an author produced a source can be helpful to you because that reason was what dictated the kind of information they chose to include. Depending on that purpose, the author may have chosen to include factual, analytical, and objective information. Or, instead, it may have suited their purpose to include information that was subjective and therefore less factual and analytical. The author’s reason for producing the source also determined whether they included more than one perspective or just their own.

Authors typically want to do at least one of the following:

  • Inform and educate;
  • Sell services or products.

Combined Purposes

Sometimes authors have a combination of purposes, as when a marketer decides she can sell more smart phones with an informative sales video that also entertains us. The same is true when a singer writes and performs a song that entertains us but that she intends to make available for sale. Other examples of authors having multiple purposes occur in most scholarly writing.

In those cases, authors certainly want to inform and educate their audiences. But they also want to persuade their audiences that what they are reporting and/or postulating is a true description of a situation, event, or phenomenon, or a valid argument that their audience must take a particular action. In this blend of scholarly authors’ purposes, the intent to educate and inform is considered to trump the intent to persuade.

Why Intent Matters

Authors’ intent usually matters in how useful their information can be to your research project, depending on which information need you are trying to meet. For instance, when you’re looking for sources that will help you actually decide your answer to your research question or evidence for your answer that you will share with your audience, you will want the author’s main purpose to have been to inform or educate their audience. That’s because, with that intent, they are likely to have used:

  • Facts where possible.
  • Multiple perspectives instead of just their own.
  • Little subjective information.
  • Seemingly unbiased, objective language that cites where they got the information.

The reason you want that kind of resource when trying to answer your research question or explaining that answer is that all of those characteristics will lend credibility to the argument you are making with your project. Both you and your audience will simply find it easier to believe—will have more confidence in the argument being made—when you include those types of sources.

Sources whose authors intend only to persuade others won’t meet your information need for an answer to your research question or evidence with which to convince your audience. That’s because they don’t always confine themselves to facts. Instead, they tell us their opinions without backing them up with evidence. If you used those sources, your readers would notice and would be less likely to believe your argument.

Fact vs. Opinion vs. Objective vs. Subjective

Need to brush up on the differences between fact, objective information, subjective information, and opinion?

Fact – Facts are useful to inform or make an argument.

  • The country of Canada was established in 1867.
  • The pH levels in acids are lower than pH levels in alkalines.
  • Beethoven was a composer and pianist.

Opinion – Opinions are useful to persuade, but careful readers and listeners will notice and demand evidence to back them up.

  • That was a good movie.
  • Strawberries taste better than blueberries.
  • Beethoven’s reputation as a virtuoso pianist is overrated.

Objective – Objective information reflects a research finding or multiple perspectives that are not biased.

  • “Several studies show that an active lifestyle reduces the risk of heart disease and diabetes.”
  • “Studies from the Brown University Medical School show that twenty-somethings eat 25 percent more fast-food meals at this age than they did as teenagers.”

Subjective – Subjective information presents one person or organization’s perspective or interpretation. Subjective information can be meant to distort, or it can reflect educated and informed thinking. All opinions are subjective, but some are backed up with facts more than others.

  • “The simple truth is this: as human beings, we were meant to move.”
  • “In their thirties, women should stock up on calcium to ensure strong, dense bones and to ward off osteoporosis later in life.”*

*In this quote, it’s mostly the “should” that makes it subjective. The objective version of the last quote would read: “Studies have shown that women who begin taking calcium in their 30s show stronger bone density and fewer repercussions of osteoporosis than women who did not take calcium at all.” But perhaps there are other data showing complications from taking calcium. That’s why drawing the conclusion that requires a “should” makes the statement subjective.


Choosing & Using Sources: A Guide to Academic Research, 1st Canadian Edition Copyright © 2020 by Lindsey MacCallum is licensed under a Creative Commons Attribution 4.0 International License , except where otherwise noted.



What the data says about abortion in the U.S.

Pew Research Center has conducted many surveys about abortion over the years, providing a lens into Americans’ views on whether the procedure should be legal, among a host of other questions.

In a Center survey conducted nearly a year after the Supreme Court’s June 2022 decision that ended the constitutional right to abortion, 62% of U.S. adults said the practice should be legal in all or most cases, while 36% said it should be illegal in all or most cases. Another survey conducted a few months before the decision showed that relatively few Americans take an absolutist view on the issue.

Find answers to common questions about abortion in America, based on data from the Centers for Disease Control and Prevention (CDC) and the Guttmacher Institute, which have tracked these patterns for several decades:

  • How many abortions are there in the U.S. each year?
  • How has the number of abortions in the U.S. changed over time?
  • What is the abortion rate among women in the U.S.? How has it changed over time?
  • What are the most common types of abortion?
  • How many abortion providers are there in the U.S., and how has that number changed?
  • What percentage of abortions are for women who live in a different state from the abortion provider?
  • What are the demographics of women who have had abortions?
  • When during pregnancy do most abortions occur?
  • How often are there medical complications from abortion?

This compilation of data on abortion in the United States draws mainly from two sources: the Centers for Disease Control and Prevention (CDC) and the Guttmacher Institute, both of which have regularly compiled national abortion data for approximately half a century, and which collect their data in different ways.

The CDC data that is highlighted in this post comes from the agency’s “abortion surveillance” reports, which have been published annually since 1974 (and which have included data from 1969). Its figures from 1973 through 1996 include data from all 50 states, the District of Columbia and New York City – 52 “reporting areas” in all. Since 1997, the CDC’s totals have lacked data from some states (most notably California) for the years that those states did not report data to the agency. The four reporting areas that did not submit data to the CDC in 2021 – California, Maryland, New Hampshire and New Jersey – accounted for approximately 25% of all legal induced abortions in the U.S. in 2020, according to Guttmacher’s data. Most states, though,  do  have data in the reports, and the figures for the vast majority of them came from each state’s central health agency, while for some states, the figures came from hospitals and other medical facilities.

Discussion of CDC abortion data involving women’s state of residence, marital status, race, ethnicity, age, abortion history and the number of previous live births excludes the low share of abortions where that information was not supplied. Read the methodology for the CDC’s latest abortion surveillance report, which includes data from 2021, for more details. Previous reports can be found at stacks.cdc.gov by entering “abortion surveillance” into the search box.

For the numbers of deaths caused by induced abortions in 1963 and 1965, this analysis looks at reports by the then-U.S. Department of Health, Education and Welfare, a precursor to the Department of Health and Human Services. In computing those figures, we excluded abortions listed in the report under the categories “spontaneous or unspecified” or as “other.” (“Spontaneous abortion” is another way of referring to miscarriages.)

Guttmacher data in this post comes from national surveys of abortion providers that Guttmacher has conducted 19 times since 1973. Guttmacher compiles its figures after contacting every known provider of abortions – clinics, hospitals and physicians’ offices – in the country. It uses questionnaires and health department data, and it provides estimates for abortion providers that don’t respond to its inquiries. (In 2020, the last year for which it has released data on the number of abortions in the U.S., it used estimates for 12% of abortions.) For most of the 2000s, Guttmacher has conducted these national surveys every three years, each time getting abortion data for the prior two years. For each interim year, Guttmacher has calculated estimates based on trends from its own figures and from other data.

The latest full summary of Guttmacher data came in the institute’s report titled “Abortion Incidence and Service Availability in the United States, 2020.” It includes figures for 2020 and 2019 and estimates for 2018. The report includes a methods section.

In addition, this post uses data from StatPearls, an online health care resource, on complications from abortion.

How many abortions are there in the U.S. each year?

An exact answer is hard to come by. The CDC and the Guttmacher Institute have each tried to measure this for around half a century, but they use different methods and publish different figures.

The last year for which the CDC reported a yearly national total for abortions is 2021. It found there were 625,978 abortions in the District of Columbia and the 46 states with available data that year, up from 597,355 in those states and D.C. in 2020. The corresponding figure for 2019 was 607,720.

The last year for which Guttmacher reported a yearly national total was 2020. It said there were 930,160 abortions that year in all 50 states and the District of Columbia, compared with 916,460 in 2019.

  • How the CDC gets its data: It compiles figures that are voluntarily reported by states’ central health agencies, including separate figures for New York City and the District of Columbia. Its latest totals do not include figures from California, Maryland, New Hampshire or New Jersey, which did not report data to the CDC. (Read the methodology from the latest CDC report.)
  • How Guttmacher gets its data: It compiles its figures after contacting every known abortion provider – clinics, hospitals and physicians’ offices – in the country. It uses questionnaires and health department data, then provides estimates for abortion providers that don’t respond. Guttmacher’s figures are higher than the CDC’s in part because they include data (and in some instances, estimates) from all 50 states. (Read the institute’s latest full report and methodology.)

While the Guttmacher Institute supports abortion rights, its empirical data on abortions in the U.S. has been widely cited by groups and publications across the political spectrum, including by a number of those that disagree with its positions.

These estimates from Guttmacher and the CDC are results of multiyear efforts to collect data on abortion across the U.S. Last year, Guttmacher also began publishing less precise estimates every few months, based on a much smaller sample of providers.

The figures reported by these organizations include only legal induced abortions conducted by clinics, hospitals or physicians’ offices, or those that make use of abortion pills dispensed from certified facilities such as clinics or physicians’ offices. They do not account for the use of abortion pills that were obtained outside of clinical settings.

A line chart showing the changing number of legal abortions in the U.S. since the 1970s.

The annual number of U.S. abortions rose for years after Roe v. Wade legalized the procedure in 1973, reaching its highest levels around the late 1980s and early 1990s, according to both the CDC and Guttmacher. Since then, abortions have generally decreased at what a CDC analysis called “a slow yet steady pace.”

Guttmacher says the number of abortions occurring in the U.S. in 2020 was 40% lower than it was in 1991. According to the CDC, the number was 36% lower in 2021 than in 1991, looking just at the District of Columbia and the 46 states that reported data in both of those years.

(The corresponding line graph shows the long-term trend in the number of legal abortions reported by both organizations. To allow for consistent comparisons over time, the CDC figures in the chart have been adjusted to ensure that the same states are counted from one year to the next. Using that approach, the CDC figure for 2021 is 622,108 legal abortions.)
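
As a minimal arithmetic sketch of how a percentage decline like the figures above is computed: the 1991 total is not quoted in this post, so the earlier-year count below is a hypothetical placeholder, while 622,108 is the adjusted 2021 CDC figure from the chart note just above.

    # Minimal sketch: percent change between two annual totals.
    # The 1991 count below is a hypothetical placeholder; 622,108 is the adjusted
    # CDC figure for 2021 cited in the chart note above.

    def percent_change(old, new):
        return (new - old) / old * 100

    print(f"{percent_change(old=975_000, new=622_108):.0f}%")  # about -36% with these inputs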

There have been occasional breaks in this long-term pattern of decline – during the middle of the first decade of the 2000s, and then again in the late 2010s. The CDC reported modest 1% and 2% increases in abortions in 2018 and 2019, and then, after a 2% decrease in 2020, a 5% increase in 2021. Guttmacher reported an 8% increase over the three-year period from 2017 to 2020.

As noted above, these figures do not include abortions that use pills obtained outside of clinical settings.

Guttmacher says that in 2020 there were 14.4 abortions in the U.S. per 1,000 women ages 15 to 44. Its data shows that the rate of abortions among women has generally been declining in the U.S. since 1981, when it reported there were 29.3 abortions per 1,000 women in that age range.

The CDC says that in 2021, there were 11.6 abortions in the U.S. per 1,000 women ages 15 to 44. (That figure excludes data from California, the District of Columbia, Maryland, New Hampshire and New Jersey.) Like Guttmacher’s data, the CDC’s figures also suggest a general decline in the abortion rate over time. In 1980, when the CDC reported on all 50 states and D.C., it said there were 25 abortions per 1,000 women ages 15 to 44.

That said, both Guttmacher and the CDC say there were slight increases in the rate of abortions during the late 2010s and early 2020s. Guttmacher says the abortion rate per 1,000 women ages 15 to 44 rose from 13.5 in 2017 to 14.4 in 2020. The CDC says it rose from 11.2 per 1,000 in 2017 to 11.4 in 2019, before falling back to 11.1 in 2020 and then rising again to 11.6 in 2021. (The CDC’s figures for those years exclude data from California, D.C., Maryland, New Hampshire and New Jersey.)
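
As a minimal sketch of how a rate per 1,000 women is derived: the abortion count below is Guttmacher’s 2020 total cited earlier in this post, while the population denominator is an assumed placeholder chosen so the result lands near the reported 14.4, not a figure from the post.

    # Minimal sketch: abortions per 1,000 women ages 15 to 44.
    # 930,160 is Guttmacher's 2020 total from this post; the population figure is
    # an assumed placeholder, not a number from the post.

    def rate_per_1000(abortions, women_15_to_44):
        return abortions / women_15_to_44 * 1000

    print(f"{rate_per_1000(930_160, 64_600_000):.1f}")  # about 14.4 with these inputs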

The CDC broadly divides abortions into two categories: surgical abortions and medication abortions, which involve pills. Since the Food and Drug Administration first approved abortion pills in 2000, their use has increased over time as a share of abortions nationally, according to both the CDC and Guttmacher.

The majority of abortions in the U.S. now involve pills, according to both the CDC and Guttmacher. The CDC says 56% of U.S. abortions in 2021 involved pills, up from 53% in 2020 and 44% in 2019. Its figures for 2021 include the District of Columbia and 44 states that provided this data; its figures for 2020 include D.C. and 44 states (though not all of the same states as in 2021), and its figures for 2019 include D.C. and 45 states.

Guttmacher, which measures this every three years, says 53% of U.S. abortions involved pills in 2020, up from 39% in 2017.

Two pills commonly used together for medication abortions are mifepristone, which, taken first, blocks hormones that support a pregnancy, and misoprostol, which then causes the uterus to empty. According to the FDA, medication abortions are safe until 10 weeks into pregnancy.

Surgical abortions conducted during the first trimester of pregnancy typically use a suction process, while the relatively few surgical abortions that occur during the second trimester of a pregnancy typically use a process called dilation and evacuation, according to the UCLA School of Medicine.

In 2020, there were 1,603 facilities in the U.S. that provided abortions, according to Guttmacher. This included 807 clinics, 530 hospitals and 266 physicians’ offices.

A horizontal stacked bar chart showing the decline in the total number of abortion providers since 1982.

While clinics make up half of the facilities that provide abortions, they are the sites where the vast majority (96%) of abortions are administered, either through procedures or the distribution of pills, according to Guttmacher’s 2020 data. (This includes 54% of abortions that are administered at specialized abortion clinics and 43% at nonspecialized clinics.) Hospitals made up 33% of the facilities that provided abortions in 2020 but accounted for only 3% of abortions that year, while just 1% of abortions were conducted by physicians’ offices.

Looking just at clinics – that is, the total number of specialized abortion clinics and nonspecialized clinics in the U.S. – Guttmacher found the total virtually unchanged between 2017 (808 clinics) and 2020 (807 clinics). However, there were regional differences. In the Midwest, the number of clinics that provide abortions increased by 11% during those years, and in the West by 6%. The number of clinics decreased during those years by 9% in the Northeast and 3% in the South.

The total number of abortion providers has declined dramatically since the 1980s. In 1982, according to Guttmacher, there were 2,908 facilities providing abortions in the U.S., including 789 clinics, 1,405 hospitals and 714 physicians’ offices.

The CDC does not track the number of abortion providers.

In the District of Columbia and the 46 states that provided abortion and residency information to the CDC in 2021, 10.9% of all abortions were performed on women known to live outside the state where the abortion occurred – slightly higher than the percentage in 2020 (9.7%). That year, D.C. and 46 states (though not the same ones as in 2021) reported abortion and residency data. (The total number of abortions used in these calculations included figures for women with both known and unknown residential status.)
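
As a minimal sketch of the share calculation described above, using hypothetical counts: the key detail is that the denominator combines abortions with known and unknown residential status.

    # Minimal sketch: share of abortions performed on known out-of-state residents.
    # All counts are hypothetical placeholders; the denominator combines known
    # in-state, known out-of-state and unknown-residence abortions, as noted above.

    out_of_state = 65_000        # hypothetical
    in_state = 520_000           # hypothetical
    unknown_residence = 11_000   # hypothetical

    share = out_of_state / (out_of_state + in_state + unknown_residence) * 100
    print(f"{share:.1f}%")  # about 10.9% with these inputs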

The share of reported abortions performed on women outside their state of residence was much higher before the 1973 Roe decision that stopped states from banning abortion. In 1972, 41% of all abortions in D.C. and the 20 states that provided this information to the CDC that year were performed on women outside their state of residence. In 1973, the corresponding figure was 21% in the District of Columbia and the 41 states that provided this information, and in 1974 it was 11% in D.C. and the 43 states that provided data.

In the District of Columbia and the 46 states that reported age data to the CDC in 2021, the majority of women who had abortions (57%) were in their 20s, while about three-in-ten (31%) were in their 30s. Teens ages 13 to 19 accounted for 8% of those who had abortions, while women ages 40 to 44 accounted for about 4%.

The vast majority of women who had abortions in 2021 were unmarried (87%), while married women accounted for 13%, according to the CDC, which had data on this from 37 states.

A pie chart showing that, in 2021, the majority of abortions were for women who had never had one before.

In the District of Columbia, New York City (but not the rest of New York) and the 31 states that reported racial and ethnic data on abortion to the CDC, 42% of all women who had abortions in 2021 were non-Hispanic Black, while 30% were non-Hispanic White, 22% were Hispanic and 6% were of other races.

Looking at abortion rates among those ages 15 to 44, there were 28.6 abortions per 1,000 non-Hispanic Black women in 2021; 12.3 abortions per 1,000 Hispanic women; 6.4 abortions per 1,000 non-Hispanic White women; and 9.2 abortions per 1,000 women of other races, the CDC reported from those same 31 states, D.C. and New York City.

For 57% of U.S. women who had induced abortions in 2021, it was the first time they had ever had one, according to the CDC. For nearly a quarter (24%), it was their second abortion. For 11% of women who had an abortion that year, it was their third, and for 8% it was their fourth or more. These CDC figures include data from 41 states and New York City, but not the rest of New York.

A bar chart showing that most U.S. abortions in 2021 were for women who had previously given birth.

Nearly four-in-ten women who had abortions in 2021 (39%) had no previous live births at the time they had an abortion, according to the CDC. Almost a quarter (24%) of women who had abortions in 2021 had one previous live birth, 20% had two previous live births, 10% had three, and 7% had four or more previous live births. These CDC figures include data from 41 states and New York City, but not the rest of New York.

The vast majority of abortions occur during the first trimester of a pregnancy. In 2021, 93% of abortions occurred during the first trimester – that is, at or before 13 weeks of gestation, according to the CDC. An additional 6% occurred between 14 and 20 weeks of pregnancy, and about 1% were performed at 21 weeks or more of gestation. These CDC figures include data from 40 states and New York City, but not the rest of New York.

About 2% of all abortions in the U.S. involve some type of complication for the woman, according to an article in StatPearls, an online health care resource. “Most complications are considered minor such as pain, bleeding, infection and post-anesthesia complications,” according to the article.

The CDC calculates case-fatality rates for women from induced abortions – that is, how many women die from abortion-related complications for every 100,000 legal abortions that occur in the U.S. The rate was lowest during the most recent period examined by the agency (2013 to 2020), when there were 0.45 deaths to women per 100,000 legal induced abortions. The case-fatality rate reported by the CDC was highest during the first period examined by the agency (1973 to 1977), when it was 2.09 deaths to women per 100,000 legal induced abortions. During the five-year periods in between, the figure ranged from 0.52 (from 1993 to 1997) to 0.78 (from 1978 to 1982).

The CDC calculates death rates over five-year and seven-year periods because of year-to-year fluctuation in the numbers and the relatively low number of women who die from legal induced abortions.
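
As a minimal sketch of the case-fatality calculation, pooling a multiyear period as described above; both counts below are hypothetical placeholders, not CDC figures.

    # Minimal sketch: case-fatality rate per 100,000 legal induced abortions,
    # pooled over a multiyear period to smooth out year-to-year fluctuation.
    # Both counts are hypothetical placeholders, not CDC figures.

    def case_fatality_rate(deaths, legal_abortions):
        return deaths / legal_abortions * 100_000

    print(round(case_fatality_rate(deaths=30, legal_abortions=6_700_000), 2))  # about 0.45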

In 2020, the last year for which the CDC has information, six women in the U.S. died due to complications from induced abortions. Four women died in this way in 2019, two in 2018, and three in 2017. (These deaths all followed legal abortions.) Since 1990, the annual number of deaths among women due to legal induced abortion has ranged from two to 12.

The annual number of reported deaths from induced abortions (legal and illegal) tended to be higher in the 1980s, when it ranged from nine to 16, and from 1972 to 1979, when it ranged from 13 to 63. One driver of the decline was the drop in deaths from illegal abortions. There were 39 deaths from illegal abortions in 1972, the last full year before Roe v. Wade. The total fell to 19 in 1973 and to single digits or zero every year after that. (The number of deaths from legal abortions has also declined since then, though with some slight variation over time.)

The number of deaths from induced abortions was considerably higher in the 1960s than afterward. For instance, there were 119 deaths from induced abortions in 1963 and 99 in 1965, according to reports by the then-U.S. Department of Health, Education and Welfare, a precursor to the Department of Health and Human Services. The CDC is a division of Health and Human Services.

Note: This is an update of a post originally published May 27, 2022, and first updated June 24, 2022.

