Statology


What is a Confounding Variable? (Definition & Example)

In any experiment, there are two main variables:

The independent variable:  the variable that an experimenter changes or controls so that they can observe the effects on the dependent variable.

The dependent variable:  the variable being measured in an experiment that is “dependent” on the independent variable.


Researchers are often interested in understanding how changes in the independent variable affect the dependent variable.

However, sometimes there is a third variable that is not accounted for that can affect the relationship between the two variables under study.

Confounding variable

This type of variable is known as a confounding variable, and it can confound the results of a study by making it appear that a cause-and-effect relationship exists between two variables when in fact it does not.

Confounding variable: A variable that is not included in an experiment, yet affects the relationship between the two variables in an experiment. This type of variable can confound the results of an experiment and lead to unreliable findings.

For example, suppose a researcher collects data on ice cream sales and shark attacks and finds that the two variables are highly correlated. Does this mean that increased ice cream sales cause more shark attacks?

That’s unlikely. The more likely cause is the confounding variable temperature. When it is warmer outside, more people buy ice cream and more people go in the ocean.


Requirements for Confounding Variables

In order for a variable to be a confounding variable, it must meet the following requirements:

1. It must be correlated with the independent variable.

In the previous example, temperature was correlated with the independent variable of ice cream sales. In particular, warmer temperatures are associated with higher ice cream sales and cooler temperatures are associated with lower sales.

2. It must have a causal relationship with the dependent variable.

In the previous example, temperature had a direct causal effect on the number of shark attacks. In particular, warmer temperatures cause more people to go into the ocean which directly increases the probability of shark attacks occurring.
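To make this concrete, here is a minimal simulation in Python (all numbers are invented): temperature is correlated with ice cream sales (requirement 1) and has a causal effect on shark attacks (requirement 2), so sales and attacks end up correlated even though neither causes the other.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 365

# Temperature drives both variables; neither causes the other.
temperature = rng.normal(20, 8, n)                        # daily temperature
ice_cream_sales = 50 + 10 * temperature + rng.normal(0, 30, n)
shark_attacks = 0.05 * temperature + rng.normal(0, 0.5, n)

# The confounder induces a correlation between sales and attacks.
print(np.corrcoef(ice_cream_sales, shark_attacks)[0, 1])  # clearly positive

# Holding temperature roughly constant weakens the association dramatically.
hot = temperature > 25
print(np.corrcoef(ice_cream_sales[hot], shark_attacks[hot])[0, 1])
```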

Why Are Confounding Variables Problematic?

Confounding variables are problematic for two reasons:

1. Confounding variables can make it seem that cause-and-effect relationships exist when they don’t.

In our previous example, the confounding variable of temperature made it seem like there existed a cause-and-effect relationship between ice cream sales and shark attacks.

However, we know that ice cream sales don’t cause shark attacks. The confounding variable of temperature just made it seem this way.

2. Confounding variables can mask the true cause-and-effect relationship between variables.

Suppose we’re studying the ability of exercise to reduce blood pressure. One potential confounding variable is starting weight, which is correlated with exercise and has a direct causal effect on blood pressure.


While increased exercise may lead to reduced blood pressure, an individual’s starting weight also has a big impact on the relationship between these two variables.

Confounding Variables & Internal Validity

In technical terms, confounding variables affect the internal validity of a study, which refers to how valid it is to attribute any changes in the dependent variable to changes in the independent variable.

When confounding variables are present, we can’t always say with complete confidence that the changes we observe in the dependent variable are a direct result of changes in the independent variable.

How to Reduce the Effect of Confounding Variables

There are several ways to reduce the effect of confounding variables, including the following methods:

1. Random Assignment

Random assignment refers to the process of randomly assigning individuals in a study to either a treatment group or a control group.

For example, suppose we want to study the effect of a new pill on blood pressure. If we recruit 100 individuals to participate in the study then we might use a random number generator to randomly assign 50 individuals to a control group (no pill) and 50 individuals to a treatment group (new pill).

By using random assignment, we increase the chances that the two groups will have roughly similar characteristics, which means that any difference we observe between the two groups can be attributed to the treatment.

This means the study should have internal validity  – it’s valid to attribute any differences in blood pressure between the groups to the pill itself as opposed to differences between the individuals in the groups.
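For instance, here is a minimal sketch of that assignment procedure (the subject IDs are hypothetical, and any random number generator would do):

```python
import numpy as np

rng = np.random.default_rng(42)
participants = [f"subject_{i:03d}" for i in range(100)]  # hypothetical IDs

# Shuffle the full sample, then split it in half.
shuffled = rng.permutation(participants)
control = shuffled[:50]    # no pill
treatment = shuffled[50:]  # new pill

print(len(control), len(treatment))  # 50 50
```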

2. Blocking

Blocking refers to the practice of dividing individuals in a study into “blocks” based on some value of a confounding variable to eliminate the effect of the confounding variable.

For example, suppose researchers want to understand the effect that a new diet has on weight loss. The independent variable is the new diet and the dependent variable is the amount of weight loss.

However, a confounding variable that will likely cause variation in weight loss is gender. It’s likely that the gender of an individual will affect the amount of weight they’ll lose, regardless of whether the new diet works or not.

One way to handle this problem is to place individuals into one of two blocks:

  • Male
  • Female

Then, within each block we would randomly assign individuals to one of two treatments:

  • The new diet
  • A standard diet

By doing this, the variation within each block would be much lower compared to the variation among all individuals and we would be able to gain a better understanding of how the new diet affects weight loss while controlling for gender.
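A minimal sketch of this blocked randomization, assuming hypothetical participant records with a gender field:

```python
import random

random.seed(1)

# Hypothetical participant records with a gender field.
participants = [{"id": i, "gender": random.choice(["male", "female"])}
                for i in range(40)]

# Step 1: divide participants into blocks by the confounding variable.
blocks = {"male": [], "female": []}
for p in participants:
    blocks[p["gender"]].append(p)

# Step 2: randomize to a treatment *within* each block.
assignment = {}
for block in blocks.values():
    random.shuffle(block)
    half = len(block) // 2
    for p in block[:half]:
        assignment[p["id"]] = "new diet"
    for p in block[half:]:
        assignment[p["id"]] = "standard diet"
```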

3. Matching

A matched pairs design is a type of experimental design in which we “match” individuals based on values of potential confounding variables.

For example, suppose researchers want to know how a new diet affects weight loss compared to a standard diet. Two potential confounding variables in this situation are age and gender.

To account for this, researchers recruit 100 subjects, then group the subjects into 50 pairs based on their age and gender. For example:

  • A 25-year-old male will be paired with another 25-year-old male, since they “match” in terms of age and gender.
  • A 30-year-old female will be paired with another 30-year-old female since they also match on age and gender, and so on.

Then, within each pair, one subject will randomly be assigned to follow the new diet for 30 days and the other subject will be assigned to follow the standard diet for 30 days.

At the end of the 30 days, researchers will measure the total weight loss for each subject.
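A minimal sketch of this matched pairs procedure, using hypothetical subjects and matching on exact age and gender:

```python
import random
from collections import defaultdict

random.seed(7)

# Hypothetical subjects with the two matching variables.
subjects = [{"id": i,
             "age": random.choice([25, 30, 35]),
             "gender": random.choice(["M", "F"])}
            for i in range(100)]

# Group subjects who "match" on age and gender, then form pairs.
groups = defaultdict(list)
for s in subjects:
    groups[(s["age"], s["gender"])].append(s)

pairs = []
for members in groups.values():
    random.shuffle(members)
    while len(members) >= 2:
        pairs.append((members.pop(), members.pop()))

# Within each pair, randomly assign one subject to each diet.
assignment = {}
for a, b in pairs:
    first, second = random.sample([a, b], 2)
    assignment[first["id"]] = "new diet"
    assignment[second["id"]] = "standard diet"
```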


By using this type of design, researchers can be confident that any differences in weight loss can be attributed to the type of diet used rather than the confounding variables age and gender.

This type of design does have a few drawbacks, including:

1. Losing two subjects if one drops out.  If one subject decides to drop out of the study, you actually lose two subjects since you no longer have a complete pair.

2. Time-consuming to find matches . It can be quite time-consuming to find subjects who match on certain variables, such as gender and age.

3. Impossible to match subjects perfectly . No matter how hard you try, there will always be some variation within the subjects in each pair.

However, if a study has the resources available to implement this design it can be highly effective at eliminating the effects of confounding variables.


Published by Zach


7 Different Ways to Control for Confounding

Confounding can be controlled in the design phase of the study by using:

  • Random assignment
  • Restriction
  • Matching

Or in the data analysis phase by using:

  • Stratification
  • Regression
  • Inverse probability weighting
  • Instrumental variable estimation


In what follows, we will explain how each of these methods works, and discuss its advantages and limitations.

1. Random assignment

How it works.

Random assignment is a process by which participants are assigned, with the same chance, to either receive a certain exposure or not.


Randomizing the exposure adjusts for confounding by eliminating the influence of the confounder on the probability of receiving the exposure; that is, it removes the association between the confounder and the exposure.

Advantage of random assignment:

Random assignment controls for confounding due to both measurable and unmeasurable causes. So it is especially useful when confounding variables are unknown or cannot be measured.

It also controls for time-varying confounders, that is, when the exposure and the confounders are measured repeatedly in studies where participants are followed over time.

Limitation of random assignment:

Here are 3 reasons not to use random assignment:

  • Ethical reason: Randomizing participants would be unethical when studying the effect of a harmful exposure, or on the contrary, when it is known for certain that the exposure is beneficial.
  • Practical reason: Some exposures are very hard to randomize, like air pollution and education. Also, random assignment is not an option when we are analyzing observational data that we did not collect ourselves.
  • Financial reason: Random assignment is a part of experimental designs where participants are followed over time, which turns out to be highly expensive in some cases.

Whenever the exposure cannot be randomly assigned to study participants, we will have to use an observational design and control for confounding by using another method from this list.

2. Restriction

Restriction refers to only including in the study participants of a certain confounder category, thereby eliminating its confounding effect.

For instance, if the relationship between smoking (the exposure) and heart disease (the outcome) is confounded by income, then restricting our study to only include participants of the same income category will eliminate its confounding effect.

Advantage of restriction:

Unlike random assignment, restriction is easy to apply and also works for observational studies.

Limitation of restriction:

The biggest problem with restricting our study to 1 category of the confounder is that the results will not generalize well to the other categories. So restriction will limit the external validity of the study especially in cases where we have more than 1 confounder to control for.

3. Matching

Matching works by distributing the confounding variable evenly between the exposed and the unexposed groups.

The idea is to pair each exposed subject with an unexposed subject that shares the same characteristics regarding the variable that we want to control for. Then, by only analyzing participants for whom we found a match, we eliminate the confounding effect of that variable.

For example, suppose we want to control for income as a confounder of the relationship between smoking (the exposure) and heart disease (the outcome).

[Figure: causal diagram representing the confounding effect of income.]

In this case, each smoker should be matched with a non-smoker of the same income category.

Here’s a step-by-step description of how this works:

Initially: The confounder is unequally distributed among the exposed and unexposed groups.


Step 1: Match each smoker with a non-smoker of the same income category.


Step 2: Exclude all unmatched participants from the study.


Result: The 2 groups will be balanced regarding the confounding variable.

Advantage of matching

Matching can be easy to apply in certain cases. For instance, matching on income in the example above can be done by selecting 1 smoker and 1 non-smoker from the same family, therefore having the same household income.

Limitation of matching

The more confounding variables we have to control for, the more difficult matching becomes, especially for continuous variables. The problem with matching on many characteristics is that a lot of participants will end up unmatched.

4. Stratification

Stratification controls for confounding by estimating the relationship between the exposure and the outcome within different subsets of the confounding variable, and then pooling these estimates.

Stratification works because, within each subset, the value of the confounder is the same for all participants and therefore cannot affect the estimated effect of the exposure on the outcome.

Here’s a step-by-step description of how to conduct a stratified analysis:

Step 1: Start by splitting the data into multiple subgroups (a.k.a. strata) according to the different categories of the confounding variable.


Step 2: Within each subgroup (or stratum), estimate the relationship between the exposure and the outcome.


Step 3: Pool the obtained estimates:

  • By averaging them.
  • Or by weighting them by the size of each stratum — a method called standardization.


Result: The pooled estimate will be free of confounding.
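A minimal sketch of such a stratified analysis on a small, invented data set, pooling stratum-specific risk differences weighted by stratum size (standardization):

```python
import pandas as pd

# Hypothetical data: binary exposure and outcome, one binary confounder.
df = pd.DataFrame({
    "exposure":   [1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0],
    "outcome":    [1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0],
    "confounder": ["low", "low", "low", "low", "high", "high",
                   "high", "high", "low", "low", "high", "high"],
})

estimates, weights = [], []
for stratum, sub in df.groupby("confounder"):
    # Step 2: estimate the exposure-outcome association within the stratum.
    risk_exposed = sub.loc[sub.exposure == 1, "outcome"].mean()
    risk_unexposed = sub.loc[sub.exposure == 0, "outcome"].mean()
    estimates.append(risk_exposed - risk_unexposed)
    weights.append(len(sub))

# Step 3: pool the estimates, weighting by stratum size (standardization).
pooled = sum(e * w for e, w in zip(estimates, weights)) / sum(weights)
print(f"standardized risk difference: {pooled:.2f}")
```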

Advantage of stratification

Stratified analysis is an old and intuitive method used to teach the logic of controlling for confounding. A more modern and practical approach would be regression analysis, which is next on our list.

Limitation of stratification

Stratification does not scale well, since controlling for multiple confounders simultaneously will lead to:

  • Complex calculations.
  • Subgroups that contain very few participants, and these will reflect the noise in the data more so than real effects.

5. Regression

Adjusting for confounding using regression simply means to include the confounding variable in the model used to estimate the influence of the exposure on the outcome.

A linear regression model, for example, will be of the form:

Outcome = β0 + β1 × Exposure + β2 × Confounder + ε

Where the coefficient β1 will reflect the effect of the exposure on the outcome adjusted for the confounder.
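As a rough illustration, here is a sketch using Python's statsmodels formula API on synthetic data (variable names and effect sizes are invented): including the confounder in the model changes the exposure coefficient noticeably.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5_000
confounder = rng.normal(size=n)
exposure = 0.8 * confounder + rng.normal(size=n)  # confounder -> exposure
outcome = 1.5 * exposure + 2.0 * confounder + rng.normal(size=n)  # true effect: 1.5

df = pd.DataFrame({"outcome": outcome, "exposure": exposure,
                   "confounder": confounder})

naive = smf.ols("outcome ~ exposure", data=df).fit()
adjusted = smf.ols("outcome ~ exposure + confounder", data=df).fit()

print(naive.params["exposure"])     # confounded estimate, well above 1.5
print(adjusted.params["exposure"])  # adjusted estimate, close to 1.5
```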

Advantage of regression

Regression can easily control for multiple confounders simultaneously, as this simply means adding more variables to the model.

For more details on how to use it in practice, I wrote a separate article: An Example of Identifying and Adjusting for Confounding .

Limitation of regression

A regression model operates under certain assumptions that must be respected. For example, for linear regression these are:

  • A linear relationship between the predictors (the exposure and the confounder) and the outcome.
  • Independence, normality, and equal variance of the residuals.

6. Inverse probability weighting

Inverse probability weighting eliminates confounding by equalizing the frequency of the confounder between the exposed and the unexposed groups. This is done by counting each participant as many times as their inverse probability of being in a certain exposure category.

Here’s a step-by-step description of the process:

Suppose we want to control for income as a confounder of the relationship between smoking (the exposure) and heart disease (the outcome):


Initially: Since income and smoking are associated, participants of different income levels will have different probabilities of being smokers.


First, let’s focus on high income participants:


Step 1: Calculate the probability “P” that a person is a smoker.


Step 2: Calculate the probability that a person is a non-smoker.


Step 3: Multiply each person by the inverse of their calculated probability. So each participant will no longer count as 1 person in the analysis. Instead, each will be counted as many times as their calculated inverse probability weight (i.e. 1 person will be 1/P persons).


Now the smoking group has 1 × 5 = 5 participants, and the non-smoking group also has 4 × 5/4 = 5 participants.

Finally, we have to repeat steps 1, 2, and 3 for participants in the low-income category.


Result: The smoker and non-smoker groups are now balanced regarding income. So its confounding effect will be eliminated because it is no longer associated with the exposure.
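The arithmetic above is easy to verify in a few lines:

```python
# High-income group: 5 participants, 1 smoker and 4 non-smokers.
n_high, n_smokers, n_nonsmokers = 5, 1, 4

p_smoker = n_smokers / n_high        # Step 1: P = 1/5
p_nonsmoker = n_nonsmokers / n_high  # Step 2: P = 4/5

# Step 3: each person counts as 1/P persons.
weighted_smokers = n_smokers * (1 / p_smoker)           # 1 x 5   = 5
weighted_nonsmokers = n_nonsmokers * (1 / p_nonsmoker)  # 4 x 5/4 = 5

print(weighted_smokers, weighted_nonsmokers)  # 5.0 5.0 -> balanced
```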

Advantage of inverse probability weighting

This method belongs to the class of so-called G-methods, which are used to control for time-varying confounders, that is, when the exposure and the confounders are measured repeatedly in studies where participants are followed over time.

Limitation of inverse probability weighting

If some participants have very large weights (i.e. when their probability of being in a certain exposure category is very low), then each of these participants would be counted as a large number of people, which leads to instability in the estimation of the causal effect of the exposure on the outcome.

One solution would be to exclude from the study participants with very high or very low weights.

7. Instrumental variable estimation

The instrumental variable method estimates the unconfounded effect of the exposure on the outcome indirectly by using a variable — the instrumental variable — that represents the exposure but is not affected by confounding.

An instrumental variable satisfies 3 properties:

  • It causes the exposure.
  • It does not cause the outcome directly — it affects the outcome only through the exposure.
  • Its association with the outcome is unconfounded.

Here’s a diagram that represents the relationship between the instrumental variable, the exposure, and the outcome:

[Figure: causal diagram linking the instrumental variable, the exposure, and the outcome.]

An instrumental variable is chosen so that nothing appears to cause it. So in a sense, it resembles the coin flip in a randomized experiment, because it appears to be randomly assigned.

How can the instrumental variable be used to study causality?

Looking at the data, if an association is found between the instrumental variable and the outcome then it must be causal, since according to property (3) above, their relationship is unconfounded. And because the instrumental variable affects the outcome only through the exposure, according to property (2), we can conclude that the exposure has a causal effect on the outcome.


So how do we quantify this causal, unconfounded effect of the exposure on the outcome?

Let α1 denote the magnitude of the causal effect of the instrumental variable on the exposure, and β1 that of the exposure on the outcome.

Our objective is to find β1.

Note that the simple regression model between the exposure and the outcome:

Outcome = β0 + β1 × Exposure

produces a confounded estimate of β1 and therefore does not reflect the true β1 that we are searching for.

So how do we find this true, unconfounded β1?

Technically, if we think in terms of linear regression:

  • α1 is the change in the exposure CAUSED by a 1-unit change in the instrumental variable.
  • β1 is the change in the outcome CAUSED by a 1-unit change in the exposure.

It follows that a 1-unit change in the instrumental variable CAUSES an α1 × β1 change in the outcome (since the instrumental variable only affects the outcome through the exposure).

And as discussed above, any association between the instrumental variable and the outcome is causal. So α1 × β1 can be estimated from the following regression model:

Outcome = a0 + a1 × Instrumental variable

Where a1 = α1 × β1.

And because any association between the instrumental variable and the exposure is also causal (and unconfounded), the following model can be used to estimate α1:

Exposure = b0 + b1 × Instrumental variable

Where b1 = α1.

We end up with 2 equations:

  • α1 × β1 = a1
  • α1 = b1

A simple calculation yields β1 = a1/b1, which will be our estimated causal effect of the exposure on the outcome.
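A simulation sketch of this ratio estimate on synthetic data (effect sizes invented): the naive regression is biased by an unmeasured confounder, while a1/b1 recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

z = rng.normal(size=n)  # instrumental variable
u = rng.normal(size=n)  # unmeasured confounder

exposure = 0.5 * z + u + rng.normal(size=n)              # alpha1 = 0.5
outcome = 2.0 * exposure + 3.0 * u + rng.normal(size=n)  # true beta1 = 2.0

def slope(x, y):
    """OLS slope from a simple regression of y on x."""
    return np.polyfit(x, y, 1)[0]

print(slope(exposure, outcome))  # confounded estimate, well above 2.0
a1 = slope(z, outcome)           # instrument -> outcome
b1 = slope(z, exposure)          # instrument -> exposure
print(a1 / b1)                   # IV estimate, close to 2.0
```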

Advantage of instrumental variable estimation

Because the calculations that we just did are not dependent on any information about the confounder, we can use the instrumental variable approach to control for any measured, unmeasured, and unknown confounder.

This method is so powerful that it can be used even in cases where we do not know whether there is confounding between the exposure and the outcome, or which variables might be responsible.

Limitation of instrumental variable estimation

In cases where the instrumental variable and the exposure are weakly correlated, the estimated effect of the exposure on the outcome will be biased.

The use of linear regression is also constrained by its assumptions, especially linearity and constant variance of the residuals.

As a rule of thumb, use the instrumental variable approach in cases where there are unmeasured confounders, otherwise, use other methods from this list since they will, in general, provide a better estimate of the causal relationship between the exposure and the outcome.

If you are interested, here are 3 Real-World Examples of Using Instrumental Variables .


Further reading

  • Front-Door Criterion to Adjust for Unmeasured Confounding
  • 4 Simple Ways to Identify Confounding
  • 5 Real-World Examples of Confounding [With References]
  • Why Confounding is Not a Type of Bias
  • Using the 4 D-Separation Rules to Study a Causal Association
  • List of All Biases in Research (Sorted by Popularity)

Random Assignment in Psychology: Definition & Examples

Julia Simkus

Editor at Simply Psychology

BA (Hons) Psychology, Princeton University

Julia Simkus is a graduate of Princeton University with a Bachelor of Arts in Psychology. She is studying for a Master's Degree in Counseling for Mental Health and Wellness, beginning in September 2023. Julia's research has been published in peer-reviewed journals.


Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.

Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.

In psychology, random assignment refers to the practice of allocating participants to different experimental groups in a study in a completely unbiased way, ensuring each participant has an equal chance of being assigned to any group.

In experimental research, random assignment, or random placement, organizes participants from your sample into different groups using randomization. 

Random assignment uses chance procedures to ensure that each participant has an equal opportunity of being assigned to either a control or experimental group.

The control group does not receive the treatment in question, whereas the experimental group does receive the treatment.

When using random assignment, neither the researcher nor the participant can choose the group to which the participant is assigned. This ensures that any differences between and within the groups are not systematic at the onset of the study. 

In a study to test the success of a weight-loss program, investigators randomly assigned a pool of participants to one of two groups.

Group A participants participated in the weight-loss program for 10 weeks and took a class where they learned about the benefits of healthy eating and exercise.

Group B participants read a 200-page book that explains the benefits of weight loss.

The researchers found that those who participated in the program and took the class were more likely to lose weight than those in the other group that received only the book.

Importance 

Random assignment helps ensure that each group in the experiment is comparable before applying the independent variable.

In experiments , researchers will manipulate an independent variable to assess its effect on a dependent variable, while controlling for other variables. Random assignment increases the likelihood that the treatment groups are the same at the onset of a study.

Thus, any changes that result from the independent variable can be assumed to be a result of the treatment of interest. This is particularly important for eliminating sources of bias and strengthening the internal validity of an experiment.

Random assignment is the best method for inferring a causal relationship between a treatment and an outcome.

Random Selection vs. Random Assignment 

Random selection (also called probability sampling or random sampling) is a way of randomly selecting members of a population to be included in your study.

On the other hand, random assignment is a way of sorting the sample participants into control and treatment groups. 

Random selection ensures that everyone in the population has an equal chance of being selected for the study. Once the pool of participants has been chosen, experimenters use random assignment to assign participants into groups. 

Random assignment is only used in between-subjects experimental designs, while random selection can be used in a variety of study designs.

Random Assignment vs Random Sampling

Random sampling refers to selecting participants from a population so that each individual has an equal chance of being chosen. This method enhances the representativeness of the sample.

Random assignment, on the other hand, is used in experimental designs once participants are selected. It involves allocating these participants to different experimental groups or conditions randomly.

This helps ensure that any differences in results across groups are due to manipulating the independent variable, not preexisting differences among participants.

When to Use Random Assignment

Random assignment is used in experiments with a between-groups or independent measures design.

In these research designs, researchers will manipulate an independent variable to assess its effect on a dependent variable, while controlling for other variables.

There is usually a control group and one or more experimental groups. Random assignment helps ensure that the groups are comparable at the onset of the study.

How to Use Random Assignment

There are a variety of ways to assign participants into study groups randomly. Here are a handful of popular methods: 

  • Random Number Generator : Give each member of the sample a unique number; use a computer program to randomly generate a number from the list for each group.
  • Lottery : Give each member of the sample a unique number. Place all numbers in a hat or bucket and draw numbers at random for each group.
  • Flipping a Coin : Flip a coin for each participant to decide if they will be in the control group or experimental group (this method can only be used when you have just two groups) 
  • Roll a Die : For each member of the sample, roll a die to decide which group they will be in. For example, rolling 1, 2, or 3 places them in the control group, while rolling 4, 5, or 6 places them in the experimental group. (Two of these methods are sketched below.)
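A minimal sketch of the lottery and coin-flip methods, with hypothetical participant IDs:

```python
import random

random.seed(3)
sample = [f"participant_{i}" for i in range(20)]  # hypothetical IDs

# Lottery: shuffle the "hat", then deal alternately into the two groups.
hat = sample.copy()
random.shuffle(hat)
control, experimental = hat[::2], hat[1::2]

# Coin flip: one flip per participant (works only with two groups).
coin_assignment = {p: random.choice(["control", "experimental"])
                   for p in sample}
```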

When is Random Assignment not used?

  • When it is not ethically permissible: Randomization is only ethical if the researcher has no evidence that one treatment is superior to the other or that one treatment might have harmful side effects. 
  • When answering non-causal questions : If the researcher is just interested in predicting the probability of an event, the causal relationship between the variables is not important and observational designs would be more suitable than random assignment. 
  • When studying the effect of variables that cannot be manipulated: Some risk factors cannot be manipulated and so it would not make any sense to study them in a randomized trial. For example, we cannot randomly assign participants into categories based on age, gender, or genetic factors.

Drawbacks of Random Assignment

While randomization assures an unbiased assignment of participants to groups, it does not guarantee the equality of these groups. There could still be extraneous variables that differ between groups or group differences that arise from chance. Additionally, there is still an element of luck with random assignments.

Thus, researchers cannot produce perfectly equal groups for each specific study. Differences between the treatment group and control group might still exist, and the results of a randomized trial may sometimes be wrong, but this is absolutely okay.

Scientific evidence is a long and continuous process, and the groups will tend to be equal in the long run when data is aggregated in a meta-analysis.

Additionally, external validity (i.e., the extent to which the researcher can use the results of the study to generalize to the larger population) is compromised with random assignment.

Random assignment is challenging to implement outside of controlled laboratory conditions and might not represent what would happen in the real world at the population level. 

Random assignment can also be more costly than simple observational studies, where an investigator is just observing events without intervening with the population.

Randomization also can be time-consuming and challenging, especially when participants refuse to receive the assigned treatment or do not adhere to recommendations. 

What is the difference between random sampling and random assignment?

Random sampling refers to randomly selecting a sample of participants from a population. Random assignment refers to randomly assigning participants to treatment groups from the selected sample.

Does random assignment increase internal validity?

Yes, random assignment ensures that there are no systematic differences between the participants in each group, enhancing the study’s internal validity .

Does random assignment reduce sampling error?

Yes, with random assignment, participants have an equal chance of being assigned to either a control group or an experimental group, resulting in a sample that is, in theory, representative of the population.

Random assignment does not completely eliminate sampling error because a sample only approximates the population from which it is drawn. However, random sampling is a way to minimize sampling errors. 

When is random assignment not possible?

Random assignment is not possible when the experimenters cannot control the treatment or independent variable.

For example, if you want to compare how men and women perform on a test, you cannot randomly assign subjects to these groups.

Participants are not randomly assigned to different groups in this study, but instead assigned based on their characteristics.

Does random assignment eliminate confounding variables?

Yes. Random assignment eliminates the systematic influence of confounding variables on the treatment because it distributes them at random among the study groups, breaking any association between a confounding variable and the treatment.

Why is random assignment of participants to treatment conditions in an experiment used?

Random assignment is used to ensure that all groups are comparable at the start of a study. This allows researchers to conclude that the outcomes of the study can be attributed to the intervention at hand and to rule out alternative explanations for study results.

Further Reading

  • Bogomolnaia, A., & Moulin, H. (2001). A new solution to the random assignment problem .  Journal of Economic theory ,  100 (2), 295-328.
  • Krause, M. S., & Howard, K. I. (2003). What random assignment does and does not do .  Journal of Clinical Psychology ,  59 (7), 751-766.



Statistics LibreTexts

1.3: Threats to Internal Validity and Different Control Techniques


  • Yang Lydia Yang
  • Kansas State University

Internal validity is often the focus from a research design perspective. To understand the pros and cons of various designs and to be able to better judge specific designs, we identify specific threats to internal validity . Before we do so, it is important to note that the primary challenge to establishing internal validity in social sciences is the fact that most of the phenomena we care about have multiple causes and are often a result of some complex set of interactions. For example, X may be only a partial cause of Y or X may cause Y, but only when Z is present. Multiple causation and interactive effects make it very difficult to demonstrate causality. Turning now to more specific threats, Figure 1.3.1 below identifies common threats to internal validity.

Different Control Techniques

All of the common threats mentioned above can introduce extraneous variables into your research design, which will potentially confound your research findings. In other words, we won't be able to tell whether it is the independent variable (i.e., the treatment we give participants) or the extraneous variable that causes the changes in the dependent variable. Controlling for extraneous variables reduces their threat to the research design and gives us a better chance to claim that the independent variable causes the changes in the dependent variable, i.e., internal validity. There are different techniques we can use to control for extraneous variables.

Random assignment

Random assignment is the single most powerful control technique we can use to minimize the potential threats of confounding variables in research design. As we have seen in Dunn and her colleagues' study earlier, participants are not allowed to self-select into either condition (spend $20 on self or spend on others). Instead, they are randomly assigned to either group by the researcher(s). By doing so, the two groups are likely to be similar on all other factors except the independent variable itself. One confounding variable mentioned earlier is whether individuals had a happy childhood to begin with. Using random assignment, those who had a happy childhood will likely end up in each condition group, and those who didn't have a happy childhood will likely end up in each condition group too. As a consequence, we can expect the two condition groups to be very similar on this confounding variable. Applying the same logic, we can use random assignment to minimize all potential confounding variables (assuming your sample size is large enough!). With that, the only difference between the two groups is the condition participants are assigned to, which is the independent variable, and we can confidently infer that the independent variable actually causes the differences in the dependent variables.

It is critical to emphasize that random assignment is the only control technique that controls for both known and unknown confounding variables. With all other control techniques mentioned below, we must first know what the confounding variable is before controlling for it. Random assignment does not require this. With the simple act of randomly assigning participants into different conditions, we take care of both the confounding variables we know of and the ones we don't even know about that could threaten the internal validity of our studies. As the saying goes, "what you don't know will hurt you." Random assignment takes care of it.

Matching

Matching is another technique we can use to control for extraneous variables. We must first identify the extraneous variable that can potentially confound the research design. Then we rank-order the participants on this extraneous variable, listing them in ascending or descending order. Participants who are similar on the extraneous variable will be placed into different treatment groups; in other words, they are "matched" on the extraneous variable. Then we can carry out the intervention/treatment as usual. If the treatment groups do show differences on the dependent variable, we know it is not due to the extraneous variable, because participants are "matched" (equivalent) on it. Rather, it is more likely to be the independent variable (i.e., the treatments) that causes the changes in the dependent variable.

Consider the example above (self-spending vs. other-spending on happiness) with the same extraneous variable: whether individuals had a happy childhood to begin with. Once we identify this extraneous variable, we first need to collect some kind of data from the participants to measure how happy their childhood was. Sometimes, data on the extraneous variable may already be available (for example, if you want to examine the effect of different types of tutoring on students' performance in a Calculus I course and plan to match them on college entrance test scores, those scores are already collected by the Admissions Office). In either case, getting data on the identified extraneous variable is a typical step before matching.

So, going back to childhood happiness: once we have the data, we sort it in a certain order, for example from the highest score (participants reporting the happiest childhood) to the lowest score (participants reporting the least happy childhood). We then match the participants with the highest levels of childhood happiness and place them into different treatment groups, go down the scale and match participants with relatively high levels of childhood happiness and place them into different treatment groups, and repeat in descending order until we match the participants with the lowest levels of childhood happiness and place them into different treatment groups. By now, each treatment group will have participants with a full range of levels of childhood happiness (which is a strength, considering the variation and the representativeness of the sample). The two treatment groups will be similar or equivalent on this extraneous variable. If the treatments, self-spending vs. other-spending, eventually show differences in individual happiness, then we know it's not due to how happy participants' childhoods were, and we will be more confident it is due to the independent variable. A sketch of this procedure appears below.
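A minimal sketch of this rank-order matching procedure, with invented happiness scores and the self-spending vs. other-spending conditions from the example:

```python
import random

random.seed(5)

# Hypothetical childhood-happiness scores (higher = happier childhood).
participants = [{"id": i, "happiness": random.randint(1, 100)}
                for i in range(20)]

# Rank-order participants on the extraneous variable.
ranked = sorted(participants, key=lambda p: p["happiness"], reverse=True)

# Walk down the ranking; within each adjacent pair, randomly assign one
# participant to each condition group.
groups = {"self-spending": [], "other-spending": []}
for i in range(0, len(ranked) - 1, 2):
    pair = [ranked[i], ranked[i + 1]]
    random.shuffle(pair)
    groups["self-spending"].append(pair[0]["id"])
    groups["other-spending"].append(pair[1]["id"])
```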

You may be thinking: but wait, we have only taken care of one extraneous variable. What about the others? Good thinking. That's exactly correct. We mentioned a few extraneous variables but have only matched on one. This is the main limitation of matching. You can match participants on more than one extraneous variable, but it's cumbersome, if not impossible, to match them on 10 or 20 extraneous variables. More importantly, the more variables we try to match participants on, the less likely we are to find similar matches. In other words, it may be easy to match participants on one particular extraneous variable (a similar level of childhood happiness), but it's much harder to find participants who are similar on 10 different extraneous variables at once.

Holding Extraneous Variable Constant

The holding-extraneous-variables-constant technique is self-explanatory: we use participants at only one level of the extraneous variable, in other words, holding the extraneous variable constant. Using the same example, suppose we only want to study participants with a low level of childhood happiness. We need to go through the same steps as in matching: identifying the extraneous variable that can potentially confound the research design and getting data on it. Once we have the data on childhood happiness scores, we include only participants on the lower end of the scale, place them into different treatment groups, and carry out the study as before. If the condition groups, self-spending vs. other-spending, eventually show differences in individual happiness, then we know it's not due to how happy their childhood was (since we already picked only those on the lower end of childhood happiness). We will be more confident it is due to the independent variable.

As with matching, we have to do this one extraneous variable at a time, and as we increase the number of extraneous variables to be held constant, the more difficult it gets. The other limitation is that by holding an extraneous variable constant, we exclude a big chunk of participants, in this case anyone who is NOT low on childhood happiness. This is a major weakness: as we reduce the variability on the spectrum of childhood happiness levels, we decrease the representativeness of the sample, and generalizability suffers.

Building Extraneous Variables into Design

The last control technique, building extraneous variables into the research design, is widely used. As the name suggests, we identify the extraneous variable that can potentially confound the research design and include it in the design by treating it as an independent variable. This control technique addresses the limitation of the previous technique, holding the extraneous variable constant: we don't need to exclude participants based on where they stand on the extraneous variable(s). Instead we can include participants with a wide range of levels on the extraneous variable(s). You can include multiple extraneous variables in the design at once. However, the more variables you include in the design, the larger the sample size required for statistical analyses, which may be difficult to obtain due to limitations of time, staff, cost, access, etc.

Statistical Thinking: A Simulation Approach to Modeling Uncertainty (UM STAT 216 edition)

3.6 Causation and Random Assignment

Medical researchers may be interested in showing that a drug helps improve people’s health (the cause of improvement is the drug), while educational researchers may be interested in showing a curricular innovation improves students’ learning (the curricular innovation causes improved learning).

To attribute a causal relationship, there are three criteria a researcher needs to establish:

  • Association of the Cause and Effect: There needs to be an association between the cause and effect.
  • Timing: The cause needs to happen BEFORE the effect.
  • No Plausible Alternative Explanations: ALL other possible explanations for the effect need to be ruled out.

Please read more about each of these criteria at the Web Center for Social Research Methods .

The third criterion can be quite difficult to meet. To rule out ALL other possible explanations for the effect, we want to compare the world with the cause applied to the world without the cause. In practice, we do this by comparing two different groups: a “treatment” group that gets the cause applied to them, and a “control” group that does not. To rule out alternative explanations, the groups need to be “identical” with respect to every possible characteristic (aside from the treatment) that could explain differences. This way the only characteristic that will be different is that the treatment group gets the treatment and the control group doesn’t. If there are differences in the outcome, then it must be attributable to the treatment, because the other possible explanations are ruled out.

So, the key is to make the control and treatment groups “identical” when you are forming them. One thing that makes this task (slightly) easier is that they don’t have to be exactly identical, only probabilistically equivalent . This means, for example, that if you were matching groups on age that you don’t need the two groups to have identical age distributions; they would only need to have roughly the same AVERAGE age. Here roughly means “the average ages should be the same within what we expect because of sampling error.”

Now we just need to create the groups so that they have, on average, the same characteristics … for EVERY POSSIBLE CHARACTERISTIC that could explain differences in the outcome.

It turns out that creating probabilistically equivalent groups is a really difficult problem. One method that works pretty well for doing this is to randomly assign participants to the groups. This works best when you have large sample sizes, but even with small sample sizes random assignment has the advantage of at least removing the systematic bias between the two groups (any differences are due to chance and will probably even out between the groups). As Wikipedia’s page on random assignment points out,

Random assignment of participants helps to ensure that any differences between and within the groups are not systematic at the outset of the experiment. Thus, any differences between groups recorded at the end of the experiment can be more confidently attributed to the experimental procedures or treatment. … Random assignment does not guarantee that the groups are matched or equivalent. The groups may still differ on some preexisting attribute due to chance. The use of random assignment cannot eliminate this possibility, but it greatly reduces it.
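A small simulation (with synthetic ages) makes the point concrete: the average ages of two randomly assigned groups differ only by chance, and the difference tends to shrink as the sample grows.

```python
import numpy as np

rng = np.random.default_rng(0)

for n in (20, 200, 2000):
    ages = rng.normal(40, 12, n)  # a pre-existing characteristic
    idx = rng.permutation(n)      # random assignment
    group_a = ages[idx[: n // 2]]
    group_b = ages[idx[n // 2 :]]
    diff = abs(group_a.mean() - group_b.mean())
    print(f"n={n:4d}  |mean age difference| = {diff:.2f}")
```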

We use the term internal validity to describe the degree to which cause-and-effect inferences are accurate and meaningful. Causal attribution is the goal for many researchers. Thus, by using random assignment we have a pretty high degree of evidence for internal validity; we have a much higher belief in causal inferences. Much like evidence used in a court of law, it is useful to think about validity evidence on a continuum. For a study that employed random assignment in the design, the internal validity evidence would fall in the upper third of that continuum. How high depends on other factors such as sample size.

To learn more about random assignment, you can read the following:

  • The research report, Random Assignment Evaluation Studies: A Guide for Out-of-School Time Program Practitioners

3.6.1 Example: Does sleep deprivation cause a decrease in performance?

Let’s consider the criteria with respect to the sleep deprivation study we explored in class.

3.6.1.1 Association of cause and effect

First, we ask, Is there an association between the cause and the effect? In the sleep deprivation study, we would ask, “Is sleep deprivation associated with a decrease in performance?”

This is what a hypothesis test helps us answer! If the result is statistically significant , then we have an association between the cause and the effect. If the result is not statistically significant, then there is not sufficient evidence for an association between cause and effect.

In the case of the sleep deprivation experiment, the result was statistically significant, so we can say that sleep deprivation is associated with a decrease in performance.

3.6.1.2 Timing

Second, we ask, Did the cause come before the effect? In the sleep deprivation study, the answer is yes. The participants were sleep deprived before their performance was tested. It may seem like this is a silly question to ask, but as the link above describes, it is not always so clear to establish the timing. Thus, it is important to consider this question any time we are interested in establishing causality.

3.6.1.3 No plausible alternative explanations

Finally, we ask Are there any plausible alternative explanations for the observed effect? In the sleep deprivation study, we would ask, “Are there plausible alternative explanations for the observed difference between the groups, other than sleep deprivation?” Because this is a question about plausibility, human judgment comes into play. Researchers must make an argument about why there are no plausible alternatives. As described above, a strong study design can help to strengthen the argument.

At first, it may seem like there are a lot of plausible alternative explanations for the difference in performance. There are a lot of things that might affect someone’s performance on a visual task! Sleep deprivation is just one of them! For example, artists may be more adept at visual discrimination than other people. This is an example of a potential confounding variable. A confounding variable is a variable that might affect the results, other than the causal variable that we are interested in.

Here’s the thing though. We are not interested in figuring out why any particular person got the score that they did. Instead, we are interested in determining why one group was different from another group. In the sleep deprivation study, the participants were randomly assigned. This means that there is no systematic difference between the groups with respect to any confounding variables. Yes, artistic experience is a possible confounding variable, and it may be the reason why two people score differently. BUT: There is no systematic difference between the groups with respect to artistic experience, and so artistic experience is not a plausible explanation as to why the groups would be different. The same can be said for any possible confounding variable. Because the groups were randomly assigned, it is not plausible to say that the groups are different with respect to any confounding variable. Random assignment helps us rule out plausible alternatives.

3.6.1.4 Making a causal claim

Now, let’s try to make a causal claim for the sleep deprivation study:

  • Association: There is a statistically significant result, so the cause is associated with the effect
  • Timing: The participants were sleep deprived before their performance was measured, so the cause came before the effect
  • Plausible alternative explanations: The participants were randomly assigned, so the groups are not systematically different on any confounding variable. The only systematic difference between the groups was sleep deprivation. Thus, there are no plausible alternative explanations for the difference between the groups, other than sleep deprivation

Thus, the internal validity evidence for this study is high, and we can make a causal claim. For the participants in this study, we can say that sleep deprivation caused a decrease in performance.

Key points: Causation and internal validity

To make a cause-and-effect inference, you need to consider three criteria:

  • Association of the Cause and Effect: There needs to be an association between the cause and effect. This can be established by a hypothesis test.
  • Timing: The cause needs to happen BEFORE the effect.
  • No Plausible Alternative Explanations: ALL other possible explanations for the effect need to be ruled out.

Random assignment removes any systematic differences between the groups (other than the treatment), and thus helps to rule out plausible alternative explanations.

Internal validity describes the degree to which cause-and-effect inferences are accurate and meaningful.

Confounding variables are variables that might affect the results, other than the causal variable that we are interested in.

Probabilistic equivalence means that there is not a systematic difference between groups. The groups are the same on average.


PH717 Module 11 - Confounding and Effect Measure Modification


Three Methods for Minimizing Confounding in the Study Design Phase



Confounding is a major problem in epidemiologic research, and it accounts for many of the discrepancies among published studies. Nevertheless, there are ways of minimizing confounding in the design phase of a study, and there are also methods for adjusting for confounding during analysis of a study.

Randomization in a Clinical Trial

The ideal way to minimize the effects of confounding is to conduct a large randomized clinical trial so that each subject has an equal chance of being assigned to any of the treatment options. If this is done with a sufficiently large number of subjects, other risk factors (i.e., confounding factors) should be equally distributed among the exposure groups. The beauty of this is that even unknown confounding factors will be equally distributed among the comparison groups. If all of these other factors are distributed equally among the groups being compared, they will not distort the association between the treatment being studied and the outcome.

The success of randomization is usually evaluated in one of the first tables in a clinical trial, i.e., a table comparing characteristics of the exposure groups. If the groups have similar distributions of all of the known confounding factors, then randomization was successful. However, if randomization was not successful in producing equal distributions of confounding factors, then methods of adjusting for confounding must be used in the analysis of the data.

Strengths of randomization:

  • There is no limit on the number of confounders that can be controlled
  • It controls for both known and unknown confounders
  • If successful, there is no need to "adjust" for confounding

Limitations of randomization:

  • It is limited to intervention studies (clinical trials)
  • It may not be completely effective for small trials

Restriction of Enrollment

Limiting the study to subjects in one category of the confounder is a simple way of ensuring that all participants have the same level of the confounder. For example,

  • If smoking is a confounding factor, one could limit the study population to only non-smokers or only smokers.
  • If sex is a confounding factor, limit the participants to only men or only women
  • If age is a confounding factor, restrict the study to subjects in a specific age category, e.g., persons >65.

Restriction is simple and generally effective, but it has several drawbacks:

  • It can only be used for known confounders and only when the status of potential subjects is known with respect to that variable
  • Residual confounding may occur if restriction is not narrow enough. For example, a study of the association between physical activity and heart disease might be restricted to subjects between the ages of 30-60, but that is a wide age range, and the risk of heart disease still varies widely within that range.
  • Investigators cannot evaluate the effect of the restricted variable, since it doesn't vary
  • Restriction limits the number of potential subjects and may limit sample size
  • If restriction is used, one cannot generalize the findings to those who were excluded.
  • Restriction is particularly cumbersome if used to control for multiple confounding variables.

Matching Compared Groups

Another risk factor can only cause confounding if it is distributed differently in the groups being compared. Therefore, another method of preventing confounding is to match the subjects with respect to confounding variables. This method can be used in both cohort studies and in case-control studies in order to enroll a reference group that has artificially been created to have the same distribution of a confounding factor as the index group. For example,

  • In a case-control study of lung cancer where age is a potential confounding factor, match each case with one or more control subjects of similar age. If this is done, the age distribution of the comparison groups will be the same, and there will be no confounding by age.
  • In a cohort study on the effects of smoking, each smoker (the index group) who is enrolled is matched with a non-smoker (reference group) of similar age. Once again, the groups being compared will have the same age distribution, so confounding by age will be prevented (see the code sketch after the lists below).

Advantages of matching:

  • Matching is particularly useful when trying to control for complex or difficult-to-measure confounding variables, e.g., matching by neighborhood to control for confounding by air pollution.
  • It can also be used in case-control studies with few cases when additional control subjects are enrolled to increase statistical power, e.g., 4 to 1 matching of controls to cases.

Disadvantages of matching:

  • It can only be used for known confounders.
  • It can be difficult, expensive, and time-consuming to find appropriate matches.
  • One cannot evaluate the effect of the matched variable.
  • Matching requires special analytic methods.
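
Here is a rough sketch of 1:1 age matching in Python with pandas, using made-up data and an assumed caliper of +/- 2 years. This is a simple greedy approach for illustration, not an optimal matching algorithm:

    import pandas as pd

    # Made-up case-control data (illustrative only)
    cases = pd.DataFrame({"id": [1, 2, 3], "age": [63, 47, 55]})
    controls = pd.DataFrame({"id": [10, 11, 12, 13, 14],
                             "age": [46, 64, 52, 56, 70]})

    # Greedy 1:1 matching on age, within a +/- 2 year caliper
    matches, used = [], set()
    for _, case in cases.iterrows():
        pool = controls[~controls["id"].isin(used)].copy()
        pool["diff"] = (pool["age"] - case["age"]).abs()
        best = pool.sort_values("diff").iloc[0]
        if best["diff"] <= 2:
            matches.append((int(case["id"]), int(best["id"])))
            used.add(int(best["id"]))

    print(matches)  # [(1, 11), (2, 10), (3, 13)]

Real studies would typically use dedicated matching software and matched analytic methods rather than a toy loop like this.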


Frequently asked questions

How do I prevent confounding variables from interfering with my research?

There are several methods you can use to decrease the impact of confounding variables on your research: restriction, matching, statistical control and randomization.

In restriction , you restrict your sample by only including certain subjects that have the same values of potential confounding variables.

In matching , you match each of the subjects in your treatment group with a counterpart in the comparison group. The matched subjects have the same values on any potential confounding variables, and only differ in the independent variable .

In statistical control , you include potential confounders as variables in your regression .
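
As a rough illustration, here is a minimal sketch of statistical control using Python's statsmodels with simulated data (all names, coefficients, and noise levels are made up). Comparing an unadjusted model with one that includes the confounder shows how adjustment changes the estimated effect:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 200

    # Simulated data: the confounder affects both exposure and outcome
    confounder = rng.normal(50, 10, n)
    exposure = 0.1 * confounder + rng.normal(0, 1, n)
    outcome = 2.0 * exposure + 0.5 * confounder + rng.normal(0, 2, n)

    # Unadjusted model (omits the confounder) vs. adjusted model
    naive = sm.OLS(outcome, sm.add_constant(exposure)).fit()
    adjusted = sm.OLS(
        outcome, sm.add_constant(np.column_stack([exposure, confounder]))
    ).fit()

    print("unadjusted exposure coefficient:", round(naive.params[1], 2))
    print("adjusted exposure coefficient:  ", round(adjusted.params[1], 2))

In this simulation the unadjusted coefficient is substantially inflated by confounding, while the adjusted coefficient is close to the true value of 2.0.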

In randomization , you randomly assign the treatment (or independent variable) in your study to a sufficiently large number of subjects, which allows you to control for all potential confounding variables.

Frequently asked questions: Methodology

Attrition refers to participants leaving a study. It always happens to some extent—for example, in randomized controlled trials for medical research.

Differential attrition occurs when attrition or dropout rates differ systematically between the intervention and the control group . As a result, the characteristics of the participants who drop out differ from the characteristics of those who stay in the study. Because of this, study results may be biased .

Action research is conducted in order to solve a particular issue immediately, while case studies are often conducted over a longer period of time and focus more on observing and analyzing a particular ongoing phenomenon.

Action research is focused on solving a problem or informing individual and community-based knowledge in a way that impacts teaching, learning, and other related processes. It is less focused on contributing theoretical input, instead producing actionable input.

Action research is particularly popular with educators as a form of systematic inquiry because it prioritizes reflection and bridges the gap between theory and practice. Educators are able to simultaneously investigate an issue as they solve it, and the method is very iterative and flexible.

A cycle of inquiry is another name for action research . It is usually visualized in a spiral shape following a series of steps, such as “planning → acting → observing → reflecting.”

To make quantitative observations , you need to use instruments that are capable of measuring the quantity you want to observe. For example, you might use a ruler to measure the length of an object or a thermometer to measure its temperature.

Criterion validity and construct validity are both types of measurement validity . In other words, they both show you how accurately a method measures something.

While construct validity is the degree to which a test or other measurement method measures what it claims to measure, criterion validity is the degree to which a test can predictively (in the future) or concurrently (in the present) measure something.

Construct validity is often considered the overarching type of measurement validity . You need to have face validity , content validity , and criterion validity in order to achieve construct validity.

Convergent validity and discriminant validity are both subtypes of construct validity . Together, they help you evaluate whether a test measures the concept it was designed to measure.

  • Convergent validity indicates whether a test that is designed to measure a particular construct correlates with other tests that assess the same or similar construct.
  • Discriminant validity indicates whether two tests that should not be highly related to each other are indeed not related. This type of validity is also called divergent validity .

You need to assess both in order to demonstrate construct validity. Neither one alone is sufficient for establishing construct validity.


Content validity shows you how accurately a test or other measurement method taps  into the various aspects of the specific construct you are researching.

In other words, it helps you answer the question: “does the test measure all aspects of the construct I want to measure?” If it does, then the test has high content validity.

The higher the content validity, the more accurate the measurement of the construct.

If the test fails to include parts of the construct, or irrelevant parts are included, the validity of the instrument is threatened, which brings your results into question.

Face validity and content validity are similar in that they both evaluate how suitable the content of a test is. The difference is that face validity is subjective, and assesses content at surface level.

When a test has strong face validity, anyone would agree that the test’s questions appear to measure what they are intended to measure.

For example, looking at a 4th grade math test consisting of problems in which students have to add and multiply, most people would agree that it has strong face validity (i.e., it looks like a math test).

On the other hand, content validity evaluates how well a test represents all the aspects of a topic. Assessing content validity is more systematic and relies on expert evaluation of each question, analyzing whether each one covers the aspects that the test was designed to cover.

A 4th grade math test would have high content validity if it covered all the skills taught in that grade. Experts (in this case, math teachers) would have to evaluate the content validity by comparing the test to the learning objectives.

Snowball sampling is a non-probability sampling method . Unlike probability sampling (which involves some form of random selection ), the initial individuals selected to be studied are the ones who recruit new participants.

Because not every member of the target population has an equal chance of being recruited into the sample, selection in snowball sampling is non-random.

Snowball sampling is a non-probability sampling method , where there is not an equal chance for every member of the population to be included in the sample .

This means that you cannot use inferential statistics and make generalizations —often the goal of quantitative research . As such, a snowball sample is not representative of the target population and is usually a better fit for qualitative research .

Snowball sampling relies on the use of referrals. Here, the researcher recruits one or more initial participants, who then recruit the next ones.

Participants share similar characteristics and/or know each other. Because of this, not every member of the population has an equal chance of being included in the sample, giving rise to sampling bias .

Snowball sampling is best used in the following cases:

  • If there is no sampling frame available (e.g., people with a rare disease)
  • If the population of interest is hard to access or locate (e.g., people experiencing homelessness)
  • If the research focuses on a sensitive topic (e.g., extramarital affairs)

The reproducibility and replicability of a study can be ensured by writing a transparent, detailed method section and using clear, unambiguous language.

Reproducibility and replicability are related terms.

  • Reproducing research entails reanalyzing the existing data in the same manner.
  • Replicating (or repeating ) the research entails reconducting the entire analysis, including the collection of new data . 
  • A successful reproduction shows that the data analyses were conducted in a fair and honest manner.
  • A successful replication shows that the reliability of the results is high.

Stratified sampling and quota sampling both involve dividing the population into subgroups and selecting units from each subgroup. The purpose in both cases is to select a representative sample and/or to allow comparisons between subgroups.

The main difference is that in stratified sampling, you draw a random sample from each subgroup (probability sampling). In quota sampling, you select a predetermined number or proportion of units in a non-random manner (non-probability sampling).

Purposive and convenience sampling are both sampling methods that are typically used in qualitative data collection.

A convenience sample is drawn from a source that is conveniently accessible to the researcher. Convenience sampling does not select participants on the basis of their characteristics. On the other hand, purposive sampling focuses on selecting participants possessing characteristics associated with the research study.

The findings of studies based on either convenience or purposive sampling can only be generalized to the (sub)population from which the sample is drawn, and not to the entire population.

Random sampling or probability sampling is based on random selection. This means that each unit has an equal chance (i.e., equal probability) of being included in the sample.

On the other hand, convenience sampling involves selecting whoever happens to be available, which means that not everyone has an equal chance of being selected, depending on the place, time, or day you are collecting your data.

Convenience sampling and quota sampling are both non-probability sampling methods. They both use non-random criteria like availability, geographical proximity, or expert knowledge to recruit study participants.

However, in convenience sampling, you continue to sample units or cases until you reach the required sample size.

In quota sampling, you first need to divide your population of interest into subgroups (strata) and estimate their proportions (quota) in the population. Then you can start your data collection, using convenience sampling to recruit participants, until the proportions in each subgroup coincide with the estimated proportions in the population.

A sampling frame is a list of every member in the entire population . It is important that the sampling frame is as complete as possible, so that your sample accurately reflects your population.

Stratified and cluster sampling may look similar, but bear in mind that groups created in cluster sampling are heterogeneous , so the individual characteristics in the cluster vary. In contrast, groups created in stratified sampling are homogeneous , as units share characteristics.

Relatedly, in cluster sampling you randomly select entire groups and include all units of each group in your sample. However, in stratified sampling, you select some units of all groups and include them in your sample. In this way, both methods can ensure that your sample is representative of the target population .

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.

The key difference between observational studies and experimental designs is that a well-done observational study does not influence the responses of participants, while experiments do have some sort of treatment condition applied to at least some participants by random assignment .

An observational study is a great choice for you if your research question is based purely on observations. If there are ethical, logistical, or practical concerns that prevent you from conducting a traditional experiment , an observational study may be a good choice. In an observational study, there is no interference with or manipulation of the research subjects, and there are no control or treatment groups .

It’s often best to ask a variety of people to review your measurements. You can ask experts, such as other researchers, or laypeople, such as potential participants, to judge the face validity of tests.

While experts have a deep understanding of research methods , the people you’re studying can provide you with valuable insights you may have missed otherwise.

Face validity is important because it’s a simple first step to measuring the overall validity of a test or technique. It’s a relatively intuitive, quick, and easy way to start checking whether a new measure seems useful at first glance.

Good face validity means that anyone who reviews your measure says that it seems to be measuring what it’s supposed to. With poor face validity, someone reviewing your measure may be left confused about what you’re measuring and why you’re using this method.

Face validity is about whether a test appears to measure what it’s supposed to measure. This type of validity is concerned with whether a measure seems relevant and appropriate for what it’s assessing only on the surface.

Statistical analyses are often applied to test validity with data from your measures. You test convergent validity and discriminant validity with correlations to see if results from your test are positively or negatively related to those of other established tests.

You can also use regression analyses to assess whether your measure is actually predictive of outcomes that you expect it to predict theoretically. A regression analysis that supports your expectations strengthens your claim of construct validity .
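
For instance, you might correlate scores from a new measure with scores from an established one. A minimal Python sketch with scipy and made-up scores (the scales and numbers are purely illustrative):

    from scipy.stats import pearsonr

    # Made-up scores: a new anxiety scale, an established anxiety scale
    # (convergent check), and an unrelated reading test (discriminant check)
    new_scale   = [12, 18, 9, 22, 15, 20, 7, 16]
    established = [14, 19, 11, 24, 13, 21, 8, 17]
    reading     = [55, 48, 60, 52, 49, 58, 61, 50]

    r_conv, _ = pearsonr(new_scale, established)
    r_disc, _ = pearsonr(new_scale, reading)
    print(f"convergent r = {r_conv:.2f}, discriminant r = {r_disc:.2f}")

A high convergent r together with a low discriminant r would support a claim of construct validity.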

When designing or evaluating a measure, construct validity helps you ensure you’re actually measuring the construct you’re interested in. If you don’t have construct validity, you may inadvertently measure unrelated or distinct constructs and lose precision in your research.

Construct validity is often considered the overarching type of measurement validity ,  because it covers all of the other types. You need to have face validity , content validity , and criterion validity to achieve construct validity.

Construct validity is about how well a test measures the concept it was designed to evaluate. It's one of four types of measurement validity , alongside face validity , content validity , and criterion validity.

There are two subtypes of construct validity.

  • Convergent validity : The extent to which your measure corresponds to measures of related constructs
  • Discriminant validity : The extent to which your measure is unrelated or negatively related to measures of distinct constructs

Naturalistic observation is a valuable tool because of its flexibility, external validity , and suitability for topics that can’t be studied in a lab setting.

The downsides of naturalistic observation include its lack of scientific control , ethical considerations , and potential for bias from observers and subjects.

Naturalistic observation is a qualitative research method where you record the behaviors of your research subjects in real world settings. You avoid interfering or influencing anything in a naturalistic observation.

You can think of naturalistic observation as “people watching” with a purpose.

A dependent variable is what changes as a result of the independent variable manipulation in experiments . It’s what you’re interested in measuring, and it “depends” on your independent variable.

In statistics, dependent variables are also called:

  • Response variables (they respond to a change in another variable)
  • Outcome variables (they represent the outcome you want to measure)
  • Left-hand-side variables (they appear on the left-hand side of a regression equation)

An independent variable is the variable you manipulate, control, or vary in an experimental study to explore its effects. It’s called “independent” because it’s not influenced by any other variables in the study.

Independent variables are also called:

  • Explanatory variables (they explain an event or outcome)
  • Predictor variables (they can be used to predict the value of a dependent variable)
  • Right-hand-side variables (they appear on the right-hand side of a regression equation).

As a rule of thumb, questions related to thoughts, beliefs, and feelings work well in focus groups. Take your time formulating strong questions, paying special attention to phrasing. Be careful to avoid leading questions , which can bias your responses.

Overall, your focus group questions should be:

  • Open-ended and flexible
  • Impossible to answer with “yes” or “no” (questions that start with “why” or “how” are often best)
  • Unambiguous, getting straight to the point while still stimulating discussion
  • Unbiased and neutral

A structured interview is a data collection method that relies on asking questions in a set order to collect data on a topic. It is often quantitative in nature. Structured interviews are best used when:

  • You already have a very clear understanding of your topic. Perhaps significant research has already been conducted, or you have done some prior research yourself; either way, you already possess a baseline for designing strong structured questions.
  • You are constrained in terms of time or resources and need to analyze your data quickly and efficiently.
  • Your research question depends on strong parity between participants, with environmental conditions held constant.

More flexible interview options include semi-structured interviews , unstructured interviews , and focus groups .

Social desirability bias is the tendency for interview participants to give responses that will be viewed favorably by the interviewer or other participants. It occurs in all types of interviews and surveys , but is most common in semi-structured interviews , unstructured interviews , and focus groups .

Social desirability bias can be mitigated by ensuring participants feel at ease and comfortable sharing their views. Make sure to pay attention to your own body language and any physical or verbal cues, such as nodding or widening your eyes.

This type of bias can also occur in observations if the participants know they’re being observed. They might alter their behavior accordingly.

The interviewer effect is a type of bias that emerges when a characteristic of an interviewer (race, age, gender identity, etc.) influences the responses given by the interviewee.

There is a risk of an interviewer effect in all types of interviews , but it can be mitigated by writing high-quality interview questions.

A semi-structured interview is a blend of structured and unstructured types of interviews. Semi-structured interviews are best used when:

  • You have prior interview experience. Spontaneous questions are deceptively challenging, and it’s easy to accidentally ask a leading question or make a participant uncomfortable.
  • Your research question is exploratory in nature. Participant answers can guide future research questions and help you develop a more robust knowledge base for future research.

An unstructured interview is the most flexible type of interview, but it is not always the best fit for your research topic.

Unstructured interviews are best used when:

  • You are an experienced interviewer and have a very strong background in your research topic, since it is challenging to ask spontaneous, colloquial questions.
  • Your research question is exploratory in nature. While you may have developed hypotheses, you are open to discovering new or shifting viewpoints through the interview process.
  • You are seeking descriptive data, and are ready to ask questions that will deepen and contextualize your initial thoughts and hypotheses.
  • Your research depends on forming connections with your participants and making them feel comfortable revealing deeper emotions, lived experiences, or thoughts.

The four most common types of interviews are:

  • Structured interviews : The questions are predetermined in both topic and order. 
  • Semi-structured interviews : A few questions are predetermined, but other questions aren’t planned.
  • Unstructured interviews : None of the questions are predetermined.
  • Focus group interviews : The questions are presented to a group instead of one individual.

Deductive reasoning is commonly used in scientific research, and it’s especially associated with quantitative research .

In research, you might have come across something called the hypothetico-deductive method . It’s the scientific method of testing hypotheses to check whether your predictions are substantiated by real-world data.

Deductive reasoning is a logical approach where you progress from general ideas to specific conclusions. It’s often contrasted with inductive reasoning , where you start with specific observations and form general conclusions.

Deductive reasoning is also called deductive logic.

There are many different types of inductive reasoning that people use formally or informally.

Here are a few common types:

  • Inductive generalization : You use observations about a sample to come to a conclusion about the population it came from.
  • Statistical generalization: You use specific numbers about samples to make statements about populations.
  • Causal reasoning: You make cause-and-effect links between different things.
  • Sign reasoning: You make a conclusion about a correlational relationship between different things.
  • Analogical reasoning: You make a conclusion about something based on its similarities to something else.

Inductive reasoning is a bottom-up approach, while deductive reasoning is top-down.

Inductive reasoning takes you from the specific to the general, while in deductive reasoning, you make inferences by going from general premises to specific conclusions.

In inductive research , you start by making observations or gathering data. Then, you take a broad scan of your data and search for patterns. Finally, you make general conclusions that you might incorporate into theories.

Inductive reasoning is a method of drawing conclusions by going from the specific to the general. It’s usually contrasted with deductive reasoning, where you proceed from general information to specific conclusions.

Inductive reasoning is also called inductive logic or bottom-up reasoning.

A hypothesis states your predictions about what your research will find. It is a tentative answer to your research question that has not yet been tested. For some research projects, you might have to write several hypotheses that address different aspects of your research question.

A hypothesis is not just a guess — it should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations and statistical analysis of data).

Triangulation can help:

  • Reduce research bias that comes from using a single method, theory, or investigator
  • Enhance validity by approaching the same topic with different tools
  • Establish credibility by giving you a complete picture of the research problem

But triangulation can also pose problems:

  • It’s time-consuming and labor-intensive, often involving an interdisciplinary team.
  • Your results may be inconsistent or even contradictory.

There are four main types of triangulation :

  • Data triangulation : Using data from different times, spaces, and people
  • Investigator triangulation : Involving multiple researchers in collecting or analyzing data
  • Theory triangulation : Using varying theoretical perspectives in your research
  • Methodological triangulation : Using different methodologies to approach the same topic

Many academic fields use peer review , largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the published manuscript.

However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure. 

Peer assessment is often used in the classroom as a pedagogical tool. Both receiving feedback and providing it are thought to enhance the learning process, helping students think critically and collaboratively.

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. It also represents an excellent opportunity to get feedback from renowned experts in your field. It acts as a first defense, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process.

Peer-reviewed articles are considered a highly credible source due to the stringent process they go through before publication.

In general, the peer review process follows these steps: 

  • First, the author submits the manuscript to the editor.
  • The editor then decides whether to reject the manuscript and send it back to the author, or send it onward to the selected peer reviewer(s).
  • Next, the peer review process occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made. 
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.

Exploratory research is often used when the issue you’re studying is new or when the data collection process is challenging for some reason.

You can use exploratory research if you have a general idea or a specific question that you want to study but there is no preexisting knowledge or paradigm with which to study it.

Exploratory research is a methodology approach that explores research questions that have not previously been studied in depth. It is often used when the issue you’re studying is new, or the data collection process is challenging in some way.

Explanatory research is used to investigate how or why a phenomenon occurs. Therefore, this type of research is often one of the first stages in the research process , serving as a jumping-off point for future research.

Exploratory research aims to explore the main aspects of an under-researched problem, while explanatory research aims to explain the causes and consequences of a well-defined problem.

Explanatory research is a research method used to investigate how or why something occurs when only a small amount of information is available pertaining to that topic. It can help you increase your understanding of a given topic.

Clean data are valid, accurate, complete, consistent, unique, and uniform. Dirty data include inconsistencies and errors.

Dirty data can come from any part of the research process, including poor research design , inappropriate measurement materials, or flawed data entry.

Data cleaning takes place between data collection and data analyses. But you can use some methods even before collecting data.

For clean data, you should start by designing measures that collect valid data. Data validation at the time of data entry or collection helps you minimize the amount of data cleaning you’ll need to do.

After data collection, you can use data standardization and data transformation to clean your data. You’ll also deal with any missing values, outliers, and duplicate values.

Every dataset requires different techniques to clean dirty data , but you need to address these issues in a systematic way. You focus on finding and resolving data points that don’t agree or fit with the rest of your dataset.

These problem data points might include missing values, outliers, duplicate values, incorrectly formatted entries, or irrelevant data. You'll start by screening and diagnosing your data. Then, you'll often standardize and accept or remove data to make your dataset consistent and valid.
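
A minimal pandas sketch of these steps, using a made-up dataset (the column names, plausibility range, and cleaning rules are illustrative, not prescriptive):

    import numpy as np
    import pandas as pd

    # Made-up dirty dataset (illustrative only)
    df = pd.DataFrame({
        "id":     [1, 2, 2, 3, 4],
        "weight": [72.5, 68.0, 68.0, 880.0, np.nan],   # outlier + missing
        "group":  ["a", "b", "b", "A ", "b"],          # inconsistent labels
    })

    df = df.drop_duplicates(subset="id")                # remove duplicates
    df["group"] = df["group"].str.strip().str.lower()   # standardize labels
    df = df.dropna(subset=["weight"])                   # drop missing values
    df = df[df["weight"].between(30, 250)]              # drop implausible outliers
    print(df)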

Data cleaning is necessary for valid and appropriate analyses. Dirty data contain inconsistencies or errors , but cleaning your data helps you minimize or resolve these.

Without data cleaning, you could end up with a Type I or II error in your conclusion. These erroneous conclusions can be practically significant, with important consequences such as misplaced investments or missed opportunities.

Data cleaning involves spotting and resolving potential data inconsistencies or errors to improve your data quality. An error is any value (e.g., recorded weight) that doesn’t reflect the true value (e.g., actual weight) of something that’s being measured.

In this process, you review, analyze, detect, modify, or remove “dirty” data to make your dataset “clean.” Data cleaning is also called data cleansing or data scrubbing.

Research misconduct means making up or falsifying data, manipulating data analyses, or misrepresenting results in research reports. It’s a form of academic fraud.

These actions are committed intentionally and can have serious consequences; research misconduct is not a simple mistake or a point of disagreement but a serious ethical failure.

Anonymity means you don’t know who the participants are, while confidentiality means you know who they are but remove identifying information from your research report. Both are important ethical considerations .

You can only guarantee anonymity by not collecting any personally identifying information—for example, names, phone numbers, email addresses, IP addresses, physical characteristics, photos, or videos.

You can keep data confidential by using aggregate information in your research report, so that you only refer to groups of participants rather than individuals.

Research ethics matter for scientific integrity, human rights and dignity, and collaboration between science and society. These principles make sure that participation in studies is voluntary, informed, and safe.

Ethical considerations in research are a set of principles that guide your research designs and practices. These principles include voluntary participation, informed consent, anonymity, confidentiality, potential for harm, and results communication.

Scientists and researchers must always adhere to a certain code of conduct when collecting data from others .

These considerations protect the rights of research participants, enhance research validity , and maintain scientific integrity.

In multistage sampling , you can use probability or non-probability sampling methods .

For a probability sample, you have to conduct probability sampling at every stage.

You can mix it up by using simple random sampling , systematic sampling , or stratified sampling to select units at different stages, depending on what is applicable and relevant to your study.

Multistage sampling can simplify data collection when you have large, geographically spread samples, and you can obtain a probability sample without a complete sampling frame.

But multistage sampling may not lead to a representative sample, and larger samples are needed for multistage samples to achieve the statistical properties of simple random samples .

These are four of the most common mixed methods designs :

  • Convergent parallel: Quantitative and qualitative data are collected at the same time and analyzed separately. After both analyses are complete, compare your results to draw overall conclusions. 
  • Embedded: Quantitative and qualitative data are collected at the same time, but within a larger quantitative or qualitative design. One type of data is secondary to the other.
  • Explanatory sequential: Quantitative data is collected and analyzed first, followed by qualitative data. You can use this design if you think your qualitative data will explain and contextualize your quantitative findings.
  • Exploratory sequential: Qualitative data is collected and analyzed first, followed by quantitative data. You can use this design if you think the quantitative data will confirm or validate your qualitative findings.

Triangulation in research means using multiple datasets, methods, theories and/or investigators to address a research question. It’s a research strategy that can help you enhance the validity and credibility of your findings.

Triangulation is mainly used in qualitative research , but it’s also commonly applied in quantitative research . Mixed methods research always uses triangulation.

In multistage sampling , or multistage cluster sampling, you draw a sample from a population using smaller and smaller groups at each stage.

This method is often used to collect data from a large, geographically spread group of people in national surveys, for example. You take advantage of hierarchical groupings (e.g., from state to city to neighborhood) to create a sample that’s less expensive and time-consuming to collect data from.

No, the steepness or slope of the line isn’t related to the correlation coefficient value. The correlation coefficient only tells you how closely your data fit on a line, so two datasets with the same correlation coefficient can have very different slopes.

To find the slope of the line, you’ll need to perform a regression analysis .

Correlation coefficients always range between -1 and 1.

The sign of the coefficient tells you the direction of the relationship: a positive value means the variables change together in the same direction, while a negative value means they change together in opposite directions.

The absolute value of a number is equal to the number without its sign. The absolute value of a correlation coefficient tells you the magnitude of the correlation: the greater the absolute value, the stronger the correlation.

These are the assumptions your data must meet if you want to use Pearson’s r :

  • Both variables are on an interval or ratio level of measurement
  • Data from both variables follow normal distributions
  • Your data have no outliers
  • Your data are from a random or representative sample
  • You expect a linear relationship between the two variables
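
For illustration, here is a minimal Python sketch using scipy that computes Pearson's r and the regression slope for two made-up datasets with virtually identical correlations but different slopes:

    from scipy.stats import pearsonr, linregress

    x = [1, 2, 3, 4, 5]
    y1 = [2.0, 4.1, 5.9, 8.2, 10.0]   # slope near 2
    y2 = [1.0, 2.05, 2.95, 4.1, 5.0]  # slope near 1, equally tight fit

    for y in (y1, y2):
        r, _ = pearsonr(x, y)          # correlation: strength of linear fit
        slope = linregress(x, y).slope  # regression: steepness of the line
        print(f"r = {r:.3f}, slope = {slope:.2f}")

Both datasets yield r of about 0.999, yet their slopes differ by a factor of two, which is why you need a regression analysis, not a correlation coefficient, to find the slope.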

Quantitative research designs can be divided into two main categories:

  • Correlational and descriptive designs are used to investigate characteristics, averages, trends, and associations between variables.
  • Experimental and quasi-experimental designs are used to test causal relationships .

Qualitative research designs tend to be more flexible. Common types of qualitative design include case study , ethnography , and grounded theory designs.

A well-planned research design helps ensure that your methods match your research aims, that you collect high-quality data from credible sources, and that you use the right kind of analysis to answer your questions. This allows you to draw valid , trustworthy conclusions.

The priorities of a research design can vary depending on the field, but you usually have to specify:

  • Your research questions and/or hypotheses
  • Your overall approach (e.g., qualitative or quantitative )
  • The type of design you’re using (e.g., a survey , experiment , or case study )
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods (e.g., questionnaires , observations)
  • Your data collection procedures (e.g., operationalization , timing and data management)
  • Your data analysis methods (e.g., statistical tests  or thematic analysis )

A research design is a strategy for answering your   research question . It defines your overall approach and determines how you will collect and analyze data.

Questionnaires can be self-administered or researcher-administered.

Self-administered questionnaires can be delivered online or in paper-and-pen formats, in person or through mail. All questions are standardized so that all respondents receive the same questions with identical wording.

Researcher-administered questionnaires are interviews that take place by phone, in-person, or online between researchers and respondents. You can gain deeper insights by clarifying questions for respondents or asking follow-up questions.

You can organize the questions logically, with a clear progression from simple to complex, or randomly between respondents. A logical flow helps respondents process the questionnaire more easily and quickly, but it may lead to bias. Randomization can minimize the bias from order effects.

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. These questions are easier to answer quickly.

Open-ended or long-form questions allow respondents to answer in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered.

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analyzing data from people using questionnaires.

The third variable and directionality problems are two main reasons why correlation isn’t causation .

The third variable problem means that a confounding variable affects both variables to make them seem causally related when they are not.

The directionality problem is when two variables correlate and might actually have a causal relationship, but it’s impossible to conclude which variable causes changes in the other.

Correlation describes an association between variables : when one variable changes, so does the other. A correlation is a statistical indicator of the relationship between variables.

Causation means that changes in one variable bring about changes in the other (i.e., there is a cause-and-effect relationship between variables). The two variables are correlated with each other, and there's also a causal link between them.

While causation and correlation can exist simultaneously, correlation does not imply causation. In other words, correlation is simply a relationship where A relates to B—but A doesn't necessarily cause B to happen (or vice versa). Mistaking correlation for causation is a common error and can lead to the false cause fallacy .

Controlled experiments establish causality, whereas correlational studies only show associations between variables.

  • In an experimental design , you manipulate an independent variable and measure its effect on a dependent variable. Other variables are controlled so they can’t impact the results.
  • In a correlational design , you measure variables without manipulating any of them. You can test whether your variables change together, but you can’t be sure that one variable caused a change in another.

In general, correlational research is high in external validity while experimental research is high in internal validity .

A correlation is usually tested for two variables at a time, but you can test correlations between three or more variables.

A correlation coefficient is a single number that describes the strength and direction of the relationship between your variables.

Different types of correlation coefficients might be appropriate for your data based on their levels of measurement and distributions . The Pearson product-moment correlation coefficient (Pearson’s r ) is commonly used to assess a linear relationship between two quantitative variables.

A correlational research design investigates relationships between two variables (or more) without the researcher controlling or manipulating any of them. It’s a non-experimental type of quantitative research .

A correlation reflects the strength and/or direction of the association between two or more variables.

  • A positive correlation means that both variables change in the same direction.
  • A negative correlation means that the variables change in opposite directions.
  • A zero correlation means there’s no relationship between the variables.

Random error  is almost always present in scientific studies, even in highly controlled settings. While you can’t eradicate it completely, you can reduce random error by taking repeated measurements, using a large sample, and controlling extraneous variables .

You can avoid systematic error through careful design of your sampling , data collection , and analysis procedures. For example, use triangulation to measure your variables using multiple methods; regularly calibrate instruments or procedures; use random sampling and random assignment ; and apply masking (blinding) where possible.

Systematic error is generally a bigger problem in research.

With random error, multiple measurements will tend to cluster around the true value. When you’re collecting data from a large sample , the errors in different directions will cancel each other out.

Systematic errors are much more problematic because they can skew your data away from the true value. This can lead you to false conclusions ( Type I and II errors ) about the relationship between the variables you’re studying.

Random and systematic error are two types of measurement error.

Random error is a chance difference between the observed and true values of something (e.g., a researcher misreading a weighing scale records an incorrect measurement).

Systematic error is a consistent or proportional difference between the observed and true values of something (e.g., a miscalibrated scale consistently records weights as higher than they actually are).

On graphs, the explanatory variable is conventionally placed on the x-axis, while the response variable is placed on the y-axis.

  • If both variables are quantitative, use a scatterplot or a line graph.
  • If your explanatory variable is categorical, use a bar graph.

The term “ explanatory variable ” is sometimes preferred over “ independent variable ” because, in real world contexts, independent variables are often influenced by other variables. This means they aren’t totally independent.

Multiple independent variables may also be correlated with each other, so “explanatory variables” is a more appropriate term.

The difference between explanatory and response variables is simple:

  • An explanatory variable is the expected cause, and it explains the results.
  • A response variable is the expected effect, and it responds to other variables.

In a controlled experiment , all extraneous variables are held constant so that they can’t influence the results. Controlled experiments require:

  • A control group that receives a standard treatment, a fake treatment, or no treatment.
  • Random assignment of participants to ensure the groups are equivalent.

Depending on your study topic, there are various other methods of controlling variables .

There are 4 main types of extraneous variables :

  • Demand characteristics : environmental cues that encourage participants to conform to researchers’ expectations.
  • Experimenter effects : unintentional actions by researchers that influence study outcomes.
  • Situational variables : environmental variables that alter participants’ behaviors.
  • Participant variables : any characteristic or aspect of a participant’s background that could affect study results.

An extraneous variable is any variable that you’re not investigating that can potentially affect the dependent variable of your research study.

A confounding variable is a type of extraneous variable that not only affects the dependent variable, but is also related to the independent variable.

In a factorial design, multiple independent variables are tested.

If you test two variables, each level of one independent variable is combined with each level of the other independent variable to create different conditions.

Within-subjects designs have many potential threats to internal validity , but they are also very statistically powerful .

Advantages:

  • Only requires small samples
  • Statistically powerful
  • Removes the effects of individual differences on the outcomes

Disadvantages:

  • Internal validity threats reduce the likelihood of establishing a direct relationship between variables
  • Time-related effects, such as growth, can influence the outcomes
  • Carryover effects mean that the specific order of different treatments affects the outcomes

While a between-subjects design has fewer threats to internal validity , it also requires more participants for high statistical power than a within-subjects design .

Advantages:

  • Prevents carryover effects of learning and fatigue.
  • Shorter study duration.

Disadvantages:

  • Needs larger samples for high power.
  • Uses more resources to recruit participants, administer sessions, cover costs, etc.
  • Individual differences may be an alternative explanation for results.

Yes. Between-subjects and within-subjects designs can be combined in a single study when you have two or more independent variables (a factorial design). In a mixed factorial design, one variable is altered between subjects and another is altered within subjects.

In a between-subjects design , every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design , each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word “between” means that you’re comparing different conditions between groups, while the word “within” means you’re comparing different conditions within the same group.

Random assignment is used in experiments with a between-groups or independent measures design. In this research design, there’s usually a control group and one or more experimental groups. Random assignment helps ensure that the groups are comparable.

In general, you should always use random assignment in this type of experimental design when it is ethically possible and makes sense for your study topic.

To implement random assignment , assign a unique number to every member of your study’s sample .

Then, you can use a random number generator or a lottery method to randomly assign each number to a control or experimental group. You can also do so manually, by flipping a coin or rolling a die to randomly assign participants to groups.
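
A minimal Python sketch of this procedure, using made-up participant IDs and a fixed seed so the example is reproducible:

    import random

    participants = ["P01", "P02", "P03", "P04", "P05", "P06"]

    random.seed(42)                  # fixed seed for a reproducible example
    shuffled = participants[:]
    random.shuffle(shuffled)         # randomize the order of participants

    # Split the shuffled list into two equally sized groups
    half = len(shuffled) // 2
    control, experimental = shuffled[:half], shuffled[half:]
    print("control:", control)
    print("experimental:", experimental)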

Random selection, or random sampling , is a way of selecting members of a population for your study’s sample.

In contrast, random assignment is a way of sorting the sample into control and experimental groups.

Random sampling enhances the external validity or generalizability of your results, while random assignment improves the internal validity of your study.

In experimental research, random assignment is a way of placing participants from your sample into different groups using randomization. With this method, every member of the sample has a known or equal chance of being placed in a control group or an experimental group.

“Controlling for a variable” means measuring extraneous variables and accounting for them statistically to remove their effects on other variables.

Researchers often model control variable data along with independent and dependent variable data in regression analyses and ANCOVAs . That way, you can isolate the control variable’s effects from the relationship between the variables of interest.

Control variables help you establish a correlational or causal relationship between variables by enhancing internal validity .

If you don’t control relevant extraneous variables , they may influence the outcomes of your study, and you may not be able to demonstrate that your results are really an effect of your independent variable .

A control variable is any variable that’s held constant in a research study. It’s not a variable of interest in the study, but it’s controlled because it could influence the outcomes.

Including mediators and moderators in your research helps you go beyond studying a simple relationship between two variables for a fuller picture of the real world. They are important to consider when studying complex correlational or causal relationships.

Mediators are part of the causal pathway of an effect, and they tell you how or why an effect takes place. Moderators usually help you judge the external validity of your study by identifying the limitations of when the relationship between variables holds.

If something is a mediating variable :

  • It’s caused by the independent variable .
  • It influences the dependent variable .
  • When it's statistically accounted for, the association between the independent and dependent variables weakens, because the mediator transmits part of the effect.

A confounder is a third variable that affects variables of interest and makes them seem related when they are not. In contrast, a mediator is the mechanism of a relationship between two variables: it explains the process by which they are related.

A mediator variable explains the process through which two variables are related, while a moderator variable affects the strength and direction of that relationship.

There are three key steps in systematic sampling (a minimal code sketch follows the list):

  • Define and list your population , ensuring that it is not arranged in a cyclical or periodic pattern.
  • Decide on your sample size and calculate your sampling interval, k , by dividing the population size by your target sample size.
  • Choose every k th member of the population as your sample.
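
In code, these steps might look like the following minimal Python sketch (the population size, sample size, and naming are made up):

    import random

    population = [f"member_{i:03d}" for i in range(1, 501)]   # N = 500
    sample_size = 50
    k = len(population) // sample_size                        # interval k = 10

    start = random.randrange(k)       # random start between 0 and k - 1
    sample = population[start::k]     # every kth member from the start
    print(len(sample), sample[:3])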

Systematic sampling is a probability sampling method where researchers select members of the population at a regular interval – for example, by selecting every 15th person on a list of the population. If the population is in a random order, this can imitate the benefits of simple random sampling .

Yes, you can create a stratified sample using multiple characteristics, but you must ensure that every participant in your study belongs to one and only one subgroup. In this case, you multiply the numbers of subgroups for each characteristic to get the total number of groups.

For example, if you were stratifying by location with three subgroups (urban, rural, or suburban) and marital status with five subgroups (single, divorced, widowed, married, or partnered), you would have 3 x 5 = 15 subgroups.

You should use stratified sampling when your sample can be divided into mutually exclusive and exhaustive subgroups that you believe will take on different mean values for the variable that you’re studying.

Using stratified sampling will allow you to obtain more precise (with lower variance ) statistical estimates of whatever you are trying to measure.

For example, say you want to investigate how income differs based on educational attainment, but you know that this relationship can vary based on race. Using stratified sampling, you can ensure you obtain a large enough sample from each racial group, allowing you to draw more precise conclusions.

In stratified sampling , researchers divide subjects into subgroups called strata based on characteristics that they share (e.g., race, gender, educational attainment).

Once divided, each subgroup is randomly sampled using another probability sampling method.
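
A minimal Python sketch of this two-step procedure, using a made-up population and illustrative strata:

    import random

    # Made-up population: 300 people, each tagged with a location stratum
    locations = ["urban", "rural", "suburban"]
    population = [(f"person_{i}", locations[i % 3]) for i in range(300)]

    # Step 1: divide the population into strata by shared characteristic
    strata = {loc: [p for p, l in population if l == loc] for loc in locations}

    # Step 2: draw a simple random sample of 10 units from each stratum
    sample = {loc: random.sample(members, 10) for loc, members in strata.items()}
    print({loc: len(s) for loc, s in sample.items()})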

Cluster sampling is more time- and cost-efficient than other probability sampling methods , particularly when it comes to large samples spread across a wide geographical area.

However, it provides less statistical certainty than other methods, such as simple random sampling , because it is difficult to ensure that your clusters properly represent the population as a whole.

There are three types of cluster sampling : single-stage, double-stage and multi-stage clustering. In all three types, you first divide the population into clusters, then randomly select clusters for use in your sample.

  • In single-stage sampling , you collect data from every unit within the selected clusters.
  • In double-stage sampling , you select a random sample of units from within the clusters.
  • In multi-stage sampling , you repeat the procedure of randomly sampling elements from within the clusters until you have reached a manageable sample.

Cluster sampling is a probability sampling method in which you divide a population into clusters, such as districts or schools, and then randomly select some of these clusters as your sample.

The clusters should ideally each be mini-representations of the population as a whole.
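
For example, single-stage cluster sampling might look like this minimal Python sketch (cluster names and sizes are made up):

    import random

    # Made-up population grouped into clusters: 20 schools of 30 students
    clusters = {f"school_{i}": [f"s{i}_{j}" for j in range(30)]
                for i in range(20)}

    # Single-stage cluster sampling: randomly select whole clusters,
    # then include every unit within the selected clusters
    chosen = random.sample(sorted(clusters), 4)
    sample = [student for school in chosen for student in clusters[school]]
    print(chosen, len(sample))   # 4 schools, 120 students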

If properly implemented, simple random sampling is usually the best sampling method for ensuring both internal and external validity . However, it can sometimes be impractical and expensive to implement, depending on the size of the population to be studied.

If you have a list of every member of the population and the ability to reach whichever members are selected, you can use simple random sampling.

The American Community Survey  is an example of simple random sampling . In order to collect detailed data on the population of the US, the Census Bureau officials randomly select 3.5 million households per year and use a variety of methods to convince them to fill out the survey.

Simple random sampling is a type of probability sampling in which the researcher randomly selects a subset of participants from a population . Each member of the population has an equal chance of being selected. Data is then collected from as large a percentage as possible of this random subset.
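
A minimal Python sketch, assuming you have a complete sampling frame (the names and sizes here are made up):

    import random

    # Sampling frame: a complete list of the population (made up here)
    frame = [f"household_{i}" for i in range(10_000)]

    # Simple random sample: every member has an equal chance of selection
    sample = random.sample(frame, 500)
    print(len(sample), sample[:3])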

Quasi-experimental design is most useful in situations where it would be unethical or impractical to run a true experiment .

Quasi-experiments have lower internal validity than true experiments, but they often have higher external validity  as they can use real-world interventions instead of artificial laboratory settings.

A quasi-experiment is a type of research design that attempts to establish a cause-and-effect relationship. The main difference with a true experiment is that the groups are not randomly assigned.

Blinding is important to reduce research bias (e.g., observer bias , demand characteristics ) and ensure a study’s internal validity .

If participants know whether they are in a control or treatment group , they may adjust their behavior in ways that affect the outcome that researchers are trying to measure. If the people administering the treatment are aware of group assignment, they may treat participants differently and thus directly or indirectly influence the final results.

  • In a single-blind study , only the participants are blinded.
  • In a double-blind study , both participants and experimenters are blinded.
  • In a triple-blind study , the assignment is hidden not only from participants and experimenters, but also from the researchers analyzing the data.

Blinding means hiding who is assigned to the treatment group and who is assigned to the control group in an experiment .

A true experiment (a.k.a. a controlled experiment) always includes at least one control group that doesn’t receive the experimental treatment.

However, some experiments use a within-subjects design to test treatments without a control group. In these designs, you usually compare one group’s outcomes before and after a treatment (instead of comparing outcomes between different groups).

For strong internal validity , it’s usually best to include a control group if possible. Without a control group, it’s harder to be certain that the outcome was caused by the experimental treatment and not by other variables.

An experimental group, also known as a treatment group, receives the treatment whose effect researchers wish to study, whereas a control group does not. They should be identical in all other ways.

Individual Likert-type questions are generally considered ordinal data , because the items have a clear rank order, but the intervals between them are not even.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.

The type of data determines what statistical tests you should use to analyze your data.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviors. It is made up of 4 or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey , you present participants with Likert-type questions or statements, and a continuum of items, usually with 5 or 7 possible responses, to capture their degree of agreement.
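For illustration, here is a minimal Python sketch (with invented responses) of how individual Likert-type items are combined into an overall scale score:

    # Hypothetical responses: 5 Likert items, coded 1 (strongly disagree) to 5 (strongly agree).
    respondents = [
        [4, 5, 3, 4, 4],
        [2, 1, 2, 3, 2],
        [5, 5, 4, 5, 5],
    ]

    # Individual items are ordinal; summed scale scores are often treated as interval.
    scale_scores = [sum(items) for items in respondents]
    print(scale_scores)  # [20, 10, 24]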

In scientific research, concepts are the abstract ideas or phenomena that are being studied (e.g., educational achievement). Variables are properties or characteristics of the concept (e.g., performance at school), while indicators are ways of measuring or quantifying variables (e.g., yearly grade reports).

The process of turning abstract concepts into measurable variables and indicators is called operationalization .

There are various approaches to qualitative data analysis , but they all share five steps in common:

  • Prepare and organize your data.
  • Review and explore your data.
  • Develop a data coding system.
  • Assign codes to the data.
  • Identify recurring themes.

The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis , thematic analysis , and discourse analysis .

There are five common approaches to qualitative research :

  • Grounded theory involves collecting data in order to develop new theories.
  • Ethnography involves immersing yourself in a group or organization to understand its culture.
  • Narrative research involves interpreting stories to understand how people make sense of their experiences and perceptions.
  • Phenomenological research involves investigating phenomena through people’s lived experiences.
  • Action research links theory and practice in several cycles to drive innovative changes.

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses , by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.

Operationalization means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioral avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalize the variables that you want to measure.

When conducting research, collecting original data has significant advantages:

  • You can tailor data collection to your specific research aims (e.g. understanding the needs of your consumers or user testing your website)
  • You can control and standardize the process for high reliability and validity (e.g. choosing appropriate measurements and sampling methods )

However, there are also some drawbacks: data collection can be time-consuming, labor-intensive and expensive. In some cases, it’s more efficient to use secondary data that has already been collected by someone else, but the data might be less reliable.

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.

A confounding variable is closely related to both the independent and dependent variables in a study. An independent variable represents the supposed cause , while the dependent variable is the supposed effect . A confounding variable is a third variable that influences both the independent and dependent variables.

Failing to account for confounding variables can cause you to wrongly estimate the relationship between your independent and dependent variables.

To ensure the internal validity of your research, you must consider the impact of confounding variables. If you fail to account for them, you might over- or underestimate the causal relationship between your independent and dependent variables , or even find a causal relationship where none exists.

Yes, you can include multiple independent or dependent variables in a study, but including more than one of either type requires multiple research questions .

For example, if you are interested in the effect of a diet on health, you can use multiple measures of health: blood sugar, blood pressure, weight, pulse, and many more. Each of these is its own dependent variable with its own research question.

You could also choose to look at the effect of exercise levels as well as diet, or even the additional effect of the two combined. Each of these is a separate independent variable .

To ensure the internal validity of an experiment , you should only change one independent variable at a time.

No. The value of a dependent variable depends on an independent variable, so a variable cannot be both independent and dependent at the same time. It must be either the cause or the effect, not both!

You want to find out how blood sugar levels are affected by drinking diet soda and regular soda, so you conduct an experiment .

  • The type of soda – diet or regular – is the independent variable .
  • The level of blood sugar that you measure is the dependent variable – it changes depending on the type of soda.

Determining cause and effect is one of the most important parts of scientific research. It’s essential to know which is the cause – the independent variable – and which is the effect – the dependent variable.

In non-probability sampling , the sample is selected based on non-random criteria, and not every member of the population has a chance of being included.

Common non-probability sampling methods include convenience sampling , voluntary response sampling, purposive sampling , snowball sampling, and quota sampling .

Probability sampling means that every member of the target population has a known chance of being included in the sample.

Probability sampling methods include simple random sampling , systematic sampling , stratified sampling , and cluster sampling .
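As an illustration of one of these methods, here is a minimal Python sketch of systematic sampling (the population list and sampling interval are invented):

    import random

    population = [f"person_{i}" for i in range(1000)]
    k = 20                        # sampling interval (population size / sample size)

    random.seed(1)
    start = random.randrange(k)   # random starting point between 0 and k-1
    sample = population[start::k] # then every k-th member
    print(len(sample))            # 50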

Using careful research design and sampling procedures can help you avoid sampling bias . Oversampling can be used to correct undercoverage bias .

Some common types of sampling bias include self-selection bias , nonresponse bias , undercoverage bias , survivorship bias , pre-screening or advertising bias, and healthy user bias.

Sampling bias is a threat to external validity – it limits the generalizability of your findings to a broader group of people.

A sampling error is the difference between a population parameter and a sample statistic .

A statistic refers to measures about the sample , while a parameter refers to measures about the population .
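A short simulation can make this distinction concrete. In this hypothetical Python sketch, the population mean is the parameter, the sample mean is the statistic, and their difference is the sampling error:

    import random
    import statistics

    random.seed(7)
    # Hypothetical population of 100,000 heights (cm).
    population = [random.gauss(170, 10) for _ in range(100000)]
    mu = statistics.mean(population)        # parameter: describes the population

    sample = random.sample(population, 100)
    x_bar = statistics.mean(sample)         # statistic: describes the sample

    print(f"mu = {mu:.2f}, x_bar = {x_bar:.2f}, sampling error = {x_bar - mu:.2f}")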

Populations are used when a research question requires data from every member of the population. This is usually only feasible when the population is small and easily accessible.

Samples are used to make inferences about populations . Samples are easier to collect data from because they are practical, cost-effective, convenient, and manageable.

There are seven threats to external validity : selection bias , history, experimenter effect, Hawthorne effect , testing effect, aptitude-treatment interaction, and situation effect.

The two types of external validity are population validity (whether you can generalize to other groups of people) and ecological validity (whether you can generalize to other situations and settings).

The external validity of a study is the extent to which you can generalize your findings to different groups of people, situations, and measures.

Cross-sectional studies cannot establish a cause-and-effect relationship or analyze behavior over a period of time. To investigate cause and effect, you need to do a longitudinal study or an experimental study .

Cross-sectional studies are less expensive and time-consuming than many other types of study. They can provide useful insights into a population’s characteristics and identify correlations for further research.

Sometimes only cross-sectional data is available for analysis; other times your research question may only require a cross-sectional study to answer it.

Longitudinal studies can last anywhere from weeks to decades, although they tend to be at least a year long.

The 1970 British Cohort Study , which has collected data on the lives of 17,000 Brits since their births in 1970, is one well-known example of a longitudinal study .

Longitudinal studies are better to establish the correct sequence of events, identify changes over time, and provide insight into cause-and-effect relationships, but they also tend to be more expensive and time-consuming than other types of studies.

Longitudinal studies and cross-sectional studies are two different types of research design . In a cross-sectional study you collect data from a population at a specific point in time; in a longitudinal study you repeatedly collect data from the same sample over an extended period of time.

There are eight threats to internal validity : history, maturation, instrumentation, testing, selection bias , regression to the mean, social interaction and attrition .

Internal validity is the extent to which you can be confident that a cause-and-effect relationship established in a study cannot be explained by other factors.

In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts and meanings, use qualitative methods .
  • If you want to analyze a large amount of readily-available data, use secondary data. If you want data specific to your purposes with control over how it is generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

A confounding variable , also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design , it’s important to identify potential confounding variables and plan how you will reduce their impact.

Discrete and continuous variables are two types of quantitative variables :

  • Discrete variables represent counts (e.g. the number of objects in a collection).
  • Continuous variables represent measurable amounts (e.g. water volume or weight).

Quantitative variables are any variables where the data represent amounts (e.g. height, weight, or age).

Categorical variables are any variables where the data represent groups. This includes rankings (e.g. finishing places in a race), classifications (e.g. brands of cereal), and binary outcomes (e.g. coin flips).

You need to know what type of variables you are working with to choose the right statistical test for your data and interpret your results .

You can think of independent and dependent variables in terms of cause and effect: an independent variable is the variable you think is the cause , while a dependent variable is the effect .

In an experiment, you manipulate the independent variable and measure the outcome in the dependent variable. For example, in an experiment about the effect of nutrients on crop growth:

  • The  independent variable  is the amount of nutrients added to the crop field.
  • The  dependent variable is the biomass of the crops at harvest time.

Defining your variables, and deciding how you will manipulate and measure them, is an important part of experimental design .

Experimental design means planning a set of procedures to investigate a relationship between variables . To design a controlled experiment, you need:

  • A testable hypothesis
  • At least one independent variable that can be precisely manipulated
  • At least one dependent variable that can be precisely measured

When designing the experiment, you decide:

  • How you will manipulate the variable(s)
  • How you will control for any potential confounding variables
  • How many subjects or samples will be included in the study
  • How subjects will be assigned to treatment levels

Experimental design is essential to the internal and external validity of your experiment.

Internal validity is the degree of confidence that the causal relationship you are testing is not influenced by other factors or variables .

External validity is the extent to which your results can be generalized to other contexts.

The validity of your experiment depends on your experimental design .

Reliability and validity are both about how well a method measures something:

  • Reliability refers to the consistency of a measure (whether the results can be reproduced under the same conditions).
  • Validity refers to the accuracy of a measure (whether the results really do represent what they are supposed to measure).

If you are doing experimental research, you also have to consider the internal and external validity of your experiment.

A sample is a subset of individuals from a larger population . Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

In statistics, sampling allows you to test a hypothesis about the characteristics of a population.

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.

Methodology refers to the overarching strategy and rationale of your research project . It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.

Methods are the specific tools and procedures you use to collect and analyze data (for example, experiments, surveys , and statistical tests ).

In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section .

In a longer or more complex research project, such as a thesis or dissertation , you will probably include a methodology section , where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.


What random assignment does and does not do

Random assignment of patients to comparison groups stochastically tends, with increasing sample size or number of experiment replications, to minimize the confounding of treatment outcome differences by the effects of differences among these groups in unknown/unmeasured patient characteristics. To what degree such confounding is actually avoided we cannot know unless we have validly measured these patient variables, but completely avoiding it is quite unlikely. Even if this confounding were completely avoided, confounding by unmeasured Patient Variable x Treatment Variable interactions remains a possibility. And the causal power of the confounding variables is no less important for internal validity than the degree of confounding.

Copyright 2003 Wiley Periodicals, Inc. J Clin Psychol.
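The stochastic tendency described in this abstract can be illustrated with a small simulation. In this hypothetical Python sketch (all data invented), the average imbalance between two randomly assigned arms in an unmeasured binary patient characteristic shrinks as the sample size grows:

    import random
    import statistics

    def mean_imbalance(n, prevalence=0.3, reps=500):
        """Average absolute difference between two randomly assigned arms in
        the prevalence of an unmeasured binary patient characteristic."""
        diffs = []
        for _ in range(reps):
            arm_a, arm_b = [], []
            for _ in range(n):
                trait = random.random() < prevalence   # unmeasured characteristic
                (arm_a if random.random() < 0.5 else arm_b).append(trait)
            if arm_a and arm_b:
                diffs.append(abs(statistics.mean(arm_a) - statistics.mean(arm_b)))
        return statistics.mean(diffs)

    random.seed(0)
    for n in (20, 200, 2000):
        print(f"n = {n:5d}: mean imbalance = {mean_imbalance(n):.3f}")

The imbalance never reaches exactly zero in any single trial, which is the abstract's point: randomization minimizes confounding only stochastically, as sample size grows.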


Indian Dermatol Online J. v.10(5); Sep-Oct 2019

Selection of Control, Randomization, Blinding, and Allocation Concealment

Department of Pharmacology, Rampurhat Government Medical College, Rampurhat, Birbhum, West Bengal, India

Piyush Kumar: Department of Dermatology, Katihar Medical College and Hospital, Bihar, India

Rajesh Kumar: Department of Dermatology, Grant Medical College and Bombay Hospital Institute of Medical Sciences, Mumbai, Maharashtra, India

Nilay Kanti Das: Department of Dermatology, Bankura Sammilani Medical College, Kenduadihi, Bankura, West Bengal, India

Clinical trials looking at which treatment is better must have certain checks in place. Appropriate selection of the “control” group against which the investigational agent is compared is essential to rule out selection bias. Randomization is another step to minimize variability due to “confounders.” Through randomization, research participants have an equal chance of being selected into any treatment group of the study, generating comparable intervention groups and thereby distributing the confounders. A trial can be “open labeled” or “blinded.” By the process of blinding, we make the participant and/or the assessing physician unaware of the treatment to be received. Thus, the bias that can creep in owing to personal preference or a subjective component in the assessment of outcome can be eliminated. Concealment of allocation is done as the participant enters the trial. Concealment secures randomization and prevents “selection bias.”

Introduction

Clinical trials or interventional studies are to be designed in such a way that they give a comprehensive idea about the effectiveness/efficacy or safety of any new agent introduced for the treatment of a clinical condition. To establish that the improvement (or deterioration) is not happening by chance , it is essential that the treatment modality be compared against another modality of treatment (active control) or no treatment (placebo control). Thus, the role of a control is paramount: it decides the level of evidence of any trial and, in turn, the grade of recommendation.

Apart from having a control, there is another important factor that can affect the interpretation of results in any clinical trial: “bias.” Bias can arise while selecting the participants and the controls (selection bias), owing to confounding factors (confounding bias), and while assessing the outcome (assessment bias). Randomization is the method adopted to eliminate selection bias and confounding. It has two steps: generation of a random sequence and concealment of that sequence from the dispensing physician (allocation concealment). For eliminating assessment bias, the method adopted is blinding, and it can be applied at different levels: the trial participants, the assessing physician, and even the statistician analyzing the results.

This article elaborates on these facets of clinical trials and imparts practical clues for implementing them.

A. Selection of control

The “control” is used in clinical trials to nullify the effect of known or unknown factors (other than the factor being tested) on the research outcome and hence to increase the reliability of the results. For example, if a new topical medication is shown to be effective in psoriasis patients, the inclusion of a “control” in the study allows the investigator to conclude that the new medication is truly effective and that the improvement did not happen by chance.

Controls in case-control studies:

While choosing a control, two principles should be followed:[ 1 ]

  • The control or comparison group should be representative of the source population from which cases are derived
  • The controls should be selected independently of the exposure of interest.

For example, to test whether topical corticosteroid use causes unresponsiveness to standard antifungal therapy, it is essential that cases (those who have used steroids) and controls (those who have not) be chosen from the same socio-cultural strata, to eliminate confounders such as a hot and humid working environment, cleanliness, etc. This prevents “selection bias.” Thus, a control minimizes the effects of variables other than the variable under evaluation.

Controls in clinical trials:

In making a decision about a new treatment, the control arm is usually taken as the “gold standard treatment” (or the “best available treatment”). Comparison between the “test” arm (or “experimental” arm) and the “control” arm in such clinical studies makes a fruitful assessment of the new treatment compared with the previous one and increases the reliability of the study.

  • Placebo control: A placebo is an inactive substance that looks like the drug or treatment being tested.[ 2 ] A placebo control may be used where no standard treatment exists; otherwise, using a placebo control becomes unethical and constitutes substandard care in patients with active disease for which there is an approved treatment. Guidelines state that a condition of “clinical equipoise” should exist before a placebo-controlled trial is started: genuine uncertainty, at the design stage, about whether the control or the experimental intervention is better, e.g., a trial of an experimental drug for systemic sclerosis, for which there is no gold-standard therapy.[ 3 ] The use of “active” or “historical” controls (described below) can address this issue. Participants who receive the placebo will not benefit therapeutically from the trial; this therapeutic misconception should be dispelled during the informed consent process.
  • Dose-response control: A new dose of a known drug makes it a “new drug.”[ 4 ] During its clinical trial, the control is usually the previously used dose, e.g., 10 mg levocetirizine compared to 5 mg levocetirizine
  • Active control: Here, the control is an active drug, usually the standard therapy or known effective treatment
  • Historical control: Such control uses data from previously conducted studies and administrative databases. The studies that can be chosen for historical control can be a prospective natural history study or a control group from a previous randomized controlled trial.

B. Randomization

Minimizing variability of evaluation is at the core of conducting good research. The sources of this variability are known as “confounders.” Confounders can be known or unknown, and they can generate erroneous results through the effects of unmeasured variables. The process by which confounding is reduced is “randomization.” By randomization, research participants have an equal chance of being selected into any treatment group of the study, generating comparable intervention groups and thereby distributing the confounders; the confounders can, therefore, be ignored, and the difference in outcome can be explained by treatment alone. However, if one wants to gain greater experience with a new treatment or drug, one may opt for “unequal randomization”: randomization in a 2:1 ratio (2/3 of patients on the new treatment). The power of the study is somewhat reduced [from 0.95 (for 1:1) to 0.925 (for 2:1)], but this technique is statistically feasible and is especially suited to phase II randomized trials.

Benefits of randomization:

  • Balances the treatment groups with respect to baseline variability, known, and unknown confounding factors, thus eliminates “confounding bias”
  • Eliminates “selection bias.” Selection bias occurs when the researcher voluntarily or involuntarily steers the less sick patients to the treatment he feels is better and vice versa
  • Forms the basis for statistical tests.

How to randomize?

  • Computer-generated random number tables. Statistical software provides options for equal or unequal randomization, stratification, etc (a minimal sketch follows this list)
  • Random number tables from statistical textbooks
  • For smaller experiments: tossing coins (heads-control and tails-treatment), rolling dice (≤3-treatment and >3-control), or a shuffled deck of cards (even-Group A and odd-Group B). However, these manual methods have largely been replaced by the computer-based methods above.
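As a minimal sketch of the computer-based approach (the participant count and seed are invented), a simple randomization list can be generated in a few lines of Python:

    import random

    random.seed(2024)
    # Simple randomization: an independent "coin flip" for each of 20 participants.
    allocation = [random.choice(["treatment", "control"]) for _ in range(20)]
    print(allocation)
    print(allocation.count("treatment"), "treatment vs",
          allocation.count("control"), "control")
    # With small samples the two arms are often unequal in size, which is the
    # motivation for block randomization (discussed below).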

What is not randomization?

  • Alternate assignment: Study participants alternatively assigned to treatment, e.g., Odd numbers go to Treatment A and even numbers go to Treatment B
  • Assignment according to the date of entry to study, e.g., 2 weeks of the month to Group A, next 2 weeks to Group B
  • Assignment according to the days of the week, e.g., Monday OPD patients to Group A, Wednesday OPD patients to Group B. This gives rise to Berksonian bias.

Techniques for randomization

  • Simple randomization: Randomization according to a single sequence of random assignments is known as simple randomization.[ 5 ] Assignment to the treatment groups is random and not influenced by other variables, e.g., a toss of a coin or a roll of dice. This is the simplest approach to randomization. In clinical studies with large sample sizes (at least 1,000 participants), simple randomization usually balances the number of subjects in each group. However, it can be problematic in smaller samples, resulting in an unequal number of participants in the treatment arms.
  • Block randomization: Participants are randomized within blocks of fixed size, each containing an equal number of assignments to every treatment group, so that group sizes remain balanced throughout enrollment. A code sketch follows the example below.

e.g., block randomization of two treatment groups A and B, number of blocks = 5, size of blocks = 10, fixed-size blocks:

Block 1: A B B A A B B A B A
Block 2: A B B B A B A A B A
Block 3: A B B A B A B A A B
Block 4: A B B B B A A A A B
Block 5: B A B A B A B B A A
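The example above can be reproduced with a short permuted-block sketch in Python (the seed is arbitrary, so the exact sequences will differ):

    import random

    def block_randomize(n_blocks=5, block_size=10, groups=("A", "B")):
        """Permuted-block randomization: every block contains an equal number
        of assignments to each group, shuffled, so group sizes stay balanced."""
        per_group = block_size // len(groups)
        schedule = []
        for _ in range(n_blocks):
            block = list(groups) * per_group
            random.shuffle(block)
            schedule.append(block)
        return schedule

    random.seed(3)
    for i, block in enumerate(block_randomize(), start=1):
        print(f"Block {i}:", " ".join(block))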

  • Stratified randomization: When specific variables are known to influence the outcome, stratification of the sample is required to keep these variables (e.g., age, gender, weight, prognostic status) as similar as possible between the treatment groups. This method achieves balance in baseline characteristics. First, the variables are identified and strata are created; participants are assigned to their strata, and simple randomization is then applied within each stratum to assign subjects to a group. For example, when assessing the results of immunotherapy for viral warts, stratification can be done by type of wart, viz. verruca vulgaris, verruca plana, plantar wart, and condyloma acuminata. A sketch of this technique follows the list.
  • Cluster randomization: This method randomizes groups of people instead of individuals and is also known as “group randomization.” Cluster randomization is particularly favored to avoid contamination among people living in close vicinity, e.g., vaccine trials where all participants of the same locality receive the same vaccine, lifestyle modification studies, and studies involving nutritional interventions. Here, the sampling units are groups, not individuals.
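As an illustration of stratified randomization, this hypothetical Python sketch (participant identifiers invented) applies simple randomization within each stratum of wart type from the example above:

    import random

    # Hypothetical participants, stratified by wart type.
    participants = {
        "verruca_vulgaris": ["P1", "P2", "P3", "P4"],
        "verruca_plana":    ["P5", "P6"],
        "plantar_wart":     ["P7", "P8", "P9", "P10"],
    }

    random.seed(4)
    assignment = {}
    for stratum, members in participants.items():
        shuffled = members[:]
        random.shuffle(shuffled)
        half = len(shuffled) // 2
        # Simple randomization within each stratum keeps the strata
        # balanced across the two treatment arms.
        for person in shuffled[:half]:
            assignment[person] = "A"
        for person in shuffled[half:]:
            assignment[person] = "B"

    print(assignment)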

C. Blinding

A trial can be “open labeled” or “blinded.” By the process of blinding, we make the participant and/or the assessing physician unaware of the treatment to be received. Thus, the bias that can creep in owing to personal preference or a subjective component in the assessment of outcome (e.g., when a tool like a physician global score is used) can be eliminated. The process has now been extended to include the statistician analyzing the results, to make it foolproof. Thus, blinding helps eliminate intentional or unintentional bias, increases the objectivity of results, and ensures the credibility of study conclusions.

Types of blinding:

  • Open-labeled or unblinded: All parties involved in a study are aware of the treatment the participants are receiving. Although blinding is desirable, sometimes it may not be possible or feasible. This type of study design suffers from low credibility but may be acceptable if endpoints are indisputably objective (e.g., survival or death)
  • Single-blind: Participants might drop out of a study or give false assessments if they come to know that they are receiving “no treatment”; they might also develop a placebo effect if they know they are receiving the “new treatment.” These biases can be eliminated by single-blinding, in which one group of individuals (usually the participants) does not know which intervention is being received. Conventionally this means participant-blinded, but the blinded group can also be the outcome assessor. Thus, a single-blind trial can be either participant-blind or assessor-blind, and it is better to specify who is blinded instead of simply saying single-blind
  • Double-blind: Like participants, the investigator/observer may influence the results of the study if they are aware that a group of individuals is receiving a particular treatment. For example, if the endpoint is subjective (e.g., a physician global scale), they might record a more favorable response for the treatment of their preference; they might also influence participants’ assessment of a particular treatment during follow-up meetings. In double-blinding, neither the participant nor the investigator/observer/outcome assessor is aware of the treatment allotted. The investigator is the person carrying out the research; the observer or outcome assessor is the person who assesses the parameters of the study
  • Triple-blind: Triple-blinding is done to eliminate the bias of data analysts. In triple-blinding, the participant, investigator, and the data analyst are unaware of the treatment given.

However, instead of expressing whether the trial is single, double, or triple blinded, it is more pertinent to specify who exactly is going to be blinded.

Masking: It is a term used interchangeably with “blinding” and is usually used by ophthalmologists.

Advantages of blinding:

  • Avoids observation bias. For example, during the evaluation of a subjective score like the urticaria severity score, blinding prevents the investigator from favoring the test drug
  • Can also reduce the opportunity for bias to enter into the evaluation of the trial results owing to the knowledge of the treatment.

Procedures of blinding a trial:

  • By using identical-looking dosage forms: in a placebo-controlled trial, the placebo should match the active drug in shape, size, color, and odor. When comparing two active drugs, each group can receive one active and one matching dummy preparation (the “double-dummy” technique):

Investigational group = Active test drug + Placebo matching the control

Control group = Placebo matching the test drug + Active control

  • Both the active drugs can be taken out from their packaging and repacked in similar looking opaque containers. The containers can be labeled according to randomization
  • The observer can be blinded by separating the room where the person is dispensing the drug and the person observing the effects of the drug.

Assessment of the efficacy of blinding:

A trial can become unblinded in several ways: through the curiosity of participants or staff, through differences in the taste or smell of drug and placebo, or in a cross-over study, and ideal placebos are not always easy to procure or manufacture. The assessment of blinding should therefore be done prior to the decoding of randomization: the participants, investigators, and staff are asked to guess which treatment the participants received. If roughly 50% of guesses in each group are correct, blinding has been maintained. If substantially more than 50% are correct, there has been a breach of blinding. If substantially fewer than 50% are correct, one should suspect an unadmitted breach of blinding (deliberate wrong guessing).
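A minimal sketch of such an assessment, with invented guess data, simply compares the proportion of correct guesses in each arm against the 50% benchmark:

    # Hypothetical guesses collected before unblinding: True = the person
    # guessed the participant's assignment correctly.
    guesses = {
        "treatment": [True, False, True, False, False, True, False, True],
        "control":   [True, True, False, False, True, False, False, True],
    }

    for arm, correct in guesses.items():
        pct = 100 * sum(correct) / len(correct)
        print(f"{arm}: {pct:.0f}% correct guesses")
    # Rates well above 50% suggest a breach of blinding; rates well below 50%
    # raise suspicion of deliberate wrong guessing (unadmitted unblinding).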

Instances when blinding may be broken:

  • The study is completed and data analyzed
  • For an individual patient during an emergency, e.g., a road traffic accident involving a participant in an antihistamine trial for urticaria, or a participant in a psoriasis trial progressing to erythroderma.

Instances when blinding is not possible or difficult to achieve:

  • A surgical discipline is tested against a medical therapy, e.g., electrosurgery in pyogenic granuloma is tested against topical timolol
  • A “sham procedure” is an improvisation that can be used to blind a surgical arm, e.g., creating a dermal pocket without introducing any warty tissue, against regular auto-inoculation of a wart, to blind the placebo arm.[ 8 ] Ethical issues limit the use of sham procedures, but in this instance the authors argued that the procedure ruled out a psychological effect, which has a proven role in wart therapy.

What to do if blinding is not possible or ethical:

  • Researchers should ensure that the outcomes being measured are as objective as possible
  • In addition, a duplicate assessment of outcome may be considered and researchers should report the level of agreement achieved
  • Expertise-based trial design- It can be done for surgical procedures, where patients are randomly assigned to different surgeons
  • Partial blinding- Sometimes, independent blinded evaluators may be sufficient to reduce bias
  • Limitations and potential biases due to lack of blinding need to be acknowledged and discussed.

D. Allocation concealment

Concealment of allocation is done as the participant enters the trial. Concealment secures randomization and prevents “selection bias.”

Every researcher tries to prove his or her hypothesis correct. This can lead to conscious or unconscious steering of certain “good” patients to the desired group and others to the alternate group. If the investigator knows the randomization sequence, such bias can lead to an imbalance in the study, and wrong conclusions can be drawn. It can be avoided by the following allocation concealment techniques:

  • Third party randomization by phone or pharmacy. In large multi-centric trials, interactive-voice-response-service is used to ensure the allocation concealment in different centers
  • Sequentially numbered, opaque, sealed envelope (SNOSE) technique: the randomization group is written on a paper and is kept in an opaque sealed envelope. The envelope is labeled with a serial number. The investigator opens the sealed envelope once the patient has consented to participate and then assigns the treatment group accordingly
  • Sequentially numbered opaque containers: Similar to SNOSE, but here, instead of a piece of paper, the medicines are stored in opaque containers according to the randomization sequence, and the dispenser has no way of knowing which medicine is kept in which container.[ 9 ] Thus, the allocation is concealed.

E. Differences between allocation concealment and blinding

Allocation concealment hides the upcoming assignment up to the moment a participant is allocated, and so prevents selection bias; blinding hides the assignment after allocation, and so prevents assessment and observation bias.

To conclude, a randomized controlled trial (RCT) is the gold-standard study design for evaluating any therapeutic method and carries the highest level of evidence (Level Ib). Undoubtedly, as researchers, we are all interested in conducting RCTs and contributing to the knowledge of the scientific world. Choosing the correct control group and avoiding biases are the most important aspects of any RCT. There can be situations where blinding is not possible because of operational issues, but in every trial effort should be focused on randomization, which can eliminate two major biases: selection bias and confounding bias. Proper randomization ensures that baseline confounders are balanced; otherwise, complex statistical methods (e.g., multivariate analysis) are needed to balance them. This article is an attempt to provide practical tips for researchers interested in clinical trials, so that the data generated are more valid and credible.

Conflicts of interest.

There are no conflicts of interest.


The Definition of Random Assignment According to Psychology

Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."


Random assignment refers to the use of chance procedures in psychology experiments to ensure that each participant has the same opportunity to be assigned to any given group in a study, eliminating potential bias at the outset. Participants are randomly assigned to different groups, such as a treatment group versus a control group. In clinical research, randomized clinical trials are considered the gold standard for meaningful results.

Simple random assignment techniques might involve tactics such as flipping a coin, drawing names out of a hat, rolling dice, or assigning random numbers to a list of participants. It is important to note that random assignment differs from random selection .

While random selection refers to how participants are randomly chosen from a target population as representatives of that population, random assignment refers to how those chosen participants are then assigned to experimental groups.

Random Assignment In Research

To determine if changes in one variable will cause changes in another variable, psychologists must perform an experiment. Random assignment is a critical part of the experimental design that helps ensure the reliability of the study outcomes.

Researchers often begin by forming a testable hypothesis predicting that one variable of interest will have some predictable impact on another variable.

The variable that the experimenters will manipulate in the experiment is known as the independent variable , while the variable that they will then measure for different outcomes is known as the dependent variable. While there are different ways to look at relationships between variables, an experiment is the best way to get a clear idea if there is a cause-and-effect relationship between two or more variables.

Once researchers have formulated a hypothesis, conducted background research, and chosen an experimental design, it is time to find participants for their experiment. How exactly do researchers decide who will be part of an experiment? As mentioned previously, this is often accomplished through something known as random selection.

Random Selection

In order to generalize the results of an experiment to a larger group, it is important to choose a sample that is representative of the qualities found in that population. For example, if the total population is 60% female and 40% male, then the sample should reflect those same percentages.

Choosing a representative sample is often accomplished by randomly picking people from the population to be participants in a study. Random selection means that everyone in the group stands an equal chance of being chosen to minimize any bias. Once a pool of participants has been selected, it is time to assign them to groups.

By randomly assigning the participants into groups, the experimenters can be fairly sure that each group will have the same characteristics before the independent variable is applied.

Participants might be randomly assigned to the control group , which does not receive the treatment in question. The control group may receive a placebo or receive the standard treatment. Participants may also be randomly assigned to the experimental group , which receives the treatment of interest. In larger studies, there can be multiple treatment groups for comparison.

There are simple methods of random assignment, like rolling a die. There are also more complex techniques that use random number generators to remove human error.

There can also be random assignment to groups with pre-established rules or parameters. For example, if you want to have an equal number of men and women in each of your study groups, you might separate your sample into two groups (by sex) before randomly assigning each of those groups into the treatment group and control group.
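As a rough sketch of this procedure (with a hypothetical participant pool), the Python example below splits the sample by sex before randomly assigning each subsample, so both study groups end up with equal numbers of men and women:

    import random

    # Hypothetical participant pool, split by sex before assignment.
    men   = ["M1", "M2", "M3", "M4"]
    women = ["W1", "W2", "W3", "W4"]

    random.seed(5)
    groups = {"treatment": [], "control": []}
    for subsample in (men, women):
        shuffled = subsample[:]
        random.shuffle(shuffled)
        half = len(shuffled) // 2
        # Half of each subsample goes to each group, so sex is balanced.
        groups["treatment"] += shuffled[:half]
        groups["control"]   += shuffled[half:]

    print(groups)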

Random assignment is essential because it increases the likelihood that the groups are the same at the outset. With all characteristics being equal between groups, other than the application of the independent variable, any differences found between group outcomes can be more confidently attributed to the effect of the intervention.

Example of Random Assignment

Imagine that a researcher is interested in learning whether or not drinking caffeinated beverages prior to an exam will improve test performance. After randomly selecting a pool of participants, each person is randomly assigned to either the control group or the experimental group.

The participants in the control group consume a placebo drink prior to the exam that does not contain any caffeine. Those in the experimental group, on the other hand, consume a caffeinated beverage before taking the test.

Participants in both groups then take the test, and the researcher compares the results to determine if the caffeinated beverage had any impact on test performance.

A Word From Verywell

Random assignment plays an important role in the psychology research process. Not only does this process help eliminate possible sources of bias, but it also makes it easier to generalize the results of a tested sample of participants to a larger population.

Random assignment helps ensure that members of each group in the experiment are the same, which means that the groups are also likely more representative of what is present in the larger population of interest. Through the use of this technique, psychology researchers are able to study complex phenomena and contribute to our understanding of the human mind and behavior.


