What Is the Purpose of Research Design?


Research design, like any research framework, has its own methods and purpose. The purpose of research design is to provide a clear plan for the research, built around independent and dependent variables, and to account for the cause-and-effect relationships between those variables.

Methods that constitute research design can include:

  • Observations
  • Experiments
  • Working with archives
  • Other methods

As we’ve explained elsewhere, the methods of research always support the purpose and goals of the research.


Main Goals of Research Design

  • The main goal of research design is for a researcher to make sure that the conclusions they reach are justified. This means the research must confirm or refute the hypothesis.
  • Another purpose of research design is to broaden the researcher’s understanding of the topic and make them more aware of various places, groups, and settings.
  • Finally, research design allows the researcher to achieve an accurate understanding of the topic they are working on, and be able to explain the topic to others.

Usually, you cannot accomplish all three of these goals at the same time, but you can try to do so by using a multi-stage design in your study, which is also known as multi-stage sampling.
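To make the idea of multi-stage sampling concrete, here is a minimal Python sketch. The school names, the number of clusters, and the sample sizes are all hypothetical; a real study would draw from an actual sampling frame.

```python
import random

random.seed(42)  # purely for a reproducible illustration

# Hypothetical sampling frame: 30 schools (clusters), each with 200 student IDs
schools = {f"school_{i}": [f"s{i}_{j}" for j in range(200)] for i in range(30)}

# Stage 1: randomly select a subset of clusters (schools)
selected_schools = random.sample(list(schools), k=5)

# Stage 2: randomly select individuals within each selected cluster
sample = []
for school in selected_schools:
    sample.extend(random.sample(schools[school], k=20))

print(len(sample))  # 5 schools x 20 students = 100 participants
```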


Examples of Research Design

One should choose the type of research design corresponding to the particular purpose of their research. Here are some examples to choose from when you’re doing research.

General Structure and Writing Style

The chosen type should help the researcher test the hypothesis accurately; while collecting data, you have to select specific evidence that will reveal the differences between the variables. Before conducting research, think about what information you will need and how you're going to use it. Otherwise, you can get lost.

Action Research

When you’ve developed an understanding of the problem and begun planning an intervention strategy, action research design helps bring deviations to the surface. You make an observation and record the deviation. You then take an action and compare the outcome of having taken the action with the outcome of not having taken it.

Case Study

We would use a case study when studying a particular problem, taking broad samples and narrowing them down to subjects that are researchable. A case study is also useful when we need to check whether a certain theory applies to a real-world problem.

Causal Research

If A is X, then B is Y — that's how we can simplify causality. By comparing two or more variables, a researcher can understand the effect they have on each other. This effect can then be measured in terms of causality: i.e., what constitutes the cause and the effect.

Cohort Study

Cohort studies are most often used in medical research but are gaining popularity in the social sciences. They are based on a quantitative framework: the researcher records statistical occurrences within a subgroup whose members share features related to the study topic, rather than analyzing occurrences within an arbitrary group. Observation is the main method of data gathering in cohort studies.

How to Study Research Design

You can learn research design:

  • At universities
  • Through online tutorials
  • In online classes
  • By reading scientific papers

Research with FlowMapp’s Tools

FlowMapp specializes in strategic website planning and user experience analysis and develops special tools for such analysis. Although all of these products focus on web design, FlowMapp's analysis and planning tools can be used by a wide range of other professionals:

  • Developers
  • Copywriters
  • UX strategists
  • Researchers
  • Sales managers
  • Product managers
  • Project managers

User Flow Tool

User flow is a visual representation of the sequence of actions that users perform to achieve their goal. In effect, the User Flow tool allows you to look at the interaction between the user and the web product through the eyes of the user.

You can see what this looks like in the following example:

FlowMapp Flowchart UX design tool app interface

Wireframe tool

The Wireframe tool by FlowMapp is a powerful online tool that allows for the rapid creation of website prototypes. With its extensive library of templates for every website block, this tool enables designers to quickly build hi-fi prototypes. These wireframes can be easily shared via a link as a real webpage.

This tool is useful for quickly analyzing the labor required for website development, visualizing future project ideas, and effectively presenting them to stakeholders.

Notes tool

The Notes tool by FlowMapp is a content gathering system that allows you to store all types of data, files, and ideas in one place and collaborate on them with others. This tool is particularly useful for creating briefs, reports, and documentation, and for storing all project-related data.

During the research phase, the Notes tool is a valuable asset for gathering and organizing ideas, especially when working on UX/UI design, user behavior and user flow analysis, or website development projects.

Remember that a step-by-step approach is the one that will help you most. You can also borrow research formats from other fields and try more mixed methods of research. Don’t be afraid to experiment – register on our website and get access to all the tools mentioned above!


The Importance of Research Design: A Comprehensive Guide

Morten Pedersen

Research design plays a crucial role in conducting scientific studies and gaining meaningful insights. A well-designed study enhances the validity and reliability of the findings and allows other researchers to replicate the work. This comprehensive guide will provide an in-depth understanding of research design, its key components, different types, and its role in scientific inquiry. It will also discuss the necessary steps in developing a research design and highlight some of the challenges that researchers commonly face.


Understanding Research Design

Research design refers to the overall plan or strategy that outlines how a study is conducted. It serves as a blueprint for researchers, guiding them in their investigation, and helps ensure that the study objectives are met. Understanding research design is essential for researchers to effectively gather and analyze data to answer research questions.

When embarking on a research study, researchers must carefully consider the design they will use. The design determines the structure of the study, including the research questions, data collection methods, and analysis techniques. It provides clarity on how the study will be conducted and helps researchers determine the best approach to achieve their research objectives. A well-designed study increases the chances of obtaining valid and reliable results.

Definition and Purpose of Research Design

Research design is the framework that outlines the structure of a study, including the research questions, data collection methods, and analysis techniques. It provides a systematic approach to conducting research and ensures that all aspects of the study are carefully planned and executed.

The purpose of research design is to provide a clear roadmap for researchers to follow. It helps them define the research questions they want to answer and identify the variables they will study. By clearly defining the purpose of the study, researchers can ensure that their research design aligns with their objectives.

Key Components of Research Design

A research design consists of several key components that influence the study’s validity and reliability. These components include the research questions, variables and operational definitions, sampling techniques, data collection methods, and statistical analysis procedures.

The research questions are the foundation of any study. They guide the entire research process and help researchers focus their efforts. By formulating clear and concise research questions, researchers can ensure that their study addresses the specific issues they want to investigate.


Variables and operational definitions are also crucial components of research design. Variables are the concepts or phenomena that researchers want to measure or study. Operational definitions provide a clear and specific description of how these variables will be measured or observed. By clearly defining variables and their operational definitions, researchers can ensure that their study is consistent and replicable.

Sampling techniques play a vital role in research design as well. Researchers must carefully select the participants or samples they will study to ensure that their findings are generalizable to the larger population. Different sampling techniques, such as random sampling or purposive sampling, can be used depending on the research objectives and constraints.

Data collection methods are another important component of research design. Researchers must decide how they will collect data, whether through surveys, interviews, observations, or experiments. The choice of data collection method depends on the research questions and the type of data needed to answer them.

Finally, statistical analysis procedures are used to analyze the collected data and draw meaningful conclusions. Researchers must determine the appropriate statistical tests or techniques to use based on the nature of their data and research questions. The choice of statistical analysis procedures ensures that the data is analyzed accurately and that the results are valid and reliable.

Types of Research Design

Research design encompasses various types that researchers can choose depending on their research goals and the nature of the phenomenon being studied. Understanding the different types of research design is essential for researchers to select the most appropriate approach for their study.

When embarking on a research project, researchers must carefully consider the design they will employ. The design chosen will shape the entire study, from the data collection process to the analysis and interpretation of results. Let’s explore some of the most common types of research design in more detail.

Experimental Design

Experimental design involves manipulating one or more variables to observe their effect on the dependent variable. This type of design allows researchers to establish cause-and-effect relationships between variables by controlling for extraneous factors. Experimental design often relies on random assignment and control groups to minimize biases.

Imagine a group of researchers interested in studying the effects of a new teaching method on student performance. They could randomly assign students to two groups: one group would receive instruction using the new teaching method, while the other group would receive instruction using the traditional method. By comparing the performance of the two groups, the researchers can determine whether the new teaching method has a significant impact on student learning.

Experimental design provides a strong foundation for making causal claims, as it allows researchers to control for confounding variables and isolate the effects of the independent variable. However, it may not always be feasible or ethical to manipulate variables, leading researchers to explore alternative designs.
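As a rough illustration of the random assignment described above, the following Python sketch shuffles a hypothetical pool of students and splits it into a treatment group (new teaching method) and a control group (traditional method). The group sizes and participant labels are assumptions made only for the example.

```python
import random

random.seed(7)  # purely for a reproducible illustration

# Hypothetical participant pool
students = [f"student_{i:02d}" for i in range(40)]

# Random assignment: shuffle the pool, then split it into two equal groups
random.shuffle(students)
treatment_group = students[:20]  # receives the new teaching method
control_group = students[20:]    # receives the traditional method

print(len(treatment_group), len(control_group))  # 20 20
```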


Non-Experimental Design

Non-experimental design is used when it is not feasible or ethical to manipulate variables. This design relies on naturally occurring variations in data and focuses on observing and describing relationships between variables. Non-experimental design can be useful for exploratory research or when studying phenomena that cannot be controlled, such as human behavior.

For instance, researchers interested in studying the relationship between socioeconomic status and health outcomes may collect data from a large sample of individuals and analyze the existing differences. By examining the data, they can determine whether there is a correlation between socioeconomic status and health, without manipulating any variables.

Non-experimental design allows researchers to study real-world phenomena in their natural setting, providing valuable insights into complex social, psychological, and economic processes. However, it is important to note that non-experimental designs cannot establish causality, as there may be other variables at play that influence the observed relationships.
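In practice, a non-experimental analysis of this kind often comes down to a simple correlation. The sketch below uses invented income and health figures with SciPy's pearsonr to estimate the association; as noted above, the coefficient describes a relationship but says nothing about causation.

```python
from scipy.stats import pearsonr

# Hypothetical observational data: household income (in $1,000s) and a
# self-reported health score (0-100) for the same ten individuals
income = [22, 35, 41, 48, 55, 63, 72, 80, 95, 110]
health = [58, 61, 60, 66, 70, 68, 74, 79, 82, 85]

r, p_value = pearsonr(income, health)
print(f"r = {r:.2f}, p = {p_value:.4f}")
# A strong positive r only describes an association; it does not show that
# higher income causes better health (or vice versa).
```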

Quasi-Experimental Design

Quasi-experimental design resembles experimental design but lacks the element of random assignment. In situations where random assignment is not possible or practical, researchers can utilize quasi-experimental designs to gather data and make inferences. However, caution must be exercised when drawing causal conclusions from quasi-experimental studies.

Consider a scenario where researchers are interested in studying the effects of a new drug on patient recovery time. They cannot randomly assign patients to receive the drug or a placebo due to ethical considerations. Instead, they can compare the recovery times of patients who voluntarily choose to take the drug with those who do not. While this design allows for data collection and analysis, it is important to acknowledge that other factors, such as patient motivation or severity of illness, may influence the observed outcomes.

Quasi-experimental designs are valuable when experimental designs are not feasible or ethical. They provide an opportunity to explore relationships and gather data in real-world contexts. However, researchers must be cautious when interpreting the results, as causal claims may be limited due to the lack of random assignment.

By understanding the different types of research design, researchers can make informed decisions about the most appropriate approach for their study. Each design offers unique advantages and limitations, and the choice depends on the research question, available resources, and ethical considerations. Regardless of the design chosen, rigorous methodology and careful data analysis are crucial for producing reliable and valid research findings.

The Role of Research Design in Scientific Inquiry

A well-designed research study enhances the validity and reliability of the findings. Research design plays a crucial role in ensuring the scientific rigor of a study and facilitates the replication of studies by other researchers. Understanding the role of research design in scientific inquiry is vital for researchers to conduct impactful and robust research.

Ensuring Validity and Reliability

Research design plays a critical role in ensuring the validity and reliability of the study’s findings. Validity refers to the degree to which the study measures what it intends to measure, while reliability pertains to the consistency and stability of the results. Through careful consideration of the research design, researchers can minimize potential biases and increase the accuracy of their measurements.

Facilitating Replication of Studies

A robust research design allows for the replication of studies by other researchers. Replication plays a vital role in the scientific process as it helps confirm the validity and generalizability of research findings. By clearly documenting the research design, researchers enable others to reproduce the study and validate the results, thereby contributing to the cumulative knowledge in a field.

Steps in Developing a Research Design

Developing a research design involves a systematic process that includes several important steps. Researchers need to carefully consider each step to ensure that their study is well-designed and capable of addressing their research questions effectively.

Identifying Research Questions

The first step in developing a research design is to identify and define the research questions or hypotheses. Researchers need to clearly articulate what they aim to investigate and what specific information they want to gather. Clear research questions provide guidance for the subsequent steps in the research design process.

Selecting Appropriate Design Type

Once the research questions are identified, researchers need to select the most appropriate type of research design. The choice of design depends on various factors, including the research goals, the nature of the research questions, and the available resources. Careful consideration of these factors is crucial to ensure that the chosen design aligns with the study objectives.

Determining Data Collection Methods

After selecting the research design, researchers need to determine the most suitable data collection methods. Depending on the research questions and the type of data required, researchers can utilize a range of methods, such as surveys, interviews, observations, or experiments. The chosen methods should align with the research objectives and allow for the collection of high-quality data.

One of the most important considerations when designing a study in human behavior research is participant recruitment. We have written a comprehensive guide on best practices and pitfalls to be aware of when recruiting participants.

Enhancing Research Design with iMotions and Biosensors

Introduction to Enhanced Research Design

In the realm of scientific studies, especially within human cognitive-behavioral research, the deployment of advanced technologies such as iMotions software and biosensors has revolutionized research design. This chapter delves into how these technologies can be integrated into various research designs, improving the depth, accuracy, and reliability of scientific inquiries.

Integrating iMotions in Research Design

iMotions Software: A Key to Multimodal Data Integration

The iMotions platform stands as a pivotal tool in modern research design. It’s designed to integrate data from a plethora of biosensors, providing a comprehensive analysis of human behavior. This software facilitates the synchronizing of physiological, cognitive, and emotional responses with external stimuli, thus enriching the understanding of human behavior in various contexts.

Biosensors: Gateways to Deeper Insights

Biosensors, including eye trackers, EEG, GSR, ECG, and facial expression analysis tools, provide nuanced insights into the subconscious and conscious aspects of human behavior. These tools help researchers in capturing data that is often unattainable through traditional data collection methods like surveys and interviews.

Application in Different Research Designs

  • Eye Tracking : In experimental designs, where the impact of visual stimuli is crucial, eye trackers can reveal how subjects interact with these stimuli, thereby offering insights into cognitive processes and attention.
  • EEG : EEG biosensors allow researchers to monitor brain activity in response to controlled experimental manipulations, offering a window into cognitive and emotional responses.


  • Facial Expression Analysis : In observational studies, analyzing facial expressions can provide objective data on emotional responses in natural settings, complementing subjective self-reports.
  • GSR/EDA : These tools measure physiological arousal in real-life scenarios, giving researchers insights into emotional states without the need for intrusive measures.
  • EMG : In studies where direct manipulation isn’t feasible, EMG can indicate subtle responses to stimuli, which might be overlooked in traditional observational methods.
  • ECG/PPG : These sensors can be used to understand the impact of various interventions on physiological states such as stress or relaxation.

Streamlining Research Design with iMotions

The iMotions platform offers a streamlined process for integrating various biosensors into a research design. Researchers can easily design experiments, collect multimodal data, and analyze results in a unified interface. This reduces the complexity often associated with handling multiple streams of data and ensures a cohesive and comprehensive research approach.

Integrating iMotions software and biosensors into research design opens new horizons for scientific inquiry. This technology enhances the depth and breadth of data collection, paving the way for more nuanced and comprehensive findings.

Whether in experimental, non-experimental, or quasi-experimental designs, iMotions and biosensors offer invaluable tools for researchers aiming to uncover the intricate layers of human behavior and cognitive processes. The future of research design is undeniably intertwined with the advancements in these technologies, leading to more robust, reliable, and insightful scientific discoveries.

Challenges in Research Design

Research design can present several challenges that researchers need to overcome to conduct reliable and valid studies. Being aware of these challenges is essential for researchers to address them effectively and ensure the integrity of their research.

Ethical Considerations

Research design must adhere to ethical guidelines and principles to protect the rights and well-being of participants. Researchers need to obtain informed consent, ensure participant confidentiality, and minimize potential harm or discomfort. Ethical considerations should be carefully integrated into the research design to promote ethical research practices.

Practical Limitations

Researchers often face practical limitations that may impact the design and execution of their studies. Limited resources, time constraints, access to participants or data, and logistical challenges can pose obstacles during the research process. Researchers need to navigate these limitations and make thoughtful choices to ensure the feasibility and quality of their research.

Research design is a vital aspect of conducting scientific studies. It provides a structured framework for researchers to answer their research questions and obtain reliable and valid results. By understanding the different types of research design and following the necessary steps in developing a research design, researchers can enhance the rigor and impact of their studies.

However, researchers must also be mindful of the challenges they may encounter, such as ethical considerations and practical limitations, and take appropriate measures to address them. Ultimately, a well-designed research study contributes to the advancement of knowledge and promotes evidence-based decision-making in various fields.


Research Design | Step-by-Step Guide with Examples

Published on 5 May 2022 by Shona McCombes. Revised on 20 March 2023.

A research design is a strategy for answering your research question  using empirical data. Creating a research design means making decisions about:

  • Your overall aims and approach
  • The type of research design you’ll use
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods
  • The procedures you’ll follow to collect data
  • Your data analysis methods

A well-planned research design helps ensure that your methods match your research aims and that you use the right kind of analysis for your data.

Table of contents

  • Step 1: Consider your aims and approach
  • Step 2: Choose a type of research design
  • Step 3: Identify your population and sampling method
  • Step 4: Choose your data collection methods
  • Step 5: Plan your data collection procedures
  • Step 6: Decide on your data analysis strategies
  • Frequently asked questions

Step 1: Consider your aims and approach

Before you can start designing your research, you should already have a clear idea of the research question you want to investigate.

There are many different ways you could go about answering this question. Your research design choices should be driven by your aims and priorities – start by thinking carefully about what you want to achieve.

The first choice you need to make is whether you’ll take a qualitative or quantitative approach.

Qualitative research designs tend to be more flexible and inductive , allowing you to adjust your approach based on what you find throughout the research process.

Quantitative research designs tend to be more fixed and deductive , with variables and hypotheses clearly defined in advance of data collection.

It’s also possible to use a mixed methods design that integrates aspects of both approaches. By combining qualitative and quantitative insights, you can gain a more complete picture of the problem you’re studying and strengthen the credibility of your conclusions.

Practical and ethical considerations when designing research

As well as scientific considerations, you need to think practically when designing your research. If your research involves people or animals, you also need to consider research ethics .

  • How much time do you have to collect data and write up the research?
  • Will you be able to gain access to the data you need (e.g., by travelling to a specific location or contacting specific people)?
  • Do you have the necessary research skills (e.g., statistical analysis or interview techniques)?
  • Will you need ethical approval ?

At each stage of the research design process, make sure that your choices are practically feasible.


Step 2: Choose a type of research design

Within both qualitative and quantitative approaches, there are several types of research design to choose from. Each type provides a framework for the overall shape of your research.

Types of quantitative research designs

Quantitative designs can be split into four main types. Experimental and quasi-experimental designs allow you to test cause-and-effect relationships, while descriptive and correlational designs allow you to measure variables and describe relationships between them.

With descriptive and correlational designs, you can get a clear picture of characteristics, trends, and relationships as they exist in the real world. However, you can’t draw conclusions about cause and effect (because correlation doesn’t imply causation ).

Experiments are the strongest way to test cause-and-effect relationships without the risk of other variables influencing the results. However, their controlled conditions may not always reflect how things work in the real world. They’re often also more difficult and expensive to implement.

Types of qualitative research designs

Qualitative designs are less strictly defined. This approach is about gaining a rich, detailed understanding of a specific context or phenomenon, and you can often be more creative and flexible in designing your research.

The table below shows some common types of qualitative design. They often have similar approaches in terms of data collection, but focus on different aspects when analysing the data.

Step 3: Identify your population and sampling method

Your research design should clearly define who or what your research will focus on, and how you’ll go about choosing your participants or subjects.

In research, a population is the entire group that you want to draw conclusions about, while a sample is the smaller group of individuals you’ll actually collect data from.

Defining the population

A population can be made up of anything you want to study – plants, animals, organisations, texts, countries, etc. In the social sciences, it most often refers to a group of people.

For example, will you focus on people from a specific demographic, region, or background? Are you interested in people with a certain job or medical condition, or users of a particular product?

The more precisely you define your population, the easier it will be to gather a representative sample.

Sampling methods

Even with a narrowly defined population, it’s rarely possible to collect data from every individual. Instead, you’ll collect data from a sample.

To select a sample, there are two main approaches: probability sampling and non-probability sampling . The sampling method you use affects how confidently you can generalise your results to the population as a whole.

Probability sampling is the most statistically valid option, but it’s often difficult to achieve unless you’re dealing with a very small and accessible population.

For practical reasons, many studies use non-probability sampling, but it’s important to be aware of the limitations and carefully consider potential biases. You should always make an effort to gather a sample that’s as representative as possible of the population.
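The contrast between the two approaches can be shown in a short Python sketch. The population size, sample size, and the "first 200 people who respond" convenience rule are hypothetical.

```python
import random

random.seed(1)  # purely for a reproducible illustration

# Hypothetical sampling frame for a population of 5,000 people
population = [f"person_{i:04d}" for i in range(5000)]

# Probability sampling: a simple random sample, where every member of the
# frame has the same known chance of selection
probability_sample = random.sample(population, k=200)

# Non-probability sampling (convenience): e.g. the first 200 people who
# happen to respond -- easy to collect, but prone to selection bias
convenience_sample = population[:200]

print(len(probability_sample), len(convenience_sample))  # 200 200
```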

Case selection in qualitative research

In some types of qualitative designs, sampling may not be relevant.

For example, in an ethnography or a case study, your aim is to deeply understand a specific context, not to generalise to a population. Instead of sampling, you may simply aim to collect as much data as possible about the context you are studying.

In these types of design, you still have to carefully consider your choice of case or community. You should have a clear rationale for why this particular case is suitable for answering your research question.

For example, you might choose a case study that reveals an unusual or neglected aspect of your research problem, or you might choose several very similar or very different cases in order to compare them.

Step 4: Choose your data collection methods

Data collection methods are ways of directly measuring variables and gathering information. They allow you to gain first-hand knowledge and original insights into your research problem.

You can choose just one data collection method, or use several methods in the same study.

Survey methods

Surveys allow you to collect data about opinions, behaviours, experiences, and characteristics by asking people directly. There are two main survey methods to choose from: questionnaires and interviews.

Observation methods

Observations allow you to collect data unobtrusively, observing characteristics, behaviours, or social interactions without relying on self-reporting.

Observations may be conducted in real time, taking notes as you observe, or you might make audiovisual recordings for later analysis. They can be qualitative or quantitative.

Other methods of data collection

There are many other ways you might collect data depending on your field and topic.

If you’re not sure which methods will work best for your research design, try reading some papers in your field to see what data collection methods they used.

Secondary data

If you don’t have the time or resources to collect data from the population you’re interested in, you can also choose to use secondary data that other researchers already collected – for example, datasets from government surveys or previous studies on your topic.

With this raw data, you can do your own analysis to answer new research questions that weren’t addressed by the original study.

Using secondary data can expand the scope of your research, as you may be able to access much larger and more varied samples than you could collect yourself.

However, it also means you don’t have any control over which variables to measure or how to measure them, so the conclusions you can draw may be limited.

Step 5: Plan your data collection procedures

As well as deciding on your methods, you need to plan exactly how you’ll use these methods to collect data that’s consistent, accurate, and unbiased.

Planning systematic procedures is especially important in quantitative research, where you need to precisely define your variables and ensure your measurements are reliable and valid.

Operationalisation

Some variables, like height or age, are easily measured. But often you’ll be dealing with more abstract concepts, like satisfaction, anxiety, or competence. Operationalisation means turning these fuzzy ideas into measurable indicators.

If you’re using observations , which events or actions will you count?

If you’re using surveys , which questions will you ask and what range of responses will be offered?

You may also choose to use or adapt existing materials designed to measure the concept you’re interested in – for example, questionnaires or inventories whose reliability and validity have already been established.
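As a small illustration of operationalisation, the sketch below turns the fuzzy concept of "satisfaction" into a measurable score by averaging five hypothetical Likert items (rated 1–5), reverse-scoring one negatively worded item. The item count and scoring rule are assumptions for the example, not a validated instrument.

```python
# Hypothetical operationalisation: "satisfaction" = mean of five Likert items
# (1-5), where item 3 is negatively worded and therefore reverse-scored
def satisfaction_score(responses):
    reverse_scored = {2}  # zero-based index of negatively worded items
    adjusted = [
        (6 - score) if i in reverse_scored else score
        for i, score in enumerate(responses)
    ]
    return sum(adjusted) / len(adjusted)

print(satisfaction_score([4, 5, 2, 4, 3]))  # item 3 (2) becomes 4 -> mean 4.0
```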

Reliability and validity

Reliability means your results can be consistently reproduced , while validity means that you’re actually measuring the concept you’re interested in.

For valid and reliable results, your measurement materials should be thoroughly researched and carefully designed. Plan your procedures to make sure you carry out the same steps in the same way for each participant.

If you’re developing a new questionnaire or other instrument to measure a specific concept, running a pilot study allows you to check its validity and reliability in advance.

Sampling procedures

As well as choosing an appropriate sampling method, you need a concrete plan for how you’ll actually contact and recruit your selected sample.

That means making decisions about things like:

  • How many participants do you need for an adequate sample size?
  • What inclusion and exclusion criteria will you use to identify eligible participants?
  • How will you contact your sample – by mail, online, by phone, or in person?

If you’re using a probability sampling method, it’s important that everyone who is randomly selected actually participates in the study. How will you ensure a high response rate?

If you’re using a non-probability method, how will you avoid bias and ensure a representative sample?

Data management

It’s also important to create a data management plan for organising and storing your data.

Will you need to transcribe interviews or perform data entry for observations? You should anonymise and safeguard any sensitive data, and make sure it’s backed up regularly.

Keeping your data well organised will save time when it comes to analysing them. It can also help other researchers validate and add to your findings.

Step 6: Decide on your data analysis strategies

On their own, raw data can’t answer your research question. The last step of designing your research is planning how you’ll analyse the data.

Quantitative data analysis

In quantitative research, you’ll most likely use some form of statistical analysis . With statistics, you can summarise your sample data, make estimates, and test hypotheses.

Using descriptive statistics , you can summarise your sample data in terms of:

  • The distribution of the data (e.g., the frequency of each score on a test)
  • The central tendency of the data (e.g., the mean to describe the average score)
  • The variability of the data (e.g., the standard deviation to describe how spread out the scores are)

The specific calculations you can do depend on the level of measurement of your variables.
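For example, a handful of invented test scores can be summarised with Python's standard library as follows; the figures are purely illustrative of distribution, central tendency, and variability.

```python
from collections import Counter
from statistics import mean, stdev

# Hypothetical test scores from a sample of 12 participants
scores = [55, 62, 62, 70, 70, 70, 75, 78, 80, 85, 90, 95]

print(Counter(scores))                   # distribution: frequency of each score
print(f"mean = {mean(scores):.1f}")      # central tendency
print(f"std dev = {stdev(scores):.1f}")  # variability (sample standard deviation)
```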

Using inferential statistics , you can:

  • Make estimates about the population based on your sample data.
  • Test hypotheses about a relationship between variables.

Regression and correlation tests look for associations between two or more variables, while comparison tests (such as t tests and ANOVAs ) look for differences in the outcomes of different groups.

Your choice of statistical test depends on various aspects of your research design, including the types of variables you’re dealing with and the distribution of your data.
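As a minimal example of a comparison test, an independent-samples t test can be run with SciPy; the two groups' scores below are invented for illustration only.

```python
from scipy.stats import ttest_ind

# Hypothetical post-test scores for two independent groups
new_method = [78, 82, 85, 74, 90, 88, 81, 79]
traditional = [72, 75, 70, 68, 80, 74, 71, 77]

t_stat, p_value = ttest_ind(new_method, traditional)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests the difference in group means is unlikely to be
# due to sampling variation alone.
```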

Qualitative data analysis

In qualitative research, your data will usually be very dense with information and ideas. Instead of summing it up in numbers, you’ll need to comb through the data in detail, interpret its meanings, identify patterns, and extract the parts that are most relevant to your research question.

Two of the most common approaches to doing this are thematic analysis and discourse analysis .

There are many other ways of analysing qualitative data depending on the aims of your research. To get a sense of potential approaches, try reading some qualitative research papers in your field.

Frequently asked questions

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research.

For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

Statistical sampling allows you to test a hypothesis about the characteristics of a population. There are various sampling methods you can use to ensure that your sample is representative of the population as a whole.

Operationalisation means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioural avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalise the variables that you want to measure.

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis, use quantitative methods. If you want to explore ideas, thoughts, and meanings, use qualitative methods.
  • If you want to analyse a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how they are generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.


Grad Coach

Research Design 101

Everything You Need To Get Started (With Examples)

By: Derek Jansen (MBA) | Reviewers: Eunice Rautenbach (DTech) & Kerryn Warren (PhD) | April 2023

Research design for qualitative and quantitative studies

Navigating the world of research can be daunting, especially if you’re a first-time researcher. One concept you’re bound to run into fairly early in your research journey is that of “research design”. Here, we’ll guide you through the basics using practical examples, so that you can approach your research with confidence.

Overview: Research Design 101

  • What is research design?
  • Research design types for quantitative studies
  • Video explainer: quantitative research design
  • Research design types for qualitative studies
  • Video explainer: qualitative research design
  • How to choose a research design
  • Key takeaways

Research design refers to the overall plan, structure or strategy that guides a research project , from its conception to the final data analysis. A good research design serves as the blueprint for how you, as the researcher, will collect and analyse data while ensuring consistency, reliability and validity throughout your study.

Understanding different types of research designs is essential, as it helps ensure that your approach is suitable given your research aims, objectives and questions, as well as the resources you have available to you. Without a clear big-picture view of how you’ll design your research, you run the risk of making misaligned choices in terms of your methodology – especially your sampling, data collection and data analysis decisions.

The problem with defining research design…

One of the reasons students struggle with a clear definition of research design is because the term is used very loosely across the internet, and even within academia.

Some sources claim that the three research design types are qualitative, quantitative and mixed methods, which isn’t quite accurate (these just refer to the type of data that you’ll collect and analyse). Other sources state that research design refers to the sum of all your design choices, suggesting it’s more like a research methodology. Others run off on other less common tangents. No wonder there’s confusion!

In this article, we’ll clear up the confusion. We’ll explain the most common research design types for both qualitative and quantitative research projects, whether that is for a full dissertation or thesis, or a smaller research paper or article.


Research Design: Quantitative Studies

Quantitative research involves collecting and analysing data in a numerical form. Broadly speaking, there are four types of quantitative research designs: descriptive, correlational, experimental, and quasi-experimental.

Descriptive Research Design

As the name suggests, descriptive research design focuses on describing existing conditions, behaviours, or characteristics by systematically gathering information without manipulating any variables. In other words, there is no intervention on the researcher’s part – only data collection.

For example, if you’re studying smartphone addiction among adolescents in your community, you could deploy a survey to a sample of teens asking them to rate their agreement with certain statements that relate to smartphone addiction. The collected data would then provide insight regarding how widespread the issue may be – in other words, it would describe the situation.

The key defining attribute of this type of research design is that it purely describes the situation. In other words, descriptive research design does not explore potential relationships between different variables or the causes that may underlie those relationships. Therefore, descriptive research is useful for generating insight into a research problem by describing its characteristics. By doing so, it can provide valuable insights and is often used as a precursor to other research design types.

Correlational Research Design

Correlational design is a popular choice for researchers aiming to identify and measure the relationship between two or more variables without manipulating them . In other words, this type of research design is useful when you want to know whether a change in one thing tends to be accompanied by a change in another thing.

For example, if you wanted to explore the relationship between exercise frequency and overall health, you could use a correlational design to help you achieve this. In this case, you might gather data on participants’ exercise habits, as well as records of their health indicators like blood pressure, heart rate, or body mass index. Thereafter, you’d use a statistical test to assess whether there’s a relationship between the two variables (exercise frequency and health).

As you can see, correlational research design is useful when you want to explore potential relationships between variables that cannot be manipulated or controlled for ethical, practical, or logistical reasons. It is particularly helpful for developing predictions, and given that it doesn’t involve the manipulation of variables, it can be implemented at a larger scale more easily than experimental designs (which we’ll look at next).

That said, it’s important to keep in mind that correlational research design has limitations – most notably that it cannot be used to establish causality. In other words, correlation does not equal causation. To establish causality, you’ll need to move into the realm of experimental design, coming up next…


Experimental Research Design

Experimental research design is used to determine whether there is a causal relationship between two or more variables. With this type of research design, you, as the researcher, manipulate one variable (the independent variable) while controlling for other variables, and observe the effect on the outcome (the dependent variable). Doing so allows you to draw conclusions about potential causality.

For example, if you wanted to measure if/how different types of fertiliser affect plant growth, you could set up several groups of plants, with each group receiving a different type of fertiliser, as well as one with no fertiliser at all. You could then measure how much each plant group grew (on average) over time and compare the results from the different groups to see which fertiliser was most effective.
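Because this example compares more than two groups, a one-way ANOVA is a natural choice of test. The sketch below uses SciPy's f_oneway with invented growth figures purely for illustration.

```python
from scipy.stats import f_oneway

# Hypothetical growth (in cm) for plant groups receiving different fertilisers
fertiliser_a = [12.1, 13.4, 12.8, 13.0, 12.5]
fertiliser_b = [14.2, 15.1, 14.8, 15.5, 14.9]
no_fertiliser = [10.3, 9.8, 10.6, 10.1, 9.9]

f_stat, p_value = f_oneway(fertiliser_a, fertiliser_b, no_fertiliser)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A significant result indicates that at least one group mean differs; post-hoc
# tests would show which fertiliser drives the effect.
```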

Overall, experimental research design provides researchers with a powerful way to identify and measure causal relationships (and the direction of causality) between variables. However, developing a rigorous experimental design can be challenging as it’s not always easy to control all the variables in a study. This often results in smaller sample sizes , which can reduce the statistical power and generalisability of the results.

Moreover, experimental research design requires random assignment. This means that the researcher needs to assign participants to different groups or conditions in a way that each participant has an equal chance of being assigned to any group (note that this is not the same as random sampling). Doing so helps reduce the potential for bias and confounding variables. This need for random assignment can lead to ethics-related issues. For example, withholding a potentially beneficial medical treatment from a control group may be considered unethical in certain situations.

Quasi-Experimental Research Design

Quasi-experimental research design is used when the research aims involve identifying causal relations , but one cannot (or doesn’t want to) randomly assign participants to different groups (for practical or ethical reasons). Instead, with a quasi-experimental research design, the researcher relies on existing groups or pre-existing conditions to form groups for comparison.

For example, if you were studying the effects of a new teaching method on student achievement in a particular school district, you may be unable to randomly assign students to either group and instead have to choose classes or schools that already use different teaching methods. This way, you still achieve separate groups, without having to assign participants to specific groups yourself.

Naturally, quasi-experimental research designs have limitations when compared to experimental designs. Given that participant assignment is not random, it’s more difficult to confidently establish causality between variables, and, as a researcher, you have less control over other variables that may impact findings.

All that said, quasi-experimental designs can still be valuable in research contexts where random assignment is not possible and can often be undertaken on a much larger scale than experimental research, thus increasing the statistical power of the results. What’s important is that you, as the researcher, understand the limitations of the design and conduct your quasi-experiment as rigorously as possible, paying careful attention to any potential confounding variables .

The four most common quantitative research design types are descriptive, correlational, experimental and quasi-experimental.

Research Design: Qualitative Studies

There are many different research design types when it comes to qualitative studies, but here we’ll narrow our focus to explore the “Big 4”. Specifically, we’ll look at phenomenological design, grounded theory design, ethnographic design, and case study design.

Phenomenological Research Design

Phenomenological design involves exploring the meaning of lived experiences and how they are perceived by individuals. This type of research design seeks to understand people’s perspectives , emotions, and behaviours in specific situations. Here, the aim for researchers is to uncover the essence of human experience without making any assumptions or imposing preconceived ideas on their subjects.

For example, you could adopt a phenomenological design to study why cancer survivors have such varied perceptions of their lives after overcoming their disease. This could be achieved by interviewing survivors and then analysing the data using a qualitative analysis method such as thematic analysis to identify commonalities and differences.

Phenomenological research design typically involves in-depth interviews or open-ended questionnaires to collect rich, detailed data about participants’ subjective experiences. This richness is one of the key strengths of phenomenological research design but, naturally, it also has limitations. These include potential biases in data collection and interpretation and the lack of generalisability of findings to broader populations.

Grounded Theory Research Design

Grounded theory (also referred to as “GT”) aims to develop theories by continuously and iteratively analysing and comparing data collected from a relatively large number of participants in a study. It takes an inductive (bottom-up) approach, with a focus on letting the data “speak for itself”, without being influenced by preexisting theories or the researcher’s preconceptions.

As an example, let’s assume your research aims involved understanding how people cope with chronic pain from a specific medical condition, with a view to developing a theory around this. In this case, grounded theory design would allow you to explore this concept thoroughly without preconceptions about what coping mechanisms might exist. You may find that some patients prefer cognitive-behavioural therapy (CBT) while others prefer to rely on herbal remedies. Based on multiple, iterative rounds of analysis, you could then develop a theory in this regard, derived directly from the data (as opposed to other preexisting theories and models).

Grounded theory typically involves collecting data through interviews or observations and then analysing it to identify patterns and themes that emerge from the data. These emerging ideas are then validated by collecting more data until a saturation point is reached (i.e., no new information can be squeezed from the data). From that base, a theory can then be developed .

As you can see, grounded theory is ideally suited to studies where the research aims involve theory generation, especially in under-researched areas. Keep in mind though that this type of research design can be quite time-intensive, given the need for multiple rounds of data collection and analysis.


Ethnographic Research Design

Ethnographic design involves observing and studying a culture-sharing group of people in their natural setting to gain insight into their behaviours, beliefs, and values. The focus here is on observing participants in their natural environment (as opposed to a controlled environment). This typically involves the researcher spending an extended period of time with the participants in their environment, carefully observing and taking field notes .

All of this is not to say that ethnographic research design relies purely on observation. On the contrary, this design typically also involves in-depth interviews to explore participants’ views, beliefs, etc. However, unobtrusive observation is a core component of the ethnographic approach.

As an example, an ethnographer may study how different communities celebrate traditional festivals or how individuals from different generations interact with technology differently. This may involve a lengthy period of observation, combined with in-depth interviews to further explore specific areas of interest that emerge as a result of the observations that the researcher has made.

As you can probably imagine, ethnographic research design has the ability to provide rich, contextually embedded insights into the socio-cultural dynamics of human behaviour within a natural, uncontrived setting. Naturally, however, it does come with its own set of challenges, including researcher bias (since the researcher can become quite immersed in the group), participant confidentiality and, predictably, ethical complexities . All of these need to be carefully managed if you choose to adopt this type of research design.

Case Study Design

With case study research design, you, as the researcher, investigate a single individual (or a single group of individuals) to gain an in-depth understanding of their experiences, behaviours or outcomes. Unlike other research designs that are aimed at larger sample sizes, case studies offer a deep dive into the specific circumstances surrounding a person, group of people, event or phenomenon, generally within a bounded setting or context .

As an example, a case study design could be used to explore the factors influencing the success of a specific small business. This would involve diving deeply into the organisation to explore and understand what makes it tick – from marketing to HR to finance. In terms of data collection, this could include interviews with staff and management, review of policy documents and financial statements, surveying customers, etc.

While the above example is focused squarely on one organisation, it’s worth noting that case study research designs can have different variations, including single-case, multiple-case and longitudinal designs. As you can see in the example, a single-case design involves intensely examining a single entity to understand its unique characteristics and complexities. Conversely, in a multiple-case design, multiple cases are compared and contrasted to identify patterns and commonalities. Lastly, in a longitudinal case design, a single case or multiple cases are studied over an extended period of time to understand how factors develop over time.

As you can see, a case study research design is particularly useful where a deep and contextualised understanding of a specific phenomenon or issue is desired. However, this strength is also its weakness. In other words, you can’t generalise the findings from a case study to the broader population. So, keep this in mind if you’re considering going the case study route.


How To Choose A Research Design

Having worked through all of these potential research designs, you’d be forgiven for feeling a little overwhelmed and wondering, “ But how do I decide which research design to use? ”. While we could write an entire post covering that alone, here are a few factors to consider that will help you choose a suitable research design for your study.

Data type: The first determining factor is naturally the type of data you plan to be collecting – i.e., qualitative or quantitative. This may sound obvious, but we have to be clear about this – don’t try to use a quantitative research design on qualitative data (or vice versa)!

Research aim(s) and question(s): As with all methodological decisions, your research aim and research questions will heavily influence your research design. For example, if your research aims involve developing a theory from qualitative data, grounded theory would be a strong option. Similarly, if your research aims involve identifying and measuring relationships between variables, one of the experimental designs would likely be a better option.

Time: It’s essential that you consider any time constraints you have, as this will impact the type of research design you can choose. For example, if you’ve only got a month to complete your project, a lengthy design such as ethnography wouldn’t be a good fit.

Resources: Take into account the resources realistically available to you, as these need to factor into your research design choice. For example, if you require highly specialised lab equipment to execute an experimental design, you need to be sure that you’ll have access to that before you make a decision.

Keep in mind that when it comes to research, it’s important to manage your risks and play as conservatively as possible. If your entire project relies on you achieving a huge sample, having access to niche equipment or holding interviews with very difficult-to-reach participants, you’re creating risks that could kill your project. So, be sure to think through your choices carefully and make sure that you have backup plans for any existential risks. Remember that a relatively simple methodology executed well will typically earn better marks than a highly complex methodology executed poorly. To see how these factors might come together in practice, have a look at the sketch below.
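
If it helps to see the factors side by side, the toy function below encodes the rules of thumb from this section as a rough “shortlisting” step. It is only an illustration: the design labels, the aim keywords and the six-month threshold for ethnography are all assumptions made for the demo, not fixed rules.

```python
# Toy illustration of the selection factors discussed above -- a simplification, not a rulebook.

def shortlist_designs(data_type: str, aim: str, months_available: int) -> list[str]:
    """Suggest candidate research designs from data type, research aim and time budget."""
    candidates: list[str] = []
    if data_type == "qualitative":
        if aim == "theory generation":
            candidates.append("grounded theory")
        if aim == "lived experience":
            candidates.append("phenomenology")
        if aim == "culture in natural setting" and months_available >= 6:  # assumed threshold
            candidates.append("ethnography")
        candidates.append("case study")   # often viable for bounded, in-depth questions
    elif data_type == "quantitative":
        if aim == "measure relationships":
            candidates += ["correlational", "experimental", "quasi-experimental"]
        if aim == "describe current state":
            candidates.append("descriptive")
    return candidates

print(shortlist_designs("qualitative", "theory generation", months_available=4))
# -> ['grounded theory', 'case study']
```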


Recap: Key Takeaways

We’ve covered a lot of ground here. Let’s recap by looking at the key takeaways:

  • Research design refers to the overall plan, structure or strategy that guides a research project, from its conception to the final analysis of data.
  • Research designs for quantitative studies include descriptive, correlational, experimental and quasi-experimental designs.
  • Research designs for qualitative studies include phenomenological, grounded theory, ethnographic and case study designs.
  • When choosing a research design, you need to consider a variety of factors, including the type of data you’ll be working with, your research aims and questions, your time and the resources available to you.

If you need a helping hand with your research design (or any other aspect of your research), check out our private coaching services.


Organizing Your Social Sciences Research Paper

Types of Research Designs

Introduction

Before beginning your paper, you need to decide how you plan to design the study.

The research design refers to the overall strategy and analytical approach that you have chosen in order to integrate, in a coherent and logical way, the different components of the study, thus ensuring that the research problem will be thoroughly investigated. It constitutes the blueprint for the collection, measurement, and interpretation of information and data. Note that the research problem determines the type of design you choose, not the other way around!

De Vaus, D. A. Research Design in Social Research . London: SAGE, 2001; Trochim, William M.K. Research Methods Knowledge Base. 2006.

General Structure and Writing Style

The function of a research design is to ensure that the evidence obtained enables you to effectively address the research problem logically and as unambiguously as possible. In social sciences research, obtaining information relevant to the research problem generally entails specifying the type of evidence needed to test the underlying assumptions of a theory, to evaluate a program, or to accurately describe and assess meaning related to an observable phenomenon.

With this in mind, a common mistake made by researchers is that they begin their investigations before they have thought critically about what information is required to address the research problem. Without attending to these design issues beforehand, the overall research problem will not be adequately addressed and any conclusions drawn will run the risk of being weak and unconvincing. As a consequence, the overall validity of the study will be undermined.

The length and complexity of describing the research design in your paper can vary considerably, but any well-developed description will achieve the following:

  • Identify the research problem clearly and justify its selection, particularly in relation to any valid alternative designs that could have been used,
  • Review and synthesize previously published literature associated with the research problem,
  • Clearly and explicitly specify hypotheses [i.e., research questions] central to the problem,
  • Effectively describe the information and/or data which will be necessary for an adequate testing of the hypotheses and explain how such information and/or data will be obtained, and
  • Describe the methods of analysis to be applied to the data in determining whether or not the hypotheses are true or false.

The research design is usually incorporated into the introduction of your paper. You can obtain an overall sense of what to do by reviewing studies that have utilized the same research design [e.g., using a case study approach]. This can help you develop an outline to follow for your own paper.

NOTE : Use the SAGE Research Methods Online and Cases and the SAGE Research Methods Videos databases to search for scholarly resources on how to apply specific research designs and methods . The Research Methods Online database contains links to more than 175,000 pages of SAGE publisher's book, journal, and reference content on quantitative, qualitative, and mixed research methodologies. Also included is a collection of case studies of social research projects that can be used to help you better understand abstract or complex methodological concepts. The Research Methods Videos database contains hours of tutorials, interviews, video case studies, and mini-documentaries covering the entire research process.

Creswell, John W. and J. David Creswell. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches . 5th edition. Thousand Oaks, CA: Sage, 2018; De Vaus, D. A. Research Design in Social Research . London: SAGE, 2001; Gorard, Stephen. Research Design: Creating Robust Approaches for the Social Sciences . Thousand Oaks, CA: Sage, 2013; Leedy, Paul D. and Jeanne Ellis Ormrod. Practical Research: Planning and Design . Tenth edition. Boston, MA: Pearson, 2013; Vogt, W. Paul, Dianna C. Gardner, and Lynne M. Haeffele. When to Use What Research Design . New York: Guilford, 2012.

Action Research Design

Definition and Purpose

The essentials of action research design follow a characteristic cycle whereby initially an exploratory stance is adopted, where an understanding of a problem is developed and plans are made for some form of interventionary strategy. Then the intervention is carried out [the "action" in action research] during which time, pertinent observations are collected in various forms. The new interventional strategies are carried out, and this cyclic process repeats, continuing until a sufficient understanding of [or a valid implementation solution for] the problem is achieved. The protocol is iterative or cyclical in nature and is intended to foster deeper understanding of a given situation, starting with conceptualizing and particularizing the problem and moving through several interventions and evaluations.

What do these studies tell you?

  • This is a collaborative and adaptive research design that lends itself to use in work or community situations.
  • Design focuses on pragmatic and solution-driven research outcomes rather than testing theories.
  • When practitioners use action research, it has the potential to increase the amount they learn consciously from their experience; the action research cycle can be regarded as a learning cycle.
  • Action research studies often have direct and obvious relevance to improving practice and advocating for change.
  • There are no hidden controls or preemption of direction by the researcher.

What these studies don't tell you?

  • It is harder to do than conducting conventional research because the researcher takes on responsibilities of advocating for change as well as for researching the topic.
  • Action research is much harder to write up because it is less likely that you can use a standard format to report your findings effectively [i.e., data is often in the form of stories or observation].
  • Personal over-involvement of the researcher may bias research results.
  • The cyclic nature of action research to achieve its twin outcomes of action [e.g. change] and research [e.g. understanding] is time-consuming and complex to conduct.
  • Advocating for change usually requires buy-in from study participants.

Coghlan, David and Mary Brydon-Miller. The Sage Encyclopedia of Action Research . Thousand Oaks, CA:  Sage, 2014; Efron, Sara Efrat and Ruth Ravid. Action Research in Education: A Practical Guide . New York: Guilford, 2013; Gall, Meredith. Educational Research: An Introduction . Chapter 18, Action Research. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007; Gorard, Stephen. Research Design: Creating Robust Approaches for the Social Sciences . Thousand Oaks, CA: Sage, 2013; Kemmis, Stephen and Robin McTaggart. “Participatory Action Research.” In Handbook of Qualitative Research . Norman Denzin and Yvonna S. Lincoln, eds. 2nd ed. (Thousand Oaks, CA: SAGE, 2000), pp. 567-605; McNiff, Jean. Writing and Doing Action Research . London: Sage, 2014; Reason, Peter and Hilary Bradbury. Handbook of Action Research: Participative Inquiry and Practice . Thousand Oaks, CA: SAGE, 2001.

Case Study Design

A case study is an in-depth study of a particular research problem rather than a sweeping statistical survey or comprehensive comparative inquiry. It is often used to narrow down a very broad field of research into one or a few easily researchable examples. The case study research design is also useful for testing whether a specific theory and model actually applies to phenomena in the real world. It is a useful design when not much is known about an issue or phenomenon.

  • Approach excels at bringing us to an understanding of a complex issue through detailed contextual analysis of a limited number of events or conditions and their relationships.
  • A researcher using a case study design can apply a variety of methodologies and rely on a variety of sources to investigate a research problem.
  • Design can extend experience or add strength to what is already known through previous research.
  • Social scientists, in particular, make wide use of this research design to examine contemporary real-life situations and provide the basis for the application of concepts and theories and the extension of methodologies.
  • The design can provide detailed descriptions of specific and rare cases.
  • A single or small number of cases offers little basis for establishing reliability or generalizing the findings to a wider population of people, places, or things.
  • Intense exposure to the study of a case may bias a researcher's interpretation of the findings.
  • Design does not facilitate assessment of cause and effect relationships.
  • Vital information may be missing, making the case hard to interpret.
  • The case may not be representative or typical of the larger problem being investigated.
  • If the criterion for selecting a case is that it represents a very unusual or unique phenomenon or problem for study, then your interpretation of the findings can only apply to that particular case.

Case Studies. Writing@CSU. Colorado State University; Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 4, Flexible Methods: Case Study Design. 2nd ed. New York: Columbia University Press, 1999; Gerring, John. “What Is a Case Study and What Is It Good for?” American Political Science Review 98 (May 2004): 341-354; Greenhalgh, Trisha, editor. Case Study Evaluation: Past, Present and Future Challenges . Bingley, UK: Emerald Group Publishing, 2015; Mills, Albert J. , Gabrielle Durepos, and Eiden Wiebe, editors. Encyclopedia of Case Study Research . Thousand Oaks, CA: SAGE Publications, 2010; Stake, Robert E. The Art of Case Study Research . Thousand Oaks, CA: SAGE, 1995; Yin, Robert K. Case Study Research: Design and Theory . Applied Social Research Methods Series, no. 5. 3rd ed. Thousand Oaks, CA: SAGE, 2003.

Causal Design

Causality studies may be thought of as understanding a phenomenon in terms of conditional statements in the form, “If X, then Y.” This type of research is used to measure what impact a specific change will have on existing norms and assumptions. Most social scientists seek causal explanations that reflect tests of hypotheses. Causal effect (nomothetic perspective) occurs when variation in one phenomenon, an independent variable, leads to or results, on average, in variation in another phenomenon, the dependent variable.

Conditions necessary for determining causality:

  • Empirical association -- a valid conclusion is based on finding an association between the independent variable and the dependent variable.
  • Appropriate time order -- to conclude that causation was involved, one must see that cases were exposed to variation in the independent variable before variation in the dependent variable.
  • Nonspuriousness -- a relationship between two variables that is not due to variation in a third variable.
  • Causality research designs assist researchers in understanding why the world works the way it does through the process of proving a causal link between variables and by the process of eliminating other possibilities.
  • Replication is possible.
  • There is greater confidence the study has internal validity due to the systematic subject selection and equity of groups being compared.
  • Not all relationships are causal! The possibility always exists that, by sheer coincidence, two unrelated events appear to be related [e.g., Punxsutawney Phil could accurately predict the duration of winter for five consecutive years but, the fact remains, he's just a big, furry rodent].
  • Conclusions about causal relationships are difficult to determine due to a variety of extraneous and confounding variables that exist in a social environment. This means causality can only be inferred, never proven.
  • If two variables are correlated, the cause must come before the effect. However, even though two variables might be causally related, it can sometimes be difficult to determine which variable comes first and, therefore, to establish which variable is the actual cause and which is the actual effect.
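
To illustrate the nonspuriousness condition listed above, here is a small simulated example (a sketch, not a full causal-inference workflow): a third variable Z drives both X and Y, so the raw X–Y correlation is sizeable, but it largely disappears once Z is held constant. All the data and coefficients are invented for the demonstration.

```python
# Illustrative sketch: a spurious X-Y association driven by a confounder Z (simulated data).
import numpy as np

rng = np.random.default_rng(42)
n = 5_000
z = rng.normal(size=n)              # confounder
x = 0.8 * z + rng.normal(size=n)    # "independent" variable, driven by z
y = 0.8 * z + rng.normal(size=n)    # "dependent" variable, also driven by z

def partial_corr(a, b, control):
    """Correlation of a and b after regressing out the control variable."""
    resid_a = a - np.polyval(np.polyfit(control, a, 1), control)
    resid_b = b - np.polyval(np.polyfit(control, b, 1), control)
    return np.corrcoef(resid_a, resid_b)[0, 1]

print("raw r(x, y):      ", round(np.corrcoef(x, y)[0, 1], 2))   # noticeably positive
print("r(x, y | z held): ", round(partial_corr(x, y, z), 2))     # close to zero
```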

Beach, Derek and Rasmus Brun Pedersen. Causal Case Study Methods: Foundations and Guidelines for Comparing, Matching, and Tracing . Ann Arbor, MI: University of Michigan Press, 2016; Bachman, Ronet. The Practice of Research in Criminology and Criminal Justice . Chapter 5, Causation and Research Designs. 3rd ed. Thousand Oaks, CA: Pine Forge Press, 2007; Brewer, Ernest W. and Jennifer Kubn. “Causal-Comparative Design.” In Encyclopedia of Research Design . Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 125-132; Causal Research Design: Experimentation. Anonymous SlideShare Presentation; Gall, Meredith. Educational Research: An Introduction . Chapter 11, Nonexperimental Research: Correlational Designs. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007; Trochim, William M.K. Research Methods Knowledge Base. 2006.

Cohort Design

Often used in the medical sciences, but also found in the applied social sciences, a cohort study generally refers to a study conducted over a period of time involving members of a population which the subject or representative member comes from, and who are united by some commonality or similarity. Using a quantitative framework, a cohort study makes note of statistical occurrence within a specialized subgroup, united by same or similar characteristics that are relevant to the research problem being investigated, rather than studying statistical occurrence within the general population. Using a qualitative framework, cohort studies generally gather data using methods of observation. Cohorts can be either "open" or "closed."

  • Open Cohort Studies [dynamic populations, such as the population of Los Angeles] involve a population that is defined just by the state of being a part of the study in question (and being monitored for the outcome). Date of entry and exit from the study is individually defined; therefore, the size of the study population is not constant. In open cohort studies, researchers can only calculate rate-based data, such as incidence rates and variants thereof.
  • Closed Cohort Studies [static populations, such as patients entered into a clinical trial] involve participants who enter into the study at one defining point in time and where it is presumed that no new participants can enter the cohort. Given this, the number of study participants remains constant (or can only decrease).
  • The use of cohorts is often mandatory because a randomized control study may be unethical. For example, you cannot deliberately expose people to asbestos, you can only study its effects on those who have already been exposed. Research that measures risk factors often relies upon cohort designs.
  • Because cohort studies measure potential causes before the outcome has occurred, they can demonstrate that these “causes” preceded the outcome, thereby avoiding the debate as to which is the cause and which is the effect.
  • Cohort analysis is highly flexible and can provide insight into effects over time and related to a variety of different types of changes [e.g., social, cultural, political, economic, etc.].
  • Either original data or secondary data can be used in this design.
  • In cases where a comparative analysis of two cohorts is made [e.g., studying the effects of one group exposed to asbestos and one that has not], a researcher cannot control for all other factors that might differ between the two groups. These factors are known as confounding variables.
  • Cohort studies can end up taking a long time to complete if the researcher must wait for the conditions of interest to develop within the group. This also increases the chance that key variables change during the course of the study, potentially impacting the validity of the findings.
  • Due to the lack of randomization in the cohort design, its external validity is lower than that of study designs where the researcher randomly assigns participants.
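
As a small illustration of the rate-based measures mentioned above, the sketch below computes an incidence rate from person-time at risk. The follow-up records are entirely made up for the demonstration.

```python
# Illustrative sketch: incidence rate from person-time in a hypothetical open cohort.
# Each record is (years of follow-up contributed, whether the outcome occurred).
follow_up = [
    (4.0, False), (2.5, True), (6.0, False), (1.0, True),
    (5.5, False), (3.0, False), (0.5, True), (7.0, False),
]

person_years = sum(years for years, _ in follow_up)
events = sum(1 for _, outcome in follow_up if outcome)

incidence_rate = events / person_years
print(f"{events} events over {person_years} person-years")
print(f"Incidence rate: {incidence_rate:.3f} per person-year "
      f"({incidence_rate * 1000:.1f} per 1,000 person-years)")
```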

Healy P, Devane D. “Methodological Considerations in Cohort Study Designs.” Nurse Researcher 18 (2011): 32-36; Glenn, Norval D, editor. Cohort Analysis . 2nd edition. Thousand Oaks, CA: Sage, 2005; Levin, Kate Ann. Study Design IV: Cohort Studies. Evidence-Based Dentistry 7 (2003): 51–52; Payne, Geoff. “Cohort Study.” In The SAGE Dictionary of Social Research Methods . Victor Jupp, editor. (Thousand Oaks, CA: Sage, 2006), pp. 31-33; Study Design 101. Himmelfarb Health Sciences Library. George Washington University, November 2011; Cohort Study. Wikipedia.

Cross-Sectional Design

Cross-sectional research designs have three distinctive features: no time dimension; a reliance on existing differences rather than change following intervention; and, groups are selected based on existing differences rather than random allocation. The cross-sectional design can only measure differences between or from among a variety of people, subjects, or phenomena rather than a process of change. As such, researchers using this design can only employ a relatively passive approach to making causal inferences based on findings.

  • Cross-sectional studies provide a clear 'snapshot' of the outcome and the characteristics associated with it, at a specific point in time.
  • Unlike an experimental design, where there is an active intervention by the researcher to produce and measure change or to create differences, cross-sectional designs focus on studying and drawing inferences from existing differences between people, subjects, or phenomena.
  • Entails collecting data at and concerning one point in time. While longitudinal studies involve taking multiple measures over an extended period of time, cross-sectional research is focused on finding relationships between variables at one moment in time.
  • Groups identified for study are purposely selected based upon existing differences in the sample rather than seeking random sampling.
  • Cross-sectional studies can use data from a large number of subjects and, unlike observational studies, are not geographically bound.
  • Can estimate prevalence of an outcome of interest because the sample is usually taken from the whole population.
  • Because cross-sectional designs generally use survey techniques to gather data, they are relatively inexpensive and take up little time to conduct.
  • Finding people, subjects, or phenomena to study that are very similar except in one specific variable can be difficult.
  • Results are static and time bound and, therefore, give no indication of a sequence of events or reveal historical or temporal contexts.
  • Studies cannot be utilized to establish cause and effect relationships.
  • This design only provides a snapshot of analysis so there is always the possibility that a study could have differing results if another time-frame had been chosen.
  • There is no follow-up to the findings.
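
Analytically, a cross-sectional study usually boils down to snapshot estimates such as prevalence, or comparisons between existing groups at a single point in time. A minimal sketch with fabricated survey records (the variables and values are invented):

```python
# Illustrative sketch: prevalence and a simple group comparison from one-time survey data.
import pandas as pd

survey = pd.DataFrame({
    "smoker":    [True, False, False, True, False, True, False, False],
    "has_cough": [True, False, True,  True, False, True, False, False],
})

prevalence = survey["has_cough"].mean()
print(f"Prevalence of cough at the time of the survey: {prevalence:.0%}")

# Difference between existing groups (no intervention, no follow-up):
print(survey.groupby("smoker")["has_cough"].mean())
```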

Bethlehem, Jelke. "7: Cross-sectional Research." In Research Methodology in the Social, Behavioural and Life Sciences . Herman J Adèr and Gideon J Mellenbergh, editors. (London, England: Sage, 1999), pp. 110-43; Bourque, Linda B. “Cross-Sectional Design.” In  The SAGE Encyclopedia of Social Science Research Methods . Michael S. Lewis-Beck, Alan Bryman, and Tim Futing Liao. (Thousand Oaks, CA: 2004), pp. 230-231; Hall, John. “Cross-Sectional Survey Design.” In Encyclopedia of Survey Research Methods . Paul J. Lavrakas, ed. (Thousand Oaks, CA: Sage, 2008), pp. 173-174; Helen Barratt, Maria Kirwan. Cross-Sectional Studies: Design Application, Strengths and Weaknesses of Cross-Sectional Studies. Healthknowledge, 2009. Cross-Sectional Study. Wikipedia.

Descriptive Design

Descriptive research designs help provide answers to the questions of who, what, when, where, and how associated with a particular research problem; a descriptive study cannot conclusively ascertain answers to why. Descriptive research is used to obtain information concerning the current status of the phenomena and to describe "what exists" with respect to variables or conditions in a situation.

  • The subject is being observed in a completely natural and unchanged environment. True experiments, whilst giving analyzable data, often adversely influence the normal behavior of the subject [a.k.a., the Heisenberg effect whereby measurements of certain systems cannot be made without affecting the systems].
  • Descriptive research is often used as a pre-cursor to more quantitative research designs with the general overview giving some valuable pointers as to what variables are worth testing quantitatively.
  • If the limitations are understood, they can be a useful tool in developing a more focused study.
  • Descriptive studies can yield rich data that lead to important recommendations in practice.
  • Approach collects a large amount of data for detailed analysis.
  • The results from descriptive research cannot be used to discover a definitive answer or to disprove a hypothesis.
  • Because descriptive designs often utilize observational methods [as opposed to quantitative methods], the results cannot be replicated.
  • The descriptive function of research is heavily dependent on instrumentation for measurement and observation.
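
Because descriptive designs aim to characterize "what exists" rather than test hypotheses, the analysis typically amounts to summary statistics and frequency counts. A minimal sketch with invented survey data:

```python
# Illustrative sketch: describing "what exists" with summary statistics (invented data).
import pandas as pd

responses = pd.DataFrame({
    "age":          [21, 34, 29, 45, 38, 52, 27, 31],
    "dept":         ["HR", "IT", "IT", "Sales", "HR", "IT", "Sales", "HR"],
    "satisfaction": [3, 4, 5, 2, 4, 3, 5, 4],   # 1-5 scale
})

print(responses[["age", "satisfaction"]].describe())   # central tendency and spread
print(responses["dept"].value_counts())                # frequencies by group
```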

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 5, Flexible Methods: Descriptive Research. 2nd ed. New York: Columbia University Press, 1999; Given, Lisa M. "Descriptive Research." In Encyclopedia of Measurement and Statistics . Neil J. Salkind and Kristin Rasmussen, editors. (Thousand Oaks, CA: Sage, 2007), pp. 251-254; McNabb, Connie. Descriptive Research Methodologies. Powerpoint Presentation; Shuttleworth, Martyn. Descriptive Research Design, September 26, 2008; Erickson, G. Scott. "Descriptive Research Design." In New Methods of Market Research and Analysis . (Northampton, MA: Edward Elgar Publishing, 2017), pp. 51-77; Sahin, Sagufta, and Jayanta Mete. "A Brief Study on Descriptive Research: Its Nature and Application in Social Science." International Journal of Research and Analysis in Humanities 1 (2021): 11; K. Swatzell and P. Jennings. “Descriptive Research: The Nuts and Bolts.” Journal of the American Academy of Physician Assistants 20 (2007), pp. 55-56; Kane, E. Doing Your Own Research: Basic Descriptive Research in the Social Sciences and Humanities . London: Marion Boyars, 1985.

Experimental Design

A blueprint of the procedure that enables the researcher to maintain control over all factors that may affect the result of an experiment. In doing this, the researcher attempts to determine or predict what may occur. Experimental research is often used where there is time priority in a causal relationship (cause precedes effect), there is consistency in a causal relationship (a cause will always lead to the same effect), and the magnitude of the correlation is great. The classic experimental design specifies an experimental group and a control group. The independent variable is administered to the experimental group and not to the control group, and both groups are measured on the same dependent variable. Subsequent experimental designs have used more groups and more measurements over longer periods. True experiments must have control, randomization, and manipulation.

  • Experimental research allows the researcher to control the situation. In so doing, it allows researchers to answer the question, “What causes something to occur?”
  • Permits the researcher to identify cause and effect relationships between variables and to distinguish placebo effects from treatment effects.
  • Experimental research designs support the ability to limit alternative explanations and to infer direct causal relationships in the study.
  • Approach provides the highest level of evidence for single studies.
  • The design is artificial, and results may not generalize well to the real world.
  • The artificial settings of experiments may alter the behaviors or responses of participants.
  • Experimental designs can be costly if special equipment or facilities are needed.
  • Some research problems cannot be studied using an experiment because of ethical or technical reasons.
  • Difficult to apply ethnographic and other qualitative methods to experimentally designed studies.
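
The control-group logic described above can be sketched in a few lines: randomly assign subjects, administer the treatment to the experimental group only, and compare the two groups on the dependent variable. Everything below is simulated, and the 5-point treatment effect is an assumption made purely for the demonstration.

```python
# Illustrative sketch of the classic two-group experiment: randomize, manipulate, compare.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 100
treated = rng.permutation([True] * (n // 2) + [False] * (n // 2))  # random assignment
baseline = rng.normal(loc=50, scale=10, size=n)                    # pre-existing variation
outcome = baseline + np.where(treated, 5.0, 0.0)                   # manipulation: +5 for treatment

t_stat, p_value = stats.ttest_ind(outcome[treated], outcome[~treated])
print(f"treatment mean = {outcome[treated].mean():.1f}, "
      f"control mean = {outcome[~treated].mean():.1f}, p = {p_value:.3f}")
```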

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 7, Flexible Methods: Experimental Research. 2nd ed. New York: Columbia University Press, 1999; Chapter 2: Research Design, Experimental Designs. School of Psychology, University of New England, 2000; Chow, Siu L. "Experimental Design." In Encyclopedia of Research Design . Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 448-453; "Experimental Design." In Social Research Methods . Nicholas Walliman, editor. (London, England: Sage, 2006), pp, 101-110; Experimental Research. Research Methods by Dummies. Department of Psychology. California State University, Fresno, 2006; Kirk, Roger E. Experimental Design: Procedures for the Behavioral Sciences . 4th edition. Thousand Oaks, CA: Sage, 2013; Trochim, William M.K. Experimental Design. Research Methods Knowledge Base. 2006; Rasool, Shafqat. Experimental Research. Slideshare presentation.

Exploratory Design

An exploratory design is conducted about a research problem when there are few or no earlier studies to refer to or rely upon to predict an outcome. The focus is on gaining insights and familiarity for later investigation, or the design is undertaken when research problems are in a preliminary stage of investigation. Exploratory designs are often used to establish an understanding of how best to proceed in studying an issue or what methodology would effectively apply to gathering information about the issue.

The goals of exploratory research are intended to produce the following possible insights:

  • Familiarity with basic details, settings, and concerns.
  • Well grounded picture of the situation being developed.
  • Generation of new ideas and assumptions.
  • Development of tentative theories or hypotheses.
  • Determination about whether a study is feasible in the future.
  • Issues get refined for more systematic investigation and formulation of new research questions.
  • Direction for future research and techniques get developed.
  • Design is a useful approach for gaining background information on a particular topic.
  • Exploratory research is flexible and can address research questions of all types (what, why, how).
  • Provides an opportunity to define new terms and clarify existing concepts.
  • Exploratory research is often used to generate formal hypotheses and develop more precise research problems.
  • In the policy arena or applied to practice, exploratory studies help establish research priorities and where resources should be allocated.
  • Exploratory research generally utilizes small sample sizes and, thus, findings are typically not generalizable to the population at large.
  • The exploratory nature of the research inhibits the ability to draw definitive conclusions about the findings; exploratory studies provide insight, but not firm answers.
  • The research process underpinning exploratory studies is flexible but often unstructured, leading to only tentative results that have limited value to decision-makers.
  • Design lacks rigorous standards applied to methods of data gathering and analysis because one of the areas for exploration could be to determine what method or methodologies could best fit the research problem.

Cuthill, Michael. “Exploratory Research: Citizen Participation, Local Government, and Sustainable Development in Australia.” Sustainable Development 10 (2002): 79-89; Streb, Christoph K. "Exploratory Case Study." In Encyclopedia of Case Study Research . Albert J. Mills, Gabrielle Durepos and Eiden Wiebe, editors. (Thousand Oaks, CA: Sage, 2010), pp. 372-374; Taylor, P. J., G. Catalano, and D.R.F. Walker. “Exploratory Analysis of the World City Network.” Urban Studies 39 (December 2002): 2377-2394; Exploratory Research. Wikipedia.

Field Research Design

Sometimes referred to as ethnography or participant observation, designs around field research encompass a variety of interpretative procedures [e.g., observation and interviews] rooted in qualitative approaches to studying people individually or in groups while inhabiting their natural environment, as opposed to using survey instruments or other forms of impersonal methods of data gathering. Information acquired from observational research takes the form of “field notes” that involve documenting what the researcher actually sees and hears while in the field. Findings do not consist of conclusive statements derived from numbers and statistics because field research involves analysis of words and observations of behavior. Conclusions, therefore, are developed from an interpretation of findings that reveal overriding themes, concepts, and ideas.

  • Field research is often necessary to fill gaps in understanding the research problem applied to local conditions or to specific groups of people that cannot be ascertained from existing data.
  • The research helps contextualize already known information about a research problem, thereby facilitating ways to assess the origins, scope, and scale of a problem and to gauge the causes, consequences, and means to resolve an issue based on deliberate interaction with people in their natural inhabited spaces.
  • Enables the researcher to corroborate or confirm data by gathering additional information that supports or refutes findings reported in prior studies of the topic.
  • Because the researcher is embedded in the field, they are better able to make observations or ask questions that reflect the specific cultural context of the setting being investigated.
  • Observing the local reality offers the opportunity to gain new perspectives or obtain unique data that challenges existing theoretical propositions or long-standing assumptions found in the literature.

What these studies don't tell you

  • A field research study requires extensive time and resources to carry out the multiple steps involved with preparing for the gathering of information, including for example, examining background information about the study site, obtaining permission to access the study site, and building trust and rapport with subjects.
  • Requires a commitment to staying engaged in the field to ensure that you can adequately document events and behaviors as they unfold.
  • The unpredictable nature of fieldwork means that researchers can never fully control the process of data gathering. They must maintain a flexible approach to studying the setting because events and circumstances can change quickly or unexpectedly.
  • Findings can be difficult to interpret and verify without access to documents and other source materials that help to enhance the credibility of information obtained from the field  [i.e., the act of triangulating the data].
  • Linking the research problem to the selection of study participants inhabiting their natural environment is critical. However, this specificity limits the ability to generalize findings to different situations or in other contexts or to infer courses of action applied to other settings or groups of people.
  • The reporting of findings must take into account how the researcher themselves may have inadvertently affected respondents and their behaviors.

Historical Design

The purpose of a historical research design is to collect, verify, and synthesize evidence from the past to establish facts that defend or refute a hypothesis. It uses secondary sources and a variety of primary documentary evidence, such as diaries, official records, reports, archives, and non-textual information [maps, pictures, audio and visual recordings]. The limitation is that the sources must be both authentic and valid.

  • The historical research design is unobtrusive; the act of research does not affect the results of the study.
  • The historical approach is well suited for trend analysis.
  • Historical records can add important contextual background required to more fully understand and interpret a research problem.
  • There is often no possibility of researcher-subject interaction that could affect the findings.
  • Historical sources can be used over and over to study different research problems or to replicate a previous study.
  • The ability to fulfill the aims of your research is directly related to the amount and quality of documentation available to understand the research problem.
  • Since historical research relies on data from the past, there is no way to manipulate it to control for contemporary contexts.
  • Interpreting historical sources can be very time consuming.
  • The sources of historical materials must be archived consistently to ensure access. This may be especially challenging for digital or online-only sources.
  • Original authors bring their own perspectives and biases to the interpretation of past events and these biases are more difficult to ascertain in historical resources.
  • Due to the lack of control over external variables, historical research is very weak with regard to the demands of internal validity.
  • It is rare that the entirety of historical documentation needed to fully address a research problem is available for interpretation, therefore, gaps need to be acknowledged.

Howell, Martha C. and Walter Prevenier. From Reliable Sources: An Introduction to Historical Methods . Ithaca, NY: Cornell University Press, 2001; Lundy, Karen Saucier. "Historical Research." In The Sage Encyclopedia of Qualitative Research Methods . Lisa M. Given, editor. (Thousand Oaks, CA: Sage, 2008), pp. 396-400; Marius, Richard. and Melvin E. Page. A Short Guide to Writing about History . 9th edition. Boston, MA: Pearson, 2015; Savitt, Ronald. “Historical Research in Marketing.” Journal of Marketing 44 (Autumn, 1980): 52-58;  Gall, Meredith. Educational Research: An Introduction . Chapter 16, Historical Research. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007.

Longitudinal Design

A longitudinal study follows the same sample over time and makes repeated observations. For example, with longitudinal surveys, the same group of people is interviewed at regular intervals, enabling researchers to track changes over time and to relate them to variables that might explain why the changes occur. Longitudinal research designs describe patterns of change and help establish the direction and magnitude of causal relationships. Measurements are taken on each variable over two or more distinct time periods. This allows the researcher to measure change in variables over time. It is a type of observational study sometimes referred to as a panel study.

  • Longitudinal data facilitate the analysis of the duration of a particular phenomenon.
  • Enables survey researchers to get close to the kinds of causal explanations usually attainable only with experiments.
  • The design permits the measurement of differences or change in a variable from one period to another [i.e., the description of patterns of change over time].
  • Longitudinal studies facilitate the prediction of future outcomes based upon earlier factors.
  • The data collection method may change over time.
  • Maintaining the integrity of the original sample can be difficult over an extended period of time.
  • It can be difficult to show more than one variable at a time.
  • This design often needs qualitative research data to explain fluctuations in the results.
  • A longitudinal research design assumes present trends will continue unchanged.
  • It can take a long period of time to gather results.
  • There is a need to have a large sample size and accurate sampling to reach representativeness.
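
Because the same subjects are measured repeatedly, the basic longitudinal quantity is within-subject change from one wave to the next. A minimal sketch with fabricated panel data (subjects, waves and scores are all invented):

```python
# Illustrative sketch: within-subject change across waves of a fabricated panel study.
import pandas as pd

panel = pd.DataFrame({
    "subject": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "wave":    [1, 2, 3, 1, 2, 3, 1, 2, 3],
    "score":   [10, 12, 15, 8, 8, 9, 14, 13, 16],
})

wide = panel.pivot(index="subject", columns="wave", values="score")
wide["change_w1_to_w3"] = wide[3] - wide[1]
print(wide)
print("Mean change from wave 1 to wave 3:", wide["change_w1_to_w3"].mean())
```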

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 6, Flexible Methods: Relational and Longitudinal Research. 2nd ed. New York: Columbia University Press, 1999; Forgues, Bernard, and Isabelle Vandangeon-Derumez. "Longitudinal Analyses." In Doing Management Research . Raymond-Alain Thiétart and Samantha Wauchope, editors. (London, England: Sage, 2001), pp. 332-351; Kalaian, Sema A. and Rafa M. Kasim. "Longitudinal Studies." In Encyclopedia of Survey Research Methods . Paul J. Lavrakas, ed. (Thousand Oaks, CA: Sage, 2008), pp. 440-441; Menard, Scott, editor. Longitudinal Research . Thousand Oaks, CA: Sage, 2002; Ployhart, Robert E. and Robert J. Vandenberg. "Longitudinal Research: The Theory, Design, and Analysis of Change.” Journal of Management 36 (January 2010): 94-120; Longitudinal Study. Wikipedia.

Meta-Analysis Design

Meta-analysis is an analytical methodology designed to systematically evaluate and summarize the results from a number of individual studies, thereby increasing the overall sample size and the ability of the researcher to study effects of interest. The purpose is not simply to summarize existing knowledge, but to develop a new understanding of a research problem using synoptic reasoning. The main objectives of meta-analysis include analyzing differences in the results among studies and increasing the precision by which effects are estimated. A well-designed meta-analysis depends upon strict adherence to the criteria used for selecting studies and the availability of information in each study to properly analyze its findings. Lack of information can severely limit the type of analyses and conclusions that can be reached. In addition, the more dissimilarity there is in the results among individual studies [heterogeneity], the more difficult it is to justify interpretations that govern a valid synopsis of results. A meta-analysis needs to fulfill the following requirements to ensure the validity of your findings:

  • Clearly defined description of objectives, including precise definitions of the variables and outcomes that are being evaluated;
  • A well-reasoned and well-documented justification for identification and selection of the studies;
  • Assessment and explicit acknowledgment of any researcher bias in the identification and selection of those studies;
  • Description and evaluation of the degree of heterogeneity among the sample size of studies reviewed; and,
  • Justification of the techniques used to evaluate the studies.
  • Can be an effective strategy for determining gaps in the literature.
  • Provides a means of reviewing research published about a particular topic over an extended period of time and from a variety of sources.
  • Is useful in clarifying what policy or programmatic actions can be justified on the basis of analyzing research results from multiple studies.
  • Provides a method for overcoming small sample sizes in individual studies that previously may have had little relationship to each other.
  • Can be used to generate new hypotheses or highlight research problems for future studies.
  • Small violations in defining the criteria used for content analysis can lead to difficult-to-interpret and/or meaningless findings.
  • A large sample size can yield reliable, but not necessarily valid, results.
  • A lack of uniformity regarding, for example, the type of literature reviewed, how methods are applied, and how findings are measured within the sample of studies you are analyzing, can make the process of synthesis difficult to perform.
  • Depending on the sample size, the process of reviewing and synthesizing multiple studies can be very time consuming.
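
At its core, the arithmetic of a meta-analysis is to pool each study's effect estimate, weighting by its precision (the inverse of its variance). The sketch below shows simple fixed-effect inverse-variance pooling with invented study results; a real meta-analysis would also assess heterogeneity and publication bias.

```python
# Illustrative sketch: fixed-effect inverse-variance pooling of invented study estimates.
import math

# (effect estimate, standard error) for each included study -- fabricated values
studies = [(0.30, 0.10), (0.45, 0.15), (0.20, 0.08), (0.35, 0.12)]

weights = [1 / se**2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect: {pooled:.3f} "
      f"(95% CI {pooled - 1.96 * pooled_se:.3f} to {pooled + 1.96 * pooled_se:.3f})")
```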

Beck, Lewis W. "The Synoptic Method." The Journal of Philosophy 36 (1939): 337-345; Cooper, Harris, Larry V. Hedges, and Jeffrey C. Valentine, eds. The Handbook of Research Synthesis and Meta-Analysis . 2nd edition. New York: Russell Sage Foundation, 2009; Guzzo, Richard A., Susan E. Jackson and Raymond A. Katzell. “Meta-Analysis Analysis.” In Research in Organizational Behavior , Volume 9. (Greenwich, CT: JAI Press, 1987), pp 407-442; Lipsey, Mark W. and David B. Wilson. Practical Meta-Analysis . Thousand Oaks, CA: Sage Publications, 2001; Study Design 101. Meta-Analysis. The Himmelfarb Health Sciences Library, George Washington University; Timulak, Ladislav. “Qualitative Meta-Analysis.” In The SAGE Handbook of Qualitative Data Analysis . Uwe Flick, editor. (Los Angeles, CA: Sage, 2013), pp. 481-495; Walker, Esteban, Adrian V. Hernandez, and Micheal W. Kattan. "Meta-Analysis: It's Strengths and Limitations." Cleveland Clinic Journal of Medicine 75 (June 2008): 431-439.

Mixed-Method Design

  • Narrative and non-textual information can add meaning to numeric data, while numeric data can add precision to narrative and non-textual information.
  • Can utilize existing data while at the same time generating and testing a grounded theory approach to describe and explain the phenomenon under study.
  • A broader, more complex research problem can be investigated because the researcher is not constrained by using only one method.
  • The strengths of one method can be used to overcome the inherent weaknesses of another method.
  • Can provide stronger, more robust evidence to support a conclusion or set of recommendations.
  • May generate new knowledge and insights or uncover hidden patterns and relationships that a single methodological approach might not reveal.
  • Produces more complete knowledge and understanding of the research problem that can be used to increase the generalizability of findings applied to theory or practice.
  • A researcher must be proficient in understanding how to apply multiple methods to investigating a research problem as well as be proficient in optimizing how to design a study that coherently melds them together.
  • Can increase the likelihood of conflicting results or ambiguous findings that inhibit drawing a valid conclusion or setting forth a recommended course of action [e.g., sample interview responses do not support existing statistical data].
  • Because the research design can be very complex, reporting the findings requires a well-organized narrative, clear writing style, and precise word choice.
  • Design invites collaboration among experts. However, merging different investigative approaches and writing styles requires more attention to the overall research process than studies conducted using only one methodological paradigm.
  • Concurrent merging of quantitative and qualitative research requires greater attention to having adequate sample sizes, using comparable samples, and applying a consistent unit of analysis. For sequential designs where one phase of qualitative research builds on the quantitative phase or vice versa, decisions about what results from the first phase to use in the next phase, the choice of samples and estimating reasonable sample sizes for both phases, and the interpretation of results from both phases can be difficult.
  • Due to multiple forms of data being collected and analyzed, this design requires extensive time and resources to carry out the multiple steps involved in data gathering and interpretation.

Burch, Patricia and Carolyn J. Heinrich. Mixed Methods for Policy Research and Program Evaluation . Thousand Oaks, CA: Sage, 2016; Creswell, John w. et al. Best Practices for Mixed Methods Research in the Health Sciences . Bethesda, MD: Office of Behavioral and Social Sciences Research, National Institutes of Health, 2010Creswell, John W. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches . 4th edition. Thousand Oaks, CA: Sage Publications, 2014; Domínguez, Silvia, editor. Mixed Methods Social Networks Research . Cambridge, UK: Cambridge University Press, 2014; Hesse-Biber, Sharlene Nagy. Mixed Methods Research: Merging Theory with Practice . New York: Guilford Press, 2010; Niglas, Katrin. “How the Novice Researcher Can Make Sense of Mixed Methods Designs.” International Journal of Multiple Research Approaches 3 (2009): 34-46; Onwuegbuzie, Anthony J. and Nancy L. Leech. “Linking Research Questions to Mixed Methods Data Analysis Procedures.” The Qualitative Report 11 (September 2006): 474-498; Tashakorri, Abbas and John W. Creswell. “The New Era of Mixed Methods.” Journal of Mixed Methods Research 1 (January 2007): 3-7; Zhanga, Wanqing. “Mixed Methods Application in Health Intervention Research: A Multiple Case Study.” International Journal of Multiple Research Approaches 8 (2014): 24-35 .

Observational Design

This type of research design draws a conclusion by comparing subjects against a control group, in cases where the researcher has no control over the experiment. There are two general types of observational designs. In direct observations, people know that you are watching them. Unobtrusive measures involve any method for studying behavior where individuals do not know they are being observed. An observational study allows a useful insight into a phenomenon and avoids the ethical and practical difficulties of setting up a large and cumbersome research project.

  • Observational studies are usually flexible and do not necessarily need to be structured around a hypothesis about what you expect to observe [data is emergent rather than pre-existing].
  • The researcher is able to collect in-depth information about a particular behavior.
  • Can reveal interrelationships among multifaceted dimensions of group interactions.
  • You can generalize your results to real life situations.
  • Observational research is useful for discovering what variables may be important before applying other methods like experiments.
  • Observation research designs account for the complexity of group behaviors.
  • Reliability of data is low because observing behaviors over and over again is a time-consuming task, and such observations are difficult to replicate.
  • In observational research, findings may only reflect a unique sample population and, thus, cannot be generalized to other groups.
  • There can be problems with bias as the researcher may only "see what they want to see."
  • There is no possibility to determine "cause and effect" relationships since nothing is manipulated.
  • Sources or subjects may not all be equally credible.
  • Any group that is knowingly studied is altered to some degree by the presence of the researcher, therefore, potentially skewing any data collected.

Atkinson, Paul and Martyn Hammersley. “Ethnography and Participant Observation.” In Handbook of Qualitative Research . Norman K. Denzin and Yvonna S. Lincoln, eds. (Thousand Oaks, CA: Sage, 1994), pp. 248-261; Observational Research. Research Methods by Dummies. Department of Psychology. California State University, Fresno, 2006; Patton Michael Quinn. Qualitiative Research and Evaluation Methods . Chapter 6, Fieldwork Strategies and Observational Methods. 3rd ed. Thousand Oaks, CA: Sage, 2002; Payne, Geoff and Judy Payne. "Observation." In Key Concepts in Social Research . The SAGE Key Concepts series. (London, England: Sage, 2004), pp. 158-162; Rosenbaum, Paul R. Design of Observational Studies . New York: Springer, 2010;Williams, J. Patrick. "Nonparticipant Observation." In The Sage Encyclopedia of Qualitative Research Methods . Lisa M. Given, editor.(Thousand Oaks, CA: Sage, 2008), pp. 562-563.

Philosophical Design

Understood more as a broad approach to examining a research problem than a methodological design, philosophical analysis and argumentation is intended to challenge deeply embedded, often intractable, assumptions underpinning an area of study. This approach uses the tools of argumentation derived from philosophical traditions, concepts, models, and theories to critically explore and challenge, for example, the relevance of logic and evidence in academic debates, to analyze arguments about fundamental issues, or to discuss the root of existing discourse about a research problem. These overarching tools of analysis can be framed in three ways:

  • Ontology -- the study that describes the nature of reality; for example, what is real and what is not, what is fundamental and what is derivative?
  • Epistemology -- the study that explores the nature of knowledge; for example, what do knowledge and understanding depend upon, and how can we be certain of what we know?
  • Axiology -- the study of values; for example, what values does an individual or group hold and why? How are values related to interest, desire, will, experience, and means-to-end? And, what is the difference between a matter of fact and a matter of value?
  • Can provide a basis for applying ethical decision-making to practice.
  • Functions as a means of gaining greater self-understanding and self-knowledge about the purposes of research.
  • Brings clarity to general guiding practices and principles of an individual or group.
  • Philosophy informs methodology.
  • Refines concepts and theories that are invoked in relatively unreflective modes of thought and discourse.
  • Beyond methodology, philosophy also informs critical thinking about epistemology and the structure of reality (metaphysics).
  • Offers clarity and definition to the practical and theoretical uses of terms, concepts, and ideas.
  • Limited application to specific research problems [answering the "So What?" question in social science research].
  • Analysis can be abstract, argumentative, and limited in its practical application to real-life issues.
  • While a philosophical analysis may render problematic that which was once simple or taken-for-granted, the writing can be dense and subject to unnecessary jargon, overstatement, and/or excessive quotation and documentation.
  • There are limitations in the use of metaphor as a vehicle of philosophical analysis.
  • There can be analytical difficulties in moving from philosophy to advocacy and between abstract thought and application to the phenomenal world.


Sequential Design

Strengths of a sequential design include:

  • The researcher has virtually limitless options when it comes to sample size and the sampling schedule.
  • Due to the repetitive nature of this research design, minor changes and adjustments can be made during the initial parts of the study to correct and hone the research method.
  • It is a useful design for exploratory studies.
  • The technique requires relatively little effort on the part of the researcher and is generally not expensive, time consuming, or workforce intensive.
  • Because the study is conducted serially, the results of one sample are known before the next sample is taken and analyzed. This provides opportunities for continuous improvement of sampling and methods of analysis.

Limitations include:

  • The sampling method is not representative of the entire population. Representativeness can only be approached by using a sample large enough to cover a substantial portion of the population, in which case moving on to study a second or more specific sample can be difficult.
  • The design cannot be used to draw conclusions and interpretations that pertain to an entire population because the sampling technique is not randomized. Generalizability from findings is, therefore, limited.
  • It is difficult to account for and interpret variation from one sample to another over time, particularly when using qualitative methods of data collection.
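To make the serial logic concrete, here is a minimal Python sketch of a sequential sampling loop, assuming a simple stopping rule based on the standard error of the running estimate; the batch size, the threshold, and the simulated `draw_batch` data source are hypothetical choices for illustration, not part of any standard procedure.

```python
import numpy as np

rng = np.random.default_rng(42)

def draw_batch(size):
    # Hypothetical data source: stands in for collecting one more batch
    # of observations from the field (e.g., survey scores).
    return rng.normal(loc=3.5, scale=1.2, size=size)

def sequential_estimate(batch_size=25, max_batches=40, target_se=0.05):
    """Collect data batch by batch, re-estimating after each batch,
    and stop once the standard error of the mean is small enough."""
    data = np.array([])
    for batch in range(1, max_batches + 1):
        data = np.concatenate([data, draw_batch(batch_size)])
        se = data.std(ddof=1) / np.sqrt(len(data))
        print(f"batch {batch:2d}: n={len(data):4d}, mean={data.mean():.3f}, se={se:.3f}")
        if se < target_se:
            break  # results of this sample inform whether to keep sampling
    return data.mean(), se

mean, se = sequential_estimate()
```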


Systematic Review

A systematic review synthesizes the findings of multiple studies related to each other by incorporating strategies of analysis and interpretation intended to reduce biases and random errors. Strengths of this approach include:

  • The application of critical exploration, evaluation, and synthesis methods separates insignificant, unsound, or redundant research from the most salient and relevant studies worthy of reflection.
  • Systematic reviews can be used to identify, justify, and refine hypotheses; recognize and avoid hidden problems in prior studies; and explain inconsistencies and conflicts in the data.
  • Systematic reviews can be used to help policy makers formulate evidence-based guidelines and regulations.
  • The use of strict, explicit, and pre-determined methods of synthesis, when applied appropriately, provides reliable estimates of the effects of interventions, evaluations, and outcomes related to the overarching research problem investigated by each study under review.
  • Systematic reviews illuminate where knowledge or thorough understanding of a research problem is lacking and, therefore, can be used to guide future research.
  • The accepted inclusion of unpublished studies [i.e., grey literature] ensures the broadest possible way to analyze and interpret research on a topic.
  • Results of the synthesis can be generalized, and the findings extrapolated to the general population, with more validity than most other types of studies.

Limitations include:

  • Systematic reviews do not create new knowledge per se; they are a method for synthesizing existing studies about a research problem in order to gain new insights and determine gaps in the literature.
  • The way researchers have carried out their investigations [e.g., the period of time covered, number of participants, sources of data analyzed, etc.] can make it difficult to effectively synthesize studies.
  • The inclusion of unpublished studies can introduce bias into the review because such studies may not have undergone a rigorous peer-review process. Examples include conference presentations or proceedings, publications from government agencies, white papers, working papers, internal documents from organizations, and doctoral dissertations and Master's theses.
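Systematic reviews are often paired with a quantitative meta-analysis that pools effect estimates across the included studies. The Python sketch below illustrates only the basic inverse-variance (fixed-effect) pooling idea, using invented study numbers; real reviews also involve risk-of-bias assessment, heterogeneity statistics, and usually dedicated meta-analysis software.

```python
import math

# Hypothetical effect estimates (e.g., mean differences) and their
# standard errors from five studies included in a review.
studies = [
    ("Study A", 0.40, 0.15),
    ("Study B", 0.25, 0.10),
    ("Study C", 0.55, 0.20),
    ("Study D", 0.10, 0.12),
    ("Study E", 0.35, 0.18),
]

# Inverse-variance (fixed-effect) pooling: weight each study by 1/SE^2.
weights = [1.0 / se ** 2 for _, _, se in studies]
pooled = sum(w * eff for (_, eff, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

print(f"Pooled effect: {pooled:.3f}")
print(f"95% CI: [{pooled - 1.96 * pooled_se:.3f}, {pooled + 1.96 * pooled_se:.3f}]")
```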




Research design

Research design is a comprehensive plan for data collection in an empirical research project. It is a ‘blueprint’ for empirical research aimed at answering specific research questions or testing specific hypotheses, and must specify at least three processes: the data collection process, the instrument development process, and the sampling process. The instrument development and sampling processes are described in the next two chapters, and the data collection process—which is often loosely called ‘research design’—is introduced in this chapter and is described in further detail in Chapters 9–12.

Broadly speaking, data collection methods can be grouped into two categories: positivist and interpretive. Positivist methods, such as laboratory experiments and survey research, are aimed at theory (or hypotheses) testing, while interpretive methods, such as action research and ethnography, are aimed at theory building. Positivist methods employ a deductive approach to research, starting with a theory and testing theoretical postulates using empirical data. In contrast, interpretive methods employ an inductive approach that starts with data and tries to derive a theory about the phenomenon of interest from the observed data. Oftentimes, these methods are incorrectly equated with quantitative and qualitative research. The terms quantitative and qualitative refer to the type of data being collected (quantitative data involve numeric scores, metrics, and so on, while qualitative data include interviews, observations, and so forth) and analysed (i.e., using quantitative techniques such as regression or qualitative techniques such as coding). Positivist research uses predominantly quantitative data, but can also use qualitative data. Interpretive research relies heavily on qualitative data, but can sometimes benefit from including quantitative data as well. Sometimes, the joint use of qualitative and quantitative data may help generate unique insights into a complex social phenomenon that are not available from either type of data alone, and hence, mixed-mode designs that combine qualitative and quantitative data are often highly desirable.

Key attributes of a research design

The quality of research designs can be defined in terms of four key design attributes: internal validity, external validity, construct validity, and statistical conclusion validity.

Internal validity, also called causality, examines whether the observed change in a dependent variable is indeed caused by a corresponding change in a hypothesised independent variable, and not by variables extraneous to the research context. Causality requires three conditions: covariation of cause and effect (i.e., if the cause happens, then the effect also happens; if the cause does not happen, the effect does not happen), temporal precedence (the cause must precede the effect in time), and the absence of spurious correlation (there is no plausible alternative explanation for the change). Certain research designs, such as laboratory experiments, are strong in internal validity by virtue of their ability to manipulate the independent variable (cause) via a treatment and observe the effect (dependent variable) of that treatment after a certain point in time, while controlling for the effects of extraneous variables. Other designs, such as field surveys, are poor in internal validity because of their inability to manipulate the independent variable (cause), and because cause and effect are measured at the same point in time, which defeats temporal precedence and makes it equally likely that the expected effect might have influenced the expected cause rather than the reverse. Although higher in internal validity compared to other methods, laboratory experiments are by no means immune to threats to internal validity, and are susceptible to history, testing, instrumentation, regression, and other threats that are discussed later in the chapter on experimental designs. Nonetheless, different research designs vary considerably in their respective levels of internal validity.

External validity or generalisability refers to whether the observed associations can be generalised from the sample to the population (population validity), or to other people, organisations, contexts, or time (ecological validity). For instance, can results drawn from a sample of financial firms in the United States be generalised to the population of financial firms (population validity) or to other firms within the United States (ecological validity)? Survey research, where data is sourced from a wide variety of individuals, firms, or other units of analysis, tends to have broader generalisability than laboratory experiments where treatments and extraneous variables are more controlled. The variation in internal and external validity for a wide range of research designs is shown in Figure 5.1.

[Figure 5.1: Internal and external validity]

Some researchers claim that there is a trade-off between internal and external validity: higher external validity can come only at the cost of internal validity, and vice versa. But this is not always the case. Research designs such as field experiments, longitudinal field surveys, and multiple case studies have higher degrees of both internal and external validities. Personally, I prefer research designs that have reasonable degrees of both internal and external validities, i.e., those that fall within the cone of validity shown in Figure 5.1. But this should not suggest that designs outside this cone are any less useful or valuable. Researchers’ choice of designs is ultimately a matter of their personal preference and competence, and the level of internal and external validity they desire.

Construct validity examines how well a given measurement scale is measuring the theoretical construct that it is expected to measure. Many constructs used in social science research such as empathy, resistance to change, and organisational learning are difficult to define, much less measure. For instance, construct validity must ensure that a measure of empathy is indeed measuring empathy and not compassion, which may be difficult since these constructs are somewhat similar in meaning. Construct validity is assessed in positivist research based on correlational or factor analysis of pilot test data, as described in the next chapter.
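As a rough illustration of the correlational checks mentioned above, the Python sketch below computes corrected item-total correlations and Cronbach's alpha for a hypothetical five-item empathy scale measured on simulated pilot data; the data, the scale, and the conventional 0.7 cut-off are illustrative assumptions, and a fuller assessment would typically also involve factor analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pilot data: 30 respondents x 5 items of an "empathy" scale,
# each item rated 1-5. A latent trait drives all items so they correlate.
trait = rng.normal(size=30)
items = np.clip(np.round(3 + trait[:, None] + rng.normal(scale=0.8, size=(30, 5))), 1, 5)

total = items.sum(axis=1)
for j in range(items.shape[1]):
    rest = total - items[:, j]            # total score excluding item j
    r = np.corrcoef(items[:, j], rest)[0, 1]
    print(f"item {j + 1}: corrected item-total r = {r:.2f}")

# Cronbach's alpha: internal consistency of the scale.
k = items.shape[1]
item_var = items.var(axis=0, ddof=1).sum()
total_var = total.var(ddof=1)
alpha = k / (k - 1) * (1 - item_var / total_var)
print(f"Cronbach's alpha = {alpha:.2f}  (values above ~0.7 are often taken as adequate)")
```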

Statistical conclusion validity examines the extent to which conclusions derived using a statistical procedure are valid. For example, it examines whether the right statistical method was used for hypotheses testing, whether the variables used meet the assumptions of that statistical test (such as sample size or distributional requirements), and so forth. Because interpretive research designs do not employ statistical tests, statistical conclusion validity is not applicable for such analysis. The different kinds of validity and where they exist at the theoretical/empirical levels are illustrated in Figure 5.2.

[Figure 5.2: Different types of validity in scientific research]
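One practical facet of statistical conclusion validity is checking a test's assumptions before trusting its result. The Python sketch below illustrates this on simulated data: it checks normality and equality of variances before running an independent-samples t-test, and falls back to a rank-based test otherwise. The simulated groups and the specific decision rule are illustrative assumptions, not a prescribed procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(loc=50, scale=10, size=40)   # e.g., control scores
group_b = rng.normal(loc=55, scale=10, size=40)   # e.g., treatment scores

# Check assumptions of the independent-samples t-test.
_, p_norm_a = stats.shapiro(group_a)              # normality of each group
_, p_norm_b = stats.shapiro(group_b)
_, p_var = stats.levene(group_a, group_b)         # equality of variances

if min(p_norm_a, p_norm_b) > 0.05 and p_var > 0.05:
    stat, p = stats.ttest_ind(group_a, group_b)
    print(f"t-test: t = {stat:.2f}, p = {p:.4f}")
else:
    # Assumptions look doubtful: fall back to a rank-based test.
    stat, p = stats.mannwhitneyu(group_a, group_b)
    print(f"Mann-Whitney U: U = {stat:.1f}, p = {p:.4f}")
```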

Improving internal and external validity

The best research designs are those that can ensure high levels of internal and external validity. Such designs would guard against spurious correlations, inspire greater faith in hypotheses testing, and ensure that the results drawn from a small sample are generalisable to the population at large. Controls are required to ensure the internal validity (causality) of research designs, and can be accomplished in five ways: manipulation, elimination, inclusion, statistical control, and randomisation.

In manipulation , the researcher manipulates the independent variables in one or more levels (called ‘treatments’), and compares the effects of the treatments against a control group where subjects do not receive the treatment. Treatments may include a new drug or different dosage of drug (for treating a medical condition), a teaching style (for students), and so forth. This type of control is achieved in experimental or quasi-experimental designs, but not in non-experimental designs such as surveys. Note that if subjects cannot distinguish adequately between different levels of treatment manipulations, their responses across treatments may not be different, and manipulation would fail.

The elimination technique relies on eliminating extraneous variables by holding them constant across treatments, such as by restricting the study to a single gender or a single socioeconomic status. In the inclusion technique, the role of extraneous variables is considered by including them in the research design and separately estimating their effects on the dependent variable, such as via factorial designs where one factor is gender (male versus female). Such technique allows for greater generalisability, but also requires substantially larger samples. In statistical control , extraneous variables are measured and used as covariates during the statistical testing process.
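A common way to implement the statistical control technique is to include the measured extraneous variable as a covariate in a regression model. The sketch below, in Python with statsmodels, does this on simulated data; the variable names and effect sizes are invented for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 200

age = rng.uniform(20, 60, size=n)          # measured extraneous variable
treatment = rng.integers(0, 2, size=n)     # 0 = control, 1 = treated
# The outcome depends on the treatment and on age (the variable we control for).
outcome = 2.0 * treatment + 0.1 * age + rng.normal(scale=1.0, size=n)

X = sm.add_constant(np.column_stack([treatment, age]))
model = sm.OLS(outcome, X).fit()
# params are [intercept, treatment effect, age effect]; the treatment coefficient
# estimates the effect of the treatment while holding age constant.
print(model.params)
```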

Finally, the randomisation technique is aimed at cancelling out the effects of extraneous variables through a process of random sampling, if it can be assured that these effects are of a random (non-systematic) nature. Two types of randomisation are: random selection , where a sample is selected randomly from a population, and random assignment , where subjects selected in a non-random manner are randomly assigned to treatment groups.

Randomisation also ensures external validity, allowing inferences drawn from the sample to be generalised to the population from which the sample is drawn. Note that random assignment is mandatory when random selection is not possible because of resource or access constraints. However, generalisability across populations is harder to ascertain since populations may differ on multiple dimensions and you can only control for a few of those dimensions.
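The distinction between random selection and random assignment can be shown in a few lines of Python; the population of unit IDs and the two-group design below are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Random selection: draw a sample at random from a sampling frame
# (supports external validity / generalisability).
population = np.arange(10_000)             # hypothetical population of unit IDs
sample = rng.choice(population, size=100, replace=False)

# Random assignment: allocate the selected subjects at random to
# treatment and control groups (supports internal validity).
shuffled = rng.permutation(sample)
treatment_group, control_group = shuffled[:50], shuffled[50:]

print(len(treatment_group), len(control_group))
```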

Popular research designs

As noted earlier, research designs can be classified into two categories—positivist and interpretive—depending on the goal of the research. Positivist designs are meant for theory testing, while interpretive designs are meant for theory building. Positivist designs seek generalised patterns based on an objective view of reality, while interpretive designs seek subjective interpretations of social phenomena from the perspectives of the subjects involved. Some popular examples of positivist designs include laboratory experiments, field experiments, field surveys, secondary data analysis, and case research, while examples of interpretive designs include case research, phenomenology, and ethnography. Note that case research can be used for theory building or theory testing, though not at the same time. Not all techniques are suited for all kinds of scientific research. Some techniques such as focus groups are best suited for exploratory research, others such as ethnography are best for descriptive research, and still others such as laboratory experiments are ideal for explanatory research. Following are brief descriptions of some of these designs. Additional details are provided in Chapters 9–12.

Experimental studies are those that are intended to test cause-effect relationships (hypotheses) in a tightly controlled setting by separating the cause from the effect in time, administering the cause to one group of subjects (the ‘treatment group’) but not to another group (‘control group’), and observing how the mean effects vary between subjects in these two groups. For instance, if we design a laboratory experiment to test the efficacy of a new drug in treating a certain ailment, we can get a random sample of people afflicted with that ailment, randomly assign them to one of two groups (treatment and control groups), administer the drug to subjects in the treatment group, but only give a placebo (e.g., a sugar pill with no medicinal value) to subjects in the control group. More complex designs may include multiple treatment groups, such as low versus high dosage of the drug or combining drug administration with dietary interventions. In a true experimental design , subjects must be randomly assigned to each group. If random assignment is not followed, then the design becomes quasi-experimental . Experiments can be conducted in an artificial or laboratory setting such as at a university (laboratory experiments) or in field settings such as in an organisation where the phenomenon of interest is actually occurring (field experiments). Laboratory experiments allow the researcher to isolate the variables of interest and control for extraneous variables, which may not be possible in field experiments. Hence, inferences drawn from laboratory experiments tend to be stronger in internal validity, but those from field experiments tend to be stronger in external validity. Experimental data is analysed using quantitative statistical techniques. The primary strength of the experimental design is its strong internal validity due to its ability to isolate, control, and intensively examine a small number of variables, while its primary weakness is limited external generalisability since real life is often more complex (i.e., involving more extraneous variables) than contrived lab settings. Furthermore, if the research does not identify ex ante relevant extraneous variables and control for such variables, such lack of controls may hurt internal validity and may lead to spurious correlations.

Field surveys are non-experimental designs that do not control for or manipulate independent variables or treatments, but measure these variables and test their effects using statistical methods. Field surveys capture snapshots of practices, beliefs, or situations from a random sample of subjects in field settings through a survey questionnaire or, less frequently, through a structured interview. In cross-sectional field surveys, independent and dependent variables are measured at the same point in time (e.g., using a single questionnaire), while in longitudinal field surveys, dependent variables are measured at a later point in time than the independent variables. The strengths of field surveys are their external validity (since data is collected in field settings), their ability to capture and control for a large number of variables, and their ability to study a problem from multiple perspectives or using multiple theories. However, because of their non-temporal nature, internal validity (cause-effect relationships) is difficult to infer, and surveys may be subject to respondent biases (e.g., subjects may provide a ‘socially desirable’ response rather than their true response), which further hurts internal validity.

Secondary data analysis is an analysis of data that has previously been collected and tabulated by other sources. Such data may include data from government agencies such as employment statistics from the U.S. Bureau of Labor Statistics or development statistics by country from the United Nations Development Program, data collected by other researchers (often used in meta-analytic studies), or publicly available third-party data, such as financial data from stock markets or real-time auction data from eBay. This is in contrast to most other research designs where collecting primary data for research is part of the researcher’s job. Secondary data analysis may be an effective means of research where primary data collection is too costly or infeasible, and secondary data is available at a level of analysis suitable for answering the researcher’s questions. The limitations of this design are that the data might not have been collected in a systematic or scientific manner and hence may be unsuitable for scientific research; that, since the data was collected for a presumably different purpose, it may not adequately address the research questions of interest to the researcher; and that internal validity is problematic if the temporal precedence between cause and effect is unclear.

Case research is an in-depth investigation of a problem in one or more real-life settings (case sites) over an extended period of time. Data may be collected using a combination of interviews, personal observations, and internal or external documents. Case studies can be positivist in nature (for hypotheses testing) or interpretive (for theory building). The strength of this research method is its ability to discover a wide variety of social, cultural, and political factors potentially related to the phenomenon of interest that may not be known in advance. Analysis tends to be qualitative in nature, but heavily contextualised and nuanced. However, interpretation of findings may depend on the observational and integrative ability of the researcher, lack of control may make it difficult to establish causality, and findings from a single case site may not be readily generalised to other case sites. Generalisability can be improved by replicating and comparing the analysis in other case sites in a multiple case design .

Focus group research is a type of research that involves bringing in a small group of subjects (typically six to ten people) at one location, and having them discuss a phenomenon of interest for a period of one and a half to two hours. The discussion is moderated and led by a trained facilitator, who sets the agenda and poses an initial set of questions for participants, makes sure that the ideas and experiences of all participants are represented, and attempts to build a holistic understanding of the problem situation based on participants’ comments and experiences. Internal validity cannot be established due to lack of controls and the findings may not be generalised to other settings because of the small sample size. Hence, focus groups are not generally used for explanatory or descriptive research, but are more suited for exploratory research.

Action research assumes that complex social phenomena are best understood by introducing interventions or ‘actions’ into those phenomena and observing the effects of those actions. In this method, the researcher is embedded within a social context such as an organisation and initiates an action—such as new organisational procedures or new technologies—in response to a real problem such as declining profitability or operational bottlenecks. The researcher’s choice of actions must be based on theory, which should explain why and how such actions may cause the desired change. The researcher then observes the results of that action, modifying it as necessary, while simultaneously learning from the action and generating theoretical insights about the target problem and interventions. The initial theory is validated by the extent to which the chosen action successfully solves the target problem. Simultaneous problem solving and insight generation is the central feature that distinguishes action research from all other research methods, and hence, action research is an excellent method for bridging research and practice. This method is also suited for studying unique social problems that cannot be replicated outside that context, but it is also subject to researcher bias and subjectivity, and the generalisability of findings is often restricted to the context where the study was conducted.

Ethnography is an interpretive research design inspired by anthropology that emphasises that a research phenomenon must be studied within the context of its culture. The researcher is deeply immersed in a certain culture over an extended period of time (eight months to two years) and, during that period, engages, observes, and records the daily life of the studied culture, and theorises about the evolution and behaviours in that culture. Data is collected primarily via observational techniques, formal and informal interaction with participants in that culture, and personal field notes, while data analysis involves ‘sense-making’. The researcher must narrate her experience in great detail so that readers may experience that same culture without necessarily being there. The advantages of this approach are its sensitivity to context, the rich and nuanced understanding it generates, and minimal respondent bias. However, this is also an extremely time- and resource-intensive approach, and findings are specific to a given culture and less generalisable to other cultures.

Selecting research designs

Given the above multitude of research designs, which design should researchers choose for their research? Generally speaking, researchers tend to select those research designs that they are most comfortable with and feel most competent to handle, but ideally, the choice should depend on the nature of the research phenomenon being studied. In the preliminary phases of research, when the research problem is unclear and the researcher wants to scope out the nature and extent of a certain research problem, a focus group (for an individual unit of analysis) or a case study (for an organisational unit of analysis) is an ideal strategy for exploratory research. As one delves further into the research domain, but finds that there are no good theories to explain the phenomenon of interest and wants to build a theory to fill in the unmet gap in that area, interpretive designs such as case research or ethnography may be useful designs. If competing theories exist and the researcher wishes to test these different theories or integrate them into a larger theory, positivist designs such as experimental design, survey research, or secondary data analysis are more appropriate.

Regardless of the specific research design chosen, the researcher should strive to collect quantitative and qualitative data using a combination of techniques such as questionnaires, interviews, observations, documents, or secondary data. For instance, even in a highly structured survey questionnaire, intended to collect quantitative data, the researcher may leave some room for a few open-ended questions to collect qualitative data that may generate unexpected insights not otherwise available from structured quantitative data alone. Likewise, while case research employs mostly face-to-face interviews to collect qualitative data, the potential and value of collecting quantitative data should not be ignored. As an example, in a study of organisational decision-making processes, the case interviewer can record numeric quantities such as how many months it took to make certain organisational decisions, how many people were involved in that decision process, and how many decision alternatives were considered, which can provide valuable insights not otherwise available from interviewees’ narrative responses. Irrespective of the specific research design employed, the goal of the researcher should be to collect as much and as diverse data as possible to help generate the best possible insights about the phenomenon of interest.

Social Science Research: Principles, Methods and Practices (Revised edition) Copyright © 2019 by Anol Bhattacherjee is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Perspectives in Clinical Research, 9(4), Oct-Dec 2018

Study designs: Part 1 – An overview and classification

Priya Ranganathan

Department of Anaesthesiology, Tata Memorial Centre, Mumbai, Maharashtra, India

Rakesh Aggarwal

Department of Gastroenterology, Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow, Uttar Pradesh, India

There are several types of research study designs, each with its inherent strengths and flaws. The study design used to answer a particular research question depends on the nature of the question and the availability of resources. In this article, which is the first part of a series on “study designs,” we provide an overview of research study designs and their classification. The subsequent articles will focus on individual designs.

INTRODUCTION

Research study design is a framework, or the set of methods and procedures used to collect and analyze data on variables specified in a particular research problem.

Research study designs are of many types, each with its advantages and limitations. The type of study design used to answer a particular research question is determined by the nature of question, the goal of research, and the availability of resources. Since the design of a study can affect the validity of its results, it is important to understand the different types of study designs and their strengths and limitations.

There are some terms that are used frequently while classifying study designs which are described in the following sections.

A variable represents a measurable attribute that varies across study units (for example, across individual participants in a study) or, at times, within the same individual when measured over time. Some examples of variables include age, sex, weight, height, health status, alive/dead, diseased/healthy, annual income, smoking yes/no, and treated/untreated.

Exposure (or intervention) and outcome variables

A large proportion of research studies assess the relationship between two variables. Here, the question is whether one variable is associated with or responsible for change in the value of the other variable. Exposure (or intervention) refers to the risk factor whose effect is being studied. It is also referred to as the independent or the predictor variable. The outcome (or predicted or dependent) variable develops as a consequence of the exposure (or intervention). Typically, the term “exposure” is used when the “causative” variable is naturally determined (as in observational studies – examples include age, sex, smoking, and educational status), and the term “intervention” is preferred where the researcher assigns some or all participants to receive a particular treatment for the purpose of the study (experimental studies – e.g., administration of a drug). If a drug had been started in some individuals but not in the others, before the study started, this counts as exposure, and not as intervention – since the drug was not started specifically for the study.

Observational versus interventional (or experimental) studies

Observational studies are those where the researcher is documenting a naturally occurring relationship between the exposure and the outcome that he/she is studying. The researcher does not perform any active intervention in any individual, and the exposure has already been decided naturally or by some other factor. Examples include looking at the incidence of lung cancer in smokers versus nonsmokers, or comparing the antenatal dietary habits of mothers with normal-weight and low-birth-weight babies. In these studies, the investigator plays no role in determining the smoking or dietary habits of individuals.

For an exposure to determine the outcome, it must precede the latter. Any variable that occurs simultaneously with or following the outcome cannot be causative, and hence is not considered as an “exposure.”

Observational studies can be either descriptive (nonanalytical) or analytical (inferential) – this is discussed later in this article.

Interventional studies are experiments where the researcher actively performs an intervention in some or all members of a group of participants. This intervention could take many forms – for example, administration of a drug or vaccine, performance of a diagnostic or therapeutic procedure, and introduction of an educational tool. For example, a study could randomly assign persons to receive aspirin or placebo for a specific duration and assess the effect on the risk of developing cerebrovascular events.

Descriptive versus analytical studies

Descriptive (or nonanalytical) studies, as the name suggests, merely try to describe the data on one or more characteristics of a group of individuals. These do not try to answer questions or establish relationships between variables. Examples of descriptive studies include case reports, case series, and cross-sectional surveys (please note that cross-sectional surveys may be analytical studies as well; this will be discussed in the next article in this series). For instance, a descriptive study might survey dietary habits among pregnant women or report a case series of patients with an unusual reaction to a drug.

Analytical studies attempt to test a hypothesis and establish causal relationships between variables. In these studies, the researcher assesses the effect of an exposure (or intervention) on an outcome. As described earlier, analytical studies can be observational (if the exposure is naturally determined) or interventional (if the researcher actively administers the intervention).

Directionality of study designs

Based on the direction of inquiry, study designs may be classified as forward-direction or backward-direction. In forward-direction studies, the researcher starts with determining the exposure to a risk factor and then assesses whether the outcome occurs at a future time point. This design is known as a cohort study. For example, a researcher can follow a group of smokers and a group of nonsmokers to determine the incidence of lung cancer in each. In backward-direction studies, the researcher begins by determining whether the outcome is present (cases vs. noncases [also called controls]) and then traces the presence of prior exposure to a risk factor. These are known as case–control studies. For example, a researcher identifies a group of normal-weight babies and a group of low-birth weight babies and then asks the mothers about their dietary habits during the index pregnancy.

Prospective versus retrospective study designs

The terms “prospective” and “retrospective” refer to the timing of the research in relation to the development of the outcome. In retrospective studies, the outcome of interest has already occurred (or not occurred – e.g., in controls) in each individual by the time s/he is enrolled, and the data are collected either from records or by asking participants to recall exposures. There is no follow-up of participants. By contrast, in prospective studies, the outcome (and sometimes even the exposure or intervention) has not occurred when the study starts and participants are followed up over a period of time to determine the occurrence of outcomes. Typically, most cohort studies are prospective studies (though there may be retrospective cohorts), whereas case–control studies are retrospective studies. An interventional study has to be, by definition, a prospective study since the investigator determines the exposure for each study participant and then follows them to observe outcomes.

The terms “prospective” versus “retrospective” studies can be confusing. Let us think of an investigator who starts a case–control study. To him/her, the process of enrolling cases and controls over a period of several months appears prospective. Hence, the use of these terms is best avoided. Or, at the very least, one must be clear that the terms relate to work flow for each individual study participant, and not to the study as a whole.

Classification of study designs

Figure 1 depicts a simple classification of research study designs. The Centre for Evidence-based Medicine has put forward a useful three-point algorithm which can help determine the design of a research study from its methods section [1]; a minimal code sketch of the algorithm appears after the list below:

[Figure 1: Classification of research study designs]

  • Does the study describe the characteristics of a sample or does it attempt to analyze (or draw inferences about) the relationship between two variables? – If no, then it is a descriptive study, and if yes, it is an analytical (inferential) study
  • If analytical, did the investigator determine the exposure? – If no, it is an observational study, and if yes, it is an experimental study
  • If observational, when was the outcome determined? – at the start of the study (case–control study), at the end of a period of follow-up (cohort study), or simultaneously with the exposure (cross-sectional study).
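The following Python sketch restates these three questions as a small decision function; the wording and the labels it returns are paraphrased from the list above, and the function is meant only as a reading aid, not a validated classification tool.

```python
def classify_study(analytical: bool,
                   investigator_assigned_exposure: bool = False,
                   outcome_timing: str = "simultaneous") -> str:
    """Classify a study design from the three questions above.

    outcome_timing: "start" (outcome already known at enrolment),
                    "follow-up" (outcome observed after a follow-up period),
                    or "simultaneous" (exposure and outcome measured together).
    """
    if not analytical:
        return "descriptive study"
    if investigator_assigned_exposure:
        return "experimental (interventional) study"
    return {
        "start": "case-control study",
        "follow-up": "cohort study",
        "simultaneous": "cross-sectional study",
    }[outcome_timing]

# Example: smokers vs. non-smokers followed up for lung cancer incidence.
print(classify_study(analytical=True,
                     investigator_assigned_exposure=False,
                     outcome_timing="follow-up"))   # -> cohort study
```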

In the next few pieces in the series, we will discuss various study designs in greater detail.



Research Design: What it is, Elements & Types


Can you imagine doing research without a plan? Probably not. When we discuss a strategy to collect, study, and evaluate data, we talk about research design. This design addresses problems and creates a consistent and logical model for data analysis. Let’s learn more about it.

What is Research Design?

Research design is the framework of research methods and techniques chosen by a researcher to conduct a study. The design allows researchers to sharpen the research methods suitable for the subject matter and set up their studies for success.

A research design specifies the type of research (experimental, survey research, correlational, semi-experimental, review) and its sub-type (for example, an experimental design or a descriptive case study).

A research design addresses three main aspects of a study:

  • Data collection
  • Measurement
  • Data analysis

The research problem an organization faces will determine the design, not vice-versa. The design phase of a study determines which tools to use and how they are used.

The Process of Research Design

The research design process is a systematic and structured approach to conducting research. The process is essential to ensure that the study is valid, reliable, and produces meaningful results.

  • Consider your aims and approaches: Determine the research questions and objectives, and identify the theoretical framework and methodology for the study.
  • Choose a type of Research Design: Select the appropriate research design, such as experimental, correlational, survey, case study, or ethnographic, based on the research questions and objectives.
  • Identify your population and sampling method: Determine the target population and sample size, and choose the sampling method, such as random sampling, stratified random sampling, or convenience sampling.
  • Choose your data collection methods: Decide on the data collection methods, such as surveys, interviews, observations, or experiments, and select the appropriate instruments or tools for collecting data.
  • Plan your data collection procedures: Develop a plan for data collection, including the timeframe, location, and personnel involved, and ensure ethical considerations are addressed.
  • Decide on your data analysis strategies: Select the appropriate data analysis techniques, such as statistical analysis, content analysis, or discourse analysis, and plan how to interpret the results.

The process of research design is a critical step in conducting research. By following the steps of research design, researchers can ensure that their study is well-planned, ethical, and rigorous.
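One lightweight way to make these design decisions explicit before any data are collected is to record them in a structured plan. The Python dataclass below is a hypothetical sketch of such a plan; the field names and the example values are illustrative, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class ResearchDesignPlan:
    research_question: str
    design_type: str            # e.g., "survey", "experimental", "case study"
    population: str
    sample_size: int
    sampling_method: str        # e.g., "stratified random sampling"
    data_collection: list = field(default_factory=list)
    analysis_methods: list = field(default_factory=list)
    ethics_notes: str = ""

plan = ResearchDesignPlan(
    research_question="Does onboarding style affect 90-day customer retention?",
    design_type="survey",
    population="customers who signed up in the last quarter",
    sample_size=400,
    sampling_method="stratified random sampling by plan tier",
    data_collection=["online questionnaire", "usage logs"],
    analysis_methods=["descriptive statistics", "logistic regression"],
    ethics_notes="informed consent collected at survey start",
)
print(plan.design_type, plan.sample_size)
```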

Research Design Elements

Impactful research usually minimizes bias in the data and increases trust in the accuracy of the data collected. A design that produces the smallest margin of error in experimental research is generally considered the desired outcome. The essential elements are:

  • Accurate purpose statement
  • Techniques to be implemented for collecting and analyzing data
  • The method applied for analyzing the collected data
  • Type of research methodology
  • Probable objections to research
  • Settings for the research study
  • Measurement of analysis

Characteristics of Research Design

A proper design sets your study up for success. Successful research studies provide insights that are accurate and unbiased. You’ll need to create a survey that meets all of the main characteristics of a design. There are four key characteristics:


  • Neutrality: When you set up your study, you may have to make assumptions about the data you expect to collect. The results projected from the research should be free from researcher bias and neutral. Gather opinions on the final evaluated scores and conclusions from multiple individuals and consider the extent to which they agree with the results.
  • Reliability: If the research is conducted repeatedly, the researcher should obtain similar results every time. You’ll only be able to reach dependable results if your design is reliable. Your plan should indicate how to form the research questions to ensure a consistent standard of results.
  • Validity: There are multiple measuring tools available. However, the only valid measuring tools are those that help a researcher gauge results according to the objective of the research. The questionnaire developed from such a design will then be valid.
  • Generalization: The outcome of your design should apply to the population and not just the restricted sample. A generalized design implies that the survey can be conducted on any part of a population with similar accuracy.

These factors affect how respondents answer the research questions, so a good design should balance all of the above characteristics.

Research Design Types

A researcher must clearly understand the various types to select which model to implement for a study. Like the research itself, the design of your analysis can be broadly classified into quantitative and qualitative.

Qualitative research

Qualitative research determines relationships between collected data and observations through non-numerical analysis rather than mathematical calculation. Instead of proving or disproving theories with statistics, researchers rely on qualitative observation and interview methods to conclude “why” a particular theory exists and “what” respondents have to say about it.

Quantitative research

Quantitative research is used where statistical conclusions are essential for collecting actionable insights. Numbers provide a better perspective for making critical business decisions. Quantitative research methods are necessary for the growth of any organization, because insights drawn from complex numerical data and analysis prove highly effective when making decisions about the business’s future.

Qualitative Research vs Quantitative Research

The major difference between the two approaches: qualitative research is more exploratory and focuses on understanding the subjective experiences of individuals, while quantitative research focuses on objective, numerical data and statistical analysis.

You can further break down the types of research design into five categories:


1. Descriptive: In a descriptive design, a researcher is solely interested in describing the situation or case under study. It is a theory-based design created by gathering, analyzing, and presenting the collected data. This allows a researcher to provide insights into the why and how of the research. Descriptive design helps others better understand the need for the research. If the problem statement is not clear, you can conduct exploratory research.

2. Experimental: Experimental research establishes a relationship between the cause and effect of a situation. It is a causal research design in which one observes the impact of an independent variable on a dependent variable. For example, one monitors the influence of an independent variable such as price on a dependent variable such as customer satisfaction or brand loyalty. It is an efficient research method as it contributes to solving a problem.

The independent variable is manipulated to monitor the change it produces in the dependent variable. The social sciences often use this design to observe human behavior by analyzing two groups. Researchers can have participants change their actions and study how the people around them react, in order to understand social psychology better.
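As a toy illustration of manipulating an independent variable and observing its effect on a dependent variable, the Python sketch below simulates the price example: participants are randomly assigned to a low-price or high-price condition and mean satisfaction is compared. All numbers are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_participants = 120

# Randomly assign participants to the two price conditions (the manipulation).
condition = rng.permutation(np.repeat(["low_price", "high_price"], n_participants // 2))

# Simulate satisfaction ratings (1-10); assume the higher price lowers them slightly.
base = rng.normal(loc=7.5, scale=1.2, size=n_participants)
satisfaction = np.clip(base - 0.7 * (condition == "high_price"), 1, 10)

low = satisfaction[condition == "low_price"]
high = satisfaction[condition == "high_price"]
t, p = stats.ttest_ind(low, high)
print(f"Mean satisfaction (low price)  = {low.mean():.2f}")
print(f"Mean satisfaction (high price) = {high.mean():.2f}")
print(f"Estimated effect of price = {low.mean() - high.mean():.2f} points (t = {t:.2f}, p = {p:.4f})")
```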

3. Correlational research: Correlational research is a non-experimental research technique. It helps researchers establish a relationship between two closely connected variables. Neither variable is manipulated; statistical analysis techniques are used to calculate the relationship between them. This type of research requires measurements of at least two variables.

A correlation coefficient quantifies the correlation between two variables, and its value ranges between -1 and +1. A coefficient towards +1 indicates a positive relationship between the variables, and a coefficient towards -1 indicates a negative relationship.
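The correlation coefficient described above can be computed directly from paired measurements. The Python sketch below uses made-up data (hours studied and exam scores) purely to illustrate the calculation and its interpretation.

```python
import numpy as np
from scipy import stats

# Hypothetical paired measurements for the same respondents.
hours_studied = np.array([2, 4, 5, 7, 8, 10, 12, 14, 15, 18])
exam_score    = np.array([52, 55, 60, 64, 70, 72, 75, 83, 80, 90])

r, p_value = stats.pearsonr(hours_studied, exam_score)
print(f"Pearson r = {r:.2f} (p = {p_value:.4f})")
# r close to +1 indicates a strong positive relationship;
# r close to -1 would indicate a strong negative relationship.
```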

4. Diagnostic research: In diagnostic design, the researcher is looking to evaluate the underlying cause of a specific topic or phenomenon. This method helps one learn more about the factors that create troublesome situations. 

This design has three parts of the research:

  • Inception of the issue
  • Diagnosis of the issue
  • Solution for the issue

5. Explanatory research: Explanatory design uses a researcher’s ideas and thoughts on a subject to further explore their theories. The study explains unexplored aspects of a subject and details the what, how, and why of the research questions.

Benefits of Research Design

There are several benefits of having a well-designed research plan. Including:

  • Clarity of research objectives: Research design provides a clear understanding of the research objectives and the desired outcomes.
  • Increased validity and reliability: Research design helps to minimize the risk of bias and control extraneous variables, which supports the validity and reliability of results.
  • Improved data collection: Research design helps to ensure that the right data is collected, and that it is collected systematically and consistently.
  • Better data analysis: Research design helps ensure that the collected data can be analyzed effectively, providing meaningful insights and conclusions.
  • Improved communication: A well-designed study helps ensure that results are communicated clearly to the research team and to external stakeholders.
  • Efficient use of resources: Research design helps to ensure that resources are used efficiently, reducing the risk of waste and maximizing the impact of the research.

A well-designed research plan is essential for successful research, providing clear and meaningful insights and ensuring that resources are used effectively.



What is a Research Design? Importance and Types

Why Research Design is Important for a Researcher?

Dr. Sowndarya Somasundaram

A research design is a systematic plan for carrying out the different tasks of a research study. It is important for the researcher to understand research design and its types in order to carry out the work properly.

The purpose of a research design is to enable the researcher to proceed in the right direction without deviating from the planned tasks. It is an overall, detailed strategy for the research process.

The design of experiments is a very important aspect of a research study. A poor research design may cause the entire research project to collapse, wasting time, manpower, and money.


What is a Research Design in Research Methodology?

A research design is a plan or framework for conducting research. It includes a set of plans and procedures that aim to produce reliable and valid data. The research design must be appropriate to the type of research question being asked and the type of data being collected.

A typical research design is a detailed methodology or a roadmap for the successful completion of any research work.

Importance of Research Design

A good research design offers the following advantages:

  • Formulating a research design helps the researcher to make correct decisions in each and every step of the study.
  • It helps to identify the major and minor tasks of the study.
  • It makes the research study effective and interesting by providing minute details at each step of the research process.
  • Based on the design of experiments (research design), a researcher can easily frame the objectives of the research work.
  • A good research design helps the researcher to complete the objectives of the study in a given time and facilitates getting the best solution for the research problems .
  • It helps the researcher to complete all the tasks even with limited resources in a better way.
  • The main advantage of a good research design is that it provides accuracy, reliability, consistency, and legitimacy to the research.

How to Create a Research Design?                      

According to Thyer, the research design has the following components:


  • A researcher begins the study by framing the problem statement of the research work.
  • Then, the researcher has to identify the sampling points, the number of samples, the sample size, and the location.
  • The next step is to identify the operating variables or parameters of the study and detail how the variables are to be measured.
  • The final step is the collection, interpretation, and dissemination of results.

Considerations in selecting the research design

Researchers should know the various types of research designs and their applicability. The selection of a research design can only be made after a careful understanding of the different research design types. The factors to be considered in choosing a research design are:

  • Qualitative vs. quantitative
  • Basic vs. applied
  • Empirical vs. non-empirical

Types of Research Design

There are four main types of research designs: experimental, observational, quasi-experimental, and descriptive.

  • Experimental designs are used to test cause-and-effect relationships. In an experiment, the researcher manipulates one or more independent variables and observes the effect on a dependent variable (a minimal analysis sketch follows this list).
  • Observational designs are used to study behavior without manipulating any variables. The researcher simply observes and records the behavior.
  • Quasi-experimental designs are used when it is not possible to manipulate the independent variable. The researcher uses a naturally occurring independent variable and controls for other variables.
  • Descriptive designs are used to describe a behavior or phenomenon. The researcher does not manipulate any variables, but simply observes and records the behavior.
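
To make the cause-and-effect logic of an experimental design concrete, here is a minimal, hypothetical sketch in Python: the manipulated independent variable is whether a group received tutoring, the dependent variable is a test score, and the made-up scores are compared with an independent-samples t-test. The data, group labels, and sizes are purely illustrative.

```python
# Hypothetical experimental-design analysis: one manipulated independent
# variable (tutoring vs. no tutoring) and one dependent variable (test score).
from scipy import stats

control = [62, 70, 68, 65, 71, 66, 69, 64]     # group without the manipulation
treatment = [74, 78, 71, 80, 76, 73, 79, 75]   # group with the manipulation

mean_diff = sum(treatment) / len(treatment) - sum(control) / len(control)
result = stats.ttest_ind(treatment, control)   # compare the two group means

print(f"mean difference: {mean_diff:.1f}")
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```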

I hope this article helps you understand what research design is, the types of research design, and the important points to consider in carrying out research work.

Descriptive Research Design – Types, Methods and Examples

Definition:

Descriptive research design is a type of research methodology that aims to describe or document the characteristics, behaviors, attitudes, opinions, or perceptions of a group or population being studied.

Descriptive research design does not attempt to establish cause-and-effect relationships between variables or make predictions about future outcomes. Instead, it focuses on providing a detailed and accurate representation of the data collected, which can be useful for generating hypotheses, exploring trends, and identifying patterns in the data.

Types of Descriptive Research Design

Types of Descriptive Research Design are as follows:

Cross-sectional Study

This involves collecting data at a single point in time from a sample or population to describe their characteristics or behaviors. For example, a researcher may conduct a cross-sectional study to investigate the prevalence of certain health conditions among a population, or to describe the attitudes and beliefs of a particular group.

Longitudinal Study

This involves collecting data over an extended period of time, often through repeated observations or surveys of the same group or population. Longitudinal studies can be used to track changes in attitudes, behaviors, or outcomes over time, or to investigate the effects of interventions or treatments.

Case Study

This involves an in-depth examination of a single individual, group, or situation to gain a detailed understanding of its characteristics or dynamics. Case studies are often used in psychology, sociology, and business to explore complex phenomena or to generate hypotheses for further research.

Survey Research

This involves collecting data from a sample or population through standardized questionnaires or interviews. Surveys can be used to describe attitudes, opinions, behaviors, or demographic characteristics of a group, and can be conducted in person, by phone, or online.

Observational Research

This involves observing and documenting the behavior or interactions of individuals or groups in a natural or controlled setting. Observational studies can be used to describe social, cultural, or environmental phenomena, or to investigate the effects of interventions or treatments.

Correlational Research

This involves examining the relationships between two or more variables to describe their patterns or associations. Correlational studies can be used to identify potential causal relationships or to explore the strength and direction of relationships between variables.

Data Analysis Methods

Descriptive research design data analysis methods depend on the type of data collected and the research question being addressed. Here are some common methods of data analysis for descriptive research:

Descriptive Statistics

This method involves analyzing data to summarize and describe the key features of a sample or population. Descriptive statistics can include measures of central tendency (e.g., mean, median, mode) and measures of variability (e.g., range, standard deviation).
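
As a minimal illustration, the following Python sketch computes these descriptive statistics for a small, made-up sample of customer ages; the numbers are hypothetical and only meant to show the calculations.

```python
# Descriptive statistics for a hypothetical sample of customer ages
import statistics

ages = [23, 27, 31, 31, 35, 38, 41, 41, 41, 52]

print("mean:", statistics.mean(ages))            # central tendency
print("median:", statistics.median(ages))
print("mode:", statistics.mode(ages))
print("range:", max(ages) - min(ages))           # variability
print("std dev:", round(statistics.stdev(ages), 2))
```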

Cross-tabulation

This method involves analyzing data by creating a table that shows the frequency of two or more variables together. Cross-tabulation can help identify patterns or relationships between variables.
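
A small pandas sketch of a cross-tabulation is shown below; the survey data and the column names (gender, preference) are invented for illustration.

```python
# Cross-tabulate two categorical survey variables from a hypothetical sample
import pandas as pd

df = pd.DataFrame({
    "gender":     ["F", "M", "F", "F", "M", "M", "F", "M"],
    "preference": ["online", "in-store", "online", "online",
                   "online", "in-store", "in-store", "in-store"],
})

# Frequency table showing how often the two variables occur together
print(pd.crosstab(df["gender"], df["preference"]))
```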

Content Analysis

This method involves analyzing qualitative data (e.g., text, images, audio) to identify themes, patterns, or trends. Content analysis can be used to describe the characteristics of a sample or population, or to identify factors that influence attitudes or behaviors.
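
The sketch below shows only the mechanical part of content analysis: tallying how often pre-defined themes appear in a handful of made-up open-ended responses. A real study would rely on a carefully developed coding scheme rather than simple keyword matching.

```python
# Toy theme-frequency count over hypothetical open-ended survey answers
from collections import Counter

responses = [
    "The staff were friendly but the wait was long",
    "Long wait, otherwise friendly service",
    "Prices are fair and the staff are friendly",
]

# Hypothetical mapping from keywords to themes
themes = {"friendly": "service quality", "wait": "waiting time", "price": "pricing"}

counts = Counter()
for text in responses:
    for keyword, theme in themes.items():
        if keyword in text.lower():
            counts[theme] += 1

print(counts)   # how many responses touch each theme
```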

Qualitative Coding

This method involves analyzing qualitative data by assigning codes to segments of data based on their meaning or content. Qualitative coding can be used to identify common themes, patterns, or categories within the data.
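
Here is a minimal sketch of qualitative coding in Python. The interview segments and the codes attached to them are entirely hypothetical; in practice the researcher assigns codes by reading each segment, and the script only stores those assignments and tallies them.

```python
# Hypothetical interview segments with researcher-assigned codes
from collections import Counter

coded_segments = [
    ("I felt supported by my manager during the transition", ["support", "management"]),
    ("There was little communication about the new schedule", ["communication"]),
    ("My manager checked in with me every week",              ["support", "management"]),
]

# Tally the codes to surface recurring themes and categories
code_counts = Counter(code for _, codes in coded_segments for code in codes)
print(code_counts)
```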

Visualization

This method involves creating graphs or charts to represent data visually. Visualization can help identify patterns or relationships between variables and make it easier to communicate findings to others.
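
A short matplotlib sketch follows; the test scores are fabricated and serve only to show how a histogram can reveal the shape of a distribution.

```python
# Histogram of hypothetical test scores
import matplotlib.pyplot as plt

scores = [55, 62, 64, 68, 70, 71, 73, 75, 75, 78, 80, 82, 85, 88, 91]

plt.hist(scores, bins=5, edgecolor="black")
plt.title("Distribution of test scores")
plt.xlabel("Score")
plt.ylabel("Number of students")
plt.show()
```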

Comparative Analysis

This method involves comparing data across different groups or time periods to identify similarities and differences. Comparative analysis can help describe changes in attitudes or behaviors over time or differences between subgroups within a population.
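
The pandas sketch below compares a made-up satisfaction rating across two survey years; the column names and values are hypothetical.

```python
# Compare mean satisfaction ratings across two hypothetical survey years
import pandas as pd

df = pd.DataFrame({
    "year":   [2022, 2022, 2022, 2023, 2023, 2023],
    "rating": [3.8, 4.0, 3.6, 4.2, 4.4, 4.1],
})

# Summary statistics per group highlight change over time
print(df.groupby("year")["rating"].agg(["mean", "std", "count"]))
```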

Applications of Descriptive Research Design

Descriptive research design has numerous applications in various fields. Some of the common applications of descriptive research design are:

  • Market research: Descriptive research design is widely used in market research to understand consumer preferences, behavior, and attitudes. This helps companies to develop new products and services, improve marketing strategies, and increase customer satisfaction.
  • Health research: Descriptive research design is used in health research to describe the prevalence and distribution of a disease or health condition in a population. This helps healthcare providers to develop prevention and treatment strategies.
  • Educational research: Descriptive research design is used in educational research to describe the performance of students, schools, or educational programs. This helps educators to improve teaching methods and develop effective educational programs.
  • Social science research: Descriptive research design is used in social science research to describe social phenomena such as cultural norms, values, and beliefs. This helps researchers to understand social behavior and develop effective policies.
  • Public opinion research: Descriptive research design is used in public opinion research to understand the opinions and attitudes of the general public on various issues. This helps policymakers to develop effective policies that are aligned with public opinion.
  • Environmental research: Descriptive research design is used in environmental research to describe the environmental conditions of a particular region or ecosystem. This helps policymakers and environmentalists to develop effective conservation and preservation strategies.

Descriptive Research Design Examples

Here are some real-world examples of descriptive research designs:

  • A restaurant chain wants to understand the demographics and attitudes of its customers. They conduct a survey asking customers about their age, gender, income, frequency of visits, favorite menu items, and overall satisfaction. The survey data is analyzed using descriptive statistics and cross-tabulation to describe the characteristics of their customer base.
  • A medical researcher wants to describe the prevalence and risk factors of a particular disease in a population. They conduct a cross-sectional study in which they collect data from a sample of individuals using a standardized questionnaire. The data is analyzed using descriptive statistics and cross-tabulation to identify patterns in the prevalence and risk factors of the disease.
  • An education researcher wants to describe the learning outcomes of students in a particular school district. They collect test scores from a representative sample of students in the district and use descriptive statistics to calculate the mean, median, and standard deviation of the scores. They also create visualizations such as histograms and box plots to show the distribution of scores.
  • A marketing team wants to understand the attitudes and behaviors of consumers towards a new product. They conduct a series of focus groups and use qualitative coding to identify common themes and patterns in the data. They also create visualizations such as word clouds to show the most frequently mentioned topics.
  • An environmental scientist wants to describe the biodiversity of a particular ecosystem. They conduct an observational study in which they collect data on the species and abundance of plants and animals in the ecosystem. The data is analyzed using descriptive statistics to describe the diversity and richness of the ecosystem.

How to Conduct Descriptive Research Design

To conduct a descriptive research design, you can follow these general steps:

  • Define your research question: Clearly define the research question or problem that you want to address. Your research question should be specific and focused to guide your data collection and analysis.
  • Choose your research method: Select the most appropriate research method for your research question. As discussed earlier, common research methods for descriptive research include surveys, case studies, observational studies, cross-sectional studies, and longitudinal studies.
  • Design your study: Plan the details of your study, including the sampling strategy, data collection methods, and data analysis plan. Determine the sample size and sampling method, decide on the data collection tools (such as questionnaires, interviews, or observations), and outline your data analysis plan.
  • Collect data: Collect data from your sample or population using the data collection tools you have chosen. Ensure that you follow ethical guidelines for research and obtain informed consent from participants.
  • Analyze data: Use appropriate statistical or qualitative analysis methods to analyze your data. As discussed earlier, common data analysis methods for descriptive research include descriptive statistics, cross-tabulation, content analysis, qualitative coding, visualization, and comparative analysis.
  • Interpret results: Interpret your findings in light of your research question and objectives. Identify patterns, trends, and relationships in the data, and describe the characteristics of your sample or population.
  • Draw conclusions and report results: Draw conclusions based on your analysis and interpretation of the data. Report your results in a clear and concise manner, using appropriate tables, graphs, or figures to present your findings. Ensure that your report follows accepted research standards and guidelines.

When to Use Descriptive Research Design

Descriptive research design is used in situations where the researcher wants to describe a population or phenomenon in detail. It is used to gather information about the current status or condition of a group or phenomenon without making any causal inferences. Descriptive research design is useful in the following situations:

  • Exploratory research: Descriptive research design is often used in exploratory research to gain an initial understanding of a phenomenon or population.
  • Identifying trends: Descriptive research design can be used to identify trends or patterns in a population, such as changes in consumer behavior or attitudes over time.
  • Market research: Descriptive research design is commonly used in market research to understand consumer preferences, behavior, and attitudes.
  • Health research: Descriptive research design is useful in health research to describe the prevalence and distribution of a disease or health condition in a population.
  • Social science research: Descriptive research design is used in social science research to describe social phenomena such as cultural norms, values, and beliefs.
  • Educational research: Descriptive research design is used in educational research to describe the performance of students, schools, or educational programs.

Purpose of Descriptive Research Design

The main purpose of descriptive research design is to describe and measure the characteristics of a population or phenomenon in a systematic and objective manner. It involves collecting data that describe the current status or condition of the population or phenomenon of interest, without manipulating or altering any variables.

The purpose of descriptive research design can be summarized as follows:

  • To provide an accurate description of a population or phenomenon: Descriptive research design aims to provide a comprehensive and accurate description of a population or phenomenon of interest. This can help researchers to develop a better understanding of the characteristics of the population or phenomenon.
  • To identify trends and patterns: Descriptive research design can help researchers to identify trends and patterns in the data, such as changes in behavior or attitudes over time. This can be useful for making predictions and developing strategies.
  • To generate hypotheses: Descriptive research design can be used to generate hypotheses or research questions that can be tested in future studies. For example, if a descriptive study finds a correlation between two variables, this could lead to the development of a hypothesis about the causal relationship between the variables.
  • To establish a baseline: Descriptive research design can establish a baseline or starting point for future research. This can be useful for comparing data from different time periods or populations.

Characteristics of Descriptive Research Design

Descriptive research design has several key characteristics that distinguish it from other research designs. Some of the main characteristics of descriptive research design are:

  • Objective : Descriptive research design is objective in nature, which means that it focuses on collecting factual and accurate data without any personal bias. The researcher aims to report the data objectively without any personal interpretation.
  • Non-experimental: Descriptive research design is non-experimental, which means that the researcher does not manipulate any variables. The researcher simply observes and records the behavior or characteristics of the population or phenomenon of interest.
  • Quantitative : Descriptive research design is quantitative in nature, which means that it involves collecting numerical data that can be analyzed using statistical techniques. This helps to provide a more precise and accurate description of the population or phenomenon.
  • Cross-sectional: Descriptive research design is often cross-sectional, which means that the data is collected at a single point in time. This can be useful for understanding the current state of the population or phenomenon, but it may not provide information about changes over time.
  • Large sample size: Descriptive research design typically involves a large sample size, which helps to ensure that the data is representative of the population of interest. A large sample size also helps to increase the reliability and validity of the data.
  • Systematic and structured: Descriptive research design involves a systematic and structured approach to data collection, which helps to ensure that the data is accurate and reliable. This involves using standardized procedures for data collection, such as surveys, questionnaires, or observation checklists.

Advantages of Descriptive Research Design

Descriptive research design has several advantages that make it a popular choice for researchers. Some of the main advantages of descriptive research design are:

  • Provides an accurate description: Descriptive research design is focused on accurately describing the characteristics of a population or phenomenon. This can help researchers to develop a better understanding of the subject of interest.
  • Easy to conduct: Descriptive research design is relatively easy to conduct and requires minimal resources compared to other research designs. It can be conducted quickly and efficiently, and data can be collected through surveys, questionnaires, or observations.
  • Useful for generating hypotheses: Descriptive research design can be used to generate hypotheses or research questions that can be tested in future studies. For example, if a descriptive study finds a correlation between two variables, this could lead to the development of a hypothesis about the causal relationship between the variables.
  • Large sample size : Descriptive research design typically involves a large sample size, which helps to ensure that the data is representative of the population of interest. A large sample size also helps to increase the reliability and validity of the data.
  • Can be used to monitor changes : Descriptive research design can be used to monitor changes over time in a population or phenomenon. This can be useful for identifying trends and patterns, and for making predictions about future behavior or attitudes.
  • Can be used in a variety of fields : Descriptive research design can be used in a variety of fields, including social sciences, healthcare, business, and education.

Limitations of Descriptive Research Design

Descriptive research design also has some limitations that researchers should consider before using this design. Some of the main limitations of descriptive research design are:

  • Cannot establish cause and effect: Descriptive research design cannot establish cause and effect relationships between variables. It only provides a description of the characteristics of the population or phenomenon of interest.
  • Limited generalizability: The results of a descriptive study may not be generalizable to other populations or situations. This is because descriptive research design often involves a specific sample or situation, which may not be representative of the broader population.
  • Potential for bias: Descriptive research design can be subject to bias, particularly if the researcher is not objective in their data collection or interpretation. This can lead to inaccurate or incomplete descriptions of the population or phenomenon of interest.
  • Limited depth: Descriptive research design may provide a superficial description of the population or phenomenon of interest. It does not delve into the underlying causes or mechanisms behind the observed behavior or characteristics.
  • Limited utility for theory development: Descriptive research design may not be useful for developing theories about the relationship between variables. It only provides a description of the variables themselves.
  • Relies on self-report data: Descriptive research design often relies on self-report data, such as surveys or questionnaires. This type of data may be subject to biases, such as social desirability bias or recall bias.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer


Introducing Research Designs

  • First Online: 10 November 2021


  • Stefan Hunziker & Michael Blankenagel (Hochschule Luzern, Wirtschaft/IFZ – Campus Zug-Rotkreuz, Zug, Switzerland)


We define research design as a combination of decisions within a research process. These decisions enable us to make a specific type of argument by answering the research question. It is the implementation plan for the research study that allows reaching the desired (type of) conclusion. Different research designs make it possible to draw different conclusions. These conclusions produce various kinds of intellectual contributions. As all kinds of intellectual contributions are necessary to increase the body of knowledge, no research design is inherently better than another, only more appropriate to answer a specific question.




Copyright information

© 2021 The Author(s), under exclusive license to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature

About this chapter

Hunziker, S., Blankenagel, M. (2021). Introducing Research Designs. In: Research Design in Business and Management. Springer Gabler, Wiesbaden. https://doi.org/10.1007/978-3-658-34357-6_1


Academic Success Center

Research Writing and Analysis


Purpose Statement Overview

The purpose statement succinctly explains (in no more than one page) the objectives of the research study. These objectives must directly address the problem and help close the stated gap.

Good purpose statements:

  • Flow from the problem statement and actually address the proposed problem
  • Are concise and clear
  • Answer the question ‘Why are you doing this research?’
  • Match the methodology (similar to research questions)
  • Have a ‘hook’ to get the reader’s attention
  • Set the stage by clearly stating, “The purpose of this (qualitative or quantitative) study is to ...”

In PhD studies, the purpose usually involves applying a theory to solve the problem. In other words, the purpose tells the reader what the goal of the study is, and what your study will accomplish, through which theoretical lens. The purpose statement also includes brief information about direction, scope, and where the data will come from.

A problem and gap in combination can lead to different research objectives, and hence, different purpose statements. In the example from above, where the problem was severe underrepresentation of female CEOs in Fortune 500 companies and the identified gap was the lack of research on male-dominated boards, one purpose might be to explore implicit biases in male-dominated boards through the lens of feminist theory. Another purpose may be to determine how board members rated female and male candidates on scales of competency, professionalism, and experience to predict which candidate will be selected for the CEO position. The first purpose may involve a qualitative ethnographic study in which the researcher observes board meetings and hiring interviews; the second may involve a quantitative regression analysis. The outcomes will be very different, so it’s important that you find out exactly how you want to address a problem and help close a gap!

The purpose of the study must not only align with the problem and address a gap; it must also align with the chosen research method. In fact, the DP/DM template requires you to name the  research method at the very beginning of the purpose statement. The research verb must match the chosen method. In general, quantitative studies involve “closed-ended” research verbs such as determine , measure , correlate , explain , compare , validate , identify , or examine ; whereas qualitative studies involve “open-ended” research verbs such as explore , understand , narrate , articulate [meanings], discover , or develop .

A qualitative purpose statement following the color-coded problem statement (assumed here to be low well-being among financial sector employees) + gap (lack of research on followers of mid-level managers), might start like this:

In response to declining levels of employee well-being, the purpose of the qualitative phenomenology was to explore and understand the lived experiences related to the well-being of the followers of novice mid-level managers in the financial services industry. The levels of follower well-being have been shown to correlate to employee morale, turnover intention, and customer orientation (Eren et al., 2013). A combined framework of Leader-Member Exchange (LMX) Theory and the employee well-being concept informed the research questions and supported the inquiry, analysis, and interpretation of the experiences of followers of novice managers in the financial services industry.

A quantitative purpose statement for the same problem and gap might start like this:

In response to declining levels of employee well-being, the purpose of the quantitative correlational study was to determine which leadership factors predict employee well-being of the followers of novice mid-level managers in the financial services industry. Leadership factors were measured by the Leader-Member Exchange (LMX) assessment framework  by Mantlekow (2015), and employee well-being was conceptualized as a compound variable consisting of self-reported turnover-intent and psychological test scores from the Mental Health Survey (MHS) developed by Johns Hopkins University researchers.

Both of these purpose statements reflect viable research strategies and both align with the problem and gap, so it’s up to the researcher to design a study in a manner that reflects personal preferences and desired study outcomes. Note that the quantitative research purpose incorporates operationalized concepts, or variables, that reflect the way the researcher intends to measure the key concepts under study, whereas the qualitative purpose statement isn’t about translating the concepts under study into variables but instead aims to explore and understand the core research phenomenon.

Best Practices for Writing your Purpose Statement

Always keep in mind that the dissertation process is iterative, and your writing, over time, will be refined as clarity is gradually achieved. Most of the time, greater clarity for the purpose statement and other components of the Dissertation is the result of a growing understanding of the literature in the field. As you increasingly master the literature you will also increasingly clarify the purpose of your study.

The purpose statement should flow directly from the problem statement. There should be clear and obvious alignment between the two and that alignment will get tighter and more pronounced as your work progresses.

The purpose statement should specifically address the reason for conducting the study, with emphasis on the word specifically. There should not be any doubt in your readers’ minds as to the purpose of your study. To achieve this level of clarity you will need to ensure there is no doubt in your own mind as to the purpose of your study.

Many researchers benefit from pausing their work during the research process when insight strikes and writing about it while it is still fresh in their minds. This can help you clarify all aspects of a dissertation, including its purpose.

Your Chair and your committee members can help you to clarify your study’s purpose so carefully attend to any feedback they offer.

The purpose statement should reflect the research questions and vice versa. The chain of alignment that began with the research problem description and continues on to the research purpose, research questions, and methodology must be respected at all times during dissertation development. You are to succinctly describe the overarching goal of the study that reflects the research questions. Each research question narrows and focuses the purpose statement. Conversely, the purpose statement encompasses all of the research questions.

Identify in the purpose statement the research method as quantitative, qualitative or mixed (i.e., “The purpose of this [qualitative/quantitative/mixed] study is to ...”).

Avoid the use of the phrase “research study” since the two words together are redundant.

Follow the initial declaration of purpose with a brief overview of how, with what instruments/data, with whom and where (as applicable) the study will be conducted. Identify variables/constructs and/or phenomenon/concept/idea. Since this section is to be a concise paragraph, emphasis must be placed on the word brief. However, adding these details will give your readers a very clear picture of the purpose of your research.

Developing the purpose section of your dissertation is usually not achieved in a single flash of insight. The process involves a great deal of reading to find out what other scholars have done to address the research topic and problem you have identified. The purpose section of your dissertation could well be the most important paragraph you write during your academic career, and every word should be carefully selected. Think of it as the DNA of your dissertation. Everything else you write should emerge directly and clearly from your purpose statement. In turn, your purpose statement should emerge directly and clearly from your research problem description. It is good practice to print out your problem statement and purpose statement and keep them in front of you as you work on each part of your dissertation in order to ensure alignment.

It is helpful to collect several dissertations similar to the one you envision creating. Extract the problem descriptions and purpose statements of other dissertation authors and compare them in order to sharpen your thinking about your own work.  Comparing how other dissertation authors have handled the many challenges you are facing can be an invaluable exercise. Keep in mind that individual universities use their own tailored protocols for presenting key components of the dissertation so your review of these purpose statements should focus on content rather than form.

Once your purpose statement is set it must be consistently presented throughout the dissertation. This may require some recursive editing because the way you articulate your purpose may evolve as you work on various aspects of your dissertation. Whenever you make an adjustment to your purpose statement you should carefully follow up on the editing and conceptual ramifications throughout the entire document.

In establishing your purpose you should NOT advocate for a particular outcome. Research should be done to answer questions not prove a point. As a researcher, you are to inquire with an open mind, and even when you come to the work with clear assumptions, your job is to prove the validity of the conclusions reached. For example, you would not say the purpose of your research project is to demonstrate that there is a relationship between two variables. Such a statement presupposes you know the answer before your research is conducted and promotes or supports (advocates on behalf of) a particular outcome. A more appropriate purpose statement would be to examine or explore the relationship between two variables.

Your purpose statement should not imply that you are going to prove something. You may be surprised to learn that we cannot prove anything in scholarly research for two reasons. First, in quantitative analyses, statistical tests calculate the probability that something is true rather than establishing it as true. Second, in qualitative research, the study can only purport to describe what is occurring from the perspective of the participants. Whether or not the phenomenon they are describing is true in a larger context is not knowable. We cannot observe the phenomenon in all settings and in all circumstances.

Writing your Purpose Statement

It is important to distinguish in your mind the differences between the Problem Statement and Purpose Statement.

The Problem Statement is why I am doing the research

The Purpose Statement is what type of research I am doing to fit or address the problem

The Purpose Statement includes:

  • Method of Study
  • Specific Population

Remember, as you are contemplating what to include in your purpose statement and then when you are writing it, the purpose statement is a concise paragraph that describes the intent of the study, and it should flow directly from the problem statement.  It should specifically address the reason for conducting the study, and reflect the research questions.  Further, it should identify the research method as qualitative, quantitative, or mixed.  Then provide a brief overview of how the study will be conducted, with what instruments/data collection methods, and with whom (subjects) and where (as applicable). Finally, you should identify variables/constructs and/or phenomenon/concept/idea.

Qualitative Purpose Statement

Creswell’s (2002) suggestions for writing purpose statements in qualitative research include using deliberate phrasing to alert the reader to the purpose statement. Verbs that indicate what will take place in the research and the use of non-directional language that does not suggest an outcome are key. A purpose statement should focus on a single idea or concept, with a broad definition of the idea or concept. How the concept was investigated should also be included, as well as the participants in the study and the locations for the research, to give the reader a sense of with whom and where the study took place.

Creswell (2003) advised the following script for purpose statements in qualitative research:

“The purpose of this qualitative_________________ (strategy of inquiry, such as ethnography, case study, or other type) study is (was? will be?) to ________________ (understand? describe? develop? discover?) the _________________(central phenomenon being studied) for ______________ (the participants, such as the individual, groups, organization) at __________(research site). At this stage in the research, the __________ (central phenomenon being studied) will be generally defined as ___________________ (provide a general definition)” (pg. 90).

Quantitative Purpose Statement

Creswell (2003) notes substantial differences between the purpose statements written for qualitative research and those written for quantitative research, particularly with respect to language and the inclusion of variables. The comparison of variables is often a focus of quantitative research, with the variables distinguishable by either their temporal order or how they are measured. As with qualitative research purpose statements, Creswell (2003) recommends the use of deliberate language to alert the reader to the purpose of the study, but quantitative purpose statements also include the theory or conceptual framework guiding the study, the variables that are being studied, and how they are related.

Creswell (2003) suggests the following script for drafting purpose statements in quantitative research:

“The purpose of this _____________________ (experiment? survey?) study is (was? will be?) to test the theory of _________________that _________________ (compares? relates?) the ___________(independent variable) to _________________________(dependent variable), controlling for _______________________ (control variables) for ___________________ (participants) at _________________________ (the research site). The independent variable(s) _____________________ will be generally defined as _______________________ (provide a general definition). The dependent variable(s) will be generally defined as _____________________ (provide a general definition), and the control and intervening variables(s), _________________ (identify the control and intervening variables) will be statistically controlled in this study” (pg. 97).

Sample Purpose Statements

  • The purpose of this qualitative study was to determine how participation in service-learning in an alternative school impacted students academically, civically, and personally.  There is ample evidence demonstrating the failure of schools for students at-risk; however, there is still a need to demonstrate why these students are successful in non-traditional educational programs like the service-learning model used at TDS.  This study was unique in that it examined one alternative school’s approach to service-learning in a setting where students not only serve, but faculty serve as volunteer teachers.  The use of a constructivist approach in service-learning in an alternative school setting was examined in an effort to determine whether service-learning participation contributes positively to academic, personal, and civic gain for students, and to examine student and teacher views regarding the overall outcomes of service-learning.  This study was completed using an ethnographic approach that included observations, content analysis, and interviews with teachers at The David School.
  • The purpose of this quantitative non-experimental cross-sectional linear multiple regression design was to investigate the relationship among early childhood teachers’ self-reported assessment of multicultural awareness as measured by responses from the Teacher Multicultural Attitude Survey (TMAS) and supervisors’ observed assessment of teachers’ multicultural competency skills as measured by the Multicultural Teaching Competency Scale (MTCS) survey. Demographic data such as number of multicultural training hours, years teaching in Dubai, curriculum program at current school, and age were also examined and their relationship to multicultural teaching competency. The study took place in the emirate of Dubai where there were 14,333 expatriate teachers employed in private schools (KHDA, 2013b).
  • The purpose of this quantitative, non-experimental study is to examine the degree to which stages of change, gender, acculturation level and trauma types predicts the reluctance of Arab refugees, aged 18 and over, in the Dearborn, MI area, to seek professional help for their mental health needs. This study will utilize four instruments to measure these variables: University of Rhode Island Change Assessment (URICA: DiClemente & Hughes, 1990); Cumulative Trauma Scale (Kira, 2012); Acculturation Rating Scale for Arabic Americans-II Arabic and English (ARSAA-IIA, ARSAA-IIE: Jadalla & Lee, 2013), and a demographic survey. This study will examine 1) the relationship between stages of change, gender, acculturation levels, and trauma types and Arab refugees’ help-seeking behavior, 2) the degree to which any of these variables can predict Arab refugee help-seeking behavior.  Additionally, the outcome of this study could provide researchers and clinicians with a stage-based model, TTM, for measuring Arab refugees’ help-seeking behavior and lay a foundation for how TTM can help target the clinical needs of Arab refugees. Lastly, this attempt to apply the TTM model to Arab refugees’ condition could lay the foundation for future research to investigate the application of TTM to clinical work among refugee populations.
  • The purpose of this qualitative, phenomenological study is to describe the lived experiences of LLM for 10 EFL learners in rural Guatemala and to utilize that data to determine how it conforms to, or possibly challenges, current theoretical conceptions of LLM. In accordance with Morse’s (1994) suggestion that a phenomenological study should utilize at least six participants, this study utilized semi-structured interviews with 10 EFL learners to explore why and how they have experienced the motivation to learn English throughout their lives. The methodology of horizontalization was used to break the interview protocols into individual units of meaning before analyzing these units to extract the overarching themes (Moustakas, 1994). These themes were then interpreted into a detailed description of LLM as experienced by EFL students in this context. Finally, the resulting description was analyzed to discover how these learners’ lived experiences with LLM conformed with and/or diverged from current theories of LLM.
  • The purpose of this qualitative, embedded, multiple case study was to examine how both parent-child attachment relationships are impacted by the quality of the paternal and maternal caregiver-child interactions that occur throughout a maternal deployment, within the context of dual-military couples. In order to examine this phenomenon, an embedded, multiple case study was conducted, utilizing an attachment systems metatheory perspective. The study included four dual-military couples who experienced a maternal deployment to Operation Iraqi Freedom (OIF) or Operation Enduring Freedom (OEF) when they had at least one child between 8 weeks-old to 5 years-old.  Each member of the couple participated in an individual, semi-structured interview with the researcher and completed the Parenting Relationship Questionnaire (PRQ). “The PRQ is designed to capture a parent’s perspective on the parent-child relationship” (Pearson, 2012, para. 1) and was used within the proposed study for this purpose. The PRQ was utilized to triangulate the data (Bekhet & Zauszniewski, 2012) as well as to provide some additional information on the parents’ perspective of the quality of the parent-child attachment relationship in regards to communication, discipline, parenting confidence, relationship satisfaction, and time spent together (Pearson, 2012). The researcher utilized the semi-structured interview to collect information regarding the parents' perspectives of the quality of their parental caregiver behaviors during the deployment cycle, the mother's parent-child interactions while deployed, the behavior of the child or children at time of reunification, and the strategies or behaviors the parents believe may have contributed to their child's behavior at the time of reunification. The results of this study may be utilized by the military, and by civilian providers, to develop proactive and preventive measures that both providers and parents can implement, to address any potential adverse effects on the parent-child attachment relationship, identified through the proposed study. The results of this study may also be utilized to further refine and understand the integration of attachment theory and systems theory, in both clinical and research settings, within the field of marriage and family therapy.


In the spotlight: Performance management that puts people first

In volatile times, companies are under outsize pressure to respond to economic, technological, and social changes. Effective performance management systems can be a powerful part of this response. They’re designed to help people get better in their work, and they offer clarity in career development and professional performance. And then there’s the big picture: companies that focus on their people’s performance are 4.2 times more likely to outperform their peers, realizing an average 30 percent higher revenue growth and experiencing attrition five percentage points lower (see sidebar, “About the research”). Companies that focus on their people and organizational health also reap dividends in culture, collaboration, and innovation—as well as sustained competitive performance (Alex Camp, Arne Gast, Drew Goldstein, and Brooke Weddle, “Organizational health is (still) the key to long-term performance,” McKinsey, February 12, 2024).

Today, company leaders lack full confidence in most performance management systems—despite these systems’ importance and value—citing fragmentation, the existence of informal or “shadow” systems, misalignment, and inconsistency as common challenges. What sort of systems fit the company’s needs? Should rewards focus on individual or team goals? Where are limited resources best spent?

About the research

The insights in this article draw from a comprehensive review of industry best practices, including the experiences of more than 30 global companies across sectors, as well as research by the McKinsey Global Institute (MGI) into how companies gain a competitive edge and deliver top-tier financial results. Specifically, MGI studied more than 1,800 companies with revenues greater than $100 million (Performance through people: Transforming human capital into competitive advantage, MGI, February 2, 2023). The article’s author team also completed a study of more than 50 companies’ performance management practices, aiming to provide a nuanced understanding of how organizations approach and execute performance management.

An understanding of the four basic elements of performance management—goal setting, performance reviews, ongoing development, and rewards—provides a foundation for answering these questions and more. Of course, the right performance management system will vary by organization. Leaders who embrace a fit-for-purpose design built on a proven set of core innovations can build motivational and meritocratic companies that attract and retain outstanding employees.

How leading companies approach performance management

Our research across a set of global companies found that despite widespread agreement about certain performance management best practices—such as offering regular feedback outside of an annual review—many companies remain stuck in old ways of working. There are many design choices that can determine the characteristics of a performance management system, but some are more critical than others (Exhibit 1). These decisions—and how they interact with each other—will help determine how the performance management system maps onto the company’s overarching strategy.

Goal setting

Two critical design decisions relate to goal setting: the number of performance management systems used and whether to prioritize individual or team performance goals.

Degree of differentiation. The simplest and best option for many organizations is a single performance management system to address the needs of all employees. However, in more-complex companies with several employee groups, more than one system might be necessary. Manufacturing companies, for instance, may employ three performance management systems with few commonalities: one for sales, in which sales agents are provided direct incentives for the number of goods sold; one for production, with a monthly rhythm focusing on improving core production KPIs; and one for executives, in which the focus might be related more to annual objectives and leadership behavior.

Considerations for these choices often revolve around the nature of the work and the ease of quantifying outputs. For roles in which performance can be easily measured through tangible metrics, such as sales and production, a system emphasizing quantifiable outcomes may be more suitable. On the other hand, for roles involving tasks that are less easily measured, such as those in R&D, a performance management system should be designed to accommodate the nuanced and less tangible aspects of their contributions.

The nucleus of performance. Many organizations have traditionally placed a strong emphasis on individual performance, rooted in the belief that individual accountability drives results. In recent years, there has been a noticeable shift toward recognizing the importance of the team in achieving overall organizational success.

At a large European online retailer, for instance, the focus of performance management has been put on the team rather than the individual. Goals are set for the team, feedback is given to the team, and the performance appraisal is conducted for the team. Example performance metrics for teams can include project completion timelines, cross-functional collaboration success, and the achievement of collective milestones. On an individual level, the company assesses performance using a sophisticated model that prescribes skills and behaviors for 14 job families, each with up to four hierarchies.

Another prominent company in the automotive industry underscores the team as the cornerstone of performance. The teams could be defined along both functional and organizational lines—such as the division or the business line—and the company linked the organizational lines’ performance to the individuals’ compensation.

Performance reviews

Performance reviews raise the question of how to balance the individual objectives and their appraisal with respect to the “what” and the “how,” as well as whether review responsibility should lie primarily with managers, committees, or a combination of both.

Performance formula: What versus how. The balance between setting objectives and assessing what employees accomplish and how they go about their work is the central focus here. To measure the “what,” reviews have traditionally used KPIs, concentrating on quantifiable metrics and specific targets and emphasizing measurable outcomes and achievements. (For more on metrics best practices and how they can help leaders avoid pitfalls in their performance management systems, see Raffaele Carpi, John Douglas, and Frédéric Gascon, “Performance management: Why keeping score is so important, and so hard,” McKinsey, October 4, 2017.)

However, for many roles and in many segments of the company, the work is complex, multifaceted, and fast-paced and can be difficult to capture with rather static KPIs. Consequently, many companies have turned to objectives and key results (OKRs) to link results to defined objectives. The objectives represent the qualitative, aspirational goals an individual or team aims to achieve, while the key results are the quantifiable metrics used to measure progress toward those objectives. The objectives provide context and direction, capturing the broader strategic intent behind the measurable key results.
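To make the split between objectives and key results concrete, here is a minimal, hypothetical Python sketch of how an OKR might be represented, with the objective carrying the qualitative intent and each key result carrying a quantifiable target. The class names, fields, and the example OKR are illustrative assumptions, not a description of any particular company’s system.

```python
from dataclasses import dataclass, field


@dataclass
class KeyResult:
    """A quantifiable metric that measures progress toward an objective."""
    description: str
    target: float
    current: float = 0.0

    def progress(self) -> float:
        """Fraction of the target achieved, capped at 1.0."""
        if self.target == 0:
            return 0.0
        return min(self.current / self.target, 1.0)


@dataclass
class Objective:
    """A qualitative, aspirational goal that gives its key results context."""
    statement: str
    key_results: list[KeyResult] = field(default_factory=list)

    def progress(self) -> float:
        """Average progress across all key results."""
        if not self.key_results:
            return 0.0
        return sum(kr.progress() for kr in self.key_results) / len(self.key_results)


# Illustrative team-level OKR (numbers are made up)
okr = Objective(
    statement="Make onboarding effortless for new enterprise customers",
    key_results=[
        KeyResult("Enterprise customers onboarded this quarter", target=40, current=26),
        KeyResult("Average onboarding satisfaction score (1-5)", target=4.5, current=4.1),
    ],
)
print(f"Objective progress: {okr.progress():.0%}")  # roughly 78%
```

In this sketch, overall progress is simply the average across key results; a real system might weight key results differently or track confidence alongside progress.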

Companies that explicitly focus a portion of performance reviews on the “how” consider qualities such as collaboration, communication, adaptability, and ethical decision making. Considering behavior and conduct, in particular, can help assess leaders whose teams’ outcomes are hard to measure—such as long-term projects, complex initiatives, or qualitative improvements that may not have easily quantifiable metrics. About three in five companies in our sample look at a mix of both what and how, which can equip managers with a more comprehensive understanding of not only tangible results but also the underlying approach and mindset that contributed to those outcomes.

Review responsibility. In structuring accountability for conducting performance reviews, companies tend to lean on managers, committees, or a combination of both.

Managers should play a central role, and their discretion should be a significant factor in performance assessments because they can judge the context in which an employee has been working. For example, when evaluating performance, it’s crucial to consider the headwinds and tailwinds that the business, team, or employee faced during the evaluation period. External factors, market conditions, and organizational dynamics can significantly affect an employee’s ability to achieve their goals, and considering them helps provide a fair and contextual assessment.

In this context, another design question emerges: whether to appraise employees against OKR fulfillment or against the effort they put into achieving the desired outcome. Particularly in many large digital players, OKRs are set as “moonshot” goals—objectives so ambitious they are difficult to achieve. Managers can help ensure that, at the end of the performance cycle, an employee is assessed against not only OKR fulfillment but also—and to an even greater degree—how hard they tried given the resources available to them.

Managers’ points of view, formed with knowledge of the circumstances that produced employees’ performance, produce richer assessments that are sensitive to context—given that managers work closely with their team members and have firsthand knowledge of the challenges, workloads, and specific situations that each employee encounters.

Committees, meanwhile, bring diverse perspectives and can mitigate biases that might arise from individual managers’ subjectivity. Committees can provide a checks-and-balances system, promoting consistency and standardization in the evaluation process.

A combination of these two approaches can be an effective solution. Senior managers and high performers across hierarchies could be discussed in committees, while the rest of the workforce could be evaluated by their direct managers. This integrated approach leverages the contextual insights of managers while also incorporating the diverse viewpoints and standardization that committees offer, particularly for more-senior or high-impact roles.

Regardless of the review responsibility structure, it’s worth noting that more and more managers, committees, and employees are using generative AI (gen AI) to aggregate and extract information to inform performance reviews. For example, some employees may struggle to define clear, specific, and measurable goals that align with their career aspirations; gen AI can help create a first draft and iterate based on their role, helping the employee focus on their specific growth areas as well as gauge improvement on an ongoing basis. Managers and committees, meanwhile, used to spend a lot of time gathering performance metrics from different sources and systems for employee evaluation. Gen AI can aggregate input from various sources into a consolidated format to provide managers with a more comprehensive starting point for reviews.
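As a rough illustration of that consolidation step, the following hypothetical Python sketch groups raw feedback snippets by source and formats them into a single brief that a manager, a committee, or a generative model could use as the starting point for a draft review. The snippet format, source names, and function are assumptions for illustration; the sketch deliberately stops short of calling any generative AI service.

```python
from collections import defaultdict


def consolidate_feedback(snippets: list[dict]) -> str:
    """Group raw feedback snippets by source and format them into one brief.

    Each snippet is expected to look like:
        {"source": "peer", "text": "Helped unblock the Q3 migration."}
    The output is a plain-text summary a reviewer (or a generative model)
    could use as the starting point for a draft performance review.
    """
    by_source: dict[str, list[str]] = defaultdict(list)
    for snippet in snippets:
        by_source[snippet["source"]].append(snippet["text"])

    sections = []
    for source in ("self", "peer", "manager", "metrics"):
        if source in by_source:
            bullets = "\n".join(f"- {text}" for text in by_source[source])
            sections.append(f"{source.upper()} INPUT:\n{bullets}")
    return "\n\n".join(sections)


# Example usage with made-up feedback
print(consolidate_feedback([
    {"source": "peer", "text": "Coordinated the cross-team launch calmly under pressure."},
    {"source": "manager", "text": "Missed two sprint commitments but recovered with a clear plan."},
    {"source": "metrics", "text": "Closed 37 of 40 planned story points per sprint on average."},
]))
```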

Beyond employees’ formal professional-development opportunities, their managers’ capability to set goals, appraise performance fairly and in a way that motivates, and provide feedback is one of the most critical success factors for an effective performance management system. As a result, many companies have pivoted to invest in focused capability building.

Ongoing development

Another key aspect to consider when designing a performance management system is the focus of the assessment: will it evaluate past performance, or will the emphasis be placed on creating an understanding and foundation for further growth?

A backward-looking assessment will focus on fulfillment of the “what” and “how” objectives to create a fair basis for ranking and related consequences. However, many companies are pivoting to complement this assessment or are even focusing entirely on a developmental appraisal. In this approach, the focus is on truly understanding the strengths and weaknesses of the individual as a basis for further development, capability building, and personal growth.

Against that backdrop, rather than concentrating solely on top performers, an inclusive developmental system should cater to the growth needs of employees across all levels and backgrounds. McKinsey research emphasizes the importance of ongoing development for all employees, including—crucially—efforts tailored specifically for women (see Women in the Workplace 2023, McKinsey, October 5, 2023) and other underrepresented groups (see Diversity matters even more: The case for holistic impact, McKinsey, December 5, 2023). Such development programs not only foster a more equitable culture but also help unlock the full potential of the entire workforce.

Traditionally, many companies have used relative ratings to compare and rank employees against one another, often resulting in a forced distribution or curve. Employees are placed into categories or tiers based on their relative performance, with a predetermined percentage falling into each category (for example, top 10 percent, middle 70 percent, and bottom 20 percent).
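To show the arithmetic behind a forced distribution, here is a small, hypothetical Python sketch that ranks employees by a raw score and assigns tiers using fixed cumulative cutoffs (the 10/70/20 split mentioned above). It is meant only to illustrate the mechanism, not to endorse forced ranking; the scores and tier labels are made up.

```python
def forced_distribution(scores: dict[str, float]) -> dict[str, str]:
    """Assign each employee to a tier under a 10/70/20 forced distribution.

    Employees are ranked by score (highest first); the first 10% of the
    ranking is labeled "top", the next 70% "middle", and the last 20% "bottom".
    """
    cutoffs = ((0.10, "top"), (0.80, "middle"), (1.00, "bottom"))
    ranked = sorted(scores, key=scores.get, reverse=True)
    total = len(ranked)
    tiers = {}
    for position, name in enumerate(ranked, start=1):
        covered = position / total  # cumulative share of the workforce so far
        tiers[name] = next(label for limit, label in cutoffs if covered <= limit)
    return tiers


# Ten made-up employees: one lands in "top", seven in "middle", two in "bottom"
scores = {f"emp_{i}": s for i, s in enumerate([91, 88, 84, 82, 80, 77, 75, 71, 60, 55])}
print(forced_distribution(scores))
```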

Many companies today are simplifying their ratings systems so employees understand where they stand while shifting toward development approaches tailored to individuals’ strengths and weaknesses. The goal is to identify areas for growth and provide targeted support to help employees enhance their capabilities and skills.

While assessing performance remains important, the emphasis should be on using those assessments as a starting point for identifying developmental opportunities, with an understanding of both strengths and weaknesses and the specific development needs to improve performance. The focus shifts from mere evaluation to understanding the underlying factors that contribute to an individual’s performance, be it skills gaps, mindsets, or environmental factors.

Rewards

Four reward categories—compensation, career progression, development opportunities, and recognition—remain the core pillars of an effective performance management system. Most leading companies provide individual rewards (as opposed to team- or corporate-driven ones), with equal relevance given to short- and long-term incentives, looking at impact holistically and balancing investment in all four reward categories.

Under certain circumstances, it may make sense to emphasize financial rewards, particularly in sales functions or other roles where monetary incentives are highly valued. Indeed, some organizations may double down on monetary compensation, offering significantly higher pay packages to their top performers, because money is seen as a key motivator in these roles.

In other cases, it may be more effective to take money off the table and emphasize nonfinancial rewards, such as recognition, flexibility, and career development opportunities. While base pay may remain the same across the firm, high performers can be rewarded with faster career progression, more recognition, and better development opportunities. A 2009 McKinsey survey found that “three noncash motivators—praise from immediate managers, leadership attention (for example, one-on-one conversations), and a chance to lead projects or task forces” were “no less or even more effective motivators than the three highest-rated financial incentives: cash bonuses, increased base pay, and stock or stock options.” Furthermore, “The survey’s top three nonfinancial motivators play critical roles in making employees feel that their companies value them, take their well-being seriously, and strive to create opportunities for career growth.” (See “Motivating people: Getting beyond money,” McKinsey Quarterly, November 1, 2009.) More than a decade later, McKinsey research found that managers and employees remain misaligned: specifically, employers overlook the relational elements—such as feeling valued by a manager and the organization and feeling a sense of belonging—relative to how important these factors are to employee retention (Exhibit 2). (See “‘Great Attrition’ or ‘Great Attraction’? The choice is yours,” McKinsey Quarterly, September 8, 2021.) Indeed, the importance of nonmonetary incentives represents a consistent theme in performance management research and inquiry.

Given the time and effort required to effectively implement nonfinancial rewards, it’s crucial for organizations to carefully consider how to deploy these rewards strategically across employee groups. The decision of where to place emphasis should align with the organization’s culture, values, and the specific workforce’s motivations.

It’s worth noting that companies focusing on team achievement over individual performance also tend to value praise of the team. Public recognition and praise for effective teamwork and joint accomplishments can foster a sense of unity, camaraderie, and motivation.

Things to get right

Across the global companies we observed, those with effective performance management systems shared a set of enabling factors. These factors are fairly intuitive, but they are hard to practice well. Done consistently, they can produce powerful results.

  • Ensure that performance management systems are agile. Systems should allow for goals to be easily updated so the workforce—and therefore the organization—can respond to quickly changing conditions. The processes themselves should also be agile. For instance, relationships and interactions between managers and employees should allow for coaching that is close to real time so employees are consistently being pushed in the right direction—and learning to create that momentum themselves.
  • Provide regular feedback. Annual reviews can create a bottleneck for managers and the C-suite. More regular performance conversations can be successful in a variety of formats; quarterly, weekly, and casual check-ins should supplement formal reviews. Conversations can cover both the “what” and the “how” of the work and be a source of ongoing coaching.

If reviews remain annual rather than becoming more frequent, top management may consider prioritizing their direct involvement in the evaluation process to keep a pulse on employee sentiment and progress. A leading financial institution in Europe chose this route and found it was able to build a strong capability-building program around a feedback culture that is unafraid of difficult conversations.

  • Establish an effective fact base. According to our research, only two in five companies use both upward and downward evaluation in individual performance reviews. To establish a more comprehensive fact base, organizations can implement robust 360° review processes that solicit feedback from an employee’s manager, peers, direct reports, and even customers or stakeholders outside the company (see the aggregation sketch after this list). Many leaders have found that 360° reviews offer a comprehensive understanding of an individual’s performance because such reviews consider perspectives from both those who are led and those who are in leadership roles.
  • Maintain rating and differentiation. Many companies have reassessed their approach to employee ratings and the subsequent differentiation of consequences. While some companies have eliminated ratings altogether, most companies have been evolving their systems to drive motivation, recognize and incentivize performance, and create a “talent currency.” This means a high performer from one division is considered by the organization to be of the same caliber as one from another division. Overall, leaders are pushing for simplification, such as moving from a seven-tier approach to a four-tier or even three-tier system. There is also a stronger link between ratings and outcomes, as well as a shift from forced distribution to distribution guidance.
  • Employ gen AI. Gen AI—the latest technology to change the business landscape—can be a tool to support select elements of performance management, such as setting goals and drafting performance reviews. A manager could use the technology to aggregate and synthesize input from different sources and to draft communications to and about employees more efficiently, freeing them to focus on the core value-driving parts of performance management and giving them more time for personal interactions with their employees, such as coaching and feedback. (For more, see “Four ways to start using generative AI in HR,” People and Organization Blog, blog post by Julian Kirchherr, Dana Maor, Kira Rupietta, and Kirsten Weerda, McKinsey, March 4, 2024.)
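Returning to the effective-fact-base point above, here is a minimal, hypothetical Python sketch of how 360° ratings might be aggregated: each reviewer group’s average counts once per competency, so a large group of peers does not drown out a single manager or customer. The group names, 1–5 scale, and equal weighting are assumptions for illustration only.

```python
from statistics import mean


def aggregate_360(ratings: dict[str, dict[str, list[float]]]) -> dict[str, float]:
    """Aggregate 360-degree ratings into one score per competency.

    `ratings` maps a reviewer group ("manager", "peers", "reports", "customers")
    to a mapping of competency -> list of 1-5 scores from that group.
    Each group's average counts once per competency, so a large peer group
    does not outweigh a single manager or customer.
    """
    group_means: dict[str, list[float]] = {}
    for per_group in ratings.values():
        for competency, scores in per_group.items():
            if scores:
                group_means.setdefault(competency, []).append(mean(scores))
    return {competency: round(mean(values), 2) for competency, values in group_means.items()}


# Example usage with made-up scores
print(aggregate_360({
    "manager":   {"collaboration": [4.0], "delivery": [3.5]},
    "peers":     {"collaboration": [4.5, 5.0, 4.0], "delivery": [4.0, 4.5]},
    "reports":   {"collaboration": [5.0, 4.5]},
    "customers": {"delivery": [4.0]},
}))
# -> {'collaboration': 4.42, 'delivery': 3.92}
```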

Getting started

Companies can get started by understanding where they are now. Specifically, they should assess their organizations’ current performance culture, including the level of adoption of the existing performance management system and its quality. Decision makers should then use the following three questions to check the health of their performance management efforts and outline their ambitions for performance management:

  • Are we getting the expected returns from the time invested in the performance management process, and does it drive higher performance and capabilities?
  • Does the current performance management system reflect the needs and context of this particular business or workforce segment?
  • Do we have a performance culture? (Hint: How frequent are employees’ coaching interactions? How clear and differentiated is feedback?)

Many traditional approaches to people management are unlikely to suffice in today’s top-performing organizations. The research-backed benefits of prioritizing people’s performance, from enhanced revenue growth to lower attrition rates, underscore the strategic importance of these systems. By embracing a fit-for-purpose design anchored in the key elements of performance management, organizations can position themselves as dynamic and adaptive employers.

Simon Gallot Lavallée is an associate partner in McKinsey’s Milan office, where Andrea Pedroni is a partner; Asmus Komm is a partner in the Hamburg office; and Amaia Noguera Lasa is a partner in the Madrid office.

The authors wish to thank Katharina Wagner, Brooke Weddle, and the many industry professionals who contributed to the development of this article.

Scholars Crossing

Doctoral Dissertations and Projects

Impact of Leadership Decisions on Police Officer Well-Being: A COVID-19 Response

Jason N. Spencer, Liberty University

Helms School of Government

Doctor of Philosophy in Criminal Justice (PhD)

Gregory Koehle

Keywords

Leadership, COVID-19, police officer well-being, Servant Leadership, decision-making

Disciplines

Leadership Studies | Public Affairs, Public Policy and Public Administration

Recommended Citation

Spencer, Jason N., "Impact of Leadership Decisions on Police Officer Well-Being: A COVID-19 Response" (2024). Doctoral Dissertations and Projects. 5555. https://digitalcommons.liberty.edu/doctoral/5555

The purpose of this qualitative dissertation research project is to determine the impact that law enforcement leaders, their leadership styles, and decision-making processes have on the well-being of police officers. This study sought to understand this impact by focusing on the perspectives of frontline police officers, detectives, and first-line supervisors from various law enforcement organizations in the Central Virginia Region, within the context of leadership decisions made in response to the COVID-19 pandemic. Using a constructivist grounded theory approach to research design, the researcher gathered rich, detailed data from 12 participants through an initial qualitative questionnaire followed by a semi-structured interview. Through constant comparative analysis of the data, the key themes of “unprecedented,” “job to do,” “family impact,” “negative impact,” and “positive impact” emerged. These themes were synthesized to form an emerging theory that addresses the research questions. The theory suggests that the processes law enforcement leaders use to make decisions affect police officer well-being, specifically in long-term, uncertain incidents like the global coronavirus pandemic. The study has implications for academic researchers and practitioners concerned with leadership in law enforcement organizations and police officer well-being. Future research recommendations are included, as are recommendations for law enforcement leaders facing future long-term, uncertain incidents like COVID-19.
