Formative vs. Summative Evaluations

Alita Joyce

July 28, 2019


In the user-experience profession, we preach iteration and evaluation. There are two types of evaluation, formative and summative, and where you are in the design process determines what type of evaluation you should conduct.

Formative evaluations focus on determining which aspects of the design work well or not, and why. These evaluations occur throughout a redesign and provide information to incrementally improve the interface.

Let’s say we’re designing the onboarding experience for a new, completely redesigned version of our mobile app. In the design process, we prototype a solution and then test it with (usually a few) users to see how usable it is. The study identifies several issues with our prototype, which are then fixed in a new design. This test is an example of formative evaluation — it helps designers identify what needs to be changed to improve the interface.

Formative evaluations of interfaces involve testing and changing the product, usually multiple times, and therefore are well-suited for the redesign process or while creating a new product.

In both cases, you iterate through the prototyping and testing steps until you are as ready for production as you’ll get (even more iterations would form an even better design, but you have to ship at some point). Thus, formative evaluations are meant to steer the design on the right path.

Summative evaluations describe how well a design performs, often compared to a benchmark such as a prior version of the design or a competitor. Unlike formative evaluations, whose goal is to inform the design process, summative evaluations involve getting the big picture and assessing the overall experience of a finished product. Summative evaluations occur less frequently than formative evaluations, usually right before or right after a redesign.

Let’s go back to our mobile-app example. Now that we’ve shipped the new mobile app, it is time to run a study and see how it stands in comparison with the previous version. We can gather the time on task and the success rates for the core app functionalities. Then we can compare these metrics against those obtained with the previous version of the app to see if there was any improvement. We will also save the results of this study to evaluate subsequent major versions of the app. This type of study is a summative evaluation, since it assesses the shipped product with the goal of tracking performance over time and ultimately calculating our return on investment. However, during this study, we might uncover some usability issues. We should make note of those issues and address them during our next design iteration.
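
To make the benchmark comparison concrete, here is a minimal sketch in Python of how the core metrics might be tallied. The task timings and success flags are invented for illustration; they are not from a real study.

from statistics import mean, stdev

old_times = [48.2, 61.0, 55.4, 72.9, 50.1, 66.3, 58.7, 63.5]   # seconds, old version
new_times = [39.8, 44.1, 52.0, 41.7, 47.3, 38.9, 45.5, 43.2]   # seconds, new version
old_success = [1, 1, 0, 1, 0, 1, 1, 0]   # 1 = task completed
new_success = [1, 1, 1, 1, 0, 1, 1, 1]

def summarize(label, times, success):
    rate = sum(success) / len(success)
    print(f"{label}: mean time {mean(times):.1f}s (sd {stdev(times):.1f}), "
          f"success {rate:.0%} (n={len(success)})")
    return mean(times), rate

old_t, old_r = summarize("Old version", old_times, old_success)
new_t, new_r = summarize("New version", new_times, new_success)

# The deltas are the summative result we would track release over release.
print(f"Change in time on task: {new_t - old_t:+.1f}s")
print(f"Change in success rate: {new_r - old_r:+.0%}")

In a real benchmark study you would also report confidence intervals and use a larger sample before drawing conclusions.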

Alternatively, another type of summative evaluation could compare our results with those obtained with one or more competitor apps or with known industry-wide data.

All summative evaluations paint an overview picture of the usability of a system. They are intended to serve as reference points so that you can determine whether you’re improving your own designs over time or beating out a competitor.

The ultimate summative evaluation is the go/no-go decision of whether to release a product. After all is said and done, is your design good enough to be inflicted on the public, or do we think that it will harm our brand so badly that it should never see the light of day? It’s actually rare for companies to have a formal process to kill off bad design, which may be why we encounter many releases that do more harm than good for a brand. If you truly embrace our proposition that brand is experience in the digital age, then consider a final summative evaluation before release.

In This Article:

  • Origin of the Terms
  • When Each Type of Evaluation Is Used
  • Research Methods for Formative vs. Summative Evaluations

The terms ‘formative’ and ‘summative’ evaluation were coined by Michael Scriven in 1967. These terms were presented in the context of instructional design and education theory, but are just as valuable for any sort of evaluation-based industry.

In the educational context, formative evaluations are ongoing and occur throughout the development of the course, while summative evaluations occur less frequently and are used to determine whether the program met its intended goals. The formative evaluations are used to steer the teaching, by testing whether content was understood or needs to be revisited, while summative evaluations assess the student’s mastery of the material.

Recall that formative and summative evaluations align with your place in the design process. Formative evaluations go with prototype and testing iterations throughout a redesign project, while summative evaluations are best for right before or right after a major redesign.

Great researchers begin their study by determining what question they’re trying to answer. Essentially, your research question determines the type of evaluation: questions about what to fix and why call for a formative evaluation, while questions about how well the design performs overall call for a summative evaluation. This mapping is descriptive, not prescriptive.

After it is clear which type of evaluation you will conduct, you have to determine which research method you should use. There is a common misconception that summative equals quantitative and formative equals qualitative; this is not the case.

Summative evaluations can be either qualitative or quantitative. The same is true for formative evaluations.

Although summative evaluations are often quantitative, they can be qualitative studies, too. For example, you might like to know where your product stands compared with your competition. You could hire a UX expert to do an expert review of your interface and a competitor’s. The expert review would use the 10 usability heuristics as well as the reviewer’s knowledge of UI and human behavior to produce a list of strengths and weaknesses for both your interface and your competitor’s. The study is summative because the overall interface is being evaluated with the goal of understanding whether the UX of your product stands up to the competition and whether a major redesign is warranted.

Additionally, formative evaluations aren’t always qualitative, although that is often the case. (Since it’s recommended to run an extended series of formative evaluations, it makes financial sense to use a cheaper qualitative study for each of them.) But sometimes big companies with large UX budgets and a high level of UX maturity might use quantitative studies for formative purposes in order to ensure that a change to one of their essential features will perform satisfactorily. For instance, before launching a new homepage design, a large company may want to run a quantitative test on the prototype to make sure that the proportion of people who will scroll below the fold is high enough.
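
As a rough illustration of such a quantitative formative check, the sketch below estimates the proportion of participants who scrolled below the fold, with a Wilson score interval. The counts and the 60% threshold are hypothetical.

from math import sqrt

n = 120          # participants who viewed the prototype homepage
scrolled = 87    # participants who scrolled below the fold

p = scrolled / n
z = 1.96         # 95% confidence
denom = 1 + z**2 / n
center = (p + z**2 / (2 * n)) / denom
margin = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
low, high = center - margin, center + margin

print(f"Scrolled below the fold: {p:.0%} (95% CI {low:.0%} to {high:.0%})")
# If the lower bound clears the team's threshold (say 60%), the prototype passes this check.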

Formative and summative evaluations correspond to different research goals. Formative evaluations are meant to steer the design on the correct path so that the final product has satisfactory user experience. They are a natural part of any iterative user-centered design process. Summative evaluations assess the overall usability of a product and are instrumental in tracking its usability over time and in comparing it with competitors.



Evaluative research: Methods, types, and examples (2024)

Master evaluative research with our guide, offering a detailed look at methods, types, and real-life examples for a complete understanding.

Product owners and user researchers often grapple with the challenge of gauging the success and impact of their products. 

The struggle lies in understanding what methods and types of evaluative research can provide meaningful insights. 

Empathy is crucial in this process, as identifying user needs and preferences requires a deep understanding of their experiences. 

In this article, we present a concise guide to evaluative research, offering practical methods, highlighting various types, and providing real-world examples. 

By delving into the realm of evaluative research, product owners and user researchers can navigate the complexities of product assessment with clarity and effectiveness.

What is evaluative research?

Evaluative research assesses the effectiveness and usability of products or services. It involves gathering user feedback to measure performance and identify areas for improvement. 

Product owners and user researchers employ evaluative research to make informed decisions. Users' experiences and preferences are actively observed and analyzed to enhance the overall quality of a product. 


This research method aids in identifying strengths and weaknesses, enabling iterative refinement. Through surveys, usability testing, and direct user interaction, evaluative research provides valuable insights. 

It guides product development, ensuring that user needs are met and expectations exceeded. For product owners and user researchers, embracing evaluative research is pivotal in creating successful, user-centric solutions.

Now that we understand what evaluative research entails, let's explore why it holds a pivotal role in product development and user research.

Why is evaluative research important?

Evaluative research holds immense importance for product owners and user researchers as it offers concrete data and feedback to gauge the success of a product or service. 

By identifying strengths and weaknesses, it becomes a powerful tool for informed decision-making, leading to product improvements and enhanced user experiences:

1) Unlocking product potential

Evaluative research stands as a crucial pillar in product development, offering invaluable insights into a product's effectiveness. By actively assessing user experiences, product owners gain a clearer understanding of what works and what needs improvement. 

This process facilitates targeted enhancements, ensuring that products align with user expectations and preferences. In essence, evaluative research empowers product owners to unlock their product's full potential, resulting in more satisfied users and increased market success.

2) Mitigating risk and reducing iteration cycles

For product owners navigating the competitive landscape, mitigating risks is paramount. Evaluative research serves as a proactive measure, identifying potential issues before they escalate. Through systematic testing and user feedback, product owners can pinpoint weaknesses, allowing for timely adjustments. 

This not only reduces the likelihood of costly post-launch issues but also streamlines iteration cycles. By addressing concerns early in the development phase, product owners can refine their offerings efficiently, staying agile in response to user needs and industry dynamics.

3) Enhancing user-centric design

User researchers play a pivotal role in shaping products that resonate with their intended audience. Evaluative research is the compass guiding user-centric design, ensuring that every iteration aligns with user expectations. By actively involving users in the assessment process, researchers gain firsthand insights into user behavior and preferences. 

This information is invaluable for crafting a seamless user experience, ultimately fostering loyalty and satisfaction. In the ever-evolving landscape of user preferences, ongoing evaluative research becomes a strategic tool for user researchers to consistently refine and elevate the design, fostering products that stand the test of time.

With the significance of evaluative research established, the next question is when to conduct it.

When should you conduct evaluative research?

Knowing the opportune moments to conduct evaluative research is vital. Whether in the early stages of development or after a product launch, this research helps pinpoint areas for enhancement:


Prototype stage

During the prototype stage, conducting evaluative research is crucial to gather insights and refine the product. 

Engage users with prototypes to identify usability issues, gauge user satisfaction, and validate design decisions. 

This early evaluation ensures that potential problems are addressed before moving forward, saving time and resources in the later stages of development. 

By actively involving users at this stage, product owners can enhance the user experience and align the product with user expectations.

Pre-launch stage

In the pre-launch stage, evaluative research becomes instrumental in assessing the final product's readiness. 

Evaluate user interactions, uncover any remaining usability concerns, and verify that the product meets user needs. 

This phase helps refine features, optimize user flows, and address any last-minute issues. 

By actively seeking user feedback before launch, product owners can make informed decisions to improve the overall quality and performance of the product, ultimately enhancing its market success.

Post-launch stage

After the product is launched, evaluative research remains essential for ongoing improvement. Monitor user behavior, gather feedback, and identify areas for enhancement. 

This active approach allows product owners to respond swiftly to emerging issues, optimize features based on real-world usage, and adapt to changing user preferences. 

Continuous evaluative research in the post-launch stage helps maintain a competitive edge, ensuring the product evolves in tandem with user expectations, thus fostering long-term success.

Now that we understand the timing of evaluative research, let's distinguish it from generative research and understand their respective roles.

Evaluative vs. generative research

While evaluative research assesses existing products, generative research focuses on generating new ideas. Understanding this dichotomy is crucial for product owners and user researchers to choose the right approach for the specific goals of their projects.


With the differentiation between evaluative and generative research clear, let's delve into the three primary types of evaluative research.

What are the 3 types of evaluative research?

Evaluative research can take various forms. The three main types include formative evaluation, summative evaluation, and outcome evaluation. 


Each type serves a distinct purpose, offering valuable insights throughout different stages of a product's life cycle:

1) Formative evaluation research

Formative evaluation research is a crucial phase in the development process, focusing on improving and refining a product or program. 

It involves gathering feedback early in the development cycle, allowing product owners to make informed adjustments. 

This type of research seeks to identify strengths and weaknesses, providing insights to enhance the user experience. 

Through surveys, usability testing, and focus groups, formative evaluation guides iterative development, ensuring that the end product aligns with user expectations and needs.

2) Summative evaluation research

Summative evaluation research occurs after the completion of a product or program, aiming to assess its overall effectiveness. 

This type of research evaluates the final outcome against predefined criteria and objectives. 

Summative research is particularly relevant for product owners seeking to understand the overall impact and success of their offering. 

Through methods like surveys, analytics, and performance metrics, it provides a comprehensive overview of the product's performance, helping stakeholders make informed decisions about future developments or investments.

3) Outcome evaluation research

Outcome evaluation research delves into the long-term effects and impact of a product or program on its users. 

It goes beyond immediate outcomes, assessing whether the intended goals and objectives have been met over time. 

Product owners can utilize this research to understand the sustained benefits and challenges associated with their offerings. 

By employing methods such as longitudinal studies and trend analysis, outcome evaluation research helps in crafting strategies for continuous improvement and adaptation based on evolving user needs and market dynamics.

Now that we've identified the types, let's explore five key evaluative research methods commonly employed by product owners and user researchers.

5 Key evaluative research methods

Product owners and user researchers utilize a variety of methods to conduct evaluative research. Choosing the right method depends on the specific goals and context of the research:

1) Surveys

Surveys represent a versatile evaluative research method for product owners and user researchers seeking valuable insights into user experiences. These structured questionnaires gather quantitative data, offering a snapshot of user opinions and preferences.

Types of surveys:

Customer satisfaction (CSAT) survey: measures users' satisfaction with a product or service through a straightforward rating scale, typically ranging from 1 to 5.


Net promoter score (NPS) survey: evaluates the likelihood of users recommending a product or service on a scale from 0 to 10, categorizing respondents as promoters, passives, or detractors.


Customer effort score (CES) survey: focuses on the ease with which users can accomplish tasks or resolve issues, providing insights into the overall user experience.

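
For concreteness, here is a small Python sketch showing one common way to score NPS and CSAT responses like those described above. The response values are invented, and the CSAT convention (counting 4s and 5s as satisfied) is one of several in use.

# NPS: 0-10 scale; 9-10 are promoters, 7-8 passives, 0-6 detractors.
# CSAT here: share of 4s and 5s on a 1-5 satisfaction scale (one common convention).
nps_responses = [10, 9, 7, 6, 8, 10, 3, 9, 9, 5, 8, 10]
csat_responses = [5, 4, 3, 5, 4, 2, 5, 4, 4, 5]

promoters = sum(r >= 9 for r in nps_responses)
detractors = sum(r <= 6 for r in nps_responses)
nps = 100 * (promoters - detractors) / len(nps_responses)
csat = 100 * sum(r >= 4 for r in csat_responses) / len(csat_responses)

print(f"NPS:  {nps:+.0f} (promoters {promoters}, detractors {detractors}, n={len(nps_responses)})")
print(f"CSAT: {csat:.0f}% satisfied (n={len(csat_responses)})")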

When to use surveys:

  • Product launches: Gauge initial user reactions and identify areas for improvement.
  • Post-interaction: Capture real-time feedback immediately after a user engages with a feature or completes a task.

2) Closed card sorting

Closed card sorting is a powerful method for organizing and evaluating information architecture. Participants categorize predefined content into predetermined groups, shedding light on users' mental models and expectations.


What closed card sorting entails:

  • Predefined categories: users sort content into categories predetermined by the researcher, allowing for targeted analysis.
  • Quantitative insights: provides quantitative data on how often participants correctly place items in designated categories.

When to employ closed card sorting:

  • Information architecture overhaul: ideal for refining and optimizing the structure of a product's content.
  • Prototyping phase: use early in the design process to inform the creation of prototypes based on user expectations.
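
A minimal sketch of how closed-card-sort data might be tallied follows: for each card, it reports how often participants placed it in the category the researchers intended. Card names, categories, and placements are hypothetical.

from collections import Counter, defaultdict

intended = {                      # the category the team intends each card to live in
    "Order history": "Account",
    "Saved addresses": "Account",
    "Return an item": "Support",
    "Contact us": "Support",
}

# One dict per participant: card -> category the participant chose
placements = [
    {"Order history": "Account", "Saved addresses": "Account",
     "Return an item": "Support", "Contact us": "Support"},
    {"Order history": "Account", "Saved addresses": "Checkout",
     "Return an item": "Account", "Contact us": "Support"},
    {"Order history": "Account", "Saved addresses": "Account",
     "Return an item": "Support", "Contact us": "Account"},
]

tallies = defaultdict(Counter)
for participant in placements:
    for card, category in participant.items():
        tallies[card][category] += 1

for card, counts in tallies.items():
    agreement = counts[intended[card]] / sum(counts.values())
    print(f"{card}: {agreement:.0%} placed in intended category "
          f"'{intended[card]}' ({dict(counts)})")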

3) Tree testing

Tree testing is a method specifically focused on evaluating the navigational structure of a product. Participants are presented with a text-based representation of the product's hierarchy and are tasked with finding specific items, highlighting areas where the navigation may fall short.


What tree testing involves:

  • Text-based navigation: users explore the product hierarchy without the influence of visual design, focusing solely on the structure.
  • Task-based evaluation: research participants complete tasks that reveal the effectiveness of the navigational structure.

When to opt for tree testing:

  • Pre-launch assessment: evaluate the effectiveness of the proposed navigation structure before a product release.
  • Redesign initiatives: use when considering changes to the existing navigational hierarchy.
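
The sketch below shows one way tree-test results might be summarized per task: the share of participants who found the correct node, and the share who did so without backtracking. Task names and records are made up for illustration.

from collections import defaultdict

# One record per attempt: (task, found the correct node, took a direct path)
records = [
    ("Find invoice settings", True, True),
    ("Find invoice settings", True, False),
    ("Find invoice settings", False, False),
    ("Cancel a subscription", True, True),
    ("Cancel a subscription", True, True),
    ("Cancel a subscription", False, False),
]

by_task = defaultdict(list)
for task, found, direct in records:
    by_task[task].append((found, direct))

for task, results in by_task.items():
    n = len(results)
    success = sum(found for found, _ in results) / n
    directness = sum(direct for _, direct in results) / n
    print(f"{task}: success {success:.0%}, direct success {directness:.0%} (n={n})")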

4) Usability testing

Usability testing is a cornerstone of evaluative research, providing direct insights into how users interact with a product. By observing users completing tasks, product owners and user researchers can identify pain points and areas for improvement.


What usability testing entails:

  • Task performance observation: Researchers observe users as they navigate through tasks, noting areas of ease and difficulty.
  • Think-aloud protocol: Participants vocalize their thoughts and feelings during the testing process, providing additional insights.

When to conduct usability testing:

  • Early design phases: Gather feedback on wireframes and prototypes to address fundamental usability concerns.
  • Post-launch iterations: Continuously improve the user experience based on real-world usage and feedback.

5) A/B testing

A/B testing, also known as split testing, is a method for comparing two versions of a webpage or product to determine which performs better. This method allows for data-driven decision-making by comparing user responses to different variations.


What A/B testing involves:

  • Variant comparison: Users are randomly assigned to either version A or version B, and their interactions are analyzed to identify the more effective option.
  • Quantitative metrics: Metrics such as click-through rates, conversion rates, and engagement help assess the success of each variant.

When to implement A/B testing:

  • Feature optimization: Compare different versions of a specific feature to determine which resonates better with users.
  • Continuous improvement: Use A/B testing regularly to refine and enhance the product based on user preferences and behavior.
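
As an illustration of the quantitative side of A/B testing, here is a sketch of a two-proportion z-test on conversion rate, using the normal approximation. Visitor and conversion counts are invented; a real experiment would also fix the metric and sample size in advance.

from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided
    return p_a, p_b, z, p_value

p_a, p_b, z, p_value = two_proportion_z(conv_a=412, n_a=9800, conv_b=468, n_b=9750)
print(f"A: {p_a:.2%}   B: {p_b:.2%}   z = {z:.2f}   p = {p_value:.3f}")
# A small p-value suggests the difference is unlikely to be noise; the effect
# size still needs to be large enough to matter before shipping the winner.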

Now that we're familiar with the methods, let's see some practical evaluative research question examples to guide your research efforts.

Evaluative research question examples

The formulation of well-crafted research questions is fundamental to the success of evaluative research. Clear and targeted questions guide the research process, ensuring that valuable insights are gained to inform decision-making and improvements:

Usability evaluation questions:

Usability evaluation is a critical aspect of understanding how users interact with a product or system. It involves assessing the ease with which users can complete tasks and the overall user experience. Here are essential evaluative research questions for usability:

How was your experience completing this task? (Gain insights into the overall user experience and identify any pain points or positive aspects encountered during the task.)

What technical difficulties did you experience while completing the task? (Pinpoint specific technical challenges users faced, helping developers address potential issues affecting the usability of the product.)

How intuitive was the navigation? (Assess the user-friendliness of the navigation system, ensuring that users can easily understand and move through the product.)

How would you prefer to do this action instead? (Encourage users to provide alternative methods or suggestions, offering valuable input for enhancing user interactions and task completion.)

Were there any unnecessary features? (Identify features that users find superfluous or confusing, streamlining the product and improving overall usability.)

How easy was the task to complete? (Gauge the perceived difficulty of the task, helping to refine processes and ensure they align with user expectations.)

Were there any features missing? (Identify any gaps in the product’s features, helping the development team prioritize enhancements based on user needs and expectations.)

Product survey research questions:

Product surveys allow for a broader understanding of user satisfaction, preferences, and the likelihood of recommending a product. Here are evaluative research questions for product surveys:

Would you recommend the product to your colleagues/friends? (Measure user satisfaction and gauge the likelihood of users advocating for the product within their network.)

How disappointed would you be if you could no longer use the feature/product? (Assess the emotional impact of potential disruptions or discontinuation, providing insights into the product's perceived value.)

How satisfied are you with the product/feature? (Quantify user satisfaction levels to understand overall sentiment and identify areas for improvement.)

What is the one thing you wish the product/feature could do that it doesn’t already? (Solicit specific user suggestions for improvements, guiding the product development roadmap to align with user expectations.)

What would make you cancel your subscription? (Identify potential pain points or deal-breakers that might lead users to discontinue their subscription, allowing for proactive mitigation strategies.)

With these questions in hand, let’s explore a case study on evaluative research.

Case study on evaluative research: Spotify


The case study discusses the redesign of Spotify's Your Library feature, a significant change that included the introduction of podcasts in 2020 and audiobooks in 2022. The goal was to accommodate content growth while minimizing negative effects on user experience. The study, presented at the CHI conference in 2023, emphasizes three key factors for the successful launch:

Early involvement: Data science and user research were involved early in the product development process to understand user behaviors and mental models. An ethnographic study explored users' experiences and attitudes towards library organization, revealing the Library as a personal space. Personal prototypes were used to involve users in the evaluation of new solutions, ensuring alignment with their mental models.

Evaluating safely at scale: To address the challenge of disruptive changes, the team employed a two-step evaluation process. First, a beta test allowed a small group of users to try the new experience and provide feedback. This observational data helped identify pain points and guided iterative improvements. Subsequently, A/B testing at scale assessed the impact on key metrics, using non-inferiority testing to ensure the new design was not unacceptably worse than the old one.
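
The case study does not publish its analysis code, but the idea of a non-inferiority check can be sketched as follows: the new design passes if the confidence interval for the difference in a key rate metric stays above an agreed margin of acceptable loss. This is not Spotify's actual analysis; the metric, counts, and 1-point margin below are all invented.

from math import sqrt

def diff_ci(success_new, n_new, success_old, n_old, z=1.96):
    p_new, p_old = success_new / n_new, success_old / n_old
    se = sqrt(p_new * (1 - p_new) / n_new + p_old * (1 - p_old) / n_old)
    diff = p_new - p_old
    return diff, diff - z * se, diff + z * se

margin = 0.01   # the new design may be at most 1 percentage point worse
diff, low, high = diff_ci(success_new=30450, n_new=50000,
                          success_old=30600, n_old=50000)

print(f"difference = {diff:+.3f}, 95% CI [{low:+.3f}, {high:+.3f}]")
print("non-inferior" if low > -margin else "cannot rule out a meaningful regression")

In practice the margin and the exact form of the test would be agreed on before the experiment runs.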

Mixed method studies: The study employed a combination of qualitative and quantitative methods throughout the process. This mixed methods approach provided a comprehensive understanding of user behaviors, motivations, and needs. Qualitative research, including interviews, diaries, and observational studies, was conducted alongside quantitative data collection to gain deeper insights at all stages.

More details can be found here: Minimizing change aversion through mixed methods research: a case study of redesigning Spotify’s Your Library

Ingrid Pettersson, Carl Fredriksson, Raha Dadgar, John Richardson, Lisa Shields, Duncan McKenzie

Best tools for evaluative research

Utilizing the right tools is instrumental in the success of evaluative research endeavors. From usability testing platforms to survey tools, having a well-equipped toolkit enhances the efficiency and accuracy of data collection.

Product owners and user researchers can leverage these tools to streamline processes and derive actionable insights, ultimately driving continuous improvement:

1) Blitzllama


Blitzllama stands out as a powerhouse tool for evaluative research, aiding product owners and user researchers in comprehensive testing. Its user-friendly interface facilitates the quick creation of surveys and usability tests, streamlining data collection. With real-time analytics, it offers immediate insights into user behavior. The tool's flexibility accommodates both moderated and unmoderated studies, making it an invaluable asset for product teams seeking actionable feedback to enhance user experiences.

2) Maze

Maze emerges as a top-tier choice for evaluative research, delivering a seamless user testing experience. Product owners and user researchers benefit from its intuitive platform, allowing the creation of interactive prototypes for realistic assessments. Maze excels in remote usability testing, enabling diverse user groups to provide valuable feedback. Its robust analytics provide a deep dive into user journeys, highlighting pain points and areas of improvement. With features like A/B testing and metrics tracking, Maze empowers teams to make informed decisions and iterate rapidly based on user insights.

3) Survicate


Survicate proves to be an essential tool in the arsenal of product owners and user researchers for evaluative research. This versatile survey and feedback platform simplifies the process of gathering user opinions and preferences. Survicate's customization options cater to specific research goals, ensuring targeted and relevant data collection. Real-time reporting and analytics enable quick interpretation of results, facilitating swift decision-making. Whether measuring user satisfaction or testing new features, Survicate’s agility makes it a valuable asset for teams aiming to refine products based on user feedback.

In conclusion, evaluative research equips product owners and user researchers with indispensable tools to enhance product effectiveness. By employing various methods such as usability testing and surveys, they gain valuable insights into user experiences. 

This knowledge empowers swift and informed decision-making, fostering continuous product improvement. The types of evaluative research, including formative, summative, and outcome evaluations, cater to diverse needs, ensuring a comprehensive understanding of user interactions. Real-world examples underscore the practical applications of these methodologies. 

In essence, embracing evaluative research is a proactive strategy for refining products, elevating user satisfaction, and ultimately achieving success in the dynamic landscape of user-centric design.

FAQs related to evaluative research

1) What is evaluative research and what are some examples?

Evaluative research assesses the effectiveness, efficiency, and impact of programs, policies, products, or interventions. For instance, a company may conduct evaluative research to determine how well a new website design functions for users or to gauge customer satisfaction with a revamped product. Other examples include measuring the success of educational programs or evaluating the effectiveness of healthcare interventions.

2) What are the goals of evaluative research?

The primary goals of evaluative research are to determine the strengths and weaknesses of a program, product, or policy and to provide actionable insights for improvement. Through evaluative research, product owners and UX researchers aim to understand how well their offerings meet user needs, identify areas for enhancement, and make informed decisions based on data-driven findings. Ultimately, the goal is to optimize outcomes and enhance user experiences.

3) What are the three types of evaluation research methods?

Evaluation research employs three main methods: formative evaluation, summative evaluation, and developmental evaluation. Formative evaluation focuses on assessing and improving a program or product during its development stages. Summative evaluation, on the other hand, evaluates the overall effectiveness and impact of a completed program or product. Developmental evaluation is particularly useful in complex or rapidly changing environments, emphasizing real-time feedback and adaptation to emergent circumstances.

4) What is the difference between evaluative and formative research?

Evaluative research and formative research serve distinct purposes in the product development and assessment process. Evaluative research examines the outcomes and impacts of a completed program, product, or policy to determine its effectiveness and inform decision-making for future iterations or improvements. In contrast, formative research focuses on gathering insights during the developmental stages to refine and enhance the program or product before its implementation. While evaluative research assesses the end results, formative research shapes the design and development process along the way.


Understanding Summative Evaluation: Definition, Benefits, and Best Practices

This article provides an overview of summative evaluation, including its definition, benefits, and best practices. Discover how summative evaluation can help you assess the effectiveness of your program or project, identify areas for improvement, and promote evidence-based decision-making. Learn about best practices for conducting summative evaluation and how to address common challenges and limitations.

Table of Contents

  • What is Summative Evaluation and Why is it Important?
  • Purpose and Goals of Summative Evaluation
  • Benefits of Summative Evaluation
  • Types of Summative Evaluation
  • Best Practices for Conducting Summative Evaluation
  • Examples of Summative Evaluation in Practice
  • Examples of Summative Evaluation Questions
  • Challenges and Limitations of Summative Evaluation
  • Ensuring Ethical Considerations in Summative Evaluation
  • Future Directions for Summative Evaluation Research and Practice


What is Summative Evaluation and Why is it Important?

Summative evaluation is a type of evaluation that is conducted at the end of a program or project, with the goal of assessing its overall effectiveness. The primary focus of summative evaluation is to determine whether the program or project achieved its goals and objectives. Summative evaluation is often used to inform decisions about future program or project development, as well as to determine whether or not to continue funding a particular program or project.

Summative evaluation is important for several reasons. First, it provides a comprehensive assessment of the overall effectiveness of a program or project, which can help to inform decisions about future development and implementation. Second, it can help to identify areas where improvements can be made in program delivery, such as in program design or implementation. Third, it can help to determine whether the program or project is a worthwhile investment, and whether it is meeting the needs of stakeholders.

In addition to these benefits, summative evaluation can also help to promote accountability and transparency in program or project implementation. By conducting a thorough evaluation of the program or project, stakeholders can be assured that their resources are being used effectively and that the program or project is achieving its intended outcomes.

Summative evaluation plays an important role in assessing the overall effectiveness of a program or project, and in informing decisions about future development and implementation. It is an essential tool for promoting accountability, transparency, and effectiveness in program or project implementation.

Purpose and Goals of Summative Evaluation

Summative evaluation is an approach to program evaluation that is conducted at the end of a program or project, with the goal of assessing its overall effectiveness. Here are some of the key purposes and goals of summative evaluation.

Purpose of Summative Evaluation

  • Assess effectiveness: Summative evaluation is focused on assessing the overall effectiveness of a program or project in achieving its intended goals and objectives.
  • Determine impact: Summative evaluation is used to determine the impact of a program or project on its intended audience or stakeholders, as well as on the broader community or environment.
  • Inform decision-making: Summative evaluation is used to inform decision-making about future program or project development, as well as resource allocation.

Goals of Summative Evaluation

  • Measure program outcomes: Summative evaluation is used to measure program outcomes, including the extent to which the program achieved its intended goals and objectives, and the impact of the program on its intended audience or stakeholders.
  • Assess program effectiveness: Summative evaluation is used to assess the overall effectiveness of a program, by comparing program outcomes to its intended goals and objectives, as well as to similar programs or initiatives.
  • Inform program improvement: Summative evaluation is used to inform program improvement by identifying areas where the program could be modified or improved in order to enhance its effectiveness.

Summative evaluation is a critical tool for assessing the overall effectiveness and impact of programs or projects, and for informing decision-making about future program or project development. By measuring program outcomes, assessing program effectiveness, and identifying areas for program improvement, summative evaluation can help to ensure that programs and projects are meeting their intended goals and making a positive impact on their intended audience or stakeholders.

Benefits of Summative Evaluation

Summative evaluation is an important tool for assessing the overall effectiveness of a program or project. Here are some of the benefits of conducting summative evaluation:

  • Provides a Comprehensive Assessment: Summative evaluation provides a comprehensive assessment of the overall effectiveness of a program or project, which can help to inform decisions about future development and implementation.
  • Identifies Areas for Improvement : Summative evaluation can help to identify areas where improvements can be made in program delivery, such as in program design or implementation.
  • Promotes Accountability and Transparency: Summative evaluation can help to promote accountability and transparency in program or project implementation, by ensuring that resources are being used effectively and that the program or project is achieving its intended outcomes.
  • Supports Evidence-Based Decision-Making : Summative evaluation provides evidence-based data and insights that can inform decisions about future development and implementation.
  • Demonstrates Impact : Summative evaluation can help to demonstrate the impact of a program or project, which can be useful for securing funding or support for future initiatives.
  • Increases Stakeholder Engagement : Summative evaluation can increase stakeholder engagement and ownership of the program or project being evaluated, by involving stakeholders in the evaluation process and soliciting their feedback.

Summative evaluation is an essential tool for assessing the overall effectiveness of a program or project, and for informing decisions about future development and implementation. It provides a comprehensive assessment of the program or project, identifies areas for improvement, promotes accountability and transparency, and supports evidence-based decision-making.

Types of Summative Evaluation

There are different types of summative evaluation that can be used to assess the overall effectiveness of a program or project. Here are some of the most common types of summative evaluation:

  • Outcome Evaluation: This type of evaluation focuses on the outcomes or results of the program or project, such as changes in behavior, knowledge, or attitudes. Outcome evaluation is often used to determine the effectiveness of an intervention or program in achieving its intended outcomes.
  • Impact Evaluation: This type of evaluation focuses on the broader impact of the program or project, such as changes in the community or society. Impact evaluation is often used to assess the overall impact of a program or project on the target population or community.
  • Cost-Benefit Evaluation: This type of evaluation focuses on the costs and benefits of the program or project, and is often used to determine whether the program or project is a worthwhile investment. Cost-benefit evaluation can help to determine whether the benefits of the program or project outweigh the costs.

The type of summative evaluation used will depend on the specific goals and objectives of the program or project being evaluated, as well as the resources and data available for evaluation. Each type of summative evaluation serves a specific purpose in assessing the overall effectiveness of a program or project, and should be tailored to the specific needs of the program or project being evaluated.
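
For the cost-benefit type in particular, the arithmetic usually comes down to discounting projected benefits and costs to present value and comparing them. The sketch below shows that calculation with hypothetical figures and a hypothetical 5% discount rate.

discount_rate = 0.05
benefits_by_year = [0, 40_000, 90_000, 120_000]    # projected benefits, year 0 to 3
costs_by_year = [150_000, 20_000, 20_000, 20_000]  # build cost up front, then upkeep

def present_value(flows, rate):
    return sum(value / (1 + rate) ** year for year, value in enumerate(flows))

pv_benefits = present_value(benefits_by_year, discount_rate)
pv_costs = present_value(costs_by_year, discount_rate)

print(f"PV of benefits: {pv_benefits:,.0f}")
print(f"PV of costs:    {pv_costs:,.0f}")
print(f"Benefit-cost ratio: {pv_benefits / pv_costs:.2f}")
print(f"Net present value:  {pv_benefits - pv_costs:,.0f}")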

Best Practices for Conducting Summative Evaluation

Conducting a successful summative evaluation requires careful planning and attention to best practices. Here are some best practices for conducting summative evaluation:

  • Clearly Define Goals and Objectives : Before conducting a summative evaluation, it is important to clearly define the goals and objectives of the program or project being evaluated. This will help to ensure that the evaluation is focused and relevant to the needs of stakeholders.
  • Use Valid and Reliable Measures: The measures used in a summative evaluation should be valid and reliable, in order to ensure that the results are accurate and meaningful. This may involve selecting or developing appropriate evaluation tools, such as surveys or assessments, and ensuring that they are properly administered.
  • Collect Data from Multiple Sources : Data for a summative evaluation should be collected from multiple sources, in order to ensure that the results are comprehensive and representative. This may involve collecting data from program participants, stakeholders, and other relevant sources.
  • Analyze and Interpret Results : Once the data has been collected, it is important to analyze and interpret the results in order to determine the overall effectiveness of the program or project. This may involve using statistical analysis or other techniques to identify patterns or trends in the data.
  • Use Results to Inform Future Development : The results of a summative evaluation should be used to inform future program or project development, in order to improve the effectiveness of the program or project. This may involve making changes to program design or delivery, or identifying areas where additional resources or support may be needed.

Conducting a successful summative evaluation requires careful planning, attention to detail, and a commitment to using the results to inform future development and improvement. By following best practices for conducting summative evaluation, stakeholders can ensure that their programs and projects are effective and relevant to the needs of their communities.

Examples of Summative Evaluation in Practice

Summative evaluation is an important tool for assessing the overall effectiveness of a program or project. Here are some examples of summative evaluation in practice:

  • Educational Programs : A school district may conduct a summative evaluation of a new educational program, such as a reading intervention program. The evaluation may focus on the program’s outcomes, such as improvements in reading skills, and may involve collecting data from multiple sources, such as teacher assessments, student tests, and parent surveys.
  • Health Interventions : A public health agency may conduct a summative evaluation of a health intervention, such as a vaccination campaign. The evaluation may focus on the impact of the intervention on health outcomes, such as reductions in disease incidence, and may involve collecting data from multiple sources, such as healthcare providers, patients, and community members.
  • Social Service Programs: A non-profit organization may conduct a summative evaluation of a social service program, such as a job training program for disadvantaged youth. The evaluation may focus on the impact of the program on outcomes such as employment rates and job retention, and may involve collecting data from multiple sources, such as program participants, employers, and community partners.
  • Technology Products : A software company may conduct a summative evaluation of a new technology product, such as a mobile app. The evaluation may focus on user satisfaction and effectiveness, and may involve collecting data from multiple sources, such as user surveys, user testing, and usage data.
  • Environmental Programs : An environmental organization may conduct a summative evaluation of a conservation program, such as a land protection initiative. The evaluation may focus on the impact of the program on environmental outcomes, such as the protection of natural habitats or the reduction of greenhouse gas emissions, and may involve collecting data from multiple sources, such as program participants, community members, and scientific data.

Summative evaluation can be used in a wide range of programs and initiatives to assess their overall effectiveness and inform future development and improvement.

Examples of Summative Evaluation Questions

Summative evaluation is an important tool for assessing the overall effectiveness of a program or project. Here are some examples of summative evaluation questions that can be used to guide the evaluation process:

  • Did the program or project achieve its intended outcomes and goals?
  • To what extent did the program or project meet the needs of its intended audience or stakeholders?
  • What were the most effective components of the program or project, and what areas could be improved?
  • What impact did the program or project have on its intended audience or stakeholders?
  • Was the program or project implemented effectively, and were resources used efficiently?
  • What unintended consequences or challenges arose during the program or project, and how were they addressed?
  • How does the program or project compare to similar initiatives or interventions in terms of effectiveness and impact?
  • What were the costs and benefits of the program or project, and were they reasonable given the outcomes achieved?
  • What lessons can be learned from the program or project, and how can they inform future development and improvement?

The questions asked during a summative evaluation are designed to provide a comprehensive understanding of the impact and effectiveness of the program or project. The answers to these questions can inform future programming and resource allocation decisions and help to identify areas for improvement. Overall, summative evaluation is an essential tool for assessing the overall impact and effectiveness of a program or project.

Challenges and Limitations of Summative Evaluation

Summative evaluation is an important tool for assessing the overall effectiveness of a program or project. However, there are several challenges and limitations that should be considered when conducting summative evaluation. Here are some of the most common challenges and limitations of summative evaluation:

  • Timing: Summative evaluation is typically conducted at the end of a program or project, which may limit the ability to make real-time improvements during the implementation phase.
  • Resource Constraints: Summative evaluation can be resource-intensive, requiring significant time, effort, and funding to collect and analyze data.
  • Bias: The data collected during summative evaluation may be subject to bias, such as social desirability bias, which can affect the accuracy and reliability of the evaluation results.
  • Difficulty of Measurement: Some outcomes of a program or project may be difficult to measure, which can make it challenging to assess the overall effectiveness of the program or project.
  • Difficulty of Generalization: The results of a summative evaluation may not be generalizable to other contexts or settings, which can limit the broader applicability of the evaluation findings.
  • Limited Stakeholder Involvement: Summative evaluation may not involve all stakeholders, which can limit the representation of diverse perspectives and lead to incomplete evaluation findings.
  • Limited Focus on Process: Summative evaluation typically focuses on outcomes and impact, which may not provide a full understanding of the program or project’s implementation process and effectiveness.

These challenges and limitations of summative evaluation should be considered when planning and conducting evaluations. By understanding these limitations, evaluators can work to mitigate potential biases and limitations and ensure that the evaluation results are accurate, reliable, and useful for program or project improvement.

Ensuring Ethical Considerations in Summative Evaluation

While conducting summative evaluation, it’s imperative to uphold ethical principles to ensure the integrity and fairness of the evaluation process. Ethical considerations are essential for maintaining trust with stakeholders, respecting the rights of participants, and safeguarding the integrity of evaluation findings. Here are key ethical considerations to integrate into summative evaluation:

Informed Consent: Ensure that participants are fully informed about the purpose, procedures, risks, and benefits of the evaluation before consenting to participate. Provide clear and accessible information, allowing participants to make voluntary and informed decisions about their involvement.

Confidentiality and Privacy: Safeguard the confidentiality and privacy of participants’ information throughout the evaluation process. Implement secure data management practices, anonymize data whenever possible, and only share findings in aggregate or de-identified formats to protect participants’ identities.

Respect for Diversity and Inclusion: Respect and embrace the diversity of participants, acknowledging their unique perspectives, backgrounds, and experiences. Ensure that evaluation methods are culturally sensitive and inclusive, avoiding biases and stereotypes, and accommodating diverse communication styles and preferences.

Avoiding Harm: Take proactive measures to minimize the risk of harm to participants and stakeholders throughout the evaluation process. Anticipate potential risks and vulnerabilities, mitigate them through appropriate safeguards and protocols, and prioritize the well-being and dignity of all involved.

Beneficence and Non-Maleficence: Strive to maximize the benefits of the evaluation while minimizing any potential harm or adverse effects. Ensure that evaluation activities contribute to the improvement of programs or projects, enhance stakeholders’ understanding and decision-making, and do not cause undue stress, discomfort, or harm.

Transparency and Accountability: Maintain transparency and accountability in all aspects of the evaluation, including its design, implementation, analysis, and reporting. Clearly communicate the evaluation’s objectives, methodologies, findings, and limitations, allowing stakeholders to assess its credibility and relevance.

Equitable Participation and Representation: Foster equitable participation and representation of diverse stakeholders throughout the evaluation process. Engage stakeholders in meaningful ways, valuing their input, perspectives, and contributions, and address power differentials to ensure inclusive decision-making and ownership of evaluation outcomes.

Continuous Reflection and Improvement: Continuously reflect on ethical considerations throughout the evaluation process, remaining responsive to emerging issues, challenges, and ethical dilemmas. Seek feedback from stakeholders, engage in dialogue about ethical concerns, and adapt evaluation approaches accordingly to uphold ethical standards.

By integrating these ethical considerations into summative evaluation practices, evaluators can uphold principles of integrity, respect, fairness, and accountability, promoting trust, credibility, and meaningful impact in program assessment and improvement. Ethical evaluation practices not only ensure compliance with professional standards and legal requirements but also uphold fundamental values of respect for human dignity, justice, and social responsibility.

Future Directions for Summative Evaluation Research and Practice

Summative evaluation is an important tool for assessing the overall effectiveness of a program or project. Here are some potential future directions for summative evaluation research and practice:

  • Incorporating Technology: Advances in technology have the potential to improve the efficiency and accuracy of summative evaluation. Future research could explore the use of artificial intelligence, machine learning, and other technologies to streamline data collection and analysis.
  • Enhancing Stakeholder Engagement: Future research could explore ways to enhance stakeholder engagement in summative evaluation, such as by involving stakeholders in the evaluation planning and implementation process.
  • Increasing Use of Mixed Methods: Future research could explore the use of mixed methods approaches in summative evaluation, such as combining qualitative and quantitative methods to gain a more comprehensive understanding of program or project effectiveness.
  • Addressing Equity and Inclusion: Future research could focus on addressing issues of equity and inclusion in summative evaluation, such as by ensuring that evaluation methods are sensitive to the needs and experiences of diverse stakeholders.
  • Addressing Complexity: Many programs and projects operate in complex and dynamic environments. Future research could explore ways to address this complexity in summative evaluation, such as by developing more adaptive and flexible evaluation methods.
  • Improving Integration with Formative Evaluation: Summative evaluation is typically conducted after a program or project has been completed, while formative evaluation is conducted during program or project implementation. Future research could explore ways to better integrate summative and formative evaluation, in order to promote continuous program improvement.

These future directions for summative evaluation research and practice have the potential to improve the effectiveness and relevance of summative evaluation, and to enhance its value as a tool for program and project assessment and improvement.



Chapter 2. Research Design

Getting Started

When I teach undergraduates qualitative research methods, the final product of the course is a “research proposal” that incorporates all they have learned and enlists the knowledge they have learned about qualitative research methods in an original design that addresses a particular research question. I highly recommend you think about designing your own research study as you progress through this textbook. Even if you don’t have a study in mind yet, it can be a helpful exercise as you progress through the course. But how to start? How can one design a research study before they even know what research looks like? This chapter will serve as a brief overview of the research design process to orient you to what will be coming in later chapters. Think of it as a “skeleton” of what you will read in more detail in later chapters. Ideally, you will read this chapter both now (in sequence) and later during your reading of the remainder of the text. Do not worry if you have questions the first time you read this chapter. Many things will become clearer as the text advances and as you gain a deeper understanding of all the components of good qualitative research. This is just a preliminary map to get you on the right road.


Research Design Steps

Before you even get started, you will need to have a broad topic of interest in mind. [1] In my experience, students can confuse this broad topic with the actual research question, so it is important to clearly distinguish the two. And the place to start is the broad topic. It might be, as was the case with me, working-class college students. But what about working-class college students? What’s it like to be one? Why are there so few compared to others? How do colleges assist (or fail to assist) them? What interested me was something I could barely articulate at first and went something like this: “Why was it so difficult and lonely to be me?” And by extension, “Did others share this experience?”

Once you have a general topic, reflect on why this is important to you. Sometimes we connect with a topic and we don’t really know why. Even if you are not willing to share the real underlying reason you are interested in a topic, it is important that you know the deeper reasons that motivate you. Otherwise, it is quite possible that at some point during the research, you will find yourself turned around facing the wrong direction. I have seen it happen many times. The reason is that the research question is not the same thing as the general topic of interest, and if you don’t know the reasons for your interest, you are likely to design a study answering a research question that is beside the point—to you, at least. And this means you will be much less motivated to carry your research to completion.

Researcher Note

Why do you employ qualitative research methods in your area of study? What are the advantages of qualitative research methods for studying mentorship?

Qualitative research methods are a huge opportunity to increase access, equity, inclusion, and social justice. Qualitative research allows us to engage and examine the uniquenesses/nuances within minoritized and dominant identities and our experiences with these identities. Qualitative research allows us to explore a specific topic, and through that exploration, we can link history to experiences and look for patterns or offer up a unique phenomenon. There’s such beauty in being able to tell a particular story, and qualitative research is a great mode for that! For our work, we examined the relationships we typically use the term mentorship for but didn’t feel that was quite the right word. Qualitative research allowed us to pick apart what we did and how we engaged in our relationships, which then allowed us to more accurately describe what was unique about our mentorship relationships, which we ultimately named liberationships ( McAloney and Long 2021) . Qualitative research gave us the means to explore, process, and name our experiences; what a powerful tool!

How do you come up with ideas for what to study (and how to study it)? Where did you get the idea for studying mentorship?

Coming up with ideas for research, for me, is kind of like Googling a question I have, not finding enough information, and then deciding to dig a little deeper to get the answer. The idea to study mentorship actually came up in conversation with my mentorship triad. We were talking in one of our meetings about our relationship—kind of meta, huh? We discussed how we felt that mentorship was not quite the right term for the relationships we had built. One of us asked what was different about our relationships and mentorship. This all happened when I was taking an ethnography course. During the next session of class, we were discussing auto- and duoethnography, and it hit me—let’s explore our version of mentorship, which we later went on to name liberationships ( McAloney and Long 2021 ). The idea and questions came out of being curious and wanting to find an answer. As I continue to research, I see opportunities in questions I have about my work or during conversations that, in our search for answers, end up exposing gaps in the literature. If I can’t find the answer already out there, I can study it.

—Kim McAloney, PhD, College Student Services Administration Ecampus coordinator and instructor

When you have a better idea of why you are interested in what it is that interests you, you may be surprised to learn that the obvious approaches to the topic are not the only ones. For example, let’s say you think you are interested in preserving coastal wildlife. And as a social scientist, you are interested in policies and practices that affect the long-term viability of coastal wildlife, especially around fishing communities. It would be natural then to consider designing a research study around fishing communities and how they manage their ecosystems. But when you really think about it, you realize that what interests you the most is how people whose livelihoods depend on a particular resource act in ways that deplete that resource. Or, even deeper, you contemplate the puzzle, “How do people justify actions that damage their surroundings?” Now, there are many ways to design a study that gets at that broader question, and not all of them are about fishing communities, although that is certainly one way to go. Maybe you could design an interview-based study that includes and compares loggers, fishers, and desert golfers (those who golf in arid lands that require a great deal of wasteful irrigation). Or design a case study around one particular example where resources were completely used up by a community. Without knowing what it is you are really interested in, what motivates your interest in a surface phenomenon, you are unlikely to come up with the appropriate research design.

These first stages of research design are often the most difficult, but have patience . Taking the time to consider why you are going to go through a lot of trouble to get answers will prevent a lot of wasted energy in the future.

There are distinct reasons for pursuing particular research questions, and it is helpful to distinguish between them.  First, you may be personally motivated.  This is probably the most important and the most often overlooked.   What is it about the social world that sparks your curiosity? What bothers you? What answers do you need in order to keep living? For me, I knew I needed to get a handle on what higher education was for before I kept going at it. I needed to understand why I felt so different from my peers and whether this whole “higher education” thing was “for the likes of me” before I could complete my degree. That is the personal motivation question. Your personal motivation might also be political in nature, in that you want to change the world in a particular way. It’s all right to acknowledge this. In fact, it is better to acknowledge it than to hide it.

There are also academic and professional motivations for a particular study.  If you are an absolute beginner, these may be difficult to find. We’ll talk more about this when we discuss reviewing the literature. Simply put, you are probably not the only person in the world to have thought about this question or issue and those related to it. So how does your interest area fit into what others have studied? Perhaps there is a good study out there of fishing communities, but no one has quite asked the “justification” question. You are motivated to address this to “fill the gap” in our collective knowledge. And maybe you are really not at all sure of what interests you, but you do know that [insert your topic] interests a lot of people, so you would like to work in this area too. You want to be involved in the academic conversation. That is a professional motivation and a very important one to articulate.

Practical and strategic motivations are a third kind. Perhaps you want to encourage people to take better care of the natural resources around them. If this is also part of your motivation, you will want to design your research project in a way that might have an impact on how people behave in the future. There are many ways to do this, one of which is using qualitative research methods rather than quantitative research methods, as the findings of qualitative research are often easier to communicate to a broader audience than the results of quantitative research. You might even be able to engage the community you are studying in the collecting and analyzing of data, something taboo in quantitative research but actively embraced and encouraged by qualitative researchers. But there are other practical reasons, such as getting “done” with your research in a certain amount of time or having access (or no access) to certain information. There is nothing wrong with considering constraints and opportunities when designing your study. Or maybe one of the practical or strategic goals is about learning competence in this area so that you can demonstrate the ability to conduct interviews and focus groups with future employers. Keeping that in mind will help shape your study and prevent you from getting sidetracked using a technique that you are less invested in learning about.

STOP HERE for a moment

I recommend you write a paragraph (at least) explaining your aims and goals. Include a sentence about each of the following: personal/political goals, practical or professional/academic goals, and practical/strategic goals. Think through how all of the goals are related and can be achieved by this particular research study . If they can’t, have a rethink. Perhaps this is not the best way to go about it.

You will also want to be clear about the purpose of your study. “Wait, didn’t we just do this?” you might ask. No! Your goals are not the same as the purpose of the study, although they are related. You can think about purpose lying on a continuum from “ theory ” to “action” (figure 2.1). Sometimes you are doing research to discover new knowledge about the world, while other times you are doing a study because you want to measure an impact or make a difference in the world.

Figure 2.1. Purpose types along the continuum: Basic Research, Applied Research, Summative Evaluation, Formative Evaluation, Action Research.

Basic research involves research that is done for the sake of “pure” knowledge—that is, knowledge that, at least at this moment in time, may not have any apparent use or application. Often, and this is very important, knowledge of this kind is later found to be extremely helpful in solving problems. So one way of thinking about basic research is that it is knowledge for which no use is yet known but will probably one day prove to be extremely useful. If you are doing basic research, you do not need to argue its usefulness, as the whole point is that we just don’t know yet what this might be.

Researchers engaged in basic research want to understand how the world operates. They are interested in investigating a phenomenon to get at the nature of reality with regard to that phenomenon. The basic researcher’s purpose is to understand and explain ( Patton 2002:215 ).

Basic research is interested in generating and testing hypotheses about how the world works. Grounded Theory is one approach to qualitative research methods that exemplifies basic research (see chapter 4). Most academic journal articles publish basic research findings. If you are working in academia (e.g., writing your dissertation), the default expectation is that you are conducting basic research.

Applied research in the social sciences is research that addresses human and social problems. Unlike basic research, the researcher has expectations that the research will help contribute to resolving a problem, if only by identifying its contours, history, or context. From my experience, most students have this as their baseline assumption about research. Why do a study if not to make things better? But this is a common mistake. Students and their committee members are often working with default assumptions here—the former thinking about applied research as their purpose, the latter thinking about basic research: “The purpose of applied research is to contribute knowledge that will help people to understand the nature of a problem in order to intervene, thereby allowing human beings to more effectively control their environment. While in basic research the source of questions is the tradition within a scholarly discipline, in applied research the source of questions is in the problems and concerns experienced by people and by policymakers” ( Patton 2002:217 ).

Applied research is less geared toward theory in two ways. First, its questions do not derive from previous literature. For this reason, applied research studies have much more limited literature reviews than those found in basic research (although they make up for this by having much more “background” about the problem). Second, it does not generate theory in the same way as basic research does. The findings of an applied research project may not be generalizable beyond the boundaries of this particular problem or context. The findings are more limited. They are useful now but may be less useful later. This is why basic research remains the default “gold standard” of academic research.

Evaluation research is research that is designed to evaluate or test the effectiveness of specific solutions and programs addressing specific social problems. We already know the problems, and someone has already come up with solutions. There might be a program, say, for first-generation college students on your campus. Does this program work? Are first-generation students who participate in the program more likely to graduate than those who do not? These are the types of questions addressed by evaluation research. There are two types of research within this broader frame, one more action-oriented than the other. In summative evaluation , an overall judgment about the effectiveness of a program or policy is made. Should we continue our first-gen program? Is it a good model for other campuses? Because the purpose of such summative evaluation is to measure success and to determine whether this success is scalable (capable of being generalized beyond the specific case), quantitative data is more often used than qualitative data. In our example, we might have “outcomes” data for thousands of students, and we might run various tests to determine if the better outcomes of those in the program are statistically significant so that we can generalize the findings and recommend similar programs elsewhere. Qualitative data in the form of focus groups or interviews can then be used for illustrative purposes, providing more depth to the quantitative analyses. In contrast, formative evaluation attempts to improve a program or policy (to help “form” or shape its effectiveness). Formative evaluations rely more heavily on qualitative data—case studies, interviews, focus groups. The findings are meant not to generalize beyond the particular but to improve this program. If you are a student seeking to improve your qualitative research skills and you do not care about generating basic research, formative evaluation studies might be an attractive option for you to pursue, as there are always local programs that need evaluation and suggestions for improvement. Again, be very clear about your purpose when talking through your research proposal with your committee.
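
To make the statistical step above concrete, here is a minimal sketch (in Python) of the kind of test a summative evaluation might run on outcomes data: a two-proportion z-test comparing graduation rates of program participants and non-participants. All counts and variable names are invented for illustration; a real evaluation would also report effect sizes and confidence intervals and check that the test's assumptions hold.

```python
# A minimal, illustrative two-proportion z-test on hypothetical graduation outcomes.
# All counts are invented; real data would come from institutional records.
from math import sqrt
from scipy.stats import norm

program_grads, program_n = 430, 500     # first-gen students in the program
control_grads, control_n = 780, 1000    # comparable students not in the program

p1 = program_grads / program_n          # graduation rate, participants
p2 = control_grads / control_n          # graduation rate, non-participants
pooled = (program_grads + control_grads) / (program_n + control_n)

# Pooled standard error for the difference in proportions
se = sqrt(pooled * (1 - pooled) * (1 / program_n + 1 / control_n))
z = (p1 - p2) / se
p_value = 2 * norm.sf(abs(z))           # two-sided p-value

print(f"program: {p1:.1%}, comparison: {p2:.1%}, z = {z:.2f}, p = {p_value:.4f}")
```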

Action research takes a further step beyond evaluation, even formative evaluation, to being part of the solution itself. This is about as far from basic research as one could get and definitely falls beyond the scope of “science,” as conventionally defined. The distinction between action and research is blurry, the research methods are often in constant flux, and the only “findings” are specific to the problem or case at hand and often are findings about the process of intervention itself. Rather than evaluate a program as a whole, action research often seeks to change and improve some particular aspect that may not be working—maybe there is not enough diversity in an organization or maybe women’s voices are muted during meetings and the organization wonders why and would like to change this. In a further step, participatory action research , those women would become part of the research team, attempting to amplify their voices in the organization through participation in the action research. As action research employs methods that involve people in the process, focus groups are quite common.

If you are working on a thesis or dissertation, chances are your committee will expect you to be contributing to fundamental knowledge and theory ( basic research ). If your interests lie more toward the action end of the continuum, however, it is helpful to talk to your committee about this before you get started. Knowing your purpose in advance will help avoid misunderstandings during the later stages of the research process!

The Research Question

Once you have written your paragraph and clarified your purpose and truly know that this study is the best study for you to be doing right now , you are ready to write and refine your actual research question. Know that research questions are often moving targets in qualitative research, that they can be refined up to the very end of data collection and analysis. But you do have to have a working research question at all stages. This is your “anchor” when you get lost in the data. What are you addressing? What are you looking at and why? Your research question guides you through the thicket. It is common to have a whole host of questions about a phenomenon or case, both at the outset and throughout the study, but you should be able to pare it down to no more than two or three sentences when asked. These sentences should both clarify the intent of the research and explain why this is an important question to answer. More on refining your research question can be found in chapter 4.

Chances are, you will have already done some prior reading before coming up with your interest and your questions, but you may not have conducted a systematic literature review. This is the next crucial stage to be completed before venturing further. You don’t want to start collecting data and then realize that someone has already beaten you to the punch. A review of the literature that is already out there will let you know (1) if others have already done the study you are envisioning; (2) if others have done similar studies, which can help you out; and (3) what ideas or concepts are out there that can help you frame your study and make sense of your findings. More on literature reviews can be found in chapter 9.

In addition to reviewing the literature for similar studies to what you are proposing, it can be extremely helpful to find a study that inspires you. This may have absolutely nothing to do with the topic you are interested in but is written so beautifully or organized so interestingly or otherwise speaks to you in such a way that you want to post it somewhere to remind you of what you want to be doing. You might not understand this in the early stages—why would you find a study that has nothing to do with the one you are doing helpful? But trust me, when you are deep into analysis and writing, having an inspirational model in view can help you push through. If you are motivated to do something that might change the world, you probably have read something somewhere that inspired you. Go back to that original inspiration and read it carefully and see how they managed to convey the passion that you so appreciate.

At this stage, you are still just getting started. There are a lot of things to do before setting forth to collect data! You’ll want to consider and choose a research tradition and a set of data-collection techniques that both help you answer your research question and match all your aims and goals. For example, if you really want to help migrant workers speak for themselves, you might draw on feminist theory and participatory action research models. Chapters 3 and 4 will provide you with more information on epistemologies and approaches.

Next, you have to clarify your “units of analysis.” What is the level at which you are focusing your study? Often, the unit in qualitative research methods is individual people, or “human subjects.” But your units of analysis could just as well be organizations (colleges, hospitals) or programs or even whole nations. Think about what it is you want to be saying at the end of your study—are the insights you are hoping to make about people or about organizations or about something else entirely? A unit of analysis can even be a historical period! Every unit of analysis will call for a different kind of data collection and analysis and will produce different kinds of “findings” at the conclusion of your study. [2]

Regardless of what unit of analysis you select, you will probably have to consider the “human subjects” involved in your research. [3] Who are they? What interactions will you have with them—that is, what kind of data will you be collecting? Before answering these questions, define your population of interest and your research setting. Use your research question to help guide you.

Let’s use an example from a real study. In Geographies of Campus Inequality , Benson and Lee ( 2020 ) list three related research questions: “(1) What are the different ways that first-generation students organize their social, extracurricular, and academic activities at selective and highly selective colleges? (2) how do first-generation students sort themselves and get sorted into these different types of campus lives; and (3) how do these different patterns of campus engagement prepare first-generation students for their post-college lives?” (3).

Note that we are jumping into this a bit late, after Benson and Lee have described previous studies (the literature review) and what is known about first-generation college students and what is not known. They want to know about differences within this group, and they are interested in ones attending certain kinds of colleges because those colleges will be sites where academic and extracurricular pressures compete. That is the context for their three related research questions. What is the population of interest here? First-generation college students . What is the research setting? Selective and highly selective colleges . But a host of questions remain. Which students in the real world, which colleges? What about gender, race, and other identity markers? Will the students be asked questions? Are the students still in college, or will they be asked about what college was like for them? Will they be observed? Will they be shadowed? Will they be surveyed? Will they be asked to keep diaries of their time in college? How many students? How many colleges? For how long will they be observed?

Recommendation

Take a moment and write down suggestions for Benson and Lee before continuing on to what they actually did.

Have you written down your own suggestions? Good. Now let’s compare those with what they actually did. Benson and Lee drew on two sources of data: in-depth interviews with sixty-four first-generation students and survey data from a preexisting national survey of students at twenty-eight selective colleges. Let’s ignore the survey for our purposes here and focus on those interviews. The interviews were conducted between 2014 and 2016 at a single selective college, “Hilltop” (a pseudonym ). They employed a “purposive” sampling strategy to ensure an equal number of male-identifying and female-identifying students as well as equal numbers of White, Black, and Latinx students. Each student was interviewed once. Hilltop is a selective liberal arts college in the northeast that enrolls about three thousand students.

How did your suggestions match up to those actually used by the researchers in this study? Is it possible your suggestions were too ambitious? Beginning qualitative researchers can often make that mistake. You want a research design that is both effective (it matches your question and goals) and doable. You will never be able to collect data from your entire population of interest (unless your research question is really so narrow as to be relevant to very few people!), so you will need to come up with a good sample. Define the criteria for this sample, as Benson and Lee did when deciding to interview an equal number of students by gender and race categories. Define the criteria for your sample setting too. Hilltop is typical for selective colleges. That was a research choice made by Benson and Lee. For more on sampling and sampling choices, see chapter 5.

Benson and Lee chose to employ interviews. If you also would like to include interviews, you have to think about what will be asked in them. Most interview-based research involves an interview guide, a set of questions or question areas that will be asked of each participant. The research question helps you create a relevant interview guide. You want to ask questions whose answers will provide insight into your research question. Again, your research question is the anchor you will continually come back to as you plan for and conduct your study. It may be that once you begin interviewing, you find that people are telling you something totally unexpected, and this makes you rethink your research question. That is fine. Then you have a new anchor. But you always have an anchor. More on interviewing can be found in chapter 11.

Let’s imagine Benson and Lee also observed college students as they went about doing the things college students do, both in the classroom and in the clubs and social activities in which they participate. They would have needed a plan for this. Would they sit in on classes? Which ones and how many? Would they attend club meetings and sports events? Which ones and how many? Would they participate themselves? How would they record their observations? More on observation techniques can be found in both chapters 13 and 14.

At this point, the design is almost complete. You know why you are doing this study, you have a clear research question to guide you, you have identified your population of interest and research setting, and you have a reasonable sample of each. You also have put together a plan for data collection, which might include drafting an interview guide or making plans for observations. And so you know exactly what you will be doing for the next several months (or years!). To put the project into action, there are a few more things necessary before actually going into the field.

First, you will need to make sure you have any necessary supplies, including recording technology. These days, many researchers use their phones to record interviews. Second, you will need to draft a few documents for your participants. These include informed consent forms and recruiting materials, such as posters or email texts, that explain what this study is in clear language. Third, you will draft a research protocol to submit to your institutional review board (IRB) ; this research protocol will include the interview guide (if you are using one), the consent form template, and all examples of recruiting material. Depending on your institution and the details of your study design, it may take weeks or even, in some unfortunate cases, months before you secure IRB approval. Make sure you plan on this time in your project timeline. While you wait, you can continue to review the literature and possibly begin drafting a section on the literature review for your eventual presentation/publication. More on IRB procedures can be found in chapter 8 and more general ethical considerations in chapter 7.

Once you have approval, you can begin!

Research Design Checklist

Before data collection begins, do the following:

  • Write a paragraph explaining your aims and goals (personal/political, practical/strategic, professional/academic).
  • Define your research question; write two to three sentences that clarify the intent of the research and why this is an important question to answer.
  • Review the literature for similar studies that address your research question or similar research questions; think laterally about some literature that might be helpful or illuminating but is not exactly about the same topic.
  • Find a written study that inspires you—it may or may not be on the research question you have chosen.
  • Consider and choose a research tradition and set of data-collection techniques that (1) help answer your research question and (2) match your aims and goals.
  • Define your population of interest and your research setting.
  • Define the criteria for your sample (How many? Why these? How will you find them, gain access, and acquire consent?).
  • If you are conducting interviews, draft an interview guide.
  •  If you are making observations, create a plan for observations (sites, times, recording, access).
  • Acquire any necessary technology (recording devices/software).
  • Draft consent forms that clearly identify the research focus and selection process.
  • Create recruiting materials (posters, email, texts).
  • Apply for IRB approval (proposal plus consent form plus recruiting materials).
  • Block out time for collecting data.

Notes

[1] At the end of the chapter, you will find a "Research Design Checklist" that summarizes the main recommendations made here.

[2] For example, if your focus is society and culture, you might collect data through observation or a case study. If your focus is individual lived experience, you are probably going to be interviewing some people. And if your focus is language and communication, you will probably be analyzing text (written or visual) (Marshall and Rossman 2016:16).

[3] You may not have any "live" human subjects. There are qualitative research methods that do not require interactions with live human beings; see chapter 16, "Archival and Historical Sources." But for the most part, you are probably reading this textbook because you are interested in doing research with people. The rest of the chapter will assume this is the case.

Glossary

Ethnography: One of the primary methodological traditions of inquiry in qualitative research, ethnography is the study of a group or group culture, largely through observational fieldwork supplemented by interviews. It is a form of fieldwork that may include participant-observation data collection. See chapter 14 for a discussion of deep ethnography.

Case study: A methodological tradition of inquiry and research design that focuses on an individual case (e.g., setting, institution, or sometimes an individual) in order to explore its complexity, history, and interactive parts. As an approach, it is particularly useful for obtaining a deep appreciation of an issue, event, or phenomenon of interest in its particular context.

Purpose: The controlling force in research; can be understood as lying on a continuum from basic research (knowledge production) to action research (effecting change).

Theory: In its most basic sense, a theory is a story we tell about how the world works that can be tested with empirical evidence. In qualitative research, we use the term in a variety of ways, many of which are different from how they are used by quantitative researchers. Although some qualitative research can be described as “testing theory,” it is more common to “build theory” from the data using inductive reasoning, as done in Grounded Theory. There are so-called “grand theories” that seek to integrate a whole series of findings and stories into an overarching paradigm about how the world works, and much smaller theories or concepts about particular processes and relationships. Theory can even be used to explain particular methodological perspectives or approaches, as in Institutional Ethnography, which is both a way of doing research and a theory about how the world works.

Basic research: Research that is interested in generating and testing hypotheses about how the world works.

Grounded Theory: A methodological tradition of inquiry and approach to analyzing qualitative data in which theories emerge from a rigorous and systematic process of induction. This approach was pioneered by the sociologists Glaser and Strauss (1967). The elements of theory generated from comparative analysis of data are, first, conceptual categories and their properties and, second, hypotheses or generalized relations among the categories and their properties – “The constant comparing of many groups draws the [researcher’s] attention to their many similarities and differences. Considering these leads [the researcher] to generate abstract categories and their properties, which, since they emerge from the data, will clearly be important to a theory explaining the kind of behavior under observation.” (36).

Qualitative research: An approach to research that is “multimethod in focus, involving an interpretative, naturalistic approach to its subject matter. This means that qualitative researchers study things in their natural settings, attempting to make sense of, or interpret, phenomena in terms of the meanings people bring to them. Qualitative research involves the studied use and collection of a variety of empirical materials – case study, personal experience, introspective, life story, interview, observational, historical, interactional, and visual texts – that describe routine and problematic moments and meanings in individuals’ lives.” (Denzin and Lincoln 2005:2). Contrast with quantitative research.

Applied research: Research that contributes knowledge that will help people to understand the nature of a problem in order to intervene, thereby allowing human beings to more effectively control their environment.

Evaluation research: Research that is designed to evaluate or test the effectiveness of specific solutions and programs addressing specific social problems. There are two kinds: summative and formative.

Summative evaluation research: Research in which an overall judgment about the effectiveness of a program or policy is made, often for the purpose of generalizing to other cases or programs. Generally uses qualitative research as a supplement to primary quantitative data analyses. Contrast formative evaluation research.

Formative evaluation research: Research designed to improve a program or policy (to help “form” or shape its effectiveness); relies heavily on qualitative research methods. Contrast summative evaluation research.

Action research: Research carried out at a particular organizational or community site with the intention of affecting change; often involves research subjects as participants of the study. See also participatory action research.

Participatory action research: Research in which both researchers and participants work together to understand a problematic situation and change it for the better.

Unit of analysis: The level of the focus of analysis (e.g., individual people, organizations, programs, neighborhoods).

Population of interest: The large group of interest to the researcher. Although it will likely be impossible to design a study that incorporates or reaches all members of the population of interest, this should be clearly defined at the outset of a study so that a reasonable sample of the population can be taken. For example, if one is studying working-class college students, the sample may include twenty such students attending a particular college, while the population is “working-class college students.” In quantitative research, clearly defining the general population of interest is a necessary step in generalizing results from a sample. In qualitative research, defining the population is conceptually important for clarity.

Pseudonym: A fictional name assigned to give anonymity to a person, group, or place. Pseudonyms are important ways of protecting the identity of research participants while still providing a “human element” in the presentation of qualitative data. There are ethical considerations to be made in selecting pseudonyms; some researchers allow research participants to choose their own.

Consent form: A requirement for research involving human participants; the documentation of informed consent. In some cases, oral consent or assent may be sufficient, but the default standard is a single-page, easy-to-understand form that both the researcher and the participant sign and date. Under federal guidelines, all researchers "shall seek such consent only under circumstances that provide the prospective subject or the representative sufficient opportunity to consider whether or not to participate and that minimize the possibility of coercion or undue influence. The information that is given to the subject or the representative shall be in language understandable to the subject or the representative. No informed consent, whether oral or written, may include any exculpatory language through which the subject or the representative is made to waive or appear to waive any of the subject's rights or releases or appears to release the investigator, the sponsor, the institution, or its agents from liability for negligence" (21 CFR 50.20). Your IRB office will be able to provide a template for use in your study.

Institutional review board (IRB): An administrative body established to protect the rights and welfare of human research subjects recruited to participate in research activities conducted under the auspices of the institution with which it is affiliated. The IRB is charged with the responsibility of reviewing all research involving human participants. The IRB is concerned with protecting the welfare, rights, and privacy of human subjects. The IRB has the authority to approve, disapprove, monitor, and require modifications in all research activities that fall within its jurisdiction as specified by both the federal regulations and institutional policy.

Introduction to Qualitative Research Methods Copyright © 2023 by Allison Hurst is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License , except where otherwise noted.


Evaluative research: Key methods, types, and examples

In the last chapter, we learned what generative research means and how it prepares you to build an informed solution for users. Now, let’s look at evaluative research for design and user experience (UX).


What is evaluative research?

Evaluative research is a research method used to evaluate a product or concept and collect data to help improve your solution. It offers many benefits, including identifying whether a product works as intended and uncovering areas for improvement.

Also known as evaluation research or program evaluation, this kind of research is typically introduced in the early phases of the design process to test existing or new solutions. It continues to be employed in an iterative way until the product becomes ‘final’. “With evaluation research, we’re making sure the value is there so that effort and resources aren’t wasted,” explains Nannearl LeKesia Brown , Product Researcher at Figma.

According to Mithila Fox , Senior UX Researcher at Stack Overflow, the evaluation research process includes various activities, like content testing , assessing accessibility or desirability. During UX research , evaluation can also be conducted on competitor products to understand what solutions work well in the current market before you start building your own.

“Even before you have your own mockups, you can start by testing competitors or similar products,” says Mithila. “There’s a lot we can learn from what is and isn't working about other products in the market.”

However, evaluation research doesn’t stop when a new product is launched. For the best user experience, solutions need to be monitored after release and improved based on customer feedback.


Why is evaluative research important?

Evaluative research is crucial in UX design and research, providing insights to enhance user experiences, identify usability issues, and inform iterative design improvements. It helps you:

  • Refine and improve UX: Evaluative research allows you to test a solution and collect valuable feedback to refine and improve the user experience. For example, you can A/B test the copy on your site to maximize engagement with users.
  • Identify areas of improvement: Findings from evaluative research are key to assessing what works and what doesn't. You might, for instance, run usability testing to observe how users navigate your website and identify pain points or areas of confusion.
  • Align your ideas with users: Research should always be a part of the design and product development process . By allowing users to evaluate your product early and often you'll know whether you're building the right solution for your audience.
  • Get buy-in: The insights you get from this type of research can demonstrate the effectiveness and impact of your project. Show this information to stakeholders to get buy-in for future projects.

Evaluative vs. Generative research

The difference between generative research and evaluative research lies in their focus: generative methods investigate user needs for new solutions, while evaluative research assesses and validates existing designs for improvements.

Generative and evaluative research are both valuable decision-making tools in the arsenal of a researcher. They should be similarly employed throughout the product development process as they both help you get the evidence you need.

When creating the research plan , study the competitive landscape, target audience, needs of the people you’re building for, and any existing solutions. Depending on what you need to find out, you’ll be able to determine if you should run generative or evaluative research.

Mithila explains the benefits of using both research methodologies: “Generative research helps us deeply understand our users and learn their needs, wants, and challenges. On the other hand, evaluative research helps us test whether the solutions we've come up with address those needs, wants, and challenges.”

Use generative research to bring forth new ideas during the discovery phase. And use evaluation research to test and monitor the product before and after launch.

The two types of evaluative research

There are two types of evaluative studies you can tap into: summative and formative research. Although summative evaluations are often quantitative, they can also be part of qualitative research.

Summative evaluation research

A summative evaluation helps you understand how a design performs overall. It’s usually done at the end of the design process to evaluate the final design’s usability or detect overlooked issues. You can also use a summative evaluation to benchmark your new solution against a prior version or a competitor’s product and judge whether the finished product meets its goals. Summative evaluation also supports outcome-focused evaluation, assessing impact and effectiveness for specific outcomes—for example, how design influences conversion.
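
As an illustrative sketch of the benchmarking idea, the snippet below compares hypothetical time-on-task measurements from a redesigned flow against the prior version using Welch's t-test; all values are invented, and a real study would also look at success rates, satisfaction scores, and confidence intervals.

```python
# Hypothetical time-on-task samples (seconds) from a summative benchmark study.
from scipy.stats import ttest_ind

old_design = [48, 55, 61, 44, 70, 52, 66, 58, 49, 63]   # prior version
new_design = [39, 42, 50, 36, 57, 41, 45, 48, 38, 44]   # redesigned version

# Welch's t-test (does not assume equal variances); t is negative when the new design is faster
t_stat, p_value = ttest_ind(new_design, old_design, equal_var=False)

# Relative reduction in mean time-on-task
improvement = 1 - (sum(new_design) / len(new_design)) / (sum(old_design) / len(old_design))
print(f"mean time-on-task improved by {improvement:.0%} (t = {t_stat:.2f}, p = {p_value:.4f})")
```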

Formative evaluation research

On the other hand, formative research is conducted early and often during the design process to test and improve a solution before arriving at the final design. Running a formative evaluation allows you to test and identify issues in the solutions as you’re creating them, and improve them based on user feedback.

TL;DR: Run formative research to test and evaluate solutions during the design process, and conduct a summative evaluation at the end to evaluate the final product.

Looking to conduct UX research? Check out our list of the top UX research tools to run an effective research study.

5 Key evaluative research methods

“Evaluation research can start as soon as you understand your user’s needs,” says Mithila. Here are five typical UX research methods to include in your evaluation research process:

User surveys

User surveys can provide valuable quantitative insights into user preferences, satisfaction levels, and attitudes toward a design or product. By gathering a large amount of data efficiently, surveys can identify trends, patterns, and user demographics to make informed decisions and prioritize design improvements.

Closed card sorting

Closed card sorting helps evaluate the effectiveness and intuitiveness of an existing or proposed navigation structure. By analyzing how participants group and categorize information, researchers can identify potential issues, inconsistencies, or gaps in the design's information architecture, leading to improved navigation and findability.
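
One way to make that analysis concrete is to tabulate where each card was placed and how strongly participants agreed. Below is a minimal sketch using invented data and a simple agreement score; card-sorting tools typically export richer similarity matrices, but the underlying idea is the same.

```python
# Illustrative sketch: summarizing a closed card sort as card-by-category placements.
# Participant data below is invented; a real study would export this from a card-sorting tool.
from collections import Counter, defaultdict

# Each participant's sort: card -> category they placed it in
sorts = [
    {"Change password": "Account", "Invoices": "Billing", "Contact us": "Support"},
    {"Change password": "Account", "Invoices": "Account", "Contact us": "Support"},
    {"Change password": "Account", "Invoices": "Billing", "Contact us": "Billing"},
]

placements = defaultdict(Counter)
for sort in sorts:
    for card, category in sort.items():
        placements[card][category] += 1

# Agreement = share of participants who put the card in its most popular category
for card, counts in placements.items():
    top_category, top_count = counts.most_common(1)[0]
    agreement = top_count / len(sorts)
    print(f"{card}: most often placed in '{top_category}' ({agreement:.0%} agreement)")
```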

Tree testing

Tree testing , also known as reverse card sorting, is a research method used to evaluate the findability and effectiveness of information architecture. Participants are given a text-based representation of the website's navigation structure (without visual design elements) and are asked to locate specific items or perform specific tasks by navigating through the tree structure. This method helps identify potential issues such as confusing labels, unclear hierarchy, or navigation paths that hinder users' ability to find information.
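
Two metrics usually summarize tree-test results: task success (did the participant reach the correct node?) and directness (did they get there without backtracking?). The sketch below computes both from invented task results; the field names are assumptions for illustration, not any specific tool's export format.

```python
# Illustrative sketch: scoring tree-test tasks by success rate and directness.
tasks = [
    # (task, participants, correct destinations reached, direct successes without backtracking)
    ("Find pricing for the team plan", 40, 28, 19),
    ("Locate the data export setting", 40, 22, 9),
]

for name, n, successes, direct in tasks:
    success_rate = successes / n
    directness = direct / successes if successes else 0.0
    print(f"{name}: success {success_rate:.0%}, directness {directness:.0%}")
```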

Usability testing

Usability testing involves observing and collecting qualitative and/or quantitative data on how users interact with a design or product. Participants are given specific tasks to perform while their interactions, feedback, and difficulties are recorded. This approach helps identify usability issues, areas of confusion, or pain points in the user experience.

A/B testing

A/B testing , also known as split testing, is an evaluative research approach that involves comparing two or more versions of a design or feature to determine which one performs better in achieving a specific objective. Users are randomly assigned to different variants, and their interactions, behavior, or conversion rates are measured and analyzed. A/B testing allows researchers to make data-driven decisions by quantitatively assessing the impact of design changes on user behavior, engagement, or conversion metrics.
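
As a hedged illustration of the "measured and analyzed" step, the snippet below runs a chi-square test on invented conversion counts from two variants to check whether the observed difference is likely due to chance; real analyses would also consider effect size, test duration, and corrections for multiple comparisons.

```python
# Illustrative analysis of A/B test results: do conversion rates differ between variants?
# Counts are invented for the example.
from scipy.stats import chi2_contingency

variant_a = {"converted": 120, "not_converted": 2880}   # control
variant_b = {"converted": 150, "not_converted": 2850}   # new design

table = [
    [variant_a["converted"], variant_a["not_converted"]],
    [variant_b["converted"], variant_b["not_converted"]],
]

chi2, p_value, dof, expected = chi2_contingency(table)

rate_a = variant_a["converted"] / sum(variant_a.values())
rate_b = variant_b["converted"] / sum(variant_b.values())
print(f"A: {rate_a:.2%}, B: {rate_b:.2%}, chi2 = {chi2:.2f}, p = {p_value:.4f}")
```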

This is the value of having a UX research plan before diving into the research approach itself. If we were able to answer the evaluative questions we had, in addition to figuring out if our hypotheses were valid (or not), I’d count that as a successful evaluation study. Ultimately, research is about learning in order to make more informed decisions—if we learned, we were successful.

Nannearl LeKesia Brown, Product Researcher at Figma


Evaluative research question examples

To gather valuable data and make better design decisions, you need to ask the right research questions . Here are some examples of evaluative research questions:

Usability questions

  • How would you go about performing [task]?
  • How was your experience completing [task]?
  • How did you find navigating to [X] page?
  • Based on the previous task, how would you prefer to do this action instead?

Get inspired by real-life usability test examples and discover more usability testing questions in our guide to usability testing.

Product survey questions

  • How often do you use the product/feature?
  • How satisfied are you with the product/feature?
  • Does the product/feature help you achieve your goals?
  • How easy is the product/feature to use?

Discover more examples of product survey questions in our article on product surveys .

Closed card sorting questions

  • Were there any categories you were unsure about?
  • Which categories were you unsure about?
  • Why were you unsure about the [X] category?

Find out more in our complete card sorting guide .

Evaluation research examples

Across UX design, research, and product testing, evaluative research can take several forms. Here are some ways you can conduct evaluative research:

Comparative usability testing

This example of evaluative research involves conducting usability tests with participants to compare the performance and user satisfaction of two or more competing design variations or prototypes.

You’ll gather qualitative and quantitative data on task completion rates, errors, user preferences, and feedback to identify the most effective design option. You can then use the insights gained from comparative usability testing to inform design decisions and prioritize improvements based on user-centered feedback .

Cognitive walkthroughs

Cognitive walkthroughs assess the usability and effectiveness of a design from a user's perspective.

You’ll have evaluators step through key tasks from the user’s point of view to identify potential points of confusion, decision-making challenges, or errors. You can then gather insights on user expectations, mental models, and information processing to improve the clarity and intuitiveness of the design.

Diary studies

Conducting diary studies gives you insights into users' experiences and behaviors over an extended period of time.

You provide participants with diaries or digital tools to record their interactions, thoughts, frustrations, and successes related to a product or service. You can then analyze the collected data to identify usage patterns, uncover pain points, and understand the factors influencing the user experience .

In the next chapters, we'll learn more about quantitative and qualitative research, as well as the most common UX research methods. We’ll also share some practical applications of how UX researchers use these methods to conduct effective research.


Evaluative Research Design Examples, Methods, And Questions For Product Managers


Looking for excellent evaluative research design examples?

If so, you’re in the right place!

In this article, we explore various evaluative research methods and best data collection techniques for SaaS product leaders that will help you set up your own research projects.

Sound like it’s worth a read? Let’s get right to it then!

  • Evaluative research gauges how well the product meets its goals at all stages of the product development process.
  • The purpose of generative research is to gain a better understanding of user needs and define problems to solve, while evaluative research assesses how successful your current product or feature is.
  • Evaluation research helps teams validate ideas and estimate how good the product or feature will be at satisfying user needs, which greatly increases the chances of product success .
  • Formative evaluation research sets the baseline for other kinds of evaluative research and assesses user needs.
  • Summative evaluation research checks how successful the outputs of the process are against its targets.
  • Outcome evaluation research evaluates if the product has had the desired effect on users’ lives.
  • Quantitative research collects and analyzes numerical data like satisfaction scores or conversion rates to establish trends and interdependencies.
  • Qualitative methods use non-numerical data to understand reasons for trends and user behavior.
  • You can use feedback surveys to collect both quantitative and qualitative data from your target audience.
  • A/B testing is a quantitative research method for choosing the best versions of a product or feature.
  • Usability testing techniques like session replays or eye-tracking help PMs and designers determine how easy and intuitive the product is to use.
  • Beta-testing is a popular technique that enables teams to evaluate the product or feature with real users before its launch .
  • Fake door tests are a popular and cost-effective validation technique.
  • With Userpilot, you can run user feedback surveys, and build user segments based on product usage data to recruit participants for interviews and beta-testing.

What is evaluative research?

Evaluative research, aka program evaluation or evaluation research, is a set of research practices aimed at assessing how well the product meets its goals .

It takes place at all stages of the product development process, both in the launch lead-up and afterward.

This kind of research is not limited to your own product. You can use it to evaluate your rivals to find ways to get a competitive edge.

Evaluative research vs generative research

Generative and evaluation research have different objectives.

Generative research is used for product and customer discovery . Its purpose is to gain a more detailed understanding of user needs , define the problem to solve, and guide product ideation .

Evaluative research, on the other hand, tests how good your current product or feature is. It assesses customer satisfaction by looking at how well the solution addresses their problems and its usability .

Why is conducting evaluation research important for product managers?

Ongoing evaluation research is essential for product success .

It allows PMs to identify ways to improve the product and the overall user experience. It helps you validate your ideas and determine how likely your product is to satisfy the needs of the target consumers.

Types of evaluation research methods

There are a number of evaluation methods that you can leverage to assess your product. The type of research method you choose will depend on the stage in the development process and what exactly you’re trying to find out.

Formative evaluation research

Formative evaluation research happens at the beginning of the evaluation process and sets the baseline for subsequent studies.

In short, its objective is to assess the needs of target users and the market before you start working on any specific solutions.

Summative evaluation research

Summative evaluation research focuses on how successful the outcomes are.

This kind of research happens as soon as the project or program is over. It assesses the value of the deliverables against the forecast results and project objectives.

Outcome evaluation research

Outcome evaluation research measures the impact of the product on the customer. In other words, it assesses if the product brings a positive change to users’ lives.

Quantitative research

Quantitative research methods use numerical data and statistical analysis. They’re great for establishing cause-effect relationships and tracking trends, for example in customer satisfaction.

In SaaS, we normally use surveys and product usage data tracking for quantitative research purposes.

Qualitative research

Qualitative research uses non-numerical data and focuses on gaining a deeper understanding of the user experience and users’ attitudes toward the product.

In other words, qualitative research is about the ‘why?’ of user satisfaction or the lack of it. For example, it can shed light on what makes your detractors dissatisfied with the product.

What techniques can you use for qualitative research?

The most popular ones include interviews, case studies, and focus groups.

Best evaluative research data collection techniques

How is evaluation research conducted? SaaS PMs can use a range of techniques to collect quantitative and qualitative data to support the evaluation research process.

User feedback surveys

User feedback surveys are the cornerstone of the evaluation research methodology in SaaS.

There are plenty of tools that allow you to build and customize in-app and email surveys without any coding skills.

You use them to target specific user segments at a time that’s most suitable for what you’re testing. For example, you can trigger them contextually as soon as the users engage with the feature that you’re evaluating.

Apart from quantitative data, like the NPS or CSAT scores, it’s good practice to follow up with qualitative questions to get a deeper understanding of user sentiment towards the feature or product.

Evaluative Research Design Examples: in-app feedback survey
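To make the quantitative side concrete, here is a minimal sketch of how an NPS value is typically calculated from 0-10 survey responses. It is plain Python with made-up data, not tied to any particular survey tool:

```python
# Minimal sketch: computing a Net Promoter Score from 0-10 survey responses.
# The responses below are made up for illustration.

def nps(scores):
    """NPS = % promoters (9-10) minus % detractors (0-6)."""
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

responses = [10, 9, 8, 7, 6, 9, 10, 3, 8, 9]
print(f"NPS: {nps(responses):.0f}")  # NPS: 30
```

The qualitative follow-up answers then explain why the number looks the way it does.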

A/B testing

A/B tests are some of the most common ways of evaluating features, UI elements, and onboarding flows in SaaS. That’s because they’re fairly simple to design and administer.

Let’s imagine you’re working on a new landing page layout to boost demo bookings.

First, you modify one UI element at a time, like the position of the CTA button. Next, you launch the new version and direct half of your user traffic to it, while the remaining 50% of users still use the old version.

As your users engage with both versions, you track the conversion rate. You repeat the process with the other versions to eventually choose the best one.

Evaluative Research Design Examples: A/B testing
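If you want to go beyond eyeballing the two conversion rates, a standard two-proportion z-test tells you whether the difference is likely to be real. Below is a minimal sketch with hypothetical visitor and conversion counts; most A/B testing tools run an equivalent check for you:

```python
# Minimal sketch: comparing A/B conversion rates with a two-proportion z-test.
# Visitor and conversion counts are hypothetical.
from math import sqrt, erf

def ab_test(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided p-value
    return p_a, p_b, z, p_value

p_a, p_b, z, p = ab_test(conv_a=120, n_a=5000, conv_b=155, n_b=5000)
print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}  p = {p:.3f}")
```

A small p-value suggests the winning variant is not just noise; with small samples or tiny differences, keep the test running longer.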

Usability testing

Usability testing helps you evaluate how easy it is for users to complete their tasks in the product.

There is a range of techniques that you can leverage for usability testing:

  • Guerrilla testing is the easiest to set up. Just head over to a public place like a coffee shop or a mall where your target users hang out. Take your prototype with you and ask random users for their feedback.
  • In the 5-second test, you allow the user to engage with a feature for 5 seconds and interview them about their impressions.
  • First-click testing helps you assess how intuitive the product is and how easy it is for the user to find and follow the happy path.
  • In session replays you record and analyze what the users do in the app or on the website.
  • Eye-tracking uses webcams to record where users look on a webpage or dashboard and presents it in a heatmap for ease of analysis.

As with all the qualitative and quantitative methods, it’s essential to select a representative user sample for your usability testing. Relying exclusively on the early adopters or power users can skew the outcomes.

Beta testing

Beta testing is another popular evaluation research technique. And there’s a good reason for that.

By testing the product or feature prior to the launch with real users, you can gather user feedback and validate your product-market fit.

Most importantly, you can identify and fix bugs that could otherwise damage your reputation and the trust of the wider user population. And if you get it right, your beta testers can spread the word about your product and build up the hype around the launch.

How do you recruit beta testers?

If you’re looking at expanding into new markets, you may opt for users who have no experience with your product. You can find them on sites like Ubertesters, in beta testing communities, or through paid advertising.

Otherwise, your active users are the best bet because they are familiar with the product and they are normally keen to help. You can reach out to them by email or in-app messages.

Evaluative Research Design Examples: Beta Testing

Fake door testing

Fake door testing is a sneaky way of evaluating your ideas.

Why sneaky? Well, because it kind of involves cheating.

If you want to test if there’s demand for a feature or product, you can add it to your UI or create a landing page before you even start working on it.

Next, you use paid adverts or in-app messages, like the tooltip below, to drive traffic and engagement.

Evaluative Research Design Examples: Fake Door Test

By tracking engagement with the feature, it’s easy to determine if there’s enough interest in the functionality to justify the resources you would need to spend on its development.
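In practice, the "enough interest" decision can be as simple as comparing the click-through rate on the fake door against a threshold agreed in advance. Here is a minimal sketch; the numbers and the 5% threshold are hypothetical, not a recommendation:

```python
# Minimal sketch: checking whether fake-door engagement clears a pre-agreed
# interest threshold. All numbers are hypothetical.

def fake_door_interest(impressions, clicks, threshold=0.05):
    """Return the click-through rate and whether it clears the threshold."""
    ctr = clicks / impressions if impressions else 0.0
    return ctr, ctr >= threshold

ctr, build_it = fake_door_interest(impressions=8000, clicks=560)
print(f"CTR: {ctr:.1%} -> {'worth building' if build_it else 'park the idea'}")
# CTR: 7.0% -> worth building
```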

Of course, that’s not the end. If you don’t want to face customer rage and fury, you must always explain why you’ve stooped to such a mischievous deed.

A modal will do the job nicely. Tell them the feature isn’t ready yet but you’re working on it. Try to placate your users by offering them early access to the feature before everybody else.

In this way, you kill two birds with one stone. You evaluate the interest and build a list of possible beta testers.

Evaluative Research Design Examples: Fake Door Test

Evaluation research questions

The success of your evaluation research very much depends on asking the right questions.

Usability evaluation questions

  • How was your experience completing this task?
  • What technical difficulties did you experience while completing the task?
  • How intuitive was the navigation?
  • How would you prefer to do this action instead?
  • Were there any unnecessary features?
  • How easy was the task to complete?
  • Were there any features missing?

Product survey research questions

  • Would you recommend the product to your colleagues/friends?
  • How disappointed would you be if you could no longer use the feature/product?
  • How satisfied are you with the product/feature?
  • What is the one thing you wish the product/feature could do that it doesn’t already?
  • What would make you cancel your subscription?

How Userpilot can help product managers conduct evaluation research

Userpilot is a digital adoption platform . It consists of three main components: engagement, product analytics, and user sentiment layers. While all of them can help you evaluate your product performance, it’s the latter two that are particularly relevant.

Let’s start with the user sentiment. With Userpilot you can create customized in-app surveys that will blend seamlessly into your product UI.

Easy survey customization in Userpilot

You can trigger these for all your users or target particular segments.

Where do the segments come from? You can create them based on a wide range of criteria. Apart from demographics or JTBDs, you can use product usage data or survey results. In addition to the quantitative scores, you can also use qualitative NPS responses for this.

Segmentation is also great for finding your beta testers and interview participants. If your users engage with your product regularly and give you high scores in customer satisfaction surveys, they may be happy to spare some of their time to help you.

Power users segment in Userpilot
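The segmentation logic itself is straightforward. The sketch below is a generic illustration of the filtering described above; the data model and thresholds are assumptions for the example, not Userpilot’s actual API:

```python
# Generic sketch of the segmentation logic described above: shortlist engaged,
# satisfied users as beta-test or interview candidates. The data model and
# thresholds are assumptions, not Userpilot's API.
from dataclasses import dataclass

@dataclass
class User:
    email: str
    sessions_last_30d: int
    latest_nps: int  # most recent 0-10 survey response

users = [
    User("a@example.com", sessions_last_30d=22, latest_nps=9),
    User("b@example.com", sessions_last_30d=3, latest_nps=10),
    User("c@example.com", sessions_last_30d=18, latest_nps=6),
]

candidates = [u for u in users if u.sessions_last_30d >= 10 and u.latest_nps >= 9]
print([u.email for u in candidates])  # ['a@example.com']
```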

Evaluative research enables product managers to assess how well the product meets user and organizational needs, and how easy it is to use. When carried out regularly during the product development process, it allows them to validate ideas and iterate on them in an informed way.

If you’d like to see how Userpilot can help your business collect evaluative data, book the demo!


Evaluation Research Design: Examples, Methods & Types

busayo.longe

As you engage in tasks, you will need to take intermittent breaks to determine how much progress has been made and if any changes need to be effected along the way. This is very similar to what organizations do when they carry out  evaluation research.  

The evaluation research methodology has become one of the most important approaches for organizations as they strive to create products, services, and processes that speak to the needs of target users. In this article, we will show you how your organization can conduct successful evaluation research using Formplus.

What is Evaluation Research?

Also known as program evaluation, evaluation research is a common research design that entails carrying out a structured assessment of the value of resources committed to a project or specific goal. It often adopts social research methods to gather and analyze useful information about organizational processes and products.  

As a type of applied research, evaluation research is typically associated with real-life scenarios within organizational contexts. This means that the researcher will need to leverage common workplace skills, including interpersonal skills and teamwork, to arrive at objective research findings that will be useful to stakeholders.

Characteristics of Evaluation Research

  • Research Environment: Evaluation research is conducted in the real world; that is, within the context of an organization. 
  • Research Focus: Evaluation research is primarily concerned with measuring the outcomes of a process rather than the process itself. 
  • Research Outcome: Evaluation research is employed for strategic decision making in organizations. 
  • Research Goal: The goal of program evaluation is to determine whether a process has yielded the desired result(s). 
  • This type of research protects the interests of stakeholders in the organization. 
  • It often represents a middle-ground between pure and applied research. 
  • Evaluation research is both detailed and continuous. It pays attention to performative processes rather than descriptions. 
  • Research Process: This research design utilizes qualitative and quantitative research methods to gather relevant data about a product or action-based strategy. These methods include observation, tests, and surveys.

Types of Evaluation Research

The Encyclopedia of Evaluation (Mathison, 2004) treats forty-two different evaluation approaches and models ranging from “appreciative inquiry” to “connoisseurship” to “transformative evaluation”. Common types of evaluation research include the following: 

  • Formative Evaluation

Formative evaluation or baseline survey is a type of evaluation research that involves assessing the needs of the users or target market before embarking on a project.  Formative evaluation is the starting point of evaluation research because it sets the tone of the organization’s project and provides useful insights for other types of evaluation.  

  • Mid-term Evaluation

Mid-term evaluation entails assessing how far a project has come and determining if it is in line with the set goals and objectives. Mid-term reviews allow the organization to determine if a change or modification of the implementation strategy is necessary, and it also serves for tracking the project. 

  • Summative Evaluation

This type of evaluation is also known as end-term evaluation or project-completion evaluation, and it is conducted immediately after the completion of a project. Here, the researcher examines the value and outputs of the program within the context of the projected results.

Summative evaluation allows the organization to measure the degree of success of a project. Such results can be shared with stakeholders, target markets, and prospective investors. 

  • Outcome Evaluation

Outcome evaluation is primarily target-audience oriented because it measures the effects of the project, program, or product on the users. This type of evaluation views the outcomes of the project through the lens of the target audience and it often measures changes such as knowledge-improvement, skill acquisition, and increased job efficiency. 

  • Appreciative Inquiry

Appreciative inquiry is a type of evaluation research that pays attention to result-producing approaches. It is predicated on the belief that an organization will grow in whatever direction its stakeholders pay primary attention to, such that if all the attention is focused on problems, identifying them would be easy.

In carrying out appreciative inquiry, the researcher identifies the factors directly responsible for the positive results realized in the course of a project, analyzes the reasons for these results, and intensifies the utilization of these factors.

Evaluation Research Methodology 

There are four major evaluation research methods, namely: output measurement, input measurement, impact assessment, and service quality.

  • Output/Performance Measurement

Output measurement is a method employed in evaluative research that shows the results of an activity undertaken by an organization. In other words, performance measurement pays attention to the results achieved by the resources invested in a specific activity or organizational process.

More than investing resources in a project, organizations must be able to track the extent to which these resources have yielded results, and this is where performance measurement comes in. Output measurement allows organizations to pay attention to the effectiveness and impact of a process rather than just the process itself. 

Other key indicators of performance measurement include user-satisfaction, organizational capacity, market penetration, and facility utilization. In carrying out performance measurement, organizations must identify the parameters that are relevant to the process in question, their industry, and the target markets. 

5 Performance Evaluation Research Questions Examples

  • What is the cost-effectiveness of this project?
  • What is the overall reach of this project?
  • How would you rate the market penetration of this project?
  • How accessible is the project? 
  • Is this project time-efficient? 


  • Input Measurement

In evaluation research, input measurement entails assessing the number of resources committed to a project or goal in any organization. This is one of the most common indicators in evaluation research because it allows organizations to track their investments. 

The most common indicator of input measurement is the budget, which allows organizations to evaluate and limit expenditure for a project. It is also important to measure non-monetary investments like human capital (the number of persons needed for successful project execution) and production capital.

5 Input Evaluation Research Questions Examples

  • What is the budget for this project?
  • What is the timeline of this process?
  • How many employees have been assigned to this project? 
  • Do we need to purchase new machinery for this project? 
  • How many third-parties are collaborators in this project? 


  • Impact/Outcomes Assessment

In impact assessment, the evaluation researcher focuses on how the product or project affects target markets, both directly and indirectly. Outcomes assessment is somewhat challenging because many times, it is difficult to measure the real-time value and benefits of a project for the users. 

In assessing the impact of a process, the evaluation researcher must pay attention to the improvement recorded by the users as a result of the process or project in question. Hence, it makes sense to focus on cognitive and affective changes, expectation-satisfaction, and similar accomplishments of the users. 

5 Impact Evaluation Research Questions Examples

  • How has this project affected you? 
  • Has this process affected you positively or negatively?
  • What role did this project play in improving your earning power? 
  • On a scale of 1-10, how excited are you about this project?
  • How has this project improved your mental health? 


  • Service Quality

Service quality is the evaluation research method that accounts for any differences between the expectations of the target markets and their impression of the undertaken project. Hence, it pays attention to the overall service quality assessment carried out by the users. 

It is not uncommon for organizations to build the expectations of target markets as they embark on specific projects. Service quality evaluation allows these organizations to track the extent to which the actual product or service delivery fulfils the expectations. 

5 Service Quality Evaluation Questions

  • On a scale of 1-10, how satisfied are you with the product?
  • How helpful was our customer service representative?
  • How satisfied are you with the quality of service?
  • How long did it take to resolve the issue at hand?
  • How likely are you to recommend us to your network?


Uses of Evaluation Research 

  • Evaluation research is used by organizations to measure the effectiveness of activities and identify areas needing improvement. Findings from evaluation research are key to project and product advancements and are very influential in helping organizations realize their goals efficiently.     
  • The findings arrived at from evaluation research serve as evidence of the impact of the project embarked on by an organization. This information can be presented to stakeholders, customers, and can also help your organization secure investments for future projects. 
  • Evaluation research helps organizations to justify their use of limited resources and choose the best alternatives. 
  •  It is also useful in pragmatic goal setting and realization. 
  • Evaluation research provides detailed insights into projects embarked on by an organization. Essentially, it allows all stakeholders to understand multiple dimensions of a process, and to determine strengths and weaknesses. 
  • Evaluation research also plays a major role in helping organizations to improve their overall practice and service delivery. This research design allows organizations to weigh existing processes through feedback provided by stakeholders, and this informs better decision making. 
  • Evaluation research is also instrumental to sustainable capacity building. It helps you to analyze demand patterns and determine whether your organization requires more funds, upskilling or improved operations.

Data Collection Techniques Used in Evaluation Research

In gathering useful data for evaluation research, the researcher often combines quantitative and qualitative research methods . Qualitative research methods allow the researcher to gather information relating to intangible values such as market satisfaction and perception. 

On the other hand, quantitative methods are used by the evaluation researcher to assess numerical patterns, that is, quantifiable data. These methods help you measure impact and results; although they may not serve for understanding the context of the process. 

Quantitative Methods for Evaluation Research

  • Surveys

A survey is a quantitative method that allows you to gather information about a project from a specific group of people. Surveys are largely context-based and limited to target groups who are asked a set of structured questions in line with the predetermined context.

Surveys usually consist of close-ended questions that allow the evaluative researcher to gain insight into several variables including market coverage and customer preferences. Surveys can be carried out physically using paper forms or online through data-gathering platforms like Formplus.

  • Questionnaires

A questionnaire is a common quantitative research instrument deployed in evaluation research. Typically, it is an aggregation of different types of questions or prompts which help the researcher to obtain valuable information from respondents. 

  • Polls

A poll is a common method of opinion-sampling that allows you to weigh the perception of the public about issues that affect them. The best way to achieve accuracy in polling is by conducting them online using platforms like Formplus.

Polls are often structured as Likert questions and the options provided always account for neutrality or indecision. Conducting a poll allows the evaluation researcher to understand the extent to which the product or service satisfies the needs of the users. 
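To illustrate how such poll responses might be summarized, here is a minimal sketch that tallies Likert-style answers while keeping the neutral option visible. The responses are made up; any polling platform produces an equivalent breakdown:

```python
# Minimal sketch: tallying Likert-style poll responses, neutral option included.
# The responses are hypothetical.
from collections import Counter

SCALE = ["Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree"]
responses = ["Agree", "Neutral", "Strongly agree", "Agree", "Disagree", "Agree"]

counts = Counter(responses)
for option in SCALE:
    n = counts.get(option, 0)
    print(f"{option:17} {n:2}  ({n / len(responses):.0%})")
```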

Qualitative Methods for Evaluation Research

  • One-on-One Interview

An interview is a structured conversation involving two participants; usually the researcher and the user or a member of the target market. One-on-One interviews can be conducted physically, via the telephone and through video conferencing apps like Zoom and Google Meet. 

  • Focus Groups

A focus group is a research method that involves interacting with a limited number of persons within your target market, who can provide insights on market perceptions and new products. 

  • Qualitative Observation

Qualitative observation is a research method that allows the evaluation researcher to gather useful information from the target audience through a variety of subjective approaches. This method is more extensive than quantitative observation because it deals with a smaller sample size, and it also utilizes inductive analysis. 

  • Case Studies

A case study is a research method that helps the researcher to gain a better understanding of a subject or process. Case studies involve in-depth research into a given subject, to understand its functionalities and successes. 

How to Use the Formplus Online Form Builder for an Evaluation Survey

  • Sign into Formplus

In the Formplus builder, you can easily create your evaluation survey by dragging and dropping preferred fields into your form. To access the Formplus builder, you will need to create an account on Formplus. 

Once you do this, sign in to your account and click on “Create Form” to begin.


  • Edit Form Title

Click on the field provided to input your form title, for example, “Evaluation Research Survey”.


Click on the edit button to edit the form.

Add Fields: Drag and drop preferred form fields into your form in the Formplus builder inputs column. There are several field input options for surveys in the Formplus builder. 


Edit fields

Click on “Save”

Preview form.

  • Form Customization

With the form customization options in the form builder, you can easily change the outlook of your form and make it more unique and personalized. Formplus allows you to change your form theme, add background images, and even change the font according to your needs. 


  • Multiple Sharing Options

Formplus offers multiple form sharing options that enable you to easily share your evaluation survey with survey respondents. You can use the direct social media sharing buttons to share your form link to your organization’s social media pages.

You can send out your survey form as email invitations to your research subjects too. If you wish, you can share your form’s QR code or embed it on your organization’s website for easy access. 

Conclusion  

Conducting evaluation research allows organizations to determine the effectiveness of their activities at different phases. This type of research can be carried out using qualitative and quantitative data collection methods including focus groups, observation, telephone and one-on-one interviews, and surveys. 

Online surveys created and administered via data collection platforms like Formplus make it easier for you to gather and process information during evaluation research. With Formplus multiple form sharing options, it is even easier for you to gather useful data from target markets.


Summative Evaluation

Usability evaluation of a complete or near-complete design under realistic conditions that can be used to determine if the design meets specific measurable performance and/or satisfaction goals, or to establish a usability benchmark or to make comparisons.

This approach is in contrast to a formative evaluation, which is used to find and eliminate problems during the design and development process, rather than judge a completed product against specific goals.

Both summative and formative refer to the purpose of the evaluation.

Related Links

Web Resources

Sauro, J. (2010) Are the Terms Formative and Summative Helpful or Harmful? measuringusability.com

Hartson, H.R., Andre, T.S., Williges, R.C. (2003) Criteria For Evaluating Usability Evaluation Methods . International Journal of HCI, 5(1), 145-181.

Formal Publications

Scriven, M. (1967). The methodology of evaluation. In R. W. Tyler, R. M. Gagne, & M. Scriven (Eds.), Perspectives of curriculum evaluation , 39-83. Chicago, IL: Rand McNally.

Published Studies

Carlson, Jennifer Lee; Braun, Kelly; Kantner, Laurie. When Product Teams Observe Field Research Sessions: Benefits and Lessons Learned. UPA 2009 Conference.


College of Education Awarded $4 Million in Grant Funding From October 2023 Through March 2024


Faculty and researchers at the NC State College of Education, including the Belk Center for Community College Leadership and Research and the Friday Institute for Educational Innovation , were awarded $4,090,192 to support 23 projects from Oct. 1, 2023, through March 31, 2024.

Editor’s note: All dollar amounts listed are reflective of the grant funding awarded directly to the College of Education and do not include funding awarded to other collaborators. 

NC State STEM Education Scholars Program

This $1,176,730 grant from the National Science Foundation will develop the NC State STEM Education Scholars program to increase the number of highly qualified teacher candidates in secondary science and mathematics by reducing financial barriers to teaching, provide participants with targeted experiences to develop teacher candidates’ pedagogical content knowledge with a focus on building community funds of knowledge, and increase STEM teacher retention by providing ongoing professional development. Associate Dean of Research and Innovation Karen Hollebrands is the project’s principal investigator. Assistant Teaching Professor of Science Education Matt Reynolds is a co-principal investigator on the project.

NSF CAREER: Supporting Teachers to Leverage Students’ Languages in Mathematics

This $926,102 National Science Foundation CAREER grant will be used to partner with a mathematics department at a public middle school to co-design, analyze and improve teachers’ translanguaging pedagogies to draw on students’ full linguistic repertoires as resources for their learning and help teachers, teacher educators and researchers to better understand how students’ languages can be leveraged as a resource for mathematical learning. Assistant Professor of Mathematics Education Samantha Marshall is the project’s principal investigator.

NSF CAREER: Integrating Robotics and Socio-emotional Learning for Incarcerated Middle School Students

This $587,700 National Science Foundation CAREER grant aims to provide new choices to confined youth by developing and investigating robotics learning activities within a juvenile justice alternative education program and engaging pre-service teachers in mentoring youth participants. Assistant Professor of Technology, Engineering, and Design Education Daniel Kelly is the project’s principal investigator.

Examining Rural Dual Language Programs, Multilingual Learners, and Rural Community Cultural Wealth

This $413,811 grant from the Spencer Foundation will examine rural dual-language immersion education programs, the rural community ecosystems in which they operate and how rural educators of multilingual students leverage the linguistic resources of their students and families. Goodnight Distinguished Professor in Educational Equity Maria Coady is the project’s principal investigator. Assistant Teaching Professor Joanna Koch is a co-principal investigator on the project. 

Falls Lake Partners in Forensic Science II

This grant from the Burroughs Wellcome Fund will kindle students’ interest in aquatic science careers through short interactive presentations. Professor of Technology, Design, and Engineering Education Aaron Clark will serve as personnel on the project. 

Project Connect

This $157,000 grant from the U.S. Department of Education will provide a comprehensive formative and summative evaluation of Project Connect from the Program Evaluation and Education Research (PEER) group at the Friday Institute for Educational Innovation. Director of Program Evaluation and Education Research Callie Womble Edwards is a co-principal investigator on the project.

Validation of the Equity and Access Rubrics for Mathematics Instruction (VEAR-MI)

This $152,733 grant from the National Science Foundation will facilitate the analysis of cognitive interview data from the VEAR-MI project as well as the analysis of mathematics lesson videos. Associate Professor of Mathematics Education Temple Walkowiak is a co-principal investigator on the project. 

Project Read: Reading Extension Activation and Deliver

This $151,196 grant from the Mebane Foundation will launch and deliver a reading-specific extension initiative in connection with convenings that will be held through a partnership between the College of Education and NC State Extension. Assistant Dean for Professional Education and Accreditation Erin Horne is the project’s principal investigator. 

Project Adding Direct Support (ADS)

This $85,555 grant from the U.S. Department of Education will uncover in-service needs related to school counseling, create a recruitment plan, conduct a community needs assessment and develop the planning and implementation of trauma-informed, equity-focused virtual training sessions that will be offered to school social workers and social work programs across North Carolina. Professor of Counselor Education Stan Baker is the project’s principal investigator. Assistant Professor of Counselor Education Rolanda Mitchell is a co-principal investigator on the project. 

IRIS Center

This $71,183 grant from the U.S. Department of Education will help develop and disseminate digital, open educational resources – including training modules and online tools – for Vanderbilt University’s IRIS Center with the goal of supporting educators’ use of evidence-based practices. Assistant Teaching Professor of Elementary Education and Special Education Jordan Lukins is the project’s principal investigator.

Hattie’s Influences on Student Achievement Under an Institutionally Racist System: What Works for Black & Brown Students

This $68,742 grant from the William T. Grant Foundation will fund a study revisiting Hattie’s List to identify and restrict the original studies to only those that include American Black and Brown students and conduct a new meta-analysis based on that data. Assistant Professor of Educational Evaluation and Policy Analysis Lam Pham is the project’s principal investigator.

Citizen Math: Using Math Class to Create Informed, Thoughtful, and Productive Citizens

This $61,739 grant from the U.S. Department of Education will enable the Friday Institute’s Professional Learning and Leading Collaborative team to engage middle school administrators and teachers from across North Carolina in order to recruit study cohort participants for a scalable, low-cost program that addresses issues of societal importance in ways that engage students while developing social-emotional skills and rigorous mathematics learning. Friday Institute Senior Research Scholar Emmy Coleman is the project’s principal investigator.

Empathy and AI: Towards Equitable Microtransit

This $59,450 grant from the National Science Foundation aims to identify, test and evaluate technologically enabled and community-supported solutions for equitably distributing travel demand over time for on-demand public transportation services with a focus on understanding the feasibility and tradeoffs involved in enabling and incentivizing prosocial behavior. Associate Professor of English Education Crystal Chen Lee is a co-principal investigator on the project. 

Understanding the Long-term Effects of Adaptation Strategies on Cape Hatteras National Seashore and Ocracoke Island through Co-Production

This $37,933 grant from the U.S. National Park Service will use a barrier island model and adapt participatory modeling and deliberative dialogue approaches while using best practices for co-producing decision-relevant science in order to co-create a process and tools that will support adaptation planning along the Cape Hatteras National Seashore and surrounding communities. Associate Professor of Science Education K.C. Busch is the principal investigator for NC State. 

Hosting the USGS Southeast Climate Adaptation Science Center

This $32,106 grant from the United States Geological Survey will bring partners, community members and researchers together to discuss global change impacts and train graduate students on how to use and develop global change science. Associate Professor of Science Education K.C. Busch will serve as senior personnel on the project.

Asset Inventory: Eastern NC Digital Equity

This $27,202 grant from the Camber Foundation will enable NC State’s Institute for Emerging Issues and the Friday Institute to collaborate with the Camber Foundation to support data collection and analysis in the development of Digital Equity Asset Inventories in three eastern North Carolina Councils of Governments to include in their digital inclusion plans. Friday Institute Associate Director of Program Evaluation and Educational Research Erin Huggins is a co-principal investigator on the project. 

Inclusion Diversity Equity & Accessibility (Ideas) To Forestry and Renewable Energy Careers

This $25,982 grant from the U.S. Department of Agriculture will leverage scholarship funds with existing initiatives in the NC State College of Natural Resources to improve access to forestry and renewable energy careers and graduate education among Indigenous, Black and Hispanic/Latino populations as well as women. Friday Institute Director of Program Evaluation and Education Research Callie Womble Edwards is a co-principal investigator on the project. 

Virtual Training to Manage Legal Risk for Turkey Producers and Processors

This $14,736 grant from the USDA will contribute to the development of a virtual, on-demand education program to help turkey producers and processors manage legal risk regarding animal welfare using engaging multimedia methods presented in both English and Spanish. Goodnight Distinguished Professor in Educational Equity Maria Coady is a co-principal investigator on the project. 

NC State Improvement Project IHE Partnership

This $10,556 grant from the North Carolina Department of Public Instruction will prepare teachers to implement research-based curriculum, employ high-yield instructional practices and utilize an assessment system to make instructional decisions. Assistant Teaching Professor of Elementary Education and Special Education Jordan Lukins is the project’s principal investigator.

Preparing the New Teacher Workforce to Foster Deeper Learning

This $10,000 grant from Stanford University will contribute to the design and piloting of new online modules and supplementary resources aimed at developing pre-service teachers’ capacity to draw on learning science to effectively foster deeper learning. Director of Professional Education Sarah Cannon is the project’s principal investigator. 

Data at Work Course Development and Pilot

This $9,944 grant from the North Carolina Department of Health and Human Services (DHHS) will support the NC State Data Science Academy in facilitating the development, piloting, and assessment of a customized course for the Early Childhood Division to help participants understand data, tools and analysis within the context of their work at DHHS. Friday Institute Senior Research Scholar Gemma Mojica will serve as an evaluator on the project. 

PK-2 North Carolina Math Convening

This $8,300 grant from the Burroughs Wellcome Fund supported a convening of an interdisciplinary group of experts in Pre-K-2 mathematics teaching and learning to examine areas of agreement and disagreement related to mathematics education, special education and cognitive science. Dean Paola Sztajn is the project’s principal investigator. 

Building Eastern North Carolina Teachers’ Understanding of Climate Change and Community Resiliency

This $1,000 grant from the Burroughs Wellcome Fund will enable 20 educators from Eastern North Carolina to participate in a series of virtual and in-person experiences to build a statewide perspective on climate change and climate resiliency. Friday Institute Research Scholar Kevin Winn serves as personnel on the project. 


