Survey Research – Types, Methods, Examples
Definition:

Survey Research is a quantitative research method that involves collecting standardized data from a sample of individuals or groups through the use of structured questionnaires or interviews. The data collected is then analyzed statistically to identify patterns and relationships between variables, and to draw conclusions about the population being studied.

Survey research can be used to answer a variety of questions, including:

  • What are people’s opinions about a certain topic?
  • What are people’s experiences with a certain product or service?
  • What are people’s beliefs about a certain issue?

Survey Research Methods

Common survey research methods include:

  • Telephone surveys: A survey research method where questions are administered to respondents over the phone, often used in market research or political polling.
  • Face-to-face surveys: A survey research method where questions are administered to respondents in person, often used in social or health research.
  • Mail surveys: A survey research method where questionnaires are sent to respondents through mail, often used in customer satisfaction or opinion surveys.
  • Online surveys: A survey research method where questions are administered to respondents through online platforms, often used in market research or customer feedback.
  • Email surveys: A survey research method where questionnaires are sent to respondents through email, often used in customer satisfaction or opinion surveys.
  • Mixed-mode surveys: A survey research method that combines two or more survey modes, often used to increase response rates or reach diverse populations.
  • Computer-assisted surveys: A survey research method that uses computer technology to administer or collect survey data, often used in large-scale surveys or data collection.
  • Interactive voice response (IVR) surveys: A survey research method where respondents answer questions through a touch-tone telephone or automated voice system, often used in automated customer satisfaction or opinion surveys.
  • Mobile surveys: A survey research method where questions are administered to respondents through mobile devices, often used in market research or customer feedback.
  • Group-administered surveys: A survey research method where questions are administered to a group of respondents simultaneously, often used in education or training evaluation.
  • Web-intercept surveys: A survey research method where questions are administered to website visitors, often used in website or user experience research.
  • In-app surveys: A survey research method where questions are administered to users of a mobile application, often used in mobile app or user experience research.
  • Social media surveys: A survey research method where questions are administered to respondents through social media platforms, often used in social media or brand awareness research.
  • SMS surveys: A survey research method where questions are administered to respondents through text messaging, often used in customer feedback or opinion surveys.
  • Mixed-method surveys: A survey research method that combines both qualitative and quantitative data collection methods, often used in exploratory or mixed-method research.
  • Drop-off surveys: A survey research method where respondents are provided with a survey questionnaire and asked to return it at a later time or through a designated drop-off location.
  • Intercept surveys: A survey research method where respondents are approached in public places and asked to participate in a survey, often used in market research or customer feedback.
  • Hybrid surveys: A survey research method that combines two or more survey modes, data sources, or research methods, often used in complex or multi-dimensional research questions.

Types of Survey Research

There are several types of survey research that can be used to collect data from a sample of individuals or groups. The main types are:

  • Cross-sectional survey: A type of survey research that gathers data from a sample of individuals at a specific point in time, providing a snapshot of the population being studied.
  • Longitudinal survey: A type of survey research that gathers data from the same sample of individuals over an extended period of time, allowing researchers to track changes or trends in the population being studied.
  • Panel survey: A type of longitudinal survey research that tracks the same sample of individuals over time, typically collecting data at multiple points in time.
  • Epidemiological survey: A type of survey research that studies the distribution and determinants of health and disease in a population, often used to identify risk factors and inform public health interventions.
  • Observational survey: A type of survey research that collects data through direct observation of individuals or groups, often used in behavioral or social research.
  • Correlational survey: A type of survey research that measures the degree of association or relationship between two or more variables, often used to identify patterns or trends in data.
  • Experimental survey: A type of survey research that involves manipulating one or more variables to observe the effect on an outcome, often used to test causal hypotheses.
  • Descriptive survey: A type of survey research that describes the characteristics or attributes of a population or phenomenon, often used in exploratory research or to summarize existing data.
  • Diagnostic survey: A type of survey research that assesses the current state or condition of an individual or system, often used in health or organizational research.
  • Explanatory survey: A type of survey research that seeks to explain or understand the causes or mechanisms behind a phenomenon, often used in social or psychological research.
  • Process evaluation survey: A type of survey research that measures the implementation and outcomes of a program or intervention, often used in program evaluation or quality improvement.
  • Impact evaluation survey: A type of survey research that assesses the effectiveness or impact of a program or intervention, often used to inform policy or decision-making.
  • Customer satisfaction survey: A type of survey research that measures the satisfaction or dissatisfaction of customers with a product, service, or experience, often used in marketing or customer service research.
  • Market research survey: A type of survey research that collects data on consumer preferences, behaviors, or attitudes, often used in market research or product development.
  • Public opinion survey: A type of survey research that measures the attitudes, beliefs, or opinions of a population on a specific issue or topic, often used in political or social research.
  • Behavioral survey: A type of survey research that measures actual behavior or actions of individuals, often used in health or social research.
  • Attitude survey: A type of survey research that measures the attitudes, beliefs, or opinions of individuals, often used in social or psychological research.
  • Opinion poll: A type of survey research that measures the opinions or preferences of a population on a specific issue or topic, often used in political or media research.
  • Ad hoc survey: A type of survey research that is conducted for a specific purpose or research question, often used in exploratory research or to answer a specific research question.

Types Based on Methodology

Based on methodology, survey research is divided into two types:

  • Quantitative survey research
  • Qualitative survey research

Quantitative survey research is a method of collecting numerical data from a sample of participants through the use of standardized surveys or questionnaires. The purpose of quantitative survey research is to gather empirical evidence that can be analyzed statistically to draw conclusions about a particular population or phenomenon.

In quantitative survey research, the questions are structured and pre-determined, often utilizing closed-ended questions, where participants are given a limited set of response options to choose from. This approach allows for efficient data collection and analysis, as well as the ability to generalize the findings to a larger population.

Quantitative survey research is often used in market research, social sciences, public health, and other fields where numerical data is needed to make informed decisions and recommendations.

Qualitative survey research is a method of collecting non-numerical data from a sample of participants through the use of open-ended questions or semi-structured interviews. The purpose of qualitative survey research is to gain a deeper understanding of the experiences, perceptions, and attitudes of participants towards a particular phenomenon or topic.

In qualitative survey research, the questions are open-ended, allowing participants to share their thoughts and experiences in their own words. This approach allows for a rich and nuanced understanding of the topic being studied, and can provide insights that are difficult to capture through quantitative methods alone.

Qualitative survey research is often used in social sciences, education, psychology, and other fields where a deeper understanding of human experiences and perceptions is needed to inform policy, practice, or theory.

Data Analysis Methods

There are several survey research data analysis methods that researchers may use (a short code sketch of the first two follows the list):

  • Descriptive statistics: This method is used to summarize and describe the basic features of the survey data, such as the mean, median, mode, and standard deviation. These statistics can help researchers understand the distribution of responses and identify any trends or patterns.
  • Inferential statistics: This method is used to make inferences about the larger population based on the data collected in the survey. Common inferential statistical methods include hypothesis testing, regression analysis, and correlation analysis.
  • Factor analysis: This method is used to identify underlying factors or dimensions in the survey data. This can help researchers simplify the data and identify patterns and relationships that may not be immediately apparent.
  • Cluster analysis: This method is used to group similar respondents together based on their survey responses. This can help researchers identify subgroups within the larger population and understand how different groups may differ in their attitudes, behaviors, or preferences.
  • Structural equation modeling: This method is used to test complex relationships between variables in the survey data. It can help researchers understand how different variables may be related to one another and how they may influence one another.
  • Content analysis: This method is used to analyze open-ended responses in the survey data. Researchers may use software to identify themes or categories in the responses, or they may manually review and code the responses.
  • Text mining: This method is used to analyze text-based survey data, such as responses to open-ended questions. Researchers may use software to identify patterns and themes in the text, or they may manually review and code the text.
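
To make the first two methods concrete, here is a minimal sketch in Python using pandas and SciPy; the DataFrame, column names, and values are hypothetical, invented for illustration rather than taken from any real study.

```python
# Minimal sketch: descriptive and inferential analysis of hypothetical survey data.
import pandas as pd
from scipy import stats

responses = pd.DataFrame({
    "age_group":    ["18-24", "25-34", "25-34", "35-44", "18-24", "35-44"],
    "satisfaction": [4, 5, 3, 4, 2, 5],           # 1-5 Likert item
    "weekly_spend": [42.0, 55.5, 31.0, 60.0, 25.5, 48.0],
})

# Descriptive statistics: mean, median, standard deviation, and mode.
print(responses["satisfaction"].describe())
print("mode:", responses["satisfaction"].mode().iloc[0])

# Inferential statistics: does mean satisfaction differ from the scale midpoint (3)?
t_stat, p_value = stats.ttest_1samp(responses["satisfaction"], popmean=3)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Correlation analysis between two numeric survey variables.
r, p = stats.pearsonr(responses["satisfaction"], responses["weekly_spend"])
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```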

Applications of Survey Research

Here are some common applications of survey research:

  • Market Research: Companies use survey research to gather insights about customer needs, preferences, and behavior. These insights are used to create marketing strategies and develop new products.
  • Public Opinion Research: Governments and political parties use survey research to understand public opinion on various issues. This information is used to develop policies and make decisions.
  • Social Research: Survey research is used in social research to study social trends, attitudes, and behavior. Researchers use survey data to explore topics such as education, health, and social inequality.
  • Academic Research: Survey research is used in academic research to study various phenomena. Researchers use survey data to test theories, explore relationships between variables, and draw conclusions.
  • Customer Satisfaction Research: Companies use survey research to gather information about customer satisfaction with their products and services. This information is used to improve customer experience and retention.
  • Employee Surveys: Employers use survey research to gather feedback from employees about their job satisfaction, working conditions, and organizational culture. This information is used to improve employee retention and productivity.
  • Health Research: Survey research is used in health research to study topics such as disease prevalence, health behaviors, and healthcare access. Researchers use survey data to develop interventions and improve healthcare outcomes.

Examples of Survey Research

Here are some real-world examples of survey research:

  • COVID-19 Pandemic Surveys: Since the outbreak of the COVID-19 pandemic, surveys have been conducted to gather information about public attitudes, behaviors, and perceptions related to the pandemic. Governments and healthcare organizations have used this data to develop public health strategies and messaging.
  • Political Polls During Elections: During election seasons, surveys are used to measure public opinion on political candidates, policies, and issues in real-time. This information is used by political parties to develop campaign strategies and make decisions.
  • Customer Feedback Surveys: Companies often use real-time customer feedback surveys to gather insights about customer experience and satisfaction. This information is used to improve products and services quickly.
  • Event Surveys: Organizers of events such as conferences and trade shows often use surveys to gather feedback from attendees in real-time. This information can be used to improve future events and make adjustments during the current event.
  • Website and App Surveys: Website and app owners use surveys to gather real-time feedback from users about the functionality, user experience, and overall satisfaction with their platforms. This feedback can be used to improve the user experience and retain customers.
  • Employee Pulse Surveys: Employers use real-time pulse surveys to gather feedback from employees about their work experience and overall job satisfaction. This feedback is used to make changes in real-time to improve employee retention and productivity.

Purpose of Survey Research

The purpose of survey research is to gather data and insights from a representative sample of individuals. Survey research allows researchers to collect data quickly and efficiently from a large number of people, making it a valuable tool for understanding attitudes, behaviors, and preferences.

Here are some common purposes of survey research:

  • Descriptive Research: Survey research is often used to describe characteristics of a population or a phenomenon. For example, a survey could be used to describe the characteristics of a particular demographic group, such as age, gender, or income.
  • Exploratory Research: Survey research can be used to explore new topics or areas of research. Exploratory surveys are often used to generate hypotheses or identify potential relationships between variables.
  • Explanatory Research: Survey research can be used to explain relationships between variables. For example, a survey could be used to determine whether there is a relationship between educational attainment and income.
  • Evaluation Research: Survey research can be used to evaluate the effectiveness of a program or intervention. For example, a survey could be used to evaluate the impact of a health education program on behavior change.
  • Monitoring Research: Survey research can be used to monitor trends or changes over time. For example, a survey could be used to monitor changes in attitudes towards climate change or political candidates over time.

When to use Survey Research

There are certain circumstances where survey research is particularly appropriate. Here are some situations where survey research may be useful:

  • When the research question involves attitudes, beliefs, or opinions: Survey research is particularly useful for understanding attitudes, beliefs, and opinions on a particular topic. For example, a survey could be used to understand public opinion on a political issue.
  • When the research question involves behaviors or experiences: Survey research can also be useful for understanding behaviors and experiences. For example, a survey could be used to understand the prevalence of a particular health behavior.
  • When a large sample size is needed: Survey research allows researchers to collect data from a large number of people quickly and efficiently. This makes it a useful method when a large sample size is needed to ensure statistical validity.
  • When the research question is time-sensitive: Survey research can be conducted quickly, which makes it a useful method when the research question is time-sensitive. For example, a survey could be used to understand public opinion on a breaking news story.
  • When the research question involves a geographically dispersed population: Survey research can be conducted online, which makes it a useful method when the population of interest is geographically dispersed.

How to Conduct Survey Research

Conducting survey research involves several steps that need to be carefully planned and executed. Here is a general overview of the process:

  • Define the research question: The first step in conducting survey research is to clearly define the research question. The research question should be specific, measurable, and relevant to the population of interest.
  • Develop a survey instrument: The next step is to develop a survey instrument. This can be done using various methods, such as online survey tools or paper surveys. The survey instrument should be designed to elicit the information needed to answer the research question, and should be pre-tested with a small sample of individuals.
  • Select a sample: The sample is the group of individuals who will be invited to participate in the survey. The sample should be representative of the population of interest, and the size of the sample should be sufficient to ensure statistical validity (see the sample-size sketch after this list).
  • Administer the survey: The survey can be administered in various ways, such as online, by mail, or in person. The method of administration should be chosen based on the population of interest and the research question.
  • Analyze the data: Once the survey data is collected, it needs to be analyzed. This involves summarizing the data using statistical methods, such as frequency distributions or regression analysis.
  • Draw conclusions: The final step is to draw conclusions based on the data analysis. This involves interpreting the results and answering the research question.
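
For the sample-size part of these steps, a common starting point is Cochran's formula for estimating a proportion, n = z^2 * p * (1 - p) / e^2, optionally corrected for a finite population. A minimal sketch with conventional defaults (95% confidence, 5% margin of error); the numbers are illustrative:

```python
# Minimal sketch: sample size for estimating a proportion (Cochran's formula).
import math

def sample_size(confidence_z=1.96, p=0.5, margin_of_error=0.05, population=None):
    """z = 1.96 for 95% confidence; p = 0.5 is the most conservative assumption."""
    n = (confidence_z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    if population is not None:
        # Finite population correction for small populations.
        n = n / (1 + (n - 1) / population)
    return math.ceil(n)

print(sample_size())                  # ~385 respondents for a large population
print(sample_size(population=2000))   # ~323 after finite population correction
```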

Advantages of Survey Research

There are several advantages to using survey research, including:

  • Efficient data collection: Survey research allows researchers to collect data quickly and efficiently from a large number of people. This makes it a useful method for gathering information on a wide range of topics.
  • Standardized data collection: Surveys are typically standardized, which means that all participants receive the same questions in the same order. This ensures that the data collected is consistent and reliable.
  • Cost-effective: Surveys can be conducted online, by mail, or in person, which makes them a cost-effective method of data collection.
  • Anonymity: Participants can remain anonymous when responding to a survey. This can encourage participants to be more honest and open in their responses.
  • Easy comparison: Surveys allow for easy comparison of data between different groups or over time. This makes it possible to identify trends and patterns in the data.
  • Versatility: Surveys can be used to collect data on a wide range of topics, including attitudes, beliefs, behaviors, and preferences.

Limitations of Survey Research

Here are some of the main limitations of survey research:

  • Limited depth: Surveys are typically designed to collect quantitative data, which means that they do not provide much depth or detail about people’s experiences or opinions. This can limit the insights that can be gained from the data.
  • Potential for bias: Surveys can be affected by various biases, including selection bias, response bias, and social desirability bias. These biases can distort the results and make them less accurate.
  • Limited validity: Surveys are only as valid as the questions they ask. If the questions are poorly designed or ambiguous, the results may not accurately reflect the respondents’ attitudes or behaviors.
  • Limited generalizability: Survey results are only generalizable to the population from which the sample was drawn. If the sample is not representative of the population, the results may not be generalizable to the larger population.
  • Limited ability to capture context: Surveys typically do not capture the context in which attitudes or behaviors occur. This can make it difficult to understand the reasons behind the responses.
  • Limited ability to capture complex phenomena: Surveys are not well-suited to capture complex phenomena, such as emotions or the dynamics of interpersonal relationships.

Following is an example of a Survey Sample:

Welcome to our Survey Research Page! We value your opinions and appreciate your participation in this survey. Please answer the questions below as honestly and thoroughly as possible.

1. What is your age?

  • A) Under 18
  • B) 18-24
  • C) 25-34
  • D) 35-44
  • E) 45-54
  • F) 55-64
  • G) 65 or older

2. What is your highest level of education completed?

  • A) Less than high school
  • B) High school or equivalent
  • C) Some college or technical school
  • D) Bachelor’s degree
  • E) Graduate or professional degree

3. What is your current employment status?

  • A) Employed full-time
  • B) Employed part-time
  • C) Self-employed
  • D) Unemployed

4. How often do you use the internet per day?

  • A) Less than 1 hour
  • B) 1-3 hours
  • C) 3-5 hours
  • D) 5-7 hours
  • E) More than 7 hours

5. How often do you engage in social media per day?

  • A) Less than 1 hour
  • B) 1-3 hours
  • C) 3-5 hours
  • D) 5-7 hours
  • E) More than 7 hours

6. Have you ever participated in a survey research study before?

  • A) Yes
  • B) No

7. If you have participated in a survey research study before, how was your experience?

  • A) Excellent
  • B) Good
  • C) Average
  • D) Poor
  • E) Very poor

8. What are some of the topics that you would be interested in participating in a survey research study about?

(Open-ended response)

9. How often would you be willing to participate in survey research studies?

  • A) Once a week
  • B) Once a month
  • C) Once every 6 months
  • D) Once a year

10. Any additional comments or suggestions?

Thank you for taking the time to complete this survey. Your feedback is important to us and will help us improve our survey research efforts.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer


Survey Research: Definition, Examples and Methods


Survey Research is a quantitative research method used for collecting data from a set of respondents. It has been one of the most widely used methodologies in industry for years due to its many benefits and advantages when collecting and analyzing data.


In this article, you will learn everything about survey research, such as types, methods, and examples.

Survey Research Definition

Survey Research is defined as the process of conducting research using surveys that researchers send to survey respondents. The data collected from surveys is then statistically analyzed to draw meaningful research conclusions. In the 21st century, every organization is eager to understand what its customers think about its products or services so it can make better business decisions. Researchers can conduct research in multiple ways, but surveys are proven to be one of the most effective and trustworthy research methods. An online survey is a method for extracting information about a significant business matter from an individual or a group of individuals. It consists of structured survey questions that motivate participants to respond. Credible survey research can give these businesses access to a vast information bank. Organizations in media, other companies, and even governments rely on survey research to obtain accurate data.

The traditional definition of survey research is a quantitative method for collecting information from a pool of respondents by asking multiple survey questions. This research type includes the recruitment of individuals, and the collection and analysis of data. It’s useful for researchers who aim to communicate new features or trends to their respondents.

Generally, survey research is the primary step towards obtaining quick information about mainstream topics; it can then be followed by more rigorous and detailed quantitative research methods like surveys/polls, or by qualitative research methods like focus groups and on-call interviews. There are many situations where researchers can conduct research using a blend of both qualitative and quantitative strategies.


Survey Research Methods

Survey research methods can be classified based on two critical factors: the tool or medium used to conduct the research, and the time involved. There are three main survey research methods, divided based on the medium of conducting survey research:

  • Online/ Email:   Online survey research is one of the most popular survey research methods today. The survey cost involved in online survey research is extremely minimal, and the responses gathered are highly accurate.
  • Phone:  Survey research conducted over the telephone (CATI survey) can be useful in collecting data from a more extensive section of the target population. However, phone surveys tend to require more money and more time than other mediums.
  • Face-to-face:  Researchers conduct face-to-face in-depth interviews in situations where there is a complicated problem to solve. The response rate for this method is the highest, but it can be costly.

Further, based on the time taken, survey research can be classified into two methods:

  • Longitudinal survey research:  Longitudinal survey research involves conducting survey research over a continuum of time, spread across years and decades. The data collected using this survey research method from one time period to another is qualitative or quantitative. Respondent behavior, preferences, and attitudes are continuously observed over time to analyze reasons for a change in behavior or preferences. For example, suppose a researcher intends to learn about the eating habits of teenagers. In that case, he/she will follow a sample of teenagers over a considerable period to ensure that the collected information is reliable. Often, longitudinal survey research follows up on an initial cross-sectional study.
  • Cross-sectional survey research:  Researchers conduct a cross-sectional survey to collect insights from a target audience at a particular time interval. This survey research method is implemented in various sectors such as retail, education, healthcare, SME businesses, etc. Cross-sectional studies can either be descriptive or analytical. It is quick and helps researchers collect information in a brief period. Researchers rely on the cross-sectional survey research method in situations where descriptive analysis of a subject is required.

Survey research is also bifurcated according to the sampling methods used to form samples for research: probability and non-probability sampling. In probability sampling, the researcher chooses elements based on probability theory, and every individual in the population has a known chance of being included in the sample. There are various probability sampling methods, such as simple random sampling, systematic sampling, cluster sampling, and stratified random sampling. Non-probability sampling is a sampling method where the researcher uses his/her knowledge and experience to form samples (a short sampling sketch in code follows the list of techniques below).


The various non-probability sampling techniques are:

  • Convenience sampling
  • Snowball sampling
  • Consecutive sampling
  • Judgemental sampling
  • Quota sampling
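
To illustrate the probability side in practice, here is a minimal Python sketch of simple random and stratified random sampling drawn from a hypothetical sampling frame; the frame, sizes, and region labels are invented for the example.

```python
# Minimal sketch: drawing probability samples from a hypothetical sampling frame.
import pandas as pd

frame = pd.DataFrame({
    "id": range(1, 1001),
    "region": ["north", "south", "east", "west"] * 250,
})

# Simple random sampling: every element has an equal chance of selection.
srs = frame.sample(n=100, random_state=42)

# Stratified random sampling: sample the same fraction within each region,
# so the sample mirrors the regional makeup of the frame.
stratified = frame.groupby("region").sample(frac=0.1, random_state=42)

print(len(srs))
print(stratified["region"].value_counts().to_dict())
# Non-probability methods (e.g. convenience sampling) have no such design:
# inclusion probabilities are unknown, so sampling error cannot be quantified.
```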

Process of implementing survey research methods:

  • Decide survey questions:  Brainstorm and put together valid survey questions that are grammatically and logically appropriate. Understanding the objective and expected outcomes of the survey helps a lot. There are many surveys where details of responses are not as important as gaining insights about what customers prefer from the provided options. In such situations, a researcher can include multiple-choice or closed-ended questions. If researchers need to obtain details about specific issues, they can include open-ended questions in the questionnaire. Ideally, surveys should include a smart balance of open-ended and closed-ended questions. Use question formats like the Likert Scale, Semantic Scale, and Net Promoter Score question to avoid fence-sitting.


  • Finalize a target audience:  Send out relevant surveys as per the target audience and filter out irrelevant questions as per the requirement. Survey research is most effective when the sample is drawn from a well-defined target population; this way, results reflect the desired market and can be generalized to the entire population.


  • Send out surveys via decided mediums:  Distribute the surveys to the target audience and patiently wait for the feedback and comments; this is the most crucial step of the survey research. The survey needs to be scheduled, keeping in mind the nature of the target audience and its regions. Surveys can be conducted via email, embedded in a website, shared via social media, etc., to gain maximum responses.
  • Analyze survey results:  Analyze the feedback in real-time and identify patterns in the responses which might lead to a much-needed breakthrough for your organization. GAP, TURF analysis, conjoint analysis, cross tabulation, and many such survey feedback analysis methods can be used to spot and shed light on respondent behavior (a minimal cross-tabulation sketch follows this list). Researchers can use the results to implement corrective measures to improve customer/employee satisfaction.
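
Of the analysis methods just listed, cross tabulation is the easiest to demonstrate. Below is a minimal sketch with an invented two-variable dataset; the chi-square test layered on top is a standard companion to a crosstab, not a tool-specific feature.

```python
# Minimal sketch: cross tabulation of two hypothetical survey variables,
# plus a chi-square test of independence on the resulting table.
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.DataFrame({
    "segment":   ["new", "new", "returning", "returning", "new", "returning"],
    "satisfied": ["yes", "no", "yes", "yes", "yes", "no"],
})

table = pd.crosstab(df["segment"], df["satisfied"])
print(table)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # small p suggests the variables are related
```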

Reasons to conduct survey research

The most crucial and integral reason for conducting market research using surveys is that you can collect answers regarding specific, essential questions. You can ask these questions in multiple survey formats as per the target audience and the intent of the survey. Before designing a study, every organization must figure out the objective of carrying this out so that the study can be structured, planned, and executed to perfection.


Questions that need to be on your mind while designing a survey are:

  • What is the primary aim of conducting the survey?
  • How do you plan to utilize the collected survey data?
  • What type of decisions do you plan to take based on the points mentioned above?

There are three critical reasons why an organization must conduct survey research.

  • Understand respondent behavior to get solutions to your queries:  If you’ve carefully curated a survey, the respondents will provide insights about what they like about your organization as well as suggestions for improvement. To motivate them to respond, you must be very vocal about how secure their responses will be and how you will utilize the answers. This will push them to be 100% honest about their feedback, opinions, and comments. Online surveys and mobile surveys have proved their ability to protect respondent privacy, and because of this, more and more respondents feel free to put forth their feedback through these mediums.
  • Present a medium for discussion:  A survey can be the perfect platform for respondents to provide criticism or applause for an organization. Important topics, like product quality or quality of customer service, can be put on the table for discussion. One way to do this is to include open-ended questions where respondents can write their thoughts. This will make it easy for you to correlate your survey to what you intend to do with your product or service.
  • Strategy for never-ending improvements:  An organization can establish the target audience’s attributes from the pilot phase of survey research. Researchers can use the criticism and feedback received from this survey to improve the product/services. Once the company successfully makes the improvements, it can send out another survey to measure the change in feedback, keeping the pilot phase as the benchmark. By doing this activity, the organization can track what was effectively improved and what still needs improvement.

Survey Research Scales

There are four main scales for the measurement of variables (a short encoding sketch in code follows the list):

  • Nominal Scale:  A nominal scale associates numbers with variables for mere naming or labeling, and the numbers usually have no other relevance. It is the most basic of the four levels of measurement.
  • Ordinal Scale:  The ordinal scale has an innate order within the variables along with labels. It establishes the rank between the variables of a scale but not the difference value between the variables.
  • Interval Scale:  The interval scale is a step ahead in comparison to the other two scales. Along with establishing a rank and name of variables, the scale also makes known the difference between the two variables. The only drawback is that there is no fixed start point of the scale, i.e., the actual zero value is absent.
  • Ratio Scale:  The ratio scale is the most advanced measurement scale, which has variables that are labeled in order and have a calculated difference between variables. In addition to the properties of the interval scale, this scale has a fixed starting point, i.e., the actual zero value is present.
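
One way to keep these four scales straight during analysis is to encode each one with an appropriate data type. Here is a minimal pandas sketch; the columns and values are invented stand-ins for each scale.

```python
# Minimal sketch: representing the four measurement scales in pandas.
import pandas as pd

df = pd.DataFrame({
    "brand":     ["A", "B", "A", "C"],                # nominal: labels only
    "rating":    ["low", "high", "medium", "high"],   # ordinal: ranked labels
    "temp_c":    [18.5, 21.0, 19.2, 22.3],            # interval: no true zero
    "spend_usd": [5.0, 12.5, 30.0, 7.5],              # ratio: true zero exists
})

# Give the ordinal column an explicit order so comparisons are meaningful.
df["rating"] = pd.Categorical(
    df["rating"], categories=["low", "medium", "high"], ordered=True
)

print(df["brand"].mode().iloc[0])  # nominal: counting/mode only
print(df["rating"].max())          # ordinal: order is defined, differences are not
print(df["temp_c"].mean())         # interval: differences and means are meaningful
print(df["spend_usd"].mean())      # ratio: all arithmetic, including ratios, is valid
```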

Benefits of survey research

If survey research is used for the right purposes and implemented properly, marketers can benefit by gaining useful, trustworthy data that they can use to improve the ROI of the organization.

Other benefits of survey research are:

  • Minimum investment:  Mobile surveys and online surveys require minimal cost per respondent. Even with the gifts and other incentives provided to the people who participate in the study, online surveys are extremely economical compared to paper-based surveys.
  • Versatile sources for response collection:  You can conduct surveys via various mediums like online and mobile surveys. You can further classify them into qualitative mediums, like focus groups and interviews, and quantitative mediums, like customer-centric surveys. Due to the offline survey response collection option, researchers can conduct surveys in remote areas with limited internet connectivity. This can make data collection and analysis more convenient and extensive.
  • Reliable for respondents:  Surveys are extremely secure as the respondent details and responses are kept safeguarded. This anonymity makes respondents answer the survey questions candidly and with absolute honesty. An organization seeking explicit responses for its survey research must state that responses will be kept confidential.

Survey research design

Researchers implement a survey research design in cases where cost is limited and details need to be gathered easily. This method is often used by small and large organizations to understand and analyze new trends, market demands, and opinions. Collecting information through a tactfully designed survey can be much more effective and productive than a casually conducted survey.

There are five stages of survey research design:

  • Decide an aim of the research:  There can be multiple reasons for a researcher to conduct a survey, but they need to decide a purpose for the research. This is the primary stage of survey research as it can mold the entire path of a survey, impacting its results.
  • Filter the sample from target population:  “Who to target?” is an essential question that a researcher should answer and keep in mind while conducting research. The precision of the results is driven by who the members of a sample are and how useful their opinions are. The quality of respondents in a sample is essential for the results received for research, not the quantity. If a researcher seeks to understand whether a product feature will work well with their target market, he/she can conduct survey research with a group of market experts for that product or technology.
  • Zero-in on a survey method:  Many qualitative and quantitative research methods can be discussed and decided. Focus groups, online interviews, surveys, polls, questionnaires, etc. can be carried out with a pre-decided sample of individuals.
  • Design the questionnaire:  What will the content of the survey be? A researcher is required to answer this question to be able to design it effectively. What will the content of the cover letter be? Or what are the survey questions of this questionnaire? Understand the target market thoroughly to create a questionnaire that targets a sample to gain insights about a survey research topic.
  • Send out surveys and analyze results:  Once the researcher decides on which questions to include in a study, they can send it across to the selected sample . Answers obtained from this survey can be analyzed to make product-related or marketing-related decisions.

Survey examples: 10 tips to design the perfect research survey

Picking the right survey design can be the key to gaining the information you need to make crucial decisions for all your research. It is essential to choose the right topic, choose the right question types, and pick a corresponding design. If this is your first time creating a survey, it can seem like an intimidating task. But with QuestionPro, each step of the process is made simple and easy.

Below are 10 Tips To Design The Perfect Research Survey:

  • Set your SMART goals:  Before conducting any market research or creating a particular plan, set your SMART Goals . What is that you want to achieve with the survey? How will you measure it promptly, and what are the results you are expecting?
  • Choose the right questions:  Designing a survey can be a tricky task. Asking the right questions may help you get the answers you are looking for and ease the task of analyzing. So, always choose those specific questions – relevant to your research.
  • Begin your survey with a generalized question:  Preferably, start your survey with a general question to understand whether the respondent uses the product or not. That also provides an excellent base and intro for your survey.
  • Enhance your survey:  Choose the best, most relevant 15-20 questions. Frame each question as a different question type based on the kind of answer you would like to gather from each. Create a survey using different types of questions such as multiple-choice, rating scale, open-ended, etc., and keep the four measurement scales in mind while doing so.
  • Prepare yes/no questions:  You may also want to use yes/no questions to separate people or branch them into groups of those who “have purchased” and those who “have not yet purchased” your products or services. Once you separate them, you can ask them different questions.
  • Test all electronic devices:  It becomes effortless to distribute your surveys if respondents can answer them on different electronic devices like mobiles, tablets, etc. Once you have created your survey, it’s time to TEST. You can also make any corrections if needed at this stage.
  • Distribute your survey:  Once your survey is ready, it is time to share and distribute it to the right audience. You can share handouts or distribute the survey via email, social media, and other industry-related offline/online communities.
  • Collect and analyze responses:  After distributing your survey, it is time to gather all responses. Make sure you store your results in a particular document or an Excel sheet with all the necessary categories mentioned so that you don’t lose your data. Remember, this is the most crucial stage. Segregate your responses based on demographics, psychographics, and behavior. This is because, as a researcher, you must know where your responses are coming from. It will help you to analyze, predict decisions, and help write the summary report.
  • Prepare your summary report:  Now is the time to share your analysis. At this stage, you should mention all the responses gathered from the survey in a fixed format. The reader/customer must also get clarity about the goal you were trying to achieve with the study. Answer questions such as: was the product or service used and preferred? Do respondents prefer some other product? Any recommendations?

Having a tool that helps you complete all the necessary steps of this type of study is a vital part of any project. At QuestionPro, we have helped more than 10,000 clients around the world to carry out data collection in a simple and effective way, in addition to offering a wide range of solutions to take advantage of this data in the best possible way.

From dashboards, advanced analysis tools, automation, and dedicated functions, in QuestionPro, you will find everything you need to execute your research projects effectively. Uncover insights that matter the most!


What is survey research?

Find out everything you need to know about survey research, from what it is and how it works to the different methods and tools you can use to ensure you’re successful.

Survey research is the process of collecting data from a predefined group (e.g. customers or potential customers) with the ultimate goal of uncovering insights about your products, services, or brand overall .

As a quantitative data collection method, survey research can provide you with a goldmine of information that can inform crucial business and product decisions. But survey research needs careful planning and execution to get the results you want.

So if you’re thinking about using surveys to carry out research, read on.


Types of survey research

Calling these methods ‘survey research’ slightly underplays the complexity of this type of information gathering. From the expertise required to carry out each activity to the analysis of the data and its eventual application, a considerable amount of effort is required.

As for how you can carry out your research, there are several options to choose from — face-to-face interviews, telephone surveys, focus groups (though more interviews than surveys), online surveys , and panel surveys.

Typically, the survey method you choose will largely be guided by who you want to survey, the size of your sample , your budget, and the type of information you’re hoping to gather.

Here are a few of the most-used survey types:

Face-to-face interviews

Before technology made it possible to conduct research using online surveys, telephone and mail were the most popular methods for survey research. However, face-to-face interviews were considered the gold standard — the only reason they weren’t as popular was their highly prohibitive cost.

When it came to face-to-face interviews, organizations would use highly trained researchers who knew when to probe or follow up on vague or problematic answers. They also knew when to offer assistance to respondents when they seemed to be struggling. The result was that these interviewers could get sample members to participate and engage in surveys in the most effective way possible, leading to higher response rates and better quality data.

Telephone surveys

While phone surveys have been popular in the past, particularly for measuring general consumer behavior or beliefs, response rates have been declining since the 1990s.

Phone surveys are usually conducted using a random dialing system and software that a researcher can use to record responses.

This method is beneficial when you want to survey a large population but don’t have the resources to conduct face-to-face research surveys or run focus groups, or when you want to ask multiple-choice and open-ended questions.

The downsides: phone surveys can take a long time to complete depending on the response rate, and you may have to do a lot of cold-calling to get the information you need.

You also run the risk of respondents not being completely honest. Instead, they’ll answer your survey questions quickly just to get off the phone.

Focus groups (interviews — not surveys)

Focus groups are a separate qualitative methodology rather than surveys — even though they’re often bunched together. They’re normally used for survey pretesting and designing, but they’re also a great way to generate opinions and data from a diverse range of people.

Focus groups involve putting a cohort of demographically or socially diverse people in a room with a moderator and engaging them in a discussion on a particular topic, such as your product, brand, or service.

They remain a highly popular method for market research, but they’re expensive and require a lot of administration to conduct and analyze the data properly.

You also run the risk of more dominant members of the group taking over the discussion and swaying the opinions of other people — potentially providing you with unreliable data.

Online surveys

Online surveys have become one of the most popular survey methods due to being cost-effective, enabling researchers to accurately survey a large population quickly.

Online surveys can essentially be used by anyone for any research purpose – we’ve all seen the increasing popularity of polls on social media (although these are not scientific).

Using an online survey allows you to ask a series of different question types and collect data instantly that’s easy to analyze with the right software.

There are also several methods for running and distributing online surveys that allow you to get your questionnaire in front of a large population at a fraction of the cost of face-to-face interviews or focus groups.

This is particularly true when it comes to mobile surveys as most people with a smartphone can access them online.

However, you have to be aware of the potential dangers of using online surveys, particularly when it comes to the survey respondents. The biggest risk is that, because online surveys require access to a computer or mobile device to complete, they could exclude elderly members of the population who don’t have access to the technology — or don’t know how to use it.

It could also exclude those from poorer socio-economic backgrounds who can’t afford a computer or consistent internet access. This could mean the data collected is more biased towards a certain group and can lead to less accurate data when you’re looking for a representative population sample.


Panel surveys

A panel survey involves recruiting respondents who have specifically signed up to answer questionnaires and who are put on a list by a research company. This could be a workforce of a small company or a major subset of a national population. Usually, these groups are carefully selected so that they represent a sample of your target population — giving you balance across criteria such as age, gender, background, and so on.

Panel surveys give you access to the respondents you need and are usually provided by the research company in question. As a result, it’s much easier to get access to the right audiences as you just need to tell the research company your criteria. They’ll then determine the right panels to use to answer your questionnaire.

However, there are downsides. The main one is that if the research company offers its panels incentives, e.g. discounts, coupons, or money, respondents may answer a lot of questionnaires just for the benefits.

This might mean they rush through your survey without providing considered and truthful answers. As a consequence, this can damage the credibility of your data and potentially ruin your analyses.

What are the benefits of using survey research?

Depending on the research method you use, there are lots of benefits to conducting survey research for data collection. Here, we cover a few:

1.   They’re relatively easy to do

Most research surveys are easy to set up, administer and analyze. As long as the planning and survey design is thorough and you target the right audience, the data collection is usually straightforward regardless of which survey type you use.

2.   They can be cost effective

Survey research can be relatively cheap depending on the type of survey you use.

Generally, qualitative research methods that require access to people in person or over the phone are more expensive and require more administration.

Online surveys or mobile surveys are often more cost-effective for market research and can give you access to the global population for a fraction of the cost.

3.   You can collect data from a large sample

Again, depending on the type of survey, you can obtain survey results from an entire population at a relatively low price. You can also administer a large variety of survey types to fit the project you’re running.

4.   You can use survey software to analyze results immediately

Using survey software, you can use advanced statistical analysis techniques to gain insights into your responses immediately.

Analysis can be conducted using a variety of parameters to determine the validity and reliability of your survey data at scale (a minimal reliability sketch follows the last benefit below).

5.   Surveys can collect any type of data

While most people view surveys as a quantitative research method, they can just as easily be adapted to gain qualitative information by simply including open-ended questions or conducting interviews face to face.
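
On the reliability point above: one widely used parameter is Cronbach's alpha, which measures the internal consistency of a multi-item scale as alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale). A minimal sketch with hypothetical item scores:

```python
# Minimal sketch: internal-consistency reliability (Cronbach's alpha) for a
# hypothetical multi-item scale. Rows are respondents, columns are items.
import numpy as np

items = np.array([
    [4, 5, 4],
    [2, 3, 2],
    [5, 5, 4],
    [3, 4, 3],
    [4, 4, 5],
])

k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)       # variance of each item
total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")    # >= 0.7 is a common rule of thumb
```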

How to measure concepts with survey questions

While surveys are a great way to obtain data, that data on its own is useless unless it can be analyzed and developed into actionable insights.

The easiest, and most effective way to measure survey results, is to use a dedicated research tool that puts all of your survey results into one place.

When it comes to survey measurement, there are four measurement types to be aware of that will determine how you treat your different survey results:

Nominal scale

With a nominal scale, you can only keep track of how many respondents chose each option from a question, and which response generated the most selections.

An example of this would be simply asking a respondent to choose a product or brand from a list.

You could find out which brand was chosen the most but have no insight as to why.

Ordinal scale

Ordinal scales are used to judge an order of preference. They do provide some level of quantitative value because you’re asking respondents to choose a preference of one option over another (a rank-aggregation sketch follows these scale definitions).

Ratio scale

Ratio scales can be used to judge the order and difference between responses. For example, asking respondents how much they spend on their weekly shopping on average.

Interval scale

In an interval scale, values are lined up in order with a meaningful difference between the two values — for example, measuring temperature or measuring a credit score between one value and another.
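
Because an ordinal scale defines order but not distance, rank-order responses should be summarized with care. Here is a minimal sketch aggregating a hypothetical rank-order question; the features and ranks are invented for illustration.

```python
# Minimal sketch: aggregating an ordinal rank-order question ("rank these three
# features by preference, 1 = most preferred"). Data are illustrative.
import pandas as pd

ranks = pd.DataFrame({
    "dashboards": [1, 2, 1, 3],
    "reports":    [2, 1, 3, 1],
    "alerts":     [3, 3, 2, 2],
})

# Mean rank per option: lower is better. Since the gaps between ranks are not
# assumed equal, the mean is only a rough heuristic; the median is the more
# defensible ordinal summary.
print(ranks.mean().sort_values())
print(ranks.median().sort_values())
```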

Step by step: How to conduct surveys and collect data

Conducting a survey and collecting data is relatively straightforward, but it does require some careful planning and design to ensure it results in reliable data.

Step 1 – Define your objectives

What do you want to learn from the survey? How is the data going to help you? Having a hypothesis or series of assumptions about survey responses will allow you to create the right questions to test them.

Step 2 – Create your survey questions

Once you’ve got your hypotheses or assumptions, write out the questions you need answering to test your theories or beliefs. Be wary about framing questions that could lead respondents or inadvertently create biased responses .

Step 3 – Choose your question types

Your survey should include a variety of question types and should aim to obtain quantitative data alongside some qualitative responses from open-ended questions. Using a mix of questions (simple yes/no, multiple-choice, rank in order, etc.) not only increases the reliability of your data but also reduces survey fatigue, where respondents rush through questions without thinking.


Step 4 – Test your questions

Before sending your questionnaire out, you should test it (e.g. have a random internal group do the survey) and carry out A/B tests to ensure you’ll gain accurate responses.

Step 5 – Choose your target and send out the survey

Depending on your objectives, you might want to target the general population with your survey or a specific segment of the population. Once you’ve narrowed down who you want to target, it’s time to send out the survey.

After you’ve deployed the survey, keep an eye on the response rate to ensure you’re getting the number you expected. If your response rate is low, you might need to send the survey out to a second group to obtain a large enough sample — or do some troubleshooting to work out why your response rates are so low. This could be down to your questions, delivery method, selected sample, or otherwise.

Step 6 – Analyze results and draw conclusions

Once you’ve got your results back, it’s time for the fun part.

Break down your survey responses using the parameters you’ve set in your objectives and analyze the data to compare to your original assumptions. At this stage, a research tool or software can make the analysis a lot easier — and that’s somewhere Qualtrics can help.
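If you script the analysis yourself rather than using a survey platform, testing an assumption can be as simple as a one-sample proportion test. A minimal sketch, where the hypothesised 60% benchmark and the response counts are entirely hypothetical:

```python
# Test a survey assumption: did at least 60% of respondents prefer the
# new design? All counts below are hypothetical.
from statsmodels.stats.proportion import proportions_ztest

prefer_new = 341    # respondents who chose the new design
n_responses = 520   # total valid responses

# H0: true proportion equals 0.60 (two-sided alternative by default).
stat, p_value = proportions_ztest(count=prefer_new, nobs=n_responses, value=0.60)
print(f"observed proportion: {prefer_new / n_responses:.2%}")
print(f"z = {stat:.2f}, p = {p_value:.3f}")
```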

Get reliable insights with survey software from Qualtrics

Gaining feedback from customers and leads is critical for any business. Data gathered from surveys can prove invaluable for understanding your products and your market position, and with survey software from Qualtrics, it couldn't be easier.

Used by more than 13,000 brands and supporting more than 1 billion surveys a year, Qualtrics empowers everyone in your organization to gather insights and take action. No coding required — and your data is housed in one system.

Get feedback from more than 125 sources on a single platform and view and measure your data in one place to create actionable insights and gain a deeper understanding of your target customers.

Automatically run complex text and statistical analysis to uncover exactly what your survey data is telling you, so you can react in real-time and make smarter decisions.

We can help you with survey management, too. From designing your survey and finding your target respondents to getting your survey in the field and reporting back on the results, we can help you every step of the way.

And for expert market researchers and survey designers, Qualtrics features custom programming to give you total flexibility over question types, survey design, embedded data, and other variables.

No matter what type of survey you want to run, what target audience you want to reach, or what assumptions you want to test or answers you want to uncover, we’ll help you design, deploy and analyze your survey with our team of experts.



9 Survey research

Survey research is a research method involving the use of standardised questionnaires or interviews to collect data about people and their preferences, thoughts, and behaviours in a systematic manner. Although census surveys were conducted as early as Ancient Egypt, survey as a formal research method was pioneered in the 1930–40s by sociologist Paul Lazarsfeld to examine the effects of radio on political opinion formation in the United States. This method has since become a very popular method for quantitative research in the social sciences.

The survey method can be used for descriptive, exploratory, or explanatory research. This method is best suited for studies that have individual people as the unit of analysis. Although other units of analysis, such as groups, organisations or dyads—pairs of organisations, such as buyers and sellers—are also studied using surveys, such studies often use a specific person from each unit as a ‘key informant’ or a ‘proxy’ for that unit. Consequently, such surveys may be subject to respondent bias if the chosen informant does not have adequate knowledge or has a biased opinion about the phenomenon of interest. For instance, Chief Executive Officers may not adequately know employees’ perceptions or teamwork in their own companies, and may therefore be the wrong informant for studies of team dynamics or employee self-esteem.

Survey research has several inherent strengths compared to other research methods. First, surveys are an excellent vehicle for measuring a wide variety of unobservable data, such as people’s preferences (e.g., political orientation), traits (e.g., self-esteem), attitudes (e.g., toward immigrants), beliefs (e.g., about a new law), behaviours (e.g., smoking or drinking habits), or factual information (e.g., income). Second, survey research is also ideally suited for remotely collecting data about a population that is too large to observe directly. A large area—such as an entire country—can be covered by postal, email, or telephone surveys using meticulous sampling to ensure that the population is adequately represented in a small sample. Third, due to their unobtrusive nature and the ability to respond at one’s convenience, questionnaire surveys are preferred by some respondents. Fourth, interviews may be the only way of reaching certain population groups such as the homeless or illegal immigrants for which there is no sampling frame available. Fifth, large sample surveys may allow detection of small effects even while analysing multiple variables, and depending on the survey design, may also allow comparative analysis of population subgroups (i.e., within-group and between-group analysis). Sixth, survey research is more economical in terms of researcher time, effort and cost than other methods such as experimental research and case research. At the same time, survey research also has some unique disadvantages. It is subject to a large number of biases such as non-response bias, sampling bias, social desirability bias, and recall bias, as discussed at the end of this chapter.

Depending on how the data is collected, survey research can be divided into two broad categories: questionnaire surveys (which may be postal, group-administered, or online surveys), and interview surveys (which may be personal, telephone, or focus group interviews). Questionnaires are instruments that are completed in writing by respondents, while interviews are completed by the interviewer based on verbal responses provided by respondents. As discussed below, each type has its own strengths and weaknesses in terms of their costs, coverage of the target population, and researcher’s flexibility in asking questions.

Questionnaire surveys

Invented by Sir Francis Galton, a questionnaire is a research instrument consisting of a set of questions (items) intended to capture responses from respondents in a standardised manner. Questions may be unstructured or structured. Unstructured questions ask respondents to provide a response in their own words, while structured questions ask respondents to select an answer from a given set of choices. Subjects’ responses to individual questions (items) on a structured questionnaire may be aggregated into a composite scale or index for statistical analysis. Questions should be designed in such a way that respondents are able to read, understand, and respond to them in a meaningful way, and hence the survey method may not be appropriate or practical for certain demographic groups such as children or the illiterate.
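To illustrate the aggregation of individual items into a composite scale mentioned above, here is a minimal sketch; the item names (q1 to q5), the 1-to-5 response coding, and the choice of q4 as a negatively worded item are all hypothetical:

```python
# Aggregate structured questionnaire items into a composite index.
import pandas as pd

responses = pd.DataFrame({
    "q1": [4, 5, 2], "q2": [3, 4, 2], "q3": [5, 5, 1],
    "q4": [2, 1, 4],   # negatively worded item: needs reverse-coding
    "q5": [4, 4, 2],
})

# Reverse-code the negatively worded item on a 1-5 scale: new = 6 - old.
responses["q4"] = 6 - responses["q4"]

# The composite scale is the mean of the items for each respondent.
responses["composite"] = responses[["q1", "q2", "q3", "q4", "q5"]].mean(axis=1)
print(responses["composite"])
```

In practice you would also check the internal consistency of the items (e.g., with Cronbach's alpha) before treating the composite as a single measure.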

Most questionnaire surveys tend to be self-administered postal surveys, where the same questionnaire is posted to a large number of people, and willing respondents can complete the survey at their convenience and return it in prepaid envelopes. Postal surveys are advantageous in that they are unobtrusive and inexpensive to administer, since bulk postage is cheap in most countries. However, response rates from postal surveys tend to be quite low since most people ignore survey requests. There may also be long delays (several months) in respondents' completing and returning the survey, or they may even simply lose it. Hence, the researcher must continuously monitor responses as they are returned, track non-respondents, and send them repeated reminders (two or three reminders at intervals of one to one and a half months is ideal). Questionnaire surveys are also not well-suited for issues that require clarification on the part of the respondent or those that require detailed written responses. Longitudinal designs can be used to survey the same set of respondents at different times, but response rates tend to fall precipitously from one survey to the next.

A second type of survey is a group-administered questionnaire. A sample of respondents is brought together at a common place and time, and each respondent is asked to complete the survey questionnaire while in that room. Respondents enter their responses independently without interacting with one another. This format is convenient for the researcher, and a high response rate is assured. If respondents do not understand any specific question, they can ask for clarification. In many organisations, it is relatively easy to assemble a group of employees in a conference room or lunch room, especially if the survey is approved by corporate executives.

A more recent type of questionnaire survey is an online or web survey. These surveys are administered over the Internet using interactive forms. Respondents may receive an email request for participation in the survey with a link to a website where the survey may be completed. Alternatively, the survey may be embedded into an email, and can be completed and returned via email. These surveys are very inexpensive to administer, results are instantly recorded in an online database, and the survey can be easily modified if needed. However, if the survey website is not password-protected or designed to prevent multiple submissions, the responses can be easily compromised. Furthermore, sampling bias may be a significant issue since the survey cannot reach people who do not have computer or Internet access, such as many of the poor, senior, and minority groups, and the respondent sample is skewed toward a younger demographic who are online much of the time and have the time and ability to complete such surveys. Computing the response rate may be problematic if the survey link is posted on LISTSERVs or bulletin boards instead of being emailed directly to targeted respondents. For these reasons, many researchers prefer dual-media surveys (e.g., postal survey and online survey), allowing respondents to select their preferred method of response.

Constructing a survey questionnaire is an art. Numerous decisions must be made about the content of questions, their wording, format, and sequencing, all of which can have important consequences for the survey responses.

Response formats. Survey questions may be structured or unstructured. Responses to structured questions are captured using one of the following response formats:

Dichotomous response, where respondents are asked to select one of two possible choices, such as true/false, yes/no, or agree/disagree. An example of such a question is: Do you think that the death penalty is justified under some circumstances? (circle one): yes / no.

Nominal response, where respondents are presented with more than two unordered options, such as: What is your industry of employment?: manufacturing / consumer services / retail / education / healthcare / tourism and hospitality / other.

Ordinal response, where respondents have more than two ordered options, such as: What is your highest level of education?: high school / bachelor's degree / postgraduate degree.

Interval-level response, where respondents are presented with a 5-point or 7-point Likert scale, semantic differential scale, or Guttman scale. Each of these scale types was discussed in a previous chapter.

Continuous response, where respondents enter a continuous (ratio-scaled) value with a meaningful zero point, such as their age or tenure in a firm. These responses generally tend to be of the fill-in-the-blank type.
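Each response format also defines what counts as a valid answer, which matters when capturing or cleaning data. A hedged sketch of format validation, reusing the example questions above (the option sets are abbreviated and purely illustrative):

```python
# Validate one answer against its response format. Option sets are
# illustrative, not a complete survey specification.
def validate(fmt: str, answer) -> bool:
    if fmt == "dichotomous":
        return answer in {"yes", "no"}
    if fmt == "nominal":
        return answer in {"manufacturing", "consumer services", "retail",
                          "education", "healthcare", "other"}
    if fmt == "ordinal":
        return answer in {"high school", "bachelor's degree", "postgraduate degree"}
    if fmt == "interval":     # e.g. a 5-point Likert item
        return isinstance(answer, int) and 1 <= answer <= 5
    if fmt == "continuous":   # e.g. age or tenure, with a true zero
        return isinstance(answer, (int, float)) and answer >= 0
    raise ValueError(f"unknown format: {fmt}")

print(validate("dichotomous", "yes"))  # True
print(validate("interval", 7))         # False: outside the 1-5 scale
```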

Question content and wording. Responses obtained in survey research are very sensitive to the types of questions asked. Poorly framed or ambiguous questions will likely result in meaningless responses with very little value. Dillman (1978) [1] recommends several rules for creating good survey questions. Every single question in a survey should be carefully scrutinised for the following issues:

Is the question clear and understandable? Survey questions should be stated in very simple language, preferably in active voice, and without complicated words or jargon that may not be understood by a typical respondent. All questions in the questionnaire should be worded in a similar manner to make it easy for respondents to read and understand them. The only exception is if your survey is targeted at a specialised group of respondents, such as doctors, lawyers and researchers, who use such jargon in their everyday environment.

Is the question worded in a negative manner? Negatively worded questions such as 'Should your local government not raise taxes?' tend to confuse many respondents and lead to inaccurate responses. Double-negatives should be avoided when designing survey questions.

Is the question ambiguous? Survey questions should not use words or expressions that may be interpreted differently by different respondents (e.g., words like 'any' or 'just'). For instance, if you ask a respondent, 'What is your annual income?', it is unclear whether you are referring to salary and wages alone or also to dividend, rental, and other income, and whether you mean personal income, family income (including a spouse's wages), or personal and business income. Different interpretations by different respondents will lead to incomparable responses that cannot be interpreted correctly.

Does the question have biased or value-laden words? Bias refers to any property of a question that encourages subjects to answer in a certain way. Kenneth Rasinski (1989) [2] examined several studies on people's attitudes toward government spending, and observed that respondents tend to indicate stronger support for 'assistance to the poor' and less for 'welfare', even though both terms have the same meaning. In his study, more support was also observed for 'halting rising crime rate' than for 'law enforcement', for 'solving problems of big cities' than for 'assistance to big cities', and for 'dealing with drug addiction' than for 'drug rehabilitation'. Biased language or tone tends to skew observed responses. It is often difficult to anticipate biased wording in advance, but to the greatest extent possible, survey questions should be carefully scrutinised to avoid biased language.

Is the question double-barrelled? Double-barrelled questions ask about two things at once, so a single answer cannot cover both. For example, 'Are you satisfied with the hardware and software provided for your work?'. How should a respondent answer if they are satisfied with the hardware but not with the software, or vice versa? It is always advisable to separate double-barrelled questions into separate questions: 'Are you satisfied with the hardware provided for your work?' and 'Are you satisfied with the software provided for your work?'. Another example: 'Does your family favour public television?'. Some people may favour public TV for themselves, but favour certain cable TV programs such as Sesame Street for their children.

Is the question too general? Sometimes, questions that are too general may not accurately convey respondents' perceptions. If you asked someone how they liked a certain book on a response scale ranging from 'not at all' to 'extremely well', and that person selected 'extremely well', what does that mean? Instead, ask more specific behavioural questions, such as, 'Will you recommend this book to others?' or 'Do you plan to read other books by the same author?'. Likewise, instead of asking, 'How big is your firm?' (which may be interpreted differently by respondents), ask, 'How many people work for your firm?' and/or 'What is the annual revenue of your firm?', which are both measures of firm size.

Is the question too detailed? Avoid unnecessarily detailed questions that serve no specific research purpose. For instance, do you need the age of each child in a household, or is the number of children in the household enough? If unsure, however, it is better to err on the side of detail than generality.

Is the question presumptuous? If you ask, 'What do you see as the benefits of a tax cut?', you are presuming that the respondent sees the tax cut as beneficial. Many people may not view tax cuts as being beneficial, because tax cuts generally lead to less funding for public schools, larger class sizes, and fewer public services such as police, ambulance, and fire services. Avoid questions with built-in presumptions.

Is the question imaginary? A popular question on many television game shows is, 'If you win a million dollars on this show, how will you spend it?'. Most respondents have never faced such an amount of money and have never thought about it (they may not even know that after taxes they would receive only about $640,000 or so in the United States, often spread over a 20-year period), and so their answers tend to be quite random: take a tour around the world, buy a restaurant or bar, spend it on education, save for retirement, help their parents or children, or have a lavish wedding. Imaginary questions have imaginary answers, which cannot be used for making scientific inferences.

Do respondents have the information needed to correctly answer the question? Often, we assume that subjects have the necessary information to answer a question when, in reality, they do not. Even if a response is obtained, such responses tend to be inaccurate given the subjects' lack of knowledge about the question being asked. For instance, we should not ask the CEO of a company about day-to-day operational details that they may not be aware of, ask teachers how much their students are learning, or ask high-schoolers, 'Do you think the US Government acted appropriately in the Bay of Pigs crisis?'.

Question sequencing. In general, questions should flow logically from one to the next. To achieve the best response rates, questions should flow from the least sensitive to the most sensitive, from the factual and behavioural to the attitudinal, and from the more general to the more specific. Some general rules for question sequencing:

Start with easy non-threatening questions that can be easily recalled. Good options are demographics (age, gender, education level) for individual-level surveys and firmographics (employee count, annual revenues, industry) for firm-level surveys.

Never start with an open-ended question.

If following a historical sequence of events, follow a chronological order from earliest to latest.

Ask about one topic at a time. When switching topics, use a transition, such as, ‘The next section examines your opinions about…’

Use filter or contingency questions as needed, such as, 'If you answered "yes" to question 5, please proceed to Section 2. If you answered "no", go to Section 3' (a minimal routing sketch follows this list).
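A minimal sketch of that routing rule in code, with the section contents left as placeholders:

```python
# Contingency routing: 'yes' to question 5 sends the respondent to
# Section 2, 'no' to Section 3. Section names are placeholders.
def next_section(q5_answer: str) -> str:
    return "Section 2" if q5_answer.lower() == "yes" else "Section 3"

print(next_section("yes"))  # Section 2
print(next_section("no"))   # Section 3
```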

Other golden rules . Do unto your respondents what you would have them do unto you. Be attentive and appreciative of respondents’ time, attention, trust, and confidentiality of personal information. Always practice the following strategies for all survey research:

People’s time is valuable. Be respectful of their time. Keep your survey as short as possible and limit it to what is absolutely necessary. Respondents do not like spending more than 10-15 minutes on any survey, no matter how important it is. Longer surveys tend to dramatically lower response rates.

Always assure respondents about the confidentiality of their responses, and how you will use their data (e.g., for academic research) and how the results will be reported (usually, in the aggregate).

For organisational surveys, assure respondents that you will send them a copy of the final results, and make sure that you follow through on that promise.

Thank your respondents for their participation in your study.

Finally, always pretest your questionnaire, at least using a convenience sample, before administering it to respondents in a field setting. Such pretesting may uncover ambiguity, lack of clarity, or biases in question wording, which should be eliminated before administering to the intended sample.

Interview survey

Interviews are a more personalised data collection method than questionnaires, and are conducted by trained interviewers using the same research protocol as questionnaire surveys (i.e., a standardised set of questions). However, unlike a questionnaire, the interview script may contain special instructions for the interviewer that are not seen by respondents, and may include space for the interviewer to record personal observations and comments. In addition, unlike postal surveys, the interviewer has the opportunity to clarify any issues raised by the respondent or ask probing or follow-up questions. However, interviews are time-consuming and resource-intensive. Interviewers need special interviewing skills as they are considered to be part of the measurement instrument, and must proactively strive not to artificially bias the observed responses.

The most typical form of interview is a personal or face-to-face interview, where the interviewer works directly with the respondent to ask questions and record their responses. Personal interviews may be conducted at the respondent's home or office location. This approach may even be favoured by some respondents, while others may feel uncomfortable allowing a stranger into their homes. However, skilled interviewers can persuade respondents to co-operate, dramatically improving response rates.

A variation of the personal interview is a group interview, also called a focus group. In this technique, a small group of respondents (usually 6–10) are interviewed together in a common location. The interviewer is essentially a facilitator whose job is to lead the discussion, and ensure that every person has an opportunity to respond. Focus groups allow deeper examination of complex issues than other forms of survey research, because when people hear others talk, it often triggers responses or ideas that they did not think about before. However, focus group discussion may be dominated by a strong personality, and some individuals may be reluctant to voice their opinions in front of their peers or superiors, especially while dealing with a sensitive issue such as employee underperformance or office politics. Because of their small sample size, focus groups are usually used for exploratory research rather than descriptive or explanatory research.

A third type of interview survey is a telephone interview. In this technique, interviewers contact potential respondents over the phone, typically based on a random selection of people from a telephone directory, to ask a standard set of survey questions. A more recent and technologically advanced approach is computer-assisted telephone interviewing (CATI), which is increasingly being used by academic, government, and commercial survey researchers. Here, the interviewer is a telephone operator who is guided through the interview process by a computer program displaying instructions and questions to be asked. The system also selects respondents randomly using a random digit dialling technique, and records responses using voice capture technology. Once respondents are on the phone, higher response rates can be obtained. This technique is not ideal for rural areas where telephone density is low, and also cannot be used for communicating non-audio information such as graphics or product demonstrations.
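The core idea of random digit dialling is simple to sketch: append random digits to known area-code and exchange prefixes so that unlisted numbers can also be reached. The prefixes below are made up, and real CATI systems additionally filter out invalid or unassigned number ranges:

```python
# Toy random digit dialling (RDD) generator; prefixes are hypothetical.
import random

KNOWN_PREFIXES = ["212-555", "415-555"]  # area code + exchange

def random_digit_dial(rng: random.Random) -> str:
    prefix = rng.choice(KNOWN_PREFIXES)
    suffix = f"{rng.randrange(10_000):04d}"  # random last four digits
    return f"{prefix}-{suffix}"

rng = random.Random(42)
print([random_digit_dial(rng) for _ in range(3)])
```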

Role of interviewer. The interviewer has a complex and multi-faceted role in the interview process, which includes the following tasks:

Prepare for the interview: Since the interviewer is in the forefront of the data collection effort, the quality of data collected depends heavily on how well the interviewer is trained to do the job. The interviewer must be trained in the interview process and the survey method, and also be familiar with the purpose of the study, how responses will be stored and used, and sources of interviewer bias. They should also rehearse and time the interview prior to the formal study.

Locate and enlist the co-operation of respondents: Particularly in personal, in-home surveys, the interviewer must locate specific addresses, and work around respondents’ schedules at sometimes undesirable times such as during weekends. They should also be like a salesperson, selling the idea of participating in the study.

Motivate respondents: Respondents often feed off the motivation of the interviewer. If the interviewer is disinterested or inattentive, respondents will not be motivated to provide useful or informative responses either. The interviewer must demonstrate enthusiasm about the study, communicate the importance of the research to respondents, and be attentive to respondents’ needs throughout the interview.

Clarify any confusion or concerns: Interviewers must be able to think on their feet and address unanticipated concerns or objections raised by respondents to the respondents’ satisfaction. Additionally, they should ask probing questions as necessary even if such questions are not in the script.

Observe quality of response: The interviewer is in the best position to judge the quality of information collected, and may supplement responses obtained using personal observations of gestures or body language as appropriate.

Conducting the interview. Before the interview, the interviewer should prepare a kit to carry to the interview session, consisting of a cover letter from the principal investigator or sponsor, adequate copies of the survey instrument, photo identification, and a telephone number for respondents to call to verify the interviewer's authenticity. The interviewer should also try to call respondents ahead of time to set up an appointment if possible. To start the interview, they should speak in an imperative and confident tone, such as, 'I'd like to take a few minutes of your time to interview you for a very important study', instead of, 'May I come in to do an interview?'. They should introduce themselves, present personal credentials, explain the purpose of the study in one to two sentences, and assure respondents that their participation is voluntary, and their comments are confidential, all in less than a minute. No big words or jargon should be used, and no details should be provided unless specifically requested. If the interviewer wishes to record the interview, they should ask for respondents' explicit permission before doing so. Even if the interview is recorded, the interviewer must take notes on key issues, probes, or verbatim phrases.

During the interview, the interviewer should follow the questionnaire script and ask questions exactly as written, and not change the words to make the question sound friendlier. They should also not change the order of questions or skip any question that may have been answered earlier. Any issues with the questions should be discussed during rehearsal prior to the actual interview sessions. The interviewer should not finish the respondent’s sentences. If the respondent gives a brief cursory answer, the interviewer should probe the respondent to elicit a more thoughtful, thorough response. Some useful probing techniques are:

The silent probe: Just pausing and waiting without moving on to the next question may suggest to respondents that the interviewer is waiting for a more detailed response.

Overt encouragement: An occasional 'uh-huh' or 'okay' may encourage the respondent to go into greater detail. However, the interviewer must not express approval or disapproval of what the respondent says.

Ask for elaboration: Such as, ‘Can you elaborate on that?’ or ‘A minute ago, you were talking about an experience you had in high school. Can you tell me more about that?’.

Reflection: The interviewer can try the psychotherapist’s trick of repeating what the respondent said. For instance, ‘What I’m hearing is that you found that experience very traumatic’ and then pause and wait for the respondent to elaborate.

After the interview is completed, the interviewer should thank respondents for their time, tell them when to expect the results, and not leave hastily. Immediately after leaving, they should write down any notes or key observations that may help interpret the respondent’s comments better.

Biases in survey research

Despite all of its strengths and advantages, survey research is often tainted with systematic biases that may invalidate some of the inferences derived from such surveys. Five such biases are the non-response bias, sampling bias, social desirability bias, recall bias, and common method bias.

Non-response bias. Survey research is generally notorious for its low response rates. A response rate of 15-20 per cent is typical in a postal survey, even after two or three reminders. If the majority of the targeted respondents fail to respond to a survey, this may indicate a systematic reason for the low response rate, which may in turn raise questions about the validity of the study’s results. For instance, dissatisfied customers tend to be more vocal about their experience than satisfied customers, and are therefore more likely to respond to questionnaire surveys or interview requests than satisfied customers. Hence, any respondent sample is likely to have a higher proportion of dissatisfied customers than the underlying population from which it is drawn. In this instance, not only will the results lack generalisability, but the observed outcomes may also be an artefact of the biased sample. Several strategies may be employed to improve response rates:

Advance notification: Sending a short letter to the targeted respondents soliciting their participation in an upcoming survey can prepare them in advance and improve their propensity to respond. The letter should state the purpose and importance of the study, mode of data collection (e.g., via a phone call, a survey form in the mail, etc.), and appreciation for their co-operation. A variation of this technique may be to ask the respondent to return a prepaid postcard indicating whether or not they are willing to participate in the study.

Relevance of content: People are more likely to respond to surveys examining issues of relevance or importance to them.

Respondent-friendly questionnaire: Shorter survey questionnaires tend to elicit higher response rates than longer questionnaires. Furthermore, questions that are clear, non-offensive, and easy to respond to tend to attract higher response rates.

Endorsement: For organisational surveys, it helps to gain endorsement from a senior executive attesting to the importance of the study to the organisation. Such endorsement can be in the form of a cover letter or a letter of introduction, which can improve the researcher’s credibility in the eyes of the respondents.

Follow-up requests: Multiple follow-up requests may coax some non-respondents to respond, even if their responses are late.

Interviewer training: Response rates for interviews can be improved with skilled interviewers trained in how to request interviews, use computerised dialling techniques to identify potential respondents, and schedule call-backs for respondents who could not be reached.

Incentives: Incentives in the form of cash or gift cards, giveaways such as pens or stress balls, entry into a lottery, draw or contest, discount coupons, promise of contribution to charity, and so forth may increase response rates.

Non-monetary incentives: Businesses, in particular, are more prone to respond to non-monetary incentives than financial incentives. An example of such a non-monetary incentive is a benchmarking report comparing the business’s individual response against the aggregate of all responses to a survey.

Confidentiality and privacy: Finally, assurances that respondents' private data or responses will not fall into the hands of any third party may help improve response rates.

Sampling bias. Telephone surveys conducted by calling a random sample of publicly available telephone numbers will systematically exclude people with unlisted telephone numbers, mobile phone numbers, and people who are unable to answer the phone when the survey is being conducted—for instance, if they are at work—and will include a disproportionate number of respondents who have landline telephone services with listed phone numbers and people who are home during the day, such as the unemployed, the disabled, and the elderly. Likewise, online surveys tend to include a disproportionate number of students and younger people who are constantly on the Internet, and systematically exclude people with limited or no access to computers or the Internet, such as the poor and the elderly. Similarly, questionnaire surveys tend to exclude children and the illiterate, who are unable to read, understand, or meaningfully respond to the questionnaire. A different kind of sampling bias relates to sampling the wrong population, such as asking teachers (or parents) about their students’ (or children’s) academic learning, or asking CEOs about operational details in their company. Such biases make the respondent sample unrepresentative of the intended population and hurt generalisability claims about inferences drawn from the biased sample.
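When the demographic skew of a sample is known, post-stratification weighting can partially correct for it (it cannot repair pure coverage problems, such as groups the survey never reached). A sketch with hypothetical population shares and responses:

```python
# Post-stratification weights: up-weight under-represented groups.
# All shares and responses below are hypothetical.
import pandas as pd

sample = pd.DataFrame({
    "age_group": ["18-34"] * 60 + ["35-64"] * 30 + ["65+"] * 10,
    "supports_policy": [1, 0] * 50,
})

population_share = {"18-34": 0.30, "35-64": 0.45, "65+": 0.25}
sample_share = sample["age_group"].value_counts(normalize=True)

# Weight = population share / sample share for each respondent's group.
sample["weight"] = sample["age_group"].map(lambda g: population_share[g] / sample_share[g])

unweighted = sample["supports_policy"].mean()
weighted = (sample["supports_policy"] * sample["weight"]).sum() / sample["weight"].sum()
print(f"unweighted: {unweighted:.2%}, weighted: {weighted:.2%}")
```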

Social desirability bias. Many respondents tend to avoid negative opinions or embarrassing comments about themselves, their employers, family, or friends. With negative questions such as, 'Do you think that your project team is dysfunctional?', 'Is there a lot of office politics in your workplace?', or 'Have you ever illegally downloaded music files from the Internet?', the researcher may not get truthful responses. This tendency among respondents to 'spin the truth' in order to portray themselves in a socially desirable manner is called the 'social desirability bias', which hurts the validity of responses obtained from survey research. There is practically no way of overcoming the social desirability bias in a questionnaire survey, but in an interview setting, an astute interviewer may be able to spot inconsistent answers and ask probing questions or use personal observations to supplement respondents' comments.

Recall bias. Responses to survey questions often depend on subjects' motivation, memory, and ability to respond. Particularly when dealing with events that happened in the distant past, respondents may not adequately remember their own motivations or behaviours, or perhaps their memory of such events may have evolved with time and no longer be retrievable. For instance, if a respondent is asked to describe their utilisation of computer technology one year ago, or even memorable childhood events like birthdays, their response may not be accurate due to difficulties with recall. One possible way of overcoming the recall bias is by anchoring the respondent's memory in specific events as they happened, rather than asking them to recall their perceptions and motivations from memory.

Common method bias. Common method bias refers to the amount of spurious covariance shared between independent and dependent variables that are measured at the same point in time, such as in a cross-sectional survey, using the same instrument, such as a questionnaire. In such cases, the phenomenon under investigation may not be adequately separated from measurement artefacts. Standard statistical tests are available to test for common method bias, such as Harman's single-factor test (Podsakoff, MacKenzie, Lee & Podsakoff, 2003), [3] Lindell and Whitney's (2001) [4] marker variable technique, and so forth. This bias can potentially be avoided if the independent and dependent variables are measured at different points in time using a longitudinal survey design, or if these variables are measured using different methods, such as computerised recording of the dependent variable versus questionnaire-based self-rating of the independent variables.
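Harman's single-factor test is straightforward to sketch: extract an unrotated first factor from all survey items and check how much of the total variance it explains. The version below uses PCA as a common stand-in for unrotated factor extraction, on random placeholder data:

```python
# Rough Harman's single-factor check using PCA on placeholder data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
items = rng.normal(size=(200, 12))  # 200 respondents x 12 survey items

pca = PCA().fit(items)
first_factor_share = pca.explained_variance_ratio_[0]

# Rule of thumb: if a single factor explains the majority of the variance,
# common method bias may be a concern.
print(f"first factor explains {first_factor_share:.1%} of total variance")
```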

  • Dillman, D. (1978). Mail and telephone surveys: The total design method. New York: Wiley.
  • Rasinski, K. (1989). The effect of question wording on public support for government spending. Public Opinion Quarterly, 53(3), 388–394.
  • Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879–903. http://dx.doi.org/10.1037/0021-9010.88.5.879
  • Lindell, M. K., & Whitney, D. J. (2001). Accounting for common method variance in cross-sectional research designs. Journal of Applied Psychology, 86(1), 114–121.

Social Science Research: Principles, Methods and Practices (Revised edition) Copyright © 2019 by Anol Bhattacherjee is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.


Doing Survey Research | A Step-by-Step Guide & Examples

Published on 6 May 2022 by Shona McCombes. Revised on 10 October 2022.

Survey research means collecting information about a group of people by asking them questions and analysing the results. To conduct an effective survey, follow these six steps:

  • Determine who will participate in the survey
  • Decide the type of survey (mail, online, or in-person)
  • Design the survey questions and layout
  • Distribute the survey
  • Analyse the responses
  • Write up the results

Surveys are a flexible method of data collection that can be used in many different types of research.

Table of contents

  • What are surveys used for?
  • Step 1: Define the population and sample
  • Step 2: Decide on the type of survey
  • Step 3: Design the survey questions
  • Step 4: Distribute the survey and collect responses
  • Step 5: Analyse the survey results
  • Step 6: Write up the survey results
  • Frequently asked questions about surveys

Surveys are used as a method of gathering data in many different fields. They are a good choice when you want to find out about the characteristics, preferences, opinions, or beliefs of a group of people.

Common uses of survey research include:

  • Social research: Investigating the experiences and characteristics of different social groups
  • Market research: Finding out what customers think about products, services, and companies
  • Health research: Collecting data from patients about symptoms and treatments
  • Politics: Measuring public opinion about parties and policies
  • Psychology: Researching personality traits, preferences, and behaviours

Surveys can be used in both cross-sectional studies, where you collect data just once, and longitudinal studies, where you survey the same sample several times over an extended period.


Before you start conducting survey research, you should already have a clear research question that defines what you want to find out. Based on this question, you need to determine exactly who you will target to participate in the survey.

Populations

The target population is the specific group of people that you want to find out about. This group can be very broad or relatively narrow. For example:

  • The population of Brazil
  • University students in the UK
  • Second-generation immigrants in the Netherlands
  • Customers of a specific company aged 18 to 24
  • British transgender women over the age of 50

Your survey should aim to produce results that can be generalised to the whole population. That means you need to carefully define exactly who you want to draw conclusions about.

It’s rarely possible to survey the entire population of your research – it would be very difficult to get a response from every person in Brazil or every university student in the UK. Instead, you will usually survey a sample from the population.

The sample size you need depends on the size of the population and on how precise you want your results to be. You can use an online sample size calculator to work out how many responses you need.

There are many sampling methods that allow you to generalise to broad populations. In general, though, the sample should aim to be representative of the population as a whole. The larger and more representative your sample, the more valid your conclusions.
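The arithmetic behind most online sample size calculators is simple. A minimal sketch using Cochran's formula for proportions, with a finite population correction for smaller populations (the defaults assume a 95% confidence level, maximum variability, and a 5% margin of error):

```python
import math

def cochran_sample_size(z: float = 1.96, p: float = 0.5, e: float = 0.05) -> int:
    """z: z-score for the confidence level (1.96 ~ 95%),
    p: expected proportion (0.5 is the most conservative choice),
    e: desired margin of error."""
    return math.ceil((z ** 2) * p * (1 - p) / (e ** 2))

def finite_population_correction(n0: int, population: int) -> int:
    # Adjust the required sample downward when the population is small.
    return math.ceil(n0 / (1 + (n0 - 1) / population))

n0 = cochran_sample_size()                                    # 385
print(n0, finite_population_correction(n0, population=2000))  # 385 323
```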

There are two main types of survey:

  • A questionnaire, where a list of questions is distributed by post, online, or in person, and respondents fill it out themselves
  • An interview , where the researcher asks a set of questions by phone or in person and records the responses

Which type you choose depends on the sample size and location, as well as the focus of the research.

Questionnaires

Sending out a paper survey by post is a common method of gathering demographic information (for example, in a government census of the population).

  • You can easily access a large sample.
  • You have some control over who is included in the sample (e.g., residents of a specific region).
  • The response rate is often low.

Online surveys are a popular choice for students doing dissertation research, due to the low cost and flexibility of this method. There are many online tools available for constructing surveys, such as SurveyMonkey and Google Forms.

  • You can quickly access a large sample without constraints on time or location.
  • The data is easy to process and analyse.
  • The anonymity and accessibility of online surveys mean you have less control over who responds.

If your research focuses on a specific location, you can distribute a written questionnaire to be completed by respondents on the spot. For example, you could approach the customers of a shopping centre or ask all students to complete a questionnaire at the end of a class.

  • You can screen respondents to make sure only people in the target population are included in the sample.
  • You can collect time- and location-specific data (e.g., the opinions of a shop’s weekday customers).
  • The sample size will be smaller, so this method is less suitable for collecting data on broad populations.

Oral interviews are a useful method for smaller sample sizes. They allow you to gather more in-depth information on people’s opinions and preferences. You can conduct interviews by phone or in person.

  • You have personal contact with respondents, so you know exactly who will be included in the sample in advance.
  • You can clarify questions and ask for follow-up information when necessary.
  • The lack of anonymity may cause respondents to answer less honestly, and there is more risk of researcher bias.

Like questionnaires, interviews can be used to collect quantitative data: the researcher records each response as a category or rating and statistically analyses the results. But they are more commonly used to collect qualitative data: the interviewees' full responses are transcribed and analysed individually to gain a richer understanding of their opinions and feelings.

Next, you need to decide which questions you will ask and how you will ask them. It’s important to consider:

  • The type of questions
  • The content of the questions
  • The phrasing of the questions
  • The ordering and layout of the survey

Open-ended vs closed-ended questions

There are two main forms of survey questions: open-ended and closed-ended. Many surveys use a combination of both.

Closed-ended questions give the respondent a predetermined set of answers to choose from. A closed-ended question can include:

  • A binary answer (e.g., yes/no or agree/disagree )
  • A scale (e.g., a Likert scale with five points ranging from strongly agree to strongly disagree )
  • A list of options with a single answer possible (e.g., age categories)
  • A list of options with multiple answers possible (e.g., leisure interests)

Closed-ended questions are best for quantitative research. They provide you with numerical data that can be statistically analysed to find patterns, trends, and correlations.

Open-ended questions are best for qualitative research. This type of question has no predetermined answers to choose from. Instead, the respondent answers in their own words.

Open questions are most common in interviews, but you can also use them in questionnaires. They are often useful as follow-up questions to ask for more detailed explanations of responses to the closed questions.

The content of the survey questions

To ensure the validity and reliability of your results, you need to carefully consider each question in the survey. All questions should be narrowly focused with enough context for the respondent to answer accurately. Avoid questions that are not directly relevant to the survey’s purpose.

When constructing closed-ended questions, ensure that the options cover all possibilities. If you include a list of options that isn’t exhaustive, you can add an ‘other’ field.

Phrasing the survey questions

In terms of language, the survey questions should be as clear and precise as possible. Tailor the questions to your target population, keeping in mind their level of knowledge of the topic.

Use language that respondents will easily understand, and avoid words with vague or ambiguous meanings. Make sure your questions are phrased neutrally, with no bias towards one answer or another.

Ordering the survey questions

The questions should be arranged in a logical order. Start with easy, non-sensitive, closed-ended questions that will encourage the respondent to continue.

If the survey covers several different topics or themes, group together related questions. You can divide a questionnaire into sections to help respondents understand what is being asked in each part.

If a question refers back to or depends on the answer to a previous question, they should be placed directly next to one another.

Before you start, create a clear plan for where, when, how, and with whom you will conduct the survey. Determine in advance how many responses you require and how you will gain access to the sample.

When you are satisfied that you have created a strong research design suitable for answering your research questions, you can conduct the survey through your method of choice – by post, online, or in person.

There are many methods of analysing the results of your survey. First you have to process the data, usually with the help of a computer program to sort all the responses. You should also cleanse the data by removing incomplete or incorrectly completed responses.

If you asked open-ended questions, you will have to code the responses by assigning labels to each response and organising them into categories or themes. You can also use more qualitative methods, such as thematic analysis, which is especially suitable for analysing interviews.

Statistical analysis is usually conducted using programs like SPSS or Stata. The same set of survey data can be subject to many analyses.
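As one concrete example of such an analysis outside SPSS or Stata, a chi-square test of association between two closed-ended questions takes only a few lines; the contingency table below is hypothetical:

```python
# Chi-square test of association between two closed-ended questions.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: age group (18-34, 35+); columns: preferred channel (online, in store).
observed = np.array([[120, 80],
                     [60, 140]])

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.1f}, p = {p_value:.4f}, dof = {dof}")
```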

Finally, when you have collected and analysed all the necessary data, you will write it up as part of your thesis, dissertation, or research paper.

In the methodology section, you describe exactly how you conducted the survey. You should explain the types of questions you used, the sampling method, when and where the survey took place, and the response rate. You can include the full questionnaire as an appendix and refer to it in the text if relevant.

Then introduce the analysis by describing how you prepared the data and the statistical methods you used to analyse it. In the results section, you summarise the key results from your analysis.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviours. It is made up of four or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey , you present participants with Likert-type questions or statements, and a continuum of items, usually with five or seven possible responses, to capture their degree of agreement.

Individual Likert-type questions are generally considered ordinal data, because the items have a clear rank order but the intervals between response options aren't necessarily even.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.

The type of data determines what statistical tests you should use to analyse your data.
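A short sketch of how that choice plays out for Likert data, using random placeholder responses: a single ordinal item is compared with a non-parametric test, while a summed scale score treated as interval data gets a parametric test:

```python
# Matching the statistical test to the measurement level of Likert data.
import numpy as np
from scipy.stats import mannwhitneyu, ttest_ind

rng = np.random.default_rng(1)

# Single 1-5 Likert item per group -> ordinal -> Mann-Whitney U test.
group_a_item = rng.integers(1, 6, size=50)
group_b_item = rng.integers(1, 6, size=50)
print(mannwhitneyu(group_a_item, group_b_item))

# Summed multi-item scale score -> approximately interval -> t-test.
group_a_scale = rng.normal(20, 4, size=50)
group_b_scale = rng.normal(22, 4, size=50)
print(ttest_ind(group_a_scale, group_b_scale))
```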

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analysing data from people using questionnaires.



Chapter 9: Survey Research

Overview of Survey Research

Learning Objectives

  • Define what survey research is, including its two important characteristics.
  • Describe several different ways that survey research can be used and give some examples.

What Is Survey Research?

Survey research is a quantitative and qualitative method with two important characteristics. First, the variables of interest are measured using self-reports. In essence, survey researchers ask their participants (who are often called respondents in survey research) to report directly on their own thoughts, feelings, and behaviours. Second, considerable attention is paid to the issue of sampling. In particular, survey researchers have a strong preference for large random samples because they provide the most accurate estimates of what is true in the population. In fact, survey research may be the only approach in psychology in which random sampling is routinely used. Beyond these two characteristics, almost anything goes in survey research. Surveys can be long or short. They can be conducted in person, by telephone, through the mail, or over the Internet. They can be about voting intentions, consumer preferences, social attitudes, health, or anything else that it is possible to ask people about and receive meaningful answers. Although survey data are often analyzed using statistics, there are many questions that lend themselves to more qualitative analysis.

Most survey research is nonexperimental. It is used to describe single variables (e.g., the percentage of voters who prefer one presidential candidate or another, the prevalence of schizophrenia in the general population) and also to assess statistical relationships between variables (e.g., the relationship between income and health). But surveys can also be experimental. The study by Lerner and her colleagues is a good example. Their use of self-report measures and a large national sample identifies their work as survey research. But their manipulation of an independent variable (anger vs. fear) to assess its effect on a dependent variable (risk judgments) also identifies their work as experimental.

History and Uses of Survey Research

Survey research may have its roots in English and American "social surveys" conducted around the turn of the 20th century by researchers and reformers who wanted to document the extent of social problems such as poverty (Converse, 1987) [1]. By the 1930s, the US government was conducting surveys to document economic and social conditions in the country. The need to draw conclusions about the entire population helped spur advances in sampling procedures. At about the same time, several researchers who had already made a name for themselves in market research, studying consumer preferences for American businesses, turned their attention to election polling. A watershed event was the presidential election of 1936 between Alf Landon and Franklin Roosevelt. A magazine called Literary Digest conducted a survey by sending ballots (which were also subscription requests) to millions of Americans. Based on this "straw poll," the editors predicted that Landon would win in a landslide. At the same time, the new pollsters were using scientific methods with much smaller samples to predict just the opposite: that Roosevelt would win in a landslide. In fact, one of them, George Gallup, publicly criticized the methods of Literary Digest before the election and all but guaranteed that his prediction would be correct. And of course it was. (We will consider the reasons that Gallup was right later in this chapter.) Interest in surveying around election times has led to several long-term projects, notably the Canadian Election Studies, which have measured the opinions of Canadian voters around federal elections since 1965. Anyone can access the data and read about the results of the experiments in these studies.

From market research and election polling, survey research made its way into several academic fields, including political science, sociology, and public health—where it continues to be one of the primary approaches to collecting new data. Beginning in the 1930s, psychologists made important advances in questionnaire design, including techniques that are still used today, such as the Likert scale. (See "What Is a Likert Scale?" in Section 9.2 "Constructing Survey Questionnaires".) Survey research has a strong historical association with the social psychological study of attitudes, stereotypes, and prejudice. Early attitude researchers were also among the first psychologists to seek larger and more diverse samples than the convenience samples of university students that were routinely used in psychology (and still are).

Survey research continues to be important in psychology today. For example, survey data have been instrumental in estimating the prevalence of various mental disorders and identifying statistical relationships among those disorders and with various other factors. The National Comorbidity Survey is a large-scale mental health survey conducted in the United States. In just one part of this survey, nearly 10,000 adults were given a structured mental health interview in their homes in 2002 and 2003. Table 9.1 presents results on the lifetime prevalence of some anxiety, mood, and substance use disorders. (Lifetime prevalence is the percentage of the population that develops the problem sometime in their lifetime.) Obviously, this kind of information can be of great use both to basic researchers seeking to understand the causes and correlates of mental disorders as well as to clinicians and policymakers who need to understand exactly how common these disorders are.

And as the opening example makes clear, survey research can even be used to conduct experiments to test specific hypotheses about causal relationships between variables. Such studies, when conducted on large and diverse samples, can be a useful supplement to laboratory studies conducted on university students. Although this approach is not a typical use of survey research, it certainly illustrates the flexibility of this method.

Key Takeaways

  • Survey research is a quantitative approach that features the use of self-report measures on carefully selected samples. It is a flexible approach that can be used to study a wide variety of basic and applied research questions.
  • Survey research has its roots in applied social research, market research, and election polling. It has since become an important approach in many academic disciplines, including political science, sociology, public health, and, of course, psychology.

Discussion: Think of a question that each of the following professionals might try to answer using survey research.

  • a social psychologist
  • an educational researcher
  • a market researcher who works for a supermarket chain
  • the mayor of a large city
  • the head of a university police force
  • Converse, J. M. (1987). Survey research in the United States: Roots and emergence, 1890–1960. Berkeley, CA: University of California Press.
  • The lifetime prevalence of a disorder is the percentage of people in the population who develop that disorder at any time in their lives.


Research Methods in Psychology - 2nd Canadian Edition Copyright © 2015 by Paul C. Price, Rajiv Jhangiani, & I-Chant A. Chiang is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.


Handbook of Survey Methodology for the Social Sciences

  • Editor: Lior Gideon

John Jay College of Criminal Justice, City University of New York, New York, USA


  • Comprehensive overview of survey methodology in the social sciences
  • Covers survey design, data collection, and result analysis
  • Statistical principles and techniques targeted at social scientists

Includes supplementary material: sn.pub/extras




Table of contents (28 chapters)

Introduction

  • Lior Gideon

Classification of Surveys

  • Ineke Stoop, Eric Harrison

Survey Research Ethics

  • Robert W. Oldendick

An Overlooked Approach in Survey Research: Total Survey Error

  • René Bautista

Common Survey Sampling Techniques

  • Mary Hibberts, R. Burke Johnson, Kenneth Hudson

Frames, Framing Effects, and Survey Responses

  • Loretta J. Stalans

The Art of Question Phrasing

Interviewing

  • Lior Gideon, Peter Moskos

Unit Non-Response Due to Refusal

  • Ineke Stoop

Non-Response and Measurement Error

  • Jaak Billiet, Hideko Matsuo

Why People Agree to Participate in Surveys

  • Gerald Albaum, Scott M. Smith

Respondents Cooperation: Demographic Profile of Survey Respondents and Its Implication

  • Patrick Glaser

Effects of Incentives in Surveys

  • Vera Toepoel

Designing the Face-to-Face Survey

  • W. Lawrence Neuman

Repeated Cross-Sectional Surveys Using FTF

Costs and Errors in Fixed and Mobile Phone Surveys

  • Vasja Vehovar, Ana Slavec, Nejc Berzelak

Mail Survey in Social Research

  • Alana Henninger, Hung-En Sung

E-Mail Surveys

  • Gustavo Mesch

Increasing Response Rate in Web-Based/Internet Surveys

  • Amber N. Manzo, Jennifer M. Burke

Lior Gideon, Ph.D., is a full-time professor at John Jay College of Criminal Justice in New York, New York. He is a devoted methodologist with over 15 years of international experience in teaching methodology courses and training future cohorts of researchers in the field of criminology and criminal justice research. He also specializes in corrections-based program evaluation and focuses his research on rehabilitation, reentry, and reintegration issues, in particular by examining offenders' perceptions of their needs. To that end, Dr. Gideon has developed many survey-based measures to examine levels of punitiveness, attitudes supportive of rehabilitation, and, recently, social support. His research interests also involve international and comparative corrections-related public opinion surveys and their effect on policy. Dr. Gideon has published several manuscripts on these topics, including two previous books on offenders' needs in the reintegration process: Substance Abusing Inmates: Experiences of Recovering Drug Addicts on Their Way Back Home (2010, Springer) and Rethinking Corrections: Rehabilitation, Reentry, and Reintegration (with Hung-En Sung, 2011, Sage). He has also published a methodology book titled Theories of Research Methodology: Readings in Methods, now available in its second edition. His other works were recently published in The Prison Journal, the International Journal of Offender Therapy and Comparative Criminology, and the Asian Journal of Criminology. Dr. Gideon earned his PhD from the Faculty of Law, Institute of Criminology at the Hebrew University in Jerusalem, Israel, and completed a postdoctoral fellowship at the University of Maryland's Bureau of Governmental Research.

Book Title: Handbook of Survey Methodology for the Social Sciences

Editor: Lior Gideon

DOI: https://doi.org/10.1007/978-1-4614-3876-2

Publisher: Springer New York, NY

eBook Packages: Humanities, Social Sciences and Law; Social Sciences (R0)

Copyright Information: Springer Science+Business Media New York 2012

Hardcover ISBN: 978-1-4614-3875-5 (published 21 June 2012)

Softcover ISBN: 978-1-4939-4516-0 (published 23 August 2016)

eBook ISBN: 978-1-4614-3876-2 (published 21 June 2012)

Edition Number: 1

Number of Pages: XVIII, 520

Topics: Methodology of the Social Sciences

Survey Methods: Definition, Types, and Examples

By busayo.longe

Data gathering is a flexible and exciting process; especially when you use surveys. There are different survey methods that allow you to collect relevant information from research participants or the people who have access to the required data. 

For instance, you can conduct an interview or simply observe the research participants as they interact in their environment. Typically, your research context, the type of systematic investigation, and many other factors should determine the survey method you adopt. 

In this article, we will discuss different types of survey methods and also show you how to conduct online surveys using Formplus . 

What is a Survey Method?

A survey method is a process, tool, or technique that you can use to gather information in research by asking questions to a predefined group of people. Typically, it facilitates the exchange of information between the research participants and the person or organization carrying out the research. 

Survey methods can be qualitative or quantitative depending on the type of research and the type of data you want to gather in the end. For instance, you can choose to create and administer an online survey with Formplus that allows you to collect statistical information from respondents. For qualitative research, you can conduct a face-to-face interview or organize a focus group. 

Types of Survey Methods  

Interviews

An interview is a survey research method where the researcher facilitates some sort of conversation with the research participant to gather useful information about the research subject. This conversation can happen physically as a face-to-face interview or virtually as a telephone interview or via video and audio-conferencing platforms.  

During an interview, the researcher has the opportunity to connect personally with the research subject and establish some sort of relationship. This connection allows the interviewer (researcher) to gain more insight into the information provided by the research participant in the course of the conversation. 

An interview can be structured, semi-structured, or unstructured. In a structured interview, the researcher strictly adheres to a sequence of predetermined questions throughout the conversation. This is also known as a standardized interview or a researcher-administered interview, and it often results in quantitative research findings.

In a semi-structured interview, the researcher has a set of predetermined interview questions but can veer off the existing interview sequence to get more answers and gain more clarity from the interviewee. The semi-structured interview method is flexible and allows the researcher to work outside the scope of the sequence while maintaining the basic interview framework.

Just as the name suggests, an unstructured interview is one that does not restrict the researcher to a set of predetermined questions or a fixed interview sequence. Here, the researcher is free to leverage his or her knowledge and to creatively weave in questions that draw useful information from the participant. This is why it is also called an in-depth interview.

Advantages of Interviews

  • Interviews, especially face-to-face interviews, allow you to capture non-verbal nuances that provide more context around the interviewee's responses. For instance, the interviewee can act in a certain way to suggest that he or she is uncomfortable with a particular question.
  • Interviews are more flexible as a method of survey research. With semi-structured and unstructured interviews, you can adjust the conversation sequence to suit prevailing circumstances. 

Disadvantages of Interviews

  • It is expensive and time-consuming; especially when you have to interview large numbers of people. 
  • It is subject to researcher bias which can affect the quality of data gathered at the end of the process. 

Surveys

A survey is a data collection tool that lists a set of structured questions to which respondents provide answers based on their knowledge and experiences. It is a standard data-gathering process that allows you to access information from a predefined group of respondents during research.

In a survey, you would find different types of questions based on the research context and the type of information you want to have access to. Many surveys combine open-ended and closed-ended questions including rating scales and semantic scales. This means you can use them for qualitative and quantitative research. 

Surveys come in two major formats: paper forms and online forms. A paper survey is the more traditional method of data collection, and it can easily result in loss of data. Paper forms are also cumbersome to organize and process.

Online surveys, on the other hand, are usually created via data collection platforms like Formplus. These platforms have form builders where you can create your survey from scratch using different form fields and features. On Formplus, you can also find different online survey templates for data collection. 

One of the many advantages of online surveys is accuracy, as they typically record a lower margin of error than paper surveys. Also, online surveys are easier to administer, as you can share them with respondents via email or social media channels.
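
"Margin of error" has a precise statistical meaning: for a simple random sample, the 95% margin of error on a reported percentage shrinks as the sample grows, and online distribution makes large samples cheap to reach. A minimal Python sketch of the standard formula, with illustrative numbers:

import math

def margin_of_error(p_hat, n, z=1.96):
    # 95% margin of error for a sample proportion (simple random sampling).
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# E.g., 60% of 400 respondents pick an option:
print(round(margin_of_error(0.60, 400), 3))   # ~0.048, i.e. +/- 4.8 points
print(round(margin_of_error(0.60, 1600), 3))  # ~0.024: quadrupling n halves it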

Advantages of Surveys

  • Surveys allow you to gather data from a large sample size or research population. This helps to improve the validity and accuracy of your research findings. 
  • The cost of creating and administering a survey is usually lower compared to other research methods. 
  • It is a convenient method of data collection for the researcher and the respondents. 

Disadvantages of Surveys

  • The validity of the research data can be affected by survey response bias. 
  • High survey dropout rates can also affect the number of responses received in your survey. 

Observation  

Just as the name suggests, observation is a method of gathering data by paying attention to the actions and behaviors of the research subjects as they interact in their environment. This qualitative research method allows you to get first-hand information about the research subjects in line with the aims and objectives of your systematic investigation. 

If you have tried out this survey method, then you must have come across one or more of the four types of observation in research. These are: the complete observer method, the observer-as-participant method, the participant-as-observer method, and the complete participant method.

In the complete observer method, the researcher is entirely detached from the research environment. This means that the participants are completely unaware of the researcher's presence, which allows them to act naturally as they interact with their environment. You can think of it as remote observation.

The observer as participant method requires the researcher to be involved in the research environment; albeit with limited interaction with the participants. The participants typically know the researcher and may also be familiar with the goals and objectives of the systematic investigation. 

A good example of this is when a researcher visits a school to understand how students interact with each other during extra-curricular activities. In this case, the students may be fully aware of the research process; although they may not interact with the researcher. 

In the participant as observer method , the researcher has some kind of relationship with the participants and interacts with them often as he or she carries out the investigation. For instance, when an anthropologist goes to a host community for research, s/he builds a relationship with members of the community while the host community is aware of the research. 

In the complete participant method , the researcher interacts with the research participants and is also an active member of the research environment. However, the research participants remain unaware of the research process; they do not know that a researcher is among them and they also do not know that they are being observed. 

Advantages of Observation Method

  • It is one of the simplest methods of data collection as it does not require specialization or expertise in many cases.
  • The observation method helps you to formulate a valid research hypothesis for your systematic investigation. You can test this hypothesis via experimental research to get valid findings.  

Disadvantages of Observation Method

  • When the participants know they are being observed, they may act differently and this can affect the accuracy of the information you gather. 
  • Because observation is done in the participant’s natural environment; that is an environment without control, the findings from this process are not very reliable. 

Focus Groups

A focus group is an open conversation with a small number of carefully-selected participants who provide useful information for research. The selected participants are a subset of your research population and should represent the different groups in the larger population. 

In a focus group, the researcher can act as the moderator who sets the tone of the conversation and guides the discourse. The moderator ensures that the overall conversations are in line with the aims and objectives of the research and he or she also reduces the bias in the discussions.  

If you are conducting qualitative research with a large and diverse research population, then adopting focus groups is an effective and cost-efficient method of data collection. Typically, your focus group should have 6-10 participants (usually about 8), including the moderator.

Based on the focus of your research, you can adopt one or more types of focus groups for your investigation. Common types of focus groups you should consider include:

  • Dual-moderator focus group
  • Mini focus group
  • Client-involvement focus group
  • Virtual or online focus groups. 

Advantages of Focus Groups

  • Focus groups are open-ended and this allows you to explore a variety of opinions and ideas that may come up during the discussions. 
  • Focus groups help you to discover other salient points that you may not have considered in the systematic investigation. 

Disadvantages of Focus Groups

  • Participants may not communicate their true thoughts and experiences and this affects the validity of the entire process.
  • Participants can be easily influenced by the opinions of other people in the group. 

How to Conduct Online Surveys with Formplus  

As we’ve mentioned earlier, an online survey allows you to gather data from a large pool of respondents easily and conveniently. Unlike paper forms, online surveys are secure and it is also easy to distribute them and collate responses for valid research data. 

Formplus allows you to create your online survey in a few easy steps. It also has several features that make data collection and organization easy for you. Let's show you how to conduct online surveys with Formplus.

  • Create your Formplus account here. If you already have a Formplus account, you can log in at www.formpl.us . 


  • On your Formplus dashboard, you will find several buttons and options. Click on the “create new form” button located at the top left corner of the dashboard to begin. 
  • Now, you should have access to the form builder. The Formplus builder allows you to add different form fields to your survey by simply dragging and dropping them from the builder’s fields section into your form. You will find the fields section at the left corner of the form builder. 


  • First, add the title of your form by clicking on the title tab just at the top of the builder. 
  • Next, click on the different fields you’d like to have in your survey. You can add rating fields, number fields, and more than 30 other form fields as you like. 


  • After adding the fields to your survey, it is time to populate them with questions and answer-options as needed. Click on the small pencil icon located beside each field to access their unique editing tab. 
  • Apart from adding questions and answer-options to the fields, you can also make preferred fields to be compulsory or make them read-only. 
  • Save all the changes you have made to the form by clicking on the save icon at the top right corner. This gives you immediate access to the builder’s customization section. 


  • Formplus has numerous customization options that you can use to change the outlook and layout of your online survey without any knowledge of CSS. You can change your form font, add your organization’s logo, and also add preferred background images among other things. 


  • To start collecting responses in your online survey, you can use any of the Formplus multiple form sharing options. Go to the builder’s “share” section, choose your preferred option, and follow the prompt provided. If you have a WordPress website, you can add the survey to it via the WordPress plugin. 


  • Don’t forget to track your form responses and other important data in our form analytics dashboard. 

Advantages of Online Surveys

  • Online surveys are a faster method of data collection: they help you save time by accelerating your data collection process. Typically, respondents spend about a third of the time completing an online survey that they would spend on an equivalent paper survey, which means you will record almost-immediate responses from participants.
  • Apart from saving time, you also get to save cost. For instance, you do not have to spend money on printing paper surveys and transporting them to respondents. Also, many online survey tools have a free subscription plan and also support affordable premium subscription plans. You can check out Formplus pricing here . 
  • Online surveys reduce the margin of error in data collection. This allows you to gather more accurate information and arrive at objective research findings. 
  • It is flexible and allows participants to respond as is convenient. For instance, Formplus has a save and resume later feature that allows respondents to save an incomplete survey and finish up when it is more convenient. The order of the questions in an online survey can also be changed. 
  • Online surveys make the data collection process easy and seamless. By leveraging the internet for distribution, you can gather information from thousands of people in your target population. 
  • Because online surveys are very convenient, they tend to produce higher response rates: participants can complete the survey at their own pace, at a time of their choosing, and according to their preferences.

Conclusion  

When conducting research, many survey methods can help you to gather, analyze and process data effectively. In this article, we have looked at some of these methods in detail including interviews, focus groups, and the observation approach. 

As we’ve shown you, each of these survey methods has its strengths and weaknesses. This is why your choice should be informed by the type of research you are conducting and what you want to get out of it. While some of these methods work best for qualitative research, others are better suited for quantitative data collection . 



Perspective Article: Methodological Considerations for Survey-Based Research During Emergencies and Public Health Crises: Improving the Quality of Evidence and Communication


  • 1 Disaster and Emergency Management, School of Administrative Studies, York University, Toronto, Canada
  • 2 Institute for Methods Innovation (IMI), Eureka, CA, United States

The novel coronavirus (COVID-19) outbreak has resulted in a massive amount of global research on the social and human dimensions of the disease. Between academic researchers, governments, and polling firms, thousands of survey projects have been launched globally, tracking aspects like public opinion, social impacts, and drivers of disease transmission and mitigation. This deluge of research has created numerous potential risks and problems, including methodological concerns, duplication of efforts, and inappropriate selection and application of social science research techniques. Such concerns are more acute when projects are launched under the auspices of quick response, time-pressured conditions–and are magnified when such research is often intended for rapid public and policy-maker consumption, given the massive public importance of the topic.

Introduction

The COVID-19 pandemic has unfortunately illustrated the deadly consequences of ineffective science communication and decision-making. Globally, millions of people have succumbed to scientific misinformation about mitigation and treatment of the virus, fuelling behaviors that put themselves and their loved ones in mortal danger. 1 Nurses have told stories of COVID-19 patients, gasping for air, and dying, while still insisting the disease was a hoax (e.g., Villegas 2020 ). While science communication has always had real world implications, the magnitude of the COVID-19 crisis illustrates a remarkable degree of impact. Moreover, the crisis has demonstrated the complexity and challenge of making robust, evidence-informed policy in the midst of uncertain evidence, divergent public views, and heterogenous impacts. This adds urgency to seemingly abstract or academic questions of how the evidence that informs science communication practice and decision-making can be made more robust, even during rapidly evolving crises and grand challenges.

There has been a massive surge of science communication-related survey research projects in response to the COVID-19 crisis. These projects cover a wide range of topics, from assessing psychosocial impacts to attempting to evaluate different interventions and containment measures. Many of the issues being investigated connect to core themes in science communication, including (mis)information on scientific issues (e.g., Gupta et al., 2020 ; Pickles et al., 2021 ), trust in scientific technologies and interventions, including vaccines (e.g., Jensen et al., 2021a ; Kennedy et al., 2021a ; Kwok et al., 2021 ; Ruiz and Ball 2021 ), and more general issues of scientific literacy (e.g., Biasio et al., 2021 )—themes being investigated in a context of heightened public interest, significant pressure for effectiveness in interventions, and with highly polarized and contentious debate. Such survey research can be instrumental in informing effective government policies and interventions, for example, by evaluating the acceptability of different mitigation strategies, identifying vulnerable populations experiencing disproportionate negative effects, and clarifying information needs ( Van Bavel et al., 2020 ).

However, the rush of COVID-19 survey research has exposed challenges in using questionnaires in emergency contexts, such as methodological flaws, duplication of efforts, and lack of transparency. These issues are especially apparent when projects are launched under time-pressured conditions and conducted exclusively online. Addressing these challenges head on is essential to reduce the flow of questionable results into the policymaking process, where problematic methods can go undetected. To truly succeed at evidence-based science communication (see Jensen and Gerber 2020 )—and to support evidence-based decision-making through good science communication—requires that survey-based research in emergency settings be conducted according to the best feasible practices.

In this article, we highlight the utility of questionnaire-based research in COVID-19 and other emergencies, outlining best practices. We offer guidance to help researchers navigate key methodological choices, including sampling strategies, validation of measures, harmonization of instruments, and conceptualization/operationalization of research frameworks. Finally, we provide a summary of emerging networks, remaining gaps, and best practices for international coordination of survey-based research relating to COVID-19 and future disasters, emergencies, and crises.

Suitability of Survey-Based Research

Social and behavioural sciences have much to offer in terms of understanding emergency situations broadly, including the COVID-19 crisis, and informing policy responses (see Van Bavel et al., 2020 ) and post-disaster reactions ( Solomon and Green, 1992 ). Questionnaires have unique advantages and limitations in terms of the information that can be gathered and the insights that can be generated when used in isolation from other research approaches (e.g., see Jensen and Laurie, 2016 ). For these reasons, researchers should carefully assess the suitability of survey-based methods for addressing their research questions.

In emergency contexts, survey research can offer several advantages. Questionnaire-based work can:

• Allow for relatively straightforward recruitment and consenting procedures with large numbers of participants, as well as increasing the geographical scale that researchers can target (versus, for example, interview or observational research).

• Gather accurate data about an individual’s subjective memories or personal accounts, knowledge, attitudes, appraisals, interpretations, and perceptions about experiences.

• Allow for many mixed or integrated strategies for data collection, including both qualitative/quantitative; cross-sectional/longitudinal; closed-/open-ended; among others.

• Integrate effectively with other research methods (e.g., interviews, case study, biosampling) as supplemental or complementary (see Morgan, 2007 ) approaches to maximise strengths and offset weaknesses that allow for data triangulation.

• Allow for consistent administration of questions across a sample, as well as carefully crafted administration across multi-lingual contexts (e.g., validating multiple languages of a survey for consistent results).

• Enable highly complicated back-end rules ("survey logic") for tailoring the user experience to ensure only relevant questions are presented (a minimal sketch of such skip logic appears after this list).

• Create opportunities for carefully-crafted experimental designs, such as manipulating a variable of interest or comparing responses to different scenarios across a population.

• Deploy with relatively low costs and rapid timeframes compared to in-person methodologies.
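
To make the "survey logic" point above concrete, the sketch below implements a tiny skip-logic engine: each answer determines the next question, so respondents see only relevant items. The questions and data structure are hypothetical and not tied to any particular survey platform's API.

# Minimal sketch of survey "skip logic": each question names the next
# question to ask, conditional on the answer. Question text is invented.
QUESTIONS = {
    "q1": {"text": "Have you been vaccinated?", "options": ["yes", "no"],
           "next": {"yes": "q2", "no": "q3"}},
    "q2": {"text": "Did you experience side effects?", "options": ["yes", "no"],
           "next": {"yes": None, "no": None}},
    "q3": {"text": "Do you intend to get vaccinated?", "options": ["yes", "no"],
           "next": {"yes": None, "no": None}},
}

def run_survey(answers):
    # Walk the question graph; 'answers' simulates a respondent.
    asked, qid = [], "q1"
    while qid is not None:
        answer = answers[qid]
        assert answer in QUESTIONS[qid]["options"]
        asked.append((qid, answer))
        qid = QUESTIONS[qid]["next"][answer]
    return asked

# A "no" to q1 skips the side-effects question entirely:
print(run_survey({"q1": "no", "q3": "yes"}))  # [('q1', 'no'), ('q3', 'yes')]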

At the same time, surveys can have significant limitations in the context of crisis research that can undermine their reliability or create temptations for methodological shortcuts. For example:

• Surveys face important limits in terms of what information can be reliably obtained. For example, respondents generally cannot accurately report about the attitudes, experiences, and behaviors of other people in their social groups. Likewise, self-reports can be systematically distorted by psychological processes, especially when it comes to behavioural intentions and projected future actions. Retrospective accounts can also be unreliable, particularly in cases of complex event sequences or events that took place long ago (e.g., Wagoner and Jensen 2015 ).

• The quality of survey data can degrade rapidly when there is low ecological validity (i.e., participants are not representative of the broader population), whether through sampling problems, systematic patterns of attrition in longitudinal research, or other factors (see the weighting sketch after this list for one common partial remedy).

• Seemingly simple designs may require extensive methodological or statistical expertise to maximise questionnaire design and data analysis (i.e., ensuring valid measures, maximizing best practice, and avoiding common mistakes).

• The limited ability to adjust measures once a survey has been released, without compromising the ability to draw inferences from comparable data, can be challenging in crisis contexts where the relevant issues are evolving quickly.

• Cross-sectional surveys can give a false impression of personal attributes that are prone to change if assumptions of cross-situational consistency are applied (e.g., factors that are expected to remain stable across time) (e.g., Hoffman, 2015 ).
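
One common partial remedy for the representativeness problem flagged above is post-stratification weighting: respondents are reweighted so that the sample's composition matches known population margins. A minimal Python sketch with made-up numbers (real applications usually weight on several variables at once, e.g., via raking):

# Reweight respondents so the sample's age mix matches known population shares.
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

respondents = [
    {"age": "18-34", "supports_policy": True},
    {"age": "18-34", "supports_policy": False},
    {"age": "18-34", "supports_policy": True},
    {"age": "18-34", "supports_policy": True},   # young people oversampled
    {"age": "35-54", "supports_policy": False},
    {"age": "55+",   "supports_policy": False},
]

n = len(respondents)
sample_share = {g: sum(r["age"] == g for r in respondents) / n
                for g in population_share}

# Weight = population share / sample share for the respondent's group.
for r in respondents:
    r["weight"] = population_share[r["age"]] / sample_share[r["age"]]

weighted = sum(r["weight"] * r["supports_policy"] for r in respondents)
total_w = sum(r["weight"] for r in respondents)
print("Unweighted support:", sum(r["supports_policy"] for r in respondents) / n)  # 0.5
print("Weighted support:  ", weighted / total_w)                                  # 0.225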

Given these advantages and limitations, there are several appropriate targets for survey research in crises and emergencies. Alongside other methods—including observational, ethnographic, and interview-based work, depending on the specific research questions formulated—surveys can help to gather reliable data on:

• Knowledge: What people currently believe to be true about the disease (e.g., origin of the coronavirus, how could they catch it, or how they could reduce exposure).

• Trust: Confidence in different political and government institutions/actors, media and information sources, and other members of their community (e.g., neighbors, strangers) (e.g., see Jensen et al., 2021 ).

• Opinions: Approval of particular interventions to slow the spread; belief about whether policies or behaviours have been effective or changed the emergency outcome; or personal views about perceptions of vaccine efficacy or safety.

• Personal impacts: Reports from individuals who are exposed or negatively affected, such as with chronic stress or loss of loved ones, employment, health, and stigmatization.

• Risk perceptions: Hopes and fears related to the disease, end points of the emergency, and return to normalcy.

Even with awareness of these limitations, launching and conducting survey research is a specialized skill that requires training, experience and mentorship, comparable to the expertise needed for epidemiological, biomedical, or statistical research. Even when questionnaires appear 'simple' because of the skillful use of plain language and straightforward user interfaces, there are substantial methodological learning curves associated with proper research design. In the sections that follow, we discuss overall research coordination, project designs, and specific methodological approaches for researchers launching or conducting rapid-response research in emergency contexts, for COVID-19 and beyond.

Project Design

Researchers face important choices when designing survey-based research within the fast-moving context of disasters and emergencies. There can be a substantial pressure to conduct research quickly , including funder timelines, the perceived race to publish, or pressure to collect ephemeral data. Each of these factors can necessitate difficult decisions about project and research designs. At a high level, we recommend that survey-based projects on COVID-19 adopt the following standards ( Table 1 ):

TABLE 1 | Key factors for effective COVID-19 survey-based research. (Table not reproduced here.)

Methodological Considerations

In emergency situations, avoiding common pitfalls in methodological designs can be challenging because of temporal pressures and unique emergency contexts. We recommend the following standards in methodological designs for COVID-19 research ( Table 2 ):

TABLE 2 | Key methodological considerations for COVID-19 survey research. (Table not reproduced here.)

We also encourage readers to explore other resources for supporting methodological rigour in emergency contexts. In particular, the CONVERGE program associated with the Natural Hazards Center at the University of Colorado Boulder maintains a significant community resource via tutorials and “check sheets” to support method design and implementation (see https://converge.colorado.edu/resources/check-sheets/ ).

Research Coordination

Research coordination during emergencies requires pragmatic strategies to maximise the impact of evidence from rapid-response research. Despite massive government attention and resulting funding schemes, the available funds for social science research are outstripped by research needs–a situation made worse through duplication of research, overproduction, and inefficient use of resources in some topics. This results in fewer topics and populations receiving research attention, and investigations spanning a shorter period. It also generates a “wave profile” of investigation that is temporary and transient, disappearing as funds become limited due to economic constraints or further displacements occur to new topics.

We recommend the following practical considerations to maximize the efficiency, coordination, and effectiveness of survey-based research efforts ( Table 3 ):

TABLE 3 | Primary considerations for coordination of survey-based COVID-19 research. (Table not reproduced here.)

Evidence-based science communication and decision-making depends on the reliability and robustness of the underlying research. Survey-based research can be valuable to supporting communication and policy-making efforts. However, it can also be vulnerable to significant limitations and common mistakes in the rush of trying to deploy instruments in an emergency context. The best practices outlined above not only help to ensure more rigorous data, but also serve as valuable intermediate steps when developing the project (e.g., meta-analysis helping to inform more robust question formulations; methodological transparency allowing more scrutiny of instruments before deployment). For example, by drawing on existing survey designs prepared by well-qualified experts, you can both help to enable comparability of data and reduce the risk of using flawed survey questions and response options.

In this article, we have presented a series of principles regarding effective crisis and emergency survey research. We argue that it is essential to begin by assessing the suitability of questionnaire-based approaches (including the unique strengths of surveys, potential limitations related to design and self-reporting, and the types of information that can be collected). We then laid out best practices essential to reliable research such as open access designs, engaging requisite social science expertise, using longitudinal and repeated measure designs, and selecting suitable sampling strategies. We then discussed three methodological issues (validation of items, use of standardized items, and alignment between concepts and operationalizations) that can prove challenging in rapid response contexts. Finally, we highlighted best practices for funding and project management in crisis contexts, including de-duplication, coordination, harmonization, and evidence synthesis.

Survey research is challenging work requiring methodological expertise. The best practices cannot be satisfactorily trained in the immediate race to respond to a crisis. Indeed, even for those with significant expertise in survey methods, issues like open access, de-duplication of projects, and harmonization between designs can pose significant challenges. Ultimately, the same principles hold true in emergency research as in more “normal” survey operations, and “the quality of a survey is best judged not by its size, scope, or prominence, but by how much attention is given to dealing with all the many important problems that can arise” ( American Statistical Association, 1998 , p. 11).

The emergency context should not weaken commitments to best practice principles, given the need to provide robust evidence that can inform policy and practice during crises. For researchers, this means creating multidisciplinary teams with sufficient expertise to ensure methodological quality. For practitioners and policy makers, this means being conscientious consumers of survey data–and seeking ways to engage expert perspectives in critical reviews of best available evidence. And, for funders of such research, it means redoubling a commitment to rigorous approaches and building the infrastructure that supports pre-crisis design and implementation, as well as effective coordination during events. Building resilience for future crises requires investment in survey methodology capacity building and network development before emergencies strike.

Author Contributions

All three authors contributed to the drafting and editing of the manuscript, with EBK as lead.

Funding

This project is supported in part by funding from the Social Sciences and Humanities Research Council (1006-2019-0001). This project was also supported through the COVID-19 Working Group effort supported by the National Science Foundation-funded Social Science Extreme Events Research (SSEER) Network and the CONVERGE facility at the Natural Hazards Center at the University of Colorado Boulder (NSF Award #1841338). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of SSHRC, NSF, SSEER, or CONVERGE.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

1 As just one example, Loomba et al. 2021 found that misinformation results in a decline of over 6% in vaccine intentions in the United States, or approximately 21 million prospective American vaccine recipients.

American Statistical Association (1998). Judging the Quality of a Survey. ASA: Section Surv. Res. Methods , 1–13.


Ballantyne, N. (2019). Epistemic Trespassing. Mind 128, 367–395. doi:10.1093/mind/fzx042


Biasio, L. R., Bonaccorsi, G., Lorini, C., and Pecorelli, S. (2021). Assessing COVID-19 Vaccine Literacy: a Preliminary Online Survey. Hum. Vaccin. Immunother. 17 (5), 1304–1312. doi:10.1080/21645515.2020.1829315


Dong, E., Du, H., and Gardner, L. (2020). An Interactive Web-Based Dashboard to Track COVID-19 in Real Time. Lancet Infect. Dis 20 (5). doi:10.1016/s1473-3099(20)30120-1

Gupta, L., Gasparyan, A. Y., Misra, D. P., Agarwal, V., Zimba, O., and Yessirkepov, M. (2020). Information and Misinformation on COVID-19: a Cross-Sectional Survey Study. J. Korean Med. Sci. 35 (27), e256. doi:10.3346/jkms.2020.35.e256

Hoffman, L. (2015). Longitudinal Analysis: Modeling Within-Person Fluctuation and Change . New York: Routledge . doi:10.4324/9781315744094

Jensen, E. A., and Gerber, A. (2020). Evidence-based Science Communication. Front. Commun. 4 (78), 1–5. doi:10.3389/fcomm.2019.00078

Jensen, E. A., Kennedy, E. B., and Greenwood, E. (2021a). Pandemic: Public Feeling More Positive about Science. Nature 591, 34. doi:10.1038/d41586-021-00542-w

Jensen, E. A., Pfleger, A., Herbig, L., Wagoner, B., Lorenz, L., and Watzlawik, M. (2021b). What Drives Belief in Vaccination Conspiracy Theories in Germany. Front. Commun. 6. doi:10.3389/fcomm.2021.678335


Jensen, E., and Laurie, C. (2016). Doing Real Research: A Practical Guide to Social Research . London: SAGE .

Jensen, E., and Wagoner, B. (2014). “Developing Idiographic Research Methodology: Extending the Trajectory Equifinality Model and Historically Situated Sampling,” in Cultural Psychology and its Future: Complementarity in a New Key . Editors B. Wagoner, N. Chaudhary, and P. Hviid.

Kennedy, E. B., Daoust, J. F., Vikse, J., and Nelson, V. (2021a). “Until I Know It’s Safe for Me”: The Role of Timing in COVID-19 Vaccine Decision-Making and Vaccine Hesitancy. Under Review. doi:10.3390/vaccines9121417

Kennedy, E. B., Nelson, V., and Vikse, J. (2021b). Survey Research in the Context of COVID-19: Lessons Learned from a National Canadian Survey. Working Paper.

Kennedy, E. B., Vikse, J., Chaufan, C., O’Doherty, K., Wu, C., Qian, Y., et al. (2020). Canadian COVID-19 Social Impacts Survey - Summary of Results #1: Risk Perceptions, Trust, Impacts, and Responses. Technical Report #004. Toronto, Canada: York University Disaster and Emergency Management . doi:10.6084/m9.figshare.12121905

Kwok, K. O., Li, K.-K., Wei, W. I., Tang, A., Wong, S. Y. S., and Lee, S. S. (2021). Influenza Vaccine Uptake, COVID-19 Vaccination Intention and Vaccine Hesitancy Among Nurses: A Survey. Int. J. Nurs. Stud. 114, 103854. doi:10.1016/j.ijnurstu.2020.103854

Loomba, S., de Figueiredo, A., Piatek, S. J., de Graaf, K., and Larson, H. J. (2021). Measuring the Impact of COVID-19 Vaccine Misinformation on Vaccination Intent in the UK and USA. Nat. Hum. Behav. 5 (3), 337–348. doi:10.1038/s41562-021-01056-1

Mauss, I. B., and Robinson, M. D. (2009). Measures of Emotion: A Review. Cogn. Emot. 23 (2), 209–237. doi:10.1080/02699930802204677

Morgan, D. L. (2007). Paradigms Lost and Pragmatism Regained: Methodological Implications of Combining Qualitative and Quantitative Methods. Journal of Mixed Methods Research 2007, 1–48. doi:10.1177/2345678906292462

Pickles, K., Cvejic, E., Nickel, B., Copp, T., Bonner, C., Leask, J., et al. (2021). COVID-19 Misinformation Trends in Australia: Prospective Longitudinal National Survey. J. Med. Internet Res. 23 (1), e23805. doi:10.2196/23805

Ruiz, J. B., and Bell, R. A. (2021). Predictors of Intention to Vaccinate against COVID-19: Results of a Nationwide Survey. Vaccine 39 (7), 1080–1086. doi:10.1016/j.vaccine.2021.01.010

Schwarz, N., Kahneman, D., Xu, J., Belli, R., Stafford, F., and Alwin, D. (2009). “Global and Episodic Reports of Hedonic Experience,” in Using Calendar and Diary Methods in Life Events Research , 157–174.

Smith, B. K., and Jensen, E. A. (2016). Critical Review of the United Kingdom's “gold Standard” Survey of Public Attitudes to Science. Public Underst. Sci. 25, 154–170. doi:10.1177/0963662515623248

Smith, B. K., Jensen, E., and Wagoner, B. (2015). The International Encyclopedia of Communication Theory and Philosophy. Editors K. B. Jensen, R. T. Craig, J. Pooley, and E. Rothenbuhler (New Jersey: Wiley-Blackwell). doi:10.1002/9781118766804

Solomon, S. D., and Green, B. L. (1992). Mental Health Effects of Natural and Human-Made Disasters. PTSD. Res. Q. 3 (1), 1–8.

Tourangeau, R., Rips, L., and Rasinski, K. (2000). The Psychology of Survey Response . Cambridge: Cambridge University Press .

Van Bavel, J. J., Baicker, K., Boggio, P., Capraro, V., Cichocka, A., Cikara, M., et al. (2020). Using Social and Behavioural Science to Support COVID-19 Pandemic Response. Nat. Hum. Behav. 4, 460–471. doi:10.1038/s41562-020-0884-z

Villegas, Paulina (2020). South Dakota Nurse Says Many Patients Deny the Coronavirus Exists – Right up until Death. Washington, DC: Washington Post. Available at: https://www.washingtonpost.com/health/2020/11/16/south-dakota-nurse-coronavirus-deniers.

Wagoner, B., and Jensen, E. (2015). “Microgenetic Evaluation: Studying Learning in Motion,” in The Yearbook of Idiographic Science. Volume 6: Reflexivity and Change in Psychology . Editors G. Marsico, R. Ruggieri, and S. Salvatore (Charlotte, N.C.: Information Age Publishing ).

Wagoner, B., and Valsiner, J. (2005). “Rating Tasks in Psychology: From Static Ontology to Dialogical Synthesis of Meaning,” in Contemporary Theorizing in Psychology: Global Perspectives . Editors A. Gülerce, I. Hofmeister, G. Saunders, and J. Kaye (Toronto, Canada: Captus ), 197–213.

Keywords: survey, questionnaire, research methods, COVID-19, emergency, crises

Citation: Kennedy EB, Jensen EA and Jensen AM (2022) Methodological Considerations for Survey-Based Research During Emergencies and Public Health Crises: Improving the Quality of Evidence and Communication. Front. Commun. 6:736195. doi: 10.3389/fcomm.2021.736195

Received: 04 July 2021; Accepted: 18 October 2021; Published: 15 February 2022.


Copyright © 2022 Kennedy, Jensen and Jensen. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Eric B Kennedy, [email protected]

This article is part of the Research Topic

Evidence-Based Science Communication in the COVID-19 Era

J Korean Med Sci. 2020 Nov 23; 35(45).

Reporting Survey Based Studies – a Primer for Authors

Prithvi Sanjeevkumar Gaur

1 Smt. Kashibai Navale Medical College and General Hospital, Pune, India.

Olena Zimba

2 Department of Internal Medicine No. 2, Danylo Halytsky Lviv National Medical University, Lviv, Ukraine.

Vikas Agarwal

3 Department Clinical Immunology and Rheumatology, Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow, India.

Latika Gupta


The coronavirus disease 2019 (COVID-19) pandemic has led to a massive rise in survey-based research. The paucity of perspicuous guidelines for conducting surveys may pose a challenge to the conduct of ethical, valid and meticulous research. The aim of this paper is to guide authors aiming to publish in scholarly journals regarding the methods and means to carry out surveys for valid outcomes. The paper outlines the various aspects of survey research, from planning, execution and dissemination of surveys through data analysis and the choice of target journals. While providing a comprehensive understanding of the scenarios most conducive to carrying out a survey, and of the role of ethical approval, survey validation and pilot testing, this brief delves deeper into survey designs, methods of dissemination, ways to secure and maintain data anonymity, the various analytical approaches, reporting techniques and the process of choosing an appropriate journal. Further, the authors analyze retracted survey-based studies and the reasons for retraction. This review intends to help authors improve the quality of survey-based research by describing the essential tools and means to do so, with the hope of improving the utility of such studies.


INTRODUCTION

Surveys are the principal method used to address topics that require individual self-report of beliefs, knowledge, attitudes, opinions or satisfaction, which cannot be assessed using other approaches. 1 This research method allows information to be collected by asking a set of questions on a specific topic of a subset of people and generalizing the results to a larger population. Assessing opinions in a valid and reliable way requires clear, structured and precise reporting of results. This is possible with a survey built on a meticulous design, followed by validation and pilot testing. 2 The aim of this opinion piece is to provide practical advice for conducting survey-based research. It details the ethical and methodological aspects to be considered while performing a survey, the online platforms available for distributing surveys, and the implications of survey-based research.

Survey-based research is a means to obtain quick data; such studies are relatively easy to conduct and analyse, and are cost-effective under most circumstances. 3 They are also one of the most convenient methods of obtaining data about rare diseases. 4 With major technological advancements and improved global interconnectivity, especially during the coronavirus disease 2019 (COVID-19) pandemic, surveys have surpassed other means of research due to their distinctive advantage of a wider reach, including respondents from various parts of the world with diverse cultures and geographically disparate locations. Moreover, survey-based research allows flexibility to the investigator and respondent alike. 5 While the investigators may tailor the survey dates and duration to their availability, respondents have the convenience of responding at ease, in the comfort of their homes, at a time when they can answer the questions with greater focus and to the best of their abilities. 6 Respondent biases inherent to environmental stressors can be significantly reduced by this approach. 5 Surveys also allow responses across time zones, which may be a major impediment to other forms of research or data collection, and permit the investigator to be located far from the respondents.

Various digital tools are now available for designing surveys ( Table 1 ). 7 Most of these are free, with separate premium paid options. Data analysis can be made simpler, and the cleaning process almost obsolete, by minimising open-ended answer choices. 8 Closed-ended answers make data collection and analysis efficient by generating a spreadsheet that can be directly accessed and analysed. 9 Minimizing the number of questions and making all questions mandatory can further aid this process by bringing uniformity to the responses and making analysis simpler. Surveys are arguably also the most engaging form of research, conditional on the skill of the investigator.

Table 1 abbreviations: Q/t = questions per typeform, A/m = answers per month, Q/s = questions per survey, A/s = answers per survey, NA = not applicable, NPS = net promoter score. (Table not reproduced here.)
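
As a rough illustration of why closed-ended items make analysis nearly automatic, the sketch below loads a small invented export of responses and tabulates it with pandas; the column names and data are hypothetical:

import io
import pandas as pd

# Hypothetical export of closed-ended responses (column names made up).
csv_data = io.StringIO("""respondent_id,age_group,satisfaction
1,18-34,Satisfied
2,35-54,Neutral
3,18-34,Satisfied
4,55+,Dissatisfied
""")

df = pd.read_csv(csv_data)

# Closed-ended answers tabulate directly, with no free-text cleaning needed:
print(df["satisfaction"].value_counts(normalize=True))
print(pd.crosstab(df["age_group"], df["satisfaction"]))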

Data protection laws now mandate anonymity while collecting data for most surveys, particularly when they are exempt from ethical review. 10 , 11 Anonymization has the potential to reduce (or at times even eliminate) social desirability bias which gains particular relevance when targeting responses from socially isolated or vulnerable communities (e.g. LGBTQ and low socio-economic strata communities) or minority groups (religious, ethnic and medical) or controversial topics (drug abuse, using language editing software).
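One simple way to operationalize this requirement, sketched below with hypothetical fields, is to drop direct identifiers and replace any identifier needed for de-duplication with a salted hash. Note that this is pseudonymization rather than full anonymization; truly anonymous data also requires attention to quasi-identifiers such as detailed demographics.

import hashlib
import secrets

SALT = secrets.token_hex(16)  # keep secret during the study, then discard

def pseudonymize(record):
    # Drop direct identifiers; keep a salted hash so repeat responses
    # from the same email can still be de-duplicated.
    email = record.pop("email")   # remove the identifier itself
    record.pop("name", None)      # drop other direct identifiers
    digest = hashlib.sha256((SALT + email.lower()).encode()).hexdigest()
    record["respondent_key"] = digest[:12]
    return record

raw = {"name": "A. Respondent", "email": "a@example.org",
       "age_group": "35-54", "answer": "Agree"}
print(pseudonymize(raw))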

Moreover, surveys could be the primary methodology to explore a hypothesis until it evolves into a more sophisticated and partly validated idea after which it can be probed further in a systematic and structured manner using other research methods.

The aim of this paper is to reduce the incorrect reporting of surveys. The paper also intends to inform researchers of the various aspects of survey-based studies and the multiple points that need to be taken under consideration while conducting survey-based research.

SURVEYS IN THE COVID-19 PANDEMIC

The COVID-19 pandemic has led to a distinct rise in survey-based research. 12 The need for social distancing amid widespread lockdowns reduced patient visits to the hospital and brought most other forms of research to a standstill in the early pandemic period. A large number of level-3 bio-safety laboratories are being engaged for research pertaining to COVID-19, thereby limiting the options to conduct laboratory-based research. 13 , 14 Therefore, surveys appear to be the most viable option for researchers to explore hypotheses related to the situation and its impact in such times. 15

LIMITATIONS WHILE CONDUCTING SURVEY-BASED RESEARCH

Designing a good survey is an arduous task that requires skill, even though clear guidelines are available. Survey design requires extensive thought about the core questions (based on the hypothesis or the primary research question), consideration of all possible answers, and the inclusion of open-ended options to allow recording of other possibilities. A survey should be robust in regard to the questions asked and the answer choices available; it must be validated and pilot tested. 16 The survey design may be supplemented with answer choices tailored to the convenience of the responder, to reduce effort while making the survey more engaging. Survey dissemination and engagement of respondents also require experience and skill. 17
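
Pilot testing often includes a quick internal-consistency check on multi-item scales, commonly Cronbach's alpha. A self-contained sketch with made-up pilot data (the conventional cutoff of about 0.7 is a rule of thumb, not a law):

# Cronbach's alpha for a small pilot: rows = respondents, columns = items.
# alpha = k/(k-1) * (1 - sum(item variances) / variance of row totals)

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

def cronbach_alpha(rows):
    k = len(rows[0])
    items = list(zip(*rows))            # transpose to per-item columns
    totals = [sum(r) for r in rows]
    item_var_sum = sum(variance(col) for col in items)
    return k / (k - 1) * (1 - item_var_sum / variance(totals))

pilot = [
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 2],
]
print(round(cronbach_alpha(pilot), 2))  # ~0.89 for this invented data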

Furthermore, the absence of an interviewer prevents the clarification of responses to open-ended questions, if any. Internet surveys are also prone to survey fraud through erroneous reporting; hence, the anonymity of surveys is both a boon and a bane. Sample sizes are skewed because populations absent from the Internet, such as the elderly or the underprivileged, are not represented. The illiterate population likewise lacks representation in survey-based research.

The “Enhancing the QUAlity and Transparency Of health Research” network (EQUATOR) provides two separate guidelines replete with checklists to ensure valid reporting of e-survey methodology. These include “The Checklist for Reporting Results of Internet E-Surveys” (CHERRIES) statement and “ The Journal of Medical Internet Research ” (JMIR) checklist.

COMMON TYPES OF SURVEY-BASED RESEARCH

From a clinician's standpoint, common survey types include those centered on problems faced by patients or physicians. 18 Surveys collecting the opinions of clinicians on a debated clinical topic, and feedback forms typically administered after attending a medical conference, prescribing a new drug, or trying a new method for a given procedure, are also surveys. The formulation of clinical practice guidelines entails Delphi exercises using paper surveys, which are yet another form of survey-mediated research.

The size of a survey depends on its intent; surveys may be large or small. Therefore, identifying the intent behind the survey is essential to allow the investigator to form a hypothesis and then explore it further. Large population-based or provider-based surveys are often conducted and generate mammoth datasets over the years, e.g. the National Health and Nutrition Examination Survey, the National Health Interview Survey and the National Ambulatory Medical Care Survey.

SCENARIOS FOR CONDUCTING SURVEY-BASED RESEARCH

Despite the convenience of conducting survey-based research, it is prudent to perform a feasibility check before embarking on one. Certain scenarios may determine the fate of survey-based research (Table 2).

ETHICS APPROVAL FOR SURVEY-BASED RESEARCH

Approval from the Institutional Review Board should be obtained as required by the CHERRIES checklist. However, the rules for approval differ from country to country, so local rules must be checked and followed. For instance, in India, the Indian Council of Medical Research released an article in 2017 stating that the concept of broad consent had been updated, defining it as "consent for an unspecified range of future research subject to a few contents and/or process restrictions." It speaks of "the flexibility of Indian ethics committees to review a multicentric study proposal for research involving low or minimal risk, survey or studies using anonymized samples or data or low or minimal risk public health research." The reporting of approvals received and applied for, and of the procedure of written informed consent followed, must be clear and transparent. 10 , 19

The use of incentives in surveys is also an ethical concern. 20 Incentives may be monetary or non-monetary. Monetary incentives are usually discouraged, as they may attract the wrong population tempted by the monetary benefit. However, monetary incentives have been observed to give surveys greater traction, although this is yet to be proven. Monetary incentives are provided not only as cash or cheques but also in the form of free articles, discount coupons, phone cards, e-money or cashback value. 21 Though tempting, these methods should seldom be used; if used, their use must be disclosed and justified in the report. Non-monetary incentives, such as a meeting with a famous personality or access to restricted and authorized areas, can also help pique the interest of respondents.

DESIGNING A SURVEY

As mentioned earlier, the design of a survey reflects the skill of the investigator curating it. 22 Survey builders can be used to design an efficient survey; these offer the majority of the basic features needed to construct a survey free of charge. Surveys can therefore be designed from scratch, from pre-designed templates, or with previous survey designs as inspiration. Taking surveys can be made convenient by using the various aids available (Table 1). Moreover, the investigator should be mindful of the unintended response effects of the ordering and context of survey questions. 23

Surveys using clear, unambiguous, simple and well-articulated language record precise answers. 24 A well-designed survey accounts for the culture, language and convenience of the target demographic. The age, region, country and occupation of the target population are also considered before constructing a survey. Consistency is maintained in the terms used in the survey, and abbreviations are avoided so that respondents have a clear understanding of the question being answered. Universal or previously indexed abbreviations maintain the unambiguity of the survey.

Surveys beginning with broad, easy and non-specific questions, rather than sensitive, tedious and specific ones, receive more accurate and complete answers. 25 Questionnaires designed so that the relatively tedious and long questions requiring some nit-picking by the respondent are placed at the end improve the response rate of the survey. This prevents respondents from being discouraged at the very beginning and motivates them to finish the survey. All questions should provide a non-response option, and all questions should be made mandatory, to increase the completeness of the survey. Questions can be framed in a close-ended or open-ended fashion. Close-ended questions are easier to analyze and less tedious to answer, and should therefore be the main component of a survey. Open-ended questions have minimal use, as they are tedious, take time to answer, and require fine articulation of one's thoughts; moreover, interpreting such answers demands considerable time and energy because of the diverse nature of the responses, which is difficult to promise with large sample sizes. 26 However, whenever the closed choices do not cover all probabilities, an open answer choice must be added. 27 , 28

Screening questions, which require certain criteria to be met to gain access to the survey, can be used where inclusion criteria need to be established to maintain the authenticity of the target demographic. Similarly, a logic function can be used to apply exclusions. This allows a clean and clear record of responses and makes the investigator's job easier. Depending on the investigator's preference, respondents may or may not be given the option to return to a previous page or question to alter their answers.

For questions directed at people's feelings or opinions, the range of responses received can be narrowed by using slider scales or a Likert scale. 29 , 30 For questions with multiple answers, check boxes are efficient. When a large number of answers is possible, dropdown menus reduce the arduousness. 31 Matrix scales can be used for questions requiring grading or having a similar range of answers across multiple conditions. Maximum respondent participation and complete survey responses can be ensured by reducing the survey time. Quiz or weighted modes allow the respondent to shuffle between questions, permit the scoring of quizzes, and can be used to complement other weighted scoring systems. 32 A flowchart depicting a survey construct is presented as Fig. 1.


Survey validation

Validation testing, though tedious and meticulous, is a worthy effort, as the accuracy of a survey is determined by its validity. Validity is indicative of the appropriateness of the survey sample and the specificity of the questions, such that the data acquired are streamlined to answer the questions being posed or to test a hypothesis. 33 , 34 Face validation examines whether the questions are constructed in a manner that collects the necessary data. Content validation examines the relation of the questions being asked to the topic being addressed and its related areas. Internal validation makes sure that the questions being posed are directed towards the outcome of the survey. Finally, test-retest validation determines the stability of questions over a period of time by administering the questionnaire twice, with a time interval maintained between the two tests. For surveys assessing respondents' knowledge of a certain subject, it is advisable to have a panel of experts undertake the validation process. 2 , 35
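As a minimal sketch of the test-retest step, one may correlate scores from the two administrations; the example below assumes the same respondents rated one numeric item twice, and the scores are illustrative placeholders rather than real data.

from scipy.stats import pearsonr

# Scores from the same eight respondents at the first and second administration
test_scores = [4, 5, 3, 4, 2, 5, 4, 3]
retest_scores = [4, 5, 3, 5, 2, 4, 4, 3]

r, p = pearsonr(test_scores, retest_scores)
print(f"Test-retest correlation: r = {r:.2f}, p = {p:.3f}")

A high correlation between the two administrations suggests that the items are stable over time.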

Reliability testing

If the questions in the survey are posed in a manner that elicits the same or similar responses from respondents irrespective of the language or construction of the questions, the survey is said to be reliable. Reliability is thereby a marker of the consistency of the survey. This is of considerable importance in knowledge-based research, where recall ability is tested by making the survey available to the same participants at regular intervals. It can also be used to maintain the authenticity of the survey by varying the construction of the questions.
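One common way to quantify this consistency is Cronbach's alpha, computed below from its standard formula; the respondents-by-items score matrix is a hypothetical placeholder.

import numpy as np

# Rows = respondents, columns = questionnaire items (placeholder scores)
items = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 4, 5],
    [2, 3, 3, 2],
    [4, 4, 5, 4],
])

k = items.shape[1]
sum_item_var = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
total_var = items.sum(axis=1).var(ddof=1)       # variance of the total scores
alpha = (k / (k - 1)) * (1 - sum_item_var / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")

Values of about 0.7 or higher are conventionally taken to indicate acceptable internal consistency.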

Designing a cover letter

A cover letter is the primary means of communication with the respondent, intended to introduce the respondent to the survey. It should include the purpose of the survey and the details of those conducting it, including contact details in case clarifications are desired. It should also clearly state the action required of the respondent. Data anonymization may be crucial to many respondents and is their right; this should be respected with a clear description of the data handling process when disseminating the survey. A good cover letter is the key to building trust with the respondent population and can be the forerunner of better response rates. Imparting a sense of purpose is vital to ideationally incentivize the respondent population. 36 , 37 Adding the credentials of the team conducting the survey may further aid the process. Advance intimation of the survey prepares respondents and improves their compliance.

The design of a cover letter needs much attention. It should be captivating, clear and precise, and use vocabulary and language specific to the survey's target population. The active voice should be used for greater impact. Crowding of details must be avoided. Italics, bold fonts or underlining may be used to highlight critical information. The tone ought to be polite, respectful, and grateful in advance. The use of capital letters is best avoided, as it is a surrogate for shouting in verbal speech and may impart a bad taste.

The dates of the survey may be communicated in advance so the respondents can prepare to take it at a time conducive to them. When emailing a closed group in a convenience-sampled survey, using the name of the addressee may impart a customized experience, enhance trust building, and possibly improve compliance. Appropriate use of salutations like Mr./Ms./Mrs. may be considered. Various portals such as SurveyMonkey allow researchers to save an address list on the website; these addresses may then be reached using an embedded survey link from a verified email address to minimize the bouncing back of emails.

The body of the cover letter must be short and crisp, and should not exceed 2–3 paragraphs under ideal circumstances. Earnest efforts to protect confidentiality may go a long way in enhancing response rates. 38 While it is enticing to provide incentives to enhance responses, these are best avoided. 38 , 39 In cases where indirect incentives are offered, such as provision of the results of the survey, these should be clearly stated in the cover letter. Lastly, a formal closing note with the signature of the lead investigator is welcome. 38 , 40

Designing questions

Well-constructed questionnaires are essentially the backbone of successful survey-based studies. With this type of research, the primary concern is the adequate promotion and dissemination of the questionnaire to the target population. The selection of the sample population therefore needs to be done with minimal flaws. The method of conducting the survey is an essential determinant of the response rate observed. 41 Broadly, surveys are of two types: closed and open. The method of conducting the survey must be determined according to the sample population.

Various doctors use their own patients as the target demographic, as this improves compliance. However, it is effective only in surveys aimed at a geographically specific, fairly common disease, as the sample size needs to be adequate. Response bias can be identified from data collected from respondent and non-respondent groups. 42 , 43 It is therefore more efficacious to choose a target population whose baseline characteristics are already known. For surveys focused on patients with a rare group of diseases, online surveys or e-surveys can be conducted, and data can also be gathered from multiple national organizations and societies all over the world. 44 , 45 Computer-generated random selection from these data can be used to choose participants, who can then be reached through email or social media platforms such as WhatsApp and LinkedIn. In both of these scenarios, closed questionnaires can be used; these have restricted access, either through a URL link or through e-mail.

In surveys targeting an issue faced by a larger demographic (e.g. pandemics like COVID-19, flu vaccines, or socio-political scenarios), open surveys seem the more viable option, as they can be accessed easily by the majority of the public and ensure a large number of responses, thereby increasing the accuracy of the study. Survey length should be optimal to avoid poor response rates. 25 , 46

SURVEY DISSEMINATION

Uniform distribution of the survey ensures an equitable opportunity for the entire target population to access the questionnaire and participate in it. While deciding on the target demographic, communities should be studied, and the process of "lurking" is sometimes practiced. Multiple sampling methods are available (Fig. 1). 47

The survey can be distributed to the target demographic by email. Even though emails reach a large proportion of the target population, an unknown sender may be blocked, making the use of a personal or previously used email address preferable for correspondence. Adding a cover letter along with the invitation adds a personal touch and is hence advisable. Some platforms allow the sender to link the survey portal with the sender's email after verifying it. Notably, despite repeated email reminders, personal communication over the phone or by instant messaging improved responses in the authors' experience. 48 , 49

Distribution of the survey over other social media platforms (SMPs, namely WhatsApp, Facebook, Instagram, Twitter, LinkedIn etc.) is also practiced. 50 , 51 , 52 Distributing a survey on every available platform ensures maximal outreach. 53 Other smartphone apps can also be used for wider survey dissemination. 50 , 54 It is important to be mindful of the target population while choosing the platform for disseminating the survey, as some SMPs such as WhatsApp are more popular in India, others like WeChat are used more widely in China, and Facebook is similarly popular among the European population. Professional accounts or popular social accounts can be used to promote a survey and increase its outreach. 55 Incentives such as internet giveaways or meet-and-greets with a favorite social media influencer have been used to motivate people to participate.

However, social media platforms do not allow calculation of the denominator of the target population, making it impossible to determine an accurate response rate. Moreover, this method of collecting data may introduce a respondent bias inherent to a community with a greater online presence. 43 The inability to gather the demographics of non-respondents (in a bid to identify and show that they were no different from respondents) can be another challenge in convenience sampling, unlike in cohort-based studies.

Lastly, manual filling of surveys over the telephone, by narrating the questions and answer choices to the respondents, is used as a last-ditch resort to achieve the desired response rate. 56 Studies reveal that surveys released on Mondays, Fridays, and Sundays receive more traction. Reminders set at regular intervals also help receive more responses. Data collection can be improved in collaborative research by syncing surveys to fill out electronic case record forms. 57 , 58 , 59

Data anonymity refers to the protection of data received as part of the survey. These data must be stored and handled in accordance with patient privacy rights and the privacy protection laws applicable to surveys. Ethically, the data should be received in a single source file handled by one individual. Sharing or publishing these data on any public platform is considered a breach of the patient's privacy. 11 In convenience-sampled surveys conducted by emailing a predesignated group, the email addresses must remain confidential, as inadvertently sharing them as supplementary data in the manuscript may amount to a violation of ethical standards. 60 A completely anonymized e-survey avoids the collection of Internet protocol addresses in addition to other patient details such as names and emails.

Data anonymity gives the respondent the confidence to be candid and answer the survey without inhibitions. This is especially apparent in minority groups or communities facing societal bias (sex workers, transgenders, lower caste communities, women). Data anonymity aids in giving the respondents/participants respite regarding their privacy. As the respondents play a primary role in data collection, data anonymity plays a vital role in survey-based research.

DATA HANDLING OF SURVEYS

The data collected from the survey responses are compiled in a .xls, .csv or .xlsx format by the survey tool itself. The data can be viewed during the survey period or after its completion. To ensure data anonymity, a minimal number of people should have access to these results. The data should then be sifted to invalidate false, incorrect or incomplete entries. The relevant and complete data should then be analyzed qualitatively and quantitatively, as per the aim of the study. Statistical aids like pie charts, graphs and data tables can be used to report the relevant data.
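A minimal sketch of this sifting step in Python (pandas) is shown below; the file and column names ("consent", "age") are hypothetical placeholders.

import pandas as pd

# Load the raw export produced by the survey tool
raw = pd.read_csv("survey_export.csv")

# Drop responses missing answers to questions treated as mandatory
clean = raw.dropna(subset=["consent", "age"])

# Invalidate implausible values, e.g. out-of-range ages
clean = clean[clean["age"].between(18, 100)]

print(f"Retained {len(clean)} of {len(raw)} responses for analysis")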

ANALYSIS OF SURVEY DATA

Analysis of the recorded responses is done after the time made available to answer the survey has elapsed. This ensures that statistical and hypothesis-related conclusions are established after careful study of the entire database. Depending on the study, the analysis may be based on complete answers alone or on both complete and incomplete answers. Survey-based studies require careful consideration of various aspects of the survey, such as the time required to complete it. 61 Cut-off points on completion time allow authentic answers to be recorded and analyzed, as opposed to disingenuously completed questionnaires. Methods for handling incomplete questionnaires and atypical timestamps must be decided in advance to maintain consistency. Since surveys were often the only way to reach people during the COVID-19 pandemic, disingenuous survey practices must not be followed, as the results may later be used to form preliminary hypotheses.
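A minimal sketch of applying such a pre-decided completion-time cut-off is given below, assuming the export records each response's duration in a hypothetical "duration_sec" column.

import pandas as pd

responses = pd.read_csv("survey_export.csv")  # hypothetical export file
MIN_SECONDS = 120  # pre-decided minimum plausible completion time

# Keep only responses whose recorded duration meets the cut-off
plausible = responses[responses["duration_sec"] >= MIN_SECONDS]
print(f"Excluded {len(responses) - len(plausible)} atypically fast responses")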

REPORTING SURVEY-BASED RESEARCH

Reporting survey-based research is by far the most challenging part of this method. A well-reported survey-based study is a comprehensive report covering all aspects of conducting survey-based research.

The design of the survey, mentioning the target demographic, sample size, language, type, methodology and the inclusion-exclusion criteria followed, comprises the descriptive part of the report of a survey-based study. Details regarding the conduct of pilot testing, validation testing, reliability testing and user-interface testing add value to the report and support the data and analysis. Measures taken to prevent bias and to ensure consistency and precision are key inclusions. The report usually mentions the approvals received, if any, along with the written informed consent taken from participants to use the data received for research purposes. It also gives a detailed account of the different distribution and promotional methods followed.

A detailed account of the data input and collection methods, along with the tools used to maintain the anonymity of participants and the steps taken to ensure singular participation from individual respondents, indicates a well-structured report. Descriptive information on the website used, the visitors received and the factors externally influencing the survey is also included. Detailed reporting of the post-survey analysis, including the number of analysts involved, any data cleaning required, the statistical analysis done and the probable hypothesis concluded, is a key feature of a well-reported survey-based study. Methods used for statistical corrections, if any, should be included in the report. The EQUATOR network has two checklists, "The Checklist for Reporting Results of Internet E-Surveys" (CHERRIES) statement and "The Journal of Medical Internet Research" (JMIR) checklist, that can be utilized to construct a well-framed report. 62 , 63 Importantly, self-reporting of biases and errors avoids carrying forward false hypotheses as the basis of more advanced research. References should be cited using standard recommendations and guided by the journal's specifications. 64

CHOOSING A TARGET JOURNAL FOR SURVEY-BASED RESEARCH

Surveys can be published as original articles, brief reports or letters to the editor. Interestingly, most modern journals do not actively mention surveys in their instructions to authors. Thus, depending on the study design (cohort, case-control, interview or survey-based), the authors may choose the appropriate article category. It is prudent to mention the type of study in the title. Titles, albeit not too long, should not exceed 10–12 words and may feature the type of study design after a semicolon, for clarity and greater citation potential.

While the choice of journal is largely based on the study subject and left to the authors' discretion, it may be worthwhile to explore trends in a journal's archive before proceeding with submission. 65 Although the article format is similar across most journals, specific rules relevant to the target journal should be followed when drafting the article structure before submission.

RETRACTION OF ARTICLES

Articles that are removed from publication after being released are retracted articles. Articles are usually retracted when new discrepancies come to light regarding the methodology followed, plagiarism, incorrect statistical analysis, inappropriate authorship, fake peer review, fake reporting and the like. 66 A marked increase in such papers has been noticed. 67

We carried out a search for "surveys" on Retraction Watch on 31st August 2020 and received 81 search results published between November 2006 and June 2020, of which 3 were duplicates. Of the remaining 78 results, 37 (47.4%) articles were surveys, 23 (29.5%) were of unknown type and 18 (23.1%) reported other types of research (Supplementary Table 1). Fig. 2 gives a detailed description of the causes of retraction of the surveys we found and their geographic distribution.


A good survey ought to be designed with a clear objective, with a precise and focused design, close-ended questions and all probabilities included. The use of rating scales, multiple-choice questions and checkboxes, and the maintenance of a logical question sequence, engage the respondent while simplifying data entry and analysis for the investigator. Pilot testing is vital to identify and rectify deficiencies in the survey design and answer choices. The target demographic should be well defined and invitations sent accordingly, with periodic reminders as appropriate. While reporting the survey, transparency should be maintained in the methods employed, and shortcomings and biases should be clearly stated, to prevent advocating an invalid hypothesis.

Disclosure: The authors have no potential conflicts of interest to disclose.

Author Contributions:

  • Conceptualization: Gaur PS, Zimba O, Agarwal V, Gupta L.
  • Visualization: Gaur PS, Zimba O, Agarwal V, Gupta L.
  • Writing - original draft: Gaur PS, Gupta L.

SUPPLEMENTARY MATERIAL

Reporting survey-based research


Survey Methodology Program


Since its founding, the Survey Research Center (SRC) at the Institute for Social Research (ISR) has been a principal source of innovation in the methodology of survey research. Innovations have been made in sample design, estimation of sampling variance from complex surveys, questionnaire design, interviewing behavior, computer-assisted measurement technologies, imputation for missing data, and the analysis of survey data. The Survey Methodology Program (SMP) was established within SRC in 1992 with the explicit aim of creating a multidisciplinary team to focus on research methodology.  Thus, the SMP draws upon a range of disciplines including social psychology, cognitive psychology, sociology, statistics, and computer science.

The Survey Methodology Program is staffed by eleven full-time research faculty and research scientists, all of whom are internationally renowned survey statisticians or methodologists (or both) and teach courses for the PSM.  In addition, several other researchers at the University are affiliated with the Program.  Their interests span (but are not limited to) sampling, statistical analysis, interviewing methodology, total survey error, and collaborating with Survey Research Operations staff to develop innovative methods. SMP is structured to combine knowledge from relevant academic disciplines to advance both theory and practice. The mission of the SMP can thus be summarized as (1) conducting research on survey methods, (2) training students in survey methodology, and (3) collaborating with SRO staff to develop innovative methods.

All PSM faculty play an active role in the various research projects that the SMP is conducting at any point in time. For more information about the research activities in which the PSDS faculty are currently involved, please visit the MPSDS faculty page.

Any questions about potential research collaboration with SMP faculty should be directed to Brady West.


What Is a Research Methodology? | Steps & Tips

Published on August 25, 2022 by Shona McCombes and Tegan George. Revised on November 20, 2023.

Your research methodology discusses and explains the data collection and analysis methods you used in your research. A key part of your thesis, dissertation, or research paper, the methodology chapter explains what you did and how you did it, allowing readers to evaluate the reliability and validity of your research and your dissertation topic.

It should include:

  • The type of research you conducted
  • How you collected and analyzed your data
  • Any tools or materials you used in the research
  • How you mitigated or avoided research biases
  • Why you chose these methods

Keep in mind that your methodology section should generally be written in the past tense. Academic style guides in your field may provide detailed guidelines on what to include for different types of studies, and your citation style might provide guidelines for your methodology section (e.g., an APA Style methods section).


Why is a methods section important?

Your methods section is your opportunity to share how you conducted your research and why you chose the methods you chose. It's also the place to show that your research was rigorously conducted and can be replicated.

It gives your research legitimacy and situates it within your field, and also gives your readers a place to refer to if they have any questions or critiques in other sections.

Step 1: Explain your methodological approach

You can start by introducing your overall approach to your research. You have two options here.

Option 1: Start with your “what”

What research problem or question did you investigate?

  • Aim to describe the characteristics of something?
  • Explore an under-researched topic?
  • Establish a causal relationship?

And what type of data did you need to achieve this aim?

  • Quantitative data, qualitative data, or a mix of both?
  • Primary data collected yourself, or secondary data collected by someone else?
  • Experimental data gathered by controlling and manipulating variables, or descriptive data gathered via observations?

Option 2: Start with your “why”

Depending on your discipline, you can also start with a discussion of the rationale and assumptions underpinning your methodology. In other words, why did you choose these methods for your study?

  • Why is this the best way to answer your research question?
  • Is this a standard methodology in your field, or does it require justification?
  • Were there any ethical considerations involved in your choices?
  • What are the criteria for validity and reliability in this type of research? How did you prevent bias from affecting your data?

Step 2: Describe your data collection methods

Once you have introduced your reader to your methodological approach, you should share full details about your data collection methods.

Quantitative methods

In order to be considered generalizable, you should describe quantitative research methods in enough detail for another researcher to replicate your study.

Here, explain how you operationalized your concepts and measured your variables. Discuss your sampling method or inclusion and exclusion criteria, as well as any tools, procedures, and materials you used to gather your data.

Surveys: Describe where, when, and how the survey was conducted.

  • How did you design the questionnaire?
  • What form did your questions take (e.g., multiple choice, Likert scale)?
  • Were your surveys conducted in-person or virtually?
  • What sampling method did you use to select participants?
  • What was your sample size and response rate?

Experiments: Share full details of the tools, techniques, and procedures you used to conduct your experiment.

  • How did you design the experiment?
  • How did you recruit participants?
  • How did you manipulate and measure the variables?
  • What tools did you use?

Existing data: Explain how you gathered and selected the material (such as datasets or archival data) that you used in your analysis.

  • Where did you source the material?
  • How was the data originally produced?
  • What criteria did you use to select material (e.g., date range)?

Example: The survey consisted of 5 multiple-choice questions and 10 questions measured on a 7-point Likert scale.

The goal was to collect survey responses from 350 customers visiting the fitness apparel company’s brick-and-mortar location in Boston on July 4–8, 2022, between 11:00 and 15:00.

Here, a customer was defined as a person who had purchased a product from the company on the day they took the survey. Participants were given 5 minutes to fill in the survey anonymously. In total, 408 customers responded, but not all surveys were fully completed. Due to this, 371 survey results were included in the analysis.

Relevant pitfalls and biases to watch out for with these methods include:

  • Information bias
  • Omitted variable bias
  • Regression to the mean
  • Survivorship bias
  • Undercoverage bias
  • Sampling bias

Qualitative methods

In qualitative research, methods are often more flexible and subjective. For this reason, it's crucial to robustly explain the methodology choices you made.

Be sure to discuss the criteria you used to select your data, the context in which your research was conducted, and the role you played in collecting your data (e.g., were you an active participant, or a passive observer?)

Interviews or focus groups: Describe where, when, and how the interviews were conducted.

  • How did you find and select participants?
  • How many participants took part?
  • What form did the interviews take (structured, semi-structured, or unstructured)?
  • How long were the interviews?
  • How were they recorded?

Participant observation: Describe where, when, and how you conducted the observation or ethnography.

  • What group or community did you observe? How long did you spend there?
  • How did you gain access to this group? What role did you play in the community?
  • How long did you spend conducting the research? Where was it located?
  • How did you record your data (e.g., audiovisual recordings, note-taking)?

Existing data: Explain how you selected case study materials for your analysis.

  • What type of materials did you analyze?
  • How did you select them?

Example: In order to gain better insight into possibilities for future improvement of the fitness store's product range, semi-structured interviews were conducted with 8 returning customers.

Here, a returning customer was defined as someone who usually bought products at least twice a week from the store.

Surveys were used to select participants. Interviews were conducted in a small office next to the cash register and lasted approximately 20 minutes each. Answers were recorded by note-taking, and seven interviews were also filmed with consent. One interviewee preferred not to be filmed.

Relevant biases to watch out for with these methods include:

  • The Hawthorne effect
  • Observer bias
  • The placebo effect
  • Response bias and nonresponse bias
  • The Pygmalion effect
  • Recall bias
  • Social desirability bias
  • Self-selection bias

Mixed methods

Mixed methods research combines quantitative and qualitative approaches. If a standalone quantitative or qualitative study is insufficient to answer your research question, mixed methods may be a good fit for you.

Mixed methods are less common than standalone analyses, largely because they require a great deal of effort to pull off successfully. If you choose to pursue mixed methods, it’s especially important to robustly justify your methods.


Step 3: Describe your analysis method

Next, you should indicate how you processed and analyzed your data. Avoid going into too much detail: you should not start introducing or discussing any of your results at this stage.

In quantitative research, your analysis will be based on numbers. In your methods section, you can include the following (a brief illustration follows the list):

  • How you prepared the data before analyzing it (e.g., checking for missing data, removing outliers, transforming variables)
  • Which software you used (e.g., SPSS, Stata or R)
  • Which statistical tests you used (e.g., two-tailed t test, simple linear regression)
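For instance, a two-tailed independent-samples t test can be run in a few lines of Python with SciPy; the two groups below are illustrative placeholder measurements rather than real data.

from scipy.stats import ttest_ind

# Placeholder measurements for two independent groups
group_a = [5.1, 4.8, 5.5, 5.0, 4.9]
group_b = [4.2, 4.5, 4.1, 4.6, 4.3]

t_stat, p_value = ttest_ind(group_a, group_b)  # two-sided by default
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")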

In qualitative research, your analysis will be based on language, images, and observations (often involving some form of textual analysis).

Specific methods might include the following (a minimal coding-tally sketch follows the list):

  • Content analysis: Categorizing and discussing the meaning of words, phrases and sentences
  • Thematic analysis: Coding and closely examining the data to identify broad themes and patterns
  • Discourse analysis: Studying communication and meaning in relation to their social context
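As a minimal sketch of one mechanical step in thematic analysis, the snippet below tallies hypothetical codes assigned to interview excerpts; the codes themselves are placeholders.

from collections import Counter

# Hypothetical codes assigned to interview excerpts during coding
codes = ["price", "fit", "price", "staff", "fit", "price", "returns"]

for theme, n in Counter(codes).most_common():
    print(f"{theme}: {n}")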

Mixed methods combine the above two research methods, integrating both qualitative and quantitative approaches into one coherent analytical process.

Step 4: Evaluate and justify the methodological choices you made

Above all, your methodology section should clearly make the case for why you chose the methods you did. This is especially true if you did not take the most standard approach to your topic. In this case, discuss why other methods were not suitable for your objectives, and show how this approach contributes new knowledge or understanding.

In any case, it should be overwhelmingly clear to your reader that you set yourself up for success in terms of your methodology's design. Show how your methods should lead to results that are valid and reliable, while leaving the analysis of the meaning, importance, and relevance of your results for your discussion section.

  • Quantitative: Lab-based experiments cannot always accurately simulate real-life situations and behaviors, but they are effective for testing causal relationships between variables.
  • Qualitative: Unstructured interviews usually produce results that cannot be generalized beyond the sample group, but they provide a more in-depth understanding of participants' perceptions, motivations, and emotions.
  • Mixed methods: Despite issues systematically comparing differing types of data, a solely quantitative study would not sufficiently incorporate the lived experience of each participant, while a solely qualitative study would be insufficiently generalizable.

Remember that your aim is not just to describe your methods, but to show how and why you applied them. Again, it’s critical to demonstrate that your research was rigorously conducted and can be replicated.

Tips for writing a strong methodology chapter

1. Focus on your objectives and research questions

The methodology section should clearly show why your methods suit your objectives and convince the reader that you chose the best possible approach to answering your problem statement and research questions.

2. Cite relevant sources

Your methodology can be strengthened by referencing existing research in your field. This can help you to:

  • Show that you followed established practice for your type of research
  • Discuss how you decided on your approach by evaluating existing research
  • Present a novel methodological approach to address a gap in the literature

3. Write for your audience

Consider how much information you need to give, and avoid getting too lengthy. If you are using methods that are standard for your discipline, you probably don’t need to give a lot of background or justification.

Regardless, your methodology should be a clear, well-structured text that makes an argument for your approach, not just a list of technical details and procedures.

Other interesting articles

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

Statistics

  • Normal distribution
  • Measures of central tendency
  • Chi square tests
  • Confidence interval
  • Quartiles & Quantiles

Methodology

  • Cluster sampling
  • Stratified sampling
  • Thematic analysis
  • Cohort study
  • Peer review
  • Ethnography

Research bias

  • Implicit bias
  • Cognitive bias
  • Conformity bias
  • Hawthorne effect
  • Availability heuristic
  • Attrition bias

Frequently asked questions about methodology

Methodology refers to the overarching strategy and rationale of your research project. It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.

Methods are the specific tools and procedures you use to collect and analyze data (for example, experiments, surveys, and statistical tests).

In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section.

In a longer or more complex research project, such as a thesis or dissertation, you will probably include a methodology section, where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.

In a scientific paper, the methodology always comes after the introduction and before the results, discussion and conclusion. The same basic structure also applies to a thesis, dissertation, or research proposal.

Depending on the length and type of document, you might also include a literature review or theoretical framework before the methodology.

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses. Qualitative methods allow you to explore concepts and experiences in more detail.

Reliability and validity are both about how well a method measures something:

  • Reliability refers to the consistency of a measure (whether the results can be reproduced under the same conditions).
  • Validity refers to the accuracy of a measure (whether the results really do represent what they are supposed to measure).

If you are doing experimental research, you also have to consider the internal and external validity of your experiment.

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.
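As a minimal sketch, such a random draw can be reproduced in Python; the roster file is a hypothetical placeholder with one student ID per line.

import random

# Hypothetical roster: one student ID per line
with open("student_roster.txt") as f:
    population = [line.strip() for line in f if line.strip()]

random.seed(42)  # fixed seed so the draw can be reproduced
sample = random.sample(population, k=100)
print(f"Sampled {len(sample)} of {len(population)} students")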

In statistics, sampling allows you to test a hypothesis about the characteristics of a population.



A national survey of digital health company experiences with electronic health record application programming interfaces


Wesley Barker, Natalya Maisel, Catherine E Strawley, Grace K Israelit, Julia Adler-Milstein, Benjamin Rosner, A national survey of digital health company experiences with electronic health record application programming interfaces, Journal of the American Medical Informatics Association , Volume 31, Issue 4, April 2024, Pages 866–874, https://doi.org/10.1093/jamia/ocae006


This study sought to capture current digital health company experiences integrating with electronic health records (EHRs), given new federally regulated standards-based application programming interface (API) policies.

We developed and fielded a survey among companies that develop solutions enabling human interaction with an EHR API. The survey was developed by the University of California San Francisco in collaboration with the Office of the National Coordinator for Health Information Technology, the California Health Care Foundation, and ScaleHealth. The instrument contained questions pertaining to experiences with API integrations, barriers faced during API integrations, and API-relevant policy efforts.

About 73% of companies reported current or previous use of a standards-based EHR API in production. About 57% of respondents indicated using both standards-based and proprietary APIs to integrate with an EHR, and 24% worked about equally with both APIs. Most companies reported use of the Fast Healthcare Interoperability Resources standard. Companies reported that standards-based APIs required on average less burden than proprietary APIs to establish and maintain. However, companies face barriers to adopting standards-based APIs, including high fees, lack of realistic clinical testing data, and lack of data elements of interest or value.

The industry is moving toward the use of standardized APIs to streamline data exchange, with a majority of digital health companies using standards-based APIs to integrate with EHRs. However, barriers persist.

A large portion of digital health companies use standards-based APIs to interoperate with EHRs. Continuing to improve the resources for digital health companies to find, test, connect, and use these APIs “without special effort” will be crucial to ensure future technology robustness and durability.

Over the past decade, and increasingly over the past few years, electronic health record (EHR) developers have implemented application programming interfaces (APIs) in response to the need to open their systems to third-party applications. In particular, as called for in the 2014 JASON report, A Robust Health Data Infrastructure , and Office of the National Coordinator for Health IT (ONC)-funded work led by Substitutable Medical Apps & Reusable Technology (SMART) and the Argonaut Project, standards-based APIs were essential to allow scalable integrations. 1–3 Standards-based APIs harmonize connections across different EHRs and facilitate third-party software integration, thereby improving interoperability by enabling streamlined and secure data exchange. 4 The progress of these efforts and maturity of APIs set the stage for federal regulations, implementing provisions of the 21st Century Cures Act, that made standards-based APIs the default method for third-party applications to access and exchange patient electronic health information from EHRs certified to the criteria and standards adopted by the US Department of Health and Human Services (HHS). 5 In particular, these regulations, finalized in 2020, adopted the Health Level Seven (HL7) Fast Healthcare Interoperability Resources (FHIR) data exchange standard to enable third-party app developers to connect to certified EHRs. 6 Certified health IT developers were required to implement these APIs by 2022.
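To make the mechanics concrete, the sketch below performs a standards-based FHIR search using the publicly documented HL7 FHIR REST conventions; it targets the public HAPI FHIR test server rather than a certified EHR, and it omits the SMART on FHIR / OAuth 2.0 authorization that a production endpoint would require.

import requests

# Public test server; a real certified EHR endpoint would differ and would
# require authorization before any patient data could be returned
BASE_URL = "https://hapi.fhir.org/baseR4"

resp = requests.get(
    f"{BASE_URL}/Patient",                  # FHIR search on the Patient resource
    params={"name": "Smith", "_count": 5},  # standard FHIR search parameters
    headers={"Accept": "application/fhir+json"},
    timeout=30,
)
resp.raise_for_status()
bundle = resp.json()  # a FHIR Bundle of matching Patient resources
print(bundle["resourceType"], len(bundle.get("entry", [])))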

While the intent of these efforts—to improve interoperability—is clear, to what extent and for what use cases these standards-based APIs succeed in doing so is less clear. Historically, a 2016 survey of digital health companies found that a substantial number had attempted integrations with EHRs but encountered barriers, including a lack of developer support from EHR vendors, overall difficulty partnering with EHRs, and high associated costs. 7 A follow-up survey in 2018 found progress in companies’ abilities to integrate with EHRs through APIs, though challenges still remained. 8 Other studies have examined the availability of certain technologies integrated with EHRs (ie, capturing what was successfully integrated) and the overall robustness and durability of individual EHR company’s resources for third-party developers. 4 , 9 However, both prior digital health company surveys took place before the 2022 implementation deadline and included responses from less than 100 companies that had ever integrated their technology with an EHR. We therefore undertook an updated survey of these companies to capture the early impact of these regulations. We specifically sought to assess 3 dimensions. First, it is important to evaluate the use of standards-based versus proprietary EHR APIs to get a snapshot of national progress toward streamlined health data exchange between EHRs and third-party applications. Second, understanding company experiences integrating with specific EHR vendors (eg, Epic, Cerner) as well as the total number of vendors provides insight into the extent of interoperability of digital health company products. Third, it is critical to understand enablers of and barriers to EHR integration to inform ongoing policy and industry efforts to advance APIs and EHR integration.

This study sought to capture current digital health company experiences integrating with EHRs, now that new federally regulated standards-based API policies are in place and being implemented by EHR vendors. The survey covered company experience with EHR API integration, barriers to EHR integrations, and API policy and advancement efforts to ensure a robust perspective from digital health companies who are the primary consumers of these EHR APIs. These perspectives directly inform both policymakers and industry stakeholders on how to deliver next-generation technology solutions to health care providers and consumers. In particular, results will serve to guide the Department of Health and Human Services (HHS) on where ongoing policymaking may be needed to fulfill the intent of the 21st Century Cures Act. Results will also serve to guide EHR vendors and third-party software companies on the prevalence of ecosystem pain points that could lend themselves to private-sector solutions.

Sample data sources

A list of digital health companies to survey was compiled from a variety of data sources. The majority of companies (n = 605) came from a data scraping methodology developed by Barker & Johnson, which pulled company data from public app galleries for EHR-integrated solutions available from 1uphealth, Allscripts, athenahealth, CMS Bluebutton, CARIN Alliance, Cerner Corporation, eMDs, Epic Systems Corporation, Greenway, NextGen, and SMART. 9 Scraped data included the company name; the number of app galleries in which a company was found; the number of unique apps, names of apps, and functional app categories associated with a company; the targeted users of the company's technology; and the company's webpage. Since this method only identified companies that had been successful in integrating at least 1 app with an EHR or EHR-associated platform, we sought to capture a broader set of companies that may have attempted EHR integrations but have not been successful. We supplemented the preliminary list by pulling companies from: (1) a 2020 CB Insights Report titled "The digital health startups transforming the future of healthcare" (n = 20), 10 (2) an analysis of relied-upon software reported through the ONC Health IT Certification Program (n = 9), and (3) members of a national expert advisory board convened to support this project (n = 110) (see Table S1 for the list of members).

Inclusion criteria

Once we developed the list of companies across these 4 sources, we sought to limit it to those that develop solutions that enable human interaction with an API, such as provider-facing apps that access clinical data, either alone or in combination with non-clinical data, as well as patient-facing apps that access clinical or non-clinical data. These criteria exclude companies that solely make solutions that do not enable human interaction with an API, such as external databases or networks that connect to EHRs, apps that enable integration between 2 EHR systems, and provider-facing apps that do not access clinical data—given that these use cases are not the focus of federal regulations and face a different set of challenges. We also sought to exclude companies that make solutions that do not connect to an EHR (primarily those sourced from the CB Insights Report), as well as EHR vendors themselves.

To apply our inclusion criteria, we leveraged the app categories from the data scraping methodology. Companies and apps that were categorized as “clinical use” or “patient care” were included, while companies and apps that were categorized as “administrative” only were excluded. Companies and apps that were categorized as “patient engagement” were manually reviewed to determine inclusion. Manual review primarily involved accessing the app developer’s website or reviewing marketing materials obtained from the online marketplace or gallery to learn more about the app and its intended use. If it was determined that an app’s patient engagement function allowed access to patient records and clinical data, the company was included. For the remaining companies—those that did not have information on their app category, either because they had missing data or were not sourced using the scraping methodology—we first relied on data from the Apple and Google app stores to identify the app’s category. Among apps that could be found in the Apple or Google app stores, those categorized as “medical” were included in our sample, while those categorized as “health and fitness” were excluded. Apps that could not be found in the Apple or Google app stores were manually reviewed by evaluating the marketing materials on the app developer’s website to determine if they met inclusion criteria. This resulted in a final sampling frame of 704 companies.

Survey development

To capture the current state of progress and challenges that digital health companies face when integrating tools with EHRs, we developed and fielded a survey. The survey instrument was developed by the University of California San Francisco (UCSF) in collaboration with ONC, the California Health Care Foundation, and ScaleHealth (a healthcare solutions marketplace). It was refined based on feedback from the expert advisory board. The survey had 3 sections: (1) Experiences with API integrations, (2) barriers faced during API integrations, and (3) API-relevant policy efforts. The survey was pilot tested with 5 companies and then refined based on feedback. The final instrument can be found in the Supplementary Material .

Survey administration

Contact information for a target respondent at each company was sourced by ScaleHealth. The survey was distributed via the survey software Qualtrics and was fielded from June to November 2022.

To maximize the response rate, we employed a variety of outreach strategies. These included individual emails not only from UCSF but also from ScaleHealth, our expert advisors, and together.Health, to target companies with whom they had existing relationships. We also posted the survey link and information to a variety of message boards, online forums, and listservs (which resulted in capturing 9 additional companies not in our original sampling frame that met inclusion criteria), increasing our total sample to 713. These boards, forums, and listservs included Health Tech Nerds, the American Medical Association Innovation Network, HIMSS Accelerate, ScaleHealth email listservs, the Society of Physician Entrepreneurs LinkedIn group, and the CARIN Alliance email listserv. Lastly, we printed business cards with a QR code link to the survey and distributed them to companies at the 2022 HLTH Conference. We followed up with non-respondents up to 15 times over the course of survey administration. Incentives to participate in the survey included listing participating companies on public and peer-reviewed reports, providing a copy of the reports to respondents, and inviting respondents to a special session hosted by ONC during which the results and insights from the findings will be shared.

We conducted a set of descriptive analyses based on survey responses. First, we assessed the organizational demographics of the sample, including company relationship with protected health information (eg, healthcare provider or other covered entity), primary application domain(s), and 2 proxies for size/maturity: company development stage and number of full-time equivalent (FTE) staff working on products that integrate with commercial EHRs.

Our first set of analyses sought to capture use of standards-based versus proprietary APIs. We used survey questions that captured company status of integrations with EHRs via proprietary APIs, standards-based APIs, and third-party integration services (eg, Redox). For each integration type, companies were given the following response options: “Yes, in production (currently or previously),” “Yes, in process but not in production,” “Yes, but stopped (incomplete),” or “No.”

We then measured the relationship between the use of standards-based and proprietary APIs by calculating the percent of companies that used 1 type only (standards-based or proprietary), both types, and neither type. We also examined the relative use of proprietary and standards-based EHR APIs for companies that reported using both types by measuring the percent of respondents that reported using each API predominantly, mostly, or equally. Finally, within each of these groups, we calculated the percent of companies that reported using FHIR at all and the percent that used FHIR “extensively” to assess differences in companies’ use of FHIR in their apps across types of EHR API integrations. As FHIR represents the leading industry data standard for RESTful API-based data exchange, it is important to measure how companies’ adoption and use of the standard is associated with the types of APIs they use to integrate with EHRs.
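A minimal sketch of this cross-classification, assuming hypothetical pandas column names and response codes (not the study's actual variables):

```python
import pandas as pd

# Hypothetical respondent-level data; any "Yes, ..." response counts
# as use of that integration type, mirroring the response options above.
df = pd.DataFrame({
    "standards_api": ["Yes, in production (currently or previously)",
                      "No", "Yes, in process but not in production", "No"],
    "proprietary_api": ["Yes, in production (currently or previously)",
                        "Yes, in production (currently or previously)",
                        "No", "No"],
})

uses_std = df["standards_api"].str.startswith("Yes")
uses_prop = df["proprietary_api"].str.startswith("Yes")

group = pd.Series("neither", index=df.index)
group[uses_std & ~uses_prop] = "standards-based only"
group[~uses_std & uses_prop] = "proprietary only"
group[uses_std & uses_prop] = "both"

# Percent of companies in each group.
print(group.value_counts(normalize=True).mul(100).round(1))
```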

Our second set of analyses focused on experiences integrating with specific EHR vendors (eg, Epic, Cerner) as well as the total number of vendors. Through these analyses, we sought to assess the share of companies that integrate with specific EHRs and how adoption of standards-based APIs varies across companies that integrate with 1 or more EHR vendors. Specifically, we calculated the percent of companies that had a successful integration with an EHR or had 1 underway. We then stratified the use of FHIR by the number of vendors with which a company integrated (1 vendor, 2-3 vendors, 4+ vendors) and calculated the percent of companies that reported using FHIR at all and the percent that reported using FHIR “extensively” to assess whether companies integrating with more than 1 EHR had higher rates of FHIR use. The core impetus for standardizing API-based exchange is to facilitate app and software integrations across multiple EHRs. We evaluated FHIR use this way because it is important to understand whether FHIR adoption by companies in their products correlates with the number of EHRs with which they integrate.
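The vendor-count stratification could look like the following sketch; the data frame and value codes are assumptions for illustration:

```python
import pandas as pd

# Hypothetical data: vendors integrated with and self-reported FHIR use.
df = pd.DataFrame({
    "n_vendors": [1, 2, 3, 5, 1, 4],
    "fhir_use": ["none", "limited", "extensive",
                 "extensive", "limited", "extensive"],
})

# Bin vendor counts as in the analysis: 1, 2-3, and 4+ vendors.
bins = pd.cut(df["n_vendors"], bins=[0, 1, 3, float("inf")],
              labels=["1 vendor", "2-3 vendors", "4+ vendors"])

# Within each bin: percent using FHIR at all, and extensively.
summary = df.groupby(bins, observed=True)["fhir_use"].agg(
    any_fhir=lambda s: (s != "none").mean() * 100,
    extensive=lambda s: (s == "extensive").mean() * 100,
)
print(summary.round(1))
```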

Our third set of analyses focused on enablers and barriers. First, we calculated the percent of companies that endorsed different dimensions of APIs as “moderately critical” or “critical to a great extent” to the company’s ability to work successfully with EHR APIs. The dimensions listed on the survey included technical performance, breadth of data elements, and cost. We then calculated the 10 barriers most frequently reported by companies as “substantial” barriers to integration from a closed list of 20 barriers. We also compared the effort to establish and maintain proprietary and standards-based APIs to show how reported barriers may differently affect companies’ abilities to establish versus maintain EHR integrations. Finally, we examined open-ended responses to questions on (1) high-priority clinical data types for future federally regulated availability and (2) future directions for policy efforts in promoting or enforcing access to data. We performed a text analysis of the free-text responses and report the 5 most common responses (grouped by key terms and themes) for each question.
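Both descriptive steps reduce to counting endorsements, as in this sketch (made-up inputs; the study's actual grouping of free-text responses by terms and themes was more careful than a raw word tally):

```python
from collections import Counter

# (1) Rank closed-list barriers by how often respondents rated them
# "substantial"; each inner list is one respondent's endorsements.
endorsements = [
    ["high fees", "lack of realistic clinical testing data"],
    ["high fees"],
    ["access to data elements", "high fees"],
]
barrier_counts = Counter(b for resp in endorsements for b in resp)
print(barrier_counts.most_common(10))  # top 10 "substantial" barriers

# (2) Tally key terms in free-text answers and keep the 5 most common.
free_text = ["need SDOH data", "claims data and SDOH", "clinical notes"]
term_counts = Counter(w for resp in free_text for w in resp.lower().split())
print(term_counts.most_common(5))
```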

Sample sizes for each measure varied based on item non-response and skip logic (eg, if a company had no API-based EHR integrations, the survey programming logic had them skip many questions). Missing data were excluded from reported percentages. We conducted a non-response bias analysis to compare company characteristics between respondents and non-respondents. We did not apply non-response weighting to reported statistics.
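In practice this means each percentage is computed over the non-missing responses for that item, as in this small sketch (illustrative values):

```python
import pandas as pd

# None marks item non-response or a question skipped by survey logic;
# such records drop out of that item's denominator.
item = pd.Series(["Yes", None, "No", "Yes"])
n = item.notna().sum()
pct_yes = (item == "Yes").sum() / n * 100
print(f"{pct_yes:.0f}% (n = {n})")  # -> 67% (n = 3)
```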

Of the 713 digital health companies on our final list, 125 companies completed the survey and 16 were considered sufficient partial completers (defined as completing through the questions on effort/resources to establish and maintain integrations with EHR vendors), for a response rate of 20%. A summary of respondent characteristics is included in Table 1 .
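As a quick arithmetic check on the reported rate, completes plus sufficient partials over the final sampling frame:

```python
# 125 completes + 16 sufficient partial completes over 713 companies.
print(f"{(125 + 16) / 713:.1%}")  # -> 19.8%, reported as 20%
```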

Table 1. Characteristics of digital health company survey respondents.

Notes: Groups are not mutually exclusive. Denominators differ due to survey question skip logic; some characteristics were collected only from respondents who reported an “in production” or “in process” integration with a commercial EHR.

Use of standards-based and proprietary APIs

Respondents reported using standards-based APIs to integrate their technologies with EHRs at high levels. Overall, 73% of companies reported current or previous use of a standards-based API in production, and another 13% reported having a standards-based API integration in process (Figure 1). The second most frequently reported method for integration with EHRs was proprietary APIs, which 68% of companies reported as having currently or previously in production. About 30% of respondents indicated currently using or having previously used a third-party integration service in production. It was more common for companies to integrate their solutions using the EHR APIs directly than using a third-party integrator.

Figure 1. Digital health company status of integrations with EHRs (N = 141).

A majority of respondents (57%) indicated using both standards-based and proprietary APIs to integrate with an EHR (Figure 2). Overall, 85% of companies reported supporting the FHIR standard as part of their application, with 61% using the standard extensively. Reported use of the FHIR standard was much higher among companies that used a standards-based EHR API (either alone or alongside a proprietary EHR API) compared to those that did not. Among companies using standards-based EHR APIs only, 82% reported use of FHIR in their products, as did 79% of companies using standards-based APIs alongside proprietary APIs; of those FHIR users, 89% and 75%, respectively, reported extensive use of the standard. Conversely, fewer companies that did not use standards-based EHR APIs used the FHIR standard. About 67% of companies using only proprietary APIs to integrate with an EHR and 52% of companies using neither API type reported use of FHIR, with 50% of those FHIR users reporting extensive use of the standard.
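For readers unfamiliar with the standard, a standards-based FHIR integration ultimately comes down to authorized RESTful calls against resource endpoints. Below is a minimal sketch of a FHIR read interaction; the base URL and patient ID are placeholders, and real EHR endpoints require app registration and an OAuth 2.0 access token (eg, via SMART on FHIR):

```python
import requests

base = "https://ehr.example.com/fhir"  # hypothetical FHIR endpoint
token = "..."                          # access token obtained out of band

# Read one Patient resource: GET [base]/Patient/[id]
resp = requests.get(
    f"{base}/Patient/123",
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/fhir+json",
    },
    timeout=10,
)
resp.raise_for_status()
patient = resp.json()                  # a FHIR Patient resource
print(patient.get("resourceType"))     # -> "Patient"
```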

Figure 2. Digital health company use of APIs and the FHIR standard (N = 141).

We found that 24% of companies worked about equally with both standards-based and proprietary APIs and 44% mostly or predominantly used standards-based APIs (Figure 3).

Figure 3. The extent to which digital health companies report currently working with proprietary versus standards-based EHR APIs (N = 141).

EHR vendors

Companies reported successful integrations most frequently with market-leading EHRs, including Epic (64%), athenahealth (37%), and Cerner (36%). An additional 18% (Epic), 13% (athenahealth), and 24% (Cerner) of companies reported that API-based integration efforts were underway (Figure 4).

Figure 4. Status of integrations using varying EHR APIs (N = 141).

About 92% of companies had integrations underway with at least 1 EHR and 78% had integrations underway with 2 or more EHRs. Those companies that worked with more than 1 EHR vendor more frequently reported extensive use of the FHIR standard (Figure 5). Specifically, among companies that worked with more than 1 EHR vendor, 73% reported extensive use of FHIR, compared to 27% of companies working with just 1 EHR vendor. About 47% of companies with integrations with just 1 EHR vendor reported using FHIR in a limited way, and 27% reported no use of the FHIR standard. The percent of companies that reported no use of the FHIR standard was just 9% for companies with integrations with 2-3 EHR vendors and 5% for companies with integrations with 4 or more vendors.

Figure 5. Digital health company respondent use of the FHIR standard, stratified by the number of EHR vendors with which their apps are integrated (N = 141).

Enablers and barriers

Several dimensions were identified by most respondents as critical to a company’s ability to work successfully with APIs (Table 2). Technical performance (61%), breadth of data elements (60%), cost (56%), and quality documentation (51%) were most frequently reported as dimensions that were critical “to a great extent” for successful work with APIs, followed by EHR vendor support (50%) and effort to implement (45%).

Table 2. Percent of digital health company respondents that indicated dimensions were “moderately critical” and “critical to a great extent” for a company’s ability to work successfully with EHR APIs (N = 141).

About 28% of companies rated standards-based APIs as very good across the dimensions critical to a company’s ability to work successfully with an API; this was a larger percentage than for proprietary APIs (25%) but smaller than for API-based third-party integration services (40%).

Barriers continue to pose challenges to digital health companies’ use of EHR APIs. Among companies that reported using APIs, 47% reported high fees associated with accessing an EHR API as a substantial barrier (Figure 6). The next most common challenges were a lack of realistic clinical testing data (41%), limited access to data elements of interest or value through APIs (40%), limited availability of standards-based APIs from the EHR vendor (38%), and a lack of standardized data elements (35%).

Figure 6. Top 10 “substantial” barriers to integrating with EHRs via APIs (N = 141).

Efforts to establish and maintain proprietary and standards-based APIs differed substantially. Companies reported that standards-based APIs were, on average, less burdensome than proprietary APIs to both establish and maintain: 52% and 21% of companies reported that substantial effort was required to establish and maintain proprietary APIs, respectively, compared with just 40% and 13% for standards-based APIs.

Digital health company respondents provided open-ended responses regarding high-priority clinical data types for future federally regulated availability via EHR APIs, as well as future opportunities for policy efforts to promote or enforce access to data. These responses are summarized in Table 3.

Table 3. Five most requested improvements to high-priority clinical data types and future policy opportunities (N = 141).

In brief, respondents indicated interest in federally regulated availability (through EHR APIs) of social determinants of health (SDoH) and demographic data, genomic testing results, prescription and administered medications lists, clinical notes, and claims data. In addition to expanded data element availability, companies frequently highlighted the need for cost controls on EHR integration, as well as enforcement and incentivization of EHR vendor adherence to API standards.

Non-response bias analysis

Given the survey’s relatively low response rate (20%), we assessed non-response bias and found a few statistically significant differences between respondents and non-respondents. These differences, however, were small in magnitude and are unlikely to bias the representativeness of our results (Appendix SA2).
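The paper does not specify the exact tests used; one common approach for a categorical company characteristic is a chi-square test on a respondent-by-characteristic contingency table, sketched here with made-up data:

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Illustrative data: response status and one company characteristic.
df = pd.DataFrame({
    "responded": [True, False, True, False, True, False],
    "stage": ["early", "growth", "growth", "early", "mature", "mature"],
})

table = pd.crosstab(df["responded"], df["stage"])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```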

This study sought to capture current digital health company experiences integrating with EHRs now that new federally regulated standards-based API policies are in place and being implemented by EHR vendors. Our analysis focused on 3 domains: the use of standards-based and proprietary EHR APIs, integrations across EHR vendors, and enablers of and barriers to integrating with EHR APIs. Our results reveal that the majority of respondents use standards-based APIs to integrate with EHRs and support use of the HL7 FHIR standard in their products, likely facilitating their use of standards-based APIs. Although nearly as many companies reported use of proprietary EHR APIs, more companies reported predominantly or mostly using standards-based APIs, signaling that both API types were needed to successfully integrate but that standards-based APIs were more integral. Taken together, this suggests that the field is making important progress toward APIs that streamline data exchange through a common language, but that a notable portion of digital health companies still rely to some extent on non-standards-based APIs.

Substantial barriers such as high fees, a lack of realistic clinical testing data, and a lack of data elements of interest or value indicate that progress has not been without friction. This is further supported by the difference we found in companies’ reported efforts to establish and maintain EHR API integrations: the share of companies reporting substantial effort to establish an integration was more than twice the share reporting substantial effort to maintain one. Companies’ recommendations for improving the current state of integration included federal policy that promotes access through cost controls, testing and validation resources, and an expanded set of data elements available through APIs, which directly address these barriers. Further private sector support and federal policy are needed to ensure APIs are available to reduce barriers to entry and nurture competition “without special effort.”

In particular, results signal an opportunity for industry and ONC to consider and gain input on other high-value use cases not currently adopted in the United States Core Data for Interoperability standard and standards-based APIs. Government and industry efforts, through pilots, standards accelerators, and standards development work groups, can help further standardize the data elements that can be accessed using standards-based APIs.11 ONC also accepts and uses public feedback and complaints on real-world certified health IT use and barriers through the ONC Health IT Feedback portal to inform agency actions.12

Reported barriers related to the uneven availability of APIs and access across different EHRs could lead more digital health companies to focus their integration efforts and customer recruitment on a subset of EHR vendors that provide more robust developer support and a wider availability of data elements beyond the floor set by federal requirements. The percent of companies that integrate with each EHR vendor, however, aligns with the EHR market share we calculated across office-based sites and acute care hospitals derived from recent public data sources.13,14 Even though the EHR marketplace skews toward a few predominant market leaders, it is important to ensure the market remains competitive and the burgeoning app ecosystem is built across all technologies (not just a few leaders). High rates of FHIR use among respondents, especially among companies working with multiple EHR vendors, suggest that FHIR-based APIs are succeeding in supporting apps developed with the intention to scale across multiple EHRs.

Limitations

The sample and respondents may not comprise a representative sample of digital health companies or of all companies actively integrating with and using EHR APIs. Nonetheless, our methodology, which based our sample primarily on a list of companies pulled from public app galleries maintained by EHR vendors and other organizations and then evolved that list based on technical expert input, resulted in a comprehensive list that, to our knowledge, exists nowhere else. Our market research found no other representative list or sampling frame for this study, so novel methods and expert insights were needed to derive a sample of companies knowledgeable and experienced enough to answer the survey’s technical questions.

Our study was also limited primarily to commercial users of EHR APIs and did not include perspectives from clinicians, academic medical center researchers, and other EHR data users who have research and business cases for using the APIs to connect and integrate their technologies and applications with the EHR. Their perspectives are no less important but were determined to be out of scope for this study.

This study used a novel survey and sampling methodology to derive a robust sample of digital health companies to glean novel, national insights into companies’ experiences using EHR APIs and how the industry and federal policy can continue to shape the healthcare technology ecosystem. We found that a high proportion of digital health companies use standards-based APIs to interoperate with EHRs and support standards as part of their product base. The results show that an iterative and inclusive approach that incorporates industry feedback (not just EHRs, but the digital health and app developer community, too) can help push the technical and functional properties of standards-based APIs forward and in step with developer needs. Continuing to improve the resources for digital health companies to find, test, connect, and use these APIs “without special effort” will be crucial to ensure the technology is robust and durable into the future.

We would like to acknowledge Robert Plush, ScaleHealth; Hong Truong, formerly California Health Care Foundation; Christian Johnson, PhD, formerly Office of the National Coordinator for Health IT; and Vaishali Patel, PhD, Office of the National Coordinator for Health IT, for their work informing the survey instrument, sampling frame, and fielding of the final survey.

W.B. contributed to the conception and design of the manuscript; data acquisition, analysis, and interpretation of the data; and drafted and critically revised the manuscript. N.M. contributed to the design, data acquisition, analysis, and interpretation of the data and drafted and critically revised the manuscript. C.S. contributed to the analysis and interpretation of the data and drafted and critically revised the manuscript. G.I. contributed to the design, data acquisition, and analysis of the data and drafted and critically revised the manuscript. J.A.M. contributed to the conception and design of the manuscript; data acquisition and interpretation of the data; and critically revised the manuscript. B.R. contributed to the conception and design of the manuscript; interpretation of the data; and critically revised the manuscript. All authors approved the final manuscript and agree to be accountable for all aspects of the work.

Supplementary material is available at Journal of the American Medical Informatics Association online.

This study was funded by the California Health Care Foundation (CHCF) (grant number G-31685). CHCF was involved in developing the survey instrument and recruiting respondents. CHCF had no role in the collection, analysis, and interpretation of the data, or in preparation of the manuscript.

The authors have no conflicts of interest.

The data underlying this article, even deidentified data, cannot be shared publicly with outside groups to preserve the privacy of individual survey responses. We are able to share aggregated results upon request.

References

1. The MITRE Corporation. A robust health data infrastructure. AHRQ Publication No. 14-0041-EF; April 2014. Accessed May 24, 2023. https://www.healthit.gov/sites/default/files/ptp13-700hhs_white.pdf

2. Office of the National Coordinator for Health IT. Final report: assessing the SHARP experience; July 2014. Accessed May 24, 2023. https://www.healthit.gov/sites/default/files/sharp_final_report.pdf

3. HL7 International. Argonaut project. Accessed May 24, 2023. https://confluence.hl7.org/display/AP/Argonaut+Project+Home

4. Office of the National Coordinator for Health IT. Accelerating application programing interfaces for scientific discovery: app developer and data integrator perspectives; March 2022. Accessed May 24, 2023. https://www.healthit.gov/sites/default/files/page/2022-06/App-Developer-and-Integrator-Perspectives.pdf

5. Anthony E. The Cures Act final rule: interoperability-focused policies that empower patients and support providers; March 9, 2020. Accessed June 22, 2023. https://www.healthit.gov/buzz-blog/21st-century-cures-act/the-cures-final-rule

6. Office of the National Coordinator for Health IT. 21st Century Cures Act: interoperability, information blocking, and the ONC health IT certification program. 85 FR 25642. Accessed May 24, 2023. https://www.federalregister.gov/documents/2020/05/01/2020-07419/21st-century-curesact-interoperability-information-blocking-and-the-onc-health-it-certification

7. California Health Care Foundation. Health 2.0 EMR API report; September 19, 2016. Accessed May 24, 2023. https://www.slideshare.net/health2dev/health-20-emr-api-report

8. California Health Care Foundation. Health 2.0 EMR API report 2018; November 28, 2018. Accessed May 24, 2023. https://www.slideshare.net/health2dev/helath-20-emr-api-report-2018

9. Barker W, Johnson C. The ecosystem of apps and software integrated with certified health information technology. J Am Med Inform Assoc. 2021;28(11):2379-2384.

10. CB Insights. Digital health 150 of 2020: the digital health startups transforming the future of healthcare; August 13, 2020. Accessed June 22, 2023. https://www.cbinsights.com/research/digital-health-startups-redefining-healthcare-2020/

11. Office of the National Coordinator for Health IT. Interoperability standards advisory: United States core data for interoperability. Accessed May 24, 2023. https://www.healthit.gov/isa/united-states-core-data-interoperability-uscdi

12. Office of the National Coordinator for Health IT. Health IT feedback and inquiry portal. Accessed December 12, 2023. https://inquiry.healthit.gov/support/plugins/servlet/desk/portal/2

13. National Center for Health Statistics (NCHS). 2021 National Electronic Health Record Survey (NEHRS). Accessed May 24, 2023. https://www.cdc.gov/nchs/nehrs/about.htm

14. American Hospital Association (AHA). 2022 information technology supplement. Accessed May 24, 2023. https://www.ahadata.com/aha-healthcare-it-database


National Center for Science and Engineering Statistics


The Survey of Federal Funds for Research and Development is an annual census of federal agencies that conduct research and development (R&D) programs and the primary source of information about U.S. federal funding for R&D.

Survey Info

  • Methodology
  • Data
  • Analysis

The Survey of Federal Funds for Research and Development (R&D) is the primary source of information about federal funding for R&D in the United States. The survey is an annual census completed by the federal agencies that conduct R&D programs. Actual data are collected for the fiscal year just completed; estimates are obtained for the current fiscal year.

Areas of Interest

  • Government Funding for Science and Engineering
  • Research and Development

Survey Administration

Synectics for Management Decisions, Inc. (Synectics) performed the data collection for volume 72 (FYs 2022–23) under contract to the National Center for Science and Engineering Statistics.

Survey Details

  • Survey Description (PDF 127 KB)
  • Data Tables (PDF 4.8 MB)

Featured Survey Analysis

Federal R&D Obligations Increased 0.4% in FY 2022; Estimated to Decline in FY 2023.

Survey of Federal Funds for R&D Overview

Methodology and survey description (FYs 2022–23 survey cycle; volume 72)

The annual Survey of Federal Funds for Research and Development (Federal Funds for R&D) is the primary source of information about federal funding for R&D in the United States. The results of the survey are also used in the federal government’s calculation of U.S. gross domestic product at the national and state levels, for policy analysis, and for budget purposes for the Federal Laboratory Consortium for Technology Transfer and the Small Business Innovation Research and Small Business Technology Transfer programs. The survey is sponsored by the National Center for Science and Engineering Statistics (NCSES) within the National Science Foundation (NSF).

Data collection authority

The information is solicited under the authority of the National Science Foundation Act of 1950, as amended, and the America COMPETES Reauthorization Act of 2010.

Major changes to recent survey cycle

No changes were made to the questionnaire for volume 72; the survey was redesigned for volume 71 (see “Changes in questionnaire” in the Technical Notes).

Key survey information

Initial survey year

1951.

Reference period

FYs 2022–23.

Response unit

Federal agencies.

Sample or census

Census.

Population size

The population consists of the 32 federal agencies that conduct R&D programs, excluding the Central Intelligence Agency (CIA).

Sample size

Not applicable; the survey is a census of all federal agencies that conduct R&D programs, excluding the CIA.

Key variables

The survey provides data on federal obligations by the following key variables:

  • Federal agency
  • Field of R&D (formerly field of science and engineering)
  • Geographic location (within the United States and by foreign country or economy)
  • Performer (type of organization doing the work)
  • R&D plant (facilities and major equipment)
  • Type of R&D (research, development, test, and evaluation [RDT&E] for Department of Defense [DOD] agencies)
  • Basic research
  • Applied research
  • Development, also known as experimental development

The survey provides data on federal outlays by the following key variables:

  • R&D (RDT&E for DOD agencies)
  • R&D plant

Note that the variables “R&D,” “type of R&D,” and “R&D plant” in this survey use definitions comparable to those used by the Office of Management and Budget Circular A-11, Section 84 (Schedule C).

Survey Design

Target population

The population consists of the federal agencies that conduct R&D programs, excluding the CIA. For the FYs 2022–23 cycle, a total of 32 federal agencies (14 federal departments and 18 independent agencies) reported R&D data.

Sampling frame

The survey is a census of all federal agencies that conduct R&D programs, excluding the CIA. The agencies are identified from information in the president’s budget submitted to Congress. The Analytical Perspectives volume and the “Detailed Budget Estimates by Agency” section of the appendix to the president’s budget identify agencies that receive funding for R&D.

Sample design

Not applicable.

Data Collection and Processing

Data collection

Synectics for Management Decisions, Inc. (Synectics) performed the data collection for volume 72 (FYs 2022–23) under contract to NCSES. Agencies were initially contacted by e-mail to verify the contact information of each agency-level survey respondent. A Web-based data collection system is used for the survey. Multiple subdivisions of some federal departments were permitted to submit information to create a complete accounting of the departments’ R&D funding activities.

Data collection for Federal Funds for R&D began in May 2023 and continued into September 2023.

Data processing

A Web-based data collection system is used to collect and manage data for the survey. This Web-based system was designed to help improve survey reporting and reduce data collection and processing costs by offering respondents direct online reporting and editing.

All data collection efforts, data imports, and trend checking are accomplished using the Web-based data collection system. The Web-based data collection system has a component that allows survey respondents to enter their data online; it also has a component that allows the contractor to monitor support requests, data entry, and data issues.

Estimation techniques

Published totals are created by summing respondent data; there are no survey weights or other adjustments.
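Because there is no sampling, the estimation step is literally addition, as in this sketch with made-up figures:

```python
import pandas as pd

# Illustrative respondent data; obligations in made-up $ millions.
df = pd.DataFrame({
    "agency": ["DOD", "DOD", "HHS", "NASA"],
    "obligations": [1_200, 800, 2_500, 900],
})

totals = df.groupby("agency")["obligations"].sum()
print(totals)        # per-agency totals (no weights or adjustments)
print(totals.sum())  # published grand total: 5400
```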

Survey Quality Measures

Sampling error

Not applicable.

Coverage error

Given the existence of a complete list of all eligible agencies, there is no known coverage error. The CIA is purposely excluded.

Nonresponse error

There is no unit nonresponse. To increase item response, agencies are encouraged to estimate when actual data are unavailable. The survey instrument allows respondents to enter data or skip data fields. There are several possible sources of nonresponse error by respondents, including inadvertently skipping data fields or skipping data fields when data are unavailable.

Measurement error

Some measurement problems are known to exist in the Federal Funds for R&D data. Some agencies cannot report the full costs of R&D, the final performer of R&D, or R&D plant data.

For example, DOD does not include headquarters’ costs of planning and administering R&D programs, which are estimated at a fraction of 1% of its total cost. DOD has stated that identification of amounts at this level is impracticable.

The National Institutes of Health (NIH) in the Department of Health and Human Services currently has many of its awards in its financial system without any field of R&D code. Therefore, NIH uses an alternate source to estimate its research dollars by field of R&D. NIH uses scientific class codes (based upon history of grant, content of the title, and the name of the awarding institute or center) as an approximation for field of R&D.

The National Aeronautics and Space Administration (NASA) does not include any field of R&D codes in its financial database. Consequently, NASA must estimate what percentage of the agency’s research dollars are allocated into the fields of R&D.

Also, agencies are required to report the ultimate performer of R&D. However, through past workshops, NCSES has learned that some agencies do not always track their R&D dollars to the ultimate performer of R&D. This leads to some degree of misclassification of performers of R&D, but NCSES has not determined the extent of the errors in performer misclassification by the reporting agencies.

R&D plant data are underreported to some extent because of the difficulty some agencies, particularly DOD and NASA, encounter in identifying and reporting these data. DOD’s respondents report obligations for R&D plant funded under the agency’s appropriation for construction, but they are able to identify only a small portion of the R&D plant support that is within R&D contracts funded from DOD’s appropriation for RDT&E. Similarly, NASA respondents cannot separately identify the portions of industrial R&D contracts that apply to R&D plant because these data are subsumed in the R&D data covering industrial performance. NASA R&D plant data for other performing sectors are reported separately.

Data Availability and Comparability

Data availability

Annual data are available for FYs 1951–2023.

Data comparability

Until the release of volume 71 (FYs 2021–22), the information included in this survey had been unchanged since volume 23 (FYs 1973–75), when federal obligations for research to universities and colleges by agency and detailed field of science and engineering were added to the survey. Other variables (such as type of R&D and type of performer) are available from the early 1950s on. The volume 71 survey revisions maintained the four main R&D crosscuts (i.e., type of R&D, field of R&D [previously referred to as field of science and engineering], type of performer, and geographic area) collected previously. However, there were revisions within these crosscuts to ensure consistency with other NCSES surveys. These include revisions to the fields of R&D and the type of performer categories (see Technical Notes, table A-3 for a crosswalk of the fields of science and engineering to the fields of R&D). In addition, new variables were added, such as field of R&D for experimental development (whereas before, survey participants had reported fields of R&D [formerly fields of science] only for basic research and applied research). Grants and contracts for extramural R&D performers and obligations to University Affiliated Research Centers were also added in volume 71.

Every time new data are released, there may be changes to past years’ data because agencies sometimes update older information or reclassify responses for prior years as additional budget data become available. For trend comparisons, use the historical data from only the most recent publication, which incorporates changes agencies have made in prior year data to reflect program reclassifications or other corrections. Do not use data published earlier.

Data Products

Publications

NCSES publishes data from this survey annually in tables and analytic reports available on the Federal Funds for R&D survey page and in the Science and Engineering State Profiles.

Electronic access

Data for major data elements are available in NCSES’s interactive data tool at https://ncsesdata.nsf.gov/ .

Technical Notes

This section covers the survey overview, data collection and processing methods, data comparability (changes), and definitions.

Purpose. The annual Survey of Federal Funds for Research and Development (Federal Funds for R&D) is the primary source of information about federal funding for R&D in the United States. The results of the survey are also used in the federal government’s calculation of U.S. gross domestic product at the national and state levels, for policy analysis, and for budget purposes for the Federal Laboratory Consortium for Technology Transfer and the Small Business Innovation Research and Small Business Technology Transfer programs. In addition, as of volume 71, the Survey of Federal Science and Engineering Support to Universities, Colleges, and Nonprofit Institutions (Federal S&E Support Survey) was integrated into this survey as a module, making Federal Funds for R&D the comprehensive data source on federal science and engineering (S&E) funding to individual academic and nonprofit institutions.

Data collection authority.  The information is solicited under the authority of the National Science Foundation Act of 1950, as amended, and the America COMPETES Reauthorization Act of 2010.

Survey contractor. Synectics for Management Decisions, Inc. (Synectics).

Survey sponsor. The National Center for Science and Engineering Statistics (NCSES) within the National Science Foundation (NSF).

Frequency. Annual.

Initial survey year. 1951.

Reference period. FYs 2022–23.

Response unit. Federal agencies.

Sample or census. Census.

Population size. For the FYs 2022–23 cycle, a total of 32 federal agencies reported R&D data. (See section “Survey Design” for details.)

Sample size. Not applicable; the survey is a census of all federal agencies that conduct R&D programs, excluding the Central Intelligence Agency (CIA).

Target population. The population consists of the federal agencies that conduct R&D programs, excluding the CIA. For the FYs 2022–23 cycle, a total of 32 federal agencies (14 federal departments and 18 independent agencies) reported R&D data.

Sampling frame. The survey is a census of all federal agencies that conduct R&D programs, excluding the CIA. The agencies are identified from information in the president’s budget submitted to Congress. The Analytical Perspectives volume and the “Detailed Budget Estimates by Agency” section of the appendix to the president’s budget identify agencies that receive funding for R&D.

Sample design. Not applicable.

Data collection. Data for FYs 2022–23 (volume 72) were collected by Synectics under contract to NCSES (for a full list of fiscal years canvassed by each survey volume, see Table A-4). Data collection began with an e-mail to each agency to verify the name, phone number, and e-mail address of each agency-level survey respondent. A Web-based data collection system is used for the survey. Because multiple subdivisions of some federal departments completed the survey, there were 72 agency-level respondents: 6 federal departments that reported for themselves, 48 agencies within another 8 federal departments, and 18 independent agencies. However, lower offices could also be authorized to enter data: in Federal Funds for R&D nomenclature, agency-level offices could authorize program offices, program offices could authorize field offices, and field offices could authorize branch offices. When these suboffices are included, there were 725 total respondents: 72 agencies, 95 program offices, 178 field offices, and 380 branch offices.

Since volume 66, each survey cycle collects information for 2 federal government fiscal years: the fiscal year just completed (FY 2022—i.e., 1 October 2021 through 30 September 2022) and the current fiscal year during the start of the survey collection period (i.e., FY 2023). FY 2022 data are completed transactions. FY 2023 data are estimates of congressional appropriation actions and apportionment and reprogramming decisions.

Data collection began on 10 May 2023, and the requested due date for data submissions was 5 August 2023. Data collection was extended until all surveyed agencies provided complete and final survey data in September 2023.

Mode. Federal Funds for R&D uses a Web-based data collection system. The Web-based system consists of a data collection component that allows survey respondents to enter their data online and a monitoring component that allows the data collection contractor to monitor support requests, data entry, and data issues. The Web-based system’s two components are password protected so that only authorized respondents and staff can access them. However, some agencies submit their data in alternative formats such as Excel files, which are later imported into the Web-based system. All edit and trend checks are accomplished through the Web-based system. Final submission occurs through the Web-based system after all edit failures and trend checks have been resolved.

Response rate. The unit response rate is 100%.

Data checking. Data errors in Federal Funds for R&D are flagged automatically by the Web-based data collection system: respondents cannot submit their final data to NCSES until all required fields have been completed without errors. Once data are submitted, specially written SAS programs are run to check each agency’s submission to identify possible discrepancies, to ensure data from all suboffices are included correctly, and to check that there were no inadvertent shifts in reporting from one year to the next. Respondents are contacted to resolve potential reporting errors that cannot be reconciled from their explanatory narratives; explanations of questionable data are noted by survey respondents for NCSES review.
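A trend check of the kind described can be as simple as flagging fields whose values shift sharply from the prior year's submission; the threshold and figures below are assumptions for illustration, not NCSES's actual edit rules:

```python
import pandas as pd

# Prior-year and current-year values for one agency (made-up $ millions).
prior = pd.Series({"basic": 100.0, "applied": 250.0, "development": 400.0})
current = pd.Series({"basic": 105.0, "applied": 120.0, "development": 410.0})

pct_change = (current - prior) / prior * 100
flags = pct_change[pct_change.abs() > 30]  # eg, flag swings over 30%
for field, change in flags.items():
    print(f"Check '{field}': {change:+.0f}% vs prior year")
```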

Imputation. None.

Weighting. None.

Variance estimation. Not applicable.

Sampling error. Not applicable.

Coverage error. Given the existence of a complete list of all eligible agencies, there is no known coverage error. The CIA is purposely excluded.

Nonresponse error. There is no unit nonresponse. To increase item response, agencies are encouraged to estimate when actual data are unavailable. The survey instrument allows respondents to enter data or skip data fields; however, blank fields are not accepted for survey submission, and respondents must either populate the fields with data or with $0 if the question is not applicable. There are several possible sources of nonresponse error by respondents, including inadvertently skipping data fields, skipping data fields when data are unavailable, or entering $0 when specific data are unavailable.

Measurement error. Some measurement problems are known to exist in the Federal Funds for R&D data. Some agencies cannot report the full costs of R&D, the final performer of R&D, or R&D plant data.

For example, the Department of Defense (DOD) does not include headquarters’ costs of planning and administering R&D programs, which are estimated at a fraction of 1% of its total cost. DOD has stated that identification of amounts at this level is impracticable.

The National Institutes of Health (NIH) in the Department of Health and Human Services (HHS) currently has many of its awards in its financial system without any field of R&D code. Therefore, NIH uses an alternate source to estimate its research dollars by field of R&D. NIH uses scientific class codes (based upon history of grant, content of the title, and the name of the awarding institute or center) as an approximation for field of R&D.

Agencies are asked to report the ultimate performer of R&D. However, through past workshops, NCSES has learned that some agencies do not always track their R&D dollars to the ultimate performer of R&D. In the case of transfers to other federal agencies, the originating agency often does not have information on the final disposition of funding made by the receiving agency. Therefore, intragovernmental transfers, which are classified as federal intramural funding, may have some degree of extramural performance. This leads to some degree of misclassification of performers of R&D, but NCSES has not determined the extent of the errors in performer misclassification by the reporting agencies.

Differences in agency and NCSES classification of some performers will also lead to some degree of measurement error. For example, although many university research foundations are legally organized as nonprofit organizations and may be classified as such within a reporting agency’s own system of record, NCSES classifies these as component units of higher education. These classification differences may contribute to differences in findings by the Federal Funds for R&D and the Federal S&E Support Survey in federal agency obligations to both higher education and nonprofit institutions.

R&D plant data are underreported to some extent because of the difficulty some agencies, particularly DOD and NASA, encounter in identifying and reporting these data. DOD’s respondents report obligations for R&D plant that are funded under the agency’s appropriation for construction, but they are able to identify only a small portion of the R&D plant support that is within R&D contracts funded from DOD’s appropriation for research, development, testing, and evaluation (RDT&E). Similarly, NASA respondents cannot separately identify the portions of industrial R&D contracts that apply to R&D plant because these data are subsumed in the R&D data covering industrial performance. NASA R&D plant data for other performing sectors are reported separately.

Data revisions. When completing the current year’s survey, agencies naturally revise their estimates for the last year of the previous report—in this case, FY 2022. Sometimes, survey submissions also reflect reappraisals and revisions in classification of various aspects of agencies’ R&D programs; in those instances, NCSES requests that agencies provide revised prior year data to maintain consistency and comparability with the most recent R&D concepts.

For trend comparisons, use the historical data from only the most recent publication, which incorporates changes agencies have made in prior year data to reflect program reclassifications or other corrections. Do not use data published earlier.

Changes in survey coverage and population. This cycle (volume 72, FYs 2022–23), one department, the Department of Homeland Security (DHS), became the agency respondent instead of continuing to delegate that role to its bureaus; one agency was added as a respondent—the Department of Agriculture’s (USDA’s) Natural Resources Conservation Service; one agency, the Department of Transportation’s Maritime Administration, resumed reporting; and two agencies, the Department of Treasury’s Internal Revenue Service (IRS) and the independent agency the Federal Communications Commission, ceased to report.

Changes in questionnaire.

  • No changes were made to the questionnaire for volume 72.
  • The survey was redesigned for volume 71 (FYs 2021–22). The Federal S&E Support Survey was integrated as the final two questions in the Federal Funds for R&D questionnaire. (NCSES will continue to publish these data separately at https://ncses.nsf.gov/surveys/federal-support-survey/ .)
  • Four other new questions were added to the standard and DOD versions of the questionnaire; the questions covered, for the fiscal year just completed (FY 2021), R&D deobligations (Standard and DOD Question 4), nonfederal R&D obligations by type of agreement (Standard Question 10 and DOD Question 11), R&D obligations provided to other federal agencies (Standard Question 11 and DOD Question 12), and R&D and R&D plant obligations to university affiliated research centers (Standard Question 17 and DOD Question 19). One new question added solely to the DOD questionnaire (DOD Question 6) was about obligations for Small Business Innovation Research and Small Business Technology Transfer for the fiscal year just completed and the current fiscal year at the time of collection (i.e., FYs 2021 and 2022). Many of the other survey questions were reorganized and revised.
  • For volume 71, some changes were made within the questions for consistency with other NCSES surveys. Among the performer categories, federally funded R&D centers (FFRDCs), which in previous volumes were included among the extramural performers, became one of the intramural performers. Other changes include retitling of certain performer categories, where “industry” was changed to “businesses” and “universities and colleges” was changed to “higher education.”
  • For volume 71, “field of R&D” was used instead of the former “field of science and engineering.” The survey started collecting field of R&D information for experimental development obligations; previously, field of R&D information was collected only for research obligations.
  • For volume 71, federal obligations for research performed at higher education institutions, by detailed field of R&D, were collected from all agencies. Previously these data had been collected only from the Departments of Agriculture, Defense, Energy, HHS, and Homeland Security; NASA; and NSF.
  • For volume 71, geographic distribution of R&D obligations was asked of all agencies. Previously, these data had only been collected from the Departments of Agriculture, Commerce, Defense, Energy, HHS, Homeland Security; NASA; and NSF. Agencies are asked to provide the principal location (state or outlying area) of the work performed by the primary contractor, grantee, or intramural organization; assign the obligations to the location of the headquarters of the U.S. primary contractor, grantee, or intramural organization; or, for DOD agencies, list the funds as undistributed for classified funds.
  • For volume 71, collection of data on funding type (stimulus and non-stimulus) was limited to Question 5 on type of R&D.
  • For volume 71, grants and contracts for extramural R&D performers and obligations to University Affiliated Research Centers were added.
  • For volume 70 (FYs 2020–21), agencies were requested to report COVID-19 pandemic-related R&D from the agency’s initial appropriations, as well as from any stimulus funds received from the Coronavirus Aid, Relief, and Economic Security (CARES) Act, plus any other pandemic-related supplemental appropriations. Two tables in the questionnaire were modified to collect the stimulus and non-stimulus amounts separately (tables 1 and 2), and seven tables in the questionnaire (tables 6.1, 6.2, 7.1, 11.1, 11.2, 12.1, and 13.1) were added for respondents to specify stimulus and non-stimulus funding by various categories. The data on stimulus funding are reported in volume 70’s data table 132. The Biomedical Advanced Research and Development Authority accounted for 66% of all COVID-19 R&D in FY 2020; these obligations primarily include transfers to other agencies to help facilitate execution of contractual awards under Operation Warp Speed.
  • For volume 70 (FYs 2020–21), the optional narrative tables that ask for comparisons of the R&D obligations reported in Federal Funds for R&D with corresponding amounts in the Federal S&E Support Survey (standard questionnaire only) were renumbered from tables 6B and 6C to tables 6A and 6B.
  • In volumes 68 (FYs 2018–19) and 69 (FYs 2019–20), table 6A, which collected information on federal intramural R&D obligations, was deactivated, and agencies were instructed not to complete it.
  • For volumes 66 (FYs 2016–17) and 67 (FYs 2017–18), table 6A (formerly table VI.A) was included, but it was modified so that it no longer collected laboratory names.
  • Starting with volume 66 (FYs 2016–17), the survey collects 2 federal government fiscal years—actual data for the fiscal year just completed and estimates for the current fiscal year. Previously, the survey also collected projected obligations for the next fiscal year based on the president’s budget request to Congress. For volume 66, data were collected for only 2 fiscal years due to the delayed FY 2018 budget formulation process. However, after consultation with data users, NCSES determined that the projections were not as useful as the budget authority data presented in the budget request.
  • In volume 66, the survey table numbering was changed from Roman numerals I–XI and, for selected agencies, the letters A–E, to Arabic numerals 1–16. The order of tables remained the same.
  • In the volume 66 DOD-version of the questionnaire, the definition of major systems development was changed to represent DOD Budget Activities 4 through 6 instead of Budget Activities 4 through 7, and questions relating to funding for Operational Systems Development (Budget Activity 7) were added to the instrument. The survey’s narrative tables 6 and 11 were removed from the DOD-version of the questionnaire.
  • For volume 65 (FYs 2015–17), the survey reintroduced table VI.A to collect information on federal intramural R&D obligations, including the names and addresses of all federal laboratories that received federal intramural R&D obligations. The table was included in both the standard and DOD questionnaires.
  • For volume 62 (FYs 2012–14), the survey added table VI.A to the standard questionnaire for that volume only to collect information on FY 2012 federal intramural R&D obligations, including the names and addresses of all federal laboratories that received federal intramural R&D obligations.
  • In volumes 59 (FYs 2009–11) and 60 (FYs 2010–12), questions relating to funding from the American Recovery and Reinvestment Act of 2009 (ARRA) were added to the data collection instruments. The survey collected separate outlays and obligations for ARRA and non-ARRA sources of funding, by performer and geography for FYs 2009 and 2010.
  • Starting with volume 59 (FYs 2009–11), federal funding data were requested in actual dollars (instead of rounded in thousands, as was done through volume 58).

Changes in reporting procedures or classification.

  • FY 2022. During the volume 72 cycle (FYs 2022–23), NASA revised its FY 2021 data by field of R&D and performer categories based on improved classification procedures developed during the volume 72 reporting period.
  • FY 2021. During the volume 71 cycle (FYs 2021–22), NCSES decided to remove “U.S.” from names like “U.S. Space Force” to conform with other surveys. For Federal Funds for R&D, this change will first appear in the detailed statistical tables.
  • FY 2020. For volume 70 (FYs 2020 and 2021), data include obligations from supplemental COVID-19 pandemic-related appropriations (e.g., CARES Act) plus any other pandemic-related supplemental appropriations.
  • FY 2020. The Department of Energy’s (DOE’s) Naval Reactor Program reclassified some of its R&D obligations from industry-administered FFRDCs to the industry sector.
  • FY 2020. The Department of the Air Force (AF) and the DOE’s Energy Efficiency and Renewable Energy (EERE) partially revised their FY 2019 data. AF revised its operational system development classified program numbers for businesses excluding business or industry-administered FFRDCs, and EERE revised its outlay numbers.
  • FY 2019. For volume 69 (FYs 2019–20), FY 2020 preliminary data do not include obligations from supplemental COVID-19 pandemic-related appropriations (e.g., CARES Act).
  • FY 2019. The Biomedical Advanced Research and Development Authority began reporting. For volume 69 (FYs 2019–20), it could not submit any geographical data, so its data were reported as undistributed on the state tables.
  • FY 2019. The U.S. Agency for Global Media (formerly the Broadcasting Board of Governors), which did not report data between FY 2008 and FY 2018, resumed reporting.
  • FY 2018. The HHS Centers for Medicare and Medicaid (CMS) funding was reported by the CMS Office of Financial Management at an agency-wide level instead of by the CMS Center for Medicare and Medicaid Innovation and its R&D group, the Office of Research, Development, and Information, which used to report at a component level.
  • FY 2018. The Department of State added the Global Health Programs R&D funding.
  • FY 2018. The Department of Veterans Affairs added funds for the Medical Services support to the existing R&D funding to fully report the total cost of intramural R&D. Although the Medical Services do not directly fund specific R&D activities, they host intramural research programs that were not previously reported.
  • FY 2018. DHS’s Countering Weapons of Mass Destruction (CWMD) Office was established on 7 December 2017. CWMD consolidated primarily the Domestic Nuclear Detection Office (DNDO) and a majority of the Office of Health Affairs, as well as other DHS elements. Prior to FY 2018, data reported for the CWMD would have been under the DNDO.
  • FY 2018. DOE revised its FYs 2016 and 2017 data after discovering its Office of Fossil Energy reported “in thousands” instead of actual dollars for volumes 66 (FYs 2016–17) and 67 (FYs 2017–18).
  • FY 2018. USDA’s Economic Research Service (ERS) partially revised its FYs 2009 and 2010 data during the volume 61 (FYs 2011–13) cycle. NCSES discovered a discrepancy that was corrected during the volume 68 cycle, completing the revision.
  • FY 2018. DHS’s Transportation Security Administration, which did not report data between FY 2010 and FY 2017, resumed reporting for volume 68 (FYs 2018–19).
  • FY 2018. DHS’s U.S. Secret Service, which did not report data between FY 2009 and FY 2017, resumed reporting for volume 68 (FYs 2018–19).
  • FY 2018. NCSES discovered that in some past volumes, the obligations reported for basic research in certain foreign countries were greater than the corresponding obligations reported for R&D; the following data were corrected as a result: DOD and Chemical and Biological Defense FY 2003 data, defense agencies and activities FY 2003 and FY 2011 data, AF FY 2009 data, and Department of the Navy FY 2005, FY 2011, and FY 2013 data; DOE and Office of Science FY 2009 data; HHS and Centers for Disease Control and Prevention (CDC) FY 2008 and FY 2017 data; and NSF FY 2001 data. NCSES also discovered that some obligations reported for academic performers were greater than the corresponding obligations reported for total performers, and DOD and AF FY 2009 data, DOE and Fossil Energy FY 1999 data, and NASA FY 2008 data were corrected. Finally, NCSES discovered a problem with FY 2017 HHS CDC personnel costs data, which were then also corrected.
  • FY 2017. The Department of the Treasury’s IRS performed a detailed evaluation and assessment of its programs and determined that none of its functions qualifies as R&D activity as defined in Office of Management and Budget (OMB) Circular A-11. The review included discussions with program owners and relevant contractors who perform work on behalf of the IRS. The IRS also provided a negative response to the OMB data call on R&D under Circular A-11 for the same reference period (FYs 2017–18). Despite no longer having any R&D obligations, the IRS still sponsors an FFRDC, the Center for Enterprise Modernization.
  • FY 2017. NASA estimated that the revised OMB definition for "experimental development" reduced its reported R&D total by about $2.7 billion in FY 2017 and $2.9 billion in FY 2018 from what would have been reported under the previous definition prior to volume 66 (FYs 2016–17).
  • FY 2017. The Patient-Centered Outcomes Research Trust Fund (PCORTF) was established by Congress through the Patient Protection and Affordable Care Act of 2010, signed by the president on 23 March 2010. PCORTF began reporting for volume 67 (FYs 2017–18), but it also submitted data for FYs 2011–16.
  • FY 2017. The Tennessee Valley Authority, which did not report data between FY 1999 and FY 2016, resumed reporting for volume 67 (FYs 2017–18).
  • FY 2017. The U.S. Postal Service, which did not report data between FY 1999 and FY 2016, resumed reporting for volume 67 (FYs 2017–18) and submitted data for FYs 2015–16.
  • FY 2017. During the volume 67 (FYs 2017–18) data collection, DHS’s Science and Technology Directorate revised its FY 2016 data.
  • FY 2016. The Administrative Office of the U.S. Courts began reporting as of volume 66 (FYs 2016–17).
  • Beginning with FY 2016, the totals reported for development obligations and outlays reflect a narrower definition of the category as “experimental development.” Most notably, totals for development do not include DOD Budget Activity 7 (Operational System Development) obligations and outlays. Those funds, previously included in DOD’s development totals, support efforts to upgrade systems that have been fielded or have received approval for full-rate production and that anticipate production funding in the current or subsequent fiscal year. Therefore, the data are not directly comparable with totals reported in previous years.
  • Prior to the volume 66 launch, the definitions of basic research, applied research, experimental development, R&D, and R&D plant were revised to match the definitions used by OMB in the July 2016 version of Circular A-11, Section 84 (Schedule C).
  • FYs 2016–17. Before the volume 66 survey cycle, NSF updated the list of foreign performers in Federal Funds R&D to match the list of countries and territories in the Department of State’s Bureau of Intelligence and Research fact sheet of Independent States in the World and fact sheet of Dependencies and Areas of Special Sovereignty. Country lists in volume 66 data tables and later may differ from those in previous reports.
  • FY 2015. The HHS Administration for Community Living (ACL) began reporting in FY 2015, replacing the Administration on Aging, which was transferred to ACL when ACL was established on 18 April 2012. Several programs that serve older adults and people with disabilities were transferred from other agencies to ACL, including a number of programs from the Department of Education due to the 2014 Workforce Innovation and Opportunities Act.
  • FY 2015. The Department of the Interior’s Bureau of Land Management and U.S. Fish and Wildlife Service, which did not report data between FY 1999 and FY 2014, resumed reporting.
  • In January 2014, all Research and Innovative Technology Administration programs were transferred into the Office of the Assistant Secretary for Research and Technology in the Office of the Secretary of Transportation.
  • FY 2014. DHS’s Domestic Nuclear Detection Office began reporting for FY 2014.
  • FY 2014. The Department of State data for FY 2014 were excluded due to their poor quality.
  • FY 2013. NASA revamped its reporting process; as a result, data for FY 2012 forward are not directly comparable with totals reported in previous years.
  • FY 2012. NASA began reporting International Space Station (ISS) obligations as research rather than R&D plant.
  • Starting with volume 62 (FYs 2012–14), an “undistributed” category was added to the geographic location tables for DOD obligations for which the location of performance is not reported. It includes DOD obligations for industry R&D that were included in individual state totals prior to FY 2012 and DOD obligations for other performers that were not reported prior to FY 2011. This change was applied retroactively to FY 2011 data.
  • Starting with volume 61 (FYs 2011–13), DOD subagencies other than the Defense Advanced Research Projects Agency were reported as an aggregate total under other defense agencies to enable complete reporting of DOD R&D (both unclassified and classified). Consequently, DOD began reporting additional classified R&D not previously reported by its subagencies.
  • FY 2011. USDA’s ERS partially revised its data for FYs 2009 and 2010 during the volume 61 (FYs 2011–13) cycle.
  • FY 2010. NASA resumed reporting ISS obligations as R&D plant.
  • FYs 2000–09. Beginning in FY 2000, AF did not report Budget Activity 6.7 Operational Systems Development data because the agency misunderstood the reporting requirements. During the volume 57 data collection cycle, AF edited prior year data for FYs 2000–07 to include Budget Activity 6.7 Operational Systems Development data. These data revisions were derived from FY 2007 distribution percentages that were then applied backward to revise data for FYs 2000–06.
  • FYs 2006–07. NASA’s R&D obligations decreased by $1 billion. Of this amount, $850 million was accounted for by obligations for operational projects that NASA excluded in FY 2007 but reported in FY 2006. The remainder was from an overall decrease in obligations between FYs 2006 and 2007.
  • FY 2006. NASA reclassified funding for the following items as operational costs: Space Operations, the Hubble Space Telescope, the Stratospheric Observatory for Infrared Astronomy, and the James Webb Space Telescope. This funding was previously reported as R&D plant.
  • FYs 2005–07. Before the volume 55 survey cycle, NSF updated the list of foreign performers in Federal Funds R&D to match the list of countries and territories in the Department of State’s Bureau of Intelligence and Research fact sheet of Independent States in the World and fact sheet of Dependencies and Areas of Special Sovereignty. Area and country lists in volume 55 data tables and later may differ from those in previous reports.
  • FYs 2004–06. NASA implemented a full-cost budget approach, which includes all of the direct and indirect costs for procurement, personnel, travel, and other infrastructure-related expenses relative to a particular program and project. NASA’s data for FY 2004 and later years may not be directly comparable with its data for FY 2003 and earlier years.
  • FY 2004. NIH revised its financial database; beginning with FY 2004, NIH records no longer contain information on the field of S&E. Data for FY 2004 and later years are not directly comparable with data for FY 2003 and earlier years.
  • Data for FYs 2003–06 from the Substance Abuse and Mental Health Services Administration (SAMHSA) are estimates based on SAMHSA's obligations by program activity budget and previously reported funding for development.
  • FY 2003. SAMHSA reclassified some of its funding categories as non-R&D that had been considered to be R&D in prior years.
  • On 25 November 2002, the president signed the Homeland Security Act of 2002, establishing DHS. DHS includes the R&D activities previously reported by the Federal Emergency Management Agency, the Science and Technology Directorate, the Transportation Security Administration, the U.S. Coast Guard, and the U.S. Secret Service.
  • FY 2000. NASA reclassified the ISS as a physical asset, reclassified ISS Research as equipment, and transferred funding for the program from R&D to R&D plant.
  • FY 2000. NIH reclassified as research the activities that it had previously classified as development. NIH data for FY 2000 forward reflect this change. For more information on the classification changes at NASA and NIH, refer to Classification Revisions Reduce Reported Federal Development Obligations (InfoBrief NSF 02-309), February 2002, available at https://www.nsf.gov/statistics/nsf02309 .
  • FYs 1996–98. The lines on the survey instrument for the special foreign currency program and for detailed field of S&E were eliminated beginning with the volume 46 survey cycle. Two tables depicting data on foreign performers by region, country, and agency that were removed before publication of volume 43 were reinstated with volume 46.
  • FYs 1994–96. During the volume 44 survey cycle, the Director for Defense Research and Engineering (DDR&E) at DOD requested that NSF further clarify the true character of DOD’s R&D program, particularly as it compares with other federal agencies, by adding more detail to development obligations reported by DOD respondents. Specifically, DOD requested that NSF allow DOD agencies to report development obligations in two separate categories: advanced technology development and major systems development. An excerpt from a letter written by Robert V. Tuohy, Chief, Program Analysis and Integration at DDR&E, to John E. Jankowski, Program Director, Research and Development Statistics Program, Division of Science Resources Statistics, NSF, explains the reasoning behind the DDR&E request: “The DOD’s R&D program is divided into two major pieces, Science and Technology (S&T) and Major Systems Development. The other federal agencies’ entire R&D programs are equivalent in nature to DOD’s S&T program, with the exception of the Department of Energy and possibly NASA. Comparing those other agency programs to DOD’s program, including the development of weapons systems such as F-22 Fighter and the New Attack Submarine, is misleading.”
  • FYs 1990–92. Since volume 40, DOD has reported research obligations and development obligations separately. Tables reporting obligations for research, by state and performer, and obligations for development, by state and performer, were specifically created for DOD. Circumstances specific to DOD are (1) DOD funds the preponderance of federal development and (2) DOD development funded at institutions of higher education is typically performed at university-affiliated nonacademic laboratories, which are separate from universities’ academic departments, where university research is typically performed.
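The range checks described in the FY 2018 note above are simple to express in code. The sketch below shows minimal versions of the two rules NCSES applied, one flagging basic research obligations that exceed total R&D and one flagging academic-performer obligations that exceed total-performer obligations. The column names and figures are hypothetical, not the survey's actual data layout.

```python
import pandas as pd

# Hypothetical crosstab cells: one row per agency/year (figures invented).
df = pd.DataFrame({
    "agency": ["DOD", "HHS", "NSF"],
    "fy": [2003, 2008, 2001],
    "basic_research": [120.0, 450.0, 300.0],
    "total_rd": [100.0, 500.0, 250.0],          # basic > total flags DOD, NSF
    "academic_performers": [30.0, 40.0, 20.0],
    "total_performers": [50.0, 35.0, 60.0],     # academic > total flags HHS
})

# Rule 1: basic research obligations cannot exceed total R&D obligations.
bad_basic = df[df["basic_research"] > df["total_rd"]]

# Rule 2: academic-performer obligations cannot exceed total-performer obligations.
bad_academic = df[df["academic_performers"] > df["total_performers"]]

for _, row in pd.concat([bad_basic, bad_academic]).iterrows():
    print(f"check failed: {row['agency']} FY {int(row['fy'])}")
```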

Agency and subdivision. An agency is an organization of the federal government whose principal executive officer reports to the president. The Library of Congress and the Administrative Office of the U.S. Courts are also included in the survey, even though the chief officer of the Library of Congress reports to Congress and the U.S. Courts are part of the judicial branch. Subdivision refers to any organizational unit of a reporting agency, such as a bureau, division, office, or service.

Development. See R&D and R&D plant.

Fields of R&D (formerly fields of science and engineering). A list of the 41 fields of R&D reported on can be found on the survey questionnaire. In the data tables, the fields are grouped into 9 major areas: computer and information sciences; geosciences, atmospheric sciences, and ocean sciences; life sciences; mathematics and statistics; physical sciences; psychology; social sciences; engineering; and other fields. Table A-3 provides a crosswalk of the fields of science and engineering used in volume 70 and earlier surveys to the revised fields of R&D collected under volume 71.
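Table A-3 itself is not reproduced here, but a crosswalk of this kind is applied mechanically. The sketch below illustrates the idea with a few hypothetical mappings from legacy fields of science and engineering to the revised fields of R&D; `FIELD_CROSSWALK` and `to_field_of_rd` are illustrative names, not survey code.

```python
# Illustrative crosswalk from legacy fields of S&E (volume 70 and earlier)
# to revised fields of R&D (volume 71). The real mapping is in table A-3;
# these entries are hypothetical examples only.
FIELD_CROSSWALK = {
    "atmospheric sciences": "geosciences, atmospheric sciences, and ocean sciences",
    "biological sciences": "life sciences",
    "mathematics": "mathematics and statistics",
}

def to_field_of_rd(legacy_field: str) -> str:
    # Fall back to the catchall area when a legacy field has no direct match.
    return FIELD_CROSSWALK.get(legacy_field.lower(), "other fields")

print(to_field_of_rd("Mathematics"))  # mathematics and statistics
```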

Federal obligations for research performed at higher education institutions, by detailed field of R&D. As of volume 71, all respondents were required to report these obligations. Previously, this information was reported by seven agencies (the Departments of Agriculture, Defense, Energy, Health and Human Services, and Homeland Security; NASA; and NSF).

Geographic distribution of R&D obligations. As of volume 71, all respondents were required to respond to this portion of the survey. Previously, only the 11 largest R&D funding agencies responded (the Departments of Agriculture, Commerce, Defense, Energy, Health and Human Services, Homeland Security, the Interior, and Transportation; the Environmental Protection Agency; NASA; and NSF). Respondents are asked either to report the principal location (state or outlying area) of the work performed by the primary contractor, grantee, or intramural organization; to assign the obligations to the location of the headquarters of the U.S. primary contractor, grantee, or intramural organization; or to list the funds as undistributed.
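This three-way reporting rule can be summarized as a small helper. The function and field names below are hypothetical, not part of the survey instrument; the sketch simply mirrors the order of preference stated above.

```python
from typing import Optional

def geographic_location(principal_state: Optional[str],
                        hq_state: Optional[str]) -> str:
    """Pick the location to which an obligation is assigned
    (hypothetical helper, not survey code)."""
    if principal_state:      # preferred: where the work is actually performed
        return principal_state
    if hq_state:             # fallback: the U.S. performer's headquarters
        return hq_state
    return "undistributed"   # location of performance not reported

print(geographic_location(None, "VA"))  # VA
```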

Obligations and outlays. Obligations represent the amounts for orders placed, contracts awarded, services received, and similar transactions during a given period, regardless of when funds were appropriated and when future payment of money is required. Outlays represent the amounts for checks issued and cash payments made during a given period, regardless of when funds were appropriated.
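As a toy illustration of the distinction, consider a hypothetical $10 million contract awarded in FY 2023 and paid out across two fiscal years; the figures and field names below are invented.

```python
# A $10M contract awarded in FY 2023 is a FY 2023 obligation in full,
# while the outlays fall in the years the checks are actually issued.
transactions = [
    {"fy": 2023, "obligated": 10_000_000, "outlaid": 4_000_000},
    {"fy": 2024, "obligated": 0, "outlaid": 6_000_000},
]

for t in transactions:
    print(f"FY {t['fy']}: obligations ${t['obligated']:,}, outlays ${t['outlaid']:,}")
# FY 2023: obligations $10,000,000, outlays $4,000,000
# FY 2024: obligations $0, outlays $6,000,000
```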

Performer. A group or organization carrying out an operational function, or an extramural organization or person receiving support or providing services under a contract or grant.

  • Intramural performers are agencies of the federal government (including federal employees who work on R&D both onsite and offsite) and, as of volume 71, FFRDCs.
  • Federal. The work of agencies of the federal government is carried out directly by agency personnel. Obligations reported under this category are for activities performed or to be performed by the reporting agency itself or are for funds that the agency transfers to another federal agency for performance of R&D (intragovernmental transfers). Although the receiving agency may obligate these funds to extramural performers (businesses, universities and colleges, other nonprofit institutions, FFRDCs, nonfederal government, and foreign), they are reported as part of the federal sector by the originating agency. Federal activities cover not only actual intramural R&D performance but also the costs associated with administration of intramural R&D programs and extramural R&D procurements by federal personnel. Intramural activities also include the costs of supplies and off-the-shelf equipment (equipment that has gone beyond the development or prototype stage) procured for use in intramural R&D. For example, an operational launch vehicle purchased from an extramural source by NASA and used for intramural performance of R&D is reported as part of the cost of intramural R&D.
  • Federally funded research and development centers (FFRDCs)—R&D-performing organizations that are exclusively or substantially financed by the federal government and are supported by the federal government either to meet a particular R&D objective or, in some instances, to provide major facilities at universities for research and associated training purposes. Each center is administered by an industrial firm, a university, or another nonprofit institution (see https://www.nsf.gov/statistics/ffrdclist/ for the Master Government List of FFRDCs maintained by NSF).
  • Extramural performers are organizations outside the federal sector that perform R&D with federal funds under contract, grant, or cooperative agreement. Only costs associated with actual R&D performance are reported. Types of extramural performers:
  • Businesses (previously “Industry or industrial firms”)—Organizations that may legally distribute net earnings to individuals or to other organizations.
  • Higher education institutions (previously “Universities and colleges”)—Institutions of higher education in the United States that engage primarily in providing resident or accredited instruction for not less than a 2-year program above the secondary school level that is acceptable for full credit toward a bachelor’s degree, or not less than a 1-year program of training above the secondary school level that prepares students for gainful employment in a recognized occupation. Included are colleges of liberal arts; schools of arts and sciences; professional schools, as in engineering and medicine, including affiliated hospitals and associated research institutes; and agricultural experiment stations. Other examples include community colleges, 4-year colleges, universities, and freestanding professional schools (medical schools, law schools, etc.).
  • Other nonprofit institutions—Private organizations, other than educational institutions, whose net earnings do not benefit private stockholders or individuals, and other private organizations organized for the exclusive purpose of turning over their entire net earnings to such nonprofit organizations. Examples of nonprofit institutions include foundations, trade associations, charities, and research organizations.
  • State and local governments—State and local government agencies, excluding state or local universities and colleges, agricultural experiment stations, medical schools, and affiliated hospitals. (Federal R&D funds obligated directly to such state and local institutions are excluded from this category; they are included under the universities and colleges category in this report.) R&D activities under the state and local governments category are performed either by the state or local agencies themselves or by other organizations under grants or contracts from such agencies. Regardless of the ultimate performer, federal R&D funds directed to state and local governments are reported only under this sector.
  • Non-U.S. performers (previously “Foreign performers”)—Other nations’ citizens, organizations, universities and colleges, and governments, as well as international organizations located outside the United States, that perform R&D. In most cases, foreign nationals performing R&D in the United States are not reported here. Excluded from this category are U.S. agencies, U.S. organizations, and U.S. citizens performing R&D abroad for the federal government. Examples of non-U.S. performers include the North Atlantic Treaty Organization, the United Nations Educational, Scientific, and Cultural Organization, and the World Health Organization. An exception was made in the past for U.S. citizens performing R&D abroad under special foreign-currency funds; these activities were included under the foreign performers category but have not been collected since the mid-1990s.
  • Private individuals—When an R&D grant or contract is awarded directly to a private individual, the obligations incurred are reported under the businesses category.

R&D and R&D plant. Amounts for R&D and R&D plant include all direct, incidental, or related costs resulting from, or necessary to, performance of R&D and costs of R&D plant as defined below, regardless of whether R&D is performed by a federal agency (intramurally) or by private individuals and organizations under grant or contract (extramurally). R&D excludes routine product testing, quality control, mapping and surveys, collection of general-purpose statistics, experimental production, and the training of scientific personnel.

  • Research is defined as systematic study directed toward fuller scientific knowledge or understanding of the subject studied. Research is classified as either basic or applied, according to the objectives of the sponsoring agency.
  • Basic research is defined as experimental or theoretical work undertaken primarily to acquire new knowledge of the underlying foundations of phenomena and observable facts. Basic research may include activities with broad or general applications in mind, such as the study of how plant genomes change, but should exclude research directed toward a specific application or requirement, such as the optimization of the genome of a specific crop species.
  • Applied research is defined as original investigation undertaken in order to acquire new knowledge. Applied research is, however, directed primarily toward a specific practical aim or objective.
  • Development , also known as experimental development, is defined as creative and systematic work, drawing on knowledge gained from research and practical experience, which is directed at producing new products or processes or improving existing products or processes. Like research, experimental development will result in gaining additional knowledge.

For reporting experimental development activities, the following are included:

The production of materials, devices, and systems or methods, including the design, construction, and testing of experimental prototypes.

Technology demonstrations, in cases where a system or component is being demonstrated at scale for the first time, and it is realistic to expect additional refinements to the design (feedback R&D) following the demonstration. However, not all activities that are identified as “technology demonstrations” are R&D.

However, experimental development excludes the following:

User demonstrations where the costs and benefits of a system are being validated for a specific use case. This includes low-rate initial production activities.

Pre-production development, which is defined as non-experimental work on a product or system before it goes into full production, including activities such as tooling and development of production facilities.

To better differentiate between the part of the federal R&D budget that supports science and key enabling technologies (including technologies for military and nondefense applications) and the part that primarily supports testing and evaluation (mostly of defense-related systems), NSF collects development dollars from DOD in two categories: advanced technology development and major systems development.

DOD uses RDT&E Budget Activities 1–7 to classify data into the survey categories. Within DOD’s research categories, basic research is classified as Budget Activity 1, and applied research is classified as Budget Activity 2. Within DOD’s development categories, advanced technology development is classified as Budget Activity 3. Starting in volume 66, major systems development is classified as Budget Activities 4–6 instead of Budget Activities 4–7 and includes advanced component development and prototypes, system development and demonstration, and RDT&E management support; data on Budget Activity 7, operational systems development, are collected separately. (Note: As a historical artifact from previous DOD budget authority terminology, funds for Budget Activity categories 1 through 7 are sometimes referred to as 6.1 through 6.7 monies.)
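Because the mapping above is fixed, it can be written down directly. The sketch below encodes the volume 66 and later classification as a simple lookup; it is an illustration of the mapping described in the text, not survey-processing code.

```python
# DOD RDT&E Budget Activities mapped to survey categories (volume 66+),
# as described above; a sketch, not official survey logic.
BUDGET_ACTIVITY_CATEGORY = {
    1: "basic research",
    2: "applied research",
    3: "advanced technology development",
    4: "major systems development",  # advanced component development and prototypes
    5: "major systems development",  # system development and demonstration
    6: "major systems development",  # RDT&E management support
    7: "operational systems development (collected separately)",
}

def classify(budget_activity: int) -> str:
    return BUDGET_ACTIVITY_CATEGORY[budget_activity]

print(classify(5))  # major systems development
```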

  • Demonstration includes amounts for activities that are part of R&D (i.e., that are intended to prove or to test whether a technology or method does in fact work). Demonstrations intended primarily to make information available about new technologies or methods are excluded.
  • R&D plant is defined as spending on both R&D facilities and major equipment as defined in OMB Circular A-11 Section 84 (Schedule C) and includes physical assets, such as land, structures, equipment, and intellectual property (e.g., software or applications) that have an estimated useful life of 2 years or more. Reporting for R&D plant includes the purchase, construction, manufacture, rehabilitation, or major improvement of physical assets regardless of whether the assets are owned or operated by the federal government, states, municipalities, or private individuals. The cost of the asset includes both its purchase price and all other costs incurred to bring it to a form and location suitable for use.
  • For reporting construction of R&D facilities and major movable R&D equipment, include the following:

Construction of facilities that are necessary for the execution of an R&D program, which may include land, major fixed equipment, and supporting infrastructure such as a sewer line or housing at a remote location. Many laboratory buildings include a mixture of R&D facilities and office space; the fraction of a building considered to be used for R&D may be calculated from the percentage of its square footage used for R&D (a worked sketch follows this list).

Acquisition, design, or production of major movable equipment, such as mass spectrometers, research vessels, DNA sequencers, and other movable major instrumentation for use in R&D activities.

Programs of $1 million or more that are devoted to the purchase or construction of major R&D equipment.

Exclude the following:

Construction of other non-R&D facilities.

Minor equipment purchases, such as personal computers, standard microscopes, and simple spectrometers (report these costs under total R&D, not R&D plant).

Obligations for foreign R&D plant are limited to federal funds for facilities that are located abroad and used in support of foreign R&D.
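As a worked illustration of the square-footage rule noted in the construction item above, the sketch below prorates a mixed-use building's cost by the share of floor space used for R&D; the figures and the helper name are hypothetical.

```python
def rd_plant_share(total_cost: float, rd_sq_ft: float, total_sq_ft: float) -> float:
    """Prorate a mixed-use building's cost by R&D square footage
    (hypothetical helper illustrating the rule above)."""
    return total_cost * (rd_sq_ft / total_sq_ft)

# Example: a $40M laboratory building in which 15,000 of 20,000 sq ft
# are used for R&D would report $30M as R&D plant.
print(rd_plant_share(40_000_000, 15_000, 20_000))  # 30000000.0
```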


The Chronicle

Students and faculty voice concerns, give input on Campus Culture Survey rollout and methodology


Some students and faculty members voiced concerns about the methodology behind, and interpretation of, the recent Duke Campus Culture Survey, which is intended to guide future equity initiatives on campus.

The survey, administered by the Duke administration every three years, seeks to understand how to foster a more “inclusive and equitable” campus community. Duke piloted an earlier version in 2021, which was deemed “the most comprehensive survey that Duke had ever executed on issues of diversity, equity and inclusion.”

Students first received the survey in a Jan. 29 email from President Vincent Price. The initial deadline to complete the survey was Feb. 16, but it was later extended first to March 1 and later to March 17.

Kimberly Hewitt, vice president of institutional equity and chief diversity officer, wrote in an email to The Chronicle that the extensions were intended to increase participation rates “so that more voices and perspectives would be heard” and thus limit bias. “High-level” results will be released in April, with further aggregated information due for release in the fall.

The 2024 survey addressed various aspects of campus life and students’ backgrounds, including demographic information, experiences with discrimination, Duke’s commitment to diversity, equal opportunity for historically disadvantaged groups and whether respondents felt they had a voice in the community.

While students and faculty members believed the survey covered critical topics, they raised doubts about how accurately the survey results would represent the views of the Duke community.

Student input

Students shared that they did not complete the survey either because they had other priorities or did not believe it would contribute to meaningful change on campus.

“I don’t care much about the Campus Culture Survey or really any survey Duke gives out, since it feels like [there are] hardly any results that arise over the responses,” wrote first-year Ahmad Choudhary in a message to The Chronicle. “[Their] purpose feels more like being used for research … than for actual change.”

First-year Haley Jansons similarly stated that the survey questions appeared too general and that she was unsure how students' answers would make “targeted and productive changes.”

“Hearing it would take 10-15 minutes made me even less likely to take it,” senior Shreya Joshi said. “I think that it is an incredibly important survey that [administration] puts out, I just felt the marketing wasn’t done in a way that would speak to students.”

Joshi also said that having users log in using NetIDs and multi-factor authentication made the process cumbersome for many students and may have discouraged some from filling it out.

Faculty perspectives

Sunshine Hillygus, professor of political science and director of the Duke Initiative on Survey Methodology, was involved in designing the survey. She noted that University administration asked for questions to remain consistent over time in order to allow for cross-year comparisons.

“As the saying goes, ‘you can’t measure change if you change the measure,’” Hillygus wrote in an email to The Chronicle. 

David Banks, professor of the practice of statistics, independently examined the survey. He cautioned that without knowing the specific purposes of the survey, it was challenging for him to evaluate its design and methodology. He said that it was not clear to him “what decision path would follow” from the Campus Culture Survey’s results.

However, Banks noted that the survey’s inclusion of demographic information may help reduce bias through comparison to existing demographic data for the full student body, while also adding that the 2024 survey could serve as a benchmark for future comparisons.

Banks qualified that although demographic metrics can help measure diversity, they are not the only applicable ones.

“[Duke students] are also all smart, all well-to-do and all healthy. And from that standpoint, they’re an incredibly homogeneous group,” Banks said.

Unlike some students, Banks did not believe that the survey was excessively long, but he acknowledged it would have been helpful to designate time in class for students to take the survey.


John Rose, instructor in the Kenan Institute of Ethics and associate director of the Civil Discourse Project, found a lack of emphasis on intellectual and ideological diversity in the survey.

“It's no secret that elite higher education doesn't have the robust culture of free expression that many of us would want,” Rose said. “… A portion of the public feels like their views aren't getting a fair hearing, or are adequately represented on campus. We could do better here.”

According to Rose, people who identify as conservative or more religiously observant are underrepresented at Duke and might be less likely to fill out the survey. 

“Given that the survey’s definition of ‘diversity’ included diversity of thought, it would have been interesting to include a question about Duke’s commitment to hiring and promotion of those who add intellectual diversity to the campus, particularly those with political views uncommon among the faculty,” Rose later wrote in an email to The Chronicle.

Banks agreed, recommending observational studies and small group classroom exercises to identify whether “conservative bias may be present.”

Hewitt wrote in her statement that the survey “includes questions about the experiences of community members based upon political thought” and that the University administration hoped to “learn more from the [survey’s] free response questions.”

Banks also warned about survey fatigue affecting results, adding that he had not filled out the survey himself.

“I get lots of surveys all the time, and over the years, I've just built up a resistance to them,” Banks said.



Computer Science > Computer Vision and Pattern Recognition

Title: A Survey on Long Video Generation: Challenges, Methods, and Prospects

Abstract: Video generation is a rapidly advancing research area, garnering significant attention due to its broad range of applications. One critical aspect of this field is the generation of long-duration videos, which presents unique challenges and opportunities. This paper presents the first survey of recent advancements in long video generation and summarizes them into two key paradigms: divide and conquer, and temporal autoregressive. We delve into the common models employed in each paradigm, including aspects of network design and conditioning techniques. Furthermore, we offer a comprehensive overview and classification of the datasets and evaluation metrics that are crucial for advancing long video generation research. Concluding with a summary of existing studies, we also discuss the emerging challenges and future directions in this dynamic field. We hope that this survey will serve as an essential reference for researchers and practitioners in the realm of long video generation.


