Conceptual Research vs. Empirical Research

What's the difference?

Conceptual research and empirical research are two distinct approaches to conducting research. Conceptual research focuses on exploring and developing theories, concepts, and ideas. It involves analyzing existing literature, theories, and concepts to gain a deeper understanding of a particular topic. Conceptual research is often used in the early stages of research to generate hypotheses and develop a theoretical framework. On the other hand, empirical research involves collecting and analyzing data to test hypotheses and answer research questions. It relies on observation, measurement, and experimentation to gather evidence and draw conclusions. Empirical research is more focused on obtaining concrete and measurable results, often through surveys, experiments, or observations. Both approaches are valuable in research, with conceptual research providing a foundation for empirical research and empirical research validating or refuting conceptual theories.

Further Detail

Introduction

Research is a fundamental aspect of any field of study, providing a systematic approach to acquiring knowledge and understanding. In the realm of research, two primary methodologies are commonly employed: conceptual research and empirical research. While both approaches aim to contribute to the body of knowledge, they differ significantly in their attributes, methodologies, and outcomes. This article aims to explore and compare the attributes of conceptual research and empirical research, shedding light on their unique characteristics and applications.

Conceptual Research

Conceptual research, also known as theoretical research, focuses on the exploration and development of theories, concepts, and ideas. It is primarily concerned with abstract and hypothetical constructs, aiming to enhance understanding and generate new insights. Conceptual research often involves a comprehensive review of existing literature, analyzing and synthesizing various theories and concepts to propose new frameworks or models.

One of the key attributes of conceptual research is its reliance on deductive reasoning. Researchers start with a set of existing theories or concepts and use logical reasoning to derive new hypotheses or frameworks. This deductive approach allows researchers to build upon existing knowledge and propose innovative ideas. Conceptual research is often exploratory in nature, seeking to expand the boundaries of knowledge and provide a foundation for further empirical investigations.

Conceptual research is particularly valuable in fields where empirical data may be limited or difficult to obtain. It allows researchers to explore complex phenomena, develop theoretical frameworks, and generate hypotheses that can later be tested through empirical research. By focusing on abstract concepts and theories, conceptual research provides a theoretical foundation for empirical investigations, guiding researchers in their quest for empirical evidence.

Furthermore, conceptual research plays a crucial role in the development of new disciplines or interdisciplinary fields. It helps establish a common language and theoretical framework, facilitating communication and collaboration among researchers from different backgrounds. By synthesizing existing knowledge and proposing new concepts, conceptual research lays the groundwork for empirical studies and contributes to the overall advancement of knowledge.

Empirical Research

Empirical research, in contrast to conceptual research, is concerned with the collection and analysis of observable data. It aims to test hypotheses, validate theories, and provide evidence-based conclusions. Empirical research relies on the systematic collection of data through various methods, such as surveys, experiments, observations, or interviews. The data collected is then analyzed using statistical or qualitative techniques to draw meaningful conclusions.

One of the primary attributes of empirical research is its inductive reasoning approach. Researchers start with specific observations or data and use them to develop general theories or conclusions. This inductive approach allows researchers to derive broader implications from specific instances, providing a basis for generalization. Empirical research is often hypothesis-driven, seeking to test and validate theories or hypotheses through the collection and analysis of data.

Empirical research is highly valued for its ability to provide concrete evidence and support or refute existing theories. It allows researchers to investigate real-world phenomena, understand cause-and-effect relationships, and make informed decisions based on empirical evidence. By relying on observable data, empirical research enhances the credibility and reliability of research findings, contributing to the overall body of knowledge in a field.

Moreover, empirical research is particularly useful in applied fields, where practical implications and real-world applications are of utmost importance. It allows researchers to evaluate the effectiveness of interventions, assess the impact of policies, or measure the outcomes of specific actions. Empirical research provides valuable insights that can inform decision-making processes, guide policy development, and drive evidence-based practices.

Comparing Conceptual Research and Empirical Research

While conceptual research and empirical research differ in their methodologies and approaches, they are both essential components of the research process. Conceptual research focuses on the development of theories and concepts, providing a theoretical foundation for empirical investigations. Empirical research, on the other hand, relies on the collection and analysis of observable data to test and validate theories.

Conceptual research is often exploratory and aims to expand the boundaries of knowledge. It is valuable in fields where empirical data may be limited or difficult to obtain. By synthesizing existing theories and proposing new frameworks, conceptual research provides a theoretical basis for empirical studies. It helps researchers develop hypotheses and guides their quest for empirical evidence.

Empirical research, on the other hand, is hypothesis-driven and seeks to provide concrete evidence and support or refute existing theories. It allows researchers to investigate real-world phenomena, understand cause-and-effect relationships, and make informed decisions based on empirical evidence. Empirical research is particularly useful in applied fields, where practical implications and real-world applications are of utmost importance.

Despite their differences, conceptual research and empirical research are not mutually exclusive. In fact, they often complement each other in the research process. Conceptual research provides the theoretical foundation and guidance for empirical investigations, while empirical research validates and refines existing theories or concepts. The iterative nature of research often involves a continuous cycle of conceptual and empirical research, with each informing and influencing the other.

Both conceptual research and empirical research contribute to the advancement of knowledge in their respective fields. Conceptual research expands theoretical frameworks, proposes new concepts, and lays the groundwork for empirical investigations. Empirical research, on the other hand, provides concrete evidence, validates theories, and informs practical applications. Together, they form a symbiotic relationship, driving progress and innovation in various disciplines.

Conceptual research and empirical research are two distinct methodologies employed in the pursuit of knowledge and understanding. While conceptual research focuses on the development of theories and concepts, empirical research relies on the collection and analysis of observable data. Both approaches have their unique attributes, methodologies, and applications.

Conceptual research plays a crucial role in expanding theoretical frameworks, proposing new concepts, and providing a foundation for empirical investigations. It is particularly valuable in fields where empirical data may be limited or difficult to obtain. On the other hand, empirical research provides concrete evidence, validates theories, and informs practical applications. It is highly valued in applied fields, where evidence-based decision-making is essential.

Despite their differences, conceptual research and empirical research are not mutually exclusive. They often work in tandem, with conceptual research guiding the development of hypotheses and theoretical frameworks, and empirical research validating and refining these theories through the collection and analysis of data. Together, they contribute to the overall advancement of knowledge and understanding in various disciplines.

QuestionPro

Empirical Research: Definition, Methods, Types and Examples

What is Empirical Research?

Content Index

  • Empirical research: Definition
  • Empirical research: Origin
  • Quantitative research methods
  • Qualitative research methods
  • Steps for conducting empirical research
  • Empirical research methodology cycle
  • Advantages of empirical research
  • Disadvantages of empirical research
  • Why is there a need for empirical research?

Empirical research is defined as any research in which the conclusions of the study are drawn strictly from concrete, and therefore "verifiable," empirical evidence.

This empirical evidence can be gathered using quantitative market research and qualitative market research methods.

For example: a study is conducted to find out whether listening to happy music in the workplace promotes creativity. An experiment is set up in which one set of subjects is exposed to happy music while working and another set listens to no music at all, and both groups are then observed. The results of such a study provide empirical evidence of whether happy music does or does not promote creativity.

You must have heard the quote "I will not believe it unless I see it." This attitude came from the ancient empiricists, a fundamental outlook that powered the emergence of science during the Renaissance and laid the foundation of modern science as we know it today. The word itself has its roots in Greek: it is derived from the Greek word empeirikos, which means "experienced."

In today’s world, the word empirical refers to the collection of data through observation, experience, or calibrated scientific instruments. All of these senses have one thing in common: a dependence on observation and experiment to collect data and test it in order to reach conclusions.

Types and methodologies of empirical research

Empirical research can be conducted and analysed using qualitative or quantitative methods.

  • Quantitative research: Quantitative research methods are used to gather information as numerical data, quantifying opinions, behaviors, or other defined variables. These methods are predetermined and more structured. Commonly used methods include surveys, longitudinal studies, and polls.
  • Qualitative research: Qualitative research methods are used to gather non-numerical data, uncovering the meanings, opinions, or underlying reasons given by subjects. These methods are unstructured or semi-structured. The sample size for such research is usually small, and the methods are conversational, providing deeper, more in-depth insight into the problem. Popular methods include focus groups, experiments, and interviews.

Data collected through these methods needs to be analysed, and empirical evidence can be analysed either quantitatively or qualitatively. Using this analysis, the researcher can answer empirical questions, which must be clearly defined and answerable with the findings obtained. The type of research design used varies depending on the field. Many researchers choose a mixed approach, combining quantitative and qualitative methods, to better answer questions that cannot be studied in a laboratory setting.

Quantitative research methods aid in analyzing the empirical evidence gathered. Using them, a researcher can determine whether a hypothesis is supported.

  • Survey research: Survey research generally involves a large audience in order to collect a large amount of data. It is a quantitative method with a predetermined set of closed questions that are easy to answer. Because of this simplicity, high response rates are achieved. It is one of the most commonly used methods for all kinds of research today.

Previously, surveys were conducted face to face, perhaps with a recorder. With advances in technology, more convenient mediums such as email and social media have emerged.

For example: depletion of energy resources is a growing concern, so there is a need for awareness about renewable energy. According to recent studies, fossil fuels still account for around 80% of energy consumption in the United States. Even though the use of green energy rises every year, certain factors keep the general population from opting for it. To understand why, a survey can be conducted to gather opinions about green energy and the factors that influence the choice to switch to renewable energy. Such a survey can help institutions or governing bodies design appropriate awareness campaigns and incentive schemes to push the use of greener energy.
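
As a toy sketch of how responses to one closed survey question might be tallied, the Python snippet below counts invented answers about barriers to switching to renewable energy (all data here is hypothetical, for illustration only):

```python
from collections import Counter

# Hypothetical closed-question responses:
# "What is the main factor stopping you from switching to renewable energy?"
responses = [
    "cost", "cost", "awareness", "cost", "reliability",
    "awareness", "cost", "reliability", "cost", "awareness",
]

counts = Counter(responses)
total = len(responses)
# Report each factor's share of all responses, most common first
for factor, n in counts.most_common():
    print(f"{factor}: {n}/{total} ({100 * n / total:.0f}%)")
```

A real survey analysis would of course involve far larger samples and weighting, but the tallying step looks much like this.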

  • Experimental research: In experimental research, an experiment is set up and a hypothesis is tested by creating a situation in which one of the variables is manipulated. This is also used to check cause and effect: the researcher manipulates the independent variable and observes what happens to the dependent variable when it is altered or removed. The usual process is to propose a hypothesis, run the experiment, analyze the findings, and report whether they support the theory.

For example: a product company is trying to find out why it cannot capture the market, so the organisation makes changes in each of its processes, such as manufacturing, marketing, sales, and operations. Through the experiment it learns that sales training directly impacts the market coverage of its product: when salespeople are trained well, the product achieves better coverage.

  • Correlational research: Correlational research is used to find the relationship between two sets of variables. Regression analysis is generally used to predict outcomes from such data. The correlation found can be positive, negative, or zero.

For example: the hypothesis that more highly educated individuals obtain higher-paying jobs. A correlational study would test whether education level and salary rise and fall together, with less education associated with lower-paying jobs.

  • Longitudinal study: A longitudinal study is used to understand the traits or behavior of a subject by testing the same subject repeatedly over a period of time. Data collected from such a method can be qualitative or quantitative in nature.

For example: a study of the benefits of exercise. The subjects are asked to exercise every day for a particular period of time, and results showing higher endurance, stamina, and muscle growth support the claim that exercise benefits the body.

  • Cross-sectional study: A cross-sectional study is an observational method in which a set of subjects is observed at a single point in time. The subjects are chosen so that they are similar in all variables except the one being researched. This type does not allow the researcher to establish a cause-and-effect relationship, since the subjects are not observed over a continuous period. It is used mainly in the healthcare sector and the retail industry.

For example: a medical study of the prevalence of undernutrition disorders in children of a given population. This involves looking at a wide range of parameters such as age, ethnicity, location, income, and social background. If a significant number of children from poor families show undernutrition disorders, the researcher can investigate further. Usually a cross-sectional study is followed by a longitudinal study to find the exact reason.

  • Causal-comparative research: This method is based on comparison. It is mainly used to find cause-and-effect relationships between two or more variables.

For example: a researcher measures the productivity of employees in a company that gives its employees breaks during work and compares it with the productivity of employees at a company that gives no breaks at all.
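
A minimal sketch of how that comparison might be analysed, using invented productivity scores and Welch's t statistic computed by hand with only the standard library:

```python
import math
from statistics import mean, stdev

# Hypothetical daily productivity scores for employees at the two companies
with_breaks = [52, 58, 55, 60, 57, 54, 59, 56]
without_breaks = [48, 50, 47, 52, 49, 51, 46, 50]

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    var_a, var_b = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / math.sqrt(var_a / len(a) + var_b / len(b))

t = welch_t(with_breaks, without_breaks)
print(f"mean difference = {mean(with_breaks) - mean(without_breaks):.2f}, t = {t:.2f}")
# A large |t| suggests the observed difference is unlikely to be chance alone.
```

In practice one would also look up a p-value for the statistic (for example with scipy), but the core of the comparison is the standardized difference in group means shown here.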

Some research questions must be analysed qualitatively, because quantitative methods do not apply. In many cases in-depth information is needed, or the researcher needs to observe the behavior of a target audience, so the results take the form of descriptive analysis. Qualitative results are descriptive rather than predictive. They enable the researcher to build or support theories for future quantitative research. In such situations, qualitative methods are used to derive conclusions that support the theory or hypothesis being studied.

  • Case study: The case study method is used to gather more information by carefully analyzing existing cases. It is often used in business research, or to gather empirical evidence for investigative purposes. It is a way to investigate a problem within its real-life context through existing cases. The researcher must verify that the parameters and variables of the existing case match those of the case being investigated. Using the findings from the case study, conclusions can be drawn about the topic being studied.

For example: a report describing the solution a company provided to a client, the challenges faced during initiation and deployment, the findings of the case, and the solutions offered. Most companies use such case studies as empirical evidence to promote their services and win more business.

  • Observational method: The observational method is a process of observing and gathering data from a target. Since it is typically qualitative, it is time-consuming and very personal. Observational research can be considered part of ethnographic research, which is also used to gather empirical evidence. It is usually qualitative, but in some cases it can be quantitative, depending on what is being studied.

For example: a study set up to observe a particular animal in the Amazon rainforest. Such research usually takes a long time, as observation must continue for a set period to identify patterns in the subject's behavior. Another widely used example today is observing shoppers in a mall to understand consumer buying behavior.

  • One-on-one interview: This method is purely qualitative and among the most widely used, because it enables a researcher to get precise, meaningful data if the right questions are asked. It is a conversational method in which in-depth data can be gathered depending on where the conversation leads.

For example: a one-on-one interview with the finance minister to gather data on the country's financial policies and their implications for the public.

  • Focus groups: Focus groups are used when a researcher wants answers to why, what, and how questions. A small group is generally chosen, and it is not always necessary to interact with the group in person; a moderator is generally needed when the group is addressed in person. Product companies use this method widely to collect data about their brands and products.

For example: a mobile phone manufacturer wanting feedback on the dimensions of a model that is yet to be launched. Such studies help the company meet customer demand and position the model appropriately in the market.

  • Text analysis: The text analysis method is relatively new compared to the other types. It is used to analyse social life by examining the images and words individuals use. With social media playing a major part in everyone's life today, this method lets a researcher follow patterns relevant to the study.

For example: many companies ask customers for detailed feedback on how satisfied they are with the customer support team. Analysing that text enables the company to make appropriate decisions to improve its support team.
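
As a rough illustration of text analysis, the snippet below counts word frequencies across a few invented feedback snippets after dropping common stopwords (data and stopword list are hypothetical, chosen for this toy example):

```python
import re
from collections import Counter

# Hypothetical customer feedback about a support team
feedback = [
    "The support team was quick and helpful",
    "Helpful answers, but the wait was long",
    "Long wait times, support should be faster",
]

# Words too common to be informative in this toy example
stopwords = {"the", "was", "and", "but", "be", "should"}

# Lowercase, tokenize on letters only, filter stopwords, then count
words = [w for w in re.findall(r"[a-z]+", " ".join(feedback).lower())
         if w not in stopwords]
common = Counter(words).most_common(3)
print(common)  # the most frequent meaningful words across all responses
```

Real text-analysis pipelines add stemming, phrase detection, or sentiment scoring on top, but frequency counting of this kind is usually the first step.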

Sometimes a combination of methods is needed, especially when a question cannot be answered by a single method or when the researcher needs a complete understanding of a complex subject.

Since empirical research is based on observation and captured experience, it is important to plan the steps of the experiment and how it will be analysed. Planning enables the researcher to resolve problems or obstacles that may occur during the experiment.

Step #1: Define the purpose of the research

In this step the researcher answers questions such as: What exactly do I want to find out? What is the problem statement? Are there any issues with the availability of knowledge, data, time, or resources? Will this research be more beneficial than it will cost?

Before going ahead, the researcher must clearly define the purpose of the research and set up a plan for carrying out the subsequent tasks.

Step #2: Supporting theories and relevant literature

The researcher needs to find out whether there are theories that can be linked to the research problem, and whether any theory can help support the findings. Reviewing all relevant literature shows whether others have researched the topic before and what problems they faced. The researcher also sets up assumptions and checks whether there is any history relating to the research problem.

Step #3: Creation of Hypothesis and measurement

Before beginning the actual research, the researcher needs a working hypothesis, an educated guess about the probable result. The researcher has to set up the variables, decide the environment for the research, and determine how the variables relate to each other.

The researcher will also need to define the units of measurement and the tolerable degree of error, and determine whether the chosen measurement will be accepted by others.

Step #4: Methodology, research design and data collection

In this step, the researcher has to define a strategy for conducting the research and set up experiments to collect the data that will allow the hypothesis to be tested. The researcher decides whether an experimental or non-experimental method is needed; the type of research design will vary depending on the field in which the research is being conducted. Last but not least, the researcher has to identify the parameters that will affect the validity of the research design. Data collection is done by choosing samples appropriate to the research question, using one of the many available sampling techniques. Once data collection is complete, the researcher has empirical data that needs to be analysed.

Step #5: Data Analysis and result

Data analysis can be done in two ways, qualitatively and quantitatively. The researcher needs to determine which of the two is required, or whether a combination of both is needed. Depending on the analysis of the data, the researcher will know whether the hypothesis is supported or rejected. Analyzing the data is the most important step in evaluating the hypothesis.

Step #6: Conclusion

A report is then prepared with the findings of the research. The researcher can cite the theories and literature that support the research, and make suggestions or recommendations for further work on the topic.

Empirical research methodology cycle

A.D. de Groot, a famous Dutch psychologist and chess expert, conducted some of the most notable experiments using chess in the 1940s. During his work he formulated a cycle that is now widely used to conduct empirical research. It consists of five phases, each as important as the next. The empirical cycle captures the process of forming hypotheses about how certain subjects work or behave and then testing those hypotheses against empirical data in a systematic and rigorous way. It can be said to characterize the deductive approach to science. The empirical cycle is as follows.

  • Observation: In this phase an idea is sparked for proposing a hypothesis, and empirical data is gathered through observation. For example: a particular species of flower blooms in a different color only during a specific season.
  • Induction: Inductive reasoning is then carried out to form a general conclusion from the data gathered through observation. For example: having observed that the species of flower blooms in a different color during a specific season, a researcher may ask, "Does the temperature in that season cause the color change in the flower?" He can assume that it does, but this is mere conjecture, so an experiment is set up to support the hypothesis: he tags a set of flowers kept at a different temperature and observes whether they still change color.
  • Deduction: In this phase the researcher deduces a conclusion from the experiment, based on logic and rationality, to arrive at specific, unbiased results. For example: if the tagged flowers kept at a different temperature do not change color, it can be concluded that temperature plays a role in changing the color of the bloom.
  • Testing: In this phase the researcher returns to empirical methods to put the hypothesis to the test. He now needs to make sense of the data, using a statistical analysis plan to determine the relationship between temperature and bloom color. If most flowers bloom a different color when exposed to a certain temperature and do not otherwise, he has found support for the hypothesis. Note that this is not proof, only support for the hypothesis.
  • Evaluation: This phase is often forgotten but is important for continuing to gain knowledge. The researcher presents the data collected, the supporting argument, and the conclusion, states the limitations of the experiment and the hypothesis, and suggests how others might pick the work up and continue more in-depth research in the future.
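
For the testing phase, one common statistical analysis is a chi-square test of independence. The sketch below computes the chi-square statistic by hand for an invented 2x2 table of bloom color versus temperature (the counts are made up purely for illustration):

```python
# Hypothetical contingency table from the flower experiment:
#                      changed color   unchanged
# warm-season temp.         18             2
# cooler temp.               3            17
table = [[18, 2], [3, 17]]

def chi_square(obs):
    """Pearson chi-square statistic for a 2x2 contingency table."""
    row_totals = [sum(r) for r in obs]
    col_totals = [sum(c) for c in zip(*obs)]
    grand = sum(row_totals)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            # Expected count under the hypothesis of no association
            expected = row_totals[i] * col_totals[j] / grand
            stat += (obs[i][j] - expected) ** 2 / expected
    return stat

stat = chi_square(table)
print(f"chi-square = {stat:.2f}")
# For 1 degree of freedom, values above about 3.84 are significant at the 5% level,
# supporting (not proving) an association between temperature and bloom color.
```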

There is a reason why empirical research is one of the most widely used methods: it has several advantages. A few of them follow.

  • It is used to authenticate traditional research through new experiments and observations.
  • This methodology makes the research being conducted more competent and authentic.
  • It enables a researcher to understand dynamic changes as they happen and adjust strategy accordingly.
  • The level of control in such research is high, so the researcher can control multiple variables.
  • It plays a vital role in increasing internal validity.

Even though empirical research makes a study more competent and authentic, it does have a few disadvantages. A few of them follow.

  • Such research requires patience, as it can be very time-consuming. The researcher has to collect data from multiple sources, and quite a few parameters are involved, which leads to lengthy research.
  • Most of the time the researcher needs to conduct the research in different locations or environments, which can make it expensive.
  • There are rules governing how experiments can be performed, so permissions are needed. It is often very difficult to get certain permissions to carry out the different methods of this research.
  • Data collection can sometimes be a problem, as it must be gathered from a variety of sources through different methods.

Empirical research is important in today's world because most people believe only in what they can see, hear, or experience. It is used to validate hypotheses, increase human knowledge, and keep advancing various fields.

For example: pharmaceutical companies use empirical research to test a specific drug on controlled or random groups in order to study its effects and their causes. In this way they test the theories they have proposed for the drug. Such research is very important, as it can sometimes lead to a cure for a disease that has existed for many years. It is useful in science and in many other fields such as history, the social sciences, and business.

With today's advancements, empirical research has become critical, and the norm in many fields, for supporting hypotheses and gaining knowledge. The methods mentioned above are very useful for carrying out such research, but new methods will keep emerging as investigative questions change.

Other categories.

  • Academic Research
  • Artificial Intelligence
  • Assessments
  • Brand Awareness
  • Case Studies
  • Communities
  • Consumer Insights
  • Customer effort score
  • Customer Engagement
  • Customer Experience
  • Customer Loyalty
  • Customer Research
  • Customer Satisfaction
  • Employee Benefits
  • Employee Engagement
  • Employee Retention
  • Friday Five
  • General Data Protection Regulation
  • Insights Hub
  • Life@QuestionPro
  • Market Research
  • Mobile diaries
  • Mobile Surveys
  • New Features
  • Online Communities
  • Question Types
  • Questionnaire
  • QuestionPro Products
  • Release Notes
  • Research Tools and Apps
  • Revenue at Risk
  • Survey Templates
  • Training Tips
  • Uncategorized
  • Video Learning Series
  • What’s Coming Up
  • Workforce Intelligence

Penn State University Libraries

Empirical research in the social sciences and education.


Introduction: What is Empirical Research?

Empirical research is based on observed and measured phenomena and derives knowledge from actual experience rather than from theory or belief. 

How do you know if a study is empirical? Read the subheadings within the article, book, or report and look for a description of the research "methodology."  Ask yourself: Could I recreate this study and test these results?

Key characteristics to look for:

  • Specific research questions to be answered
  • Definition of the population, behavior, or phenomena being studied
  • Description of the process used to study this population or phenomena, including selection criteria, controls, and testing instruments (such as surveys)

Another hint: some scholarly journals use a specific layout, called the "IMRaD" format, to communicate empirical research findings. Such articles typically have 4 components:

  • Introduction: sometimes called "literature review" -- what is currently known about the topic -- usually includes a theoretical framework and/or discussion of previous studies
  • Methodology: sometimes called "research design" -- how to recreate the study -- usually describes the population, research process, and analytical tools used in the present study
  • Results: sometimes called "findings" -- what was learned through the study -- usually appears as statistical data or as substantial quotations from research participants
  • Discussion: sometimes called "conclusion" or "implications" -- why the study is important -- usually describes how the research results influence professional practices or future studies

Reading and Evaluating Scholarly Materials

Reading research can be a challenge. However, the tutorials and videos below can help. They explain what scholarly articles look like, how to read them, and how to evaluate them:

  • CRAAP Checklist: A frequently used checklist that helps you examine the currency, relevance, authority, accuracy, and purpose of an information source.
  • IF I APPLY: A newer model of evaluating sources which encourages you to think about your own biases as a reader, as well as concerns about the item you are reading.
  • Credo Video: How to Read Scholarly Materials (4 min.)
  • Credo Tutorial: How to Read Scholarly Materials
  • Credo Tutorial: Evaluating Information
  • Credo Video: Evaluating Statistics (4 min.)

What is Empirical Research? Definition, Methods, Examples

Appinio Research · 09.02.2024 · 35min read


Ever wondered how we gather the facts, unveil hidden truths, and make informed decisions in a world filled with questions? Empirical research holds the key.

In this guide, we'll delve deep into the art and science of empirical research, unraveling its methods, mysteries, and manifold applications. From defining the core principles to mastering data analysis and reporting findings, we're here to equip you with the knowledge and tools to navigate the empirical landscape.

What is Empirical Research?

Empirical research is the cornerstone of scientific inquiry, providing a systematic and structured approach to investigating the world around us. It is the process of gathering and analyzing empirical or observable data to test hypotheses, answer research questions, or gain insights into various phenomena. This form of research relies on evidence derived from direct observation or experimentation, allowing researchers to draw conclusions based on real-world data rather than purely theoretical or speculative reasoning.

Characteristics of Empirical Research

Empirical research is characterized by several key features:

  • Observation and Measurement: It involves the systematic observation or measurement of variables, events, or behaviors.
  • Data Collection: Researchers collect data through various methods, such as surveys, experiments, observations, or interviews.
  • Testable Hypotheses: Empirical research often starts with testable hypotheses that are evaluated using collected data.
  • Quantitative or Qualitative Data: Data can be quantitative (numerical) or qualitative (non-numerical), depending on the research design.
  • Statistical Analysis: Quantitative data often undergo statistical analysis to determine patterns, relationships, or significance.
  • Objectivity and Replicability: Empirical research strives for objectivity, minimizing researcher bias. It should be replicable, allowing other researchers to conduct the same study to verify results.
  • Conclusions and Generalizations: Empirical research generates findings based on data and aims to make generalizations about larger populations or phenomena.

Importance of Empirical Research

Empirical research plays a pivotal role in advancing knowledge across various disciplines. Its importance extends to academia, industry, and society as a whole. Here are several reasons why empirical research is essential:

  • Evidence-Based Knowledge: Empirical research provides a solid foundation of evidence-based knowledge. It enables us to test hypotheses, confirm or refute theories, and build a robust understanding of the world.
  • Scientific Progress: In the scientific community, empirical research fuels progress by expanding the boundaries of existing knowledge. It contributes to the development of theories and the formulation of new research questions.
  • Problem Solving: Empirical research is instrumental in addressing real-world problems and challenges. It offers insights and data-driven solutions to complex issues in fields like healthcare, economics, and environmental science.
  • Informed Decision-Making: In policymaking, business, and healthcare, empirical research informs decision-makers by providing data-driven insights. It guides strategies, investments, and policies for optimal outcomes.
  • Quality Assurance: Empirical research is essential for quality assurance and validation in various industries, including pharmaceuticals, manufacturing, and technology. It ensures that products and processes meet established standards.
  • Continuous Improvement: Businesses and organizations use empirical research to evaluate performance, customer satisfaction, and product effectiveness. This data-driven approach fosters continuous improvement and innovation.
  • Human Advancement: Empirical research in fields like medicine and psychology contributes to the betterment of human health and well-being. It leads to medical breakthroughs, improved therapies, and enhanced psychological interventions.
  • Critical Thinking and Problem Solving: Engaging in empirical research fosters critical thinking skills, problem-solving abilities, and a deep appreciation for evidence-based decision-making.

Empirical research empowers us to explore, understand, and improve the world around us. It forms the bedrock of scientific inquiry and drives progress in countless domains, shaping our understanding of both the natural and social sciences.

How to Conduct Empirical Research?

So, you've decided to dive into the world of empirical research. Let's begin by exploring the crucial steps involved in getting started with your research project.

1. Select a Research Topic

Selecting the right research topic is the cornerstone of a successful empirical study. It's essential to choose a topic that not only piques your interest but also aligns with your research goals and objectives. Here's how to go about it:

  • Identify Your Interests: Start by reflecting on your passions and interests. What topics fascinate you the most? Your enthusiasm will be your driving force throughout the research process.
  • Brainstorm Ideas: Engage in brainstorming sessions to generate potential research topics. Consider the questions you've always wanted to answer or the issues that intrigue you.
  • Relevance and Significance: Assess the relevance and significance of your chosen topic. Does it contribute to existing knowledge? Is it a pressing issue in your field of study or the broader community?
  • Feasibility: Evaluate the feasibility of your research topic. Do you have access to the necessary resources, data, and participants (if applicable)?

2. Formulate Research Questions

Once you've narrowed down your research topic, the next step is to formulate clear and precise research questions. These questions will guide your entire research process and shape your study's direction. To create effective research questions:

  • Specificity: Ensure that your research questions are specific and focused. Vague or overly broad questions can lead to inconclusive results.
  • Relevance: Your research questions should directly relate to your chosen topic. They should address gaps in knowledge or contribute to solving a particular problem.
  • Testability: Ensure that your questions are testable through empirical methods. You should be able to gather data and analyze it to answer these questions.
  • Avoid Bias: Craft your questions in a way that avoids leading or biased language. Maintain neutrality to uphold the integrity of your research.

3. Review Existing Literature

Before you embark on your empirical research journey, it's essential to immerse yourself in the existing body of literature related to your chosen topic. This step, often referred to as a literature review, serves several purposes:

  • Contextualization: Understand the historical context and current state of research in your field. What have previous studies found, and what questions remain unanswered?
  • Identifying Gaps: Identify gaps or areas where existing research falls short. These gaps will help you formulate meaningful research questions and hypotheses.
  • Theory Development: If your study is theoretical, consider how existing theories apply to your topic. If it's empirical, understand how previous studies have approached data collection and analysis.
  • Methodological Insights: Learn from the methodologies employed in previous research. What methods were successful, and what challenges did researchers face?

4. Define Variables

Variables are fundamental components of empirical research. They are the factors or characteristics that can change or be manipulated during your study. Properly defining and categorizing variables is crucial for the clarity and validity of your research. Here's what you need to know:

  • Independent Variables: These are the variables that you, as the researcher, manipulate or control. They are the "cause" in cause-and-effect relationships.
  • Dependent Variables: Dependent variables are the outcomes or responses that you measure or observe. They are the "effect" influenced by changes in independent variables.
  • Operational Definitions: To ensure consistency and clarity, provide operational definitions for your variables. Specify how you will measure or manipulate each variable.
  • Control Variables: In some studies, controlling for other variables that may influence your dependent variable is essential. These are known as control variables.
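To make these terms concrete, here is a minimal Python sketch of a hypothetical study in which drug dose is the independent variable and reaction time is the dependent variable. The numbers, the linear effect, and the function name are all invented for illustration:

```python
# Hypothetical toy study: dose is the independent variable,
# reaction time is the dependent variable.
def reaction_time(dose_mg, baseline_ms=300.0, effect_per_mg=-2.0):
    """Operational definition of the outcome: each mg is assumed
    to lower reaction time by 2 ms from a 300 ms baseline."""
    return baseline_ms + effect_per_mg * dose_mg

doses = [0, 25, 50]                           # levels the researcher manipulates
observed = [reaction_time(d) for d in doses]  # outcomes the researcher measures
```

Here the operational definition is explicit: "reaction time" means the value returned by this measurement model, and the dose levels are the only factor the researcher varies.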

Understanding these foundational aspects of empirical research will set a solid foundation for the rest of your journey. Now that you've grasped the essentials of getting started, let's delve deeper into the intricacies of research design.

Empirical Research Design

Now that you've selected your research topic, formulated research questions, and defined your variables, it's time to delve into the heart of your empirical research journey – research design. This pivotal step determines how you will collect data and what methods you'll employ to answer your research questions. Let's explore the various facets of research design in detail.

Types of Empirical Research

Empirical research can take on several forms, each with its own unique approach and methodologies. Understanding the different types of empirical research will help you choose the most suitable design for your study. Here are some common types:

  • Experimental Research: In this type, researchers manipulate one or more independent variables to observe their impact on dependent variables. It's highly controlled and often conducted in a laboratory setting.
  • Observational Research: Observational research involves the systematic observation of subjects or phenomena without intervention. Researchers are passive observers, documenting behaviors, events, or patterns.
  • Survey Research: Surveys are used to collect data through structured questionnaires or interviews. This method is efficient for gathering information from a large number of participants.
  • Case Study Research: Case studies focus on in-depth exploration of one or a few cases. Researchers gather detailed information through various sources such as interviews, documents, and observations.
  • Qualitative Research: Qualitative research aims to understand behaviors, experiences, and opinions in depth. It often involves open-ended questions, interviews, and thematic analysis.
  • Quantitative Research: Quantitative research collects numerical data and relies on statistical analysis to draw conclusions. It involves structured questionnaires, experiments, and surveys.

Your choice of research type should align with your research questions and objectives. Experimental research, for example, is ideal for testing cause-and-effect relationships, while qualitative research is more suitable for exploring complex phenomena.

Experimental Design

Experimental research is a systematic approach to studying causal relationships. It's characterized by the manipulation of one or more independent variables while controlling for other factors. Here are some key aspects of experimental design:

  • Control and Experimental Groups: Participants are randomly assigned to either a control group or an experimental group. The independent variable is manipulated for the experimental group but not for the control group.
  • Randomization: Randomization is crucial to eliminate bias in group assignment. It ensures that each participant has an equal chance of being in either group.
  • Hypothesis Testing: Experimental research often involves hypothesis testing. Researchers formulate hypotheses about the expected effects of the independent variable and use statistical analysis to test these hypotheses.
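The randomization step can be sketched in a few lines of Python. The participant IDs and the fixed seed below are invented for the example; the point is that a shuffle gives every participant an equal chance of landing in either group:

```python
import random

def assign_groups(participants, seed=42):
    """Randomly split participants into control and experimental groups."""
    rng = random.Random(seed)   # fixed seed makes the assignment reproducible
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

control, experimental = assign_groups(range(20))
```

Because the split comes from a uniform shuffle rather than any participant characteristic, systematic differences between the groups are due to chance alone.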

Observational Design

Observational research entails careful and systematic observation of subjects or phenomena. It's advantageous when you want to understand natural behaviors or events. Key aspects of observational design include:

  • Participant Observation: Researchers immerse themselves in the environment they are studying. They become part of the group being observed, allowing for a deep understanding of behaviors.
  • Non-Participant Observation: In non-participant observation, researchers remain separate from the subjects. They observe and document behaviors without direct involvement.
  • Data Collection Methods: Observational research can involve various data collection methods, such as field notes, video recordings, photographs, or coding of observed behaviors.

Survey Design

Surveys are a popular choice for collecting data from a large number of participants. Effective survey design is essential to ensure the validity and reliability of your data. Consider the following:

  • Questionnaire Design: Create clear and concise questions that are easy for participants to understand. Avoid leading or biased questions.
  • Sampling Methods: Decide on the appropriate sampling method for your study, whether it's random, stratified, or convenience sampling.
  • Data Collection Tools: Choose the right tools for data collection, whether it's paper surveys, online questionnaires, or face-to-face interviews.

Case Study Design

Case studies are an in-depth exploration of one or a few cases to gain a deep understanding of a particular phenomenon. Key aspects of case study design include:

  • Single Case vs. Multiple Case Studies: Decide whether you'll focus on a single case or multiple cases. Single case studies are intensive and allow for detailed examination, while multiple case studies provide comparative insights.
  • Data Collection Methods: Gather data through interviews, observations, document analysis, or a combination of these methods.

Qualitative vs. Quantitative Research

In empirical research, you'll often encounter the distinction between qualitative and quantitative research. Here's a closer look at these two approaches:

  • Qualitative Research: Qualitative research seeks an in-depth understanding of human behavior, experiences, and perspectives. It involves open-ended questions, interviews, and the analysis of textual or narrative data. Qualitative research is exploratory and often used when the research question is complex and requires a nuanced understanding.
  • Quantitative Research: Quantitative research collects numerical data and employs statistical analysis to draw conclusions. It involves structured questionnaires, experiments, and surveys. Quantitative research is ideal for testing hypotheses and establishing cause-and-effect relationships.

Understanding the various research design options is crucial in determining the most appropriate approach for your study. Your choice should align with your research questions, objectives, and the nature of the phenomenon you're investigating.

Data Collection for Empirical Research

Now that you've established your research design, it's time to roll up your sleeves and collect the data that will fuel your empirical research. Effective data collection is essential for obtaining accurate and reliable results.

Sampling Methods

Sampling methods are critical in empirical research, as they determine the subset of individuals or elements from your target population that you will study. Here are some standard sampling methods:

  • Random Sampling: Random sampling ensures that every member of the population has an equal chance of being selected. It minimizes bias and is often used in quantitative research.
  • Stratified Sampling: Stratified sampling involves dividing the population into subgroups or strata based on specific characteristics (e.g., age, gender, location). Samples are then randomly selected from each stratum, ensuring representation of all subgroups.
  • Convenience Sampling: Convenience sampling involves selecting participants who are readily available or easily accessible. While it's convenient, it may introduce bias and limit the generalizability of results.
  • Snowball Sampling: Snowball sampling is useful when studying hard-to-reach or hidden populations. One participant leads you to another, creating a "snowball" effect. This method is common in qualitative research.
  • Purposive Sampling: In purposive sampling, researchers deliberately select participants who meet specific criteria relevant to their research questions. It's often used in qualitative studies to gather in-depth information.

The choice of sampling method depends on the nature of your research, available resources, and the degree of precision required. It's crucial to carefully consider your sampling strategy to ensure that your sample accurately represents your target population.
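The first two methods can be sketched in plain Python. The population, the gender strata, and the sample sizes below are invented purely for illustration:

```python
import random
from collections import defaultdict

def simple_random_sample(population, n, seed=0):
    """Every member has an equal chance of being selected."""
    return random.Random(seed).sample(population, n)

def stratified_sample(population, key, per_stratum, seed=0):
    """Divide the population into strata, then sample randomly within each."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for item in population:
        strata[key(item)].append(item)
    sample = []
    for members in strata.values():
        sample.extend(rng.sample(members, min(per_stratum, len(members))))
    return sample

# Invented population: 100 people, half coded "F" and half "M"
people = [{"id": i, "gender": "F" if i % 2 else "M"} for i in range(100)]
srs = simple_random_sample(people, 10)
strat = stratified_sample(people, key=lambda p: p["gender"], per_stratum=5)
```

Note how the stratified version guarantees five participants from each subgroup, whereas the simple random sample could, by chance, over-represent one of them.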

Data Collection Instruments

Data collection instruments are the tools you use to gather information from your participants or sources. These instruments should be designed to capture the data you need accurately. Here are some popular data collection instruments:

  • Questionnaires: Questionnaires consist of structured questions with predefined response options. When designing questionnaires, consider the clarity of questions, the order of questions, and the response format (e.g., Likert scale, multiple-choice).
  • Interviews: Interviews involve direct communication between the researcher and participants. They can be structured (with predetermined questions) or unstructured (open-ended). Effective interviews require active listening and probing for deeper insights.
  • Observations: Observations entail systematically and objectively recording behaviors, events, or phenomena. Researchers must establish clear criteria for what to observe, how to record observations, and when to observe.
  • Surveys: Surveys are a common data collection instrument for quantitative research. They can be administered through various means, including online surveys, paper surveys, and telephone surveys.
  • Documents and Archives: In some cases, data may be collected from existing documents, records, or archives. Ensure that the sources are reliable, relevant, and properly documented.


Data Collection Procedures

Data collection procedures outline the step-by-step process for gathering data. These procedures should be meticulously planned and executed to maintain the integrity of your research.

  • Training: If you have a research team, ensure that they are trained in data collection methods and protocols. Consistency in data collection is crucial.
  • Pilot Testing: Before launching your data collection, conduct a pilot test with a small group to identify any potential problems with your instruments or procedures. Make necessary adjustments based on feedback.
  • Data Recording: Establish a systematic method for recording data. This may include timestamps, codes, or identifiers for each data point.
  • Data Security: Safeguard the confidentiality and security of collected data. Ensure that only authorized individuals have access to the data.
  • Data Storage: Properly organize and store your data in a secure location, whether in physical or digital form. Back up data to prevent loss.

Ethical Considerations

Ethical considerations are paramount in empirical research, as they ensure the well-being and rights of participants are protected.

  • Informed Consent: Obtain informed consent from participants, providing clear information about the research purpose, procedures, risks, and their right to withdraw at any time.
  • Privacy and Confidentiality: Protect the privacy and confidentiality of participants. Ensure that data is anonymized and sensitive information is kept confidential.
  • Beneficence: Ensure that your research benefits participants and society while minimizing harm. Consider the potential risks and benefits of your study.
  • Honesty and Integrity: Conduct research with honesty and integrity. Report findings accurately and transparently, even if they are not what you expected.
  • Respect for Participants: Treat participants with respect, dignity, and sensitivity to cultural differences. Avoid any form of coercion or manipulation.
  • Institutional Review Board (IRB): If required, seek approval from an IRB or ethics committee before conducting your research, particularly when working with human participants.

Adhering to ethical guidelines is not only essential for the ethical conduct of research but also crucial for the credibility and validity of your study. Ethical research practices build trust between researchers and participants and contribute to the advancement of knowledge with integrity.

With a solid understanding of data collection, including sampling methods, instruments, procedures, and ethical considerations, you are now well-equipped to gather the data needed to answer your research questions.

Empirical Research Data Analysis

Now comes the exciting phase of data analysis, where the raw data you've diligently collected starts to yield insights and answers to your research questions. We will explore the various aspects of data analysis, from preparing your data to drawing meaningful conclusions through statistics and visualization.

Data Preparation

Data preparation is the crucial first step in data analysis. It involves cleaning, organizing, and transforming your raw data into a format that is ready for analysis. Effective data preparation ensures the accuracy and reliability of your results.

  • Data Cleaning: Identify and rectify errors, missing values, and inconsistencies in your dataset. This may involve correcting typos, removing outliers, and imputing missing data.
  • Data Coding: Assign numerical values or codes to categorical variables to make them suitable for statistical analysis. For example, converting "Yes" and "No" to 1 and 0.
  • Data Transformation: Transform variables as needed to meet the assumptions of the statistical tests you plan to use. Common transformations include logarithmic or square root transformations.
  • Data Integration: If your data comes from multiple sources, integrate it into a unified dataset, ensuring that variables match and align.
  • Data Documentation: Maintain clear documentation of all data preparation steps, as well as the rationale behind each decision. This transparency is essential for replicability.

Effective data preparation lays the foundation for accurate and meaningful analysis. It allows you to trust the results that will follow in the subsequent stages.
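Cleaning and coding might look like the following Python sketch. The records, the Yes/No coding, and the mean imputation strategy are all invented for illustration:

```python
import statistics

# Invented raw survey records with typical messiness
raw = [
    {"age": "34", "smoker": "Yes"},
    {"age": "",   "smoker": "No"},    # missing age
    {"age": "41", "smoker": "no "},   # inconsistent casing and whitespace
]

def clean_record(rec):
    """Parse age (None if missing) and code smoker Yes/No as 1/0."""
    age = int(rec["age"]) if rec["age"].strip() else None
    smoker = 1 if rec["smoker"].strip().lower() == "yes" else 0
    return {"age": age, "smoker": smoker}

cleaned = [clean_record(r) for r in raw]

# Impute missing ages with the mean of the observed ages
known = [r["age"] for r in cleaned if r["age"] is not None]
for r in cleaned:
    if r["age"] is None:
        r["age"] = statistics.mean(known)
```

Each decision here (how missing values are imputed, how categories are coded) is exactly the kind of step that should be recorded in your data documentation.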

Descriptive Statistics

Descriptive statistics help you summarize and make sense of your data by providing a clear overview of its key characteristics. These statistics are essential for understanding the central tendencies, variability, and distribution of your variables. Descriptive statistics include:

  • Measures of Central Tendency: These include the mean (average), median (middle value), and mode (most frequent value). They help you understand the typical or central value of your data.
  • Measures of Dispersion: Measures like the range, variance, and standard deviation provide insights into the spread or variability of your data points.
  • Frequency Distributions: Creating frequency distributions or histograms allows you to visualize the distribution of your data across different values or categories.

Descriptive statistics provide the initial insights needed to understand your data's basic characteristics, which can inform further analysis.
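These summaries can be computed directly with Python's standard statistics module; the scores below are invented example data:

```python
import statistics

scores = [72, 85, 85, 90, 64, 78, 85, 69]   # hypothetical test scores

central = {
    "mean": statistics.mean(scores),      # average value
    "median": statistics.median(scores),  # middle value
    "mode": statistics.mode(scores),      # most frequent value
}
dispersion = {
    "range": max(scores) - min(scores),
    "stdev": statistics.stdev(scores),    # sample standard deviation
}
```

For this sample the mean (78.5) and median (81.5) already disagree, hinting at a skewed distribution, which is exactly the kind of signal descriptive statistics are meant to surface.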

Inferential Statistics

Inferential statistics take your analysis to the next level by allowing you to make inferences or predictions about a larger population based on your sample data. These methods help you test hypotheses and draw meaningful conclusions. Key concepts in inferential statistics include:

  • Hypothesis Testing: Hypothesis tests (e.g., t-tests, chi-squared tests) help you determine whether observed differences or associations in your data are statistically significant or occurred by chance.
  • Confidence Intervals: Confidence intervals provide a range within which population parameters (e.g., population mean) are likely to fall based on your sample data.
  • Regression Analysis: Regression models (linear, logistic, etc.) help you explore relationships between variables and make predictions.
  • Analysis of Variance (ANOVA): ANOVA tests are used to compare means between multiple groups, allowing you to assess whether differences are statistically significant.

Inferential statistics are powerful tools for drawing conclusions from your data and assessing the generalizability of your findings to the broader population.
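As one small example, a 95% confidence interval for a sample mean can be sketched as follows. The measurements are invented, and the normal-approximation value z = 1.96 is used instead of a t critical value to keep the sketch simple (for a sample this small, a t-based interval would be slightly wider):

```python
import math
import statistics

sample = [5.1, 4.9, 5.6, 5.3, 4.8, 5.4, 5.0, 5.2]   # hypothetical measurements

mean = statistics.mean(sample)
sem = statistics.stdev(sample) / math.sqrt(len(sample))  # standard error of the mean
z = 1.96                                                 # ~95% under a normal approximation
ci_low, ci_high = mean - z * sem, mean + z * sem
```

The interval (ci_low, ci_high) expresses the inference: based on this sample, the population mean plausibly lies within this range.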

Qualitative Data Analysis

Qualitative data analysis is employed when working with non-numerical data, such as text, interviews, or open-ended survey responses. It focuses on understanding the underlying themes, patterns, and meanings within qualitative data. Qualitative analysis techniques include:

  • Thematic Analysis: Identifying and analyzing recurring themes or patterns within textual data.
  • Content Analysis: Categorizing and coding qualitative data to extract meaningful insights.
  • Grounded Theory: Developing theories or frameworks based on emergent themes from the data.
  • Narrative Analysis: Examining the structure and content of narratives to uncover meaning.

Qualitative data analysis provides a rich and nuanced understanding of complex phenomena and human experiences.
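The counting step in thematic or content analysis can be sketched in a few lines of Python. The coded responses below are invented; the interpretive work of assigning themes to each response is done by the researcher, and the code only tallies the codes afterwards:

```python
from collections import Counter

# Hypothetical interview responses, already coded with themes by the researcher
coded_responses = [
    {"respondent": 1, "themes": ["cost", "trust"]},
    {"respondent": 2, "themes": ["trust"]},
    {"respondent": 3, "themes": ["cost", "usability"]},
]

# Tally how often each theme appears across all responses
theme_counts = Counter(t for r in coded_responses for t in r["themes"])
```

Such tallies help identify which themes recur most often, but the qualitative insight still comes from reading the underlying responses, not from the counts alone.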

Data Visualization

Data visualization is the art of representing data graphically to make complex information more understandable and accessible. Effective data visualization can reveal patterns, trends, and outliers in your data. Common types of data visualization include:

  • Bar Charts and Histograms: Used to display the distribution of categorical or discrete data.
  • Line Charts: Ideal for showing trends and changes in data over time.
  • Scatter Plots: Visualize relationships and correlations between two variables.
  • Pie Charts: Display the composition of a whole in terms of its parts.
  • Heatmaps: Depict patterns and relationships in multidimensional data through color-coding.
  • Box Plots: Provide a summary of the data distribution, including outliers.
  • Interactive Dashboards: Create dynamic visualizations that allow users to explore data interactively.

Data visualization not only enhances your understanding of the data but also serves as a powerful communication tool to convey your findings to others.
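In practice you would reach for a plotting library such as matplotlib, but the idea behind a bar chart of a frequency distribution can be sketched with nothing but the standard library (the values are invented):

```python
from collections import Counter

values = [1, 2, 2, 3, 3, 3, 4]        # hypothetical observations
counts = Counter(values)

# One text "bar" per distinct value, with length proportional to its frequency
for value in sorted(counts):
    print(f"{value}: {'#' * counts[value]}")
```

Even this crude text chart makes the mode (the value 3) immediately visible, which is the whole point of visualization: turning numbers into patterns the eye can catch.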

As you embark on the data analysis phase of your empirical research, remember that the specific methods and techniques you choose will depend on your research questions, data type, and objectives. Effective data analysis transforms raw data into valuable insights, bringing you closer to the answers you seek.

How to Report Empirical Research Results?

At this stage, you get to share your empirical research findings with the world. Effective reporting and presentation of your results are crucial for communicating your research's impact and insights.

1. Write the Research Paper

Writing a research paper is the culmination of your empirical research journey. It's where you synthesize your findings, provide context, and contribute to the body of knowledge in your field.

  • Title and Abstract: Craft a clear and concise title that reflects your research's essence. The abstract should provide a brief summary of your research objectives, methods, findings, and implications.
  • Introduction: In the introduction, introduce your research topic, state your research questions or hypotheses, and explain the significance of your study. Provide context by discussing relevant literature.
  • Methods: Describe your research design, data collection methods, and sampling procedures. Be precise and transparent, allowing readers to understand how you conducted your study.
  • Results: Present your findings in a clear and organized manner. Use tables, graphs, and statistical analyses to support your results. Avoid interpreting your findings in this section; focus on the presentation of raw data.
  • Discussion: Interpret your findings and discuss their implications. Relate your results to your research questions and the existing literature. Address any limitations of your study and suggest avenues for future research.
  • Conclusion: Summarize the key points of your research and its significance. Restate your main findings and their implications.
  • References: Cite all sources used in your research following a specific citation style (e.g., APA, MLA, Chicago). Ensure accuracy and consistency in your citations.
  • Appendices: Include any supplementary material, such as questionnaires, data coding sheets, or additional analyses, in the appendices.

Writing a research paper is a skill that improves with practice. Ensure clarity, coherence, and conciseness in your writing to make your research accessible to a broader audience.

2. Create Visuals and Tables

Visuals and tables are powerful tools for presenting complex data in an accessible and understandable manner.

  • Clarity : Ensure that your visuals and tables are clear and easy to interpret. Use descriptive titles and labels.
  • Consistency : Maintain consistency in formatting, such as font size and style, across all visuals and tables.
  • Appropriateness : Choose the most suitable visual representation for your data. Bar charts, line graphs, and scatter plots work well for different types of data.
  • Simplicity : Avoid clutter and unnecessary details. Focus on conveying the main points.
  • Accessibility : Make sure your visuals and tables are accessible to a broad audience, including those with visual impairments.
  • Captions : Include informative captions that explain the significance of each visual or table.

Compelling visuals and tables enhance the reader's understanding of your research and can be the key to conveying complex information efficiently.

3. Interpret Findings

Interpreting your findings is where you bridge the gap between data and meaning. It's your opportunity to provide context, discuss implications, and offer insights. When interpreting your findings:

  • Relate to Research Questions : Discuss how your findings directly address your research questions or hypotheses.
  • Compare with Literature : Analyze how your results align with or deviate from previous research in your field. What insights can you draw from these comparisons?
  • Discuss Limitations : Be transparent about the limitations of your study. Address any constraints, biases, or potential sources of error.
  • Practical Implications : Explore the real-world implications of your findings. How can they be applied or inform decision-making?
  • Future Research Directions : Suggest areas for future research based on the gaps or unanswered questions that emerged from your study.

Interpreting findings goes beyond simply presenting data; it's about weaving a narrative that helps readers grasp the significance of your research in the broader context.

With your research paper written, structured, and enriched with visuals, and your findings expertly interpreted, you are now prepared to communicate your research effectively. Sharing your insights and contributing to the body of knowledge in your field is a significant accomplishment in empirical research.

Examples of Empirical Research

To solidify your understanding of empirical research, let's delve into some real-world examples across different fields. These examples will illustrate how empirical research is applied to gather data, analyze findings, and draw conclusions.

Social Sciences

In the realm of social sciences, consider a sociological study exploring the impact of socioeconomic status on educational attainment. Researchers gather data from a diverse group of individuals, including their family backgrounds, income levels, and academic achievements.

Through statistical analysis, they can identify correlations and trends, revealing whether individuals from lower socioeconomic backgrounds are less likely to attain higher levels of education. This empirical research helps shed light on societal inequalities and informs policymakers on potential interventions to address disparities in educational access.
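As a hedged illustration of the kind of correlational analysis described above, the following Python sketch computes a Pearson correlation coefficient on simulated income and education data. All values are fictional; a real study would use collected survey or administrative data:

```python
# Illustrative sketch of a correlational analysis on simulated data.
# The relationship is built into the simulation, so this only shows
# the mechanics, not a real finding.
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulate household income (in $1,000s) and years of education, with
# education loosely increasing with income plus random noise.
income = rng.normal(60, 20, size=n).clip(min=10)
education = 10 + 0.05 * income + rng.normal(0, 2, size=n)

# Pearson correlation coefficient between the two variables
r = np.corrcoef(income, education)[0, 1]
print(f"Pearson r = {r:.2f}")  # a positive r suggests an association
```

Correlation alone does not establish causation; researchers in this area would typically follow up with regression models that control for confounding variables.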

Environmental Science

Environmental scientists often employ empirical research to assess the effects of environmental changes. For instance, researchers studying the impact of climate change on wildlife might collect data on animal populations, weather patterns, and habitat conditions over an extended period.

By analyzing this empirical data, they can identify correlations between climate fluctuations and changes in wildlife behavior, migration patterns, or population sizes. This empirical research is crucial for understanding the ecological consequences of climate change and informing conservation efforts.

Business and Economics

In the business world, empirical research is essential for making data-driven decisions. Consider a market research study conducted by a business seeking to launch a new product. They collect data through surveys, focus groups, and consumer behavior analysis.

By examining this empirical data, the company can gauge consumer preferences, demand, and potential market size. Empirical research in business helps guide product development, pricing strategies, and marketing campaigns, increasing the likelihood of a successful product launch.

Psychology

Psychological studies frequently rely on empirical research to understand human behavior and cognition. For instance, a psychologist interested in examining the impact of stress on memory might design an experiment. Participants are exposed to stress-inducing situations, and their memory performance is assessed through various tasks.

By analyzing the data collected, the psychologist can determine whether stress has a significant effect on memory recall. This empirical research contributes to our understanding of the complex interplay between psychological factors and cognitive processes.
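One common way to test such a group difference is an independent-samples t-test. The sketch below is a simplified illustration on simulated recall scores (all numbers are invented), not the procedure of any particular published study:

```python
# Hedged sketch: comparing memory scores of a stressed group vs. a
# control group with an independent-samples t-test on simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated recall scores (items remembered out of 20) for each group
control = rng.normal(14, 3, size=40)
stressed = rng.normal(11, 3, size=40)

t_stat, p_value = stats.ttest_ind(stressed, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value (conventionally < 0.05) would indicate a statistically
# significant difference between the groups in this simulated sample.
```

Note that `scipy.stats.ttest_ind` assumes equal group variances by default; passing `equal_var=False` runs Welch's t-test when that assumption is doubtful.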

These examples highlight the versatility and applicability of empirical research across diverse fields. Whether in medicine, social sciences, environmental science, business, or psychology, empirical research serves as a fundamental tool for gaining insights, testing hypotheses, and driving advancements in knowledge and practice.

Conclusion for Empirical Research

Empirical research is a powerful tool for gaining insights, testing hypotheses, and making informed decisions. By following the steps outlined in this guide, you've learned how to select research topics, collect data, analyze findings, and effectively communicate your research to the world. Remember, empirical research is a journey of discovery, and each step you take brings you closer to a deeper understanding of the world around you. Whether you're a scientist, a student, or someone curious about the process, the principles of empirical research empower you to explore, learn, and contribute to the ever-expanding realm of knowledge.

How to Collect Data for Empirical Research?

Introducing Appinio, the real-time market research platform revolutionizing how companies gather consumer insights for their empirical research endeavors. With Appinio, you can conduct your own market research in minutes, gaining valuable data to fuel your data-driven decisions.

Appinio is more than just a market research platform; it's a catalyst for transforming the way you approach empirical research, making it exciting, intuitive, and seamlessly integrated into your decision-making process.

Here's why Appinio is the go-to solution for empirical research:

  • From Questions to Insights in Minutes : With Appinio's streamlined process, you can go from formulating your research questions to obtaining actionable insights in a matter of minutes, saving you time and effort.
  • Intuitive Platform for Everyone : No need for a PhD in research; Appinio's platform is designed to be intuitive and user-friendly, ensuring that anyone can navigate and utilize it effectively.
  • Rapid Response Times : With an average field time of under 23 minutes for 1,000 respondents, Appinio delivers rapid results, allowing you to gather data swiftly and efficiently.
  • Global Reach with Targeted Precision : With access to over 90 countries and the ability to define target groups based on 1200+ characteristics, Appinio empowers you to reach your desired audience with precision and ease.


Empirical Research Methods

  • Empirical Research may also be called Primary Research, Scientific Research, or Field Research. People who conduct empirical research are typically called investigators, but they may also be called knowledge workers, scientists, empiricists, or researchers.

Empirical research is a research method that investigators use to test knowledge claims and develop new knowledge. Empirical methods focus on observation and experimentation.

Investigators observe and conduct experiments in systematic ways that are largely determined by their rhetorical contexts. Different workplace contexts and academic disciplines (i.e., discourse communities, especially methodological communities) have developed unique tools and techniques for gathering and interpreting information.

Empirical research is informed by

  • empiricism, a philosophy that assumes knowledge is grounded in what you can see, hear, or experience;
  • positivism, a philosophy that assumes the universe is an orderly place: a nonrandom order of the universe exists, and events have causes and occur in regular patterns that can be determined through observation.

Investigators and discourse communities use empirical research methods

  • to create new knowledge (e.g., Basic Research);
  • to solve a problem at work, school, or in personal life (e.g., Applied Research);
  • to conduct replication studies, i.e., to repeat a study with the same methods (or with slight variations, such as changes in subjects and experimenters).

Textual research plays an important role in empirical research. Empiricists engage in some textual research in order to understand scholarly conversations around the topics that interest them, and they consult archives to learn methods for conducting empirical studies. However, there are important distinctions between how scholars weigh claims in textual research and how scientists weigh claims in empirical studies.

Unlike investigators who use primarily textual methods, empiricists do not consider “claims of authority, intuition, imaginative conjecture, and abstract, theoretical, or systematic reasoning as sources of reliable belief” (Duignan, Fumerton, Quinton, & Quinton, 2020). Instead, they rely on systematic observation and experimentation, although most contemporary empiricists would acknowledge that any act of observation or experimentation is a somewhat subjective process.

There are three major types of empirical research:

  • Quantitative Methods (e.g., numbers, mathematical equations);
  • Qualitative Methods (e.g., words, images, observations);
  • Mixed Methods (a mixture of Quantitative Methods and Qualitative Methods).

Empirical research aims to be as objective as possible by being RAD:

  • Replicable (sufficient detail about the research protocol is provided so the study can be repeated);
  • Aggregable (the results and implications of the study can be extended in future research);
  • Data-supported (quantitative and/or qualitative evidence is provided to substantiate claims, results, interpretations, and implications).

Key Terms: positivism; research methods; research methodologies.

As humans, we learn about the world from experience, observation, and experimentation. Even as babies we conduct informal research: what happens when we cry and complain? If we do x, does it cause y? Over time, we invariably learn from experience that our actions have consequences, and we sharpen our ability to identify common patterns (e.g., when we write a lot, we are more creative). As we move through our lives, we come to trust our experiences and our senses, and our procedural knowledge and declarative knowledge evolve.

In work and school settings, systematic efforts at observation and experimentation are called empirical or scientific research.

Investigators conduct empirical research when the answers to research questions are not readily available from informal research or textual research, when the occasion is kairotic, or when personal or financial gains are on the table. That said, most empirical research is informed by textual research: investigators review the conclusions, implications, and methods of previously published studies, analyzing the scholarly conversations around a topic, prior to engaging in empirical studies.

Informally, as humans, we engage routinely in the intellectual strategies that inform empirical research:

  • we talk with others and listen to their stories to better understand their perceptions and experiences,
  • we make observations,
  • we survey friends, peers, and coworkers,
  • we cross cultures and learn about difference, and
  • we make predictions about future events based on our experiences and observations.

These same intellectual strategies we use to reason from our observations and experiences also undergird empirical research methods. For example,

  • a psychologist might develop a case study based on interviews
  • an anthropologist or sociologist might engage in participant observation to write an ethnographic study
  • a political science researcher might survey voter trends
  • a stock trader may project a stock bounce based on a 30-day moving average.

The main difference between informal and formal empirical research is intentionality: formal empirical research presupposes a Research Plan, which is sometimes referred to as a Research Protocol. When investigators want their results to be taken seriously, they have to employ the research methods a methodological community has established for vetting knowledge claims.

Different academic communities (e.g., Natural Sciences, Social Science, Humanities, Arts) have unique ideas about how to conduct empirical research. Professionals in the workplace — e.g., geologists, anthropologists, biologists — use entirely different tools to gather and interpret data. Being credentialed in a particular discipline or profession is tied to mastery of unique methodological practices.

Across disciplines, however, empiricists share a number of operating assumptions. Empiricists

  • develop a research plan prior to engaging in research;
  • seek approval from Ethics Committees when human subjects or animal testing is involved;
  • explain how subjects/research participants are chosen and given opportunities to opt in or opt out of studies.

Empiricists are meticulous about how they collect data because their research must be verifiable if they want other empiricists to take their work seriously. In other words, their research plan needs to be so explicit that subsequent researchers can conduct the same study.

Empirical Research is a Rhetorical Practice

Empiricists develop their research question and their research methods by considering their audience and purpose. Prior to initiating a study, researchers conduct secondary research, especially Searching as Strategic Exploration, to identify the current knowledge about a topic. As a consequence of their deep understanding of pertinent scholarly conversations on the topic, empiricists identify gaps in knowledge.

Duignan, B., Fumerton, R., Quinton, A. M., & Quinton, B. (2020). Empiricism. Encyclopedia Britannica. https://www.britannica.com/topic/empiricism

Haswell, R. (2005). NCTE/CCCC’s recent war on scholarship. Written Communication, 22(2), 198-223.



University of Memphis Libraries, Research Guides

Empirical Research: Defining, Identifying, & Finding

The Introduction Section

The Introduction exists to explain the research project and to justify why this research has been done. The introduction will discuss: 

  • The topic covered by the research,
  • Previous research done on this topic,
  • What is still unknown about the topic that this research will answer, and
  • Why someone would want to know that answer.

What Criteria to Look For

The "Introduction" is where you are most likely to find the  research question . 

Finding the Criteria

The research question may not be clearly labeled in the Introduction. Often, the author(s) may rephrase their question as a research statement or a hypothesis . Some research may have more than one research question or a research question with multiple parts. 

Words That May Signify the Research Question

These are some common word choices authors make when they are describing their research question as a research statement or hypothesis. 

  • Hypothesize, hypothesized, or hypothesis
  • Investigation, investigate(s), or investigated
  • Predict(s) or predicted
  • Evaluate(s) or evaluated
  • This research, this study, the current study, or this paper
  • The aim of this study or this research

You might also look for common question words (who, what, when, where, why, how) in a statement to see if it might be a rephrased research question. 

What Headings to Look Under

  • "Introduction" : Usually the general heading for the section. Since this is the first heading after the title and abstract, some authors leave it unlabeled. This is the most likely place to find the research question if there is not a separate heading for it.
  • "The Current Study" (or similar) : An explicit discussion of what is being investigated in the research. It should contain some form of the research question.
  • Literature review headings : Often a separate heading, or headings labeled by the topics being reviewed, where the authors discuss previous research done on the topic. You are less likely to find the research question clearly stated here, because the authors may be talking about their topic more broadly than their current research question.

Examples:

  • A paper with a single "Introduction" heading that includes the phrase "this paper" and the question word "how." You could turn the phrase "how people perceive inequality in outcomes and risk at the collective level" into the question "How do people perceive inequality in outcomes and risk at the collective level?"
  • A paper with a labeled "Introduction" heading along with headings for the topics of the literature review. It includes the phrase "this research investigates" and the question word "how." You could turn the phrase "how LGBTQ college students negotiate the hookup scene on college campuses" into the question "How do LGBTQ college students negotiate the hookup scene on college campuses?"
  • A paper whose Introduction section begins unlabeled, includes headings for different parts of the literature review, and ends with a heading called "The Current Study" on page 573 for discussing the research questions. It includes the words and phrases "aim of this study," "hypothesized," and "predicted." You could turn the phrase "examine the effects of racial discrimination on anxiety symptom distress" into the question "What are the effects of racial discrimination on anxiety symptom distress?" and the phrase "explore the moderating role of internalized racism in the link between racial discrimination and changes in anxiety symptom distress" into the question "How does internalized racism moderate the link between racial discrimination and changes in anxiety symptom distress?"
  • Last Updated: Apr 2, 2024 11:25 AM
  • URL: https://libguides.memphis.edu/empirical-research

Arrendale Library, Piedmont University

Empirical Research: Quantitative & Qualitative

Introduction: What is Empirical Research?

Empirical research  is based on phenomena that can be observed and measured. Empirical research derives knowledge from actual experience rather than from theory or belief. 

Key characteristics of empirical research include:

  • Specific research questions to be answered;
  • Definitions of the population, behavior, or phenomena being studied;
  • Description of the methodology or research design used to study this population or phenomena, including selection criteria, controls, and testing instruments (such as surveys);
  • Two basic research processes or methods in empirical research: quantitative methods and qualitative methods (see the rest of the guide for more about these methods).

(Based on the original from the Connelly Library of La Salle University)


Empirical Research: Qualitative vs. Quantitative

Learn about common types of journal articles that use APA Style, including empirical studies; meta-analyses; literature reviews; and replication, theoretical, and methodological articles.


Quantitative Research

A quantitative research project is characterized by having a population about which the researcher wants to draw conclusions, but it is not possible to collect data on the entire population.

  • For an observational study, it is necessary to select a proper, statistical random sample and to use methods of statistical inference to draw conclusions about the population. 
  • For an experimental study, it is necessary to have a random assignment of subjects to experimental and control groups in order to use methods of statistical inference.

Statistical methods are used in all three stages of a quantitative research project.

For observational studies, the data are collected using statistical sampling theory. Then, the sample data are analyzed using descriptive statistical analysis. Finally, generalizations are made from the sample data to the entire population using statistical inference.

For experimental studies, the subjects are allocated to experimental and control group using randomizing methods. Then, the experimental data are analyzed using descriptive statistical analysis. Finally, just as for observational data, generalizations are made to a larger population.

Iversen, G. (2004). Quantitative research . In M. Lewis-Beck, A. Bryman, & T. Liao (Eds.), Encyclopedia of social science research methods . (pp. 897-898). Thousand Oaks, CA: SAGE Publications, Inc.
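The three stages described above for an observational study can be illustrated with a short Python sketch on entirely synthetic data. The 1.96 multiplier assumes a normal approximation for the sampling distribution of the mean:

```python
# A minimal sketch of the three stages of an observational quantitative
# study: (1) random sampling, (2) descriptive statistics, (3) inference
# back to the population via a 95% confidence interval.
import numpy as np

rng = np.random.default_rng(7)

# Stage 1: draw a simple random sample from a simulated population
population = rng.normal(loc=50, scale=10, size=100_000)
sample = rng.choice(population, size=200, replace=False)

# Stage 2: descriptive statistics for the sample
mean = sample.mean()
sd = sample.std(ddof=1)

# Stage 3: generalize to the population with a 95% confidence interval
se = sd / np.sqrt(len(sample))
ci_low, ci_high = mean - 1.96 * se, mean + 1.96 * se
print(f"sample mean = {mean:.1f}, 95% CI = ({ci_low:.1f}, {ci_high:.1f})")
```

In practice, the interval says that if the sampling procedure were repeated many times, about 95% of the intervals constructed this way would contain the true population mean.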

Qualitative Research

What makes a work deserving of the label qualitative research is the demonstrable effort to produce richly and relevantly detailed descriptions and particularized interpretations of people and the social, linguistic, material, and other practices and events that shape and are shaped by them.

Qualitative research typically includes, but is not limited to, discerning the perspectives of these people, or what is often referred to as the actor’s point of view. Although both philosophically and methodologically a highly diverse entity, qualitative research is marked by certain defining imperatives that include its case (as opposed to its variable) orientation, sensitivity to cultural and historical context, and reflexivity. 

In its many guises, qualitative research is a form of empirical inquiry that typically entails some form of purposive sampling for information-rich cases; in-depth interviews and open-ended interviews, lengthy participant/field observations, and/or document or artifact study; and techniques for analysis and interpretation of data that move beyond the data generated and their surface appearances. 

Sandelowski, M. (2004).  Qualitative research . In M. Lewis-Beck, A. Bryman, & T. Liao (Eds.),  Encyclopedia of social science research methods . (pp. 893-894). Thousand Oaks, CA: SAGE Publications, Inc.

  • Last Updated: Mar 22, 2024 10:47 AM
  • URL: https://library.piedmont.edu/empirical-research

Empirical Research: What is Empirical Research?


Introduction

Empirical research is based on observed and measured phenomena and derives knowledge from actual experience rather than from theory or belief. 

How do you know if a study is empirical? Read the subheadings within the article, book, or report and look for a description of the research "methodology." Ask yourself: Could I recreate this study and test these results?

Key characteristics to look for:

  • Specific research questions to be answered
  • Definition of the population, behavior, or phenomena being studied
  • Description of the process used to study this population or phenomena, including selection criteria, controls, and testing instruments (such as surveys)

Another hint: some scholarly journals use a specific layout, called the "IMRaD" format (Introduction – Method – Results – and – Discussion), to communicate empirical research findings. Such articles typically have 4 components:

  • Introduction : sometimes called "literature review" -- what is currently known about the topic -- usually includes a theoretical framework and/or discussion of previous studies
  • Methodology : sometimes called "research design" -- how to recreate the study -- usually describes the population, research process, and analytical tools
  • Results : sometimes called "findings" -- what was learned through the study -- usually appears as statistical data or as substantial quotations from research participants
  • Discussion : sometimes called "conclusion" or "implications" -- why the study is important -- usually describes how the research results influence professional practices or future studies


Empirical research  is published in books and in  scholarly, peer-reviewed journals .

Make sure to select the  peer-review box  within each database!

  • Last Updated: Nov 21, 2022 8:55 AM
  • URL: https://libguides.lahc.edu/empirical


What is Empirical Research?

Empirical research refers to a way of gaining knowledge using direct or indirect observation or experience. An empirical research article will therefore report research based on observations, experiments, surveys, or other collected data. Empirical research can be either qualitative or quantitative in nature.

Empirical research, especially in psychology, will most likely be published in academic/scholarly journals. Fortunately, as a Murray State University student, you have access to a variety of scholarly databases, such as PsycINFO and Academic Search Complete. When looking for empirical research articles, check whether the author specifically references data, surveys, assessments, or any other methods for observations/experiments. Typically, such articles will include the survey instrument and any graphical representations of the data.

Example of data from the article: Burton CZ, Ryan KA, Kamali M, et al. Psychosis in bipolar disorder: Does it represent a more “severe” illness? Bipolar Disord. 2018;20:18–26. https://doi.org/10.1111/bdi.12527

Empirical articles will usually contain the following sections (headers might differ slightly): Introduction, Literature Review, Methodology, Results, Discussion, Conclusion, and References.

Searching for Empirical Articles

  • PsycINFO: Provides indexing and abstracts for over one million articles in 1,700 journals from over 50 countries. This database, provided by the American Psychological Association, also includes abstracts for dissertations, books, and book chapters ranging in date from 1887 to the present.

Finding Empirical Research Articles Using PsycINFO

PsycINFO, one of the most useful databases for psychology research, makes it quite simple to find empirical research. 

First, log in to the database and find the "Select a Field" box.

Search box with "Select a Field" option

Next, select "Methodology" from the dropdown menu. 


After selecting the methodology filter for the search box, type in "Empirical Study." This will limit all results to articles classified as such.

Below is an example of what your search may look like, combining the methodology filter with a search term.

Sample search: "Bipolar Disorder" AND Methodology: "Empirical Study"

Empirical research articles will also be identified as such on the item record page.


Finding Empirical Research Articles when there is no Methodology Filter in the Database

PsycINFO is unique in that the empirical-study label is included in an item's metadata. Most databases, such as Academic Search Complete or SocINDEX, do not have this feature. In other databases, you may try using terms such as "study" as a keyword in an unfiltered search box. There may be some trial and error involved, but below is a list of suggested terms you can add to your search to find empirical research articles:

  • empirical research
  • empirical study
  • comparative study
  • quantitative study
  • qualitative study
  • longitudinal study
  • observation
  • participants
  • participant group

Example search query: social media AND teenagers AND studies
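To try several of the suggested terms at once, the boolean string can be assembled programmatically. A minimal sketch in Python (the quoting and AND/OR conventions here are common to many databases, but check each database's own syntax):

```python
def build_query(topic_terms, study_terms):
    """Combine topic keywords with AND and study-type keywords with OR.

    Multi-word terms are quoted so databases treat them as phrases.
    """
    def quote(term):
        return f'"{term}"' if " " in term else term

    topic = " AND ".join(quote(t) for t in topic_terms)
    study = " OR ".join(quote(t) for t in study_terms)
    return f"{topic} AND ({study})"

query = build_query(
    ["social media", "teenagers"],
    ["empirical study", "longitudinal study", "observation"],
)
print(query)
# "social media" AND teenagers AND ("empirical study" OR "longitudinal study" OR observation)
```

Grouping the study-type terms with OR inside parentheses keeps the topic mandatory while letting any one study-related term match.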

  • Last Updated: Mar 26, 2024 1:53 PM
  • URL: https://lib.murraystate.edu/psychology

Psychology Research Guide

What is empirical research, how to find it, and what is peer review.


Adapted from PennState University Libraries, Empirical Research in the Social Sciences and Education

Empirical research is published in books and in scholarly, peer-reviewed journals. Keep in mind that most library databases do not offer a straightforward way to identify empirical research.

Finding Empirical Research in PsycINFO

  • PsycInfo: Use the "Advanced Search." Type your keywords into the search boxes, scroll down the page to "Methodology," and choose "Empirical Study." Choose other limits, such as publication date, if needed, then click on the "Search" button.

Finding Empirical Research in PubMed

  • PubMed: One technique is to limit your search results after you perform a search: type in your keywords and click on the "Search" button, then, to the left of your results, under "Article Types," check off the types of studies that interest you. Another alternative is to construct a more sophisticated search: from PubMed's main screen, click on the "Advanced" link underneath the search box. On the Advanced Search Builder screen, type your keywords into the search boxes and change one of the empty boxes from "All Fields" to "Publication Type." To the right of Publication Type, click on "Show Index List" and choose a methodology that interests you; you can choose more than one by holding down "Ctrl" or "⌘" on your keyboard as you click on each methodology. Finally, click on the "Search" button.
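The Advanced Search Builder steps translate into PubMed's field-tag syntax, where a publication type is written as `Term[Publication Type]`. As an illustration only, this sketch builds a query URL for NCBI's E-utilities `esearch` endpoint without sending any request (the keywords and publication type are made-up examples):

```python
from urllib.parse import urlencode

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_search_url(keywords, publication_type):
    """Build an esearch URL that filters results by publication type."""
    term = " AND ".join(keywords) + f" AND {publication_type}[Publication Type]"
    return f"{ESEARCH}?{urlencode({'db': 'pubmed', 'term': term})}"

url = pubmed_search_url(["bipolar disorder", "psychosis"], "Clinical Trial")
print(url)
```

Pasting the generated `term` value into PubMed's ordinary search box should behave the same as selecting the publication type through the Builder.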

Finding Empirical Research in Library OneSearch & Google Scholar

These tools do not have a method for locating empirical research. Using "empirical" as a keyword will find some studies, but miss many others. Consider using one of the more specialized databases above.

  • Library OneSearch
  • Google Scholar

Peer review refers to the process where authors who are doing research submit a paper they have written to a journal. The journal editor then sends the article to the authors' peers (researchers and scholars) in the same discipline for review. The reviewers determine whether the article should be published based on the quality of the research, including the validity of the data, the conclusions the authors draw, and the originality of the research. This process is important because it validates the research and gives it a sort of "seal of approval" from others in the research community.

Identifying Whether a Journal Is Peer-Reviewed

One of the best places to find out if a journal is peer-reviewed is to go to the journal website.

Most publishers have a website for a journal that tells you about the journal, how authors can submit an article, and what the process is for getting published.

If you find the journal website, look for a link that says "information for authors," "instructions for authors," "submitting an article," or something similar.

Finding Peer-Reviewed Articles

Start in a library database. Look for a peer-review or scholarly filter.

  • PsycInfo: The most comprehensive database for psychology. Filters allow you to limit by methodology. Articles without full text can be requested via interlibrary loan.
  • Library OneSearch: Searches almost all the library's resources. Look for a peer-review filter on the left.
  • Last Updated: Apr 4, 2024 4:11 PM
  • URL: https://libguides.kzoo.edu/psyc


Difference Between Conceptual and Empirical Research

The main difference between conceptual and empirical research is that conceptual research involves abstract ideas and concepts, whereas empirical research involves research based on observation, experiments and verifiable evidence.

Conceptual research and empirical research are two ways of doing scientific research. These are two opposing types of research frameworks since conceptual research doesn’t involve any experiments and empirical research does.

Key Areas Covered

1. What is Conceptual Research – Definition, Characteristics, Uses
2. What is Empirical Research – Definition, Characteristics, Uses
3. What is the Difference Between Conceptual and Empirical Research – Comparison of Key Differences

Key Terms: Conceptual Research, Empirical Research, Research

Difference Between Conceptual and Empirical Research - Comparison Summary

What is Conceptual Research?

Conceptual research is a type of research that is generally related to abstract ideas or concepts. It doesn't involve any practical experimentation; instead, it typically involves observing and analyzing information already available on a given topic. Philosophical research is generally a good example of conceptual research.

Conceptual research can be used to solve real-world problems. Conceptual frameworks, which are analytical tools researchers use in their studies, are based on conceptual research. Furthermore, these frameworks help to make conceptual distinctions and organize ideas researchers need for research purposes.

Figure 1: Conceptual Framework

In simple words, a conceptual framework is the researcher’s synthesis of the literature (previous research studies) on how to explain a particular phenomenon. It explains the actions required in the course of the study based on the researcher’s observations on the subject of research as well as the knowledge gathered from previous studies.

What is Empirical Research?

Empirical research is research that relies on empirical evidence. Empirical evidence refers to evidence verifiable by observation or experience rather than by theory or pure logic. Thus, empirical research consists of studies whose conclusions are based on empirical evidence; such studies are observable and measurable.

Empirical evidence can be gathered through qualitative or quantitative research studies. Qualitative research methods gather non-numerical or non-statistical data. These studies help to understand the underlying reasons, opinions, and motivations behind something, as well as to uncover trends in thought and opinion. Quantitative research studies, on the other hand, gather statistical data; they have the ability to quantify behaviours, opinions, or other defined variables. A researcher can even use a combination of quantitative and qualitative methods to answer the research questions.


Figure 2: Empirical Research Cycle

A.D. de Groot, a famous psychologist, proposed a cycle (Figure 2) to describe the empirical research process. This cycle has five steps, each as important as the others: observation, induction, deduction, testing, and evaluation.
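Because evaluation feeds back into a new round of observation, the five steps repeat. A toy sketch of the cycle's ordering (purely illustrative):

```python
from itertools import cycle, islice

# De Groot's empirical cycle; evaluation loops back to a new observation.
EMPIRICAL_CYCLE = ["observation", "induction", "deduction", "testing", "evaluation"]

def research_steps(n):
    """Return the first n steps of the cycle, wrapping after evaluation."""
    return list(islice(cycle(EMPIRICAL_CYCLE), n))

print(research_steps(7))
# ['observation', 'induction', 'deduction', 'testing', 'evaluation', 'observation', 'induction']
```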

Conceptual research is a type of research that is generally related to abstract ideas or concepts, whereas empirical research is any research study whose conclusions are drawn from evidence verifiable by observation or experience rather than from theory or pure logic.

Conceptual research involves abstract ideas and concepts but does not involve any practical experiments. Empirical research, on the other hand, involves phenomena that are observable and measurable.

Type of Studies

Philosophical research studies are examples of conceptual research studies, whereas empirical research includes both quantitative and qualitative studies.

The main difference between conceptual and empirical research is that conceptual research involves abstract ideas and concepts whereas empirical research involves research based on observation, experiments and verifiable evidence.


Image Courtesy:

1. “APM Conceptual Framework” by LarryDragich, first published in APM Digest (CC BY-SA 3.0) via Commons Wikimedia 2. “Empirical Cycle” by TesseUndDaan, derivative work by Beao (CC BY 3.0) via Commons Wikimedia


About the Author: Hasa

Hasanthi is a seasoned content writer and editor with over 8 years of experience. Armed with a BA degree in English and a knack for digital marketing, she explores her passions for literature, history, culture, and food through her engaging and informative writing.




Experimentation in Software Engineering, pp. 9–36

Empirical Strategies

  • Claes Wohlin, Per Runeson, Martin Höst, Magnus C. Ohlsson, Björn Regnell & Anders Wesslén
  • First Online: 01 January 2012


There are two types of research paradigms that have different approaches to empirical studies. Exploratory research is concerned with studying objects in their natural setting and letting the findings emerge from the observations. This implies that a flexible research design [1] is needed to adapt to changes in the observed phenomenon. Flexible design research is also referred to as qualitative research, as it is primarily informed by qualitative data. Inductive research attempts to interpret a phenomenon based on explanations that people bring forward. It is concerned with discovering causes noticed by the subjects in the study and with understanding their view of the problem at hand. The subject is the person who takes part in an empirical study in order to evaluate an object.

  • Technology Transfer
  • Software Engineering
  • Systematic Literature Review
  • Empirical Strategy
  • Case Study Research



In vitro: Latin for “in the glass”; refers to chemical experiments in the test tube.

In vivo: Latin for “in life”; refers to experiments in a real environment.

References

Anastas, J.W., MacDonald, M.L.: Research Design for the Social Work and the Human Services, 2nd edn. Columbia University Press, New York (2000)


Andersson, C., Runeson, P.: A spiral process model for case studies on software quality monitoring – method and metrics. Softw. Process: Improv. Pract. 12 (2), 125–140 (2007). doi: 10.1002/spip.311

Andrews, A.A., Pradhan, A.S.: Ethical issues in empirical software engineering: the limits of policy. Empir. Softw. Eng. 6 (2), 105–110 (2001)

American Psychological Association: Ethical principles of psychologists and code of conduct. Am. Psychol. 47 , 1597–1611 (1992)

Avison, D., Baskerville, R., Myers, M.: Controlling action research projects. Inf. Technol. People 14 (1), 28–45 (2001). doi: 10.1108/09593840110384762 http://www.emeraldinsight.com/10.1108/09593840110384762

Babbie, E.R.: Survey Research Methods. Wadsworth, Belmont (1990)

Basili, V.R.: Quantitative evaluation of software engineering methodology. In: Proceedings of the First Pan Pacific Computer Conference, vol. 1, pp. 379–398. Australian Computer Society, Melbourne (1985)

Basili, V.R.: Software development: a paradigm for the future. In: Proceedings of the 13th Annual International Computer Software and Applications Conference, COMPSAC’89, Orlando, pp. 471–485. IEEE Computer Society Press, Washington (1989)

Basili, V.R.: The experimental paradigm in software engineering. In: H.D. Rombach, V.R. Basili, R.W. Selby (eds.) Experimental Software Engineering Issues: Critical Assessment and Future Directives. Lecture Notes in Computer Science, vol. 706. Springer, Berlin Heidelberg (1993)

Basili, V.R.: Evolving and packaging reading technologies. J. Syst. Softw. 38 (1), 3–12 (1997)

Basili, V.R., Weiss, D.M.: A methodology for collecting valid software engineering data. IEEE Trans. Softw. Eng. 10 (6), 728–737 (1984)

Basili, V.R., Selby, R.W.: Comparing the effectiveness of software testing strategies. IEEE Trans. Softw. Eng. 13 (12), 1278–1298 (1987)

Basili, V.R., Rombach, H.D.: The TAME project: towards improvement-oriented software environments. IEEE Trans. Softw. Eng. 14 (6), 758–773 (1988)

Basili, V.R., Green, S.: Software process evaluation at the SEL. IEEE Softw. 11 (4), pp. 58–66 (1994)

Basili, V.R., Selby, R.W., Hutchens, D.H.: Experimentation in software engineering. IEEE Trans. Softw. Eng. 12 (7), 733–743 (1986)

Basili, V.R., Caldiera, G., Rombach, H.D.: Experience factory. In: J.J. Marciniak (ed.) Encyclopedia of Software Engineering, pp. 469–476. Wiley, New York (1994)

Basili, V.R., Caldiera, G., Rombach, H.D.: Goal Question Metrics paradigm. In: J.J. Marciniak (ed.) Encyclopedia of Software Engineering, pp. 528–532. Wiley (1994)

Basili, V.R., Green, S., Laitenberger, O., Lanubile, F., Shull, F., Sørumgård, S., Zelkowitz, M.V.: The empirical investigation of perspective-based reading. Empir. Soft. Eng. 1 (2), 133–164 (1996)

Basili, V.R., Green, S., Laitenberger, O., Lanubile, F., Shull, F., Sørumgård, S., Zelkowitz, M.V.: Lab package for the empirical investigation of perspective-based reading. Technical report, Univeristy of Maryland (1998). http://www.cs.umd.edu/projects/SoftEng/ESEG/manual/pbr_package/manual.html

Basili, V.R., Shull, F., Lanubile, F.: Building knowledge through families of experiments. IEEE Trans. Softw. Eng. 25 (4), 456–473 (1999)

Baskerville, R.L., Wood-Harper, A.T.: A critical perspective on action research as a method for information systems research. J. Inf. Technol. 11 (3), 235–246 (1996). doi: 10.1080/026839696345289

Benbasat, I., Goldstein, D.K., Mead, M.: The case research strategy in studies of information systems. MIS Q. 11 (3), 369 (1987). doi: 10.2307/248684

Bergman, B., Klefsjö, B.: Quality from Customer Needs to Customer Satisfaction. Studentlitteratur, Lund (2010)

Brereton, P., Kitchenham, B.A., Budgen, D., Turner, M., Khalil, M.: Lessons from applying the systematic literature review process within the software engineering domain. J. Syst. Softw. 80 (4), 571–583 (2007). doi: 10.1016/j.jss.2006.07.009

Brereton, P., Kitchenham, B.A., Budgen, D.: Using a protocol template for case study planning. In: Proceedings of the 12th International Conference on Evaluation and Assessment in Software Engineering. University of Bari, Italy (2008)

Briand, L.C., Differding, C.M., Rombach, H.D.: Practical guidelines for measurement-based process improvement. Softw. Process: Improv. Pract. 2 (4), 253–280 (1996)

Briand, L.C., El Emam, K., Morasca, S.: On the application of measurement theory in software engineering. Empir. Softw. Eng. 1 (1), 61–88 (1996)

Briand, L.C., Bunse, C., Daly, J.W.: A controlled experiment for evaluating quality guidelines on the maintainability of object-oriented designs. IEEE Trans. Softw. Eng. 27 (6), 513–530 (2001)

British Psychological Society: Ethical principles for conducting research with human participants. Psychologist 6 (1), 33–35 (1993)

Budgen, D., Kitchenham, B.A., Charters, S., Turner, M., Brereton, P., Linkman, S.: Presenting software engineering results using structured abstracts: a randomised experiment. Empir. Softw. Eng. 13 , 435–468 (2008). doi: 10.1007/s10664-008-9075-7

Budgen, D., Burn, A.J., Kitchenham, B.A.: Reporting computing projects through structured abstracts: a quasi-experiment. Empir. Softw. Eng. 16 (2), 244–277 (2011). doi: 10.1007/s10664-010-9139-3

Campbell, D.T., Stanley, J.C.: Experimental and Quasi-experimental Designs for Research. Houghton Mifflin Company, Boston (1963)

Chrissis, M.B., Konrad, M., Shrum, S.: CMMI(R): Guidelines for process integration and product improvement. Technical report, SEI (2003)

Ciolkowski, M., Differding, C.M., Laitenberger, O., Münch, J.: Empirical investigation of perspective-based reading: A replicated experiment. Technical report, 97-13, ISERN (1997)

Coad, P., Yourdon, E.: Object-Oriented Design, 1st edn. Prentice-Hall, Englewood (1991)

Cohen, J.: Weighted kappa: nominal scale agreement with provision for scaled disagreement or partial credit. Psychol. Bull. 70 , 213–220 (1968)

Cook, T.D., Campbell, D.T.: Quasi-experimentation – Design and Analysis Issues for Field Settings. Houghton Mifflin Company, Boston (1979)

Corbin, J., Strauss, A.: Basics of Qualitative Research, 3rd edn. SAGE, Los Angeles (2008)

Cruzes, D.S., Dybå, T.: Research synthesis in software engineering: a tertiary study. Inf. Softw. Technol. 53 (5), 440–455 (2011). doi: 10.1016/j.infsof.2011.01.004

Dalkey, N., Helmer, O.: An experimental application of the delphi method to the use of experts. Manag. Sci. 9 (3), 458–467 (1963)

DeMarco, T.: Controlling Software Projects. Yourdon Press, New York (1982)

Demming, W.E.: Out of the Crisis. MIT Centre for Advanced Engineering Study, MIT Press, Cambridge, MA (1986)

Dieste, O., Grimán, A., Juristo, N.: Developing search strategies for detecting relevant experiments. Empir. Softw. Eng. 14 , 513–539 (2009). http://dx.doi.org/10.1007/s10664-008-9091-7

Dittrich, Y., Rönkkö, K., Eriksson, J., Hansson, C., Lindeberg, O.: Cooperative method development. Empir. Softw. Eng. 13 (3), 231–260 (2007). doi: 10.1007/s10664-007-9057-1

Doolan, E.P.: Experiences with Fagan’s inspection method. Softw. Pract. Exp. 22 (2), 173–182 (1992)

Dybå, T., Dingsøyr, T.: Empirical studies of agile software development: a systematic review. Inf. Softw. Technol. 50 (9–10), 833–859 (2008). doi: 10.1016/j.infsof.2008.01.006

Dybå, T., Dingsøyr, T.: Strength of evidence in systematic reviews in software engineering. In: Proceedings of the 2nd ACM-IEEE International Symposium on Empirical Software Engineering and Measurement, ESEM ’08, Kaiserslautern, pp. 178–187. ACM, New York (2008). doi:  http://doi.acm.org/10.1145/1414004.1414034

Dybå, T., Kitchenham, B.A., Jørgensen, M.: Evidence-based software engineering for practitioners. IEEE Softw. 22 , 58–65 (2005). doi: http://doi.ieeecomputersociety.org/10.1109/MS.2005.6

Dybå, T., Kampenes, V.B., Sjøberg, D.I.K.: A systematic review of statistical power in software engineering experiments. Inf. Softw. Technol. 48 (8), 745–755 (2006). doi:  10.1016/j.infsof.2005.08.009

Easterbrook, S., Singer, J., Storey, M.-A., Damian, D.: Selecting empirical methods for software engineering research. In: F. Shull, J. Singer, D.I. Sjøberg (eds.) Guide to Advanced Empirical Software Engineering. Springer, London (2008)

Eick, S.G., Loader, C.R., Long, M.D., Votta, L.G., Vander Wiel, S.A.: Estimating software fault content before coding. In: Proceedings of the 14th International Conference on Software Engineering, Melbourne, pp. 59–65. ACM Press, New York (1992)

Eisenhardt, K.M.: Building theories from case study research. Acad. Manag. Rev. 14 (4), 532 (1989). doi: 10.2307/258557

Endres, A., Rombach, H.D.: A Handbook of Software and Systems Engineering – Empirical Observations, Laws and Theories. Pearson Addison-Wesley, Harlow/New York (2003)

Fagan, M.E.: Design and code inspections to reduce errors in program development. IBM Syst. J. 15 (3), 182–211 (1976)

Fenton, N.: Software measurement: A necessary scientific basis. IEEE Trans. Softw. Eng. 3 (20), 199–206 (1994)

Fenton, N., Pfleeger, S.L.: Software Metrics: A Rigorous and Practical Approach, 2nd edn. International Thomson Computer Press, London (1996)

Fenton, N., Pfleeger, S.L., Glass, R.: Science and substance: A challenge to software engineers. IEEE Softw. 11 , 86–95 (1994)

Fink, A.: The Survey Handbook, 2nd edn. SAGE, Thousand Oaks/London (2003)

Flyvbjerg, B.: Five misunderstandings about case-study research. In: Qualitative Research Practice, concise paperback edn., pp. 390–404. SAGE, London (2007)

Frigge, M., Hoaglin, D.C., Iglewicz, B.: Some implementations of the boxplot. Am. Stat. 43 (1), 50–54 (1989)

Fusaro, P., Lanubile, F., Visaggio, G.: A replicated experiment to assess requirements inspection techniques. Empir. Softw. Eng. 2 (1), 39–57 (1997)

Glass, R.L.: The software research crisis. IEEE Softw. 11 , 42–47 (1994)

Glass, R.L., Vessey, I., Ramesh, V.: Research in software engineering: An analysis of the literature. Inf. Softw. Technol. 44 (8), 491–506 (2002). doi: 10.1016/S0950-5849(02)00049-6

Gómez, O.S., Juristo, N., Vegas, S.: Replication types in experimental disciplines. In: Proceedings of the 4th ACM-IEEE International Symposium on Empirical Software Engineering and Measurement, Bolzano-Bozen (2010)

Gorschek, T., Wohlin, C.: Requirements abstraction model. Requir. Eng. 11 , 79–101 (2006). doi: 10.1007/s00766-005-0020-7

Gorschek, T., Garre, P., Larsson, S., Wohlin, C.: A model for technology transfer in practice. IEEE Softw. 23 (6), 88–95 (2006)

Gorschek, T., Garre, P., Larsson, S., Wohlin, C.: Industry evaluation of the requirements abstraction model. Requir. Eng. 12 , 163–190 (2007). doi: 10.1007/s00766-007-0047-z

Grady, R.B., Caswell, D.L.: Software Metrics: Establishing a Company-Wide Program. Prentice-Hall, Englewood (1994)

Grant, E.E., Sackman, H.: An exploratory investigation of programmer performance under on-line and off-line conditions. IEEE Trans. Human Factor Electron. HFE-8 (1), 33–48 (1967)



Author information

Authors and Affiliations

School of Computing, Blekinge Institute of Technology, Karlskrona, Sweden

Claes Wohlin

Department of Computer Science, Lund University, Lund, Sweden

Per Runeson, Martin Höst & Björn Regnell

System Verification Sweden AB, Malmö, Sweden

Magnus C. Ohlsson

ST-Ericsson AB, Lund, Sweden

Anders Wesslén



Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Wohlin, C., Runeson, P., Höst, M., Ohlsson, M.C., Regnell, B., Wesslén, A. (2012). Empirical Strategies. In: Experimentation in Software Engineering. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-29044-2_2


DOI: https://doi.org/10.1007/978-3-642-29044-2_2

Published: 02 May 2012

Publisher Name: Springer, Berlin, Heidelberg

Print ISBN: 978-3-642-29043-5

Online ISBN: 978-3-642-29044-2



Moral Psychology: Empirical Approaches

Moral psychology investigates human functioning in moral contexts, and asks how these results may impact debate in ethical theory. This work is necessarily interdisciplinary, drawing on both the empirical resources of the human sciences and the conceptual resources of philosophical ethics. The present article discusses several topics that illustrate this type of inquiry: thought experiments, responsibility, character, egoism vs. altruism, and moral disagreement.

1. Introduction: What is Moral Psychology?


Contemporary moral psychology—the study of human thought and behavior in ethical contexts—is resolutely interdisciplinary: psychologists freely draw on philosophical theories to help structure their empirical research, while philosophers freely draw on empirical findings from psychology to help structure their theories.[1]

While this extensive interdisciplinarity is a fairly recent development (with few exceptions, most of the relevant work dates from the past quarter century), it should not be a surprising development. From antiquity to the present, philosophers have not been bashful about making empirical claims, and many of these empirical claims have been claims about human psychology (Doris & Stich 2005). It is therefore unremarkable that, with the emergence of scientific psychology over the past century and a half, some of these philosophers would think to check their work against the systematic findings of psychologists (hopefully, while taking special care to avoid being misled by scientific controversy; see Doris 2015, Chapter 3; Machery & Doris forthcoming).

Similarly, at least since the demise of behaviorism, psychologists have been keenly interested in normative phenomena in general and ethical phenomena in particular. It is therefore unremarkable that some of these psychologists would seek to enrich their theoretical frameworks with the conceptual resources of a field intensively focused on normative phenomena: philosophical ethics. As a result, the field demarcated by “moral psychology” routinely involves an admixture of empirical and normative inquiry, pursued by both philosophers and psychologists—increasingly, in the form of collaborative efforts involving practitioners from both fields.

For philosophers, the special interest of this interdisciplinary inquiry lies in the ways moral psychology may help adjudicate between competing ethical theories. The plausibility of its associated moral psychology is not, of course, the only dimension on which an ethical theory may be evaluated; equally important are normative questions having to do with how well a theory fares when compared to important convictions about such things as justice, fairness, and the good life. Such questions have been, and will continue to be, of central importance for philosophical ethics. Nonetheless, it is commonly supposed that an ethical theory committed to an impoverished or inaccurate conception of moral psychology is at a serious competitive disadvantage. As Bernard Williams (1973, 1985; cf. Flanagan 1991) forcefully argued, an ethical conception that commends relationships, commitments, or life projects that are at odds with the sorts of attachments that can be reasonably expected to take root in and vivify actual human lives is an ethical conception with—at best—a very tenuous claim to our assent.

With this in mind, problems in ethical theory choice making reference to moral psychology can be framed by two related inquiries:

  • What empirical claims about human psychology do advocates of competing perspectives on ethical theory assert or presuppose?
  • How empirically well supported are these claims?

The first question is one of philosophical scholarship: what are the psychological commitments of various positions in philosophical ethics? The second question takes us beyond the corridors of philosophy departments and to the sorts of questions asked, and sometimes answered, by the human sciences, including psychology, anthropology, sociology, history, cognitive science, linguistics and neuroscience. Thus, contemporary moral psychology is methodologically pluralistic: it aims to answer philosophical questions, but in an empirically responsible way.

However, it will sometimes be difficult to tell which claims in philosophical ethics require empirical substantiation. Partly, this is because it is sometimes unclear whether, and to what extent, a contention counts as empirically assessable. Consider questions regarding “normal functioning” in mental health care: are the answers to these questions statistical, or evaluative (Boorse 1975; Fulford 1989; Murphy 2006)? For example, is “normal” mental health simply the psychological condition of most people, or is it good mental health? If the former, the issue is, at least in principle, empirically decidable. If the latter, the issues must be decided, if they can be decided, by arguments about value.

Additionally, philosophers have not always been explicit about whether, and to what extent, they are making empirical claims. For example, are their depictions of moral character meant to identify psychological features of actual persons, or to articulate ideals that need not be instantiated in actual human psychologies? Such questions will of course be complicated by the inevitable diversity of philosophical opinion.

In every instance, therefore, the first task is to carefully document a theory’s empirically assessable claims, whether they are explicit or, as may often be the case, tacit. Once claims apt for empirical assessment have been located, the question becomes one of identifying any relevant empirical literatures. The next job is to assess those literatures, in an attempt to determine what conclusions can be responsibly drawn from them. Science, particularly social science, being what it is, many conclusions will be provisional; the philosophical moral psychologist must be prepared to adjudicate controversies in other fields, or offer informed conjecture regarding future findings. Often, the empirical record will be crucially incomplete. In such cases, philosophers may be forced to engage in empirically disciplined conjecture, or even to engage in their own empirical work, as some philosophers are beginning to do.[2]

When the philosophical positions have been isolated, and putatively relevant empirical literatures assessed, we can begin to evaluate the plausibility of the philosophical moral psychology: Is the speculative picture of psychological functioning that informs some region of ethical theory compatible with the empirical picture that emerges from systematic observation? In short, is the philosophical picture empirically adequate? If it is determined that the philosophical conception is empirically adequate, the result is vindicatory. Conversely, if the philosophical moral psychology in question is found to be empirically inadequate, the result is revisionary, compelling alteration, or even rejection, of those elements of the philosophical theory presupposing the problematic moral psychology. The process will often be comparative. Theory choice in moral psychology, like other theory choice, involves tradeoffs, and while an empirically undersupported approach may not be decisively eliminated from contention on empirical grounds alone, it may come to be seen as less attractive than theoretical options with firmer empirical foundations.

The winds driving the sort of disciplinary cross-pollination we describe do not blow in one direction. As philosophers writing for an encyclopedia of philosophy, we are naturally concerned with the ways empirical research might shape, or re-shape, philosophical ethics. But philosophical reflection may likewise influence empirical research, since such research is often driven by philosophical suppositions that may be more or less philosophically sound. The best interdisciplinary conversations, then, should benefit both parties. To illustrate the dialectical process we have described, we will consider a variety of topics in moral psychology. Our primary concerns will be philosophical: What are some of the most central problems in philosophical moral psychology, and how might they be resolved? However, as the hybrid nature of our topic invites us to do, we will pursue these questions in an interdisciplinary spirit, and are hopeful that our remarks will also engage interested scientists. Hopefully, the result will be a broad sense of the problems and methods that will structure research on moral psychology during the 21st century.

2. Thought Experiments and the Methods of Ethics

“Intuition pumps” or “thought experiments” have long been well-used items in the philosopher’s toolbox (Dennett 1984: 17–18; Stuart et al. 2018). Typically, a thought experiment presents an example, often a hypothetical example, in order to elicit some philosophically telling response. If a thought experiment is successful, it may be concluded that competing theories must account for the resulting response. These responses are supposed to serve an evidential role in philosophical theory choice; if you like, they can be understood as data competing theories must accommodate.[3] If an appropriate audience’s ethical responses to a thought experiment conflict with the response a theory prescribes for the case, the theory has suffered a counterexample.

The question of whose responses “count” philosophically (or, who is the “appropriate” audience) has been answered in a variety of ways, but for many philosophers, the intended audience for thought experiments seems to be some species of “ordinary folk” (see Jackson 1998: 118, 129; Jackson & Pettit 1995: 22–9; Lewis 1989: 126–9). Of course, the relevant folk must possess such cognitive attainments as are required to understand the case at issue; very young children are probably not an ideal audience for thought experiments. Accordingly, some philosophers may insist that the relevant responses are the considered judgments of people with the training required to see “what is at stake philosophically”. But if the responses are to help adjudicate between competing theories, the responders must be more or less theoretically neutral, and this sort of neutrality is rather likely to be vitiated by philosophical education. A dilemma emerges. On the one hand, philosophically naïve subjects may be thought to lack the erudition required to grasp the philosophical stakes. On the other, with increasing philosophical sophistication comes, very likely, philosophical partiality; one audience is naïve, and the other prejudiced.[4]

However exactly the philosophically relevant audience is specified, there are empirical questions that must be addressed in determining the philosophical potency of a thought experiment. In particular, when deciding what philosophical weight to give a response, philosophers need to determine its origins . What features of the example are implicated in a given judgment—are people reacting to the substance of the case, or the style of exposition? What features of the audience are implicated in their reaction—do different demographic groups respond to the example differently? Are there factors in the environment that are affecting people’s intuitive judgments? Does the order in which people consider examples affect their judgments? Such questions raise the following concern: judgments about thought experiments dealing with moral issues might be strongly influenced by ethically irrelevant characteristics of the example or the audience or the environment or the order of presentation. Whether a characteristic is ethically relevant is a matter for philosophical discussion, but determining the status of a particular thought experiment also requires empirical investigation of its causally relevant characteristics. We’ll now describe some examples of such investigation.

As part of their famous research on the “heuristics and biases” that underlie human reasoning, Tversky and Kahneman (1981) presented subjects with the following problem:

Imagine that the U.S. is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows: If Program A is adopted, 200 people will be saved. If Program B is adopted, there is a 1/3 probability that 600 people will be saved, and a 2/3 probability that no people will be saved.

A second group of subjects was given an identical problem, except that the programs were described as follows:

If Program C is adopted, 400 people will die. If Program D is adopted, there is a 1/3 probability that nobody will die and a 2/3 probability that 600 people will die.

On the first version of the problem, most subjects thought that Program A should be adopted. But on the second version, most chose Program D, despite the fact that the outcome described in A is identical to the one described in C. The disconcerting implication of this study is that ethical responses may be strongly influenced by the manner in which cases are described or framed. It seems that such framing sensitivities constitute ethically irrelevant influences on ethical responses. Unless this sort of possibility can be confidently eliminated, one should hesitate to rely on responses to a thought experiment for adjudicating theoretical controversies. Such possibilities can only be eliminated through systematic empirical work. [ 5 ]
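The equivalence of the two framings can be verified with a quick expected-value calculation, a minimal sketch using only the figures quoted in the problem:

```python
# Expected outcomes in Tversky & Kahneman's (1981) disease problem,
# computed from the figures quoted in the problem text (600 people at risk).
TOTAL = 600

saved_A = 200                                 # certain: 200 saved
saved_B = (1/3) * 600 + (2/3) * 0             # gamble, framed as lives saved
saved_C = TOTAL - 400                         # certain: 400 die, i.e., 200 saved
saved_D = TOTAL - ((1/3) * 0 + (2/3) * 600)   # the same gamble, framed as deaths

# A and C describe the identical certain outcome; B and D the identical gamble.
assert saved_A == saved_C == 200
assert saved_B == saved_D == 200
```

The programs differ only in whether outcomes are framed as lives saved or lives lost, so the observed preference reversal cannot track any difference in the outcomes themselves.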

While a relatively small percentage of empirical work on “heuristics and biases” directly addresses moral reasoning, numerous philosophers who have addressed the issue (Horowitz 1998; Doris & Stich 2005; Sinnott-Armstrong 2005; Sunstein 2005) agree that phenomena like framing effects are likely to be pervasively implicated in responses to ethically freighted examples, and argue that this state of affairs should cause philosophers to view the thought-experimental method with considerable concern.

We turn now to order effects. In a pioneering study, Petrinovich and O’Neill (1996) found that participants’ moral intuitions varied with the order in which the thought experiments were presented. Similar findings have been reported by Liao et al. (2012), Wiegmann et al. (2012), and Schwitzgebel & Cushman (2011, 2015). The Schwitzgebel and Cushman studies are particularly striking, since they set out to explore whether order effects in moral intuitions were smaller or non-existent in professional philosophers. Surprisingly, they found that professional philosophers were also subject to order effects, even though the thought experiments used are well known in the field. Schwitzgebel and Cushman also report that in some cases philosophers’ intuitions show substantial order effects when the intuitions of non-philosophers don’t.

Audience characteristics may also affect the outcome of thought experiments. Haidt and associates (1993: 613) presented stories about “harmless yet offensive violations of strong social norms” to men and women of high and low socioeconomic status (SES) in Philadelphia (USA), Porto Alegre, and Recife (both in Brazil). For example:

A man goes to the supermarket once a week and buys a dead chicken. But before cooking the chicken, he has sexual intercourse with it. Then he cooks it and eats it. (Haidt et al. 1993: 617)

Lower SES subjects tended to “moralize” harmless and offensive behaviors like that in the chicken story. These subjects were more inclined than their high SES counterparts to say that the actor should be “stopped or punished”, and more inclined to deny that such behaviors would be “OK” if customary in a given country (Haidt et al. 1993: 618–19). The point is not that lower SES subjects are mistaken in their moralization of such behaviors while the urbanity of higher SES subjects represents a more rationally defensible response. The difficulty is deciding which—if any—of the conflicting responses is fit to serve as a constraint on ethical theory, when both may equally be the result of more or less arbitrary cultural factors.

Philosophical audiences typically decline to moralize the offensive behaviors, and we ourselves share their tolerant attitude. But of course these audiences—by virtue of educational attainments, if not stock portfolios—are overwhelmingly high SES. Haidt’s work suggests that it is a mistake for a philosopher to say, as Jackson (1998: 32n4; cf. 37) does, that “my intuitions reveal the folk conception in as much as I am reasonably entitled, as I usually am, to regard myself as typical”. The question is: typical of what demographic? Are philosophers’ ethical responses determined by the philosophical substance of the examples, or by cultural idiosyncrasies that are very plausibly thought to be ethically irrelevant? Once again, until such possibilities are ruled out by systematic empirical investigation, the philosophical heft of a thought experiment is open to question.

In recent years there has been a growing body of research reporting that judgments evoked by moral thought experiments are affected by environmental factors that look to be completely irrelevant to the moral issue at hand. The presence of dirty pizza boxes and a whiff of fart spray (Schnall et al. 2008a), the use of soap (Schnall et al. 2008b) or an antiseptic handwipe (Zhong et al. 2010), or even the proximity of a hand sanitizer dispenser (Helzer & Pizarro 2011) have all been reported to influence moral intuitions. Tobia et al. (2013) found that the moral intuitions of both students and professional philosophers are affected by spraying the questionnaire with a disinfectant spray. Valdesolo and DeSteno (2006) reported that viewing a humorous video clip can have a substantial impact on participants’ moral intuitions. And Strohminger et al. (2011) have shown that hearing different kinds of audio clips (stand-up comedy or inspirational stories from a volume called Chicken Soup for the Soul) has divergent effects on moral intuitions.

How should moral theorists react to findings like these? One might, of course, eschew thought experiments in ethical theorizing. While this methodological austerity is not without appeal, it comes at a cost. Despite the difficulties, thought experiments are a window, in some cases the only accessible window, into important regions of ethical experience. In so far as it is disconnected from the thoughts and feelings of the lived ethical life, ethical theory risks being “motivationally inaccessible”, or incapable of engaging the ethical concern of agents who are supposed to live in accordance with the normative standards of the theory. [ 6 ] Fortunately, there is another possibility: continue pursuing the research program that systematically investigates responses to intuition pumps. In effect, the idea is to subject philosophical thought experiments to the critical methods of experimental social psychology. If investigations employing different experimental scenarios and subject populations reveal a clear trend in responses, we can begin to have some confidence that we are identifying a deeply and widely shared moral conviction. Philosophical discussion may establish that convictions of this sort should serve as a constraint on moral theory, while responses to thought experiments that empirical research determines to lack such solidity, such as those susceptible to order, framing, or environmental effects, or those admitting of strong cultural variation, may be ones that ethical theorists can safely disregard.

A philosophically informed empirical research program akin to the one just described is more than a methodological fantasy. This approach accurately describes a number of research programs aimed at informing philosophical debates through interdisciplinary research.

One of the earliest examples of this kind of work was inspired in large part by the work of Knobe (2003a,b, 2006) and addressed questions surrounding “folk morality” on issues ranging from intentional action to causal responsibility (see Knobe 2010 for review and discussion). This early work helped to spur the development of a truly interdisciplinary research program with both philosophers and psychologists investigating the folk morality of everyday life. (See the Stanford Encyclopedia of Philosophy article on Experimental Moral Philosophy for a more complete treatment of this research.)

Another related philosophical debate concerns the compatibility of free will and moral responsibility with determinism. On the one hand, incompatibilists insist that determinism (the view that all events are jointly determined by antecedent events as governed by laws of nature) is incompatible with moral responsibility. Typically, these accounts also go on to specify what particular capacity is required to be responsible for one’s own behavior (e.g., that agents have alternate possibilities for behavior, or are the “ultimate” source of their behavior, or both; Kane 2002: 5; Haji 2002: 202–3). [ 7 ] On the other hand, compatibilists argue that determinism and responsibility are compatible, often by denying that responsible agency requires that the actor have genuinely open alternatives, or rejecting the ultimacy condition that requires indeterminism (or impossible demands for self-creation). In short, compatibilists hold that people may legitimately be held responsible even though there is some sense in which they “could not have done otherwise” or are not the “ultimate source” of their behavior. Incompatibilists deny that this is the case. Proponents of these two opposing positions have remained relatively entrenched, and some participants have raised fears of a “dialectical stalemate” (Fischer 1994: 83–5).

A critical issue in these debates has been the claim that the incompatibilist position better captures folk moral judgments about agents whose actions have been completely determined (e.g., G. Strawson 1986: 88; Smilansky 2003: 259; Pereboom 2001: xvi; O’Connor 2000: 4; Nagel 1986: 113, 125; Campbell 1951: 451; Pink 2004: 12). For example, Robert Kane (1999: 218; cf. 1996: 83–5), a leading incompatibilist, reports that in his experience “most ordinary persons start out as natural incompatibilists”, and “have to be talked out of this natural incompatibilism by the clever arguments of philosophers”.

Unsurprisingly, some compatibilists have been quick to assert the contrary. For example, Peter Strawson (1982) famously argued that in the context of “ordinary interpersonal relationships”, people are not haunted by the specter of determinism; such metaphysical concerns are irrelevant to their experience and expression of the “reactive attitudes”—anger, resentment, gratitude, forgiveness, and the like—associated with responsibility assessment. Any anxiety about determinism, Strawson insisted, is due to the “panicky metaphysics” of philosophers, not incompatibilist convictions on the part of ordinary people. However, incompatibilists have historically been thought to have ordinary intuitions on their side; even some philosophers with compatibilist leanings are prepared to concede the incompatibilist point about “typical” response tendencies (e.g., Vargas 2005a,b).

Neither side, so far as we are aware, has offered much in the way of systematic evidence of actual patterns of folk moral judgments. Recently, however, a now substantial research program has begun to offer empirical evidence on the relationship between determinism and moral responsibility in folk moral judgments.

Inspired by the work of Frankfurt (1988) and others, Woolfolk, Doris, and Darley (2006) hypothesized that observers may hold actors responsible even when the observers judge that the actors could not have done otherwise, if the actors appear to “identify” with their behavior. Roughly, the idea is that the actor identifies with a behavior—and is therefore responsible for it—to the extent that she “embraces” the behavior, or performs it “wholeheartedly” regardless of whether genuine alternatives for behavior are possible. [ 8 ] Woolfolk et al.’s suspicion was, in effect, that people’s (presumably tacit) theory of responsibility is compatibilist.

To test this, subjects were asked to read a story about an agent who was forced by a group of armed hijackers to kill a man who had been having an affair with his wife. In the “low identification” condition, the man was described as being horrified at being forced to kill his wife’s lover, and as not wanting to do so. In the “high identification” condition, the man was instead described as welcoming the opportunity and wanting to kill his wife’s lover. In both conditions, the man was given no choice, and did kill his wife’s lover.

Consistent with Woolfolk and colleagues’ hypothesis, subjects judged that the highly identifying actor was more responsible, more appropriately blamed, and more properly subject to guilt than the low identification actor. [ 9 ] This pattern in folk moral judgments seems to suggest that participants were not consistently incompatibilist in their responsibility attributions, because the lack of alternatives available to the actor was not alone sufficient to rule out such attributions.

In response to these results, those who believe that folk morality is incompatibilist may be quick to object that the study merely suggests that responsibility attributions are influenced by identification, but says nothing about incompatibilist commitments or the lack thereof. Subjects still may have believed that the actor could have done otherwise. To address this concern, Woolfolk and colleagues also conducted a version of the study in which the man acted under the influence of a “compliance drug”. In this case, participants were markedly less likely to agree that the man “was free to behave other than he did” and yet they still held the agent who identified with the action as more responsible than the agent who did not. These results look to pose a clear challenge to the view that ordinary folk are typically incompatibilists.

A related pattern of responses was obtained by Nahmias, Morris, Nadelhoffer, and Turner (2009), who instead described agents performing immoral behaviors in a “deterministic world” of the sort often described in philosophy classrooms. One variation read as follows:

Imagine that in the next century we discover all the laws of nature, and we build a supercomputer which can deduce from these laws of nature and from the current state of everything in the world exactly what will be happening in the world at any future time. It can look at everything about the way the world is and predict everything about how it will be with 100% accuracy. Suppose that such a supercomputer existed, and it looks at the state of the universe at a certain time on March 25th, 2150 C.E., twenty years before Jeremy Hall is born. The computer then deduces from this information and the laws of nature that Jeremy will definitely rob Fidelity Bank at 6:00 PM on January 26th, 2195. As always, the supercomputer’s prediction is correct; Jeremy robs Fidelity Bank at 6:00 PM on January 26th, 2195.

Subjects were then asked whether Jeremy was morally blameworthy. Most said yes, indicating that they thought an agent could be morally blameworthy even if his behaviors were entirely determined by natural laws. Consistent with the Woolfolk et al. results, it appears that the subjects’ judgments, at least those having to do with moral blameworthiness, were not governed by a commitment to incompatibilism.

This emerging picture was complicated, however, by Nichols and Knobe (2007), which argued that the ostensibly compatibilist responses were performance errors driven by an affective response to the agents’ immoral actions. To demonstrate this, all subjects were asked to imagine two universes—a universe completely governed by deterministic laws (Universe A) and a universe (Universe B) in which everything is determined except for human decisions, which are not completely determined by deterministic laws and what has happened in the past. In Universe B, but not Universe A, “each human decision does not have to happen the way it does”. Some subjects were assigned to a concrete condition, and asked to make a judgment about a specific individual in specific circumstances, while others were assigned to an abstract condition, and asked to make a more general judgment, divorced from any particular individual. The hypothesis was that the difference between these two conditions would generate different responses regarding the relationship between determinism and moral responsibility. Subjects in the concrete condition read a story about a man, “Bill”, in the deterministic universe who murders his wife and children in a particularly ghastly manner, and were asked whether Bill was morally responsible for what he had done. By contrast, subjects in the abstract condition were asked “In Universe A, is it possible for a person to be fully morally responsible for their actions?” Seventy-two percent of subjects in the concrete condition gave a compatibilist response, holding Bill responsible in Universe A, whereas less than fifteen percent of subjects in the abstract condition gave a compatibilist response, allowing that people could be fully morally responsible in the deterministic Universe A.

In line with previous experimental work demonstrating that increased affective arousal amplified punitive responses to wrongdoing (Lerner, Goldberg, & Tetlock 1998), Nichols and Knobe hypothesized that previously observed compatibilist responses were the result of the affectively laden nature of the stimulus materials. When this affective element was eliminated from the materials (as in the abstract condition), participants instead exhibited an incompatibilist pattern of responses.

More recently, Nichols and Knobe’s line of reasoning has come under fire from two directions. First, a number of studies have now tried to systematically manipulate how affectively arousing the immoral behavior is, but have not found that these changes significantly alter participants’ judgments of moral responsibility in deterministic scenarios. Rather, the differences seem to be best explained simply by whether the case was described abstractly or concretely (see Cova et al. 2012 for work with patients who have frontotemporal dementia, and see Feltz & Cova 2014 for a meta-analysis). Second, a separate line of studies from Murray and Nahmias (2014) argued that participants who exhibited the apparently incompatibilist pattern of responses were making a critical error in how they understood the deterministic scenario. In particular, they argued these participants mistakenly took the agents, or their mental states, in these deterministic scenarios to be “bypassed” in the causal chain leading up to their behavior. In support of their argument, Murray and Nahmias (2014) demonstrated that when analyses were restricted to the participants who clearly did not take the agent to be bypassed, these participants judged the agent to be morally responsible (blameworthy, etc.) despite being in a deterministic universe. Unsurprisingly, this line of argument has, in turn, inspired a number of further counter-responses, both empirical (Rose & Nichols 2013) and theoretical (Björnsson & Pereboom 2016), which caution against the conclusions of Murray and Nahmias.

While the debate continues over whether the compatibilist or incompatibilist position better captures folk moral judgments of agents in deterministic universes, a related line of research has sprung up around what is widely taken to be the most convincing contemporary form of argument for incompatibilism: manipulation arguments (e.g., Mele 2006, 2013; Pereboom 2001, 2014). Pereboom’s Four-Case version, for example, begins with the case of an agent named Plum who is manipulated by neuroscientists who use a radio-like technology to change Plum’s neural states, which results in him wanting and then deciding to kill a man named White. In this case, it seems clear that Plum did not freely decide to kill White. Compare this case to a second one, in which the team of neuroscientists programmed Plum at the beginning of his life in a way that resulted in him developing the desire (and making the decision) to kill White. The incompatibilist argues that these two cases do not differ in a way that is relevant for whether Plum acted freely, and so, once again, it seems that Plum did not freely decide to kill White. Now compare this to a third case, in which Plum’s desire and decision to kill White were instead determined by his cultural and social milieu, rather than by a team of neuroscientists. Since the only difference between the second and third case is the particular technological process through which Plum’s mental states were determined, he would again seem to not have freely decided to kill White. Finally, in a fourth and final case, Plum’s desire and decision to kill White were determined jointly by the past states and the laws of nature in our own deterministic universe. Regarding these four cases, Pereboom argues that, since there is no difference between any of the four cases that is relevant to free will, if Plum was not morally responsible in the first case, then he was not morally responsible in the fourth.

In response to this kind of manipulation-based argument for incompatibilism, a number of researchers have taken aim at painting a better empirical picture of ordinary moral judgments concerning manipulated agents. This line of inquiry has been productive on two levels. First, a growing number of empirical studies have investigated moral responsibility judgments about cases of manipulation, and now provide a clearer psychological picture for why manipulated agents are judged to lack free will and moral responsibility. Second, continuing theoretical work, informed by this empirical picture, has provided new reasons for doubting that manipulation-based arguments actually provide evidence against compatibilism.

One line of empirical research, led by Chandra Sripada (2012), has asked whether manipulated agents are perceived to be unfree because (a) they lack ultimate control over their actions (a capacity incompatibilists take to be essential for moral responsibility) or instead because (b) their psychological or volitional capacities (the capacities focused on by compatibilists) have been damaged. Using a statistical approach called Structural Equation Modeling (or SEM), Sripada found that participants’ moral responsibility judgments were best explained by whether they believed the psychological and volitional capacities of the agent were damaged by manipulation and not whether the agent lacked control over her actions. This finding suggests that patterns of judgment in cases of manipulation are more consistent with the predictions of compatibilism than with incompatibilism.

Taking a different approach, Phillips and Shaw (2014) demonstrated that the reduction of moral responsibility that is typically observed in cases of manipulation depends critically on the role of an intentional manipulator. In particular, ordinary people were shown to distinguish between (1) the moral responsibility of agents who are made to do a particular act by features of the situation they are in (i.e., situational determinism), and (2) the moral responsibility of agents who are made to do that same act by another intentional agent (i.e., manipulation). This work suggests that the ordinary practice of assessing freedom and responsibility is likely to clearly distinguish between cases that do and do not involve a manipulator who intervenes with the intention of causing the manipulated agent to do the immoral action. A series of studies by Murray and Lombrozo (2016) further elaborates these findings by providing evidence that the specific reduction of moral responsibility that results from being manipulated arises from the perception that the agent’s mental states are bypassed.

Collectively, two lessons have come out of this work on the ordinary practice of assessing the moral responsibility of manipulated agents: (1) folk morality provides a natural way of distinguishing between the different cases used in manipulation-based arguments (those that do involve the intentional intervention of a manipulator vs. those that don’t) and (2) folk morality draws an intimate link between the moral responsibility of an agent and that agent’s mental and volitional capacities. Building on this increasingly clear empirical picture, Deery and Nahmias (2017) formalized these basic principles in theoretical work that argues for a principled way of distinguishing between the moral responsibility of determined and manipulated agents.

While the majority of evidence may currently be in favor of the view that folk morality adheres to a kind of “natural compatibilism” (Cova & Kitano 2013), this remains a contentious topic, and new work is continually emerging on both sides of the debate (Andow & Cova 2016; Bear & Knobe 2016; Björnsson 2014; Feltz & Millan 2013; Figdor & Phelan 2015; Knobe 2014). One thing that has now been agreed on by parties on both sides of this debate, however, is a critical role for careful empirical studies (Björnsson & Pereboom 2016; Knobe 2014; Nahmias 2011).

To date, empirically informed approaches to moral psychology have been most prominent in discussions of moral character and virtue. The focus is on decades of experimentation in “situationist” social psychology: unobtrusive features of situations have repeatedly been shown to impact behavior in seemingly arbitrary, and sometimes alarming, ways. Among the findings that have most interested philosophers:

  • The Phone Booth Study (Isen & Levin 1972: 387): people who had just found a dime in a payphone’s coin return were 22 times more likely than those who did not find a dime to help a woman who had dropped some papers (88% v. 4%).
  • The Good Samaritan Study (Darley & Batson 1973: 105): unhurried passersby were 6 times more likely than hurried passersby to help an unfortunate who appeared to be in significant distress (63% v. 10%).
  • The Obedience Experiments (Milgram 1974): subjects repeatedly punished a screaming victim with realistic (but simulated) electric shocks at the polite request of an experimenter.
  • The Stanford Prison Study (Zimbardo 2007): college students role-playing as “guards” in a simulated prison subjected student “prisoners” to grotesque verbal and emotional abuse.
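The headline multipliers in the first two studies follow directly from the reported helping rates, as a quick check on the figures quoted above confirms:

```python
# Helping rates (%) as reported in the studies cited above.
dime, no_dime = 88, 4        # Isen & Levin (1972): phone booth study
unhurried, hurried = 63, 10  # Darley & Batson (1973): Good Samaritan study

ratio_dime = dime / no_dime        # 88 / 4 = 22.0 -> "22 times more likely"
ratio_hurry = unhurried / hurried  # 63 / 10 = 6.3 -> roughly "6 times more likely"
```

The "22 times" figure is exact, while "6 times" rounds the ratio 6.3 down; both express relative likelihoods, not absolute differences in helping.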

These experiments are part of an extensive empirical literature, where social psychologists have time and again found that disappointing omissions and appalling actions are readily induced by apparently minor situational features. [ 10 ] The striking fact is not that people fail to meet standards for good conduct, but that they can be so easily induced to do so.

Exploiting this observation, “character skeptics” contend that if moral conduct varies so sharply, often for the worse, with minor perturbations in circumstance, ostensibly good character provides very limited assurance of good conduct. In addition to this claim in descriptive psychology, concerning the fragility of moral character, some character skeptics also forward a thesis in normative ethics, to the effect that character merits less attention in ethical thought than it traditionally gets. [ 11 ]

Character skepticism contravenes the influential program of contemporary virtue ethics, which maintains that advancing ethical theory requires more attention to character, and virtue ethicists offer vigorous resistance. [ 12 ] Discussion has sometimes been overheated, but it has resulted in a large literature in a vibrantly interdisciplinary field of “character studies” (e.g., Miller et al. 2015). [ 13 ] The literature is too extensive for the confines of this entry, but we will endeavor to outline some of the main issues.

The first thing to observe is that the science which inspires the character skeptics may itself be subject to skepticism. Given the uneven history of the human sciences, it might be argued that the relevant findings are too uncertain to stand as a constraint on philosophical theorizing. This contention is potentially buttressed by recent prominent replication failures in social psychology.

The psychology at issue is, like much of science, unfinished business. But the replication controversy, and the attendant suspicion of science, is insufficient grounds for dismissing the psychology out of hand. Philosophical conclusions should not be based on a few studies; the task of the philosophical consumer of science is to identify trends in convergent strands of evidence (Doris 2015: 49, 56; Machery & Doris forthcoming). The observation that motivates character skepticism—the surprising situational sensitivity of behavior—is supported by a wide range of scientific findings, as well as by recurring themes in history and biography (Doris 2002, 2005). The strong situational discriminativeness of behavior is accepted as fact by a high proportion of involved scientists; accordingly, it is not much contested in debates about character skepticism.

But the philosophical implications of this fact remain, after considerable debate, a contentious issue. The various responses to character skepticism need not be forwarded in isolation, and some of them may be combined as part of a multi-pronged defense. Different rejoinders have differing strengths and weaknesses, particularly with respect to the differing pieces of evidence on which character skeptics rely; the phenomena are not unitary, and accommodating them all may preclude a unitary response.

One way of defusing empirically motivated skepticism—dubbed by Alfano (2013) “the dodge”—is simply to deny that virtue ethics makes empirical claims. On this understanding, virtue ethics is cast as a “purely normative” endeavor aiming at erecting ethical ideals in the complete absence of empirical commitments regarding actual human psychologies. This sort of purity is perhaps more honored in the breach than the observance: historically, virtue ethics has been typified by an interest in how actual people become good. Aristotle (Nicomachean Ethics, 1099b18–19) thought that anyone not “maimed” with regard to the capacity for virtue may acquire it “by a certain kind of study and care”, and contemporary Aristotelians have emphasized the importance of moral education and development (e.g., Annas 2011). More generally, virtue-based approaches have been claimed to have an advantage over major Kantian and consequentialist competitors with respect to “psychological realism”—the advantage of a more lifelike moral psychology (see Anscombe 1958: 1, 15; Williams 1985; Flanagan 1991: 182; Hursthouse 1999: 19–20).

To be sure, eschewing empirical commitment allows virtue ethics to escape empirical threat: obviously, empirical evidence cannot be used to undermine a theory that makes no empirical claims. However, it is not clear that such theories can claim the advantages traditionally claimed for virtue theories with regard to moral development and psychological realism. In any event, they are not contributions to empirical moral psychology, and needn’t be further discussed here.

Before seeing how the debate in moral psychology might be advanced, it is necessary to correct a mischaracterization that serves to arrest progress. It is too often said, particularly in reference to Doris (1998, 2002) and Harman (1999, 2000), that character skepticism comes to the view that character traits “do not exist” (e.g., Flanagan 2009: 55). Frequently, this attribution is made without documentation, but when documentation is provided, it is typically in reference to some early, characteristically pointed, remarks of Harman (e.g., 1999). Yet in his most recent contribution, Harman (2009: 241) says, “I do not think that social psychology demonstrates there are no character traits”. For his part, Doris has repeatedly asserted that traits exist, and has repeatedly drawn attention to such assertions (Doris 1998: 507–509; 2002: 62–6; 2005: 667; 2010: 138–141; Doris & Stich 2005: 119–20; Doris & Prinz 2009).

With good reason: to say “traits do not exist” is tantamount to denying that there are individual dispositional differences, an unlikely view that character skeptics and antiskeptics are united in rejecting. Quite unsurprisingly, this unlikely view is seriously undersubscribed in both philosophy and psychology. It is endorsed by neither the most aggressive critics of personality, situationists in social psychology such as Ross and Nisbett (1991), nor by the patron saint of situationism in personality psychology: Mischel (1999: 45). Mischel disavows a trait-based approach, but his skepticism concerns a particular approach to traits, not individual dispositional differences more generally.

The question of whether or not traits exist is therefore emphatically not the issue dividing more and less skeptical approaches to character. Today, all mainstream parties to the debate are “interactionist”, treating behavioral outcomes as the function of a (complex) person by situation interaction (Mehl et al. 2015)—and it’s likely most participants have always been so (Doris 2002: 25–6). Contemporary research programs in personality and social psychology freely deploy both personal and situational variables (e.g., Cameron, Payne, & Doris 2013; Leikas, Lönnqvist, & Verkasalo 2012; Sherman, Nave, & Funder 2010). The issue worth discussing is not whether individual dispositional differences exist, but how these differences should be characterized, and how (or whether) these individual differences, when appropriately characterized, should inform ethical thought.

An important feature of early forays into character skepticism was that skeptics tended to focus on behavioral implications of traits rather than the psychological antecedents of behavior (Doris 2015: 15). Defenders of virtue ethics observe that character skeptics have had much to say about situational variation in behavior and little to say about the psychological processes underlying it, with the result that they overlook the rational order in people’s lives (Adams 2006: 115–232). These virtue ethicists maintain that the behavioral variation provoking character skepticism evinces not unreliability, but rationally appropriate sensitivity to differing situations (Adams 2006; Kamtekar 2004). The virtuous person, such as Aristotle’s exemplary phronimos (“man of practical wisdom”) may sometimes come clean, and sometimes dissemble, or sometimes fight, and sometimes flee, depending on the particular ethical demands of his circumstances.

For example, in the Good Samaritan Study, the hurried passersby were on their way to an appointment where they had agreed to give a presentation; perhaps these people made a rational determination—perhaps even an ethically defensible determination—to weigh the demands of punctuality and professionalism over the ethical requirement to check on the welfare of a stranger in apparent distress. However attractive one finds this account of the case (note that some of Darley and Batson’s [1973] hurried passersby failed to notice the victim, which strains explanations in terms of their rational discriminations), there are other cases where the “rationality response” seems plainly unattractive. These are cases of ethically irrelevant influences (Sec. 2 above; Doris & Stich 2005), where it seems unlikely the influence could be cited as part of a rationalizing explanation of the behavior: it’s odd to cite failing to find a dime as justification for failing to help—or for that matter, finding a dime as justification for doing so.

It is certainly appropriate for virtue ethicists to emphasize practical rationality in their accounts of character. This is a central theme in the tradition going back to Aristotle himself, who is probably the most oft-cited canonical philosopher in contemporary virtue ethics. But while the rationality response may initially accommodate some of the troubling behavioral evidence, it encounters further empirical difficulty. There is an extensive empirical literature problematizing familiar conceptions of rationality: psychologists have endlessly documented a dispiriting range of reasoning errors (Baron 1994, 2001; Gilovich et al. 2002; Kahneman et al. 1982; Tversky & Kahneman 1973; Kruger & Dunning 1999; Nisbett & Borgida 1975; Nisbett & Ross 1980; Stich 1990; Tversky & Kahneman 1981). In light of this evidence, character skeptics claim that the vagaries afflicting behavior also afflict reasoning (Alfano 2013; Olin & Doris 2014).

Research supporting this discouraging assessment of human rationality is controversial, and not all psychologists think things are so bleak (Gigerenzer 2000; Gigerenzer et al. 1999; for philosophical commentary see Samuels & Stich 2002). Nevertheless, if virtue ethics is to have an empirically credible moral psychology, it needs to account for the empirical challenges to practical reasoning: how can the relevant excellence in practical reasoning be developed?

Faced with the challenge to practical rationality, virtue ethicists may respond that their theories concern excellent reasoning, not the ordinary reasoning studied in psychology. Practical wisdom, and the ethical virtue it supports, are expected to be rare, and not widely instantiated. This state of affairs, it is said, is quite compatible with the disturbing, but not exceptionlessly disturbing, behavior in experiments like Milgram’s (see Athanassoulis 1999: 217–219; DePaul 1999; Kupperman 2001: 242–3). If this account is supposed to be part of an empirically contentful moral psychology, rather than unverified speculation, we require a detailed and empirically substantiated account of how the virtuous few get that way—remember that an emphasis on moral development is central to the virtue ethics tradition. Moreover, if virtue ethics is supposed to have widespread practical implications—as opposed to being merely a celebration of a tiny “virtue elite”—it should have an account of how the less-than-virtuous many may at least tolerably approximate virtue.

This point is underscored by the fact that for some of the troubling evidence, as in the Stanford Prison Study, the worry is not so much that people fail standards of virtue, but that they fail standards of minimal decency. Surely an approach to ethics that celebrates moral development, even one that acknowledges (or rather, insists) that most people will not attain its ideal, might be expected to have an account of how people can become minimally decent.

Recently, proponents of virtue ethics have increasingly proposed a suggestive solution to this problem: virtue is a skill acquired through effortful practice, so virtue is a kind of expertise (Annas 2011; Bloomfield 2000, 2001, 2014; Jacobson 2005; Russell 2015; Snow 2010; Sosa 2009; Stichter 2007, 2011; for reservations, see Doris, in preparation). The virtuous are expert at morality and—given the Aristotelian association of virtue and happiness—expert at life.

An extensive scientific literature indicates that developing expert skill requires extensive preparation, whether the practitioner is a novelist, doctor, or chess master—around 10,000 hours of “deliberate practice”, according to a popular generalization (Ericsson 2014; Ericsson et al. 1993). The “10,000-hour rule” is likely an oversimplification, but there is no doubt that attaining expertise requires intensive training. Because of this, people rarely achieve eminence in more than one area, and expertise tends not to transfer across domains; for instance, “baseball trivia” experts display superior recall for baseball-related material, but not for non-baseball material (Chiesi et al. 1979). By contrast, becoming expert at morality, or (even more ambitiously) expert at the whole of life, would apparently require a highly generalized form of expertise: to be good, there’s a lot to be good at. Moreover, it’s quite unclear what deliberate practice at life involves; how exactly does one get better at being good?

One obvious problem concerns specifying the “good” in question. Expertises like chess have been effectively studied in part because there are accepted standards of excellence (the “ELO” score used for ranking chess players; Glickman 1995). To put it blithely, there aren’t any chess skeptics. But there have, historically, been lots of moral skeptics. And if there’s no moral knowledge, how could there be moral experts? And even if there are moral experts, there’s the problem of how they are to be identified, since it is not clear we possess a standard independent of expert opinion itself (like winning chess matches) for doing so (for the “metaethics of expertise”, see McGrath 2008, 2011).

Even if these notorious philosophical difficulties can be resolved—as defenders of expertise approaches to virtue must think they can—matters remain complicated, because if moral expertise is like other expertises, practice alone—assuming we have a clear notion of what “moral practice” entails—will be insufficient. While practice matters in attaining expertise, other factors, such as talent, also matter (Hambrick et al. 2014; Macnamara et al. 2014). And some of the required endowments may be quite unequally distributed across populations: practice cannot make a jockey into an NFL lineman, or an NFL lineman into a jockey.

What are the natural endowments required for moral expertise, and how widely are they distributed in the population? If they are rare, like the skill of a chess master or the strength of an NFL lineman, virtue will also be rare. Some virtue ethicists believe virtue should be widely attainable, and they will resist this result (Adams 2006: 119–123, and arguably Aristotle Nicomachean Ethics 1099b15–20). But even virtue ethicists who embrace the rarity of virtue require an account of what the necessary natural endowments are, and if they wish to also have an account of how the less well-endowed may achieve at least minimal decency, they should have something to say about how moral development will proceed across a population with widely varying endowments.

What is needed, for the study of moral character to advance, is an account of the biological, psychological, and social factors requisite for successful moral development—on the expertise model, the conditions conducive to developing “moral skill”. This, quite obviously, is a tall order, and the research needed to systematically address these issues is in comparative infancy. Yet the expertise model, in exploiting connections with areas in which skill acquisition has been well studied, such as music and sport, provides a framework for moving discussion of character beyond the empirically under-informed conjectures and assumptions about “habituation” that have been too frequent in previous literature (Doris 2015: 128).

People often behave in ways that benefit others, and they sometimes do this knowing that it will be costly, unpleasant or dangerous. But at least since Plato’s classic discussion in the second Book of the Republic, debate has raged over why people behave in this way. Are their motives altruistic, or is their behavior ultimately motivated by self-interest? Famously, Hobbes gave this answer:

No man giveth but with intention of good to himself, because gift is voluntary; and of all voluntary acts, the object is to every man his own good; of which, if men see they shall be frustrated, there will be no beginning of benevolence or trust, nor consequently of mutual help. (1651 [1981: Ch. 15])

Views like Hobbes’ have come to be called egoism, [14] and this rather depressing conception of human motivation has any number of eminent philosophical advocates, including Bentham, J.S. Mill and Nietzsche. [15] Dissenting voices, though perhaps fewer in number, have been no less eminent. Butler, Hume, Rousseau and Adam Smith have all argued that, sometimes at least, human motivation is genuinely altruistic.

Though the issue that divides egoistic and altruistic accounts of human motivation is largely empirical, it is easy to see why philosophers have thought that the competing answers will have important consequences for moral theory. For example, Kant famously argued that a person should act “not from inclination but from duty, and by this would his conduct first acquire true moral worth” (1785 [1949: Sec. 1, parag. 12]). But egoism maintains that all human motivation is ultimately self-interested, and thus people can’t act “from duty” in the way that Kant urged. Thus if egoism is true, Kant’s account would entail that no conduct has “true moral worth”. Additionally, if egoism is true, it would appear to impose a strong constraint on how a moral theory can answer the venerable question “Why should I be moral?” since, as Hobbes clearly saw, the answer will have to ground the motivation to be moral in the agent’s self-interest. [16]

While the egoism vs. altruism debate has historically been of great philosophical interest, the issue centrally concerns psychological questions about the nature of human motivation, so it is no surprise that psychologists have done a great deal of empirical research aimed at determining which view is correct. Some of the most influential and philosophically sophisticated empirical work on this issue has been done by Daniel Batson and his associates. The conclusion Batson draws from this work is that people do sometimes behave altruistically, and that the emotion of empathy plays an important role in generating altruistic motivation. [17] Others are not convinced. For a discussion of Batson’s experiments, the conclusion he draws from them, and some reasons for skepticism about that conclusion, see sections 5 and 6 of the entry “Empirical Approaches to Altruism” in this encyclopedia. In this section, we’ll focus on some of the philosophical spadework that is necessary before plunging into the empirical literature.

A crucial question that needs to be addressed is: What, exactly, is the debate about; what is altruism? Unfortunately, there is no uncontroversial answer to this question, since researchers in many disciplines, including philosophy, biology, psychology, sociology, economics, anthropology and primatology, have written about altruism, and authors in different disciplines tend to use the term “altruism” in quite different ways. Even among philosophers the term has been used with importantly different meanings. There is, however, one account of altruism—actually a cluster of closely related accounts—that plays a central role both in philosophy and in a great deal of psychology, including Batson’s work. We’ll call it “the standard account”. That will be our focus in the remainder of this section. [18]

According to the standard account, an action is altruistic if it is motivated by an ultimate desire for the well-being of another person. This formulation invites questions about (1) what it is for a behavior to be motivated by an ultimate desire, and (2) the distinction between desires that are self-interested and desires that are for the well-being of others.

Although the second question will need careful consideration in any comprehensive treatment, a few rough and ready examples of the distinction will suffice here. [19] Desires to save someone else’s life, to alleviate someone else’s suffering, or to make someone else happy are paradigm cases of desires for the well-being of others, while desires to experience pleasure, get rich, and become famous are typical examples of self-interested desires. The self-interested desires to experience pleasure and to avoid pain have played an especially prominent role in the debate, since one version of egoism, often called hedonism, maintains that these are our only ultimate desires.

The first question, regarding ultimate desires, requires a fuller exposition; it can be usefully explicated with the help of a familiar account of practical reasoning. [20] On this account, practical reasoning is a causal process via which a desire and a belief give rise to or sustain another desire. For example, a desire to drink an espresso and a belief that the best place to get an espresso is at the espresso bar on Main Street may cause a desire to go to the espresso bar on Main Street. This desire can then join forces with another belief to generate a third desire, and so on. Sometimes this process will lead to a desire to perform a relatively simple or “basic” action, and that desire, in turn, will cause the agent to perform the basic action without the intervention of any further desires. Desires produced or sustained by this process of practical reasoning are instrumental desires—the agent has them because she thinks that satisfying them will lead to something else that she desires. But not all desires can be instrumental desires. If we are to avoid circularity or an infinite regress, there must be some desires that are not produced because the agent thinks that satisfying them will facilitate satisfying some other desire. These desires that are not produced or sustained by practical reasoning are the agent’s ultimate desires, and the objects of ultimate desires, the states of affairs desired, are desired for their own sake. A behavior is motivated by a specific ultimate desire when that desire is part of the practical reasoning process that leads to the behavior.

If people do sometimes have ultimate desires for the well-being of others, and these desires motivate behavior, then altruism is the correct view, and egoism is false. However, if all ultimate desires are self-interested, then egoism is the correct view, and altruism is false. The effort to establish one or the other of these options has given rise to a vast and enormously sophisticated empirical literature. For an overview of that literature, see the empirical approaches to altruism entry.

Given that moral disagreement—about abortion, say, or capital punishment—so often seems intractable, is there any reason to think that moral problems admit objective resolutions? While this difficulty is of ancient coinage, contemporary philosophical discussion was spurred by Mackie’s (1977: 36–8) “argument from relativity” or, as it is called by later writers, the “argument from disagreement” (Brink 1989: 197; Loeb 1998). Such “radical” differences in moral judgment as are frequently observed, Mackie (1977: 36) argued, “make it difficult to treat those judgments as apprehensions of objective truths”.

Mackie supposed that his argument undermines moral realism , the view that, as Smith (1994: 9, cf. 13) puts it,

moral questions have correct answers, that the correct answers are made correct by objective moral facts … and … by engaging in moral argument, we can discover what these objective moral facts are. [21]

This notion of objectivity, as Smith recognizes, requires convergence in moral views—the right sort of argument, reflection and discussion is expected to result in very substantial moral agreement (Smith 1994: 6). [22]

While moral realists have often taken quite optimistic positions on the extent of actual moral agreement (e.g., Sturgeon 1988: 229; Smith 1994: 188), there is no denying that there is an abundance of persistent moral disagreement; on many moral issues there is a striking failure of convergence even after protracted argument. Anti-realists like Mackie have a ready explanation for this phenomenon: moral judgment is not objective in Smith’s sense, and moral argument cannot be expected to accomplish what Smith and other realists think it can. [23] Conversely, the realist’s task is to explain away failures of convergence; she must provide an explanation of the phenomena consistent with moral judgment being objective and moral argument being rationally resolvable. Doris and Plakias (2008) call these “defusing explanations”. The realist’s strategy is to insist that the preponderance of actual moral disagreement is due to limitations of disputants or their circumstances, and that (very substantial, if not unanimous) [24] moral agreement would emerge in ideal conditions, when, for example, disputants are fully rational and fully informed of the relevant non-moral facts.

It is immediately evident that the relative merits of these competing explanations cannot be fairly determined without close discussion of the factors implicated in actual moral disagreements. Indeed, as acute commentators with both realist (Sturgeon 1988: 230) and anti-realist (Loeb 1998: 284) sympathies have noted, the argument from disagreement cannot be evaluated by a priori philosophical means alone; what’s needed, as Loeb observes, is “a great deal of further empirical research into the circumstances and beliefs of various cultures”. This research is required not only to accurately assess the extent of actual disagreement, but also to determine why disagreement persists or dissolves. Only then can realists’ attempts to “explain away” moral disagreement be fairly assessed.

Richard Brandt, who was a pioneer in the effort to integrate ethical theory and the social sciences, looked primarily to anthropology to help determine whether moral attitudes can be expected to converge under idealized circumstances. It is of course well known that anthropology includes a substantial body of work, such as the classic studies of Westermarck (1906) and Sumner (1908 [1934]), detailing the radically divergent moral outlooks found in cultures around the world. But as Brandt (1959: 283–4) recognized, typical ethnographies do not support confident inferences about the convergence of attitudes under ideal conditions, in large measure because they often give limited guidance regarding how much of the moral disagreement can be traced to disagreement about factual matters that are not moral in nature, such as those having to do with religious or cosmological views.

With this sort of difficulty in mind, Brandt (1954) undertook his own anthropological study of Hopi people in the American southwest, and found issues for which there appeared to be serious moral disagreement between typical Hopi and white American attitudes that could not plausibly be attributed to differences in belief about nonmoral facts. [25] A notable example is the Hopi attitude toward animal suffering, an attitude that might be expected to disturb many non-Hopis:

[Hopi children] sometimes catch birds and make “pets” of them. They may be tied to a string, to be taken out and “played” with. This play is rough, and birds seldom survive long. [According to one informant:] “Sometimes they get tired and die. Nobody objects to this”. (Brandt 1954: 213)

Brandt (1959: 103) made a concerted effort to determine whether this difference in moral outlook could be traced to disagreement about nonmoral facts, but he could find no plausible explanation of this kind; his Hopi informants didn’t believe that animals lack the capacity to feel pain, for example, nor did they have cosmological beliefs that would explain away the apparent cruelty of the practice, such as beliefs to the effect that animals are rewarded for martyrdom in the afterlife. The best explanation of the divergent moral judgments, Brandt (1954: 245, 284) concluded, is a “basic difference of attitude”, since “groups do sometimes make divergent appraisals when they have identical beliefs about the objects”.

Moody-Adams argues that little of philosophical import can be concluded from Brandt’s—and indeed from much—ethnographic work. Deploying Gestalt psychology’s doctrine of “situational meaning” (e.g., Dunker 1939), Moody-Adams (1997: 34–43) contends that all institutions, utterances, and behaviors have meanings that are peculiar to their cultural milieu, so that we cannot be certain that participants in cross-cultural disagreements are talking about the same thing. [26] The problem of situational meaning, she thinks, threatens “insuperable” methodological difficulty for those asserting the existence of intractable intercultural disagreement (1997: 36). Advocates of ethnographic projects will likely respond—not unreasonably, we think—that judicious observation and interview, such as that to which Brandt aspired, can motivate confident assessments of evaluative diversity. Suppose, however, that Moody-Adams is right, and the methodological difficulties are insurmountable. Note that the difficulty would then be equitably distributed: if observation and interview are really as problematic as Moody-Adams suggests, neither the realists’ nor the anti-realists’ take on disagreement can be supported by appeal to empirical evidence. We do not think that such a stalemate obtains, because we think the implicated methodological pessimism excessive. Serious empirical work can, we think, tell us a lot about cultures and the differences between them. The appropriate way of proceeding is with close attention to particular studies, and what they show and fail to show. [27]

As Brandt (1959: 101–2) acknowledged, the anthropological literature of his day did not always provide as much information on the exact contours and origins of moral attitudes and beliefs as philosophers wondering about the prospects for convergence might like. However, social psychology and cognitive science have recently produced research which promises to further discussion; during the last 35 years, there has been an explosion of “cultural psychology” investigating the cognitive and emotional processes of different cultures (Shweder & Bourne 1982; Markus & Kitayama 1991; Ellsworth 1994; Nisbett & Cohen 1996; Nisbett 1998, 2003; Kitayama & Markus 1999; Heine 2008; Kitayama & Cohen 2010; Henrich 2015). Here we will focus on some cultural differences found close to (our) home, differences discovered by Nisbett and his colleagues while investigating regional patterns of violence in the American North and South. We argue that these findings support Brandt’s pessimistic conclusions regarding the likelihood of convergence in moral judgment.

The Nisbett group’s research can be seen as applying the tools of cognitive social psychology to the “culture of honor”, a phenomenon that anthropologists have documented in a variety of groups around the world. Although these groups differ in many respects, they manifest important commonalities:

A key aspect of the culture of honor is the importance placed on the insult and the necessity to respond to it. An insult implies that the target is weak enough to be bullied. Since a reputation for strength is of the essence in the culture of honor, the individual who insults someone must be forced to retract; if the instigator refuses, he must be punished—with violence or even death. (Nisbett & Cohen 1996: 5)

According to Nisbett and Cohen (1996: 5–9), an important factor in the genesis of southern honor culture was the presence of a herding economy. Honor cultures are particularly likely to develop where resources are liable to theft, and where the state’s coercive apparatus cannot be relied upon to prevent or punish thievery. These conditions often occur in relatively remote areas where herding is a main form of subsistence; the “portability” of herd animals makes them prone to theft. In areas where farming rather than herding dominates, cooperation among neighbors is more important, stronger government infrastructures are more common, and resources—like decidedly unportable farmland—are harder to steal. In such agrarian social economies, cultures of honor tend not to develop. The American South was originally settled primarily by peoples from remote areas of Britain. Since their homelands were generally unsuitable for farming, these peoples have historically been herders; when they emigrated from Britain to the American South, they initially sought out remote regions suitable for herding, and in such regions, the culture of honor flourished.

In the contemporary South, police and other government services are widely available and herding has all but disappeared as a way of life, but certain sorts of violence continue to be more common than they are in the North. Nisbett and Cohen (1996) maintain that patterns of violence in the South, as well as attitudes toward violence, insults, and affronts to honor, are best explained by the hypothesis that a culture of honor persists among contemporary white non-Hispanic southerners. In support of this hypothesis, they offer a compelling array of evidence, including:

  • demographic data indicating that (1) among southern whites, homicide rates are higher in regions more suited to herding than agriculture, and (2) white males in the South are much more likely than white males in other regions to be involved in homicides resulting from arguments, although they are not more likely to be involved in homicides that occur in the course of a robbery or other felony (Nisbett & Cohen 1996: Ch. 2)
  • survey data indicating that white southerners are more likely than northerners to believe that violence would be “extremely justified” in response to a variety of affronts, and that if a man failed to respond violently to such affronts, he was “not much of a man” (Nisbett & Cohen 1996: Ch. 3)
  • legal scholarship indicating that southern states “give citizens more freedom to use violence in defending themselves, their homes, and their property” than do northern states (Nisbett & Cohen 1996: Ch. 5, p. 63)

Two experimental studies—one in the field, the other in the laboratory—are especially striking.

In the field study (Nisbett & Cohen 1996: 73–5), letters of inquiry were sent to hundreds of employers around the United States. The letters purported to be from a hardworking 27-year-old Michigan man who had a single blemish on his otherwise solid record. In one version, the “applicant” revealed that he had been convicted for manslaughter. The applicant explained that he had been in a fight with a man who confronted him in a bar and told onlookers that “he and my fiancée were sleeping together. He laughed at me to my face and asked me to step outside if I was man enough”. According to the letter, the applicant’s nemesis was killed in the ensuing fray. In the other version of the letter, the applicant revealed that he had been convicted of motor vehicle theft, perpetrated at a time when he needed money for his family. Nisbett and his colleagues assessed 112 letters of response, and found that southern employers were significantly more likely to be cooperative and sympathetic in response to the manslaughter letter than were northern employers, while no regional differences were found in responses to the theft letter. One southern employer responded to the manslaughter letter as follows:

As for your problems of the past, anyone could probably be in the situation you were in. It was just an unfortunate incident that shouldn’t be held against you. Your honesty shows that you are sincere…. I wish you the best of luck for your future. You have a positive attitude and a willingness to work. These are qualities that businesses look for in employees. Once you are settled, if you are near here, please stop in and see us. (Nisbett & Cohen 1996: 75)

No letters from northern employers were comparably sympathetic.

In the laboratory study (Nisbett & Cohen 1996: 45–8), subjects—white males from both northern and southern states attending the University of Michigan—were told that saliva samples would be collected to measure blood sugar as they performed various tasks. After an initial sample was collected, the unsuspecting subject walked down a narrow corridor where an experimental confederate was pretending to work on some filing. The confederate bumped the subject and, feigning annoyance, called him an “asshole”. A few minutes after the incident, saliva samples were collected and analyzed to determine the levels of cortisol (a hormone associated with high levels of stress, anxiety and arousal) and testosterone (a hormone associated with aggression and dominance behavior). As Figure 1 indicates, southern subjects showed dramatic increases in cortisol and testosterone levels, while northerners exhibited much smaller changes.

The two studies just described suggest that southerners respond more strongly to insult than northerners, and take a more sympathetic view of others who do so, manifesting just the sort of attitudes that are supposed to typify honor cultures. We think that the data assembled by Nisbett and his colleagues make a persuasive case that a culture of honor persists in the American South. Apparently, this culture affects people’s judgments, attitudes, emotions, behavior, and even their physiological responses. Additionally, there is evidence that child rearing practices play a significant role in passing the culture of honor on from one generation to the next, and also that relatively permissive laws regarding gun ownership, self-defense, and corporal punishment in the schools both reflect and reinforce southern honor culture (Nisbett & Cohen 1996: 60–63, 67–9). In short, it seems to us that the culture of honor is deeply entrenched in contemporary southern culture, despite the fact that many of the material and economic conditions giving rise to it no longer widely obtain. [28]

We believe that the North/South cultural differences adduced by Nisbett and colleagues support Brandt’s conclusion that moral attitudes will often fail to converge, even under ideal conditions. The data should be especially troubling for the realist, for despite the differences that we have been recounting, contemporary northern and southern Americans might be expected to have rather more in common—from circumstance to language to belief to ideology—than do, say, Yanomamö and Parisians. So if there is little ground for expecting convergence in the case at hand, there is probably little ground in a good many others.

Fraser and Hauser (2010) are not convinced by our interpretation of Nisbett and Cohen’s data. They maintain that while those data do indicate that northerners and southerners differ in the strength of their disapproval of insult-provoked violence, they do not show that northerners and southerners have a real moral disagreement. They go on to argue that the work of Abarbanell and Hauser (2010) provides a much more persuasive example of a systematic moral disagreement between people in different cultural groups. Abarbanell and Hauser focused on the moral judgments of rural Mayan people in the Mexican state of Chiapas. They found that people in that community do not judge actions causing harms to be worse than omissions (failures to act) which cause identical harms, while nearby urban Mayan people and Western internet users judge actions to be substantially worse than omissions.

Though we are not convinced by Fraser and Hauser’s interpretation of the Nisbett and Cohen data, we agree that the Abarbanell and Hauser study provides a compelling example of a systematic cultural difference in moral judgment. Barrett et al. (2016) provides another example. That study looked at the extent to which an agent’s intention affected the moral judgments of people in eight traditional small-scale societies and two Western societies, one urban, one rural. They found that in some of these societies, notably including both Western groups, the agent’s intention had a major effect, while in other societies agent intention had little or no effect.

As we said at the outset, realists defending conjectures about convergence may attempt to explain away evaluative diversity by arguing that the diversity is to be attributed to shortcomings of discussants or their circumstances. If this strategy can be made good, moral realism may survive an empirically informed argument from disagreement: so much the worse for the instance of moral reflection and discussion in question, not so much the worse for the objectivity of morality. While we cannot here canvass all the varieties of this suggestion, we will briefly remark on some of the more common forms. For concreteness, we will focus on Nisbett and Cohen’s study.

Impartiality. One strategy favored by moral realists concerned to explain away moral disagreement is to say that such disagreement stems from the distorting effects of individual interest (see Sturgeon 1988: 229–230; Enoch 2009: 24–29); perhaps persistent disagreement doesn’t so much betray deep features of moral argument and judgment as it does the doggedness with which individuals pursue their perceived advantage. For instance, seemingly moral disputes over the distribution of wealth may be due to perceptions—perhaps mostly inchoate—of individual and class interests rather than to principled disagreement about justice; persisting moral disagreement in such circumstances fails the impartiality condition, and is therefore untroubling to the moral realist. But it is rather implausible to suggest that North/South disagreements as to when violence is justified will fail the impartiality condition. There is no reason to think that southerners would be unwilling to universalize their judgments across relevantly similar individuals in relevantly similar circumstances, as indeed Nisbett and Cohen’s “letter study” suggests. One can advocate a violent honor code without going in for special pleading. [ 29 ] We do not intend to denigrate southern values; our point is that while there may be good reasons for criticizing the honor-bound southerner, it is not obvious that the reason can be failure of impartiality, if impartiality is (roughly) to be understood along the lines of a willingness to universalize one’s moral judgments.

Full and vivid awareness of relevant nonmoral facts. Moral realists have argued that moral disagreements very often derive from disagreement about nonmoral issues. According to Boyd (1988: 213; cf. Brink 1989: 202–3; Sturgeon 1988: 229),

careful philosophical examination will reveal … that agreement on nonmoral issues would eliminate almost all disagreement about the sorts of moral issues which arise in ordinary moral practice.

Is this a plausible conjecture for the data we have just considered? We find it hard to imagine what agreement on nonmoral facts could do the trick, for we can readily imagine that northerners and southerners might be in full agreement on the relevant nonmoral facts in the cases described. Members of both groups would presumably agree that the job applicant was cuckolded, for example, or that calling someone an “asshole” is an insult. We think it much more plausible to suppose that the disagreement resides in differing and deeply entrenched evaluative attitudes regarding appropriate responses to cuckolding, challenge, and insult.

Savvy philosophical readers will be quick to observe that terms like “challenge” and “insult” look like “thick” ethical terms, where the evaluative and descriptive are commingled (see Williams 1985: 128–30); therefore, it is very difficult to say what the extent of the factual disagreement is. But this is of little help for the expedient under consideration, since the disagreement-in-nonmoral-fact response apparently requires that one can disentangle factual and moral disagreement.

It is of course possible that full and vivid awareness of the nonmoral facts might motivate the sort of change in southern attitudes envisaged by the (at least the northern) moral realist. Were southerners to become vividly aware that their culture of honor was implicated in violence, they might be moved to change their moral outlook. (We take this way of putting the example to be the most natural one, but nothing philosophical turns on it. If you like, substitute the possibility of northerners endorsing honor values after exposure to the facts.) On the other hand, southerners might insist that the values of honor should be nurtured even at the cost of promoting violence; the motto “death before dishonor”, after all, has a long and honorable history. The burden of argument, we think, lies with the realist who asserts—culture and history notwithstanding—that southerners would change their mind if vividly aware of the pertinent facts.

Freedom from “Abnormality”. Realists may contend that much moral disagreement may result from failures of rationality on the part of discussants (Brink 1989: 199–200). Obviously, disagreement stemming from cognitive impairments is no embarrassment for moral realism; at the limit, that a disagreement persists when some or all disputing parties are quite insane shows nothing deep about morality. But it doesn’t seem plausible that southerners’ more lenient attitudes towards certain forms of violence are readily attributed to widespread cognitive disability. Of course, this is an empirical issue, but we don’t know of any evidence suggesting that southerners suffer some cognitive impairment that prevents them from understanding demographic and attitudinal factors in the genesis of violence, or any other matter of fact. What is needed to press home a charge of irrationality is evidence of cognitive impairment independent of the attitudinal differences, and further evidence that this impairment is implicated in adherence to the disputed values. In this instance, as in many others, we have difficulty seeing how charges of abnormality or irrationality can be made without one side begging the question against the other.

Nisbett and colleagues’ work may represent a potent counterexample to any theory maintaining that rational argument tends to convergence on important moral issues; the evidence suggests that the North/South differences in attitudes towards violence and honor might well persist even under the sort of ideal conditions under consideration. Admittedly, such conclusions must be tentative. On the philosophical side, not every plausible strategy for “explaining away” moral disagreement and grounding expectations of convergence has been considered. [ 30 ] On the empirical side, this entry has reported on but a few studies, and those considered, like any empirical work, might be criticized on either conceptual or methodological grounds. [ 31 ] Finally, it should be clear what this entry is not claiming: any conclusions here—even if fairly earned—are not a “refutation” of all versions of moral realism, since there are versions of moral realism that do not require convergence (Bloomfield 2001; Shafer-Landau 2003). Rather, this discussion should give an idea of the empirical work philosophers must encounter, if they are to make defensible conjectures regarding moral disagreement.

Progress in ethical theorizing often requires progress on difficult psychological questions about how human beings can be expected to function in moral contexts. It is no surprise, then, that moral psychology is a central area of inquiry in philosophical ethics. It should also come as no surprise that empirical research, such as that conducted in psychology departments, may substantially abet such inquiry. Nor, then, should it surprise that research in moral psychology has become methodologically pluralistic, exploiting the resources of, and endeavoring to contribute to, various disciplines. Here, we have illustrated how such interdisciplinary inquiry may proceed with regard to central problems in philosophical ethics.

  • Abarbanell, Linda and Marc D. Hauser, 2010, “Mayan Morality: An Exploration of Permissible Harms”, Cognition , 115(2): 207–224. doi:10.1016/j.cognition.2009.12.007
  • Adams, Robert Merrihew, 2006, A Theory of Virtue: Excellence in Being for the Good , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199207510.001.0001
  • Alfano, Mark, 2013, Character as Moral Fiction , Cambridge: Cambridge University Press. doi:10.1017/CBO9781139208536
  • –––, 2016, Moral Psychology: An Introduction , Cambridge: Polity Press.
  • Andow, James and Florian Cova, 2016, “Why Compatibilist Intuitions Are Not Mistaken: A Reply to Feltz and Millan”, Philosophical Psychology , 29(4): 550–566. doi:10.1080/09515089.2015.1082542
  • Annas, Julia, 2005, “Comments on John Doris’ Lack of Character ”, Philosophy and Phenomenological Research , 71(3): 636–642. doi:10.1111/j.1933-1592.2005.tb00476.x
  • –––, 2011, Intelligent Virtue , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199228782.001.0001
  • Anscombe, G.E.M., 1958, “Modern Moral Philosophy”, Philosophy , 33(124): 1–19. doi:10.1017/S0031819100037943
  • Appiah, Kwame Anthony, 2008, Experiments in Ethics , Cambridge, MA: Harvard University Press.
  • Aristotle, Nicomachean Ethics , in The Complete Works of Aristotle , edited by J. Barnes, Princeton: Princeton University Press, 1984.
  • Arpaly, Nomy, 2005, “Comments on Lack of Character by John Doris”, Philosophy and Phenomenological Research , 71(3): 643–647. doi:10.1111/j.1933-1592.2005.tb00477.x
  • Athanassoulis, Nafsika, 1999, “A Response to Harman: Virtue Ethics and Character Traits”, Proceedings of the Aristotelian Society , 100(1): 215–222. doi:10.1111/j.0066-7372.2003.00012.x
  • Badhwar, Neera K., 2009, “The Milgram Experiments, Learned Helplessness, and Character Traits”, The Journal of Ethics , 13(2–3): 257–289. doi:10.1007/s10892-009-9052-4
  • Baron, Jonathan, 1994, “Nonconsequentialist Decisions”, Behavioral and Brain Sciences , 17(1): 1–42. doi:10.1017/S0140525X0003301X
  • –––, 2001, Thinking and Deciding , 3 rd edition, Cambridge: Cambridge University Press.
  • Barrett, H.C., A. Bolyanatz, A. Crittenden, D.M.T. Fessler, S. Fitzpatrick, M. Gurven, J. Henrich, M. Kanovsky, G. Kushnick, A. Pisor, B. Scelza, S. Stich, C. von Rueden, W. Zhao and S. Laurence, 2016, “Small-Scale Societies Exhibit Fundamental Variation in the Role of Intentions in Moral Judgment”, Proceedings of the National Academy of Sciences , 113(17): 4688–4693. doi:10.1073/pnas.1522070113
  • Batson, C. Daniel, 1991, The Altruism Question: Toward a Social-Psychological Answer , Hillsdale, NJ: Lawrence Erlbaum Associates.
  • –––, 2011, Altruism in Humans , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780195341065.001.0001
  • Bear, Adam and Joshua Knobe, 2016, “What Do People Find Incompatible With Causal Determinism?” Cognitive Science , 40(8): 2025–2049. doi:10.1111/cogs.12314
  • Besser-Jones, Lorraine, 2008, “Social Psychology, Moral Character, and Moral Fallibility”, Philosophy and Phenomenological Research , 76(2): 310–332. doi:10.1111/j.1933-1592.2007.00134.x
  • Björnsson, Gunnar, 2014, “Incompatibilism and ‘Bypassed’ Agency”, in Alfred R. Mele (ed.), Surrounding Free Will , Oxford: Oxford University Press, pp. 95–112. doi:10.1093/acprof:oso/9780199333950.003.0006
  • Björnsson, Gunnar and Derk Pereboom, 2016, “Traditional and Experimental Approaches to Free Will and Moral Responsibility”, in Sytsma and Buckwalter 2016: 142–157. doi:10.1002/9781118661666.ch9
  • Bloomfield, Paul, 2000, “Virtue Epistemology and the Epistemology of Virtue”, Philosophy and Phenomenological Research , 60(1): 23–43. doi:10.2307/2653426
  • –––, 2001, Moral Reality , New York: Oxford University Press. doi:10.1093/0195137132.001.0001
  • –––, 2014, “Some Intellectual Aspects of the Cardinal Virtues”, in Oxford Studies in Normative Ethics , volume 3, Mark Timmons (ed.), pp. 287–313. doi:10.1093/acprof:oso/9780199685905.003.0013
  • Boorse, Christopher, 1975, “On the Distinction between Disease and Illness”, Philosophy and Public Affairs , 5(1): 49–68.
  • Boyd, Richard, 1988, “How to Be a Moral Realist”, in Sayre-McCord 1988b: 181–228.
  • Brandt, Richard B., 1954, Hopi Ethics: A Theoretical Analysis , Chicago: The University of Chicago Press.
  • –––, 1959, Ethical Theory: The Problems of Normative and Critical Ethics , Englewood Cliff, NJ: Prentice-Hall.
  • Bratman, Michael E., 1996, “Identification, Decision, and Treating as a Reason”, Philosophical Topics , 24(2): 1–18. doi:10.5840/philtopics19962429
  • Brink, David Owen, 1989, Moral Realism and the Foundations of Ethics , Cambridge: Cambridge University Press. doi:10.1017/CBO9780511624612
  • Broad, C.D., 1930, Five Types of Ethical Theory , New York: Harcourt, Brace.
  • –––, 1950, “Egoism as a Theory of Human Motives”, The Hibbert Journal , 48: 105–114. Reprinted in his Ethics and the History of Philosophy: Selected Essays , London: Routledge and Kegan Paul, 1952, 218–231.
  • Cameron, C. Daryl, B. Keith Payne, and John M. Doris, 2013, “Morality in High Definition: Emotion Differentiation Calibrates the Influence of Incidental Disgust on Moral Judgments”, Journal of Experimental Social Psychology , 49(4): 719–725. doi:10.1016/j.jesp.2013.02.014
  • Campbell, C.A., 1951, “Is ‘Freewill’ a Pseudo-problem?” Mind , 60(240): 441–465. doi:10.1093/mind/LX.240.441
  • Cappelen, Herman, 2012, Philosophy Without Intuitions , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199644865.001.0001
  • Cervone, Daniel and Yuichi Shoda (eds.), 1999, The Coherence of Personality: Social-Cognitive Bases of Consistency, Variability, and Organization , New York and London: Guilford Press.
  • Chiesi, Harry L., George J. Spilich, and James F. Voss, 1979, “Acquisition of domain-related information in relation to high and low domain knowledge”, Journal of verbal learning and verbal behavior , 18(3): 257–273. doi:10.1016/S0022-5371(79)90146-4
  • Cialdini, Robert B., Stephanie L. Brown, Brian P. Lewis, Carol Luce and Stephen L. Neuberg, 1997, “Reinterpreting the Empathy-Altruism Relationship: When One into One Equals Oneness”, Journal of Personality and Social Psychology , 73(3): 481–494. doi:10.1037/0022-3514.73.3.481
  • Cova, Florian and Yasuko Kitano, 2013, “Experimental Philosophy and the Compatibility of Free Will and Determinism: A Survey”, Annals of the Japan Association for Philosophy of Science , 22: 17–37. doi:10.4288/jafpos.22.0_17
  • Cova, Florian, Maxime Bertoux, Sacha Bourgeois-Gironde, and Bruno Dubois, 2012, “Judgments about Moral Responsibility and Determinism in Patients with Behavioural Variant of Frontotemporal Dementia: Still Compatibilists”, Consciousness and Cognition , 21(2): 851–864. doi:10.1016/j.concog.2012.02.004
  • Cuneo, Terence, 2014, Speech and Morality: On the Metaethical Implications of Speaking , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780198712725.001.0001
  • Darley, John M. and C. Daniel Batson, 1973, “‘From Jerusalem to Jericho’: A Study of Situational and Dispositional Variables In Helping Behavior”, Journal of Personality and Social Psychology , 27(1): 100–108. doi:10.1037/h0034449
  • Decety, Jean and Thalia Wheatley (eds.), 2015, The Moral Brain: A Multidisciplinary Perspective , Cambridge, MA: MIT Press.
  • Deery, Oisin and Eddy Nahmias, 2017, “Defeating Manipulation Arguments: Interventionist Causation and Compatibilist Sourcehood”, Philosophical Studies , 174(5): 1255–1276. doi:10.1007/s11098-016-0754-8.
  • Dennett, Daniel C., 1984, Elbow Room: The Varieties of Free Will Worth Wanting , Cambridge, MA: MIT Press.
  • DePaul, Michael, 1999, “Character Traits, Virtues, and Vices: Are There None?” in Proceedings of the 20th World Congress of Philosophy, v. 1 , Bowling Green, OH: Philosophy Documentation Center, pp. 141–157.
  • Deutsch, Max, 2015, The Myth of the Intuitive: Experimental Philosophy and Philosophical Method , Cambridge, MA: MIT Press. doi:10.7551/mitpress/9780262028950.001.0001
  • Dixon, Thomas, 2008, The Invention of Altruism: Making Moral Meanings in Victorian Britain , Oxford: Oxford University Press. doi:10.5871/bacad/9780197264263.001.0001
  • Donnellan, M. Brent, Richard E. Lucas, and William Fleeson (eds.), 2009, “Personality and Assessment at Age 40: Reflections on the Past Person-Situation Debate and Emerging Directions of Future Person-Situation Integration and Assessment at Age 40”, Journal of Research in Personality , special issue, 43(2): 117–290.
  • Doris, John M., 1998, “Persons, Situations, and Virtue Ethics”, Noûs , 32(4): 504–530. doi:10.1111/0029-4624.00136
  • –––, 2002, Lack of Character: Personality and Moral Behavior , New York: Cambridge University Press. doi:10.1017/CBO9781139878364
  • –––, 2005, “Précis” and “Replies: Evidence and Sensibility”, Philosophy and Phenomenological Research , 71(3): 632–5, 656–77. doi:10.1111/j.1933-1592.2005.tb00479.x
  • –––, 2006, “Out of Character: On the Psychology of Excuses in the Criminal Law”, in H. LaFollette (ed.), Ethics in Practice , third edition, Oxford: Blackwell Publishing.
  • –––, 2010, “Heated Agreement: Lack of Character as Being for the Good ”, Philosophical Studies , 148(1): 135–46. doi:10.1007/s11098-010-9507-2
  • –––, 2015, Talking to Our Selves: Reflection, Ignorance, and Agency , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199570393.001.0001
  • –––, forthcoming, Character Trouble: Undisciplined Essays on Personality and Agency , Oxford: Oxford University Press.
  • –––, in preparation, “Making Good: In Search of Moral Expertise”.
  • Doris, John M. and Alexandra Plakias, 2008, “How to Argue about Disagreement: Evaluative Diversity and Moral Realism”, in Sinnott-Armstrong 2008b: 303–353.
  • Doris, John M. and Jesse J. Prinz, 2009, “Review of K. Anthony Appiah, Experiments in Ethics ”, Notre Dame Philosophical Reviews , 2009-10-03. URL = < http://ndpr.nd.edu/news/experiments-in-ethics/ >
  • Doris, John M. and The Moral Psychology Research Group (eds.), 2010, The Moral Psychology Handbook , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199582143.001.0001
  • Doris, John M. and Stephen P. Stich, 2005, “As a Matter of Fact: Empirical Perspectives on Ethics”, in Frank Jackson and Michael Smith (eds.), The Oxford Handbook of Contemporary Philosophy , Oxford: Oxford University Press.
  • Duncker, Karl, 1939, “Ethical Relativity? (An Enquiry into the Psychology of Ethics)”, Mind , 48(189): 39–53. doi:10.1093/mind/XLVIII.189.39
  • Ellsworth, Phoebe C., 1994, “Sense, Culture, and Sensibility”, in Shinobu Kitayama and Hazel Rose Markus (eds.), Emotion and Culture: Empirical Studies of Mutual Influence , Washington: American Psychological Association.
  • Enoch, David, 2009, “How is Moral Disagreement a Problem for Realism?” The Journal of Ethics , 13(1): 15–50. doi:10.1007/s10892-008-9041-z.
  • –––, 2011, Taking Morality Seriously: A Defense of Robust Realism , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199579969.001.0001
  • Ericsson, K. Anders, 2014, “Why Expert Performance Is Special and Cannot Be Extrapolated From Studies of Performance in the General Population: A Response to Criticisms”, Intelligence , 45: 81–103. doi:10.1016/j.intell.2013.12.001
  • Ericsson, K. Anders, Ralf Th. Krampe, and Clemens Tesch-Römer, 1993, “The Role of Deliberate Practice in the Acquisition of Expert Performance”, Psychological Review , 100(3): 363–406. doi:10.1037/0033-295X.100.3.363
  • Feinberg, Joel, 1965 [1999], “Psychological Egoism”, in Reason and Responsibility , Joel Feinberg (ed.), Belmont, CA: Dickenson Publishing. Reprinted in various editions including the tenth, co-edited with Russ Shafer-Landau, Belmont, CA: Wadsworth, 1999. Based on materials composed for philosophy students at Brown University, 1958.
  • Feltz, Adam and Florian Cova, 2014, “Moral Responsibility and Free Will: A Meta-Analysis”, Consciousness and Cognition , 30: 234–246. doi:10.1016/j.concog.2014.08.012
  • Feltz, Adam and Melissa Millan, 2013, “An Error Theory for Compatibilist Intuitions”, Philosophical Psychology , 28(4): 529–555. doi:10.1080/09515089.2013.865513
  • Figdor, Carrie and Mark Phelan, 2015, “Is Free Will Necessary for Moral Responsibility? A Case for Rethinking Their Relationship and the Design of Experimental Studies in Moral Psychology”, Mind and Language , 30(5): 603–627. doi:10.1111/mila.12092
  • Fischer, John Martin, 1994, The Metaphysics of Free Will , Oxford: Blackwell.
  • Flanagan, Owen, 1991, Varieties of Moral Personality: Ethics and Psychological Realism , Cambridge, MA: Harvard University Press.
  • –––, 2009, “Moral Science? Still Metaphysical After All These Years”, in Darcia Narvaez and Daniel K. Lapsley (eds.), Personality, Identity, and Character , Cambridge: Cambridge University Press, pp. 52–78.
  • Frankfurt, Harry, 1988, The Importance of What We Care About , Cambridge: Cambridge University Press.
  • Fraser, Ben and Marc Hauser, 2010, “The Argument from Disagreement and the Role of Cross-Cultural Empirical Data”, Mind and Language , 25(5): 541–560. doi:10.1111/j.1468-0017.2010.01400.x
  • Fulford, K.W.M., 1989, Moral Theory and Medical Practice , Cambridge: Cambridge University Press.
  • Gigerenzer, Gerd, 2000, Adaptive Thinking: Rationality in the Real World , New York: Oxford University Press. doi:10.1093/acprof:oso/9780195153729.001.0001
  • Gigerenzer, Gerd, Peter M. Todd, and the ABC Research Group, 1999, Simple Heuristics that Make Us Smart , New York: Oxford University Press.
  • Gilovich, Thomas, Dale W. Griffin, and Daniel Kahneman (eds.), 2002, Heuristics and Biases: The Psychology of Intuitive Judgment , New York: Cambridge University Press.
  • Glickman, Mark E., 1995, “A Comprehensive Guide to Chess Ratings”, American Chess Journal , 3: 59–102.
  • Goldman, Alvin I., 1970, A Theory of Human Action , Englewood-Cliffs, NJ: Prentice-Hall.
  • Haidt, Jonathan, Silvia Helena Koller, and Maria G. Dias, 1993, “Affect, Culture, and Morality, Or Is It Wrong to Eat Your Dog?” Journal of Personality and Social Psychology , 65(4): 613–28. doi:10.1037/0022-3514.65.4.613
  • Haji, Ishtiyaque, 2002, “Compatiblist Views of Freedom and Responsibility”, in Robert Kane (ed.), The Oxford Handbook of Free Will , New York: Oxford University Press.
  • Hambrick, David Z., Frederick L. Oswald, Erik M. Altmann, Elizabeth J. Meinz, Fernand Gobet, and Guillermo Campitelli, 2014, “Deliberate Practice: Is That All It Takes to Become an Expert?” Intelligence , 45: 34–45. doi:10.1016/j.intell.2013.04.001
  • Harman, Gilbert, 1999, “Moral Philosophy Meets Social Psychology: Virtue Ethics and the Fundamental Attribution Error”, Proceedings of the Aristotelian Society , 99: 315–331.
  • –––, 2000, “The Nonexistence of Character Traits”, Proceedings of the Aristotelian Society , 100: 223–226. doi:10.1111/j.0066-7372.2003.00013.x
  • –––, 2009, “Skepticism about Character Traits”, The Journal of Ethics , 13(2–3): 235–242. doi:10.1007/s10892-009-9050-6
  • Heine, Steven J., 2008, Cultural Psychology , New York: W.W. Norton.
  • Helzer, Erik G. and David A. Pizarro, 2011, “Dirty Liberals! Reminders of Physical Cleanliness Influence Moral and Political Attitudes”, Psychological Science , 22(4): 517–522. doi:10.1177/0956797611402514
  • Henrich, Joseph, 2015, The Secret of Our Success: How Culture Is Driving Human Evolution, Domesticating Our Species, and Making Us Smarter , Princeton, NJ: Princeton University Press.
  • Hobbes, Thomas, 1651 [1981], Leviathan: Edited with an Introduction by C.B. Macpherson , London: Penguin Books.
  • Horowitz, Tamara, 1998, “Philosophical Intuitions and Psychological Theory”, in Michael R. DePaul and William Ramsey (eds.), Rethinking Intuition: The Psychology of Intuition and its Role in Philosophical Inquiry , Lanham, Maryland: Rowman and Littlefield.
  • Hursthouse, Rosalind, 1999, On Virtue Ethics , Oxford and New York: Oxford University Press. doi:10.1093/0199247994.001.0001
  • Isen, Alice M. and Paula F. Levin, 1972, “Effect of Feeling Good on Helping: Cookies and Kindness”, Journal of Personality and Social Psychology , 21(3): 384–388. doi:10.1037/h0032317
  • Jackson, Frank, 1998, From Metaphysics to Ethics: A Defense of Conceptual Analysis , New York: Oxford University Press. doi:10.1093/0198250614.001.0001
  • Jackson, Frank and Philip Pettit, 1995, “Moral Functionalism and Moral Motivation”, Philosophical Quarterly , 45(178): 20–40. doi:10.2307/2219846
  • Jacobson, Daniel, 2005, “Seeing By Feeling: Virtues, Skills, and Moral Perception”, Ethical Theory and Moral Practice , 8(4): 387–409. doi:10.1007/s10677-005-8837-1
  • Joyce, Richard, 2006, The Evolution of Morality , Cambridge, MA: MIT Press.
  • Kahneman, Daniel, 2011, Thinking, Fast and Slow , New York: Farrar, Straus and Giroux.
  • Kahneman, Daniel, Paul Slovic, and Amos Tversky, 1982, Judgment Under Uncertainty: Heuristics and Biases , Cambridge: Cambridge University Press. doi:10.1017/CBO9780511809477
  • Kamtekar, Rachana, 2004, “Situationism and Virtue Ethics on the Content of Our Character”, Ethics , 114(3): 458–91. doi:10.1086/381696
  • Kane, Robert, 1996, The Significance of Free Will , Oxford: Oxford University Press. doi:10.1093/0195126564.001.0001
  • –––, 1999, “Responsibility, Luck, and Chance: Reflections on Free Will and Indeterminism”, Journal of Philosophy , 96(5): 217–240. doi:10.5840/jphil199996537
  • –––, 2002, “Introduction: The Contours of Contemporary Free Will Debates”, in Robert Kane (ed.), The Oxford Handbook of Free Will , New York: Oxford University Press.
  • Kant, Immanuel, 1785 [1949], Fundamental Principles of the Metaphysics of Morals , Translated by Thomas K. Abbott. Englewood Cliffs, NJ: Prentice Hall / Library of Liberal Arts.
  • Kitayama, Shinobu and Hazel Rose Markus, 1999, “Yin and Yang of the Japanese Self: The Cultural Psychology of Personality Coherence”, in Cervone and Shoda 1999: ch. 8.
  • Kitayama, Shinobu and Dov Cohen, 2010, Handbook of Cultural Psychology , New York: Guilford Press.
  • Kitcher, Philip, 2010, “Varieties of Altruism”, Economics and Philosophy , 26(2): 121–148. doi:10.1017/S0266267110000167
  • –––, 2011, The Ethical Project , Cambridge, MA: Harvard University Press.
  • Knobe, Joshua, 2003a, “Intentional Action and Side Effects in Ordinary Language”, Analysis , 63(279): 190–193. doi:10.1111/1467-8284.00419
  • –––, 2003b, “Intentional Action in Folk Psychology: An Experimental Investigation”, Philosophical Psychology , 16(2): 309–324. doi:10.1080/09515080307771
  • –––, 2006, “The Concept of Intentional Action: A Case Study in the Uses of Folk Psychology”, Philosophical Studies , 130(2): 203–231. doi:10.1007/s11098-004-4510-0
  • –––, 2010, “Person as Scientist, Person as Moralist”, Behavioral and Brain Sciences , 33(4): 315–329. doi:10.1017/S0140525X10000907
  • –––, 2014, “Free Will and the Scientific Vision”, in Edouard Machery and Elizabeth O’Neill (eds.), Current Controversies in Experimental Philosophy , New York and London: Routledge.
  • Knobe, Joshua and Brian Leiter, 2007, “The Case for Nietzschean Moral Psychology”, in Brian Leiter and Neil Sinhababu (eds.) Nietzsche and Morality , Oxford: Oxford University Press. 83–109.
  • Kruger, Justin and David Dunning, 1999, “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments”, Journal of Personality and Social Psychology , 77(6): 1121–1134. doi:10.1037/0022-3514.77.6.1121
  • Kupperman, Joel J., 2001, “The Indispensability of Character”, Philosophy , 76(02): 239–50. doi:10.1017/S0031819101000250
  • Ladd, John, 1957, The Structure of a Moral Code: A Philosophical Analysis of Ethical Discourse Applied to the Ethics of the Navaho Indians , Cambridge, MA: Harvard University Press.
  • LaFollette, Hugh (ed.), 2000, The Blackwell Guide to Ethical Theory , Oxford: Blackwell Publishing.
  • Leikas, Sointu, Jan-Erik Lönnqvist, and Markku Verkasalo, 2012, “Persons, Situations, and Behaviors: Consistency and Variability of Different Behaviors in Four Interpersonal Situations”, Journal of Personality and Social Psychology , 103(6): 1007–1022. doi:10.1037/a0030385
  • Lerner, Jennifer S., Julie H. Goldberg, and Philip E. Tetlock, 1998, “Sober Second Thought: The Effects of Accountability, Anger, and Authoritarianism on Attributions of Responsibility”, Personality and Social Psychology Bulletin , 24(6): 563–574. doi:10.1177/0146167298246001
  • Lewis, David, 1989, “Dispositional Theories of Value”, Proceedings of the Aristotelian Society , 63 (supp): 113–37.
  • Liao, S. Matthew, Alex Wiegmann, Joshua Alexander, and Gerard Vong, 2012, “Putting the Trolley in Order: Experimental Philosophy and the Loop Case”, Philosophical Psychology , 25(5): 661–671. doi:10.1080/09515089.2011.627536
  • Loeb, Don, 1998, “Moral Realism and the Argument from Disagreement”, Philosophical Studies , 90(3): 281–303. doi:10.1023/A:1004267726440
  • Machery, Edouard, 2010, “The Bleak Implications of Moral Psychology”, Neuroethics , 3(3): 223–231. doi:10.1007/s12152-010-9063-7
  • Machery, Edouard and John M. Doris, forthcoming, “An Open Letter to Our Students: Going Interdisciplinary”, in Voyer and Tarantola forthcoming.
  • MacIntyre, Alasdair, 1967, “Egoism and Altruism”, in Paul Edwards (ed.), The Encyclopedia of Philosophy , vol. 2, first edition, New York: Macmillan, pp. 462–466.
  • Mackie, J.L., 1977, Ethics: Inventing Right and Wrong , New York: Penguin Books.
  • Macnamara, Brooke N., David Z. Hambrick, and Frederick L. Oswald, 2014, “Deliberate Practice and Performance in Music, Games, Sports, Education, and Professions: A Meta-Analysis”, Psychological Science , 25(8): 1608–1618. doi:10.1177/0956797614535810
  • Markus, Hazel R. and Shinobu Kitayama, 1991, “Culture and the Self: Implications for Cognition, Emotion, and Motivation”, Psychological Review , 98(2): 224–253. doi:10.1037/0033-295X.98.2.224
  • May, Joshua, 2011a, “Psychological Egoism”, Internet Encyclopedia of Philosophy .. URL = < https://www.iep.utm.edu/psychego/ >
  • –––, 2011b, “Egoism, Empathy, and Self-Other Merging”, Southern Journal of Philosophy , 49(s1): 25–39. doi:10.1111/j.2041-6962.2011.00055.x
  • –––, 2011c, “Relational Desires and Empirical Evidence against Psychological Egoism: On Psychological Egoism”, European Journal of Philosophy , 19(1): 39–58. doi:10.1111/j.1468-0378.2009.00379.x
  • McGrath, Sarah, 2008, “Moral Disagreement and Moral Expertise”, in Oxford Studies in Metaethics , volume 3, Russ Shafer-Landau (ed.), New York: Oxford University Press, pp. 87–108.
  • –––, 2011, “Skepticism about Moral Expertise as a Puzzle for Moral Realism”, Journal of Philosophy , 108(3): 111–137. doi:10.5840/jphil201110837
  • Mehl, Matthias R., Kathryn L. Bollich, John M. Doris, and Simine Vazire, 2015, “Character and Coherence: Testing the Stability of Naturalistically Observed Daily Moral Behavior”, in Miller et al. 2015: 630–51. doi:10.1093/acprof:oso/9780190204600.003.0030
  • Mele, Alfred R., 2006, Free Will and Luck , New York: Oxford University Press. doi:10.1093/0195305043.001.0001
  • –––, 2013, “Manipulation, Moral Responsibility, and Bullet Biting”, Journal of Ethics , 17(3): 167–84. doi:10.1007/s10892-013-9147-9
  • Merritt, Maria W., 2000, “Virtue Ethics and Stuationist Personality Psychology”, Ethical Theory and Moral Practice , 3(4): 365–83. doi:10.1023/A:1009926720584
  • –––, 2009, “Aristotelean Virtue and the Interpersonal Aspect of Ethical Character”, Journal of Moral Philosophy , 6(1): 23–49. doi:10.1163/174552409X365919
  • Meritt, Maria W., John M. Doris, and Gilbert Harman, 2010, “Character”, in Doris et al. 2010: 355–401.
  • Milgram, Stanley, 1974, Obedience to Authority , New York: Harper and Row.
  • Miller, Christian B., 2003, “Social Psychology and Virtue Ethics”, The Journal of Ethics , 7(4): 365–92. doi:10.1023/A:1026136703565
  • –––, 2013, Moral Character: An Empirical Theory , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199674350.001.0001
  • –––, 2014, Character and Moral Psychology , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199674367.001.0001
  • Miller, Christian B., R. Michael Furr, Angela Knobel, and William Fleeson (eds.), 2015, Character: New Directions from Philosophy, Psychology, and Theology , New York: Oxford University Press. doi:10.1093/acprof:oso/9780190204600.001.0001
  • Mischel, Walter, 1968, Personality and Assessment , New York: John J. Wiley and Sons.
  • –––, 1999, “Personality Coherence and Dispositions in a Cognitive-Affective Personality System (CAPS) Approach”, in Cervone and Shoda 1999: ch. 2.
  • Montmarquet, James, 2003, “Moral Character and Social Science Research”, Philosophy , 78(03): 355–368. doi:10.1017/S0031819103000342
  • Moody-Adams, Michele M., 1997, Fieldwork in Familiar Places: Morality, Culture, and Philosophy , Cambridge, MA: Harvard University Press.
  • Murphy, Dominic, 2006, Psychiatry in the Scientific Image , Cambridge, MA: MIT Press.
  • Murray, Dylan and Eddy Nahmias, 2014, “Explaining Away Incompatibilist Intuitions”, Philosophy and Phenomenological Research , 88(2): 434–467. doi:10.1111/j.1933-1592.2012.00609.x
  • Murray, Dylan and Tania Lombrozo, 2016, “Effects of Manipulation on Attributions of Causation, Free Will, and Moral Responsibility”, Cognitive Science , 41(2): 447–481. doi: 10.1111/cogs.12338.
  • Nado, Jennifer, 2016, “The Intuition Deniers”, Philosophical Studies , 173(3): 781–800. doi:10.1007/s11098-015-0519-9.
  • Nagel, Thomas, 1970, The Possibility of Altruism , Oxford: Oxford University Press.
  • –––, 1986, The View From Nowhere , New York and Oxford: Oxford University Press.
  • Nahmias, Eddy, 2011, “Intuitions about Free Will, Determinism, and Bypassing”, in Robert Kane (ed.), The Oxford Handbook of Free Will , second edition, Oxford: Oxford University Press.
  • Nahmias, Eddy, Stephen G. Morris, Thomas Nadelhoffer, and Jason Turner, 2009, “Is Incompatiblism Intuitive?” Philosophy and Phenomenological Research , 73(1): 28–53. doi:10.1111/j.1933-1592.2006.tb00603.x
  • Nichols, Shaun and Joshua Knobe, 2007, “Moral Responsibility and Determinism: The Cognitive Science of Folk Intuitions”, Noûs , 41(4): 663–685. doi:10.1111/j.1468-0068.2007.00666.x
  • Nisbett, Richard E., 1998, “Essence and Accident”, in John M. Darley and Joel Cooper (eds.), Attribution and Social Interaction: The Legacy of Edward E. Jones , Washington: American Psychological Association.
  • –––, 2003, The Geography of Thought: How Asians and Westerners Think Differently … and Why , New York: Free Press.
  • Nisbett, Richard E. and Eugene Borgida, 1975, “Attribution and the psychology of prediction”, Journal of Personality and Social Psychology , 32(5): 932–943. doi:10.1037/0022-3514.32.5.932
  • Nisbett, Richard E. and Dov Cohen, 1996, Culture of Honor: The Psychology of Violence in the South , Boulder, CO: Westview Press.
  • Nisbett, Richard E. and Lee Ross, 1980, Human Inference: Strategies and Shortcomings of Social Judgment , Englewood Cliffs, NJ: Prentice-Hall.
  • O’Connor, Timothy, 2000, Persons and Causes: The Metaphysics of Free Will , New York: Oxford University Press. doi:10.1093/019515374X.001.0001
  • Olin, Lauren and John M. Doris, 2014, “Vicious Minds: Virtue Epistemology, Cognition, and Skepticism”, Philosophical Studies , 168(3): 665–92. doi:10.1007/s11098-013-0153-3
  • Pereboom, Derk, 2001, Living Without Free Will , Cambridge: Cambridge University Press. doi:10.1017/CBO9780511498824
  • –––, 2014, Free Will, Agency, and Meaning in Life , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199685516.001.0001
  • Petrinovich, Lewis and Patricia O’Neill, 1996, “Influence of Wording and Framing Effects on Moral Intuitions”, Ethology and Sociobiology , 17(3): 145–171. doi:10.1016/0162-3095(96)00041-6
  • Phillips, Jonathan and Alex Shaw, 2014, “Manipulating Morality: Third-Party Intentions Alter Moral Judgments by Changing Causal Reasoning”, Cognitive Science , 39(6): 1320–47. doi:10.1111/cogs.12194
  • Pink, Thomas, 2004, Free Will: A Very Short Introduction , New York: Oxford University Press. doi:10.1093/actrade/9780192853585.001.0001
  • Prinz, Jesse J., 2009, “The Normativity Challenge: Cultural Psychology Provides the Real Threat to Virtue Ethics”, The Journal of Ethics , 13(2–3): 117–144. doi:10.1007/s10892-009-9053-3
  • Pust, Joel, 2000, Intuitions as Evidence , New York: Garland Publishing.
  • Rachels, James, 2000, “Naturalism”, in LaFollette 2000: 74–91.
  • –––, 2003, The Elements of Moral Philosophy , fourth edition, New York: McGraw-Hill.
  • Railton, Peter, 1986a, “Facts and Values”, Philosophical Topics , 14(2): 5–31. doi:10.5840/philtopics19861421
  • –––, 1986b, “Moral Realism”, Philosophical Review , 95(2): 163–207. doi:10.2307/2185589
  • Rawls, John, 1951, “Outline of a Decision Procedure for Ethics”, Philosophical Review , 60(2): 177–97. doi:10.2307/2181696
  • –––, 1971, A Theory of Justice , Cambridge, MA: Harvard University Press.
  • Rosati, Connie S., 1995, “Persons, Perspectives, and Full Information Accounts of the Good”, Ethics , 105(2): 296–325. doi:10.1086/293702
  • Rose, David and Shaun Nichols, 2013, “The Lesson of Bypassing”, Review of Philosophy and Psychology , 4(4): 599–619. doi:10.1007/s13164-013-0154-3
  • Ross, Lee and Richard E. Nisbett, 1991, The Person and the Situation: Perspectives of Social Psychology , Philadelphia: Temple University Press.
  • Russell, Daniel C., 2009, Practical Intelligence and the Virtues , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199565795.001.0001
  • –––, 2015, “From Personality to Character to Virtue”, in Mark Alfano (ed.), Current Controversies in Virtue Theory , New York: Routledge, pp. 91–106.
  • Samuels, Richard and Stephen Stich, 2002, “Rationality”, Encyclopedia of Cognitive Science , Chichester: Wiley. doi:10.1002/0470018860.s00171
  • Samuels, Steven M. and William D. Casebeer, 2005, “A social psychological view of morality: Why knowledge of situational influences on behaviour can improve character development practices”, Journal of Moral Education , 34(1): 73–87. doi:10.1080/03057240500049349
  • Sarkissian, Hagop, 2010, “Minor Tweaks, Major Payoffs: The Problems and Promise of Situationism in Moral Philosophy”, Philosophers’ Imprint , 10(9). URL = < http://hdl.handle.net/2027/spo.3521354.0010.009 >
  • Sarkissian, Hagop and Jennifer Cole Wright (eds.)., 2014, Advances in Experimental Moral Psychology , London: Bloomsbury Press.
  • Sayre-McCord, Geoffrey, 1988a, “Introduction: The Many Moral Realisms”, in Sayre-McCord 1988b: 1–24.
  • ––– (ed.), 1988b, Essays in Moral Realism , Ithaca and London: Cornell University Press.
  • –––, 2015, “Moral Realism”, The Stanford Encyclopedia of Philosophy (Spring 2015 Edition), Edward N. Zalta (ed.), URL = < https://plato.stanford.edu/archives/spr2015/entries/moral-realism/ >
  • Schnall, Simone, Jonathan Haidt, Gerald L. Clore, and Alexander H. Jordan, 2008a, “Disgust as Embodied Moral Judgment”, Personality and Social Psychology Bulletin , 34(8): 1069–1109. doi:10.1177/0146167208317771
  • Schnall, Simone, Jennifer Benton, and Sophie Harvey, 2008b, “With a Clean Conscience: Cleanliness Reduces the Severity of Moral Judgments”, Psychological Science , 19(12): 1219–1222. doi:10.1111/j.1467-9280.2008.02227.x
  • Schwitzgebel, Eric and Fiery Cushman, 2011, “Expertise in Moral Reasoning? Order Effects on Moral Judgment in Professional Philosophers and Non-Philosophers”, Mind and Language , 27(2): 135–153. doi:10.1111/j.1468-0017.2012.01438.x.
  • –––, 2015, “Philosophers’ Biased Judgments Persist Despite Training, Expertise, and Reflection”, Cognition , 141: 127–137. doi:10.1016/j.cognition.2015.04.015
  • Shafer-Landau, R., 2003, Moral Realism: A Defence , Oxford: Clarendon Press. doi:10.1093/0199259755.001.0001
  • Sherman, Ryne A., Christopher S. Nave, and David C. Funder, 2010, “Situational Similarity and Personality Predict Behavioral Consistency”, Journal of Personality and Social Psychology , 99(2): 330–343. doi:10.1037/a0019796
  • Shweder, Richard A., and Edmund J. Bourne, 1982, “Does the Concept of the Person Vary Cross-Culturally?” in Anthony J. Marsella and Geoffrey M. White (eds.), Cultural Conceptions of Mental Health and Therapy , Boston, MA: D. Reidel Publishing.
  • Singer, Peter, 1974, “Sidgwick and Reflective Equilibrium”, Monist , 58(3): 490–517. doi:10.5840/monist197458330
  • Sinnott-Armstrong, Walter P., 2005, “Moral Intuitionism Meets Empirical Psychology”, in Terry Horgan and Mark Timmons (eds.), Metaethics After Moore , New York: Oxford University Press. doi:10.1093/acprof:oso/9780199269914.003.0016
  • ––– (ed.), 2008a, Moral Psychology, Vol. 1, The Evolution of Morality: Adaptations and Innateness , Cambridge, MA: MIT Press.
  • ––– (ed.), 2008b, Moral Psychology, Vol. 2, The Cognitive Science of Morality: Intuition and Diversity , Cambridge, MA: MIT Press.
  • ––– (ed.), 2008c, Moral Psychology, Vol.3, The Neuroscience of Morality: Emotion, Brain Disorders, and Development , Cambridge, MA: MIT Press.
  • ––– (ed.), 2014, Moral Psychology, Vol.4, Free Will and Moral Responsibility , Cambridge, MA: MIT Press.
  • Slote, Michael Anthony, 2013, “Egoism and Emotion”, Philosophia , 41(2): 313–335. doi:10.1007/s11406-013-9434-5
  • Smilansky, Saul, 2003, “Compatibilism: the Argument from Shallowness”, Philosophical Studies , 115(3): 257–282. doi:10.1023/A:1025146022431
  • Smith, Adam, 1759 [1853], The Theory of Moral Sentiments , London: Henry G. Bohn. Originally published 1759,
  • Smith, Michael, 1994, The Moral Problem , Cambridge: Basil Blackwell.
  • Snare, F.E., 1980, “The Diversity of Morals” Mind , 89(355): 353–369. doi:10.1093/mind/LXXXIX.355.353
  • Snow, Nancy E., 2010, Virtue as Social Intelligence: An Empirically Grounded Theory , London and New York: Routledge.
  • Sober, Elliott and David Sloan Wilson, 1998, Unto Others: The Evolution and Psychology of Unselfish Behavior , Cambridge, MA: Harvard University Press.
  • Solomon, Robert C., 2003, “Victims of Circumstances? A Defense of Virtue Ethics in Business”, Business Ethics Quarterly , 13(1): 43–62. doi:10.5840/beq20031314
  • –––, 2005, “‘What’s Character Got to Do with It?’”, Philosophy and Phenomenological Research , 71(3): 648–655. doi:10.1111/j.1933-1592.2005.tb00478.x
  • Sosa, Ernest, 2007, “Intuitions: Their Nature and Epistemic Efficacy”, Grazer Philosophische Studien , 74(1): 51–67. doi:10.1163/9789401204651_004
  • –––, 2009, “Situations Against Virtues: The Situationist Attack on Virtue Theory”, in Chrysostomos Mantzavinos (ed.), Philosophy of the Social Sciences: Philosophical Theory and Scientific Practice , New York: Cambridge University Press. 274–290. doi:10.1017/CBO9780511812880.021
  • Sreenivasan, Gopal, 2002, “Errors about errors: Virtue theory and trait attribution”, Mind , 111(441): 47–68. doi:10.1093/mind/111.441.47
  • Sripada, Chandra Sekhar, 2012, “What Makes a Manipulated Agent Unfree?” Philosophy and Phenomenological Research , 85(3): 563–93. doi:10.1111/j.1933-1592.2011.00527.x
  • Stich, Stephen, 1990, The Fragmentation of Reason: Preface to a Pragmatic Theory of Cognitive Evaluation , Cambridge, MA: The MIT Press.
  • Stich, Stephen, John M. Doris, and Erica Roedder, 2010, “Altruism”, in Doris et al. 2010: 147–205.
  • Stich, Stephen and Kevin P. Tobia, 2016, “Experimental Philosophy and the Philosophical Tradition”, in Sytsma and Buckwalter 2016: 3–21. doi:10.1002/9781118661666.ch1
  • –––, 2018, “Intuition and Its Critics”, in Stuart, Fehige, and Brown 2018: ch. 21.
  • Stichter, Matt, 2007, “Ethical Expertise: The Skill Model of Virtue”, Ethical Theory and Moral Practice , 10(2): 183–194. doi:10.1007/s10677-006-9054-2
  • –––, 2011, “Virtues, Skills, and Right Action”, Ethical Theory and Moral Practice , 14(1): 73–86. doi:10.1007/s10677-010-9226-y
  • Strawson, P.F., 1982, “Freedom and Resentment”, in Gary Watson (ed.), Free Will , New York: Oxford University Press. Originally published, 1962,
  • Strawson, Galen, 1986, Freedom and Belief , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199247493.001.0001
  • Strohminger, Nina, Richard L. Lewis and David E. Meyer, 2011, “Divergent Effects of Different Positive Emotions on Moral Judgment”, Cognition , 119(2): 295–300. doi:10.1016/j.cognition.2010.12.012
  • Stuart, Michael T., Yiftach Fehige, and James Robert Brown (eds.), 2018, The Routledge Companion to Thought Experiments , New York: Routledge.
  • Sturgeon, Nicholas L., 1988, “Moral Explanations”, in Sayre-McCord 1988b: 229–255.
  • Sumner, William Graham, 1908 [1934], Folkways , Boston: Ginn and Company.
  • Sunstein, Cass R., 2005, “Moral Heuristics”, Behavioral and Brain Sciences , 28(4): 531–42. doi:10.1017/S0140525X05000099
  • Swanton, Christine, 2003, Virtue Ethics: A Pluralistic View , Oxford: Oxford University Press. doi:10.1093/0199253889.001.0001
  • Sytsma, Justin and Wesley Buckwalter (eds.), 2016, A Companion to Experimental Philosophy , Oxford: Blackwell.
  • Tetlock, Philip E., 1999, “Review of Culture of Honor: The Psychology of Violence in the South by Robert Nisbett and Dov Cohen”, Political Psychology , 20(1): 211–13. doi:10.1111/0162-895X.t01-1-00142
  • Tiberius, Valerie, 2015, Moral Psychology: A Contemporary Introduction , New York: Routledge.
  • Tobia, Kevin Patrick, Gretchen B. Chapman, and Stephen Stich, 2013, “Cleanliness is Next to Morality, Even for Philosophers”, Journal of Consciousness Studies , 20(11 and 12): 195–204.
  • Tversky, Amos and Daniel Kahneman, 1973, “Availability: A heuristic for judging frequency and probability”, Cognitive Psychology , 5(2): 207–232. doi:10.1016/0010-0285(73)90033-9
  • –––, 1981, “The Framing of Decisions and the Psychology of Choice”, Science , 211(4481): 453–463. doi:10.1126/science.7455683
  • Upton, Candace L., 2009, Situational Traits of Character: Dispositional Foundations and Implications for Moral Psychology and Friendship , Lanham, Maryland: Lexington Books.
  • Vargas, Manuel, 2005a, “The Revisionist’s Guide to Responsibility”, Philosophical Studies , 125(3): 399–429. doi:10.1007/s11098-005-7783-z
  • –––, 2005b, “Responsibility and the Aims of Theory: Strawson and Revisionism”, Pacific Philosophical Quarterly , 85(2): 218–241. doi:10.1111/j.0279-0750.2004.00195.x
  • Valdesolo, Piercarlo and David DeSteno, 2006, “Manipulations of Emotional Context Shape Moral Judgment”, Psychological Science , 17(6): 476–477. doi:10.1111/j.1467-9280.2006.01731.x
  • Velleman, J. David, 1992, “What Happens When Someone Acts?” Mind , 101(403): 461–81. doi:10.1093/mind/101.403.461
  • Voyer, Benjamin G. and Tor Tarantola (eds.), forthcoming, Moral Psychology: A Multidisciplinary Guide , Springer.
  • Vranas, Peter B.M., 2005, “The Indeterminacy Paradox: Character Evaluations and Human Psychology”, Noûs , 39(1): 1–42.
  • Watson, Gary, 1996, “Two Faces of Responsibility”, Philosophical Topics 24(2): 227–48. doi: 10.5840/philtopics199624222
  • Webber, Jonathan, 2006a, “Character, Consistency, and Classification”, Mind , 115(459): 651–658. doi:10.1093/mind/fzl651
  • –––, 2006b, “Virtue, Character and Situation”, Journal of Moral Philosophy , 3(2): 193–213. doi:10.1177/1740468106065492
  • –––, 2007a, “Character, Common-Sense, and Expertise”, Ethical Theory and Moral Practice , 10(1): 89–104. doi:10.1007/s10677-006-9041-7
  • –––, 2007b, “Character, Global and Local”, Utilitas , 19(04): 430–434. doi:10.1017/S0953820807002725
  • Westermarck, Edvard, 1906, Origin and Development of the Moral Ideas , 2 volumes, New York: MacMillian.
  • Wiegmann, Alex, Yasmina Okan, and Jonas Nagel, 2012, “Order Effects in Moral Judgment”, Philosophical Psychology , 25(6): 813–836. doi:10.1080/09515089.2011.631995
  • Williams, Bernard, 1973, “A Critique of Utilitarianism”, in Utilitarianism: For and Against , by J.J.C. Smart and Bernard Williams, Cambridge: Cambridge University Press.
  • –––, 1985, Ethics and the Limits of Philosophy , Cambridge, MA: Harvard University Press.
  • Woolfolk, Robert L., John M. Doris and John M. Darley, 2006, “Identification, Situational Constraint, and Social Cognition: Studies in the Attribution of Moral Responsibility”, Cognition , 100(2), 283–301. doi:10.1016/j.cognition.2005.05.002
  • Zhong, Chen-Bo, Brendan Strejcek, and Niro Sivanathan, 2010, “A Clean Self Can Render Harsh Moral Judgment”, Journal of Experimental Social Psychology , 46(5): 859–862. doi:10.1016/j.jesp.2010.04.003
  • Zimbardo, Philip G., 2007, The Lucifer Effect: Understanding How Good People Turn Evil , Oxford: Blackwell Publishing Ltd.

Copyright © 2020 by John Doris <jmd378@cornell.edu>, Stephen Stich <stich.steve@gmail.com>, Jonathan Phillips <jonathan.s.phillips@dartmouth.edu>, and Lachlan Walmsley <ldw917@gmail.com>


BMC Medical Ethics


Empirical research in medical ethics: How conceptual accounts on normative-empirical collaboration may improve research practice

Sabine Salloch

1 Institute for Medical Ethics and History of Medicine, NRW-Junior Research Group "Medical ethics at the end of life: norm and empiricism", Ruhr University Bochum, Bochum, Germany

Jan Schildmann

Jochen Vollmann

The methodology of medical ethics during the last few decades has shifted from a predominant use of normative-philosophical analyses to an increasing involvement of empirical methods. The articles which have been published in the course of this so-called 'empirical turn' can be divided into conceptual accounts of empirical-normative collaboration and studies which use socio-empirical methods to investigate ethically relevant issues in concrete social contexts.

Good quality empirical research in medical ethics can be expected to include a considered reference to normative research questions. However, a significant proportion of the empirical studies currently published in medical ethics lack such a linkage between the empirical research and the normative analysis. In the first part of this paper, we will outline two typical shortcomings of empirical studies in medical ethics with regard to the link between normative questions and empirical data: (1) the complete lack of normative analysis, and (2) cryptonormativity and a missing account of the relationship between 'is' and 'ought' statements. Subsequently, two selected concepts of empirical-normative collaboration will be presented, and we will demonstrate how these concepts may help to improve the linkage between the normative and empirical aspects of empirical research in medical ethics. Based on our analysis, as well as our own practical experience with empirical research in medical ethics, we conclude with a sketch of concrete suggestions for the conduct of empirical research in medical ethics.

High quality empirical research in medical ethics is in need of a considered reference to normative analysis. In this paper, we demonstrate how conceptual approaches of empirical-normative collaboration can enhance empirical research in medical ethics with regard to the link between empirical research and normative analysis.

The methodology of medical ethics has shifted over the last two decades from a predominant use of normative-philosophical analyses to an increasing involvement of empirical methods. In the context of this so-called 'empirical turn' [ 1 ], a multitude of articles has been published in journals of medical ethics. The respective body of literature can be divided into two types of publications. The first type encompasses conceptual accounts of empirical ethics. Examples of this are publications which focus on the interplay between normative and empirical elements in empirical-ethical research [ 2 - 4 ], contributions on definitions of empirical ethics [ 5 ] or the various conceptual frameworks for empirical research in medical ethics [ 6 - 8 ]. A common feature of these publications is to conceptualise the ways in which empirical methods in combination with normative analysis can contribute to ethical research questions.

The second type of articles presents studies which use socio-empirical methods for research on concrete ethical issues. A broad range of topics has already been explored empirically in this way and the respective articles cover a wide spectrum in relation to their aims and the ways in which empirical research and normative analysis interact. Descriptive empirical studies in medical ethics restrict themselves to providing empirical knowledge on an ethical topic without further reference to or interaction with the normative debate. In contrast to these 'descriptive ethics studies', the work presented in other publications is based on a combination of empirical research and normative analysis. In these 'empirical-ethical studies', certain aspects of the empirical findings are linked to the ethical debate in order to demonstrate their contributions to normative reasoning. While the combination of normative reasoning and empirical research in some of these studies takes place against the background of specific concepts of empirical ethics [ 6 - 9 ], other studies do not aim at a systematic connection between their empirical data and the underlying normative questions (see, for example, [ 10 , 11 ]).

In this paper, we will argue that high quality empirical research in medical ethics is in need of a considered reference to normative research questions. Furthermore, we will defend the position that theoretical concepts of normative-empirical collaboration can enhance empirical research in medical ethics. To substantiate our claim, we will, firstly, present two typical shortcomings of empirical studies in medical ethics. We will then present and analyse two theoretical accounts of normative-empirical collaboration. In a further step, we will illustrate how these conceptual accounts can serve as remedies for the deficits of current practice of empirical research in medical ethics. Based on our analysis and our own experience with empirical research in medical ethics, we will conclude with a sketch of concrete criteria which may facilitate the planning and conducting of empirical studies in medical ethics.

Shortcomings of empirical research in medical ethics

In a considerable number of the empirical studies which are currently published in journals of medical ethics or bioethics, the link between the empirical research and a normative analysis on the respective topic is not clear [ 12 - 14 ]. We would argue that publications on empirical studies in medical ethics as a normative discipline should always include at least some reference to normative analysis. Furthermore, we hold the view that an explicit connection between empirical data and normative reflection is a criterion for good quality empirical research in medical ethics. In the following, we will try to substantiate our claims with regard to good quality empirical research in medical ethics. In a first step, we will point out two deficits which concern the linkage between empirical and normative analysis and which can be encountered in the current practice of empirical research in medical ethics: (1) The complete lack of normative analysis in such research, and (2) cryptonormativity and a missing account with regard to the relationship between "is" and "ought" statements.

The lack of normative analysis: Purely descriptive studies in medical ethics

The first shortcoming of empirical studies in medical ethics concerns the complete lack of normative analysis or, to put it differently, the missing link between the empirical research and the ethical debate. A considerable number of the articles currently published in journals of medical ethics or bioethics present the results of empirical studies but remain on a descriptive level in their discussion and conclusion sections. One type of empirical research in medical ethics in which such purely descriptive studies are particularly common is the quantitative survey of stakeholders' attitudes towards ethically challenging topics. An example of this type of research with regard to end-of-life decisions is the survey published by Craig et al. in 2007 [ 15 ]. This study examined the attitudes of a sample of 1,052 physicians towards physician-assisted suicide (PAS) in the US state of Vermont. One interesting finding of the empirical research is that physicians' opinions about the legalisation of PAS are polarised. Moreover, about half of the physicians indicated they would participate in PAS if it became legal in Vermont. The authors concluded: "Our findings contribute to a deeper understanding of some of the issues surrounding PAS. Specifically, we identified factors influencing physicians' opinions, and aspects of the PAS debate about which compromise is unlikely" [ 15 ], p. 403. In discussing their empirical findings, the authors thus remain on the descriptive level: no link is made between the study's findings and the normative issues relevant to the debate about an ethical justification of PAS.

Descriptive studies in medical ethics, such as the work of Craig et al., can be of excellent methodological quality when measured against the criteria established for socio-empirical research. In addition to their possible contributions to debates in social science, such empirical studies can also be valuable for the field of medical ethics. This is the case, for example, if they deliver detailed and systematic analyses of stakeholders' moral experiences and attitudes and thereby contribute to a context-sensitive insight into certain moral practices in health care. Nevertheless, we would argue that, from a medical ethics perspective, the lack of reflection on the significance of the empirical data for normative questions is a shortcoming of such research. The most important reason for this position is that, while the general need for empirical information in applied ethics is uncontroversial, the specific kind of information required in arguments about a given topic depends on the normative-ethical background which underlies the ethical evaluation [ 16 , 17 ]. Because the specific need for empirical information depends on the underlying account of ethical justification, different kinds of empirical data on an ethical issue are not of the same use for a normative evaluation of the respective topic. To mention one obvious example: an ethical deliberation on a utilitarian basis will usually need different empirical information than an argument based on a Kantian view of morality. Hence, if empirical data are gathered without prior reflection on their significance within a normative deliberation on the respective topic, the data's relevance for ethics as a normative discipline must be regarded as more or less accidental. A reflection on the presumable ethical significance of empirical data prior to the beginning of an empirical study is therefore desirable.

A considered reference to the normative side is also important for empirical studies in medical ethics which aim at identifying new ethical challenges in practice that have not yet been recognised and discussed. This 'exploratory function' of empirical research likewise depends on normative-ethical presuppositions, which determine whether, and in what sense, the empirically identified issue can be seen as an ethical problem rather than a practical problem of another origin. The chosen background of ethical evaluation furthermore determines the kind of additional data needed to arrive at an ethical judgment about the respective issue. Therefore, we would argue that at least some conception of the relationship between the empirical research question and the normative debate on the respective topic should underlie empirical studies which aim to contribute to medical ethics as a normative field.

Cryptonormativity and the is-ought problem

A second shortcoming frequently identified and criticised in empirical research in medical ethics rests upon the problem of drawing normative conclusions from empirical findings [ 3 , 4 , 16 ]. Studies which derive normative statements from their descriptive data alone run the risk of an is-ought fallacy by ignoring the fact that ethical values, norms and principles play an irreducible role in ethical judgment. While it is of great importance for ethics to investigate the cultural, historical and psychological contexts of moral decision-making, this does not mean that empirically detected moral motives and behaviour are thereby ethically justified.

Studies which draw normative conclusions from empirical results often have a cryptonormative character, which means that they implicitly take normative statements as the basis of their ethical argumentation without mentioning or reflecting on them.

This drawback can frequently be found in publications of empirical studies which entail normative statements in their conclusion sections. Two types of unclear normative conclusions can be distinguished here. In the first case, normative statements are directly drawn from the empirical findings. In the second case, normative statements are found in the conclusion sections which are not explicitly linked to the results of the empirical study, but nevertheless, it can be asked from where these statements are derived. In both cases, the normative premise is not made explicit in the argument but is necessary to arrive at the normative conclusion.

One illustration of the second type of unclear relationship between empirical data and normative conclusions is a paper by Bendiane et al. which, in a similar way to the study of Craig mentioned above, deals with the issue of physician-assisted suicide [18]. In this study, French hospital nurses were asked whether euthanasia and PAS should be legalised for patients with incurable conditions. The study showed that 48% of the nurses supported the legalisation of euthanasia and 29% supported the legalisation of PAS. Furthermore, the authors showed that reported training in palliative care was negatively correlated with nurses' support for a legalisation of PAS. The authors concluded that: "Improving professional knowledge of palliative care would improve the management of end-of-life situations, but it could also help to clarify the debate over euthanasia" [18], p. 243.

While the demand for improved training in palliative care is important as such, in the context of this study's empirical results the authors' claim can be misleading. This is because, in our view, the normative statement (improvement of palliative care education) cannot be derived from the empirical findings (better education is associated with a decrease in the support of PAS) alone. If such a link between the empirical results and the normative statement in the conclusion section is made, it is problematic because the authors fail to make a hidden normative premise explicit when they plead for better education in palliative care. Following the results of the empirical study, better knowledge in palliative care might lead to a decrease in the support for legalisation of euthanasia and PAS in France. However, the question whether euthanasia and PAS should be legalised is itself a normative question with strong ethical implications. Therefore, if the authors plead for better palliative training in the conclusion of this study, their statement can be understood as implicitly taking a stand against the legalisation of euthanasia and PAS in France. This ethical standpoint, however, has not been discussed normatively in the study but is implicitly taken as a basis for argumentation.

After this characterization of two drawbacks in the current practice of empirical research in medical ethics, we will now present two theoretical conceptions of the normative-empirical relationship which may contribute to an improved practice of empirical research in medical ethics.

Conceptual accounts of normative-empirical collaboration and their contributions to research practice

In recent years, a number of conceptual accounts of the normative-empirical collaboration in medical ethics have been published. In the following, we present two of these conceptual frameworks which may be useful for researchers who are planning empirical studies in medical ethics and who aim at an integration of empirical research and normative analysis. Although neither model may be fully sufficient to provide concrete guidance for planning and conducting an empirical study, we value both because they acknowledge that a social practice can be judged by both the gathering of empirical data and normative ethical analysis. Furthermore, the two models conceptualize the interaction between both elements in a plausible and systematic way, which may be the most important criterion for a good concept of empirical research in medical ethics.

Birnbacher's and Leget et al.'s models share the characteristic that they rest upon certain meta-ethical claims, such as a cognitivist view of ethics and the acknowledgement of the fact-value distinction. Hence, the models provide a suitable theoretical background for those researchers who accept these presuppositions. In contrast to other accounts, the models of Birnbacher and Leget et al. have not yet been tested in empirical studies.

They can be distinguished from other models currently applied in medical ethics, such as hermeneutic ethics or reflective equilibrium [7-9], which provide alternative accounts of the normative-empirical relationship and different methodological strategies.

However, the two chosen models of normative-empirical collaboration differ in several important characteristics, such as their disciplinary background and their aims. The first model, by the German philosopher Dieter Birnbacher, provides a concept of the relationship between ethics as a theoretical discipline and morals as an empirical phenomenon [17,19]. He discusses different steps in the examination of concrete ethical problems from the perspective of an ethicist. In contrast, the second model, by Leget and colleagues [20], draws on a categorisation of methods for integrating empirical research and normative ethics which was developed in the context of the "empirical turn" [21]. Leget et al., more than Birnbacher, make reference to the interdisciplinary challenge of doing empirical research in bioethics. Despite these and other differences, we believe that both approaches can be useful for researchers who aim to improve the integration of empirical research and normative deliberation in medical ethics. Both approaches will be presented briefly, followed by a discussion of how they can serve as remedies for the two shortcomings of empirical research in medical ethics discussed previously.

Birnbacher's four tasks of applied ethics

One of the first, but still vividly discussed, concepts of the collaboration between ethics and social sciences is that of Dieter Birnbacher [17,19]. Birnbacher presents a model of the interrelationship between empirical information and ethical thinking in which he distinguishes four interdependent aspects of an ethical examination of empirical moral phenomena. He first describes the analysis part, which consists of a clarification and reconstruction of moral concepts, arguments and ways of reasoning [19], p. 45. The next step, called critique, is a critical assessment of concepts and explanatory statements used in a certain moral context in order to arrive at clarity, unambiguousness and plausibility. Construction, which follows, means the development of an ethical approach to and evaluation of the moral issue at stake; for instance, a construction of ethical norms that are specific to this particular context. The last aspect Birnbacher mentions is moral pragmatics, which is concerned with the practical (political or educational) implementation of moral norms, assuming that it is not sufficient for an ethicist merely to discuss the moral rightness or wrongness of a certain practice on a theoretical level; he or she must also think about the conditions under which a moral norm or value can become effective in society.

According to Birnbacher, the construction and the pragmatic part are particularly dependent on empirical information and, therefore, on interdisciplinary cooperation. While in the construction part empirical data are necessary for the development of context-specific moral norms, in the implementation phase knowledge from empirical disciplines is needed to effectively influence people's attitudes and behaviour. Nevertheless, Birnbacher's position can be complemented by pointing out that the first two tasks of ethics which he describes ("analysis" and "critique") similarly rely on empirical cooperation: for a clarification and reconstruction of moral arguments, empirical knowledge about the arguments which are employed in a certain context is also very important, and empirical data are necessary for a critical examination of the truth of certain claims on which ethical arguments are based.

Birnbacher's general account of empirical-normative collaboration can be applied to empirical studies in medical ethics. In general, it may be helpful for scientists conducting empirical studies in medical ethics to consider where to position their work within this model of ethical reasoning. This positioning influences the kind of information which is needed for a normative discussion of the respective issues. If researchers, for instance, want to contribute to the analysis part, other kinds of empirical study may be useful than if they want to contribute on the level of moral pragmatics. In helping to clarify the significance of empirically derived knowledge in specific ethical deliberations, Birnbacher's approach may provide support for those empirical studies in medical ethics which suffer from a lack of normative analysis. To mention one example, an empirical study can contribute on the level of moral pragmatics: the results of a study on people's moral attitudes may provide policy makers with relevant information about chances and challenges if the regulation of an ethically relevant issue were to be changed. Knowing beforehand that the public is highly polarised regarding an issue, for example, may enable policy makers to think about appropriate provisions before the implementation of a new law. In general, Birnbacher's model may help to make clear in which part of an ethical deliberation empirical data are to be integrated.

Leget et al.'s 'critical applied ethics'

A second conceptual approach to empirical-normative collaboration was presented by Leget et al. in 2009 under the title of 'Critical Applied Ethics' [20]. The authors describe a close interdisciplinary collaboration between ethicists and social scientists. Normative and empirical aspects are seen as two independent foci of one bioethical 'ellipse', which means that both perspectives are kept distinct but nevertheless influence each other in a fruitful way [20], p. 231. Normative and empirical disciplines investigate the same social practice using their respective methods during the five stages of the research process described by the authors: determination of the problem, description of the problem, effects and alternatives, normative weighing, and evaluation of the effects of a decision. At the level of problem description, for example, possible empirical contributions encompass the careful study of people's motives, actions and intentions by the social scientist, while the ethicist's task consists of a critical look at the concepts and vocabulary used in this specific context. At the stage of "normative weighing", which comes later in the research process, normative theory renders moral judgment, while the descriptive sciences' task is a critical examination of the ethical theories brought into play and the detection of possible empirical (for example, anthropological) premises within them. Thus, the close interdisciplinary collaboration allows for a mutually critical look at each other's discipline and its premises and presuppositions, as well as at the social practice which is examined and criticised.

This model of interdisciplinary cooperation, as described by Leget et al., can be very useful for researchers in medical ethics, as it provides a systematic account of the different stages of an empirical study in medical ethics. Furthermore, it can help the representatives of the empirical as well as the normative sciences to become aware of, and explicit about, the different roles they fulfil as the research process progresses. Along these lines, the concept of 'Critical Applied Ethics' may also lead to a clearer distinction between normative and descriptive statements in the publications of empirical studies. It can also help to avoid unclear normative statements being drawn as conclusions from empirical data, by openly discussing the normative presuppositions which underlie the research project and by reflecting on them critically up to the point of data interpretation.

Conceptual and practical aspects of empirical-normative collaboration - further perspectives

The preceding analysis illustrates how conceptual accounts of empirical-normative collaboration may contribute to the practice of empirical research in medical ethics. Reference to the existing models can stimulate reflection on how to combine empirical research and normative analysis in a systematic way. While the focus of our paper is to analyse the contribution of conceptual accounts of normative-empirical collaboration to empirical research, it should be noted that the practice of empirical research in medical ethics may also be of value for these conceptual accounts themselves. Only a few of these concepts have so far provided the basis for concrete empirical-ethical studies, such as approaches based on a reflective equilibrium [9], a symbiotic model of empirical ethics [6] or a hermeneutic account [7]. At the same time, there are many other concepts which, to our knowledge, have not yet been empirically 'tested' in this sense, such as the approaches of Birnbacher [19] and Leget et al. [20] presented previously. Nevertheless, conducting an empirical study offers the opportunity to check the practicability of a conceptual approach and can lead to its refinement or modification. Such a modification could, for example, consist in a redevelopment of the different stages described in the theoretical model. Another modification may be triggered by the fact that research practice reveals problems in interdisciplinary communication or cooperation which are not considered in the theoretical model but should nevertheless be integrated.

Our analysis also sheds light on the current discourse about quality criteria for empirical research in medical ethics [22,23]. As outlined in the introductory part, the premise of our article is that the development and analysis of empirical work in medical ethics should take place with reference to the relevant normative debate(s). Based on this assumption, we have defended the thesis that the quality of empirical studies in medical ethics can be enhanced by a closer connection between empirical research and theoretical approaches to normative-empirical collaboration in medical ethics. Nevertheless, we acknowledge that our view of normative analysis as a core feature of research in medical ethics needs further specification to determine which quality criteria should be applied to empirical studies in medical ethics. This is true with respect to the threshold of what amount and type of normative analysis should be expected from such studies. While it is beyond the scope of this paper to elaborate further on this question, it should be noted that, depending on the respective conception of empirical-normative collaboration, there will be different criteria for good quality in empirical-ethical research. In any case, we expect that, to some degree, such research has to meet the basic standards of both empirical and normative methods. Over and above quality criteria for the empirical research itself (for example, "Have the empirical methods been applied appropriately?" or "Are the results presented in a clear and transparent way?"), standards should be formulated which bear on the articulation between normative and empirical aspects [23]. The development of quality criteria for empirical studies in medical ethics should take into account the challenges which arise from the need to integrate empirical research findings and normative analysis, a need which is specific to these studies.

Not least because of these aspects, empirical research in medical ethics is an especially challenging form of interdisciplinary research. However, a number of the already existing theoretical accounts of normative-empirical collaboration do not provide researchers with information which is concrete enough to set up an empirical study in medical ethics on their basis alone. Based on our analysis in this article, as well as our own practical experience with doing empirical research in medical ethics [24-26], we suggest the following concrete steps when considering an empirical study on a specific topic.

1) Empirical and normative research questions should be formulated in a careful way before starting empirical research in medical ethics. At the same time, it should be considered how these research questions are interrelated, for example, if and how the answer to the empirical question is necessary to answer the normative research question. Furthermore, the identification of possible biases is a crucial point: Normative interests can lead to bias in the interpretation of the empirical data, and the state of empirical research may lead to a bias in the formulation of the normative question.

2) The conceptual and effective interplay between normative and empirical aspects should be considered from the beginning of an empirical study, and this reflection should continue up to the point of data interpretation and publication of the results. This also means that a mutually critical view of the disciplines involved is desirable during the whole research process. This mutually critical reflection may concentrate on implicit normative or empirical premises, as well as on underlying assumptions of theories and methods which are applied in the research project.

3) Empirical research in medical ethics should take place in the form of an ongoing, open and constructive cooperation between representatives of the normative and the empirical sciences. This means that the participating researchers should be open to critique and re-adjustment of their own positions, and acknowledge that there are different perspectives on the same topic which should be integrated to arrive at an empirically informed ethical judgment.

4) The results of empirical studies in medical ethics should be presented in a clear and transparent way which is compatible with the basic standards of the disciplines involved. In addition, the development of new forms of publication of empirical-ethical studies would be preferable (for example, an adaptation of journal standards) which account for the specific demands of this form of interdisciplinary research.

We do not intend to present a new model for normative-empirical collaboration here. Possibly, the implementation of already existing theoretical conceptions into the research practice of empirical medical ethics is even more desirable at this point than an extension of the spectrum of different approaches to normative-empirical collaboration. By allowing for a reflection on the interaction between normative and empirical elements in ethical deliberation, empirical research in medical ethics can become a very fruitful enterprise and can aid the treatment of the complex ethical challenges of modern health care.

A considered reference to normative research questions can be expected from good quality empirical research in medical ethics. In this paper, we have defended the thesis that the quality of empirical research in medical ethics can be enhanced by taking into account conceptual accounts of the normative-empirical relationship. Overcoming the missing connection between theory development and research practice in empirical medical ethics may also prove profitable for the theoretical concepts of empirical-normative cooperation. Our analysis further suggests that the discussion on quality criteria for empirical studies in medical ethics should take into account the specific challenges which arise from the need to bring together normative and empirical aspects in this interdisciplinary research field. We have concluded with some further suggestions regarding the research practice of empirical studies in medical ethics.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

All authors have contributed substantially to the conception and design of the manuscript. SS and JS have drafted the manuscript. They contributed equally. JV has critically revised the manuscript. All authors have read and approved the final manuscript.

This publication is a result of the work of the NRW Junior Research Group "Medical Ethics at the End of Life: Norm and Empiricism" at the Institute for Medical Ethics and History of Medicine, Ruhr-University Bochum which is funded by the Ministry for Innovation, Science and Research of the German state of North Rhine-Westphalia.

Pre-publication history

The pre-publication history for this paper can be accessed here:

http://www.biomedcentral.com/1472-6939/13/5/prepub

References

 1. Borry P, Schotsmans P, Dierickx K. The birth of the empirical turn in bioethics. Bioethics. 2005;19:49–71. doi: 10.1111/j.1467-8519.2005.00424.x.
 2. Molewijk B. Integrated empirical ethics: in search for clarifying identities. Med Health Care Philos. 2004;7:85–87.
 3. De Vries R, Gordijn B. Empirical ethics and its alleged meta-ethical fallacies. Bioethics. 2009;23:193–201. doi: 10.1111/j.1467-8519.2009.01710.x.
 4. Borry P, Schotsmans P, Dierickx K. What is the role of empirical research in bioethical reflection and decision-making? An ethical analysis. Med Health Care Philos. 2004;7:41–53.
 5. McMillan J, Hope T. The possibility of empirical psychiatric ethics. In: Widdershoven G, McMillan J, Hope T, van der Scheer L, editors. Empirical ethics in psychiatry. Oxford: Oxford University Press; 2008. pp. 9–22.
 6. Frith L. Symbiotic empirical ethics: a practical methodology. Bioethics. 2012;26(4):198–206.
 7. Abma TA, Baur VE, Molewijk B, Widdershoven GAM. Inter-Ethics: towards an interactive and interdependent bioethics. Bioethics. 2010;24:242–255. doi: 10.1111/j.1467-8519.2010.01810.x.
 8. De Vries M, Van Leeuwen E. Reflective equilibrium and empirical data: third person moral experiences in empirical medical ethics. Bioethics. 2010;24:490–498.
 9. Ebbesen M, Pedersen B. Using empirical research to formulate normative ethical principles in biomedicine. Med Health Care Philos. 2007;10:33–48. doi: 10.1007/s11019-006-9011-9.
 10. Buiting HM, van der Heide A, Onwuteaka-Philipsen BD, Rurup ML, Rietjens JA, Borsboom G, van der Maas PJ, van Delden JJ. Physicians' labelling of end-of-life practices: a hypothetical case study. J Med Ethics. 2010;36:24–29. doi: 10.1136/jme.2009.030155.
 11. van Bruchem-van de Scheur A, van der Arend A, van Wijmen F, Abu-Saad HH, Ter Meulen R. Dutch nurses' attitudes towards euthanasia and physician-assisted suicide. Nurs Ethics. 2008;15:186–198. doi: 10.1177/0969733007086016.
 12. Nilstun T, Melltorp G, Hermeren G. Surveys on attitudes to active euthanasia and the difficulty of drawing normative conclusions. Scand J Public Health. 2000;28:111–116.
 13. Miller FG, Wendler D. The relevance of empirical research in bioethics. Schizophr Bull. 2006;32:37–41.
 14. Solbakk JH. Use and abuse of empirical knowledge in contemporary bioethics. Med Health Care Philos. 2004;7:5–16.
 15. Craig A, Cronin B, Eward W, Metz J, Murray L, Rose G, Suess E, Vergara ME. Attitudes toward physician-assisted suicide among physicians in Vermont. J Med Ethics. 2007;33:400–403. doi: 10.1136/jme.2006.018713.
 16. Düwell M. Wofür braucht die Medizinethik empirische Methoden? Eine normativ-ethische Untersuchung. Ethik Med. 2009;21:201–211. doi: 10.1007/s00481-009-0019-6.
 17. Birnbacher D. Ethics and social science: which kind of cooperation? Ethical Theory Moral Pract. 1999;2:319–336. doi: 10.1023/A:1009903815157.
 18. Bendiane MK, Bouhnik AD, Galinier A, Favre R, Obadia Y, Peretti-Watel P. French hospital nurses' opinion about euthanasia and physician-assisted suicide: a national phone survey. J Med Ethics. 2009;35:238–244. doi: 10.1136/jme.2008.025296.
 19. Birnbacher D. Welche Ethik ist als Bioethik tauglich? In: Ach JS, Gaidt A, editors. Herausforderungen der Bioethik. Stuttgart-Bad Cannstatt: Frommann-Holzboog; 1993. pp. 45–67.
 20. Leget C, Borry P, de Vries R. 'Nobody tosses a dwarf!' The relation between the empirical and the normative reexamined. Bioethics. 2009;23:226–235. doi: 10.1111/j.1467-8519.2009.01711.x.
 21. Molewijk B, Stiggelbout AM, Otten W, Dupuis HM, Kievit J. Empirical data and moral theory. A plea for integrated empirical ethics. Med Health Care Philos. 2004;7:55–69.
 22. Strech D. Evidence-based ethics - what it should be and what it shouldn't. BMC Med Ethics. 2008;9:16. doi: 10.1186/1472-6939-9-16.
 23. Hurst S. What 'empirical turn in bioethics'? Bioethics. 2010;24:439–444. doi: 10.1111/j.1467-8519.2009.01720.x.
 24. Schildmann J, Hoetzel J, Mueller-Busch C, Vollmann J. End-of-life practices in palliative care: a cross sectional survey of physician members of the German Society for Palliative Medicine. Palliat Med. 2010;24:820–827. doi: 10.1177/0269216310381663.
 25. Schildmann J, Vollmann J. [Treatment decisions in advanced cancer. An empirical-ethical study on physicians' criteria and the process of decision making]. Dtsch Med Wochenschr. 2010;135:2230–2234. doi: 10.1055/s-0030-1267505.
 26. Salloch S, Breitsameter C. Morality and moral conflicts in hospice care: results of a qualitative interview study. J Med Ethics. 2010;36:588–592. doi: 10.1136/jme.2009.034462.


4. Challenges in the classroom

In addition to asking public K-12 teachers about issues they see at their school, we asked how much each of the following is a problem among students in their classroom:

  • Showing little to no interest in learning (47% say this is a major problem)
  • Being distracted by their cellphones (33%)
  • Getting up and walking around when they’re not supposed to (21%)
  • Being disrespectful toward the teacher (21%)

A bar chart showing that 72% of high school teachers say students being distracted by cellphones is a major problem.

Some challenges are more common among high school teachers, while others are more common among those who teach elementary or middle school.

  • Cellphones: 72% of high school teachers say students being distracted by their cellphones in the classroom is a major problem. A third of middle school teachers and just 6% of elementary school teachers say the same.
  • Little to no interest in learning: A majority of high school teachers (58%) say students showing little to no interest in learning is a major problem. This compares with half of middle school teachers and 40% of elementary school teachers. 
  • Getting up and walking around: 23% of elementary school teachers and 24% of middle school teachers see students getting up and walking around when they’re not supposed to as a major problem. A smaller share of high school teachers (16%) say the same.
  • Being disrespectful: 23% of elementary school teachers and 27% of middle school teachers say students being disrespectful toward them is a major problem. Just 14% of high school teachers say this.

Policies around cellphone use

About eight-in-ten teachers (82%) say their school or district has policies regarding students’ use of cellphones in the classroom. Of those, 56% say these policies are at least somewhat easy to enforce, 30% say they’re difficult to enforce, and 14% say they’re neither easy nor difficult to enforce.

A diverging bar chart showing that most high school teachers say cellphone policies are hard to enforce.

High school teachers are the least likely to say their school or district has policies regarding students’ use of cellphones in the classroom (71% vs. 84% of elementary school teachers and 94% of middle school teachers).

Among those who say there are such policies at their school, high school teachers are the most likely to say these are very or somewhat difficult to enforce. Six-in-ten high school teachers say this, compared with 30% of middle school teachers and 12% of elementary school teachers.

Verbal abuse and physical violence from students

A horizontal stacked bar chart showing that most teachers say they have faced verbal abuse, 40% say a student has been physically violent toward them.

Most teachers (68%) say they have experienced verbal abuse from their students, such as being yelled at or verbally threatened. About one-in-five (21%) say this happens at least a few times a month.

Physical violence is far less common, but about one-in-ten teachers (9%) say a student is physically violent toward them at least a few times a month. Four-in-ten say this has ever happened to them.

Differences by school level

Elementary school teachers (26%) are more likely than middle and high school teachers (18% and 16%) to say they experience verbal abuse from students a few times a month or more often.

And while relatively small shares across school levels say students are physically violent toward them a few times a month or more often, elementary school teachers (55%) are more likely than middle and high school teachers (33% and 23%) to say this has ever happened to them.

Differences by poverty level

Among teachers in high-poverty schools, 27% say they experience verbal abuse from students at least a few times a month. This is larger than the shares of teachers in medium- and low-poverty schools (19% and 18%) who say the same.

Experiences with physical violence don’t differ as much based on school poverty level.

Differences by gender


Teachers who are women are more likely than those who are men to say a student has been physically violent toward them. Some 43% of women teachers say this, compared with 30% of men.

There is also a gender difference in the shares of teachers who say they’ve experienced verbal abuse from students. But this difference is accounted for by the fact that women teachers are more likely than men to work in elementary schools.

Addressing behavioral and mental health challenges

Eight-in-ten teachers say they have to address students’ behavioral issues at least a few times a week, with 58% saying this happens every day.

A majority of teachers (57%) also say they help students with mental health challenges at least a few times a week, with 28% saying this happens daily.

Some teachers are more likely than others to say they have to address students’ behavior and mental health challenges on a daily basis. These include:

A bar chart showing that, among teachers, women are more likely than men to say a student has been physically violent toward them.

  • Women: 62% of women teachers say they have to address behavior issues daily, compared with 43% of those who are men. And while 29% of women teachers say they have to help students with mental health challenges every day, a smaller share of men (19%) say the same.
  • Elementary and middle school teachers: 68% of elementary school teachers and 68% of middle school teachers say they have to deal with behavior issues daily, compared with 39% of high school teachers. A third of elementary school teachers and 29% of middle school teachers say they have to help students with mental health challenges every day, compared with 19% of high school teachers.
  • Teachers in high-poverty schools: 67% of teachers in schools with high levels of poverty say they have to address behavior issues on a daily basis. Smaller majorities of those in schools with medium or low levels of poverty say the same (56% and 54%). A third of teachers in high-poverty schools say they have to help students with mental health challenges every day, compared with about a quarter of those in medium- or low-poverty schools (26% and 24%).



About Pew Research Center Pew Research Center is a nonpartisan fact tank that informs the public about the issues, attitudes and trends shaping the world. It conducts public opinion polling, demographic research, media content analysis and other empirical social science research. Pew Research Center does not take policy positions. It is a subsidiary of The Pew Charitable Trusts .
