What Are The Steps Of The Scientific Method?

Julia Simkus

Editor at Simply Psychology

BA (Hons) Psychology, Princeton University

Julia Simkus is a graduate of Princeton University with a Bachelor of Arts in Psychology. She began a Master's degree in Counseling for Mental Health and Wellness in September 2023. Julia's research has been published in peer-reviewed journals.


Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD, is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.

Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


Science is not just knowledge. It is also a method for obtaining knowledge. Scientific understanding is organized into theories.

The scientific method is a step-by-step process used by researchers and scientists to determine if there is a relationship between two or more variables. Psychologists use this method to conduct psychological research, gather data, process information, and describe behaviors.

It involves careful observation, asking questions, formulating hypotheses, experimental testing, and refining hypotheses based on experimental findings.

How it is Used

The scientific method can be applied broadly in science across many different fields, such as chemistry, physics, geology, and psychology. In a typical application of this process, a researcher will develop a hypothesis, test this hypothesis, and then modify the hypothesis based on the outcomes of the experiment.

The process is then repeated with the modified hypothesis until the results align with the observed phenomena. Detailed steps of the scientific method are described below.

Keep in mind that the scientific method does not have to follow this fixed sequence of steps; rather, these steps represent a set of general principles or guidelines.

7 Steps of the Scientific Method

Psychology uses an empirical approach.

Empiricism (founded by John Locke) states that the only source of knowledge comes through our senses – e.g., sight, hearing, touch, etc.

Empirical evidence does not rely on argument or belief. Thus, empiricism is the view that all knowledge is based on or may come from direct observation and experience.

The empiricist approach of gaining knowledge through experience quickly became the scientific approach and greatly influenced the development of physics and chemistry in the 17th and 18th centuries.

Steps of the Scientific Method

Step 1: Make an Observation (Theory Construction)

Every researcher starts at the very beginning. Before diving in and exploring something, one must first determine what they will study – it seems simple enough!

By making observations, researchers can establish an area of interest. Once this topic of study has been chosen, a researcher should review existing literature to gain insight into what has already been tested and determine what questions remain unanswered.

This review reveals what is already understood about the topic, which questions remain open, and whether the researcher is in a position to answer them.

In practice, a literature review may involve examining a substantial amount of material, from academic journal articles to books dating back decades. The most relevant of this background information is later summarized in the introduction or abstract of the published study.

The background material and knowledge will help the researcher with the first significant step in conducting a psychology study, which is formulating a research question.

This is the inductive phase of the scientific process. Observations yield information that is used to formulate theories as explanations. A theory is a well-developed set of ideas that propose an explanation for observed phenomena.

Inductive reasoning moves from specific premises to a general conclusion. It starts with observations of phenomena in the natural world and derives a general law.

Step 2: Ask a Question

Once a researcher has made observations and conducted background research, the next step is to ask a scientific question. A scientific question must be defined, testable, and measurable.

A useful way to frame a scientific question is: “What is the effect of…?” or “How does X affect Y?”

To answer an experimental question, a researcher must identify two variables: the independent and dependent variables.

The independent variable is the variable manipulated (the cause), and the dependent variable is the variable being measured (the effect).

An example of a research question could be, “Is handwriting or typing more effective for retaining information?” Answering the research question and proposing a relationship between the two variables is discussed in the next step.
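
Before moving on, here is a minimal sketch (in Python, with invented participant data) of how the independent variable (study method) and dependent variable (recall score) for this example question might be laid out and summarized:

```python
# Hypothetical data for the handwriting vs. typing question.
# Independent variable (IV): "method" -- what the researcher manipulates.
# Dependent variable (DV): "recall_score" -- what the researcher measures.
records = [
    {"participant": 1, "method": "handwriting", "recall_score": 14},
    {"participant": 2, "method": "typing", "recall_score": 11},
    {"participant": 3, "method": "handwriting", "recall_score": 16},
    {"participant": 4, "method": "typing", "recall_score": 12},
]

# Group the DV by the IV so the two conditions can be compared later.
by_method = {}
for row in records:
    by_method.setdefault(row["method"], []).append(row["recall_score"])

for method, scores in by_method.items():
    print(method, "mean recall:", sum(scores) / len(scores))
```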

Step 3: Form a Hypothesis (Make Predictions)

A hypothesis is an educated guess about the relationship between two or more variables. A hypothesis is an attempt to answer your research question based on prior observation and background research. Theories tend to be too complex to be tested all at once; instead, researchers create hypotheses to test specific aspects of a theory.

For example, a researcher might ask about the connection between sleep and educational performance. Do students who get less sleep perform worse on tests at school?

To formulate a reasonable hypothesis, it is crucial to consider the different questions one might have about a particular topic and how the suspected causal relationships could be investigated.

It is important that the hypothesis is both testable against reality and falsifiable. This means that it can be tested through an experiment and can be proven wrong.

The falsification principle, proposed by Karl Popper, is a way of demarcating science from non-science. It suggests that for a theory to be considered scientific, it must be able to be tested and conceivably proven false.

To test a hypothesis, we first assume that there is no difference between the populations from which the samples were taken. This is known as the null hypothesis and predicts that the independent variable will not influence the dependent variable.

Examples of “if…then…” Hypotheses:

  • If one gets less than 6 hours of sleep, then one will do worse on tests than if one obtains more rest.
  • If one drinks lots of water before going to bed, one will have to use the bathroom often at night.
  • If one exercises and lifts weights, then one’s body will begin to build muscle.

The research hypothesis is often called the alternative hypothesis and predicts what change(s) will occur in the dependent variable when the independent variable is manipulated.

It states that the results are not due to chance and that they are significant in terms of supporting the theory being investigated.

Although a scientific hypothesis can be stated in many ways, hypotheses are usually structured as “if…then…” statements.

Step 4: Run an Experiment (Gather Data)

The next step in the scientific method is to test your hypothesis and collect data. A researcher will design an experiment to test the hypothesis and gather data that will either support or refute the hypothesis.

The exact research methods used to examine a hypothesis depend on what is being studied. A psychologist might utilize two primary forms of research: experimental research and descriptive research.

The scientific method is objective in that researchers do not let preconceived ideas or biases influence the collection of data, and systematic in that experiments are conducted in a logical way.

Experimental Research

Experimental research is used to investigate cause-and-effect associations between two or more variables. This type of research systematically controls an independent variable and measures its effect on a specified dependent variable.

Experimental research involves manipulating an independent variable and measuring the effect(s) on the dependent variable. Repeating the experiment multiple times is important to confirm that your results are accurate and consistent.

One of the significant advantages of this method is that it permits researchers to determine whether changes in one variable cause changes in another.

While experiments in psychology typically have many moving parts (and can be relatively complex), even a simple experiment rests on the same foundation: it allows researchers to identify cause-and-effect relationships between variables.

Most simple experiments use a control group, which involves those who do not receive the treatment, and an experimental group, which involves those who do receive the treatment.

An example of experimental research would be when a pharmaceutical company wants to test a new drug. They give one group a placebo (control group) and the other the actual pill (experimental group).
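
As a rough sketch of that design (hypothetical participant IDs, not a real trial protocol), random assignment is what makes the control and experimental groups comparable before the treatment is given:

```python
import random

# Hypothetical participant IDs; in a real trial these would come from recruitment.
participants = [f"P{i:02d}" for i in range(1, 21)]

random.seed(42)  # fixed seed so this illustration is reproducible
random.shuffle(participants)

half = len(participants) // 2
control_group = participants[:half]       # receives the placebo
experimental_group = participants[half:]  # receives the actual pill

print("Control (placebo):", control_group)
print("Experimental (drug):", experimental_group)
```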

Descriptive Research

Descriptive research is generally used when it is challenging or even impossible to control the variables in question. Examples of descriptive research include naturalistic observation, case studies, and correlational studies.

One example of descriptive research is the phone surveys that marketers often use. While they typically do not allow researchers to identify cause and effect, correlational studies are quite common in psychology research. They make it possible to spot associations between distinct variables and to measure the strength of those relationships.
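
For instance, the strength of such an association is often summarized with a correlation coefficient. The sketch below uses made-up paired measurements and SciPy's Pearson correlation; it quantifies how strongly two variables move together, not whether one causes the other:

```python
from scipy import stats

# Hypothetical paired measurements for a correlational study
# (e.g., hours of daily phone use and a self-reported stress score).
phone_hours = [1.0, 2.5, 3.0, 4.5, 5.0, 6.5, 7.0, 8.0]
stress_score = [2, 3, 3, 5, 4, 6, 7, 7]

r, p_value = stats.pearsonr(phone_hours, stress_score)
print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")
# A large positive r means the variables rise together, but correlation
# alone cannot establish which variable (if either) causes the other.
```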

Step 5: Analyze the Data and Draw Conclusions

Once a researcher has designed and carried out the investigation and collected sufficient data, it is time to examine this information and judge what has been found. Using analyses and statistics, researchers can summarize the data, interpret the results, and draw conclusions based on this evidence.

Upon completion of the experiment, you can collect your measurements and analyze the data using statistics. Based on the outcomes, the data will either support or fail to support your hypothesis.

Analyze the Data

So, how does a researcher determine what the results of their study mean? Statistical analysis can either support or refute a researcher’s hypothesis and can also be used to determine if the conclusions are statistically significant.

When outcomes are said to be “statistically significant,” it is improbable that these results are due to luck or chance. Based on these observations, investigators must then determine what the results mean.
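
As a minimal sketch of that decision, the following compares two made-up groups of test scores (based on the sleep example from Step 3) with an independent-samples t-test and the conventional 0.05 cutoff; the data and threshold are illustrative assumptions, not results from any real study:

```python
from scipy import stats

# Hypothetical test scores: students sleeping < 6 hours vs. >= 6 hours per night.
low_sleep_scores = [62, 70, 65, 68, 64, 66, 71, 63]
high_sleep_scores = [75, 72, 78, 70, 74, 77, 73, 76]

t_stat, p_value = stats.ttest_ind(low_sleep_scores, high_sleep_scores)

alpha = 0.05  # conventional significance level
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Statistically significant: unlikely to be due to chance alone.")
else:
    print("Not statistically significant: could plausibly be due to chance.")
```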

In some cases, an experiment will support a hypothesis; in others, it will fail to do so.

What happens if the results of a psychology study do not support the researcher’s hypothesis? This does not mean that the study was worthless. Simply because the findings fail to support the researcher’s hypothesis does not mean that the research is not helpful or informative.

This kind of research plays a vital role in supporting scientists in developing unexplored questions and hypotheses to investigate in the future. After decisions have been made, the next step is to communicate the results with the rest of the scientific community.

This is an integral part of the process because it contributes to the general knowledge base and can assist other scientists in finding new research routes to explore.

If the hypothesis is not supported, a researcher should acknowledge the experiment’s results, formulate a new hypothesis, and develop a new experiment.

We must avoid any reference to results proving a theory as this implies 100% certainty, and there is always a chance that evidence may exist that could refute a theory.

Draw Conclusions and Interpret the Data

When the empirical observations disagree with the hypothesis, a number of possibilities must be considered. It might be that the theory is incorrect, in which case it needs altering so that it fully explains the data.

Alternatively, it might be that the hypothesis was poorly derived from the original theory, in which case the scientists were expecting the wrong thing to happen.

It might also be that the research was poorly conducted, or used an inappropriate method, or there were factors in play that the researchers did not consider. This will begin the process of the scientific method again.

If the hypothesis is supported, the researcher can find more evidence to support their hypothesis or look for counter-evidence to strengthen their hypothesis further.

In either scenario, the researcher should share their results with the greater scientific community.

Step 6: Share Your Results

One of the final stages of the research cycle involves the publication of the research. Once the report is written, the researcher(s) may submit the work for publication in an appropriate journal.

Usually, this is done by writing up a study description and publishing the article in a professional or academic journal. The studies and conclusions of psychological work can be seen in peer-reviewed journals such as Developmental Psychology, Psychological Bulletin, the Journal of Social Psychology, and numerous others.

Scientists should report their findings by writing up a description of their study and any subsequent findings. This enables other researchers to build upon the present research or replicate the results.

As outlined by the American Psychological Association (APA), there is a typical structure of a journal article that follows a specified format. In these articles, researchers:

  • Supply a brief narrative and background on previous research
  • Give their hypothesis
  • Specify who participated in the study and how they were chosen
  • Provide operational definitions for each variable
  • Explain the measures and methods used to collect data
  • Describe how the data collected was interpreted
  • Discuss what the outcomes mean

A detailed record of any psychological or scientific study is vital so that the steps and procedures used throughout the study are clearly explained and other researchers can repeat the experiment and attempt to replicate the results.

The editorial process used by academic and professional journals ensures that each submitted article undergoes thorough peer review, which helps confirm that the study is scientifically sound. Once published, the study becomes another piece of the current puzzle of our knowledge base on that subject.

This last step is important because all results, whether they supported or did not support the hypothesis, can contribute to the scientific community. Publication of empirical observations leads to more ideas that are tested against the real world, and so on. In this sense, the scientific process is circular.


By replicating studies, psychologists can reduce errors, validate theories, and gain a stronger understanding of a particular topic.

Step 7: Repeat the Scientific Method (Iteration)

Now, if one’s hypothesis turns out to be accurate, one can look for further evidence or for counter-evidence that tests it more stringently. If one’s hypothesis is false, one creates a new hypothesis and tries again.

One may wish to revise the original hypothesis so that a more focused experiment can be designed, or a different, more specific question can be tested.

The strength of the scientific method is that it is a comprehensive and straightforward process that scientists, and everyone else, can use over and over again.

So, draw conclusions and repeat, because the scientific method is never-ending, and no result is ever considered final.

The scientific method is a process of:

  • Making an observation.
  • Forming a hypothesis.
  • Making a prediction.
  • Experimenting to test the hypothesis.

The procedure of repeating the scientific method is crucial to science and all fields of human knowledge.

Further Information

  • Karl Popper – Falsification
  • Thomas Kuhn – Paradigm Shift
  • Positivism in Sociology: Definition, Theory & Examples
  • Is Psychology a Science?
  • Psychology as a Science (PDF)

List the 6 steps of the scientific method in order

  • Make an observation (theory construction)
  • Ask a question. A scientific question must be defined, testable, and measurable.
  • Form a hypothesis (make predictions)
  • Run an experiment to test the hypothesis (gather data)
  • Analyze the data and draw conclusions
  • Share your results so that other researchers can make new hypotheses

What is the first step of the scientific method?

The first step of the scientific method is making an observation. This involves noticing and describing a phenomenon or group of phenomena that one finds interesting and wishes to explain.

Observations can occur in a natural setting or within the confines of a laboratory. The key point is that the observation provides the initial question or problem that the rest of the scientific method seeks to answer or solve.

What is the scientific method?

The scientific method is a step-by-step process that investigators can follow to determine if there is a causal connection between two or more variables.

Psychologists and other scientists regularly suggest motivations for human behavior. On a more casual level, people judge other people’s intentions, incentives, and actions daily.

While our standard assessments of human behavior are subjective and anecdotal, researchers use the scientific method to study psychology objectively and systematically.

Researchers in every area of psychology use the scientific method to study distinct aspects of people’s thinking and behavior. This process not only allows scientists to analyze and understand various psychological phenomena, but it also provides investigators and others a way to disseminate and debate the results of their studies.

The outcomes of these studies are often reported in the popular media, which leads many people to wonder how or why researchers arrived at the conclusions they did.

Why Use the Six Steps of the Scientific Method?

The goal of scientists is to better understand the world around us. Scientific research is the most critical tool for navigating and learning about our complex world.

Without it, we would be compelled to rely solely on intuition, the authority of others, and luck. Through methodical scientific research, we can set aside our preconceptions and superstitions and gain an objective sense of ourselves and our world.

All psychological studies aim to explain, predict, and even control or impact mental behaviors or processes. So, psychologists use and repeat the scientific method (and its six steps) to perform and record essential psychological research.

Psychologists focus on understanding behavior as well as the cognitive (mental) and physiological (bodily) processes that underlie behavior.

In everyday life, people use means such as intuition and personal experience to understand the behavior of others. The hallmark of scientific research, by contrast, is evidence that supports a claim.

Scientific knowledge is empirical, meaning it is grounded in objective, tangible evidence that can be observed repeatedly, regardless of who is watching.

The scientific method is crucial because it minimizes the impact of bias or prejudice on the experimenter. Regardless of how hard one tries, even the best-intentioned scientist cannot escape bias.

Bias stems from personal opinions and cultural beliefs, meaning that everyone filters information through their own experience. Unfortunately, this “filtering” process can cause a scientist to favor one outcome over another.

For an everyday person trying to solve a minor issue at home or work, succumbing to these biases is not such a big deal and often goes unnoticed.

But in the scientific community, where results must be inspected and reproduced, bias or discrimination must be avoided.

When to Use the Six Steps of the Scientific Method?

One can use the scientific method anytime, anywhere! From the smallest conundrum to solving global problems, it is a process that can be applied to any science and any investigation.

Even if you are not considered a “scientist,” you will be surprised to know that people of all disciplines use it for all kinds of dilemmas.

Try to catch yourself the next time you come across a question, and notice how you consciously or subconsciously use the scientific method.


PrepScholar

The 6 Scientific Method Steps and How to Use Them


When you’re faced with a scientific problem, solving it can seem like an impossible prospect. There are so many possible explanations for everything we see and experience—how can you possibly make sense of them all? Science has a simple answer: the scientific method.

The scientific method is a systematic way of asking and answering questions about the world. These guiding principles give scientists a model to work through when trying to understand the world, but where did that model come from, and how does it work?

In this article, we’ll define the scientific method, discuss its long history, and cover each of the scientific method steps in detail.

What Is the Scientific Method?

At its most basic, the scientific method is a procedure for conducting scientific experiments. It’s a set model that scientists in a variety of fields can follow, going from initial observation to conclusion in a loose but concrete format.

The number of steps varies, but the process begins with an observation, progresses through an experiment, and concludes with analysis and sharing of data. One of the most important pieces of the scientific method is skepticism: the goal is to find truth, not to confirm a particular thought. That requires reevaluation and repeated experimentation, as well as examining your thinking through rigorous study.

There are in fact multiple scientific methods, as the basic structure can be easily modified. The one we typically learn about in school is the basic method, based in logic and problem solving, typically used in “hard” science fields like biology, chemistry, and physics. It may vary in other fields, such as psychology, but the basic premise of making observations, testing, and continuing to improve a theory from the results remains the same.


The History of the Scientific Method

The scientific method as we know it today is based on thousands of years of scientific study. Its development goes all the way back to ancient Mesopotamia, Greece, and India.

The Ancient World

In ancient Greece, Aristotle devised an inductive-deductive process, which weighs broad generalizations from data against conclusions reached by narrowing down possibilities from a general statement. However, he favored deductive reasoning, as it identifies causes, which he saw as more important.

Aristotle wrote a great deal about logic and many of his ideas about reasoning echo those found in the modern scientific method, such as ignoring circular evidence and limiting the number of middle terms between the beginning of an experiment and the end. Though his model isn’t the one that we use today, the reliance on logic and thorough testing are still key parts of science today.

The Middle Ages

The next big step toward the development of the modern scientific method came in the Middle Ages, particularly in the Islamic world. Ibn al-Haytham, a physicist from what we now know as Iraq, developed a method of testing, observing, and deducing for his research on vision. Al-Haytham was critical of Aristotle’s lack of inductive reasoning, which played an important role in his own research.

Other scientists, including Abū Rayhān al-Bīrūnī, Ibn Sina, and Robert Grosseteste also developed models of scientific reasoning to test their own theories. Though they frequently disagreed with one another and Aristotle, those disagreements and refinements of their methods led to the scientific method we have today.

Following those major developments, particularly Grosseteste’s work, Roger Bacon developed his own cycle of observation (seeing that something occurs), hypothesis (making a guess about why that thing occurs), experimentation (testing that the thing occurs), and verification (an outside person ensuring that the result of the experiment is consistent).

After joining the Franciscan Order, Bacon was granted a special commission to write about science; typically, friars were not allowed to write books or pamphlets. With this commission, Bacon outlined important tenets of the scientific method, including causes of error, methods of knowledge, and the differences between speculative and experimental science. He also used his own principles to investigate the causes of a rainbow, demonstrating the method’s effectiveness.

Scientific Revolution

Throughout the Renaissance, more great thinkers became involved in devising a thorough, rigorous method of scientific study. Francis Bacon brought inductive reasoning further into the method, whereas Descartes argued that the laws of the universe meant that deductive reasoning was sufficient. Galileo’s research was also inductive reasoning-heavy, as he believed that researchers could not account for every possible variable; therefore, repetition was necessary to eliminate faulty hypotheses and experiments.

All of this led to the birth of the Scientific Revolution, which took place during the sixteenth and seventeenth centuries. In 1660, a group of philosophers and physicians joined together to work on scientific advancement. After approval from England’s crown, the group became known as the Royal Society, which helped create a thriving scientific community and an early academic journal to help introduce rigorous study and peer review.

Previous generations of scientists had touched on the importance of induction and deduction, but Sir Isaac Newton proposed that both were equally important. This contribution helped establish the importance of multiple kinds of reasoning, leading to more rigorous study.

As science began to splinter into separate areas of study, it became necessary to define different methods for different fields. Karl Popper was a leader in this area—he established that science could be subject to error, sometimes intentionally. This was particularly tricky for “soft” sciences like psychology and social sciences, which require different methods. Popper’s theories furthered the divide between sciences like psychology and “hard” sciences like chemistry or physics.

Paul Feyerabend argued that Popper’s methods were too restrictive for certain fields, and followed a less restrictive method hinged on “anything goes,” as great scientists had made discoveries without the Scientific Method. Feyerabend suggested that throughout history scientists had adapted their methods as necessary, and that sometimes it would be necessary to break the rules. This approach suited social and behavioral scientists particularly well, leading to a more diverse range of models for scientists in multiple fields to use.


The Scientific Method Steps

Though different fields may have variations on the model, the basic scientific method is as follows:

#1: Make Observations 

Notice something, such as the air temperature during the winter, what happens when ice cream melts, or how your plants behave when you forget to water them.

#2: Ask a Question

Turn your observation into a question. Why is the temperature lower during the winter? Why does my ice cream melt? Why does my toast always fall butter-side down?

This step can also include doing some research. You may be able to find answers to these questions already, but you can still test them!

#3: Make a Hypothesis

A hypothesis is an educated guess of the answer to your question. Why does your toast always fall butter-side down? Maybe it’s because the butter makes that side of the bread heavier.

A good hypothesis leads to a prediction that you can test, phrased as an if/then statement. In this case, we can pick something like, “If toast is buttered, then it will hit the ground butter-first.”

#4: Experiment

Your experiment is designed to test whether your prediction about what will happen is true. A good experiment will test one variable at a time. For example, we’re trying to test whether butter weighs down one side of toast, making it more likely to hit the ground first.

The unbuttered toast serves as our control condition. If we determine the chance that a slice of unbuttered toast, marked with a dot, will hit the ground on a particular side, we can compare those results to our buttered toast to see if there’s a correlation between the presence of butter and which way the toast falls.

If we decided not to toast the bread, that would be introducing a new question—whether or not toasting the bread has any impact on how it falls. Since that’s not part of our test, we’ll stick with determining whether the presence of butter has any impact on which side hits the ground first.

#5: Analyze Data

After our experiment, we discover that both buttered toast and unbuttered toast have a 50/50 chance of hitting the ground on the buttered or marked side when dropped from a consistent height, straight down. It looks like our hypothesis was incorrect—it’s not the butter that makes the toast hit the ground in a particular way, so it must be something else.
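
One way to check that conclusion is a binomial test against a 50/50 expectation. This sketch assumes hypothetical drop counts (SciPy 1.7 or later provides binomtest):

```python
from scipy import stats

# Hypothetical results: out of 40 drops from a consistent height,
# the buttered toast landed butter-side down 21 times.
butter_side_down = 21
total_drops = 40

result = stats.binomtest(butter_side_down, n=total_drops, p=0.5)
print(f"Observed proportion: {butter_side_down / total_drops:.2f}")
print(f"p-value against a 50/50 chance: {result.pvalue:.3f}")
# A large p-value means the data are consistent with chance, matching the
# conclusion that butter alone doesn't determine which side lands first.
```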

Since we didn’t get the desired result, it’s back to the drawing board. Our hypothesis wasn’t correct, so we’ll need to start fresh. Now that you think about it, your toast seems to hit the ground butter-first when it slides off your plate, not when you drop it from a consistent height. That can be the basis for your new experiment.

#6: Communicate Your Results

Good science needs verification. Your experiment should be replicable by other people, so you can put together a report about how you ran your experiment to see if other people’s findings are consistent with yours.

This may be useful for class or a science fair. Professional scientists may publish their findings in scientific journals, where other scientists can read and attempt their own versions of the same experiments. Being part of a scientific community helps your experiments be stronger because other people can see if there are flaws in your approach—such as if you tested with different kinds of bread, or sometimes used peanut butter instead of butter—that can lead you closer to a good answer.


A Scientific Method Example: Falling Toast

We’ve run through a quick recap of the scientific method steps, but let’s look a little deeper by trying again to figure out why toast so often falls butter side down.

#1: Make Observations

At the end of our last experiment, where we learned that butter doesn’t actually make toast more likely to hit the ground on that side, we remembered that the times when our toast hits the ground butter side first are usually when it’s falling off a plate.

The easiest question we can ask is, “Why is that?”

We can actually search this online and find a pretty detailed answer as to why this is true. But we’re budding scientists—we want to see it in action and verify it for ourselves! After all, good science should be replicable, and we have all the tools we need to test out what’s really going on.

Why do we think that buttered toast hits the ground butter-first? We know it’s not because it’s heavier, so we can strike that out. Maybe it’s because of the shape of our plate?

That’s something we can test. We’ll phrase our hypothesis as, “If my toast slides off my plate, then it will fall butter-side down.”

Just seeing that toast falls off a plate butter-side down isn’t enough for us. We want to know why, so we’re going to take things a step further—we’ll set up a slow-motion camera to capture what happens as the toast slides off the plate.

We’ll run the test ten times, each time tilting the same plate until the toast slides off. We’ll make note of each time the butter side lands first and see what’s happening on the video so we can see what’s going on.

When we review the footage, we’ll likely notice that the bread starts to flip when it slides off the edge, changing how it falls in a way that didn’t happen when we dropped it ourselves.

That answers our question, but it’s not the complete picture —how do other plates affect how often toast hits the ground butter-first? What if the toast is already butter-side down when it falls? These are things we can test in further experiments with new hypotheses!

Now that we have results, we can share them with others who can verify our results. As mentioned above, being part of the scientific community can lead to better results. If your results were wildly different from the established thinking about buttered toast, that might be cause for reevaluation. If they’re the same, they might lead others to make new discoveries about buttered toast. At the very least, you have a cool experiment you can share with your friends!

Key Scientific Method Tips

Though science can be complex, the benefit of the scientific method is that it gives you an easy-to-follow means of thinking about why and how things happen. To use it effectively, keep these things in mind!

Don’t Worry About Proving Your Hypothesis

One of the important things to remember about the scientific method is that it’s not necessarily meant to prove your hypothesis right. It’s great if you do manage to guess the reason for something right the first time, but the ultimate goal of an experiment is to find the true reason for your observation to occur, not to prove your hypothesis right.

Good science sometimes means that you’re wrong. That’s not a bad thing—a well-designed experiment with an unanticipated result can be just as revealing, if not more, than an experiment that confirms your hypothesis.

Be Prepared to Try Again

If the data from your experiment doesn’t match your hypothesis, that’s not a bad thing. You’ve eliminated one possible explanation, which brings you one step closer to discovering the truth.

The scientific method isn’t something you’re meant to do exactly once to prove a point. It’s meant to be repeated and adapted to bring you closer to a solution. Even if you can demonstrate truth in your hypothesis, a good scientist will run an experiment again to be sure that the results are replicable. You can even tweak a successful hypothesis to test another factor, such as if we redid our buttered toast experiment to find out whether different kinds of plates affect whether or not the toast falls butter-first. The more we test our hypothesis, the stronger it becomes!



Melissa Brinks graduated from the University of Washington in 2014 with a Bachelor's in English with a creative writing emphasis. She has spent several years tutoring K-12 students in many subjects, including in SAT prep, to help them prepare for their college education.




Scientific Method Steps

The scientific method is a system scientists and other people use to ask and answer questions about the natural world. In a nutshell, the scientific method works by making observations, asking a question or identifying a problem, and then designing and analyzing an experiment to test a prediction of what you expect will happen. It’s a powerful analytical tool because once you draw conclusions, you may be able to answer a question and make predictions about future events.

These are the steps of the scientific method:

  • Make observations.

Sometimes this step is omitted in the list, but you always make observations before asking a question, whether you recognize it or not. You always have some background information about a topic. However, it’s a good idea to be systematic about your observations and to record them in a lab book or another way. Often, these initial observations can help you identify a question. Later on, this information may help you decide on another area of investigation of a topic.

  • Ask a question, identify a problem, or state an objective.

There are various forms of this step. Sometimes you may want to state an objective and a problem and then phrase it in the form of a question. The reason it’s good to state a question is that it’s easiest to design an experiment to answer a question. A question helps you form a hypothesis, which focuses your study.

  • Research the topic.

You should conduct background research on your topic to learn as much as you can about it. This can occur both before and after you state an objective and form a hypothesis. In fact, you may find yourself researching the topic throughout the entire process.

  • Formulate a hypothesis.

A hypothesis is a formal prediction. There are two forms of a hypothesis that are particularly easy to test. One is to state the hypothesis as an “if, then” statement. An example of an if-then hypothesis is: “If plants are grown under red light, then they will be taller than plants grown under white light.” Another good type of hypothesis is what is called a “null hypothesis” or “no difference” hypothesis. An example of a null hypothesis is: “There is no difference in the rate of growth of plants grown under red light compared with plants grown under white light.” (A short sketch of how this null hypothesis could be tested appears after this list.)

  • Design and perform an experiment to test the hypothesis.

Once you have a hypothesis, you need to find a way to test it. This involves an experiment. There are many ways to set up an experiment. A basic experiment contains variables, which are factors you can measure. The two main variables are the independent variable (the one you control or change) and the dependent variable (the one you measure to see if it is affected when you change the independent variable).

  • Record and analyze the data you obtain from the experiment.

It’s a good idea to record notes alongside your data, stating anything unusual or unexpected. Once you have the data, draw a chart, table, or graph to present your results. Next, analyze the results to understand what they mean.

  • Determine whether you accept or reject the hypothesis.

Do the results support the hypothesis or not? Keep in mind, it’s okay if the hypothesis is not supported, especially if you are testing a null hypothesis. Sometimes excluding an explanation answers your question! There is no “right” or “wrong” here. However, if you obtain an unexpected result, you might want to perform another experiment.

  • Draw a conclusion and report the results of the experiment.

What good is knowing something if you keep it to yourself? You should report the outcome of the experiment, even if it’s just in a notebook. What did you learn from the experiment?
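
To tie these steps together, here is a minimal sketch (with invented plant heights) of testing the red-light null hypothesis from the earlier step. It uses a simple permutation test: shuffle the group labels many times and see how often a mean difference at least as large as the observed one arises by chance.

```python
import random

# Hypothetical final heights (cm) of plants grown under red vs. white light.
red_light = [34.1, 36.5, 33.8, 35.9, 37.2, 34.7]
white_light = [31.0, 32.4, 30.8, 33.1, 31.9, 32.6]

observed_diff = sum(red_light) / len(red_light) - sum(white_light) / len(white_light)

random.seed(0)
pooled = red_light + white_light
n_red = len(red_light)
n_shuffles = 10_000
count_as_extreme = 0

for _ in range(n_shuffles):
    random.shuffle(pooled)
    shuffled_diff = (sum(pooled[:n_red]) / n_red
                     - sum(pooled[n_red:]) / (len(pooled) - n_red))
    if abs(shuffled_diff) >= abs(observed_diff):
        count_as_extreme += 1

p_value = count_as_extreme / n_shuffles
print(f"Observed difference: {observed_diff:.2f} cm, permutation p = {p_value:.4f}")
# A small p-value is evidence against the "no difference" null hypothesis.
```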

How Many Steps Are There?

You may be asked to list the 5 steps of the scientific method or the 6 steps of the method or some other number. There are different ways of grouping together the steps outlined here, so it’s a good idea to learn the way an instructor wants you to list the steps. No matter how many steps there are, the order is always the same.



Biology LibreTexts

2.1: The Scientific Method


Hypothesis Testing and the Scientific Method

The scientific method is a process of research with defined steps that include data collection and careful observation. The scientific method was used even in ancient times, but it was first documented by England’s Sir Francis Bacon (1561–1626) (Figure \(\PageIndex{5}\)), who set up inductive methods for scientific inquiry.

Figure \(\PageIndex{5}\): Painting depicting Sir Francis Bacon in a long cloak.

Observation

Scientific advances begin with observations. This involves noticing a pattern, either directly or indirectly from the literature. An example of a direct observation is noticing that there have been a lot of toads in your yard ever since you turned on the sprinklers, whereas an indirect observation would be reading a scientific study reporting high densities of toads in urban areas with watered lawns.

During the Vietnam War (figure \(\PageIndex{6}\)), press reports from North Vietnam documented an increasing rate of birth defects. While the credibility of this information was initially questioned by the U.S., it evoked questions about what could be causing these birth defects. Furthermore, increased incidence of certain cancers and other diseases later emerged in Vietnam veterans who had returned to the U.S. This leads us to the next step of the scientific method, the question.

Figure \(\PageIndex{6}\): A map of Vietnam, 1954–1975. Image from Bureau of Public Affairs, U.S. Government Printing Office (public domain).

The question step of the scientific method is simply asking, what explains the observed pattern? Multiple questions can stem from a single observation. Scientists and the public began to ask, what is causing the birth defects in Vietnam and diseases in Vietnam veterans? Could it be associated with the widespread military use of the herbicide Agent Orange to clear the forests (figure \(\PageIndex{7-8}\)), which helped identify enemies more easily?

Figure \(\PageIndex{7}\): Agent Orange drums in Vietnam. Image by U.S. Government (public domain).

Figure \(\PageIndex{8}\): A healthy mangrove forest (top), and another forest after application of Agent Orange. Image by unknown author (public domain).

Hypothesis and Prediction

The hypothesis is the expected answer to the question. The best hypotheses state the proposed direction of the effect (increases, decreases, etc.) and explain why the hypothesis could be true.

  • OK hypothesis: Agent Orange influences rates of birth defects and disease.
  • Better hypothesis: Agent Orange increases the incidence of birth defects and disease.
  • Best hypothesis: Agent Orange increases the incidence of birth defects and disease because these health problems have been frequently reported by individuals exposed to this herbicide.

If two or more hypotheses meet this standard, the simpler one is preferred.

Predictions stem from the hypothesis. The prediction states what results would support the hypothesis. The prediction is more specific than the hypothesis because it references the details of the experiment. For example, "If Agent Orange causes health problems, then mice experimentally exposed to TCDD, a contaminant of Agent Orange, during development will have more frequent birth defects than control mice" (figure \(\PageIndex{9}\)).

Figure \(\PageIndex{9}\): The chemical structure of TCDD (2,3,7,8-tetrachlorodibenzo-p-dioxin), which is produced when synthesizing the chemicals in Agent Orange. It contaminates Agent Orange at low but harmful concentrations. Image by Emeldir (public domain).

Hypotheses and predictions must be testable to ensure that they are valid. For example, a hypothesis that depends on what a bear thinks is not testable, because it can never be known what a bear thinks. They should also be falsifiable, meaning that they have the capacity to be tested and demonstrated to be untrue. An example of an unfalsifiable hypothesis is “Botticelli’s Birth of Venus is beautiful.” There is no experiment that might show this statement to be false. To test a hypothesis, a researcher will conduct one or more experiments designed to eliminate one or more of the hypotheses. This is important. A hypothesis can be disproven, or eliminated, but it can never be proven. Science does not deal in proofs like mathematics. If an experiment fails to disprove a hypothesis, then we find support for that explanation, but this is not to say that down the road a better explanation will not be found, or that a more carefully designed experiment will be found to falsify the hypothesis.

Hypotheses are tentative explanations and are different from scientific theories. A scientific theory is a widely-accepted, thoroughly tested and confirmed explanation for a set of observations or phenomena. Scientific theory is the foundation of scientific knowledge. In addition, in many scientific disciplines (less so in biology) there are scientific laws , often expressed in mathematical formulas, which describe how elements of nature will behave under certain specific conditions, but they do not offer explanations for why they occur.

Design an Experiment

Next, a scientific study (experiment) is planned to test the hypothesis and determine whether the results match the predictions. Each experiment will have one or more variables. The explanatory variable is what scientists hypothesize might be causing something else. In a manipulative experiment (see below), the explanatory variable is manipulated by the scientist. The response variable is the response, the variable ultimately measured in the study. Controlled variables (confounding factors) might affect the response variable, but they are not the focus of the study. Scientists attempt to standardize the controlled variables so that they do not influence the results. In our previous example, exposure to Agent Orange is the explanatory variable. It is hypothesized to cause a change in health (likelihood of having children with birth defects or developing a disease), the response variable. Many other things could affect health, including diet, exercise, and family history. These are the controlled variables.

There are two main types of scientific studies: experimental studies (manipulative experiments) and observational studies.

In a manipulative experiment, the explanatory variable is altered by the scientists, who then observe the response. In other words, the scientists apply a treatment. An example would be exposing developing mice to TCDD and comparing the rate of birth defects to a control group. The control group is a group of test subjects that are as similar as possible to all other test subjects, with the exception that they don’t receive the experimental treatment (those that do receive it are known as the experimental, treatment, or test group). The purpose of the control group is to establish what the dependent variable would be under normal conditions, in the absence of the experimental treatment. It serves as a baseline to which the test group can be compared. In this example, the control group would contain mice that were not exposed to TCDD but were otherwise handled the same way as the other mice (figure \(\PageIndex{10}\)).

Figure \(\PageIndex{10}\): Laboratory mice. In a proper scientific study, the treatment would be applied to multiple mice. Another group of mice would not receive the treatment (the control group). Image by Aaron Logan (CC-BY).

In an observational study , scientists examine multiple samples with and without the presumed cause. An example would be monitoring the health of veterans who had varying levels of exposure to Agent Orange.

Scientific studies contain many replicates. Multiple samples ensure that any observed pattern is due to the treatment rather than naturally occurring differences between individuals. A scientific study should also be repeatable , meaning that if it is conducted again, following the same procedure, it should reproduce the same general results. Additionally, multiple studies will ultimately test the same hypothesis.

Finally, the data are collected and the results are analyzed. As described in the Math Blast chapter, statistics can be used to describe and summarize the data. Statistics also provide a criterion for deciding whether the pattern in the data is strong enough to support the hypothesis.

The manipulative experiment in our example found that mice exposed to high levels of 2,4,5-T (a component of Agent Orange) or TCDD (a contaminant found in Agent Orange) during development had a cleft palate birth defect more frequently than control mice (figure \(\PageIndex{11}\)). Mice embryos were also more likely to die when exposed to TCDD compared to controls.

Figure \(\PageIndex{11}\): Cleft lip and palate, a birth defect in which these structures are split. Image by James Heilman, MD (CC-BY-SA).

An observational study found that self-reported exposure to Agent Orange was positively correlated with incidence of multiple diseases in Korean veterans of the Vietnam War, including various cancers, diseases of the cardiovascular and nervous systems, skin diseases, and psychological disorders. Note that a positive correlation simply means that the independent and dependent variables both increase or decrease together, but further data, such as the evidence provided by manipulative experiments, is needed to document a cause-and-effect relationship. (A negative correlation occurs when one variable increases as the other decreases.)

Lastly, scientists make a conclusion regarding whether the data support the hypothesis. In the case of Agent Orange, the data (that mice exposed to TCDD and 2,4,5-T had higher frequencies of cleft palate) match the prediction. Additionally, veterans exposed to Agent Orange had higher rates of certain diseases, further supporting the hypothesis. We thus find support for the hypothesis that Agent Orange increases the incidence of birth defects and disease.
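
As a hedged sketch of how count data like the cleft-palate results above might be analyzed (the numbers below are invented for illustration; the actual counts are in the cited study), Fisher's exact test compares the proportion of affected animals in the exposed and control groups:

```python
from scipy import stats

# Hypothetical counts, for illustration only:
#                     cleft palate   no cleft palate
# TCDD-exposed mice        14              26
# Control mice              2              38
table = [[14, 26],
         [2, 38]]

odds_ratio, p_value = stats.fisher_exact(table)
print(f"Odds ratio = {odds_ratio:.1f}, p = {p_value:.4f}")
# A small p-value indicates that the higher rate of cleft palate in exposed
# mice is unlikely to be due to chance, supporting the prediction.
```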

Scientific Method in Practice

In practice, the scientific method is not as rigid and structured as it might first appear. Sometimes an experiment leads to conclusions that favor a change in approach; often, an experiment brings entirely new scientific questions to the puzzle. Many times, science does not operate in a linear fashion; instead, scientists continually draw inferences and make generalizations, finding patterns as their research proceeds (figure \(\PageIndex{12}\)). Even if the hypothesis was supported, scientists may still continue to test it in different ways. For example, scientists explore the impacts of Agent Orange, examining long-term health impacts as Vietnam veterans age.

Figure \(\PageIndex{12}\): A flow chart of the scientific method: (1) make an observation; (2) ask a question about the observation; (3) propose a hypothesis that answers the question; (4) make a prediction based on the hypothesis; (5) do an experiment to test the prediction; (6) analyze the results to determine whether the hypothesis is supported. If the hypothesis is not supported, another hypothesis is made; in either case, the results are reported.

Scientific findings can influence decision making. In response to evidence regarding the effect of Agent Orange on human health, compensation is now available for Vietnam veterans who were exposed to Agent Orange and develop certain diseases. The use of Agent Orange is also banned in the U.S. Finally, the U.S. has begun cleaning sites in Vietnam that are still contaminated with TCDD.

As another simple example, an experiment might be conducted to test the hypothesis that phosphate limits the growth of algae in freshwater ponds. A series of artificial ponds are filled with water, and half of them are treated by adding phosphate each week, while the other half are treated by adding a salt that is known not to be used by algae. The explanatory variable here is the phosphate (or lack of phosphate); the experimental or treatment cases are the ponds with added phosphate, and the control ponds are those with something inert added, such as the salt. Just adding something is also a control against the possibility that adding extra matter to the pond has an effect. If the treated ponds show greater growth of algae than the control ponds, then we have found support for our hypothesis that phosphate is a limiting factor for algal growth. If they do not, then we reject our hypothesis. Be aware that rejecting one hypothesis does not determine whether or not the other hypotheses can be accepted; it simply eliminates one hypothesis that is not valid (Figure \(\PageIndex{12}\)). Using the scientific method, the hypotheses that are inconsistent with experimental data are rejected.

Institute of Medicine (US) Committee to Review the Health Effects in Vietnam Veterans of Exposure to Herbicides. (1994). Veterans and Agent Orange: Health effects of herbicides used in Vietnam. Washington, DC: National Academies Press. Chapter 2: History of the controversy over the use of herbicides.

Neubert, D., & Dillmann, I. (1972). Embryotoxic effects in mice treated with 2,4,5-trichlorophenoxyacetic acid and 2,3,7,8-tetrachlorodibenzo-p-dioxin. Naunyn-Schmiedeberg's Archives of Pharmacology, 272, 243–264.

Stellman, J. M., & Stellman, S. D. (2018). Agent Orange during the Vietnam War: The lingering issue of its civilian and military health impact. American Journal of Public Health, 108(6), 726–728.

Yi, S. W., Ohrr, H., Hong, J. S., & Yi, J. J. (2013). Agent Orange exposure and prevalence of self-reported diseases in Korean Vietnam veterans. Journal of Preventive Medicine and Public Health, 46(5), 213–225.


Contributors and Attributions

  • Modified by Kyle Whittinghill (University of Pittsburgh)

Samantha Fowler (Clayton State University), Rebecca Roush (Sandhills Community College), James Wise (Hampton University). Original content by OpenStax (CC BY 4.0; Access for free at https://cnx.org/contents/b3c1e1d2-83...4-e119a8aafbdd ).

  • Modified by Melissa Ha
  • 1.2: The Process of Science by OpenStax , is licensed CC BY
  • What is Science? from An Introduction to Geology by Chris Johnson et al. (licensed under CC-BY-NC-SA )
  • The Process of Science from Environmental Biology by Matthew R. Fisher (licensed under CC-BY )
  • Scientific Methods from Biology by John W. Kimball (licensed under CC-BY )
  • Scientific Papers from Biology by John W. Kimball ( CC-BY )
  • Environmental Science: A Canadian perspective by Bill Freedman Chapter 2: Science as a Way of Understanding the Natural World

Chemistry LibreTexts

1.3: The Scientific Method - How Chemists Think


Learning Objectives

  • Identify the components of the scientific method.

Scientists search for answers to questions and solutions to problems by using a procedure called the scientific method. This procedure consists of making observations, formulating hypotheses, and designing experiments, which in turn lead to additional observations, hypotheses, and experiments in repeated cycles (Figure \(\PageIndex{1}\)).


Step 1: Make observations

Observations can be qualitative or quantitative. Qualitative observations describe properties or occurrences in ways that do not rely on numbers. Examples of qualitative observations include the following: "the outside air temperature is cooler during the winter season," "table salt is a crystalline solid," "sulfur crystals are yellow," and "dissolving a penny in dilute nitric acid forms a blue solution and a brown gas." Quantitative observations are measurements, which by definition consist of both a number and a unit. Examples of quantitative observations include the following: "the melting point of crystalline sulfur is 115.21° Celsius," and "35.9 grams of table salt—the chemical name of which is sodium chloride—dissolve in 100 grams of water at 20° Celsius." For the question of the dinosaurs’ extinction, the initial observation was quantitative: iridium concentrations in sediments dating to 66 million years ago were 20–160 times higher than normal.

Step 2: Formulate a hypothesis

After deciding to learn more about an observation or a set of observations, scientists generally begin an investigation by forming a hypothesis, a tentative explanation for the observation(s). The hypothesis may not be correct, but it puts the scientist’s understanding of the system being studied into a form that can be tested. For example, the observation that we experience alternating periods of light and darkness corresponding to observed movements of the sun, moon, clouds, and shadows is consistent with either one of two hypotheses:

  • Earth rotates on its axis every 24 hours, alternately exposing one side to the sun.
  • The sun revolves around Earth every 24 hours.

Suitable experiments can be designed to choose between these two alternatives. For the disappearance of the dinosaurs, the hypothesis was that the impact of a large extraterrestrial object caused their extinction. Unfortunately (or perhaps fortunately), this hypothesis does not lend itself to direct testing by any obvious experiment, but scientists can collect additional data that either support or refute it.

Step 3: Design and perform experiments

After a hypothesis has been formed, scientists conduct experiments to test its validity. Experiments are systematic observations or measurements, preferably made under controlled conditions, that is, under conditions in which only a single variable changes.

Step 4: Accept or modify the hypothesis

A properly designed and executed experiment enables a scientist to determine whether or not the original hypothesis is valid. If the hypothesis is valid, the scientist can proceed to step 5. In other cases, experiments often demonstrate that the hypothesis is incorrect or that it must be modified and requires further experimentation.

Step 5: Development into a law and/or theory

More experimental data are then collected and analyzed, at which point a scientist may begin to think that the results are sufficiently reproducible (i.e., dependable) to merit being summarized in a law, a verbal or mathematical description of a phenomenon that allows for general predictions. A law simply states what happens; it does not address the question of why.

One example of a law is the law of definite proportions, discovered by the French scientist Joseph Proust (1754–1826), which states that a chemical substance always contains the same proportions of elements by mass. Thus, sodium chloride (table salt) always contains the same proportion by mass of sodium to chlorine, in this case 39.34% sodium and 60.66% chlorine by mass, and sucrose (table sugar) is always 42.11% carbon, 6.48% hydrogen, and 51.41% oxygen by mass.
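Those percentages follow directly from the atomic masses of the elements, so they are easy to check. Below is a short sketch that recomputes the sodium chloride figures; the rounded atomic masses are standard values supplied here for illustration, not quoted from this text.

```python
# Recompute the mass percentages of sodium chloride from atomic masses.
mass_Na = 22.99   # rounded standard atomic mass of sodium, g/mol
mass_Cl = 35.45   # rounded standard atomic mass of chlorine, g/mol

formula_mass = mass_Na + mass_Cl          # one Na and one Cl per NaCl unit

percent_Na = 100 * mass_Na / formula_mass
percent_Cl = 100 * mass_Cl / formula_mass

print(f"sodium:   {percent_Na:.2f}%")     # about 39.34%
print(f"chlorine: {percent_Cl:.2f}%")     # about 60.66%
```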

Whereas a law states only what happens, a theory attempts to explain why nature behaves as it does. Laws are unlikely to change greatly over time unless a major experimental error is discovered. In contrast, a theory, by definition, is incomplete and imperfect, evolving with time to explain new facts as they are discovered.

Because scientists can enter the cycle shown in Figure \(\PageIndex{1}\) at any point, the actual application of the scientific method to different topics can take many different forms. For example, a scientist may start with a hypothesis formed by reading about work done by others in the field, rather than by making direct observations.

Example \(\PageIndex{1}\)

Classify each statement as a law, a theory, an experiment, a hypothesis, or an observation.

  • Ice always floats on liquid water.
  • Birds evolved from dinosaurs.
  • Hot air is less dense than cold air, probably because the components of hot air are moving more rapidly.
  • When 10 g of ice were added to 100 mL of water at 25°C, the temperature of the water decreased to 15.5°C after the ice melted.
  • The ingredients of Ivory soap were analyzed to see whether it really is 99.44% pure, as advertised.

Solution

  • This is a general statement of a relationship between the properties of liquid and solid water, so it is a law.
  • This is a possible explanation for the origin of birds, so it is a hypothesis.
  • This is a statement that tries to explain the relationship between the temperature and the density of air based on fundamental principles, so it is a theory.
  • The temperature is measured before and after a change is made in a system, so these are observations.
  • This is an analysis designed to test a hypothesis (in this case, the manufacturer’s claim of purity), so it is an experiment.

Exercise \(\PageIndex{1}\) 

Classify each statement as a law, a theory, an experiment, a hypothesis, a qualitative observation, or a quantitative observation.

  • Measured amounts of acid were added to a Rolaids tablet to see whether it really “consumes 47 times its weight in excess stomach acid.”
  • Heat always flows from hot objects to cooler ones, not in the opposite direction.
  • The universe was formed by a massive explosion that propelled matter into a vacuum.
  • Michael Jordan is the greatest pure shooter to ever play professional basketball.
  • Limestone is relatively insoluble in water, but dissolves readily in dilute acid with the evolution of a gas.

The scientific method is a method of investigation involving experimentation and observation to acquire new knowledge, solve problems, and answer questions. The key steps in the scientific method include the following:

  • Step 1: Make observations.
  • Step 2: Formulate a hypothesis.
  • Step 3: Test the hypothesis through experimentation.
  • Step 4: Accept or modify the hypothesis.
  • Step 5: Develop into a law and/or a theory.


Pfeiffer Library

The Scientific Method


According to Kosso (2011), the scientific method is a specific step-by-step method that aims to answer a question or prove a hypothesis. It is the process used among all scientific disciplines and is used to conduct both small and large experiments. It has been used for centuries to solve scientific problems and identify solutions. While the terminology can differ across disciplines, the scientific method follows these six steps (Larson, 2015):

  • Make an observation
  • Conduct preliminary research
  • Form a hypothesis
  • Conduct an experiment
  • Analyze results
  • Draw conclusions


Research Starters  is a feature available when searching  DragonQuest . You may notice when you enter a generic search term into DragonQuest that a research starter is your first result.

If available, research starters appear at the top of your search results in DragonQuest.

Research Starter  entries are similar to a Wikipedia entry of the topic, but  Research Starters  are pulled from quality sources such as Salem Press, Encyclopedia Britannica, and American National Biography.  Research Starters  can be a great place to begin your research, if you're not yet sure about your topic details.  There are several Research Starters related to the steps of the scientific method:

  • Scientific method
  • Research methodology
  • Research methods

Using Research Starters

To use  Research Starters,  click on the title just as you would for any other  DragonQuest  entry. You will then find a broad overview of the topic. This entry is great for finding

  • Subtopics that can narrow your searching
  • Background information to support your claims
  • Sources you can use and cite in your research

We do not recommend using Research Starters as a source itself, however, because of the difficulties in citation.

Citing Research Starters

Using  Research Starters  as an actual source is not recommended.

Just as we do not recommend using Wikipedia as a source, the same applies to Research Starters. Use Research Starters as a starting point to get ideas about how to narrow your search, and use its bibliography to find sources you can cite.

We recommend this because citing  Research Starters  can be tricky as sometimes it will have insufficient bibliographic data to create your reference page.

To begin the scientific method, you have to observe something and identify a problem.  You can observe basically anything, such as a person, place, object, situation, or environment.  Examples of an observation include:

  • "My cotton shirt gets more wet in the rain than my friend's silk shirt."
  • "I feel more tired after eating a cookie than I do after eating a salad."

Once you have made an observation, it will lead to creating a scientific question (Larson, 2015).  The question focuses on a specific part of your observation:

  • Why does a cotton shirt get more wet in the rain than a silk shirt?
  • Why do I feel more tired after eating a cookie than after eating a salad?

Scientific questions lead to research and crafting a hypothesis, which are the next steps in the scientific method.

Once you identify a topic and question from your observations, it is time to conduct some preliminary research.  It is meant to locate a potential answer to your research question or give you ideas on how to draft your hypothesis.  In some cases, it can also help you design an experiment once you determine your hypothesis.  It is a good idea to research your topic or problem using the library and/or the Internet.  It is also recommended to check out different source types for information, such as:

  • Academic journals
  • News reports
  • Audiovisual media (radio, podcasts, etc.)

Background Information

It is important to gather lots of background information on your topic or problem so you understand the topic thoroughly.  It is also critical to find and understand what others have already written about your research question.  This prevents you from experimenting on an issue that already has a definitive answer.

If you need assistance in conducting preliminary research, view our guide on locating background information at the bottom of this box.

If you are unsure where you should start researching, you can view our list of science databases through our  A-Z database list  by selecting "Science" from the subjects dropdown menu.  We also have several research guides that cover topics in the sciences, which can be viewed on our Help page.

Not sure where to begin your research?  Try searching a database in our A-Z list or using one of our  EBSCOhost databases !


When you have gathered enough information on your research question and determined that your question has not already been answered, you can form a hypothesis.  A hypothesis is an educated guess or possible explanation meant to answer your research question.  It often follows the "if, then..." sentence structure because it explains a cause/effect relationship between two variables.  A hypothesis is supposed to form a relationship between the two variables.

  • Example hypothesis: "If I soak a penny in lemon juice, then it will look cleaner than if I soak it in soap."

In this example, the hypothesis describes a relationship between a penny and different cleaning agents. While crafting your hypothesis, it is important to make sure that your "then" statement is something that can be measured, either quantitatively or qualitatively. In the above example, the experiment would measure the cleanliness of the penny after it is exposed to either soap or lemon juice.


The fourth step in the scientific method is the experiment stage.  This is where you craft an experiment to test your hypothesis.  The point of an experiment is to find out how changing one thing impacts another (Larson, 2015).  To test a hypothesis, you must implement and change different variables in your experiment.

Anything that you modify in an experiment is considered a variable.  There are two types of variables:

  • Independent variable: The variable that is modified in an experiment so that it has a direct impact on the dependent variable. It is the variable that you control in the experiment (Larson, 2015).
  • Dependent variable:  The variable that is being tested in an experiment, whose measure is directly related to the change of the independent variable (the dependent variable is dependent on the independent variable).  This is what you measure to prove or disprove your hypothesis.

Every experiment must also have a control group, which is a group that remains unchanged for the duration of the experiment (Larson, 2015). It provides a baseline against which to compare the results for the dependent variable. In the case of the sample hypothesis above, the control would be a penny that does not receive any cleaning agent.
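To show how these terms map onto the penny example, here is a minimal sketch in Python; the cleanliness ratings and the idea of scoring cleanliness on a 1-10 scale are assumptions made purely for illustration.

```python
# Penny-cleaning experiment sketch.
# Independent variable: the cleaning agent each penny is soaked in.
# Dependent variable: a cleanliness rating (hypothetically scored 1-10).
# Control group: pennies that receive no cleaning agent at all.
results = {
    "lemon juice": [8, 9, 8],   # ratings for pennies soaked in lemon juice
    "soap":        [6, 5, 6],   # ratings for pennies soaked in soap
    "none":        [2, 3, 2],   # control pennies, no cleaning agent
}

for agent, ratings in results.items():
    mean_rating = sum(ratings) / len(ratings)
    print(f"{agent:12s} mean cleanliness: {mean_rating:.1f}")

# Comparing each treatment against the control shows whether the agents had
# any effect at all; comparing lemon juice against soap addresses the
# original hypothesis directly.
```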

Research Methods

There are several ways to conduct an experiment. The approach you take depends on your own strengths and weaknesses, the nature of your topic and hypothesis, and the resources you have available to conduct the experiment.


When designing your experiment:

  • Make a list of materials that you will need to conduct your experiment.  If you will need to purchase additional materials, create a budget.
  • Consider the best locations for your experiment, especially if outside factors (weather, etc.) may affect the results.
  • If you need additional funding for an experiment, consider writing a research proposal for the entity from which you want to receive funding.



When conducting your experiment:

  • Record or write down your experimental procedure so that each variable is tested equally. It is likely that you will conduct your experiment more than once, so it is important that it is conducted exactly the same each time (Larson, 2015).
  • Be aware of outside factors that could impact your experiment and results.  Outside factors could include weather patterns, time of day, location, and temperature.
  • Wear protective equipment to keep yourself safe during the experiment.
  • Record your results on a transferable platform (Google Spreadsheets, Microsoft Excel, etc.), especially if you plan on running statistical analyses on your data using a computer program. You should also back up your data electronically so you do not lose it!
  • Use a table or chart to record data by hand.  The x-axis (row) of a chart should represent the independent variable, while the y-axis (column) should represent the dependent variable (Riverside Local Schools, n.d.).
  • Be prepared for unexpected results.  Some experiments can unexpectedly "go wrong" resulting in different data than planned.  Do not feel defeated if this happens in your experiment!  Once the tests are completed, you can analyze and determine why the experiment went differently.

Before arriving at a conclusion, you must look at all your evidence and analyze it. Data analysis is "the process of interpreting the meaning of the data we have collected, organized, and displayed in the form of a chart or graph" (Riverside Local Schools, n.d., p. 1). If you did not create a graph or chart while recording your data, you may choose to create one to analyze your results. Or, you may choose to create a more elaborate chart from the one you used in the experiment. Graphs and charts organize data so that you can easily identify trends or patterns. Patterns are similarities, differences, and relationships that tell you the "big picture" of an experiment (Riverside Local Schools, n.d.).

Questions to Consider

There are several things to consider when analyzing your data:

  • What exactly am I trying to discover from this data?
  • How does my data relate to my hypothesis?
  • Are there any noticeable patterns or trends in the data?  If so, what do these patterns mean?
  • Is my data good quality?  Was my data skewed in any way?
  • Were there any limitations to retrieving this data during the experiment?

Once you have identified patterns or trends and considered the above questions, you can summarize your findings to draw your final conclusions.
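As a rough sketch of what this step can look like in practice, the snippet below organizes invented results from the penny example into a table and summarizes them; pandas is assumed here only as a convenient tool, and any spreadsheet would do the same job.

```python
# Organize hypothetical experimental results in a table, then summarize them
# to look for the pattern predicted by the hypothesis.
import pandas as pd

data = pd.DataFrame({
    "cleaning_agent": ["lemon juice"] * 3 + ["soap"] * 3 + ["none"] * 3,
    "cleanliness":    [8, 9, 8, 6, 5, 6, 2, 3, 2],   # invented ratings, 1-10
})

summary = data.groupby("cleaning_agent")["cleanliness"].agg(["mean", "std"])
print(summary)

# If the lemon juice mean is clearly higher than the soap mean, and both
# exceed the untreated control, the pattern supports the hypothesis;
# otherwise the hypothesis is not supported.
```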

Drawing conclusions is the final step in the scientific method.  It gives you the opportunity to combine your findings and communicate them to your audience.  A conclusion is "a summary of what you have learned from the experiment" (Riverside Local Schools, p. 1).  To draw a conclusion, you will compare your data analysis to your hypothesis and make a statement based on the comparison.  Your conclusion should answer the following questions:

  • Was your hypothesis correct?
  • Does your data support your hypothesis?
  • If your hypothesis was incorrect, what did you learn from the experiment?
  • Do you need to change a variable if the experiment is repeated?
  • Is your data coherent and easy to understand?
  • If the experiment failed, what did you learn?

A strong conclusion should also (American Psychological Association, 2021):

  • Be justifiable by the data you collected.
  • Provide generalizations that are limited to the sample you studied.
  • Relate your preliminary research (background information) to your experiment and state how your conclusion is relevant.
  • Be logical and address any potential discrepancies.

Reporting Your Results

Once you have drawn your conclusions, you will communicate your results to others.  This can be in the form of a formal research paper, presentation, or assignment that you submit to an instructor for a grade.  If you are looking to submit an original work to an academic journal, it will require approval and undergo peer-review before being published.  However, it is important to be aware of predatory publishers.  You can view our guide on predatory publishing below.


What Is a Hypothesis? (Science)

If..., Then...


A hypothesis (plural hypotheses) is a proposed explanation for an observation. The definition depends on the subject.

In science, a hypothesis is part of the scientific method. It is a prediction or explanation that is tested by an experiment. Observations and experiments may disprove a scientific hypothesis, but can never entirely prove one.

In the study of logic, a hypothesis is an if-then proposition, typically written in the form, "If X , then Y ."

In common usage, a hypothesis is simply a proposed explanation or prediction, which may or may not be tested.

Writing a Hypothesis

Most scientific hypotheses are proposed in the if-then format because it's easy to design an experiment to see whether or not a cause and effect relationship exists between the independent variable and the dependent variable . The hypothesis is written as a prediction of the outcome of the experiment.

Null Hypothesis and Alternative Hypothesis

Statistically, it's easier to show there is no relationship between two variables than to support their connection. So, scientists often propose the null hypothesis . The null hypothesis assumes changing the independent variable will have no effect on the dependent variable.

In contrast, the alternative hypothesis suggests changing the independent variable will have an effect on the dependent variable. Designing an experiment to test this hypothesis can be trickier because there are many ways to state an alternative hypothesis.

For example, consider a possible relationship between getting a good night's sleep and getting good grades. The null hypothesis might be stated: "The number of hours of sleep students get is unrelated to their grades" or "There is no correlation between hours of sleep and grades."

An experiment to test this hypothesis might involve collecting data on each student's average hours of sleep and their grades. If students who get eight hours of sleep generally do better than students who get four hours or 10 hours of sleep, the null hypothesis might be rejected.

But the alternative hypothesis is harder to propose and test. The most general statement would be: "The amount of sleep students get affects their grades." The hypothesis might also be stated as "If you get more sleep, your grades will improve" or "Students who get nine hours of sleep have better grades than those who get more or less sleep."

In an experiment, you can collect the same data, but the statistical analysis is less likely to give you a high confidence limit.

Usually, a scientist starts out with the null hypothesis. From there, it may be possible to propose and test an alternative hypothesis, to narrow down the relationship between the variables.
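A rough sketch of how the sleep-and-grades null hypothesis might be tested numerically is shown below; the data are invented, and a Pearson correlation test is only one of several reasonable choices of analysis.

```python
# Test the null hypothesis "hours of sleep are unrelated to grades"
# against invented data for ten students.
from scipy import stats

hours_of_sleep = [4, 5, 6, 6, 7, 7, 8, 8, 9, 10]
grades         = [68, 70, 74, 72, 80, 78, 85, 83, 84, 79]

r, p_value = stats.pearsonr(hours_of_sleep, grades)
print(f"correlation r = {r:.2f}, p = {p_value:.3f}")

# A small p-value (commonly < 0.05) means we reject the null hypothesis of
# no relationship; a large one means we fail to reject it. Rejecting the
# null does not tell us which specific alternative hypothesis is correct.
```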

Example of a Hypothesis

Examples of a hypothesis include:

  • If you drop a rock and a feather, (then) they will fall at the same rate.
  • Plants need sunlight in order to live. (if sunlight, then life)
  • Eating sugar gives you energy. (if sugar, then energy)

The scientific method


Introduction

At its core, the scientific method can be summarized as the following sequence of steps:

  • Make an observation.
  • Ask a question.
  • Form a hypothesis , or testable explanation.
  • Make a prediction based on the hypothesis.
  • Test the prediction.
  • Iterate: use the results to make new hypotheses or predictions.

Scientific method example: Failure to toast

1. Make an observation.

  • Observation: the toaster won't toast.

2. Ask a question.

  • Question: Why won't my toaster toast?

3. Propose a hypothesis.

  • Hypothesis: Maybe the outlet is broken.

4. Make predictions.

  • Prediction: If I plug the toaster into a different outlet, then it will toast the bread.

5. Test the predictions.

  • Test of prediction: Plug the toaster into a different outlet and try again.
  • If the toaster does toast, then the hypothesis is supported—likely correct.
  • If the toaster doesn't toast, then the hypothesis is not supported—likely wrong.

6. Iterate.

  • Iteration time!
  • If the hypothesis was supported, we might do additional tests to confirm it, or revise it to be more specific. For instance, we might investigate why the outlet is broken.
  • If the hypothesis was not supported, we would come up with a new hypothesis. For instance, the next hypothesis might be that there's a broken wire in the toaster.


The Scientific Method Tutorial

Steps in the Scientific Method

There is a great deal of variation in the specific techniques scientists use to explore the natural world. However, the following steps characterize the majority of scientific investigations:

Step 1: Make observations
Step 2: Propose a hypothesis to explain observations
Step 3: Test the hypothesis with further observations or experiments
Step 4: Analyze data
Step 5: State conclusions about hypothesis based on data analysis

Each of these steps is explained briefly below, and in more detail later in this section.

Step 1: Make observations

A scientific inquiry typically starts with observations. Often, simple observations will trigger a question in the researcher's mind.

Example: A biologist frequently sees monarch caterpillars feeding on milkweed plants, but rarely sees them feeding on other types of plants. She wonders if it is because the caterpillars prefer milkweed over other food choices.

Step 2: Propose a hypothesis

The researcher develops a hypothesis (singular) or hypotheses (plural) to explain these observations. A hypothesis is a tentative explanation of a phenomenon or observation(s) that can be supported or falsified by further observations or experimentation.

Example: The researcher hypothesizes that monarch caterpillars prefer to feed on milkweed compared to other common plants. (Notice how the hypothesis is a statement, not a question as in step 1.)

Step 3: Test the hypothesis

The researcher makes further observations and/or may design an experiment to test the hypothesis. An experiment is a controlled situation created by a researcher to test the validity of a hypothesis. Whether further observations or an experiment is used to test the hypothesis will depend on the nature of the question and the practicality of manipulating the factors involved.

Example: The researcher sets up an experiment in the lab in which a number of monarch caterpillars are given a choice between milkweed and a number of other common plants to feed on.

Step 4: Analyze data

The researcher summarizes and analyzes the information, or data, generated by these further observations or experiments.

Example: In her experiment, milkweed was chosen by caterpillars 9 times out of 10 over all other plant selections.

Step 5: State conclusions

The researcher interprets the results of experiments or observations and forms conclusions about the meaning of these results. These conclusions are generally expressed as probability statements about their hypothesis.

Example: She concludes that when given a choice, 90 percent of monarch caterpillars prefer to feed on milkweed over other common plants.
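One hedged way to state that conclusion as a probability is with a simple binomial test. The sketch below treats each trial as a choice between milkweed and "anything else" and assumes a 50/50 split under the null hypothesis of no preference, which is a deliberate simplification of the actual multi-plant choice.

```python
# Binomial test: 9 of 10 caterpillars chose milkweed. Under a null
# hypothesis of no preference (simplified here to a 50/50 choice between
# milkweed and any other plant), how surprising is that result?
from scipy import stats

result = stats.binomtest(k=9, n=10, p=0.5, alternative="greater")
print(f"p-value = {result.pvalue:.3f}")   # about 0.011

# A small p-value supports the hypothesis that the caterpillars prefer
# milkweed, but the conclusion remains a probability statement, not a proof.
```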

Often, the results of one scientific study will raise questions that may be addressed in subsequent research. For example, the above study might lead the researcher to wonder why monarchs seem to prefer to feed on milkweed, and she may plan additional experiments to explore this question. For example, perhaps the milkweed has higher nutritional value than other available plants.


The Scientific Method Flowchart

The steps in the scientific method are presented visually in the following flow chart. The question raised or the results obtained at each step directly determine how the next step will proceed. As you read the example given at each step, see if you can predict what the next step will be.

Activity: Apply the Scientific Method to Everyday Life Use the steps of the scientific method described above to solve a problem in real life. Suppose you come home one evening and flick the light switch only to find that the light doesn’t turn on. What is your hypothesis? How will you test that hypothesis? Based on the result of this test, what are your conclusions? Follow your instructor's directions for submitting your response.

The above flowchart illustrates the logical sequence of conclusions and decisions in a typical scientific study. There are some important points to note about this process:

1. The steps are clearly linked.

The steps in this process are clearly linked. The hypothesis, formed as a potential explanation for the initial observations, becomes the focus of the study. The hypothesis will determine what further observations are needed or what type of experiment should be done to test its validity. The conclusions of the experiment or further observations will either be in agreement with or will contradict the hypothesis. If the results are in agreement with the hypothesis, this does not prove that the hypothesis is true! In scientific terms, it "lends support" to the hypothesis, which will be tested again and again under a variety of circumstances before researchers accept it as a fairly reliable description of reality.

2. The same steps are not followed in all types of research.

The steps described above present a generalized method followed in many scientific investigations. These steps are not carved in stone. The question the researcher wishes to answer will influence the steps in the method and how they will be carried out. For example, astronomers do not perform many experiments as defined here. They tend to rely on observations to test theories. Biologists and chemists have the ability to change conditions in a test tube and then observe whether the outcome supports or invalidates their starting hypothesis, while astronomers are not able to change the path of Jupiter around the Sun and observe the outcome!

3. Collected observations may lead to the development of theories.

When a large number of observations and/or experimental results have been compiled, and all are consistent with a generalized description of how some element of nature operates, this description is called a theory. Theories are much broader than hypotheses and are supported by a wide range of evidence. Theories are important scientific tools. They provide a context for interpretation of new observations and also suggest experiments to test their own validity. Theories are discussed in more detail in another section.

The Scientific Method in Detail

In the sections that follow, each step in the scientific method is described in more detail.

Step 1: Observations

Observations in science.

An observation is some thing, event, or phenomenon that is noticed or observed. Observations are listed as the first step in the scientific method because they often provide a starting point, a source of questions a researcher may ask. For example, the observation that leaves change color in the fall may lead a researcher to ask why this is so, and to propose a hypothesis to explain this phenomenon. In fact, observations will also provide the key to answering the research question.

In science, observations form the foundation of all hypotheses, experiments, and theories. In an experiment, the researcher carefully plans what observations will be made and how they will be recorded. To be accepted, scientific conclusions and theories must be supported by all available observations. If new observations are made which seem to contradict an established theory, that theory will be re-examined and may be revised to explain the new facts. Observations are the nuts and bolts of science that researchers use to piece together a better understanding of nature.

Observations in science are made in a way that can be precisely communicated to (and verified by) other researchers. In many types of studies (especially in chemistry, physics, and biology), quantitative observations are used. A quantitative observation is one that is expressed and recorded as a quantity, using some standard system of measurement. Quantities such as size, volume, weight, time, distance, or a host of others may be measured in scientific studies.

Some observations that researchers need to make may be difficult or impossible to quantify. Take the example of color. Not all individuals perceive color in exactly the same way. Even apart from limiting conditions such as colorblindness, the way two people see and describe the color of a particular flower, for example, will not be the same. Color, as perceived by the human eye, is an example of a qualitative observation.

Qualitative observations note qualities associated with subjects or samples that are not readily measured. Other examples of qualitative observations might be descriptions of mating behaviors, human facial expressions, or "yes/no" type of data, where some factor is present or absent. Though the qualities of an object may be more difficult to describe or measure than any quantities associated with it, every attempt is made to minimize the effects of the subjective perceptions of the researcher in the process. Some types of studies, such as those in the social and behavioral sciences (which deal with highly variable human subjects), may rely heavily on qualitative observations.

Question: Why are observations important to science?

Limits of Observations

Because all observations rely to some degree on the senses (eyes, ears, or steady hand) of the researcher, complete objectivity is impossible. Our human perceptions are limited by the physical abilities of our sense organs and are interpreted according to our understanding of how the world works, which can be influenced by culture, experience, or education. According to science education specialist, George F. Kneller, "Surprising as it may seem, there is no fact that is not colored by our preconceptions" ("A Method of Enquiry," from Science and Its Ways of Knowing [Upper Saddle River: Prentice-Hall Inc., 1997], 15).

Observations made by a scientist are also limited by the sensitivity of whatever equipment he is using. Research findings will be limited at times by the available technology. For example, Italian physicist and philosopher Galileo Galilei (1564–1642) was reportedly the first person to observe the heavens with a telescope. Imagine how it must have felt to him to see the heavens through this amazing new instrument! It opened a window to the stars and planets and allowed new observations undreamed of before.

In the centuries since Galileo, increasingly more powerful telescopes have been devised that dwarf the power of that first device. In the past decade, we have marveled at images from deep space , courtesy of the Hubble Space Telescope, a large telescope that orbits Earth. Because of its view from outside the distorting effects of the atmosphere, the Hubble can look 50 times farther into space than the best earth-bound telescopes, and resolve details a tenth of the size (Seeds, Michael A., Horizons: Exploring the Universe , 5 th ed. [Belmont: Wadsworth Publishing Company, 1998], 86-87).

Construction is underway on a new radio telescope that scientists say will be able to detect electromagnetic waves from the very edges of the universe! This joint U.S.-Mexican project may allow us to ask questions about the origins of the universe and the beginnings of time that we could never have hoped to answer before. Completion of the new telescope is expected by the end of 2001.

Although the amount of detail observed by Galileo and today's astronomers is vastly different, the stars and their relationships have not changed very much. Yet with each technological advance, the level of detail of observation has been increased, and with it, the power to answer more and more challenging questions with greater precision.

Question: What are some of the differences between a casual observation and a 'scientific observation'?

Step 2: The Hypothesis

A hypothesis is a statement created by the researcher as a potential explanation for an observation or phenomenon. The hypothesis converts the researcher's original question into a statement that can be used to make predictions about what should be observed if the hypothesis is true. For example, given the hypothesis, "exposure to ultraviolet (UV) radiation increases the risk of skin cancer," one would predict higher rates of skin cancer among people with greater UV exposure. These predictions could be tested by comparing skin cancer rates among individuals with varying amounts of UV exposure. Note how the hypothesis itself determines what experiments or further observations should be made to test its validity. Results of tests are then compared to predictions from the hypothesis, and conclusions are stated in terms of whether or not the data support the hypothesis. So the hypothesis serves as a guide to the full process of scientific inquiry.

The Qualities of a Good Hypothesis

  • A hypothesis must be testable or provide predictions that are testable. It can potentially be shown to be false by further observations or experimentation.
  • A hypothesis should be specific. If it is too general it cannot be tested, or tests will have so many variables that the results will be complicated and difficult to interpret. A well-written hypothesis is so specific it actually determines how the experiment should be set up.
  • A hypothesis should not include any untested assumptions if they can be avoided. The hypothesis itself may be an assumption that is being tested, but it should be phrased in a way that does not include assumptions that are not tested in the experiment.
  • It is okay (and sometimes a good idea) to develop more than one hypothesis to explain a set of observations. Competing hypotheses can often be tested side-by-side in the same experiment.

Question: Why is the hypothesis important to the scientific method?

Step 3: Testing the Hypothesis

A hypothesis may be tested in one of two ways: by making additional observations of a natural situation, or by setting up an experiment. In either case, the hypothesis is used to make predictions, and the observations or experimental data collected are examined to determine if they are consistent or inconsistent with those predictions. Hypothesis testing, especially through experimentation, is at the core of the scientific process. It is how scientists gain a better understanding of how things work.

Testing a Hypothesis by Observation

Some hypotheses may be tested through simple observation. For example, a researcher may formulate the hypothesis that the sun always rises in the east. What might an alternative hypothesis be? If his hypothesis is correct, he would predict that the sun will rise in the east tomorrow. He can easily test such a prediction by rising before dawn and going out to observe the sunrise. If the sun rises in the west, he will have disproved the hypothesis. He will have shown that it does not hold true in every situation. However, if he observes on that morning that the sun does in fact rise in the east, he has not proven the hypothesis. He has made a single observation that is consistent with, or supports, the hypothesis. As a scientist, to confidently state that the sun will always rise in the east, he will want to make many observations, under a variety of circumstances. Note that in this instance no manipulation of circumstance is required to test the hypothesis (i.e., you aren't altering the sun in any way).

Testing a Hypothesis by Experimentation

An experiment is a controlled series of observations designed to test a specific hypothesis. In an experiment, the researcher manipulates factors related to the hypothesis in such a way that the effect of these factors on the observations (data) can be readily measured and compared. Most experiments are an attempt to define a cause-and-effect relationship between two factors or events—to explain why something happens. For example, with the hypothesis "roses planted in sunny areas bloom earlier than those grown in shady areas," the experiment would be testing a cause-and-effect relationship between sunlight and time of blooming.

A major advantage of setting up an experiment versus making observations of what is already available is that it allows the researcher to control all the factors or events related to the hypothesis, so that the true cause of an event can be more easily isolated. In all cases, the hypothesis itself will determine the way the experiment will be set up. For example, suppose my hypothesis is "the weight of an object is proportional to the amount of time it takes to fall a certain distance." How would you test this hypothesis?
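One way to approach that falling-object hypothesis is to compare measured fall times against the prediction of the standard constant-acceleration formula, t = sqrt(2h/g), in which the object's weight does not appear at all. The sketch below computes that prediction for a hypothetical 10-meter drop.

```python
# Predicted fall time from a fixed height, ignoring air resistance.
# Under constant gravitational acceleration, t = sqrt(2h / g), so the
# predicted time is the same regardless of the object's weight.
import math

g = 9.81          # gravitational acceleration, m/s^2
height = 10.0     # drop height in meters (an arbitrary choice)

for mass_kg in [0.1, 1.0, 10.0]:
    fall_time = math.sqrt(2 * height / g)
    print(f"mass {mass_kg:5.1f} kg -> predicted fall time {fall_time:.2f} s")

# If measured fall times for light and heavy objects agree with this
# prediction, the hypothesis that weight is proportional to fall time
# is not supported.
```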

The Qualities of a Good Experiment

  • The experiment must be conducted on a group of subjects that are narrowly defined and have certain aspects in common. This is the group to which any conclusions must later be confined. (Examples of possible subjects: female cancer patients over age 40, E. coli bacteria, red giant stars, the nicotine molecule and its derivatives.)
  • All subjects of the experiment should be (ideally) completely alike in all ways except for the factor or factors that are being tested. Factors that are compared in scientific experiments are called variables. A variable is some aspect of a subject or event that may differ over time or from one group of subjects to another. For example, if a biologist wanted to test the effect of nitrogen on grass growth, he would apply different amounts of nitrogen fertilizer to several plots of grass. The grass in each of the plots should be as alike as possible so that any difference in growth could be attributed to the effect of the nitrogen. For example, all the grass should be of the same species, planted at the same time and at the same density, receive the same amount of water and sunlight, and so on. The variable in this case would be the amount of nitrogen applied to the plants. The researcher would not compare differing amounts of nitrogen across different grass species to determine the effect of nitrogen on grass growth. What is the problem with using different species of plants to compare the effect of nitrogen on plant growth? There are different kinds of variables in an experiment. A factor that the experimenter controls, and changes intentionally to determine if it has an effect, is called an independent variable . A factor that is recorded as data in the experiment, and which is compared across different groups of subjects, is called a dependent variable . In many cases, the value of the dependent variable will be influenced by the value of an independent variable. The goal of the experiment is to determine a cause-and-effect relationship between independent and dependent variables—in this case, an effect of nitrogen on plant growth. In the nitrogen/grass experiment, (1) which factor was the independent variable? (2) Which factor was the dependent variable?
  • Nearly all types of experiments require a control group and an experimental group. The control group generally is not changed in any way, but remains in a "natural state," while the experimental group is modified in some way to examine the effect of the variable which is of interest to the researcher. The control group provides a standard of comparison for the experimental groups. For example, in new drug trials, some patients are given a placebo while others are given doses of the drug being tested. The placebo serves as a control by showing the effect of no drug treatment on the patients. In research terminology, the experimental groups are often referred to as treatments, since each group is treated differently. In the experimental test of the effect of nitrogen on grass growth, what is the control group? In the example of the nitrogen experiment, what is the purpose of a control group? (A small data sketch of the nitrogen example follows this list.)
  • In research studies a great deal of emphasis is placed on repetition. It is essential that an experiment or study include enough subjects or enough observations for the researcher to make valid conclusions. The two main reasons why repetition is important in scientific studies are (1) variation among subjects or samples and (2) measurement error.
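As a concrete illustration of the nitrogen-and-grass example used in the list above, here is a minimal sketch with invented growth measurements; the nitrogen doses, plot counts, and growth numbers are all assumptions made for illustration.

```python
# Nitrogen-and-grass experiment sketch.
# Independent variable: amount of nitrogen fertilizer applied to each plot.
# Dependent variable: grass growth (hypothetical cm of new growth).
# Control group: plots that receive no nitrogen.
plots = {
    0:  [2.1, 2.3, 2.0],    # control plots, no nitrogen
    5:  [3.4, 3.6, 3.2],    # 5 g of nitrogen per square meter
    10: [4.8, 4.5, 4.9],    # 10 g of nitrogen per square meter
}

for nitrogen, growth in plots.items():
    mean_growth = sum(growth) / len(growth)
    print(f"nitrogen {nitrogen:2d} g/m^2 -> mean growth {mean_growth:.1f} cm")

# If mean growth rises with nitrogen dose relative to the control plots, the
# data support an effect of nitrogen on growth for this grass species.
```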

Variation among Subjects

There is a great deal of variation in nature. In a group of experimental subjects, much of this variation may have little to do with the variables being studied, but could still affect the outcome of the experiment in unpredicted ways. For example, in an experiment designed to test the effects of alcohol dose levels on reflex time in 18- to 22-year-old males, there would be significant variation among individual responses to various doses of alcohol. Some of this variation might be due to differences in genetic make-up, to varying levels of previous alcohol use, or any number of factors unknown to the researcher.

Because what the researcher wants to discover is average dose level effects for this group, he must run the test on a number of different subjects. Suppose he performed the test on only 10 individuals. Do you think the average response calculated would be the same as the average response of all 18- to 22-year-old males? What if he tests 100 individuals, or 1,000? Do you think the average he comes up with would be the same in each case? Chances are it would not be. So which average would you predict would be most representative of all 18- to 22-year-old males?

A basic rule of statistics is, the more observations you make, the closer the average of those observations will be to the average for the whole population you are interested in. This is because factors that vary among a population tend to occur most commonly in the middle range, and least commonly at the two extremes. Take human height for example. Although you may find a man who is 7 feet tall, or one who is 4 feet tall, most men will fall somewhere between 5 and 6 feet in height. The more men we measure to determine average male height, the less effect those uncommon extreme (tall or short) individuals will tend to impact the average. Thus, one reason why repetition is so important in experiments is that it helps to assure that the conclusions made will be valid not only for the individuals tested, but also for the greater population those individuals represent.

"The use of a sample (or subset) of a population, an event, or some other aspect of nature for an experimental group that is not large enough to be representative of the whole" is called sampling error (Starr, Cecie, Biology: Concepts and Applications , 4 th ed. [Pacific Cove: Brooks/Cole, 2000], glossary). If too few samples or subjects are used in an experiment, the researcher may draw incorrect conclusions about the population those samples or subjects represent.

Use the jellybean activity below to see a simple demonstration of sampling error.

Directions: There are 400 jellybeans in the jar. If you could not see the jar and you initially chose 1 green jellybean from the jar, you might assume the jar only contains green jellybeans. The jar actually contains both green and black jellybeans. Imagine drawing a few more beans at a time to build larger samples, for example samples of 2, 13, and 27 jellybeans. After you take each sample, try to predict the ratio of green to black jellybeans in the jar. How does your prediction of the ratio of green to black jellybeans change as your sample changes?
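Because the interactive jar is not available here, the following sketch simulates the same idea; the jar's true composition (300 green and 100 black jellybeans) is an assumption chosen only so the simulation has something to converge toward.

```python
# Simulate drawing jellybean samples from a jar to illustrate sampling error.
import random

random.seed(1)
jar = ["green"] * 300 + ["black"] * 100   # assumed mix: 400 beans, 75% green

for sample_size in [2, 13, 27, 100]:
    sample = random.sample(jar, sample_size)
    green_fraction = sample.count("green") / sample_size
    print(f"sample of {sample_size:3d}: estimated {green_fraction:.0%} green")

# Small samples give erratic estimates of the true 75% green; as the sample
# grows, the estimate tends to settle near the true proportion.
```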

Measurement Error

The second reason why repetition is necessary in research studies has to do with measurement error. Measurement error may be the fault of the researcher, a slight difference in measuring techniques among one or more technicians, or the result of limitations or glitches in measuring equipment. Even the most careful researcher or the best state-of-the-art equipment will make some mistakes in measuring or recording data. Another way of looking at this is to say that, in any study, some measurements will be more accurate than others will. If the researcher is conscientious and the equipment is good, the majority of measurements will be highly accurate, some will be somewhat inaccurate, and a few may be considerably inaccurate. In this case, the same reasoning used above also applies here: the more measurements taken, the less effect a few inaccurate measurements will have on the overall average.
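The effect of repetition on measurement error can be sketched in the same way; the true value, the size of the routine noise, and the chance of an occasional larger blunder below are all invented purely to show how averaging many readings dilutes the influence of a few bad ones.

```python
# Simulate repeated measurements of a quantity whose true value is 50.0,
# with routine instrument noise and an occasional larger mistake.
import random

random.seed(2)
TRUE_VALUE = 50.0

def measure():
    reading = TRUE_VALUE + random.gauss(0, 0.5)   # routine noise
    if random.random() < 0.05:                    # occasional larger blunder
        reading += random.choice([-3.0, 3.0])
    return reading

for n in [3, 10, 100, 1000]:
    readings = [measure() for _ in range(n)]
    mean = sum(readings) / n
    print(f"{n:4d} measurements -> mean {mean:.2f}")

# With only a few measurements, one bad reading can noticeably shift the
# average; with many repetitions, its influence on the mean shrinks.
```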

Step 4: Data Analysis

In any experiment, observations are made, and often, measurements are taken. Measurements and observations recorded in an experiment are referred to as data . The data collected must relate to the hypothesis being tested. Any differences between experimental and control groups must be expressed in some way (often quantitatively) so that the groups may be compared. Graphs and charts are often used to visualize the data and to identify patterns and relationships among the variables.

Statistics is the branch of mathematics that deals with interpretation of data. Data analysis refers to statistical methods of determining whether any differences between the control group and experimental groups are too great to be attributed to chance alone. Although a discussion of statistical methods is beyond the scope of this tutorial, the data analysis step is crucial because it provides a somewhat standardized means for interpreting data. The statistical methods of data analysis used, and the results of those analyses, are always included in the publication of scientific research. This convention limits the subjective aspects of data interpretation and allows scientists to scrutinize the working methods of their peers.

Question: Why is data analysis an important step in the scientific method?

Step 5: Stating Conclusions

The conclusions made in a scientific experiment are particularly important. Often, the conclusion is the only part of a study that gets communicated to the general public. As such, it must be a statement of reality, based upon the results of the experiment. To assure that this is the case, the conclusions made in an experiment must (1) relate back to the hypothesis being tested, (2) be limited to the population under study, and (3) be stated as probabilities.

The hypothesis that is being tested will be compared to the data collected in the experiment. If the experimental results contradict the hypothesis, it is rejected and further testing of that hypothesis under those conditions is not necessary. However, if the hypothesis is not shown to be wrong, that does not conclusively prove that it is right! In scientific terms, the hypothesis is said to be "supported by the data." Further testing will be done to see if the hypothesis is supported under a number of trials and under different conditions.

If the hypothesis holds up to extensive testing then the temptation is to claim that it is correct. However, keep in mind that the number of experiments and observations made will only represent a subset of all the situations in which the hypothesis may potentially be tested. In other words, experimental data will only show part of the picture. There is always the possibility that a further experiment may show the hypothesis to be wrong in some situations. Also, note that the limits of current knowledge and available technologies may prevent a researcher from devising an experiment that would disprove a particular hypothesis.

The researcher must be sure to limit his or her conclusions to apply only to the subjects tested in the study. If a particular species of fish is shown to consume its young 90 percent of the time when raised in captivity, that doesn't necessarily mean that all fish will do so, or that this fish's behavior would be the same in its native habitat.

Finally, the conclusions of the experiment are generally stated as probabilities. A careful scientist would never say, "drug x kills cancer cells;" she would more likely say, "drug x was shown to destroy 85 percent of cancerous skin cells in rats in lab trials." Notice how very different these two statements are. There is a tendency in the media and in the general public to gravitate toward the first statement. This makes a terrific headline and is also easy to interpret; it is absolute. Remember though, in science conclusions must be confined to the population under study; broad generalizations should be avoided. The second statement is sound science. There is data to back it up. Later studies may reveal a more universal effect of the drug on cancerous cells, or they may not. Most researchers would be unwilling to stake their reputations on the first statement.

As a student, you should read and interpret popular press articles about research studies very carefully. From the text, can you determine how the experiment was set up and what variables were measured? Are the observations and data collected appropriate to the hypothesis being tested? Are the conclusions supported by the data? Are the conclusions worded in a scientific context (as probability statements) or are they generalized for dramatic effect? In any research-based assignment, it is a good idea to refer to the original publication of a study (usually found in professional journals) and to interpret the facts for yourself.

Qualities of a Good Experiment

  • narrowly defined subjects
  • all subjects treated alike except for the factor or variable being studied
  • a control group is used for comparison
  • measurements related to the factors being studied are carefully recorded
  • enough samples or subjects are used so that conclusions are valid for the population of interest
  • conclusions made relate back to the hypothesis, are limited to the population being studied, and are stated in terms of probabilities


Principles of the Scientific Method


This section contains a brief discussion of the most important principles of the scientific method. A thorough treatment of the philosophy of science is beyond the scope of this work.

One of the hallmarks of the scientific method is that it depends on empirical data. To be a proper scientific investigation the data must be collected systematically. However, scientific investigation does not necessarily require experimentation in the sense of manipulating variables and observing the results. Observational studies in the fields of astronomy, developmental psychology, and ethology are common and provide valuable scientific information.

Theories and explanations are very important in science. Theories in science can never be proved since one can never be 100% certain that a new empirical finding inconsistent with the theory will never be found.

Scientific theories must be potentially disconfirmable. If a theory can accommodate all possible results then it is not a scientific theory. Therefore, a scientific theory should lead to testable hypotheses. If a hypothesis is disconfirmed, then the theory from which the hypothesis was deduced is incorrect. For example, the secondary reinforcement theory of attachment states that an infant becomes attached to its parent by means of a pairing of the parent with a primary reinforcer (food). It is through this "secondary reinforcement" that the child-parent bond forms. The secondary reinforcement theory has been disconfirmed by numerous experiments. Perhaps the most notable is one in which infant monkeys were fed by a surrogate wire mother while a surrogate cloth mother was available. The infant monkeys formed no attachment to the wire mothers and frequently clung to the cloth surrogate mothers.


If a hypothesis derived from a theory is confirmed then the theory has survived a test and it becomes more useful and better thought of by the researchers in the field. A theory is never conclusively confirmed, however, no matter how many correct hypotheses are derived from it.

A key difference between scientific explanations and faith-based explanations is simply that faith-based explanations are based on faith and do not need to be testable. This does not mean that an explanation that cannot be tested is incorrect in some cosmic sense. It just means that it is not a scientific explanation.

The method of investigation in which a hypothesis is developed from a theory and then confirmed or disconfirmed involves deductive reasoning. However, deductive reasoning does not explain where the theory came from in the first place. In general, a theory is developed by a scientist who is aware of many empirical findings on a topic of interest. Then, through a generally poorly understood process called "induction" the scientist develops a way to explain all or most of the findings within a relatively simple framework or theory.

An important attribute of a good scientific theory is that it is parsimonious. That is, it is simple in the sense that it uses relatively few constructs to explain many empirical findings. A theory that is so complex that it has as many assumptions as it has predictions is not very valuable.

Although strictly speaking, disconfirming a hypothesis deduced from a theory disconfirms the theory, it rarely leads to the abandonment of the theory. Instead, the theory will probably be modified to accommodate the inconsistent finding. If the theory has to be modified over and over to accommodate new findings, the theory generally becomes less and less parsimonious. This can lead to discontent with the theory and the search for a new theory. If a new theory is developed that can explain the same facts in a more parsimonious way, then the new theory will eventually supersede the old theory.


Scientific Method

Science is an enormously successful human enterprise. The study of scientific method is the attempt to discern the activities by which that success is achieved. Among the activities often identified as characteristic of science are systematic observation and experimentation, inductive and deductive reasoning, and the formation and testing of hypotheses and theories. How these are carried out in detail can vary greatly, but characteristics like these have been looked to as a way of demarcating scientific activity from non-science, where only enterprises which employ some canonical form of scientific method or methods should be considered science (see also the entry on science and pseudo-science ). Others have questioned whether there is anything like a fixed toolkit of methods which is common across science and only science. Some reject privileging one view of method as part of rejecting broader views about the nature of science, such as naturalism (Dupré 2004); some reject any restriction in principle (pluralism).

Scientific method should be distinguished from the aims and products of science, such as knowledge, predictions, or control. Methods are the means by which those goals are achieved. Scientific method should also be distinguished from meta-methodology, which includes the values and justifications behind a particular characterization of scientific method (i.e., a methodology) — values such as objectivity, reproducibility, simplicity, or past successes. Methodological rules are proposed to govern method and it is a meta-methodological question whether methods obeying those rules satisfy given values. Finally, method is distinct, to some degree, from the detailed and contextual practices through which methods are implemented. The latter might range over: specific laboratory techniques; mathematical formalisms or other specialized languages used in descriptions and reasoning; technological or other material means; ways of communicating and sharing results, whether with other scientists or with the public at large; or the conventions, habits, enforced customs, and institutional controls over how and what science is carried out.

While it is important to recognize these distinctions, their boundaries are fuzzy. Hence, accounts of method cannot be entirely divorced from their methodological and meta-methodological motivations or justifications. Moreover, each aspect plays a crucial role in identifying methods. Disputes about method have therefore played out at the detail, rule, and meta-rule levels. Changes in beliefs about the certainty or fallibility of scientific knowledge, for instance (which is a meta-methodological consideration of what we can hope for methods to deliver), have meant different emphases on deductive and inductive reasoning, or on the relative importance attached to reasoning over observation (i.e., differences over particular methods). Beliefs about the role of science in society will affect the place one gives to values in scientific method.

The issue which has shaped debates over scientific method the most in the last half century is the question of how pluralist we need to be about method. Unificationists continue to hold out for one method essential to science; nihilism is a form of radical pluralism, which considers the effectiveness of any methodological prescription to be so context sensitive as to render it not explanatory on its own. Some middle degree of pluralism regarding the methods embodied in scientific practice seems appropriate. But the details of scientific practice vary with time and place, from institution to institution, across scientists and their subjects of investigation. How significant are the variations for understanding science and its success? How much can method be abstracted from practice? This entry describes some of the attempts to characterize scientific method or methods, as well as arguments for a more context-sensitive approach to methods embedded in actual scientific practices.

1. Overview and organizing themes


This entry could have been given the title Scientific Methods and gone on to fill volumes, or it could have been extremely short, consisting of a brief summary rejection of the idea that there is any such thing as a unique Scientific Method at all. Both unhappy prospects are due to the fact that scientific activity varies so much across disciplines, times, places, and scientists that any account which manages to unify it all will either consist of overwhelming descriptive detail, or trivial generalizations.

The choice of scope for the present entry is more optimistic, taking a cue from the recent movement in philosophy of science toward a greater attention to practice: to what scientists actually do. This “turn to practice” can be seen as the latest form of studies of methods in science, insofar as it represents an attempt at understanding scientific activity, but through accounts that are neither meant to be universal and unified, nor singular and narrowly descriptive. To some extent, different scientists at different times and places can be said to be using the same method even though, in practice, the details are different.

Whether the context in which methods are carried out is relevant, or to what extent, will depend largely on what one takes the aims of science to be and what one’s own aims are. For most of the history of scientific methodology the assumption has been that the most important output of science is knowledge and so the aim of methodology should be to discover those methods by which scientific knowledge is generated.

Science was seen to embody the most successful form of reasoning (but which form?) to the most certain knowledge claims (but how certain?) on the basis of systematically collected evidence (but what counts as evidence, and should the evidence of the senses take precedence, or rational insight?) Section 2 surveys some of the history, pointing to two major themes. One theme is seeking the right balance between observation and reasoning (and the attendant forms of reasoning which employ them); the other is how certain scientific knowledge is or can be.

Section 3 turns to 20 th century debates on scientific method. In the second half of the 20 th century the epistemic privilege of science faced several challenges and many philosophers of science abandoned the reconstruction of the logic of scientific method. Views changed significantly regarding which functions of science ought to be captured and why. For some, the success of science was better identified with social or cultural features. Historical and sociological turns in the philosophy of science were made, with a demand that greater attention be paid to the non-epistemic aspects of science, such as sociological, institutional, material, and political factors. Even outside of those movements there was an increased specialization in the philosophy of science, with more and more focus on specific fields within science. The combined upshot was very few philosophers arguing any longer for a grand unified methodology of science. Sections 3 and 4 survey the main positions on scientific method in 20 th century philosophy of science, focusing on where they differ in their preference for confirmation or falsification or for waiving the idea of a special scientific method altogether.

In recent decades, attention has primarily been paid to scientific activities traditionally falling under the rubric of method, such as experimental design and general laboratory practice, the use of statistics, the construction and use of models and diagrams, interdisciplinary collaboration, and science communication. Sections 4–6 attempt to construct a map of the current domains of the study of methods in science.

As these sections illustrate, the question of method is still central to the discourse about science. Scientific method remains a topic for education, for science policy, and for scientists. It arises in the public domain where the demarcation or status of science is at issue. Some philosophers have recently returned, therefore, to the question of what it is that makes science a unique cultural product. This entry will close with some of these recent attempts at discerning and encapsulating the activities by which scientific knowledge is achieved.

2. Historical review: Aristotle to Mill

Attempting a history of scientific method compounds the vast scope of the topic. This section briefly surveys the background to modern methodological debates. What can be called the classical view goes back to antiquity, and represents a point of departure for later divergences. [ 1 ]

We begin with a point made by Laudan (1968) in his historical survey of scientific method:

Perhaps the most serious inhibition to the emergence of the history of theories of scientific method as a respectable area of study has been the tendency to conflate it with the general history of epistemology, thereby assuming that the narrative categories and classificatory pigeon-holes applied to the latter are also basic to the former. (1968: 5)

To see knowledge about the natural world as falling under knowledge more generally is an understandable conflation. Histories of theories of method would naturally employ the same narrative categories and classificatory pigeon holes. An important theme of the history of epistemology, for example, is the unification of knowledge, a theme reflected in the question of the unification of method in science. Those who have identified differences in kinds of knowledge have often likewise identified different methods for achieving that kind of knowledge (see the entry on the unity of science ).

Different views on what is known, how it is known, and what can be known are connected. Plato distinguished the realms of things into the visible and the intelligible ( The Republic , 510a, in Cooper 1997). Only the latter, the Forms, could be objects of knowledge. The intelligible truths could be known with the certainty of geometry and deductive reasoning. What could be observed of the material world, however, was by definition imperfect and deceptive, not ideal. The Platonic way of knowledge therefore emphasized reasoning as a method, downplaying the importance of observation. Aristotle disagreed, locating the Forms in the natural world as the fundamental principles to be discovered through the inquiry into nature ( Metaphysics Z , in Barnes 1984).

Aristotle is recognized as giving the earliest systematic treatise on the nature of scientific inquiry in the western tradition, one which embraced observation and reasoning about the natural world. In the Prior and Posterior Analytics , Aristotle reflects first on the aims and then the methods of inquiry into nature. A number of features can be found which are still considered by most to be essential to science. For Aristotle, empiricism, careful observation (but passive observation, not controlled experiment), is the starting point. The aim is not merely recording of facts, though. For Aristotle, science ( epistêmê ) is a body of properly arranged knowledge or learning—the empirical facts, but also their ordering and display are of crucial importance. The aims of discovery, ordering, and display of facts partly determine the methods required of successful scientific inquiry. Also determinant is the nature of the knowledge being sought, and the explanatory causes proper to that kind of knowledge (see the discussion of the four causes in the entry on Aristotle on causality ).

In addition to careful observation, then, scientific method requires a logic as a system of reasoning for properly arranging, but also inferring beyond, what is known by observation. Methods of reasoning may include induction, prediction, or analogy, among others. Aristotle’s system (along with his catalogue of fallacious reasoning) was collected under the title the Organon . This title would be echoed in later works on scientific reasoning, such as Novum Organon by Francis Bacon, and Novum Organon Restorum by William Whewell (see below). In Aristotle’s Organon reasoning is divided primarily into two forms, a rough division which persists into modern times. The division, known most commonly today as deductive versus inductive method, appears in other eras and methodologies as analysis/​synthesis, non-ampliative/​ampliative, or even confirmation/​verification. The basic idea is there are two “directions” to proceed in our methods of inquiry: one away from what is observed, to the more fundamental, general, and encompassing principles; the other, from the fundamental and general to instances or implications of principles.

The basic aim and method of inquiry identified here can be seen as a theme running throughout the next two millennia of reflection on the correct way to seek after knowledge: carefully observe nature and then seek rules or principles which explain or predict its operation. The Aristotelian corpus provided the framework for a commentary tradition on scientific method independent of science itself (cosmos versus physics). During the medieval and Renaissance periods, figures such as Albertus Magnus (1206–1280), Thomas Aquinas (1225–1274), Robert Grosseteste (1175–1253), Roger Bacon (1214/1220–1292), William of Ockham (1287–1347), Andreas Vesalius (1514–1564), Giacomo Zabarella (1533–1589) all worked to clarify the kind of knowledge obtainable by observation and induction, the source of justification of induction, and best rules for its application. [ 2 ] Many of their contributions we now think of as essential to science (see also Laudan 1968). As Aristotle and Plato had employed a framework of reasoning either “to the forms” or “away from the forms”, medieval thinkers employed directions away from the phenomena or back to the phenomena. In analysis, a phenomenon was examined to discover its basic explanatory principles; in synthesis, explanations of a phenomenon were constructed from first principles.

During the Scientific Revolution these various strands of argument, experiment, and reason were forged into a dominant epistemic authority. The 16 th –18 th centuries were a period of not only dramatic advance in knowledge about the operation of the natural world—advances in mechanical, medical, biological, political, economic explanations—but also of self-awareness of the revolutionary changes taking place, and intense reflection on the source and legitimation of the method by which the advances were made. The struggle to establish the new authority included methodological moves. The Book of Nature, according to the metaphor of Galileo Galilei (1564–1642) or Francis Bacon (1561–1626), was written in the language of mathematics, of geometry and number. This motivated an emphasis on mathematical description and mechanical explanation as important aspects of scientific method. Through figures such as Henry More and Ralph Cudworth, a neo-Platonic emphasis on the importance of metaphysical reflection on nature behind appearances, particularly regarding the spiritual as a complement to the purely mechanical, remained an important methodological thread of the Scientific Revolution (see the entries on Cambridge platonists ; Boyle ; Henry More ; Galileo ).

In Novum Organum (1620), Bacon was critical of the Aristotelian method for leaping from particulars to universals too quickly. The syllogistic form of reasoning readily mixed those two types of propositions. Bacon aimed at the invention of new arts, principles, and directions. His method would be grounded in methodical collection of observations, coupled with correction of our senses (and particularly, directions for the avoidance of the Idols, as he called them, kinds of systematic errors to which naïve observers are prone.) The community of scientists could then climb, by a careful, gradual and unbroken ascent, to reliable general claims.

Bacon’s method has been criticized as impractical and too inflexible for the practicing scientist. Whewell would later criticize Bacon for paying too little attention to the actual practices of scientists. It is hard to find convincing examples of Bacon’s method being put into practice in the history of science, but there are a few who have been held up as real examples of 17 th century scientific, inductive method, even if not in the rigid Baconian mold: figures such as Robert Boyle (1627–1691) and William Harvey (1578–1657) (see the entry on Bacon ).

It is to Isaac Newton (1642–1727), however, that historians of science and methodologists have paid greatest attention. Given the enormous success of his Principia Mathematica and Opticks , this is understandable. The study of Newton’s method has had two main thrusts: the implicit method of the experiments and reasoning presented in the Opticks, and the explicit methodological rules given as the Rules for Philosophising (the Regulae) in Book III of the Principia . [ 3 ] Newton’s law of gravitation, the linchpin of his new cosmology, broke with explanatory conventions of natural philosophy, first for apparently proposing action at a distance, but more generally for not providing “true”, physical causes. The argument for his System of the World ( Principia , Book III) was based on phenomena, not reasoned first principles. This was viewed (mainly on the continent) as insufficient for proper natural philosophy. The Regulae counter this objection, re-defining the aims of natural philosophy by re-defining the method natural philosophers should follow. (See the entry on Newton’s philosophy .)

To his list of methodological prescriptions should be added Newton’s famous phrase “ hypotheses non fingo ” (commonly translated as “I frame no hypotheses”.) The scientist was not to invent systems but infer explanations from observations, as Bacon had advocated. This would come to be known as inductivism. In the century after Newton, significant clarifications of the Newtonian method were made. Colin Maclaurin (1698–1746), for instance, reconstructed the essential structure of the method as having complementary analysis and synthesis phases, one proceeding away from the phenomena in generalization, the other from the general propositions to derive explanations of new phenomena. Denis Diderot (1713–1784) and editors of the Encyclopédie did much to consolidate and popularize Newtonianism, as did Francesco Algarotti (1721–1764). The emphasis was often the same, as much on the character of the scientist as on their process, a character which is still commonly assumed. The scientist is humble in the face of nature, not beholden to dogma, obeys only his eyes, and follows the truth wherever it leads. It was certainly Voltaire (1694–1778) and du Chatelet (1706–1749) who were most influential in propagating the latter vision of the scientist and their craft, with Newton as hero. Scientific method became a revolutionary force of the Enlightenment. (See also the entries on Newton , Leibniz , Descartes , Boyle , Hume , enlightenment , as well as Shank 2008 for a historical overview.)

Not all 18 th century reflections on scientific method were so celebratory. Famous also are George Berkeley’s (1685–1753) attack on the mathematics of the new science, as well as the over-emphasis of Newtonians on observation; and David Hume’s (1711–1776) undermining of the warrant offered for scientific claims by inductive justification (see the entries on: George Berkeley ; David Hume ; Hume’s Newtonianism and Anti-Newtonianism ). Hume’s problem of induction motivated Immanuel Kant (1724–1804) to seek new foundations for empirical method, though as an epistemic reconstruction, not as any set of practical guidelines for scientists. Both Hume and Kant influenced the methodological reflections of the next century, such as the debate between Mill and Whewell over the certainty of inductive inferences in science.

The debate between John Stuart Mill (1806–1873) and William Whewell (1794–1866) has become the canonical methodological debate of the 19 th century. Although often characterized as a debate between inductivism and hypothetico-deductivism, the role of the two methods on each side is actually more complex. On the hypothetico-deductive account, scientists work to come up with hypotheses from which true observational consequences can be deduced—hence, hypothetico-deductive. Because Whewell emphasizes both hypotheses and deduction in his account of method, he can be seen as a convenient foil to the inductivism of Mill. However, equally if not more important to Whewell’s portrayal of scientific method is what he calls the “fundamental antithesis”. Knowledge is a product of the objective (what we see in the world around us) and subjective (the contributions of our mind to how we perceive and understand what we experience, which he called the Fundamental Ideas). Both elements are essential according to Whewell, and he was therefore critical of Kant for too much focus on the subjective, and John Locke (1632–1704) and Mill for too much focus on the senses. Whewell’s fundamental ideas can be discipline relative. An idea can be fundamental even if it is necessary for knowledge only within a given scientific discipline (e.g., chemical affinity for chemistry). This distinguishes fundamental ideas from the forms and categories of intuition of Kant. (See the entry on Whewell .)

Clarifying fundamental ideas would therefore be an essential part of scientific method and scientific progress. Whewell called this process “Discoverer’s Induction”. It was induction, following Bacon or Newton, but Whewell sought to revive Bacon’s account by emphasising the role of ideas in the clear and careful formulation of inductive hypotheses. Whewell’s induction is not merely the collecting of objective facts. The subjective plays a role through what Whewell calls the Colligation of Facts, a creative act of the scientist, the invention of a theory. A theory is then confirmed by testing, where more facts are brought under the theory, called the Consilience of Inductions. Whewell felt that this was the method by which the true laws of nature could be discovered: clarification of fundamental concepts, clever invention of explanations, and careful testing. Mill, in his critique of Whewell, and others who have cast Whewell as a fore-runner of the hypothetico-deductivist view, seem to have under-estimated the importance of this discovery phase in Whewell’s understanding of method (Snyder 1997a,b, 1999). Down-playing the discovery phase would come to characterize methodology of the early 20 th century (see section 3 ).

Mill, in his System of Logic , put forward a narrower view of induction as the essence of scientific method. For Mill, induction is the search first for regularities among events. Among those regularities, some will continue to hold for further observations, eventually gaining the status of laws. One can also look for regularities among the laws discovered in a domain, i.e., for a law of laws. Which “law law” will hold is time and discipline dependent and open to revision. One example is the Law of Universal Causation, and Mill put forward specific methods for identifying causes—now commonly known as Mill’s methods. These five methods look for circumstances which are common among the phenomena of interest, those which are absent when the phenomena are absent, or those for which both vary together. Mill’s methods are still seen as capturing basic intuitions about experimental methods for finding the relevant explanatory factors ( System of Logic (1843), see Mill entry). The methods advocated by Whewell and Mill, in the end, look similar. Both involve inductive generalization to covering laws. They differ dramatically, however, with respect to the necessity of the knowledge arrived at; that is, at the meta-methodological level (see the entries on Whewell and Mill ).

3. Logic of method and critical responses

The quantum and relativistic revolutions in physics in the early 20 th century had a profound effect on methodology. Conceptual foundations of both theories were taken to show the defeasibility of even the most seemingly secure intuitions about space, time and bodies. Certainty of knowledge about the natural world was therefore recognized as unattainable. Instead a renewed empiricism was sought which rendered science fallible but still rationally justifiable.

Analyses of the reasoning of scientists emerged, according to which the aspects of scientific method which were of primary importance were the means of testing and confirming theories. A distinction in methodology was made between the contexts of discovery and justification. The distinction could be used as a wedge between the particularities of where and how theories or hypotheses are arrived at, on the one hand, and the underlying reasoning scientists use (whether or not they are aware of it) when assessing theories and judging their adequacy on the basis of the available evidence, on the other. By and large, for most of the 20 th century, philosophy of science focused on the second context, although philosophers differed on whether to focus on confirmation or refutation as well as on the many details of how confirmation or refutation could or could not be brought about. By the mid-20 th century these attempts at defining the method of justification and the context distinction itself came under pressure. During the same period, philosophy of science developed rapidly, and from section 4 this entry will therefore shift from a primarily historical treatment of the scientific method towards a primarily thematic one.

Advances in logic and probability held out promise of the possibility of elaborate reconstructions of scientific theories and empirical method, the best example being Rudolf Carnap’s The Logical Structure of the World (1928). Carnap attempted to show that a scientific theory could be reconstructed as a formal axiomatic system—that is, a logic. That system could refer to the world because some of its basic sentences could be interpreted as observations or operations which one could perform to test them. The rest of the theoretical system, including sentences using theoretical or unobservable terms (like electron or force) would then either be meaningful because they could be reduced to observations, or they had purely logical meanings (called analytic, like mathematical identities). This has been referred to as the verifiability criterion of meaning. According to the criterion, any statement not either analytic or verifiable was strictly meaningless. Although the view was endorsed by Carnap in 1928, he would later come to see it as too restrictive (Carnap 1956). Another familiar version of this idea is the operationalism of Percy Williams Bridgman. In The Logic of Modern Physics (1927) Bridgman asserted that every physical concept could be defined in terms of the operations one would perform to verify the application of that concept. Making good on the operationalisation of a concept even as simple as length, however, can easily become enormously complex (for measuring very small lengths, for instance) or impractical (for measuring large distances like light years).

Carl Hempel’s (1950, 1951) criticisms of the verifiability criterion of meaning had enormous influence. He pointed out that universal generalizations, such as most scientific laws, were not strictly meaningful on the criterion. Verifiability and operationalism both seemed too restrictive to capture standard scientific aims and practice. The tenuous connection between these reconstructions and actual scientific practice was criticized in another way. In both approaches, scientific methods are instead recast in methodological roles. Measurements, for example, were looked to as ways of giving meanings to terms. The aim of the philosopher of science was not to understand the methods per se , but to use them to reconstruct theories, their meanings, and their relation to the world. When scientists perform these operations, however, they will not report that they are doing them to give meaning to terms in a formal axiomatic system. This disconnect between methodology and the details of actual scientific practice would seem to violate the empiricism the Logical Positivists and Bridgman were committed to. The view that methodology should correspond to practice (to some extent) has been called historicism, or intuitionism. We turn to these criticisms and responses in section 3.4 . [ 4 ]

Positivism also had to contend with the recognition that a purely inductivist approach, along the lines of Bacon-Newton-Mill, was untenable. There was no pure observation, for starters. All observation was theory laden. Theory is required to make any observation, therefore not all theory can be derived from observation alone. (See the entry on theory and observation in science .) Even granting an observational basis, Hume had already pointed out that one could not deductively justify inductive conclusions without begging the question by presuming the success of the inductive method. Likewise, positivist attempts at analyzing how a generalization can be confirmed by observations of its instances were subject to a number of criticisms. Goodman (1965) and Hempel (1965) both point to paradoxes inherent in standard accounts of confirmation. Recent attempts at explaining how observations can serve to confirm a scientific theory are discussed in section 4 below.

The standard starting point for a non-inductive analysis of the logic of confirmation is known as the Hypothetico-Deductive (H-D) method. In its simplest form, a sentence of a theory which expresses some hypothesis is confirmed by its true consequences. As noted in section 2 , this method had been advanced by Whewell in the 19 th century, as well as Nicod (1924) and others in the 20 th century. Often, Hempel’s (1966) description of the H-D method, illustrated by the case of Semmelweis’ inferential procedures in establishing the cause of childbed fever, has been presented as a key account of H-D as well as a foil for criticism of the H-D account of confirmation (see, for example, Lipton’s (2004) discussion of inference to the best explanation; also the entry on confirmation ). Hempel described Semmelweis’ procedure as examining various hypotheses explaining the cause of childbed fever. Some hypotheses conflicted with observable facts and could be rejected as false immediately. Others needed to be tested experimentally by deducing which observable events should follow if the hypothesis were true (what Hempel called the test implications of the hypothesis), then conducting an experiment and observing whether or not the test implications occurred. If the experiment showed the test implication to be false, the hypothesis could be rejected. If the experiment showed the test implications to be true, however, this did not prove the hypothesis true. The confirmation of a test implication does not verify a hypothesis, though Hempel did allow that “it provides at least some support, some corroboration or confirmation for it” (Hempel 1966: 8). The degree of this support then depends on the quantity, variety and precision of the supporting evidence.
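
The H-D pattern just described can be caricatured in a few lines of code. The sketch below is a toy illustration, not Hempel's own formalism: each candidate hypothesis is paired with whether its deduced test implication was borne out by observation, and the asymmetry between outright rejection and mere support is made explicit. The hypotheses listed are simplified stand-ins for those Semmelweis considered.

```python
# Toy sketch of the H-D schema: a false test implication rejects a hypothesis,
# while a true one only lends it support. The entries below are simplified,
# hypothetical stand-ins for the Semmelweis example discussed above.
candidate_hypotheses = {
    "epidemic atmospheric influences": False,       # implication not borne out by observation
    "overcrowding of the ward": False,              # implication not borne out by observation
    "cadaveric matter on examiners' hands": True,   # chlorinated washing lowered mortality
}

for hypothesis, implication_observed in candidate_hypotheses.items():
    if implication_observed:
        print(f"'{hypothesis}': supported (not proven) by the test")
    else:
        print(f"'{hypothesis}': rejected, its test implication failed")
```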

Another approach that took off from the difficulties with inductive inference was Karl Popper’s critical rationalism or falsificationism (Popper 1959, 1963). Falsification is deductive and similar to H-D in that it involves scientists deducing observational consequences from the hypothesis under test. For Popper, however, the important point was not the degree of confirmation that successful prediction offered to a hypothesis. The crucial thing was the logical asymmetry between confirmation, based on inductive inference, and falsification, which can be based on a deductive inference. (This simple opposition was later questioned, by Lakatos, among others. See the entry on historicist theories of scientific rationality. )

Popper stressed that, regardless of the amount of confirming evidence, we can never be certain that a hypothesis is true without committing the fallacy of affirming the consequent. Instead, Popper introduced the notion of corroboration as a measure for how well a theory or hypothesis has survived previous testing—but without implying that this is also a measure for the probability that it is true.

Popper was also motivated by his doubts about the scientific status of theories like the Marxist theory of history or psycho-analysis, and so wanted to demarcate between science and pseudo-science. Popper saw this as an importantly different distinction than demarcating science from metaphysics. The latter demarcation was the primary concern of many logical empiricists. Popper used the idea of falsification to draw a line instead between pseudo and proper science. Science was science because its method involved subjecting theories to rigorous tests which offered a high probability of failing and thus refuting the theory.

A commitment to the risk of failure was important. Avoiding falsification could be done all too easily. If a consequence of a theory is inconsistent with observations, an exception can be added by introducing auxiliary hypotheses designed explicitly to save the theory, so-called ad hoc modifications. This Popper saw done in pseudo-science where ad hoc theories appeared capable of explaining anything in their field of application. In contrast, science is risky. If observations showed the predictions from a theory to be wrong, the theory would be refuted. Hence, scientific hypotheses must be falsifiable. Not only must there exist some possible observation statement which could falsify the hypothesis or theory, were it observed, (Popper called these the hypothesis’ potential falsifiers) it is crucial to the Popperian scientific method that such falsifications be sincerely attempted on a regular basis.

The more potential falsifiers of a hypothesis, the more falsifiable it would be, and the more the hypothesis claimed. Conversely, hypotheses without falsifiers claimed very little or nothing at all. Originally, Popper thought that this meant the introduction of ad hoc hypotheses only to save a theory should not be countenanced as good scientific method. These would undermine the falsifiability of a theory. However, Popper later came to recognize that the introduction of modifications (immunizations, he called them) was often an important part of scientific development. Responding to surprising or apparently falsifying observations often generated important new scientific insights. Popper’s own example was the observed motion of Uranus which originally did not agree with Newtonian predictions. The ad hoc hypothesis of an outer planet explained the disagreement and led to further falsifiable predictions. Popper sought to reconcile the view by blurring the distinction between falsifiable and not falsifiable, and speaking instead of degrees of testability (Popper 1985: 41f.).

From the 1960s on, sustained meta-methodological criticism emerged that drove philosophical focus away from scientific method. A brief look at those criticisms follows, with recommendations for further reading at the end of the entry.

Thomas Kuhn’s The Structure of Scientific Revolutions (1962) begins with a well-known shot across the bow for philosophers of science:

History, if viewed as a repository for more than anecdote or chronology, could produce a decisive transformation in the image of science by which we are now possessed. (1962: 1)

The image Kuhn thought needed transforming was the a-historical, rational reconstruction sought by many of the Logical Positivists, though Carnap and other positivists were actually quite sympathetic to Kuhn’s views. (See the entry on the Vienna Circle .) Kuhn shares with others of his contemporaries, such as Feyerabend and Lakatos, a commitment to a more empirical approach to philosophy of science. Namely, the history of science provides important data, and necessary checks, for philosophy of science, including any theory of scientific method.

The history of science reveals, according to Kuhn, that scientific development occurs in alternating phases. During normal science, the members of the scientific community adhere to the paradigm in place. Their commitment to the paradigm means a commitment to the puzzles to be solved and the acceptable ways of solving them. Confidence in the paradigm remains so long as steady progress is made in solving the shared puzzles. Method in this normal phase operates within a disciplinary matrix (Kuhn’s later concept of a paradigm) which includes standards for problem solving, and defines the range of problems to which the method should be applied. An important part of a disciplinary matrix is the set of values which provide the norms and aims for scientific method. The main values that Kuhn identifies are prediction, problem solving, simplicity, consistency, and plausibility.

An important by-product of normal science is the accumulation of puzzles which cannot be solved with resources of the current paradigm. Once accumulation of these anomalies has reached some critical mass, it can trigger a communal shift to a new paradigm and a new phase of normal science. Importantly, the values that provide the norms and aims for scientific method may have transformed in the meantime. Method may therefore be relative to discipline, time, or place.

Feyerabend also identified the aims of science as progress, but argued that any methodological prescription would only stifle that progress (Feyerabend 1988). His arguments are grounded in re-examining accepted “myths” about the history of science. Heroes of science, like Galileo, are shown to be just as reliant on rhetoric and persuasion as they are on reason and demonstration. Others, like Aristotle, are shown to be far more reasonable and far-reaching in their outlooks than they are given credit for. As a consequence, the only rule that could provide what he took to be sufficient freedom was the vacuous “anything goes”. More generally, even the methodological restriction that science is the best way to pursue knowledge, and to increase knowledge, is too restrictive. Feyerabend suggested instead that science might, in fact, be a threat to a free society, because it and its myth had become so dominant (Feyerabend 1978).

An even more fundamental kind of criticism was offered by several sociologists of science from the 1970s onwards who rejected the methodology of providing philosophical accounts for the rational development of science and sociological accounts of the irrational mistakes. Instead, they adhered to a symmetry thesis on which any causal explanation of how scientific knowledge is established needs to be symmetrical in explaining truth and falsity, rationality and irrationality, success and mistakes, by the same causal factors (see, e.g., Barnes and Bloor 1982, Bloor 1991). Movements in the Sociology of Science, like the Strong Programme, or in the social dimensions and causes of knowledge more generally led to extended and close examination of detailed case studies in contemporary science and its history. (See the entries on the social dimensions of scientific knowledge and social epistemology .) Well-known examinations by Latour and Woolgar (1979/1986), Knorr-Cetina (1981), Pickering (1984), Shapin and Schaffer (1985) seem to bear out that it was social ideologies (on a macro-scale) or individual interactions and circumstances (on a micro-scale) which were the primary causal factors in determining which beliefs gained the status of scientific knowledge. As they saw it therefore, explanatory appeals to scientific method were not empirically grounded.

A late, and largely unexpected, criticism of scientific method came from within science itself. Beginning in the early 2000s, a number of scientists attempting to replicate the results of published experiments could not do so. There may be close conceptual connection between reproducibility and method. For example, if reproducibility means that the same scientific methods ought to produce the same result, and all scientific results ought to be reproducible, then whatever it takes to reproduce a scientific result ought to be called scientific method. Space limits us to the observation that, insofar as reproducibility is a desired outcome of proper scientific method, it is not strictly a part of scientific method. (See the entry on reproducibility of scientific results .)

By the close of the 20 th century the search for the scientific method was flagging. Nola and Sankey (2000b) could introduce their volume on method by remarking that “For some, the whole idea of a theory of scientific method is yester-year’s debate …”.

4. Statistical methods for hypothesis testing

Despite the many difficulties that philosophers encountered in trying to provide a clear methodology of confirmation (or refutation), important progress has nonetheless been made on understanding how observation can provide evidence for a given theory. Work in statistics has been crucial for understanding how theories can be tested empirically, and in recent decades a huge literature has developed that attempts to recast confirmation in Bayesian terms. Here these developments can be covered only briefly, and we refer to the entry on confirmation for further details and references.

Statistics has come to play an increasingly important role in the methodology of the experimental sciences from the 19 th century onwards. At that time, statistics and probability theory took on a methodological role as an analysis of inductive inference, and attempts to ground the rationality of induction in the axioms of probability theory have continued throughout the 20 th century and into the present. Developments in the theory of statistics itself, meanwhile, have had a direct and immense influence on the experimental method, including methods for measuring the uncertainty of observations such as the Method of Least Squares developed by Legendre and Gauss in the early 19 th century, criteria for the rejection of outliers proposed by Peirce by the mid-19 th century, and the significance tests developed by Gosset (a.k.a. “Student”), Fisher, Neyman & Pearson and others in the 1920s and 1930s (see, e.g., Swijtink 1987 for a brief historical overview; and also the entry on C.S. Peirce ).
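
As a small illustration of one of the methods named above, the sketch below fits a straight line to noisy synthetic data by ordinary least squares, assuming NumPy is available; the data and the choice of a degree-one polynomial are purely illustrative.

```python
# Ordinary least squares in miniature: estimate a line's slope and intercept by
# minimizing the sum of squared residuals. The data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 20)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=x.size)  # "true" line plus measurement noise

slope, intercept = np.polyfit(x, y, deg=1)  # least-squares fit of a degree-1 polynomial
print(f"estimated slope = {slope:.2f}, intercept = {intercept:.2f}")
```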

These developments within statistics then in turn led to a reflective discussion among both statisticians and philosophers of science on how to perceive the process of hypothesis testing: whether it was a rigorous statistical inference that could provide a numerical expression of the degree of confidence in the tested hypothesis, or if it should be seen as a decision between different courses of action that also involved a value component. This led to a major controversy between Fisher on the one side and Neyman and Pearson on the other (see especially Fisher 1955, Neyman 1956 and Pearson 1955, and for analyses of the controversy, e.g., Howie 2002, Marks 2000, Lenhard 2006). On Fisher’s view, hypothesis testing was a methodology for when to accept or reject a statistical hypothesis, namely that a hypothesis should be rejected by evidence if this evidence would be unlikely relative to other possible outcomes, given the hypothesis were true. In contrast, on Neyman and Pearson’s view, the consequence of error also had to play a role when deciding between hypotheses. Introducing the distinction between the error of rejecting a true hypothesis (type I error) and accepting a false hypothesis (type II error), they argued that it depends on the consequences of the error to decide whether it is more important to avoid rejecting a true hypothesis or accepting a false one. Hence, Fisher aimed for a theory of inductive inference that enabled a numerical expression of confidence in a hypothesis. To him, the important point was the search for truth, not utility. In contrast, the Neyman-Pearson approach provided a strategy of inductive behaviour for deciding between different courses of action. Here, the important point was not whether a hypothesis was true, but whether one should act as if it was.
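
The type I / type II distinction can be illustrated with a short simulation, assuming NumPy and SciPy are available. The sample size, effect size, significance level, and the choice of a t-test below are illustrative assumptions, not anything prescribed by Fisher or by Neyman and Pearson.

```python
# Estimate error rates by simulation: how often does a test at the 0.05 level
# reject a true null hypothesis (type I error), and how often does it fail to
# reject the null when a modest real effect exists (type II error)?
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n, trials = 0.05, 30, 2000

type_1 = type_2 = 0
for _ in range(trials):
    # Null true: both groups drawn from the same distribution.
    a, b = rng.normal(0, 1, n), rng.normal(0, 1, n)
    if stats.ttest_ind(a, b).pvalue < alpha:
        type_1 += 1
    # Alternative true: the second group is shifted by a modest effect.
    c, d = rng.normal(0, 1, n), rng.normal(0.5, 1, n)
    if stats.ttest_ind(c, d).pvalue >= alpha:
        type_2 += 1

print(f"estimated type I error rate:  {type_1 / trials:.3f}")
print(f"estimated type II error rate: {type_2 / trials:.3f}")
```

Which of the two error rates matters more in a given case is exactly the kind of consequence-weighing question Neyman and Pearson raised.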

Similar discussions are found in the philosophical literature. On the one side, Churchman (1948) and Rudner (1953) argued that because scientific hypotheses can never be completely verified, a complete analysis of the methods of scientific inference includes ethical judgments in which the scientists must decide whether the evidence is sufficiently strong or that the probability is sufficiently high to warrant the acceptance of the hypothesis, which again will depend on the importance of making a mistake in accepting or rejecting the hypothesis. Others, such as Jeffrey (1956) and Levi (1960) disagreed and instead defended a value-neutral view of science on which scientists should bracket their attitudes, preferences, temperament, and values when assessing the correctness of their inferences. For more details on this value-free ideal in the philosophy of science and its historical development, see Douglas (2009) and Howard (2003). For a broad set of case studies examining the role of values in science, see e.g. Elliott & Richards 2017.

In recent decades, philosophical discussions of the evaluation of probabilistic hypotheses by statistical inference have largely focused on Bayesianism that understands probability as a measure of a person’s degree of belief in an event, given the available information, and frequentism that instead understands probability as a long-run frequency of a repeatable event. Hence, for Bayesians probabilities refer to a state of knowledge, whereas for frequentists probabilities refer to frequencies of events (see, e.g., Sober 2008, chapter 1 for a detailed introduction to Bayesianism and frequentism as well as to likelihoodism). Bayesianism aims at providing a quantifiable, algorithmic representation of belief revision, where belief revision is a function of prior beliefs (i.e., background knowledge) and incoming evidence. Bayesianism employs a rule based on Bayes’ theorem, a theorem of the probability calculus which relates conditional probabilities. The probability that a particular hypothesis is true is interpreted as a degree of belief, or credence, of the scientist. There will also be a probability and a degree of belief that a hypothesis will be true conditional on a piece of evidence (an observation, say) being true. Bayesianism prescribes that it is rational for the scientist to update their belief in the hypothesis to that conditional probability should it turn out that the evidence is, in fact, observed (see, e.g., Sprenger & Hartmann 2019 for a comprehensive treatment of Bayesian philosophy of science). Originating in the work of Neyman and Pearson, frequentism aims at providing the tools for reducing long-run error rates, such as the error-statistical approach developed by Mayo (1996) that focuses on how experimenters can avoid both type I and type II errors by building up a repertoire of procedures that detect errors if and only if they are present. Both Bayesianism and frequentism have developed over time, they are interpreted in different ways by their various proponents, and their relations to previous criticisms of attempts at defining scientific method are seen differently by proponents and critics. The literature, surveys, reviews and criticism in this area are vast and the reader is referred to the entries on Bayesian epistemology and confirmation .
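
A bare-bones sketch of the Bayesian updating described above: a prior credence in a hypothesis H is revised via Bayes' theorem once a piece of evidence E is observed. The numbers are hypothetical.

```python
# Bayes' theorem as a belief-updating rule: P(H | E) computed from the prior
# P(H) and the likelihoods P(E | H) and P(E | not-H). All values are invented.
def bayes_update(prior_h: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Return P(H | E) given P(H), P(E | H) and P(E | not-H)."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1.0 - prior_h)  # total probability of E
    return p_e_given_h * prior_h / p_e

prior = 0.30           # initial credence in the hypothesis
likelihood = 0.80      # probability of the evidence if the hypothesis is true
alt_likelihood = 0.20  # probability of the evidence if the hypothesis is false

posterior = bayes_update(prior, likelihood, alt_likelihood)
print(f"credence after observing the evidence: {posterior:.2f}")  # roughly 0.63
```

On this picture, observing the evidence lifts the scientist's credence from 0.30 to about 0.63; further evidence would be folded in by repeating the same update.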

5. Method in Practice

Attention to scientific practice, as we have seen, is not itself new. However, the recent turn to practice in the philosophy of science can be seen as a correction to the pessimism with respect to method in philosophy of science in later parts of the 20 th century, and as an attempted reconciliation between sociological and rationalist explanations of scientific knowledge. Much of this work sees method as detailed and context specific problem-solving procedures, and methodological analyses to be at the same time descriptive, critical and advisory (see Nickles 1987 for an exposition of this view). The following sections survey some of these practice-focused studies, turning fully to topics rather than chronology.

5.1 Creative and exploratory practices

A problem with the distinction between the contexts of discovery and justification that figured so prominently in philosophy of science in the first half of the 20 th century (see section 2 ) is that no such distinction can be clearly seen in scientific activity (see Arabatzis 2006). Thus, in recent decades, it has been recognized that the study of conceptual innovation and change should not be confined to the psychology and sociology of science, but that conceptual innovation and change are also important aspects of scientific practice which philosophy of science should address (see also the entry on scientific discovery ). Looking for the practices that drive conceptual innovation has led philosophers to examine both the reasoning practices of scientists and the wide realm of experimental practices that are not directed narrowly at testing hypotheses, that is, exploratory experimentation.

Examining the reasoning practices of historical and contemporary scientists, Nersessian (2008) has argued that new scientific concepts are constructed as solutions to specific problems by systematic reasoning, and that analogy, visual representation, and thought-experimentation are among the important reasoning practices employed. These ubiquitous forms of reasoning are reliable—but also fallible—methods of conceptual development and change. On her account, model-based reasoning consists of cycles of construction, simulation, evaluation and adaptation of models that serve as interim interpretations of the target problem to be solved. Often, this process will lead to modifications or extensions, and a new cycle of simulation and evaluation. However, Nersessian also emphasizes that

creative model-based reasoning cannot be applied as a simple recipe, is not always productive of solutions, and even its most exemplary usages can lead to incorrect solutions. (Nersessian 2008: 11)

Thus, while on the one hand she agrees with many previous philosophers that there is no logic of discovery, discoveries can derive from reasoned processes, such that a large and integral part of scientific practice is

the creation of concepts through which to comprehend, structure, and communicate about physical phenomena …. (Nersessian 1987: 11)

Similarly, work on heuristics for discovery and theory construction by scholars such as Darden (1991) and Bechtel & Richardson (1993) presents science as problem solving and investigates scientific problem solving as a special case of problem solving in general. Drawing largely on cases from the biological sciences, much of their focus has been on reasoning strategies for the generation, evaluation, and revision of mechanistic explanations of complex systems.

Addressing another aspect of the context distinction, namely the traditional view that the primary role of experiments is to test theoretical hypotheses according to the H-D model, other philosophers of science have argued for additional roles that experiments can play. The notion of exploratory experimentation was introduced to describe experiments driven by the desire to obtain empirical regularities and to develop concepts and classifications in which these regularities can be described (Steinle 1997, 2002; Burian 1997; Waters 2007). However, the difference between theory-driven experimentation and exploratory experimentation should not be seen as a sharp distinction. Theory-driven experiments are not always directed at testing hypotheses, but may also be directed at various kinds of fact-gathering, such as determining numerical parameters. Vice versa, exploratory experiments are usually informed by theory in various ways and are therefore not theory-free. Instead, in exploratory experiments phenomena are investigated without first limiting the possible outcomes of the experiment on the basis of extant theory about the phenomena.

The development of high-throughput instrumentation in molecular biology and neighbouring fields has given rise to a special type of exploratory experimentation that collects and analyses very large amounts of data. These new ‘omics’ disciplines are often said to represent a break with the ideal of hypothesis-driven science (Burian 2007; Elliott 2007; Waters 2007; O’Malley 2007) and are instead described as data-driven research (Leonelli 2012; Strasser 2012) or as a special kind of “convenience experimentation” in which many experiments are done simply because they are extraordinarily convenient to perform (Krohs 2012).

5.2 Computer methods and ‘new ways’ of doing science

The field of omics just described is possible because of the ability of computers to process, in a reasonable amount of time, the huge quantities of data required. Computers allow for more elaborate experimentation (higher speed, better filtering, more variables, sophisticated coordination and control), but also, through modelling and simulations, might constitute a form of experimentation themselves. Here, too, we can pose a version of the general question of method versus practice: does the practice of using computers fundamentally change scientific method, or merely provide a more efficient means of implementing standard methods?

Because computers can be used to automate measurements, quantifications, calculations, and statistical analyses where, for practical reasons, these operations cannot be otherwise carried out, many of the steps involved in reaching a conclusion on the basis of an experiment are now made inside a “black box”, without the direct involvement or awareness of a human. This has epistemological implications, regarding what we can know, and how we can know it. To have confidence in the results, computer methods are therefore subjected to tests of verification and validation.

The distinction between verification and validation is easiest to characterize in the case of computer simulations. In a typical computer simulation scenario computers are used to numerically integrate differential equations for which no analytic solution is available. The equations are part of the model the scientist uses to represent a phenomenon or system under investigation. Verifying a computer simulation means checking that the equations of the model are being correctly approximated. Validating a simulation means checking that the equations of the model are adequate for the inferences one wants to make on the basis of that model.
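
As a concrete, if artificial, illustration of this distinction, the following sketch simulates exponential decay, dy/dt = -k*y, with a simple Euler scheme. The model, the parameter values, and the “measured” data are all invented for illustration: comparing the numerical result with the analytic solution is an instance of verification, while comparing simulated values with measurements is an instance of validation.

```python
import math

# Toy model: exponential decay, dy/dt = -k * y.  The parameter values and
# the "measured" observations below are invented solely for illustration.

def simulate(y0, k, dt, steps):
    """Numerically integrate dy/dt = -k * y with the explicit Euler scheme."""
    y = y0
    trajectory = [y]
    for _ in range(steps):
        y = y + dt * (-k * y)
        trajectory.append(y)
    return trajectory

y0, k, dt, steps = 1.0, 0.5, 0.01, 400   # integrate from t = 0 to t = 4
sim = simulate(y0, k, dt, steps)

# VERIFICATION: is the equation being approximated correctly?  This toy
# equation has an analytic solution, y(t) = y0 * exp(-k * t), so the
# numerical error of the scheme can be checked directly.
t_end = dt * steps
analytic = y0 * math.exp(-k * t_end)
print(f"verification: |numerical - analytic| at t={t_end} is {abs(sim[-1] - analytic):.1e}")

# VALIDATION: is the model adequate for the system under study?  Compare
# simulated values against (hypothetical) measurements of that system.
measurements = {1.0: 0.62, 2.0: 0.36, 3.0: 0.23}   # time -> measured value
for t, measured in measurements.items():
    print(f"validation: t={t}, simulated {sim[round(t / dt)]:.3f}, measured {measured:.3f}")
```

In realistic cases the equations typically have no analytic solution, which is why verification in practice relies on indirect checks such as convergence tests and benchmark problems.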

A number of issues related to computer simulations have been raised. The identification of verification and validation as the testing methods has been criticized. Oreskes et al. (1994) raise concerns that “validation”, because it suggests deductive inference, might lead to over-confidence in the results of simulations. The distinction itself is probably too clean, since actual practice in the testing of simulations mixes and moves back and forth between the two (Weissert 1997; Parker 2008a; Winsberg 2010). Computer simulations do seem to have a non-inductive character, given that the principles by which they operate are built in by the programmers, and any results of the simulation follow from those in-built principles in such a way that those results could, in principle, be deduced from the program code and its inputs. The status of simulations as experiments has therefore been examined (Kaufmann and Smarr 1993; Humphreys 1995; Hughes 1999; Norton and Suppe 2001). This literature considers the epistemology of these experiments: what we can learn by simulation, and also the kinds of justifications which can be given in applying that knowledge to the “real” world (Mayo 1996; Parker 2008b). As pointed out, part of the advantage of computer simulation derives from the fact that huge numbers of calculations can be carried out without requiring direct observation by the experimenter/simulator. At the same time, many of these calculations are approximations to the calculations which would be performed first-hand in an ideal situation. Both factors introduce uncertainties into the inferences drawn from what is observed in the simulation.

For many of the reasons described above, computer simulations do not seem to belong clearly to either the experimental or the theoretical domain. Rather, they seem to crucially involve aspects of both. This has led some authors, such as Fox Keller (2003: 200), to argue that we ought to consider computer simulation a “qualitatively different way of doing science”. The literature in general tends to follow Kaufmann and Smarr (1993) in referring to computer simulation as a “third way” for scientific methodology (theoretical reasoning and experimental practice are the first two ways). It should also be noted that the debates around these issues have tended to focus on the form of computer simulation typical in the physical sciences, where models are based on dynamical equations. Other forms of simulation might not have the same problems, or might have problems of their own (see the entry on computer simulations in science).

In recent years, the rapid development of machine learning techniques has prompted some scholars to suggest that the scientific method has become “obsolete” (Anderson 2008; Carrol and Goodstein 2009). This has resulted in an intense debate on the relative merits of data-driven and hypothesis-driven research (for samples, see, e.g., Mazzocchi 2015 or Succi and Coveney 2018). For a detailed treatment of this topic, we refer to the entry on scientific research and big data.

6. Discourse on scientific method

Despite philosophical disagreements, the idea of the scientific method still figures prominently in contemporary discourse on many different topics, both within science and in society at large. Often, reference to scientific method is used in ways that either convey the legend of a single, universal method characteristic of all science or grant a particular method or set of methods privileged status as a special ‘gold standard’, often with reference to particular philosophers to vindicate the claims. Discourse on scientific method also typically arises when there is a need to distinguish between science and other activities, or to justify the special status conveyed to science. In these areas, the philosophical attempts at identifying a set of methods characteristic of scientific endeavors are closely related to the philosophy of science’s classical problem of demarcation (see the entry on science and pseudo-science) and to the philosophical analysis of the social dimension of scientific knowledge and the role of science in democratic society.

One of the settings in which the legend of a single, universal scientific method has been particularly strong is science education (see, e.g., Bauer 1992; McComas 1996; Wivagg & Allchin 2002).[5] Often, ‘the scientific method’ is presented in textbooks and educational web pages as a fixed four- or five-step procedure: starting from observations and description of a phenomenon, progressing over formulation of a hypothesis which explains the phenomenon, designing and conducting experiments to test the hypothesis, analyzing the results, and ending with drawing a conclusion. Such references to a universal scientific method can be found in educational material at all levels of science education (Blachowicz 2009), and numerous studies have shown that the idea of a general and universal scientific method often forms part of both students’ and teachers’ conception of science (see, e.g., Aikenhead 1987; Osborne et al. 2003). In response, it has been argued that science education needs to focus more on teaching about the nature of science, although views have differed on whether this is best done through student-led investigations, contemporary cases, or historical cases (Allchin, Andersen & Nielsen 2014).

Although occasionally phrased with reference to the H-D method, important historical roots of the legend in science education of a single, universal scientific method are the American philosopher and psychologist John Dewey’s account of inquiry in How We Think (1910) and the British mathematician Karl Pearson’s account of science in The Grammar of Science (1892). On Dewey’s account, inquiry is divided into the five steps of

(i) a felt difficulty, (ii) its location and definition, (iii) suggestion of a possible solution, (iv) development by reasoning of the bearing of the suggestions, (v) further observation and experiment leading to its acceptance or rejection. (Dewey 1910: 72)

Similarly, on Pearson’s account, scientific investigations start with measurement of data and observation of their correlation and sequence, from which scientific laws can be discovered with the aid of creative imagination. These laws have to be subject to criticism, and their final acceptance will have equal validity for “all normally constituted minds”. Both Dewey’s and Pearson’s accounts should be seen as generalized abstractions of inquiry and not restricted to the realm of science—although both Dewey and Pearson referred to their respective accounts as ‘the scientific method’.

Occasionally, scientists make sweeping statements about a simple and distinct scientific method, as exemplified by Feynman’s simplified version of a conjectures-and-refutations method presented, for example, in the last of his 1964 Cornell Messenger lectures.[6] However, just as often scientists have come to the same conclusion as recent philosophy of science: that there is no unique, easily described scientific method. For example, the physicist and Nobel Laureate Weinberg described in the paper “The Methods of Science … And Those By Which We Live” (1995) how

The fact that the standards of scientific success shift with time does not only make the philosophy of science difficult; it also raises problems for the public understanding of science. We do not have a fixed scientific method to rally around and defend. (1995: 8)

Interview studies of scientists’ conceptions of method show that scientists often find it hard to figure out whether available evidence confirms their hypothesis, and that there are no direct translations between general ideas about method and specific strategies to guide how research is conducted (Schickore & Hangel 2019; Hangel & Schickore 2017).

Reference to the scientific method has also often been used to argue for the scientific nature or special status of a particular activity. Philosophical positions that argue for a simple and unique scientific method as a criterion of demarcation, such as Popperian falsification, have often attracted practitioners who felt a need to defend their domain of practice. For example, references to conjectures and refutations as the scientific method are abundant in much of the literature on complementary and alternative medicine (CAM)—alongside the competing position that CAM, as an alternative to conventional biomedicine, needs to develop its own methodology different from that of science.

Also within mainstream science, reference to the scientific method is used in arguments regarding the internal hierarchy of disciplines and domains. A frequently seen argument is that research based on the H-D method is superior to research based on induction from observations because in deductive inferences the conclusion follows necessarily from the premises. (See, e.g., Parascandola 1998 for an analysis of how this argument has been used to downgrade epidemiology compared to the laboratory sciences.) Similarly, based on an examination of the practices of major funding institutions such as the National Institutes of Health (NIH), the National Science Foundation (NSF) and the Biotechnology and Biological Sciences Research Council (BBSRC) in the UK, O’Malley et al. (2009) have argued that funding agencies seem to have a tendency to adhere to the view that the primary activity of science is to test hypotheses, while descriptive and exploratory research is seen as a merely preparatory activity that is valuable only insofar as it fuels hypothesis-driven research.

In some areas of science, scholarly publications are structured in a way that may convey the impression of a neat and linear process of inquiry from stating a question, devising the methods by which to answer it, and collecting the data, to drawing a conclusion from the analysis of data. For example, the codified format of publications in most biomedical journals known as the IMRAD format (Introduction, Methods, Results, and Discussion) is explicitly described by the journal editors as “not an arbitrary publication format but rather a direct reflection of the process of scientific discovery” (see the so-called “Vancouver Recommendations”, ICMJE 2013: 11). However, scientific publications do not in general reflect the process by which the reported scientific results were produced. For example, under the provocative title “Is the scientific paper a fraud?”, Medawar argued that scientific papers generally misrepresent how the results they report have been produced (Medawar 1963/1996). Similar views have been advanced by philosophers, historians and sociologists of science (Gilbert 1976; Holmes 1987; Knorr-Cetina 1981; Schickore 2008; Suppe 1998) who have argued that scientists’ experimental practices are messy and often do not follow any recognizable pattern. Publications of research results, they argue, are retrospective reconstructions of these activities that often do not preserve the temporal order or the logic of these activities, but are instead often constructed in order to screen off potential criticism (see Schickore 2008 for a review of this work).

Philosophical positions on the scientific method have also made it into the court room, especially in the US, where judges have drawn on philosophy of science in deciding when to confer special status to scientific expert testimony. A key case is Daubert v. Merrell Dow Pharmaceuticals (92–102, 509 U.S. 579, 1993). In this case, the Supreme Court argued in its 1993 ruling that trial judges must ensure that expert testimony is reliable, and that in doing this the court must look at the expert’s methodology to determine whether the proffered evidence is actually scientific knowledge. Further, referring to the works of Popper and Hempel, the court stated that

ordinarily, a key question to be answered in determining whether a theory or technique is scientific knowledge … is whether it can be (and has been) tested. (Justice Blackmun, Daubert v. Merrell Dow Pharmaceuticals; see Other Internet Resources for a link to the opinion)

But as argued by Haack (2005a,b, 2010) and by Foster & Huber (1999), by equating the question of whether a piece of testimony is reliable with the question of whether it is scientific, as indicated by a special methodology, the court produced an inconsistent mixture of Popper’s and Hempel’s philosophies, and this has led to considerable confusion in subsequent case rulings that drew on the Daubert case (see Haack 2010 for a detailed exposition).

The difficulties around identifying the methods of science are also reflected in the difficulties of identifying scientific misconduct in the form of improper application of the method or methods of science. One of the first and most influential attempts at defining misconduct in science was the US definition from 1989 that defined misconduct as

fabrication, falsification, plagiarism, or other practices that seriously deviate from those that are commonly accepted within the scientific community. (Code of Federal Regulations, part 50, subpart A, August 8, 1989; italics added)

However, the “other practices that seriously deviate” clause was heavily criticized because it could be used to suppress creative or novel science. For example, the National Academy of Sciences stated in its report Responsible Science (1992) that it

wishes to discourage the possibility that a misconduct complaint could be lodged against scientists based solely on their use of novel or unorthodox research methods. (NAS: 27)

This clause was therefore later removed from the definition. For an entry into the key philosophical literature on conduct in science, see Shamoo & Resnik (2009).

7. Conclusion

The question of the source of the success of science has been at the core of philosophy since the beginning of modern science. If viewed as a matter of epistemology more generally, scientific method is a part of the entire history of philosophy. Over that time, science and whatever methods its practitioners may employ have changed dramatically. Today, many philosophers have taken up the banners of pluralism or of practice to focus on what are, in effect, fine-grained and contextually limited examinations of scientific method. Others hope to shift perspectives in order to provide a renewed general account of what characterizes the activity we call science.

One such perspective has been offered recently by Hoyningen-Huene (2008, 2013), who argues from the history of philosophy of science that after three lengthy phases of characterizing science by its method, we are now in a phase where the belief in the existence of a positive scientific method has eroded and what is left to characterize science is only its fallibility. First was a phase from Plato and Aristotle up until the 17th century where the specificity of scientific knowledge was seen in its absolute certainty established by proof from evident axioms; next was a phase up to the mid-19th century in which the means to establish the certainty of scientific knowledge had been generalized to include inductive procedures as well. In the third phase, which lasted until the last decades of the 20th century, it was recognized that empirical knowledge was fallible, but it was still granted a special status due to its distinctive mode of production. But now in the fourth phase, according to Hoyningen-Huene, historical and philosophical studies have shown how “scientific methods with the characteristics as posited in the second and third phase do not exist” (2008: 168) and there is no longer any consensus among philosophers and historians of science about the nature of science. For Hoyningen-Huene, this is too negative a stance, and he therefore takes up the question about the nature of science anew. His own answer to this question is that “scientific knowledge differs from other kinds of knowledge, especially everyday knowledge, primarily by being more systematic” (Hoyningen-Huene 2013: 14). Systematicity can have several different dimensions: among them are more systematic descriptions, explanations, predictions, defense of knowledge claims, epistemic connectedness, ideal of completeness, knowledge generation, representation of knowledge and critical discourse. Hence, what characterizes science is the greater care in excluding possible alternative explanations, the more detailed elaboration with respect to data on which predictions are based, the greater care in detecting and eliminating sources of error, the more articulate connections to other pieces of knowledge, etc. On this position, what characterizes science is not that the methods employed are unique to science, but that the methods are more carefully employed.

Another, similar approach has been offered by Haack (2003). Like Hoyningen-Huene, she sets off from a dissatisfaction with the recent clash between what she calls Old Deferentialism and New Cynicism. The Old Deferentialist position is that science progressed inductively by accumulating true theories confirmed by empirical evidence, or deductively by testing conjectures against basic statements; the New Cynics’ position is that science has no epistemic authority and no uniquely rational method and is merely politics. Haack insists that, contrary to the views of the New Cynics, there are objective epistemic standards, and there is something epistemologically special about science, even though the Old Deferentialists pictured this in the wrong way. Instead, she offers a new Critical Commonsensist account on which standards of good, strong, supportive evidence and well-conducted, honest, thorough and imaginative inquiry are not exclusive to the sciences, but are the standards by which we judge all inquirers. In this sense, science does not differ in kind from other kinds of inquiry, but it may differ in the degree to which it requires broad and detailed background knowledge and a familiarity with a technical vocabulary that only specialists may possess.

  • Aikenhead, G.S., 1987, “High-school graduates’ beliefs about science-technology-society. III. Characteristics and limitations of scientific knowledge”, Science Education , 71(4): 459–487.
  • Allchin, D., H.M. Andersen and K. Nielsen, 2014, “Complementary Approaches to Teaching Nature of Science: Integrating Student Inquiry, Historical Cases, and Contemporary Cases in Classroom Practice”, Science Education , 98: 461–486.
  • Anderson, C., 2008, “The end of theory: The data deluge makes the scientific method obsolete”, Wired magazine , 16(7): 16–07
  • Arabatzis, T., 2006, “On the inextricability of the context of discovery and the context of justification”, in Revisiting Discovery and Justification , J. Schickore and F. Steinle (eds.), Dordrecht: Springer, pp. 215–230.
  • Barnes, J. (ed.), 1984, The Complete Works of Aristotle, Vols I and II , Princeton: Princeton University Press.
  • Barnes, B. and D. Bloor, 1982, “Relativism, Rationalism, and the Sociology of Knowledge”, in Rationality and Relativism , M. Hollis and S. Lukes (eds.), Cambridge: MIT Press, pp. 1–20.
  • Bauer, H.H., 1992, Scientific Literacy and the Myth of the Scientific Method , Urbana: University of Illinois Press.
  • Bechtel, W. and R.C. Richardson, 1993, Discovering complexity , Princeton, NJ: Princeton University Press.
  • Berkeley, G., 1734, The Analyst in De Motu and The Analyst: A Modern Edition with Introductions and Commentary , D. Jesseph (trans. and ed.), Dordrecht: Kluwer Academic Publishers, 1992.
  • Blachowicz, J., 2009, “How science textbooks treat scientific method: A philosopher’s perspective”, The British Journal for the Philosophy of Science , 60(2): 303–344.
  • Bloor, D., 1991, Knowledge and Social Imagery, Chicago: University of Chicago Press, 2nd edition.
  • Boyle, R., 1682, New experiments physico-mechanical, touching the air , Printed by Miles Flesher for Richard Davis, bookseller in Oxford.
  • Bridgman, P.W., 1927, The Logic of Modern Physics , New York: Macmillan.
  • –––, 1956, “The Methodological Character of Theoretical Concepts”, in The Foundations of Science and the Concepts of Science and Psychology , Herbert Feigl and Michael Scriven (eds.), Minnesota: University of Minneapolis Press, pp. 38–76.
  • Burian, R., 1997, “Exploratory Experimentation and the Role of Histochemical Techniques in the Work of Jean Brachet, 1938–1952”, History and Philosophy of the Life Sciences , 19(1): 27–45.
  • –––, 2007, “On microRNA and the need for exploratory experimentation in post-genomic molecular biology”, History and Philosophy of the Life Sciences , 29(3): 285–311.
  • Carnap, R., 1928, Der logische Aufbau der Welt , Berlin: Bernary, transl. by R.A. George, The Logical Structure of the World , Berkeley: University of California Press, 1967.
  • –––, 1956, “The methodological character of theoretical concepts”, Minnesota studies in the philosophy of science , 1: 38–76.
  • Carrol, S., and D. Goodstein, 2009, “Defining the scientific method”, Nature Methods , 6: 237.
  • Churchman, C.W., 1948, “Science, Pragmatics, Induction”, Philosophy of Science , 15(3): 249–268.
  • Cooper, J. (ed.), 1997, Plato: Complete Works , Indianapolis: Hackett.
  • Darden, L., 1991, Theory Change in Science: Strategies from Mendelian Genetics, Oxford: Oxford University Press.
  • Dewey, J., 1910, How we think , New York: Dover Publications (reprinted 1997).
  • Douglas, H., 2009, Science, Policy, and the Value-Free Ideal , Pittsburgh: University of Pittsburgh Press.
  • Dupré, J., 2004, “Miracle of Monism ”, in Naturalism in Question , Mario De Caro and David Macarthur (eds.), Cambridge, MA: Harvard University Press, pp. 36–58.
  • Elliott, K.C., 2007, “Varieties of exploratory experimentation in nanotoxicology”, History and Philosophy of the Life Sciences , 29(3): 311–334.
  • Elliott, K. C., and T. Richards (eds.), 2017, Exploring inductive risk: Case studies of values in science , Oxford: Oxford University Press.
  • Falcon, Andrea, 2005, Aristotle and the science of nature: Unity without uniformity , Cambridge: Cambridge University Press.
  • Feyerabend, P., 1978, Science in a Free Society, London: New Left Books.
  • –––, 1988, Against Method, London: Verso, 2nd edition.
  • Fisher, R.A., 1955, “Statistical Methods and Scientific Induction”, Journal of The Royal Statistical Society. Series B (Methodological) , 17(1): 69–78.
  • Foster, K. and P.W. Huber, 1999, Judging Science. Scientific Knowledge and the Federal Courts , Cambridge: MIT Press.
  • Fox Keller, E., 2003, “Models, Simulation, and ‘computer experiments’”, in The Philosophy of Scientific Experimentation , H. Radder (ed.), Pittsburgh: Pittsburgh University Press, 198–215.
  • Gilbert, G., 1976, “The transformation of research findings into scientific knowledge”, Social Studies of Science , 6: 281–306.
  • Gimbel, S., 2011, Exploring the Scientific Method , Chicago: University of Chicago Press.
  • Goodman, N., 1965, Fact, Fiction, and Forecast, Indianapolis: Bobbs-Merrill.
  • Haack, S., 1995, “Science is neither sacred nor a confidence trick”, Foundations of Science , 1(3): 323–335.
  • –––, 2003, Defending science—within reason , Amherst: Prometheus.
  • –––, 2005a, “Disentangling Daubert: an epistemological study in theory and practice”, Journal of Philosophy, Science and Law, 5, available online. doi:10.5840/jpsl2005513
  • –––, 2005b, “Trial and error: The Supreme Court’s philosophy of science”, American Journal of Public Health , 95: S66-S73.
  • –––, 2010, “Federal Philosophy of Science: A Deconstruction-and a Reconstruction”, NYUJL & Liberty , 5: 394.
  • Hangel, N. and J. Schickore, 2017, “Scientists’ conceptions of good research practice”, Perspectives on Science, 25(6): 766–791.
  • Harper, W.L., 2011, Isaac Newton’s Scientific Method: Turning Data into Evidence about Gravity and Cosmology , Oxford: Oxford University Press.
  • Hempel, C., 1950, “Problems and Changes in the Empiricist Criterion of Meaning”, Revue Internationale de Philosophie , 41(11): 41–63.
  • –––, 1951, “The Concept of Cognitive Significance: A Reconsideration”, Proceedings of the American Academy of Arts and Sciences , 80(1): 61–77.
  • –––, 1965, Aspects of scientific explanation and other essays in the philosophy of science , New York–London: Free Press.
  • –––, 1966, Philosophy of Natural Science , Englewood Cliffs: Prentice-Hall.
  • Holmes, F.L., 1987, “Scientific writing and scientific discovery”, Isis , 78(2): 220–235.
  • Howard, D., 2003, “Two left turns make a right: On the curious political career of North American philosophy of science at midcentury”, in Logical Empiricism in North America , G.L. Hardcastle & A.W. Richardson (eds.), Minneapolis: University of Minnesota Press, pp. 25–93.
  • Hoyningen-Huene, P., 2008, “Systematicity: The nature of science”, Philosophia , 36(2): 167–180.
  • –––, 2013, Systematicity. The Nature of Science , Oxford: Oxford University Press.
  • Howie, D., 2002, Interpreting probability: Controversies and developments in the early twentieth century , Cambridge: Cambridge University Press.
  • Hughes, R., 1999, “The Ising Model, Computer Simulation, and Universal Physics”, in Models as Mediators , M. Morgan and M. Morrison (eds.), Cambridge: Cambridge University Press, pp. 97–145
  • Hume, D., 1739, A Treatise of Human Nature , D. Fate Norton and M.J. Norton (eds.), Oxford: Oxford University Press, 2000.
  • Humphreys, P., 1995, “Computational science and scientific method”, Minds and Machines , 5(1): 499–512.
  • ICMJE, 2013, “Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals”, International Committee of Medical Journal Editors, available online, accessed August 13, 2014.
  • Jeffrey, R.C., 1956, “Valuation and Acceptance of Scientific Hypotheses”, Philosophy of Science , 23(3): 237–246.
  • Kaufmann, W.J., and L.L. Smarr, 1993, Supercomputing and the Transformation of Science , New York: Scientific American Library.
  • Knorr-Cetina, K., 1981, The Manufacture of Knowledge , Oxford: Pergamon Press.
  • Krohs, U., 2012, “Convenience experimentation”, Studies in History and Philosophy of Biological and Biomedical Sciences, 43: 52–57.
  • Kuhn, T.S., 1962, The Structure of Scientific Revolutions, Chicago: University of Chicago Press.
  • Latour, B. and S. Woolgar, 1986, Laboratory Life: The Construction of Scientific Facts, Princeton: Princeton University Press, 2nd edition.
  • Laudan, L., 1968, “Theories of scientific method from Plato to Mach”, History of Science , 7(1): 1–63.
  • Lenhard, J., 2006, “Models and statistical inference: The controversy between Fisher and Neyman-Pearson”, The British Journal for the Philosophy of Science , 57(1): 69–91.
  • Leonelli, S., 2012, “Making Sense of Data-Driven Research in the Biological and the Biomedical Sciences”, Studies in the History and Philosophy of the Biological and Biomedical Sciences , 43(1): 1–3.
  • Levi, I., 1960, “Must the scientist make value judgments?”, Philosophy of Science , 57(11): 345–357
  • Lindley, D., 1991, Theory Change in Science: Strategies from Mendelian Genetics , Oxford: Oxford University Press.
  • Lipton, P., 2004, Inference to the Best Explanation, London: Routledge, 2nd edition.
  • Marks, H.M., 2000, The progress of experiment: science and therapeutic reform in the United States, 1900–1990 , Cambridge: Cambridge University Press.
  • Mazzocchi, F., 2015, “Could Big Data be the end of theory in science?”, EMBO reports, 16: 1250–1255.
  • Mayo, D.G., 1996, Error and the Growth of Experimental Knowledge , Chicago: University of Chicago Press.
  • McComas, W.F., 1996, “Ten myths of science: Reexamining what we think we know about the nature of science”, School Science and Mathematics , 96(1): 10–16.
  • Medawar, P.B., 1963/1996, “Is the scientific paper a fraud”, in The Strange Case of the Spotted Mouse and Other Classic Essays on Science , Oxford: Oxford University Press, 33–39.
  • Mill, J.S., 1963, Collected Works of John Stuart Mill, J.M. Robson (ed.), Toronto: University of Toronto Press.
  • NAS, 1992, Responsible Science: Ensuring the integrity of the research process , Washington DC: National Academy Press.
  • Nersessian, N.J., 1987, “A cognitive-historical approach to meaning in scientific theories”, in The process of science , N. Nersessian (ed.), Berlin: Springer, pp. 161–177.
  • –––, 2008, Creating Scientific Concepts , Cambridge: MIT Press.
  • Newton, I., 1726, Philosophiae naturalis Principia Mathematica (3rd edition), in The Principia: Mathematical Principles of Natural Philosophy: A New Translation, I.B. Cohen and A. Whitman (trans.), Berkeley: University of California Press, 1999.
  • –––, 1704, Opticks or A Treatise of the Reflections, Refractions, Inflections & Colors of Light , New York: Dover Publications, 1952.
  • Neyman, J., 1956, “Note on an Article by Sir Ronald Fisher”, Journal of the Royal Statistical Society. Series B (Methodological) , 18: 288–294.
  • Nickles, T., 1987, “Methodology, heuristics, and rationality”, in Rational changes in science: Essays on Scientific Reasoning , J.C. Pitt (ed.), Berlin: Springer, pp. 103–132.
  • Nicod, J., 1924, Le problème logique de l’induction , Paris: Alcan. (Engl. transl. “The Logical Problem of Induction”, in Foundations of Geometry and Induction , London: Routledge, 2000.)
  • Nola, R. and H. Sankey, 2000a, “A selective survey of theories of scientific method”, in Nola and Sankey 2000b: 1–65.
  • –––, 2000b, After Popper, Kuhn and Feyerabend. Recent Issues in Theories of Scientific Method , London: Springer.
  • –––, 2007, Theories of Scientific Method , Stocksfield: Acumen.
  • Norton, S., and F. Suppe, 2001, “Why atmospheric modeling is good science”, in Changing the Atmosphere: Expert Knowledge and Environmental Governance , C. Miller and P. Edwards (eds.), Cambridge, MA: MIT Press, 88–133.
  • O’Malley, M., 2007, “Exploratory experimentation and scientific practice: Metagenomics and the proteorhodopsin case”, History and Philosophy of the Life Sciences , 29(3): 337–360.
  • O’Malley, M., C. Haufe, K. Elliot, and R. Burian, 2009, “Philosophies of Funding”, Cell , 138: 611–615.
  • Oreskes, N., K. Shrader-Frechette, and K. Belitz, 1994, “Verification, Validation and Confirmation of Numerical Models in the Earth Sciences”, Science , 263(5147): 641–646.
  • Osborne, J., S. Simon, and S. Collins, 2003, “Attitudes towards science: a review of the literature and its implications”, International Journal of Science Education , 25(9): 1049–1079.
  • Parascandola, M., 1998, “Epidemiology—2nd-Rate Science”, Public Health Reports, 113(4): 312–320.
  • Parker, W., 2008a, “Franklin, Holmes and the Epistemology of Computer Simulation”, International Studies in the Philosophy of Science , 22(2): 165–83.
  • –––, 2008b, “Computer Simulation through an Error-Statistical Lens”, Synthese , 163(3): 371–84.
  • Pearson, K., 1892, The Grammar of Science, London: J.M. Dent and Sons, 1951.
  • Pearson, E.S., 1955, “Statistical Concepts in Their Relation to Reality”, Journal of the Royal Statistical Society , B, 17: 204–207.
  • Pickering, A., 1984, Constructing Quarks: A Sociological History of Particle Physics , Edinburgh: Edinburgh University Press.
  • Popper, K.R., 1959, The Logic of Scientific Discovery, London: Routledge, 2002.
  • –––, 1963, Conjectures and Refutations , London: Routledge, 2002.
  • –––, 1985, Unended Quest: An Intellectual Autobiography, La Salle: Open Court Publishing Co.
  • Rudner, R., 1953, “The Scientist Qua Scientist Making Value Judgments”, Philosophy of Science , 20(1): 1–6.
  • Rudolph, J.L., 2005, “Epistemology for the masses: The origin of ‘The Scientific Method’ in American Schools”, History of Education Quarterly, 45(3): 341–376.
  • Schickore, J., 2008, “Doing science, writing science”, Philosophy of Science , 75: 323–343.
  • Schickore, J. and N. Hangel, 2019, “‘It might be this, it should be that…’ uncertainty and doubt in day-to-day science practice”, European Journal for Philosophy of Science , 9(2): 31. doi:10.1007/s13194-019-0253-9
  • Shamoo, A.E. and D.B. Resnik, 2009, Responsible Conduct of Research , Oxford: Oxford University Press.
  • Shank, J.B., 2008, The Newton Wars and the Beginning of the French Enlightenment , Chicago: The University of Chicago Press.
  • Shapin, S. and S. Schaffer, 1985, Leviathan and the air-pump , Princeton: Princeton University Press.
  • Smith, G.E., 2002, “The Methodology of the Principia”, in The Cambridge Companion to Newton , I.B. Cohen and G.E. Smith (eds.), Cambridge: Cambridge University Press, 138–173.
  • Snyder, L.J., 1997a, “Discoverers’ Induction”, Philosophy of Science , 64: 580–604.
  • –––, 1997b, “The Mill-Whewell Debate: Much Ado About Induction”, Perspectives on Science , 5: 159–198.
  • –––, 1999, “Renovating the Novum Organum: Bacon, Whewell and Induction”, Studies in History and Philosophy of Science , 30: 531–557.
  • Sober, E., 2008, Evidence and Evolution. The Logic Behind the Science, Cambridge: Cambridge University Press.
  • Sprenger, J. and S. Hartmann, 2019, Bayesian philosophy of science , Oxford: Oxford University Press.
  • Steinle, F., 1997, “Entering New Fields: Exploratory Uses of Experimentation”, Philosophy of Science (Proceedings), 64: S65–S74.
  • –––, 2002, “Experiments in History and Philosophy of Science”, Perspectives on Science , 10(4): 408–432.
  • Strasser, B.J., 2012, “Data-driven sciences: From wonder cabinets to electronic databases”, Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences , 43(1): 85–87.
  • Succi, S. and P.V. Coveney, 2018, “Big data: the end of the scientific method?”, Philosophical Transactions of the Royal Society A , 377: 20180145. doi:10.1098/rsta.2018.0145
  • Suppe, F., 1998, “The Structure of a Scientific Paper”, Philosophy of Science , 65(3): 381–405.
  • Swijtink, Z.G., 1987, “The objectification of observation: Measurement and statistical methods in the nineteenth century”, in The probabilistic revolution. Ideas in History, Vol. 1 , L. Kruger (ed.), Cambridge MA: MIT Press, pp. 261–285.
  • Waters, C.K., 2007, “The nature and context of exploratory experimentation: An introduction to three case studies of exploratory research”, History and Philosophy of the Life Sciences , 29(3): 275–284.
  • Weinberg, S., 1995, “The methods of science… and those by which we live”, Academic Questions , 8(2): 7–13.
  • Weissert, T., 1997, The Genesis of Simulation in Dynamics: Pursuing the Fermi-Pasta-Ulam Problem , New York: Springer Verlag.
  • Harvey, W., 1628, Exercitatio Anatomica de Motu Cordis et Sanguinis in Animalibus, in On the Motion of the Heart and Blood in Animals, R. Willis (trans.), Buffalo: Prometheus Books, 1993.
  • Winsberg, E., 2010, Science in the Age of Computer Simulation , Chicago: University of Chicago Press.
  • Wivagg, D. & D. Allchin, 2002, “The Dogma of the Scientific Method”, The American Biology Teacher , 64(9): 645–646
  • Blackmun opinion, in Daubert v. Merrell Dow Pharmaceuticals (92–102), 509 U.S. 579 (1993).

Scientific Method: Step 6: CONCLUSION

Finally, you've reached your conclusion. Now it is time to summarize and explain what happened in your experiment. Your conclusion should answer the question posed in step one, and it should be based solely on your results.

Think about the following questions:

  • Was your hypothesis correct?
  • If your hypothesis wasn't correct, what can you conclude from that?
  • Do you need to run your experiment again changing a variable?
  • Is your data clearly defined so everyone can understand the results and follow your reasoning?

Remember, even a failed experiment can yield a valuable lesson.  
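
Where your data are numerical, a simple statistical check can make the "was my hypothesis correct?" question more concrete. The sketch below is an illustrative addition rather than part of the guide: the plant-height measurements, the predicted mean of 20 cm, and the 0.05 cutoff are all hypothetical, and it assumes the scipy library is installed.

```python
from scipy import stats

# Hypothetical example: suppose the hypothesis predicted that fertilized
# plants would reach a mean height of 20 cm.  The measurements below are
# invented for illustration only.
measured_heights_cm = [21.3, 19.8, 22.1, 20.5, 23.0, 19.2, 21.7, 22.4]
predicted_mean_cm = 20.0

# One-sample t-test: do the measurements differ from the predicted mean
# by more than chance variation would explain?
t_stat, p_value = stats.ttest_1samp(measured_heights_cm, predicted_mean_cm)

mean = sum(measured_heights_cm) / len(measured_heights_cm)
print(f"mean of measurements: {mean:.2f} cm (predicted: {predicted_mean_cm} cm)")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# A conventional (and simplified) reading of the result:
if p_value < 0.05:
    print("The results differ from the prediction; the hypothesis needs revising.")
else:
    print("The results are consistent with the prediction.")
```

Whatever the outcome, write the conclusion in terms of what the data show, not what you expected them to show.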

Draw your conclusion

  • Conclusion Sections in Scientific Research Reports (The Writing Center at George Mason)
  • Sample Conclusions (Science Buddies)

    6 step scientific method. Home; Step 1: QUESTION; Step 2: RESEARCH; Step 3: HYPOTHESIS; Step 4: EXPERIMENT; Step 5: DATA; Step 6: CONCLUSION; Resources; Step 6: Conclusion. Finally, you've reached your conclusion. Now it is time to summarize and explain what happened in your experiment. Your conclusion should answer the question posed in step one.