
Biology library, Unit 1: The scientific method

Introduction

  • Make an observation.
  • Ask a question.
  • Form a hypothesis, or testable explanation.
  • Make a prediction based on the hypothesis.
  • Test the prediction.
  • Iterate: use the results to make new hypotheses or predictions.

Scientific method example: Failure to toast

1. Make an observation.

  • Observation: the toaster won't toast.

2. Ask a question.

  • Question: Why won't my toaster toast?

3. Propose a hypothesis.

  • Hypothesis: Maybe the outlet is broken.

4. Make predictions.

  • Prediction: If I plug the toaster into a different outlet, then it will toast the bread.

5. Test the predictions.

  • Test of prediction: Plug the toaster into a different outlet and try again.
  • If the toaster does toast, then the hypothesis is supported—likely correct.
  • If the toaster doesn't toast, then the hypothesis is not supported—likely wrong.

6. Iterate.

  • Iteration time!
  • If the hypothesis was supported, we might do additional tests to confirm it, or revise it to be more specific. For instance, we might investigate why the outlet is broken.
  • If the hypothesis was not supported, we would come up with a new hypothesis. For instance, the next hypothesis might be that there's a broken wire in the toaster.
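The observe → hypothesize → predict → test → iterate cycle above can be sketched as a short loop. This is a minimal illustration, not a real diagnostic tool; the hypotheses, predictions, and simulated test outcomes are hypothetical, mirroring the toaster story:

```python
# Minimal sketch of the observe -> hypothesize -> predict -> test -> iterate
# cycle. The hypotheses and test outcomes are hypothetical, for illustration.

def run_scientific_method(observation, hypotheses, run_test):
    """Try each hypothesis in turn until one's prediction is supported."""
    for hypothesis, prediction in hypotheses:
        supported = run_test(prediction)   # test the prediction
        if supported:
            return hypothesis              # supported -- likely, not proven!
    return None                            # all rejected: form new hypotheses

# Toaster example: each hypothesis pairs with a testable prediction.
hypotheses = [
    ("The outlet is broken", "toaster works in a different outlet"),
    ("A wire inside the toaster is broken", "toaster fails in every outlet"),
]

# Pretend experiment: in this simulated run, the toaster still fails
# when plugged into a new outlet.
def run_test(prediction):
    toaster_toasts_elsewhere = False
    if prediction == "toaster works in a different outlet":
        return toaster_toasts_elsewhere
    if prediction == "toaster fails in every outlet":
        return not toaster_toasts_elsewhere
    return False

print(run_scientific_method("The toaster won't toast", hypotheses, run_test))
# -> A wire inside the toaster is broken
```

In a supported case, iteration would continue with more specific tests; in an unsupported case, new hypotheses would be appended and the loop rerun.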


What is the Scientific Method: How does it work and why is it important?

The scientific method is a systematic process involving steps like defining questions, forming hypotheses, conducting experiments, and analyzing data. It minimizes biases and enables replicable research, leading to groundbreaking discoveries like Einstein's theory of relativity, penicillin, and the structure of DNA. This ongoing approach promotes reason, evidence, and the pursuit of truth in science.

Updated on November 18, 2023


Beginning in elementary school, we are exposed to the scientific method and taught how to put it into practice. As a tool for learning, it prepares children to think logically and use reasoning when seeking answers to questions.

Rather than jumping to conclusions, the scientific method gives us a recipe for exploring the world through observation and trial and error. We use it regularly, sometimes knowingly in academics or research, and sometimes subconsciously in our daily lives.

In this article we will refresh our memories on the particulars of the scientific method, discussing where it comes from, which elements comprise it, and how it is put into practice. Then, we will consider the importance of the scientific method, who uses it and under what circumstances.

What is the scientific method?

The scientific method is a dynamic process that involves objectively investigating questions through observation and experimentation. Applicable to all scientific disciplines, this systematic approach to answering questions is more accurately described as a flexible set of principles than as a fixed series of steps.

The following representations of the scientific method illustrate how it can be both condensed into broad categories and also expanded to reveal more and more details of the process. These graphics capture the adaptability that makes this concept universally valuable as it is relevant and accessible not only across age groups and educational levels but also within various contexts.


Steps in the scientific method

While the scientific method is versatile in form and function, it encompasses a collection of principles that create a logical progression to the process of problem solving:

  • Define a question : Constructing a clear and precise problem statement that identifies the main question or goal of the investigation is the first step. The wording must lend itself to experimentation by posing a question that is both testable and measurable.
  • Gather information and resources : Researching the topic in question to find out what is already known and what types of related questions others are asking is the next step in this process. This background information is vital to gaining a full understanding of the subject and in determining the best design for experiments. 
  • Form a hypothesis : Composing a concise statement that identifies specific variables and potential results, which can then be tested, is a crucial step that must be completed before any experimentation. An imperfection in the composition of a hypothesis can result in weaknesses to the entire design of an experiment.
  • Perform the experiments : Testing the hypothesis by performing replicable experiments and collecting resultant data is another fundamental step of the scientific method. By controlling some elements of an experiment while purposely manipulating others, cause and effect relationships are established.
  • Analyze the data : Interpreting the experimental process and results by recognizing trends in the data is a necessary step for comprehending its meaning and supporting the conclusions. Drawing inferences through this systematic process lends substantive evidence for either supporting or rejecting the hypothesis.
  • Report the results : Sharing the outcomes of an experiment, through an essay, presentation, graphic, or journal article, is often regarded as a final step in this process. Detailing the project's design, methods, and results not only promotes transparency and replicability but also adds to the body of knowledge for future research.
  • Retest the hypothesis : Repeating experiments to see if a hypothesis holds up in all cases is a step that is manifested through varying scenarios. Sometimes a researcher immediately checks their own work or replicates it at a future time, or another researcher will repeat the experiments to further test the hypothesis.


Where did the scientific method come from?

Oftentimes, ancient peoples attempted to answer questions about the unknown by:

  • Making simple observations
  • Discussing the possibilities with others deemed worthy of a debate
  • Drawing conclusions based on dominant opinions and preexisting beliefs

For example, take Greek and Roman mythology. Myths were used to explain everything from the seasons and stars to the sun and death itself.

However, as societies began to grow through advancements in agriculture and language, ancient civilizations like Egypt and Babylonia shifted to a more rational analysis for understanding the natural world. They increasingly employed empirical methods of observation and experimentation that would one day evolve into the scientific method.

In the 4th century BCE, Aristotle, considered the Father of Science by many, suggested these elements, which closely resemble the contemporary scientific method, as part of his approach for conducting science:

  • Study what others have written about the subject.
  • Look for the general consensus about the subject.
  • Perform a systematic study of everything even partially related to the topic.


By continuing to emphasize systematic observation and controlled experiments, scholars such as Al-Kindi and Ibn al-Haytham helped expand this concept throughout the Islamic Golden Age . 

In his 1620 treatise, Novum Organum , Sir Francis Bacon codified the scientific method, arguing not only that hypotheses must be tested through experiments but also that the results must be replicated to establish a truth. Coming at the height of the Scientific Revolution, this text made the scientific method accessible to European thinkers like Galileo and Isaac Newton who then put the method into practice.

As science modernized in the 19th century, the scientific method became more formalized, leading to significant breakthroughs in fields such as evolution and germ theory. Today, it continues to evolve, underpinning scientific progress in diverse areas like quantum mechanics, genetics, and artificial intelligence.

Why is the scientific method important?

The history of the scientific method illustrates how the concept developed out of a need to find objective answers to scientific questions by overcoming biases based on fear, religion, power, and cultural norms. This still holds true today.

By implementing this standardized approach to conducting experiments, the impacts of researchers’ personal opinions and preconceived notions are minimized. The organized manner of the scientific method prevents these and other mistakes while promoting the replicability and transparency necessary for solid scientific research.

The importance of the scientific method is best observed through its successes, for example: 

  • Albert Einstein: “Albert Einstein stands out among modern physicists as the scientist who not only formulated a theory of revolutionary significance but also had the genius to reflect in a conscious and technical way on the scientific method he was using.” Forming a hypothesis grounded in the prevailing understanding of Newtonian physics eventually led Einstein to the theory of general relativity.
  • Howard Florey: “Perhaps the most useful lesson which has come out of the work on penicillin has been the demonstration that success in this field depends on the development and coordinated use of technical methods.” After discovering a mold that prevented the growth of Staphylococcus bacteria, Dr. Alexander Fleming designed experiments to identify and reproduce it in the lab, leading to the development of penicillin.
  • James D. Watson: “Every time you understand something, religion becomes less likely. Only with the discovery of the double helix and the ensuing genetic revolution have we had grounds for thinking that the powers held traditionally to be the exclusive property of the gods might one day be ours. . . .” By building wire models, Watson and Crick formed hypotheses about DNA’s structure and tested them against X-ray diffraction images and contemporary research in atomic physics, resulting in the discovery of DNA’s double helix structure.

Final thoughts

As the cases exemplify, the scientific method is never truly completed, but rather started and restarted. It gave these researchers a structured process that was easily replicated, modified, and built upon. 

While the scientific method may “end” in one context, it never literally ends. When a hypothesis, design, methods, and experiments are revisited, the scientific method simply picks up where it left off. Each time a researcher builds upon previous knowledge, the scientific method is restored with the pieces of past efforts.

By guiding researchers towards objective results based on transparency and reproducibility, the scientific method acts as a defense against bias, superstition, and preconceived notions. As we embrace the scientific method's enduring principles, we ensure that our quest for knowledge remains firmly rooted in reason, evidence, and the pursuit of truth.

The AJE Team

How the Scientific Method Works: An In-Depth Look

Though scientific research encompasses a broad spectrum of disciplines, these experiments all follow the same scientific method.


What, exactly, is science? It's something people in lab coats do, right? Science has been a potent tool, providing us with technology we once never dreamt possible. It has also helped us answer questions that have sat dormant in the human psyche for millennia.

The history of science, however, is filled with revolutions or modifications of accepted theory. Newton described gravity as an immutable background entity, an ever-present force that permeated the cosmos.

That was until Einstein came along with general relativity and described how gravity emerged out of the interaction between mass and the fabric of spacetime. Scientists are constantly seeking a deeper explanation of reality, and so scientists have to be ready for a better theory or model to come along and replace it.

The process by which scientific discovery and change occur has been distilled into what is referred to as the scientific method.

What Is the Scientific Method?

The scientific method is a systematic approach used by scientists to investigate and understand natural phenomena. It consists of a series of steps that guide researchers in drawing conclusions from hypotheses.

"Science never achieves final truth in theories, but one theory can be objectively truer than another, even if we never know that for sure," says British physicist David Deutsch from the University of Oxford. Deutsch is the author of  The Beginning of Infinity , a book that argues science will never reach a point in which it can describe the entirety of phenomena in the physical world, as new theories will bring along with them deeper problems in need of explanation.

What Are the Steps of the Scientific Method?

The steps of the scientific method hold importance as they provide a structured and systematic approach to conducting scientific investigations. The following steps promote the credibility of scientific findings.

Step One — Identify the Question

Firstly, scientists identify phenomena they want to investigate. This could be based on an interesting observation that was collected from data, or it could be a mathematical problem that arises out of current theories. As such, the first step is to ask  why  something is the way it is — defining the research question in established terms, setting up a line of inquiry, and identifying possible methods for answering said question.

Step Two — Make Predictions 

After defining a research question, scientists are likely to develop a hypothesis or prediction based on what theoretical framework they adopt or the set of observations they have already made. This particular step in the scientific process is important because it relates to the 'testability' of certain theories or claims about the physical world. Generally, when distinguishing scientific predictions/claims from non-scientific predictions/claims, the difference is whether they are testable or not.

However, just because we cannot test something now doesn't mean it doesn't count as science. As science delves into ever more extreme parts of the physical world, whether at scales very small or very large in space, or very short or very long in time, our ability to test theories is limited by the technology we have. That doesn't mean we shouldn't develop theories that attempt to explain the farthest reaches of the physical world.

For example, for a long time, astrophysicists developed mathematical models of the evolution of the early Universe. However, they did not possess an instrument to confirm their predictions. This did not mean their theories were unscientific. It just meant they had to rely on mathematics and general principles before the  James Webb Space Telescope  could observe that far back in the early Universe.

Step Three — Gather Evidence 

Once a testable prediction or hypothesis has been made, evidence is gathered to test the prediction. Evidence can be acquired in several different ways. Scientists can observe the natural world to see if their models match what is happening in reality; for example, astrophysicists use the James Webb Space Telescope to observe the early Universe to see if their models of galaxy formation match observations.

Scientists can also run experiments in a laboratory, like the particle physicists who smash subatomic particles together at CERN to see what happens next. Or they might input their parameters and run computer simulations. Sometimes scientists combine these strategies, repeating them as many times as possible to replicate their findings, and share the work with other scientists who critique the research and give valuable feedback.

Step Four — Analyze the Data 

Once scientists have collected data from their various methods, they organize it into tables, graphs, or diagrams that can reveal relationships, connections, or anomalies relevant to their research question.

Step Five — Form a Conclusion 

And lastly, scientists will evaluate their hypothesis or prediction in light of their observations to see if it was supported or not. Sometimes results won't provide a clear answer, and new ways of testing might have to be devised.

Or they might get clear results and send their findings to a scientific journal, where they can be peer-reviewed by other scientists, published, and become part of the accepted corpus of knowledge on a particular subject. Sometimes new results modify or overturn what already exists on a given subject.

Is Science Objective?

Science attempts to be as objective as possible by removing the bias people bring to the scientific process and interpretation of scientific results. Science has a number of ways to help correct these biases, such as using large data sets, peer review and controlling the parameters of experiments.

However, it is important to remember that science is carried out by humans, and things like bias, intuition, and historical contingencies can affect the results and direction of science. For example, scientific explanations are often accused of being 'reductionistic' (e.g., consciousness is the firing of neurons in the brain). Yet reductionist explanations of phenomena are largely an artifact of the historical contingencies of science.

The sciences which developed the fastest (physics and chemistry) dealt with small scales of reality, and so scientists applied these approaches to try and explain macroscopic phenomena like consciousness.

All in all, science is the best system we have developed for discerning knowledge about the physical world. Like us, science is a work in progress, and the more we learn about the world and ourselves through science, the better we get at sharpening the tools and methods of science itself.

Science and the scientific method: Definitions and examples

Here's a look at the foundation of doing science — the scientific method.

Kids follow the scientific method to carry out an experiment.

The scientific method


Science is a systematic and logical approach to discovering how things in the universe work. It is also the body of knowledge accumulated through the discoveries about all the things in the universe. 

The word "science" is derived from the Latin word "scientia," which means knowledge based on demonstrable and reproducible data, according to the Merriam-Webster dictionary . True to this definition, science aims for measurable results through testing and analysis, a process known as the scientific method. Science is based on fact, not opinion or preferences. The process of science is designed to challenge ideas through research. One important aspect of the scientific process is that it focuses only on the natural world, according to the University of California, Berkeley . Anything that is considered supernatural, or beyond physical reality, does not fit into the definition of science.

When conducting research, scientists use the scientific method to collect measurable, empirical evidence in an experiment related to a hypothesis (often in the form of an if/then statement) that is designed to support or contradict a scientific theory .

"As a field biologist, my favorite part of the scientific method is being in the field collecting the data," Jaime Tanner, a professor of biology at Marlboro College, told Live Science. "But what really makes that fun is knowing that you are trying to answer an interesting question. So the first step in identifying questions and generating possible answers (hypotheses) is also very important and is a creative process. Then once you collect the data you analyze it to see if your hypothesis is supported or not."

Here's an illustration showing the steps in the scientific method.

The steps of the scientific method go something like this, according to Highline College :

  • Make an observation or observations.
  • Form a hypothesis — a tentative description of what's been observed, and make predictions based on that hypothesis.
  • Test the hypothesis and predictions in an experiment that can be reproduced.
  • Analyze the data and draw conclusions; accept or reject the hypothesis or modify the hypothesis if necessary.
  • Reproduce the experiment until there are no discrepancies between observations and theory. "Replication of methods and results is my favorite step in the scientific method," Moshe Pritsker, a former post-doctoral researcher at Harvard Medical School and CEO of JoVE, told Live Science. "The reproducibility of published experiments is the foundation of science. No reproducibility — no science."

Some key underpinnings to the scientific method:

  • The hypothesis must be testable and falsifiable, according to North Carolina State University . Falsifiable means that there must be a possible negative answer to the hypothesis.
  • Research must involve deductive reasoning and inductive reasoning . Deductive reasoning is the process of using true premises to reach a logical true conclusion while inductive reasoning uses observations to infer an explanation for those observations.
  • An experiment should include an independent variable (which the researcher deliberately changes) and a dependent variable (which is measured and may change in response), according to the University of California, Santa Barbara.
  • An experiment should include an experimental group and a control group. The control group is what the experimental group is compared against, according to Britannica .
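A control-versus-experimental comparison like the one described above can be evaluated with a simple permutation test, sketched here in standard-library Python. The plant-growth numbers are invented for illustration; only the procedure is the point.

```python
import random
import statistics

# Hypothetical data: plant heights (cm) for an untreated control group and
# an experimental group given fertilizer. Invented numbers, illustration only.
control      = [12.1, 11.8, 12.5, 11.9, 12.3, 12.0, 11.7, 12.2]
experimental = [13.0, 12.8, 13.4, 12.6, 13.1, 12.9, 13.3, 12.7]

observed_diff = statistics.mean(experimental) - statistics.mean(control)

# Permutation test: under the null hypothesis (treatment has no effect),
# the group labels are arbitrary, so shuffle them and count how often a
# difference at least this large arises by chance.
random.seed(0)
pooled = control + experimental
n = len(control)
count = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[n:]) - statistics.mean(pooled[:n])
    if diff >= observed_diff:
        count += 1

p_value = count / trials
print(f"observed difference: {observed_diff:.2f} cm, p = {p_value:.4f}")
# A small p-value means the observed difference is unlikely under the null
# hypothesis, supporting (not proving) that the treatment affects growth.
```

This mirrors the logic of falsifiability: the test gives the null hypothesis a concrete chance to survive, and rejecting it is always provisional.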

The process of generating and testing a hypothesis forms the backbone of the scientific method. When an idea has been confirmed over many experiments, it can be called a scientific theory. While a theory provides an explanation for a phenomenon, a scientific law provides a description of a phenomenon, according to The University of Waikato . One example would be the law of conservation of energy, which is the first law of thermodynamics that says that energy can neither be created nor destroyed. 

A law describes an observed phenomenon, but it doesn't explain why the phenomenon exists or what causes it. "In science, laws are a starting place," said Peter Coppinger, an associate professor of biology and biomedical engineering at the Rose-Hulman Institute of Technology. "From there, scientists can then ask the questions, 'Why and how?'"

Laws are generally considered to be without exception, though some laws have been modified over time after further testing found discrepancies. For instance, Newton's laws of motion describe everything we've observed in the macroscopic world, but they break down at the subatomic level.

This does not mean theories are not meaningful. For a hypothesis to become a theory, scientists must conduct rigorous testing, typically across multiple disciplines by separate groups of scientists. Saying something is "just a theory" confuses the scientific definition of "theory" with the layperson's definition. To most people a theory is a hunch. In science, a theory is the framework for observations and facts, Tanner told Live Science.

This Copernican heliocentric solar system, from 1708, shows the orbit of the moon around the Earth, and the orbits of the Earth and planets round the sun, including Jupiter and its moons, all surrounded by the 12 signs of the zodiac.

The earliest evidence of science can be found as far back as records exist. Early tablets contain numerals and information about the solar system , which were derived by using careful observation, prediction and testing of those predictions. Science became decidedly more "scientific" over time, however.

1200s: Robert Grosseteste developed the framework for the proper methods of modern scientific experimentation, according to the Stanford Encyclopedia of Philosophy. His works included the principle that an inquiry must be based on measurable evidence that is confirmed through testing.

1400s: Leonardo da Vinci began his notebooks in pursuit of evidence that the human body is microcosmic. The artist, scientist and mathematician also gathered information about optics and hydrodynamics.

1500s: Nicolaus Copernicus advanced the understanding of the solar system with his discovery of heliocentrism. This is a model in which Earth and the other planets revolve around the sun, which is the center of the solar system.

1600s: Johannes Kepler built upon those observations with his laws of planetary motion. Galileo Galilei improved on a new invention, the telescope, and used it to study the sun and planets. The 1600s also saw advancements in the study of physics as Isaac Newton developed his laws of motion.

1700s: Benjamin Franklin discovered that lightning is electrical. He also contributed to the study of oceanography and meteorology. The understanding of chemistry also evolved during this century as Antoine Lavoisier, dubbed the father of modern chemistry , developed the law of conservation of mass.

1800s: Milestones included Alessandro Volta's discoveries regarding electrochemical series, which led to the invention of the battery. John Dalton also introduced atomic theory, which stated that all matter is composed of atoms that combine to form molecules. The basis of the modern study of genetics advanced as Gregor Mendel unveiled his laws of inheritance. Wilhelm Conrad Röntgen discovered X-rays late in the century, while Georg Ohm's law provided the basis for understanding how to harness electrical charges.

1900s: The discoveries of Albert Einstein, who is best known for his theory of relativity, dominated the beginning of the 20th century. Einstein's theory of relativity is actually two separate theories. His special theory of relativity, which he outlined in a 1905 paper, "On the Electrodynamics of Moving Bodies," concluded that time must change according to the speed of a moving object relative to the frame of reference of an observer. His second theory, general relativity, which he published as "The Foundation of the General Theory of Relativity," advanced the idea that matter causes space to curve.

In 1952, Jonas Salk developed the polio vaccine , which reduced the incidence of polio in the United States by nearly 90%, according to Britannica . The following year, James D. Watson and Francis Crick discovered the structure of DNA , which is a double helix formed by base pairs attached to a sugar-phosphate backbone, according to the National Human Genome Research Institute .

2000s: The 21st century saw the first draft of the human genome completed, leading to a greater understanding of DNA. This advanced the study of genetics, its role in human biology and its use as a predictor of diseases and other disorders, according to the National Human Genome Research Institute .

  • This video from City University of New York delves into the basics of what defines science.
  • Learn about what makes science science in this book excerpt from Washington State University .
  • This resource from the University of Michigan — Flint explains how to design your own scientific study.

Merriam-Webster Dictionary, Scientia. 2022. https://www.merriam-webster.com/dictionary/scientia

University of California, Berkeley, "Understanding Science: An Overview." 2022. https://undsci.berkeley.edu/article/0_0_0/intro_01

Highline College, "Scientific method." July 12, 2015. https://people.highline.edu/iglozman/classes/astronotes/scimeth.htm  

North Carolina State University, "Science Scripts." https://projects.ncsu.edu/project/bio183de/Black/science/science_scripts.html  

University of California, Santa Barbara, "What is an Independent variable?" October 31, 2017. http://scienceline.ucsb.edu/getkey.php?key=6045

Encyclopedia Britannica, "Control group." May 14, 2020. https://www.britannica.com/science/control-group  

The University of Waikato, "Scientific Hypothesis, Theories and Laws." https://sci.waikato.ac.nz/evolution/Theories.shtml  

Stanford Encyclopedia of Philosophy, Robert Grosseteste. May 3, 2019. https://plato.stanford.edu/entries/grosseteste/  

Encyclopedia Britannica, "Jonas Salk." October 21, 2021. https://www.britannica.com/biography/Jonas-Salk

National Human Genome Research Institute, "​Phosphate Backbone." https://www.genome.gov/genetics-glossary/Phosphate-Backbone  

National Human Genome Research Institute, "What is the Human Genome Project?" https://www.genome.gov/human-genome-project/What  

Live Science contributor Ashley Hamer updated this article on Jan. 16, 2022.

Alina Bradford



Chemistry LibreTexts

1.1.6: Scientific Problem Solving


How can we use problem solving in our everyday routines?

One day you wake up and realize your clock radio did not turn on to get you out of bed. You are puzzled, so you decide to find out what happened. You list three possible explanations:

  • There was a power failure and your radio cannot turn on.
  • Your little sister turned it off as a joke.
  • You did not set the alarm last night.

Upon investigation, you find that the clock is on, so there is no power failure. Your little sister was spending the night with a friend and could not have turned the alarm off. You notice that the alarm is not set—your forgetfulness made you late. You have used the scientific method to answer a question.

Scientific Problem Solving

Humans have always wondered about the world around them. One question of interest was (and still is): what is this world made of? Chemistry, however it is defined, is the study of matter, and what matter consists of has been a source of debate for centuries. One key arena for this debate in the Western world was Greek philosophy.

The basic approach of the Greek philosophers was to discuss and debate the questions they had about the world. There was no gathering of information to speak of, just talking. As a result, several ideas about matter were put forth, but never resolved. The first philosopher to carry out the gathering of data was Aristotle (384-322 B.C.). He recorded many observations on the weather, on plant and animal life and behavior, on physical motions, and a number of other topics. Aristotle could probably be considered the first "real" scientist, because he made systematic observations of nature and tried to understand what he was seeing.


Inductive and Deductive Reasoning

Two approaches to logical thinking developed over the centuries: inductive reasoning and deductive reasoning. Inductive reasoning draws a general conclusion from a collection of specific examples. Deductive reasoning takes a general principle and draws a specific conclusion from it. Both are used in the development of scientific ideas.

Inductive reasoning first involves the collection of data: "If I add sodium metal to water, I observe a very violent reaction. Every time I repeat the process, I see the same thing happen." A general conclusion is drawn from these observations: the addition of sodium to water results in a violent reaction.

In deductive reasoning, a specific prediction is made based on a general principle. One general principle is that acids turn blue litmus paper red. Using the deductive reasoning process, one might predict: "If I have a bottle of liquid labeled 'acid', I expect the litmus paper to turn red when I immerse it in the liquid."
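The two modes of reasoning can be sketched in a few lines of Python. The sodium observations and the litmus rule are the examples from the text, encoded as illustrative data rather than real measurements:

```python
# Inductive reasoning: repeated specific observations -> a general conclusion.
observations = [
    {"metal": "sodium", "added_to": "water", "reaction": "violent"},
    {"metal": "sodium", "added_to": "water", "reaction": "violent"},
    {"metal": "sodium", "added_to": "water", "reaction": "violent"},
]
all_violent = all(obs["reaction"] == "violent" for obs in observations)
general_rule = "sodium + water -> violent reaction" if all_violent else None

# Deductive reasoning: a general principle -> a specific prediction.
def predict_litmus(substance):
    """General principle: acids turn blue litmus paper red."""
    return "red" if substance == "acid" else "blue"

print(general_rule)            # sodium + water -> violent reaction
print(predict_litmus("acid"))  # red
```

Note the asymmetry: the induced rule could still be overturned by a single new observation, while the deduced prediction is only as good as the general principle it starts from.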

The Idea of the Experiment

Inductive reasoning is at the heart of what is now called the "scientific method." In European culture, this approach was developed mainly by Francis Bacon (1561-1626), a British scholar. He advocated the use of inductive reasoning in every area of life, not just science. The scientific method, as developed by Bacon and others, involves several steps:

  • Ask a question - identify the problem to be considered.
  • Make observations - gather data that pertains to the question.
  • Propose an explanation (a hypothesis) for the observations.
  • Make new observations to test the hypothesis further.
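The steps above can be sketched as a small loop that checks each candidate hypothesis against the gathered observations, in the spirit of the alarm-clock example from the introduction. The facts and checks here are invented stand-ins for real investigation:

```python
# Candidate hypotheses, each paired with a check: given the observed facts,
# is this hypothesis still consistent with them?
hypotheses = {
    "power failure": lambda facts: not facts["clock_is_on"],
    "sister turned it off": lambda facts: facts["sister_was_home"],
    "alarm was never set": lambda facts: not facts["alarm_was_set"],
}

# Observations gathered while investigating (step 2 of the method).
facts = {
    "clock_is_on": True,       # so there was no power failure
    "sister_was_home": False,  # she stayed at a friend's house
    "alarm_was_set": False,    # the alarm was never set
}

# Test each hypothesis against the observations (step 4); keep the survivors.
surviving = [name for name, consistent in hypotheses.items() if consistent(facts)]
print(surviving)  # ['alarm was never set']
```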


Note that this should not be considered a "cookbook" for scientific research. Scientists do not sit down with their daily "to do" list and write down these steps. The steps may not necessarily be followed in order. But this does provide a general idea of how scientific research is usually done.

When a hypothesis is confirmed repeatedly, it eventually becomes a theory: a general principle offered to explain natural phenomena. Note the key word: explain. A theory offers a description of why something happens. A law, on the other hand, is a statement that is always true, but offers no explanation as to why. The law of gravity says a rock will fall when dropped, but does not explain why (gravitational theory is very complex and still incomplete). The kinetic molecular theory of gases, by contrast, not only states what happens when a gas is heated in a closed container (the pressure increases), but also explains why (the motion of the gas molecules increases with temperature). Theories do not get "promoted" to laws, because laws do not answer the "why" question.

Summary

  • The early Greek philosophers spent their time talking about nature, but did little or no actual exploration or investigation.
  • Inductive reasoning - developing a general conclusion from a collection of observations.
  • Deductive reasoning - making a specific statement based on a general principle.
  • Scientific method - a process of observation, developing a hypothesis, and testing that hypothesis.

Review Questions

  • What was the basic shortcoming of the Greek philosophers' approach to studying the material world?
  • How did Aristotle improve the approach?
  • Define “inductive reasoning” and give an example.
  • Define “deductive reasoning” and give an example.
  • What is the difference between a hypothesis and a theory?
  • What is the difference between a theory and a law?


The Fundamentals of Scientific Thinking and Critical Analysis: A Comprehensive Guide


Scientific thinking and critical analysis are fundamental skills that play a crucial role in our daily lives. These skills help individuals to analyze information, evaluate arguments, and make informed decisions based on facts and evidence. The ability to think critically is especially important in the field of science, where scientists rely on logical reasoning and empirical evidence to understand the natural world.

Scientific thinking involves a systematic approach to problem-solving, where individuals use empirical evidence, logical reasoning, and critical thinking to develop hypotheses, test them, and draw conclusions. Critical analysis, on the other hand, involves evaluating information, arguments, and claims in a systematic and objective way to determine their validity and reliability. By combining these two skills, individuals can develop a deeper understanding of the world around them and make informed decisions based on evidence.

In today’s world, where information is readily available, the ability to think critically and analyze information is more important than ever. With so much information at our fingertips, it can be difficult to separate fact from fiction. The ability to think critically and evaluate sources of information is crucial to making informed decisions and avoiding misinformation. Therefore, understanding the fundamentals of scientific thinking and critical analysis is essential for anyone seeking to navigate the complex world of information and science.

Understanding Scientific Thinking

Scientific thinking is the thought process and reasoning involved in the field of science. It encompasses various techniques such as observation, induction, deduction, and experimental design. This section will provide an overview of the scientific method, experimental design, and systematic reasoning.

The Scientific Method

The scientific method is a systematic approach to investigating phenomena, acquiring new knowledge, or correcting and integrating previous knowledge. It involves the following steps:

  • Define the purpose of the experiment
  • Formulate a hypothesis
  • Study the phenomenon and collect data
  • Analyze the data
  • Draw conclusions
  • Communicate the results

Experimental Design

Experimental design involves the planning and execution of experiments to test hypotheses. It involves the following elements:

  • Hypothesis: a tentative explanation for an observation or phenomenon.
  • Experiment: a test of a hypothesis.
  • Variable: any factor that can change in an experiment.
  • Control: an experimental condition that remains constant throughout the experiment.
  • Control group: a group that is not exposed to the experimental treatment.
  • Data collection: gathering data through observation or experimentation.
  • Hypothesis testing: using statistical analysis to determine the probability that an observed effect is due to chance.
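The last element, hypothesis testing, can be made concrete with a small permutation test: shuffle the group labels many times and count how often a difference at least as large as the observed one arises purely by chance. The measurements below are invented for illustration:

```python
import random

# Invented measurements for a treated group and a control group.
treatment = [7.1, 6.8, 7.4, 7.0, 7.3]
control = [6.2, 6.5, 6.1, 6.4, 6.3]

# Observed effect: difference between the group means.
observed = sum(treatment) / len(treatment) - sum(control) / len(control)

# Permutation test: if the treatment had no effect, the group labels are
# arbitrary, so shuffling them should often reproduce the observed gap.
pooled = treatment + control
random.seed(0)  # fixed seed so the sketch is reproducible
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[:5]) / 5 - sum(pooled[5:]) / 5
    if diff >= observed - 1e-9:  # tolerance for floating-point summation order
        extreme += 1

# Fraction of shuffles at least as extreme as the real data.
p_value = extreme / trials
print(round(observed, 2), round(p_value, 4))
```

Here the two groups barely overlap, so only a shuffle that reconstructs the original grouping matches the observed difference, and the estimated p-value comes out well below the conventional 0.05 threshold.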

Systematic Reasoning

Systematic reasoning involves the use of logical and critical thinking to evaluate hypotheses and alternative explanations. It involves the following elements:

  • Induction: using observations to develop general principles or theories.
  • Deduction: using general principles or theories to make predictions about specific observations.
  • Alternative explanations: explanations that differ from the original hypothesis.
  • Qualitative data: descriptive data that cannot be measured numerically.
  • Falsifiable: a hypothesis is falsifiable if it can be tested and potentially proven false.

Scientific thinking is fundamental to the fields of chemistry, physics, biology, and the study of the universe. It involves the use of controls to ensure the validity of experiments and the collection of data to support or refute hypotheses.

The Art of Critical Analysis

Critical analysis is an essential component of scientific thinking. It is a process of evaluating information, ideas, and arguments to form a well-reasoned judgment. The art of critical analysis involves the ability to identify and evaluate arguments, examine evidence, and detect bias. This section will explore the basics of critical analysis, including hypothesis and argument formation, and evaluating evidence.

Hypothesis and Argument Formation

Hypothesis and argument formation are crucial steps in critical analysis. A hypothesis is a proposed explanation for a phenomenon that can be tested through experimentation or observation. It is essential to form a hypothesis that is testable, falsifiable, and based on available evidence. An argument is a set of propositions that support or oppose a particular position. Arguments can be deductive or inductive and may involve premises, evidence, and conclusions.

When forming a hypothesis or argument, it is essential to consider the available evidence and avoid personal bias. Personal biases can influence hypothesis and argument formation, leading to confirmation bias, where individuals seek evidence that supports their pre-existing beliefs, and ignore evidence that contradicts it. It is essential to approach hypothesis and argument formation with an open mind and evaluate evidence objectively.

Evaluating Evidence

Evaluating evidence is a crucial step in critical analysis. Evidence can come in many forms, including data, expert opinions, and personal experiences. When evaluating evidence, it is essential to consider the reliability and objectivity of the source. Reliable evidence is based on accurate and verifiable data, while objective evidence is free from personal bias.

In addition to evaluating the reliability and objectivity of evidence, it is essential to examine the reasoning and logic behind the evidence. Sound reasoning involves using valid arguments that are based on premises that are true and relevant to the conclusion. It is essential to examine the reasoning behind the evidence and ensure that it is logical and valid.

In conclusion, critical analysis means identifying and evaluating arguments, examining evidence, and detecting bias. Forming testable hypotheses and weighing evidence for reliability, objectivity, and sound reasoning are its crucial steps. By approaching these steps with an open mind and guarding against personal bias, individuals can form well-reasoned judgments and make informed decisions.

Scientific Investigation and Research

Scientific investigation and research are essential components of scientific thinking and critical analysis. Research is a systematic process of collecting and analyzing data to answer a research question or test a hypothesis. It involves the use of various research methods to gather data, analyze it, and interpret the results.

Research Methods

Research methods are the techniques used to collect data. They can be qualitative or quantitative. Qualitative research methods are used to gather data that cannot be quantified, such as opinions and attitudes. Quantitative research methods are used to gather data that can be measured and analyzed statistically, such as numerical data.

Some common research methods used in scientific investigation include surveys, experiments, case studies, and observational studies. Each method has its strengths and weaknesses, and the choice of method depends on the research question and the type of data to be collected.

Data Analysis

Once the data has been collected, it is analyzed using statistical methods to identify trends and patterns. Data analysis involves the use of various statistical techniques to test the research hypothesis and draw conclusions from the data.

Interpreting Results

Interpreting research findings involves examining the data and drawing conclusions based on the results of the data analysis. It is important to interpret the results accurately and objectively to ensure the accuracy and validity of the research findings.

Variables are factors that can influence the outcome of the research. The independent variable is the factor that is manipulated in the study, while the dependent variable is the outcome that is measured. The sample size is the number of participants in the study.

Intervention is the process of manipulating the independent variable to observe its effect on the dependent variable. The research process involves selecting a research question, developing a hypothesis, selecting a research method, collecting data, analyzing the data, and interpreting the results.
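These terms can be pinned down with a minimal, invented data set: the intervention manipulates the independent variable (dose), the dependent variable (height) is the measured outcome, and the sample size is simply the number of records:

```python
# Invented trial records: "dose" is the independent variable (set by the
# intervention); "height_cm" is the dependent variable (the measured outcome).
trial = [
    {"dose": 0, "height_cm": 12.0},   # control group: no intervention
    {"dose": 0, "height_cm": 11.5},
    {"dose": 10, "height_cm": 14.2},  # intervention group
    {"dose": 10, "height_cm": 14.8},
]
sample_size = len(trial)

def mean_height(records, dose):
    """Mean of the dependent variable at one level of the independent variable."""
    heights = [r["height_cm"] for r in records if r["dose"] == dose]
    return sum(heights) / len(heights)

# Effect of the intervention: treated mean minus control mean.
effect = mean_height(trial, 10) - mean_height(trial, 0)
print(sample_size, round(effect, 2))  # 4 2.75
```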

Scientific investigation and research require a high degree of accuracy and attention to detail. It is important to ensure that the research is conducted ethically and that the results are reported accurately and objectively. By using appropriate research methods, analyzing the data, and interpreting the results accurately, researchers can make valuable contributions to the field of science.



Bias and Objectivity in Scientific Thinking

Scientific thinking requires a commitment to objectivity, which is the idea that scientific questions, methods, and results should not be affected by personal biases or opinions. However, it is important to recognize that all scientists have some level of personal bias, which can influence their work.

Understanding and Identifying Bias

Bias can take many forms, including confirmation bias, which is the tendency to seek out information that confirms pre-existing beliefs and ignore information that contradicts them. Other biases include selection bias, which occurs when participants in a study are not representative of the population being studied, and publication bias, which occurs when studies with negative results are less likely to be published.

To identify bias in scientific research, it is important to look for potential sources of bias in the study design and analysis. For example, if a study is funded by a company that sells a product related to the study, there may be a conflict of interest that could bias the results. Similarly, if the study design is flawed or the sample size is too small, the results may not be reliable.

Maintaining Objectivity

To maintain objectivity in scientific thinking, it is important to be aware of personal biases and take steps to minimize their influence. This can include using standardized procedures and protocols to ensure that data collection and analysis are consistent and unbiased. It can also involve seeking out diverse perspectives and opinions to avoid groupthink and confirmation bias.

Maintaining objectivity also requires a commitment to transparency and openness in scientific research. This means openly sharing data and methods with other researchers and being willing to revise or retract findings if new evidence emerges.

In conclusion, while it is impossible to eliminate personal bias entirely, scientists can take steps to minimize its influence and maintain objectivity in their work. By being aware of potential sources of bias and taking steps to address them, scientists can ensure that their research is reliable and trustworthy.

The Role of Critical Thinking Skills

Critical thinking skills are essential for scientific thinking and critical analysis. They involve the ability to observe, interpret, question, reason, and make informed decisions based on acquired knowledge. Critical thinking skills enable individuals to analyze and evaluate information, ideas, and arguments to make informed decisions.

Observation and Interpretation

Observation is the first step in critical thinking. It involves the ability to gather information through the senses and interpret it objectively. Observation requires individuals to pay attention to details, identify patterns, and make connections between different pieces of information. Interpretation involves making sense of the information gathered through observation. It requires individuals to analyze and evaluate data to draw conclusions and make informed decisions.

Questioning and Reasoning

Questioning is an essential aspect of critical thinking. It involves the ability to ask relevant questions to clarify and evaluate information. Questioning enables individuals to identify assumptions, biases, and inconsistencies in arguments and ideas. Reasoning involves the ability to use logic and evidence to evaluate arguments and ideas critically. It requires individuals to identify and evaluate the strength and weaknesses of different arguments and ideas.

Making Informed Decisions

Making informed decisions is the ultimate goal of critical thinking. It involves the ability to use critical thinking skills to evaluate and analyze information to make informed decisions. Making informed decisions requires individuals to consider multiple perspectives, evaluate evidence, and weigh the pros and cons of different options. It also involves the ability to communicate ideas and arguments effectively and persuasively.

In short, observation, interpretation, questioning, and reasoning work together: they allow individuals to weigh information, ideas, and arguments and to act on the best-supported conclusion.

Applying Scientific Thinking and Critical Analysis

Scientific thinking and critical analysis are essential skills that can be applied in various aspects of life, including everyday situations, academia, and research. By using these skills, individuals can evaluate information and make informed decisions based on evidence rather than opinions or assumptions.

In Everyday Life

In everyday life, scientific thinking and critical analysis can help individuals make informed decisions about their health, finances, and environment. For example, when evaluating health information, individuals can use scientific thinking to assess the credibility of sources and critically analyze the evidence presented. This can help them make informed decisions about their health and well-being.

Similarly, when making financial decisions, individuals can use critical analysis to evaluate investment opportunities and assess the potential risks and benefits. By applying scientific thinking and critical analysis, individuals can make informed decisions that are based on evidence rather than speculation or hearsay.

In Academia

In college and other academic settings, scientific thinking and critical analysis are essential skills that students need to develop to succeed. By applying these skills, students can evaluate information, analyze data, and make informed decisions about their academic work.

For example, when conducting research, students can use scientific thinking to develop hypotheses, design experiments, and analyze data. By using critical analysis, they can evaluate the credibility of sources and assess the quality of evidence presented.

In Research

In research, scientific thinking and critical analysis are essential skills that researchers need to develop to conduct rigorous and reliable studies. By applying these skills, researchers can design studies that are based on sound scientific principles and analyze data in a rigorous and systematic manner.

For example, when designing a study, researchers can use scientific thinking to develop hypotheses, design experiments, and select appropriate measures. By using critical analysis, they can evaluate the quality of evidence presented and assess the validity of their findings.

Overall, scientific thinking and critical analysis are essential skills that can be applied in various aspects of life. By developing these skills, individuals can evaluate information, analyze data, and make informed decisions based on evidence rather than opinions or assumptions.

The Influence of External Factors

Scientific thinking and critical analysis are not only influenced by internal factors such as cognitive skills, but also by external factors. These external factors can include the role of the author and expert, the impact of time and environment, and the effect of personal motivation. Understanding how these external factors can influence scientific thinking is crucial for researchers and students alike.

The Role of the Author and Expert

The author and expert play an important role in shaping scientific thinking. The credibility and reputation of the author or expert can influence how their work is perceived and accepted in the scientific community. For example, research conducted by top scholars in a field is often considered more credible and influential than research conducted by lesser-known scholars. In a study analyzing the relation between internal and external influences of top economics scholars, the number of pages indexed by Google and Bing was used as a measure of external influence. The study found that although the correlation between internal and external influence is low overall, it is highest among recipients of major key awards such as Nobel laureates.

The Impact of Time and Environment

Time and environment can also have a significant impact on scientific thinking. The cultural and social context in which research is conducted can influence the questions asked, the methods used, and the interpretations made. For example, research conducted in a certain time period may be influenced by the prevailing social and political attitudes of that time. Similarly, research conducted in different geographical regions may be influenced by the cultural norms and values of those regions.

The Effect of Personal Motivation

Personal motivation is another external factor that can influence scientific thinking. Researchers who are motivated by personal interests or financial gain may be more likely to pursue research that supports their interests or financial goals, rather than research that is objective and unbiased. In a study analyzing the factors related to critical thinking abilities of high school students, the significant internal factors were found to be intention and orientation in choosing the study program, while the significant external factors were found to be quality of education and the teacher’s ability to provide guidance.

In conclusion, external factors can have a significant impact on scientific thinking and critical analysis. Researchers and students should be aware of these external factors and take steps to mitigate their influence when conducting research or evaluating scientific claims. By doing so, they can ensure that their work is objective, unbiased, and credible.

Challenges and Misconceptions

Scientific thinking and critical analysis are not easy skills to master. There are many challenges and misconceptions that can hinder one’s ability to think critically. In this section, we will discuss some of the common misconceptions and challenges that people face when trying to think scientifically.

Common Misconceptions

One of the most common misconceptions about scientific thinking is that it is all about memorizing facts and figures. However, this is far from the truth. Scientific thinking is all about questioning assumptions, analyzing evidence, and making logical conclusions based on that evidence. It is not about blindly accepting what someone else tells you.

Another misconception is that scientific thinking is only for scientists. In reality, anyone can benefit from learning to think scientifically. Whether you are a student, a business person, or just someone who wants to make better decisions, scientific thinking can help you achieve your goals.

Overcoming Challenges

One of the biggest challenges with scientific thinking is overcoming our own biases and preconceptions. We all have our own beliefs and assumptions about the world, and these can sometimes get in the way of our ability to think critically. To overcome this challenge, it is important to be aware of our own biases and to actively work to overcome them.

Another challenge is dealing with misinformation and fake news. In today’s world, it is all too easy to be misled by false information. To overcome this challenge, it is important to be skeptical of information that seems too good to be true and to always verify the source of the information before accepting it as true.

In conclusion, scientific thinking and critical analysis are important skills that can help us make better decisions and lead more fulfilling lives. However, there are many challenges and misconceptions that can make it difficult to think scientifically. By being aware of these challenges and actively working to overcome them, we can all become better critical thinkers.

Conclusion

Scientific thinking and critical analysis are essential skills for any individual who wants to make informed decisions and solve problems based on accurate and reliable information. The process of scientific thinking applies logic, research, and method to analyze data and draw conclusions based on evidence. It requires individuals to be unbiased, open-minded, and willing to challenge their assumptions and beliefs.

To develop these skills, individuals must have a strong foundation of knowledge on the subject matter they are analyzing. They must be able to identify and evaluate sources of information based on their accuracy and reliability. They must also be able to recognize and address biases that may affect their analysis and conclusions.

Accuracy is crucial in scientific thinking and critical analysis. Individuals must be able to distinguish between facts and opinions and use evidence-based reasoning to draw conclusions. They must also be able to communicate their findings clearly and concisely to others.

The purpose of scientific thinking and critical analysis is to improve our understanding of the world around us and to make informed decisions based on evidence. By applying these skills, individuals can solve complex problems, identify new opportunities, and contribute to the advancement of knowledge in their respective fields.

Overall, the importance of scientific thinking and critical analysis cannot be overstated. It is a fundamental aspect of human knowledge and progress, and its application has led to numerous breakthroughs and discoveries throughout history. As such, individuals who develop these skills are better equipped to navigate the complexities of the modern world and make informed decisions that positively impact their lives and those around them.

Frequently Asked Questions

What is the importance of scientific thinking in research?

Scientific thinking is crucial in research as it helps to ensure that the research is conducted in a systematic and objective manner. By using scientific thinking, researchers are able to develop hypotheses, design experiments, and analyze data in a way that minimizes bias and maximizes the reliability of the results. Scientific thinking is therefore essential for producing accurate and trustworthy research findings.

What are some examples of scientific thinking in everyday life?

Scientific thinking is not limited to research settings and can be applied in everyday life as well. Examples of scientific thinking in everyday life include using evidence to support arguments, evaluating claims based on data and facts, and making decisions based on logical reasoning. Scientific thinking can also involve questioning assumptions, seeking out new information, and being open to changing one’s beliefs based on new evidence.

What are the basics of scientific thinking?

The basics of scientific thinking include observation, hypothesis formation, experimentation, and analysis of data. Scientific thinking involves being systematic, objective, and logical in one’s approach to problem-solving. It is also important to be aware of one’s biases and assumptions when conducting scientific research.

What are the components of scientific and critical thinking?

The components of scientific and critical thinking include observation, analysis, interpretation, evaluation, and communication. These components are interconnected and involve being systematic, objective, and logical in one’s approach to problem-solving. Scientific and critical thinking also involve being open-minded, questioning assumptions, and seeking out new information.

How does critical thinking relate to scientific thinking?

Critical thinking is closely related to scientific thinking as both involve being systematic, objective, and logical in one’s approach to problem-solving. However, critical thinking can be applied to a wider range of topics beyond scientific research. Critical thinking involves evaluating arguments, analyzing evidence, and making informed decisions based on logical reasoning.

What are the three central components of scientific critical thinking?

The three central components of scientific critical thinking are skepticism, objectivity, and curiosity. Skepticism involves questioning assumptions and being open to changing one’s beliefs based on new evidence. Objectivity involves being unbiased and minimizing personal biases and assumptions when conducting research. Curiosity involves being open to new ideas and seeking out new information to expand one’s understanding of the world.



Scientific Discovery

Scientific discovery is the process or product of successful scientific inquiry. Objects of discovery can be things, events, processes, causes, and properties as well as theories and hypotheses and their features (their explanatory power, for example). Most philosophical discussions of scientific discoveries focus on the generation of new hypotheses that fit or explain given data sets or allow for the derivation of testable consequences. Philosophical discussions of scientific discovery have been intricate and complex because the term “discovery” has been used in many different ways, both to refer to the outcome and to the procedure of inquiry. In the narrowest sense, the term “discovery” refers to the purported “eureka moment” of having a new insight. In the broadest sense, “discovery” is a synonym for “successful scientific endeavor” tout court. Some philosophical disputes about the nature of scientific discovery reflect these terminological variations.

Philosophical issues related to scientific discovery arise about the nature of human creativity, specifically about whether the “eureka moment” can be analyzed and about whether there are rules (algorithms, guidelines, or heuristics) according to which such a novel insight can be brought about. Philosophical issues also arise about the analysis and evaluation of heuristics, about the characteristics of hypotheses worthy of articulation and testing, and, on the meta-level, about the nature and scope of philosophical analysis itself. This essay describes the emergence and development of the philosophical problem of scientific discovery and surveys different philosophical approaches to understanding scientific discovery. In doing so, it also illuminates the meta-philosophical problems surrounding the debates, and, incidentally, the changing nature of philosophy of science.

1. Introduction

2. Scientific inquiry as discovery
3. Elements of discovery
4. Pragmatic logics of discovery
5. The distinction between the context of discovery and the context of justification
6.1 Discovery as abduction
6.2 Heuristic programming
7. Anomalies and the structure of discovery
8.1 Discoverability
8.2 Preliminary appraisal
8.3 Heuristic strategies
9.1 Kinds and features of creativity
9.2 Analogy
9.3 Mental models
10. Machine discovery
11. Social epistemology and discovery
12. Integrated approaches to knowledge generation
Other Internet Resources
Related Entries

Philosophical reflection on scientific discovery occurred in different phases. Prior to the 1930s, philosophers were mostly concerned with discoveries in the broad sense of the term, that is, with the analysis of successful scientific inquiry as a whole. Philosophical discussions focused on the question of whether there were any discernible patterns in the production of new knowledge. Because the concept of discovery did not have a specified meaning and was used in a very wide sense, almost all discussions of scientific method and practice could potentially be considered as early contributions to reflections on scientific discovery. In the course of the 19th century, as philosophy of science and science gradually became two distinct endeavors with different audiences, the term “discovery” became a technical term in philosophical discussions. Different elements of scientific inquiry were specified. Most importantly, during the 19th century, the generation of new knowledge came to be clearly and explicitly distinguished from its assessment, and thus the conditions for the narrower notion of discovery as the act or process of conceiving new ideas emerged. This distinction was encapsulated in the so-called “context distinction” between the “context of discovery” and the “context of justification”.

Much of the discussion about scientific discovery in the 20th century revolved around this distinction. It was argued that conceiving a new idea is a non-rational process, a leap of insight that cannot be captured in specific instructions. Justification, by contrast, is a systematic process of applying evaluative criteria to knowledge claims. Advocates of the context distinction argued that philosophy of science is exclusively concerned with the context of justification. The assumption underlying this argument is that philosophy is a normative project; it determines norms for scientific practice. Given this assumption, only the justification of ideas, not their generation, can be the subject of philosophical (normative) analysis. Discovery, by contrast, can only be a topic for empirical study. By definition, the study of discovery is outside the scope of philosophy of science proper.

The introduction of the context distinction and the disciplinary distinction between empirical science studies and normative philosophy of science that was tied to it spawned meta-philosophical disputes. For a long time, philosophical debates about discovery were shaped by the notion that philosophical and empirical analyses are mutually exclusive. Some philosophers insisted, like their predecessors prior to the 1930s, that the philosopher’s tasks include the analysis of actual scientific practices and that scientific resources be used to address philosophical problems. They maintained that it is a legitimate task for philosophy of science to develop a theory of heuristics or problem solving. But this position was the minority view in philosophy of science until the last decades of the 20th century. Philosophers of discovery were thus compelled to demonstrate that scientific discovery was in fact a legitimate part of philosophy of science. Philosophical reflections about the nature of scientific discovery had to be bolstered by meta-philosophical arguments about the nature and scope of philosophy of science.

Today, however, there is wide agreement that philosophy and empirical research are not mutually exclusive. Not only do empirical studies of actual scientific discoveries in past and present inform philosophical thought about the structure and cognitive mechanisms of discovery, but works in psychology, cognitive science, artificial intelligence and related fields have become integral parts of philosophical analyses of the processes and conditions of the generation of new knowledge. Social epistemology has opened up another perspective on scientific discovery, reconceptualizing knowledge generation as a group process.

Prior to the 19th century, the term “discovery” was used broadly to refer to a new finding, such as a new cure, an unknown territory, an improvement of an instrument, or a new method of measuring longitude. One strand of the discussion about discovery dating back to ancient times concerns the method of analysis as the method of discovery in mathematics and geometry, and, by extension, in philosophy and scientific inquiry. Following the analytic method, we seek to find or discover something – the “thing sought,” which could be a theorem, a solution to a geometrical problem, or a cause – by analyzing it. In the ancient Greek context, analytic methods in mathematics, geometry, and philosophy were not clearly separated; the notion of finding or discovering things by analysis was relevant in all these fields.

In the ensuing centuries, several natural and experimental philosophers, including Avicenna and Zabarella, Bacon and Boyle, the authors of the Port-Royal Logic and Newton, and many others, expounded rules of reasoning and methods for arriving at new knowledge. The ancient notion of analysis still informed these rules and methods. Newton’s famous thirty-first query in the second edition of the Opticks outlines the role of analysis in discovery as follows: “As in Mathematicks, so in Natural Philosophy, the Investigation of difficult Things by the Method of Analysis, ought ever to precede the Method of Composition. This Analysis consists in making Experiments and Observations, and in drawing general Conclusions from them by Induction, and admitting of no Objections against the Conclusions, but such as are taken from Experiments, or other certain Truths … By this way of Analysis we may proceed from Compounds to Ingredients, and from Motions to the Forces producing them; and in general, from Effects to their Causes, and from particular Causes to more general ones, till the Argument end in the most general. This is the Method of Analysis” (Newton 1718, 380, see Koertge 1980, section VI). Early modern accounts of discovery captured knowledge-seeking practices in the study of living and non-living nature, ranging from astronomy and physics to medicine, chemistry, and agriculture. These rich accounts of scientific inquiry were often expounded to bolster particular theories about the nature of matter and natural forces and were not explicitly labeled “methods of discovery”, yet they are, in fact, accounts of knowledge generation and proper scientific reasoning, covering topics such as the role of the senses in knowledge generation, observation and experimentation, analysis and synthesis, induction and deduction, hypotheses, probability, and certainty.

Bacon’s work is a prominent example. His view of the method of science as it is presented in the Novum Organum showed how best to arrive at knowledge about “form natures” (the most general properties of matter) via a systematic investigation of phenomenal natures. Bacon described how first to collect and organize natural phenomena and experimentally produced facts in tables, how to evaluate these lists, and how to refine the initial results with the help of further trials. Through these steps, the investigator would arrive at conclusions about the “form nature” that produces particular phenomenal natures. Bacon expounded the procedures of constructing and evaluating tables of presences and absences to underpin his matter theory. In addition, in his other writings, such as his natural history Sylva Sylvarum or his comprehensive work on human learning De Augmentis Scientiarum, Bacon exemplified the “art of discovery” with practical examples and discussions of strategies of inquiry.

Like Bacon and Newton, several other early modern authors advanced ideas about how to generate and secure empirical knowledge, what difficulties may arise in scientific inquiry, and how they could be overcome. The close connection between theories about matter and force and scientific methodologies that we find in early modern works was gradually severed. 18th- and early 19th-century authors on scientific method and logic cited early modern approaches mostly to model proper scientific practice and reasoning, often creatively modifying them (section 3). Moreover, they developed the earlier methodologies of experimentation, observation, and reasoning into practical guidelines for discovering new phenomena and devising probable hypotheses about cause-effect relations.

It was common in 20th-century philosophy of science to draw a sharp contrast between those early theories of scientific method and modern approaches. 20th-century philosophers of science interpreted 17th- and 18th-century approaches as generative theories of scientific method. Such theories function simultaneously as guides for acquiring new knowledge and as assessments of the knowledge thus obtained, whereby knowledge that is obtained “in the right way” is considered secure (Laudan 1980; Schaffner 1993: chapter 2). On this view, scientific methods are taken to have probative force (Nickles 1985). According to modern, “consequentialist” theories, propositions must be established by comparing their consequences with observed and experimentally produced phenomena (Laudan 1980; Nickles 1985). It was further argued that, when consequentialist theories were on the rise, the two processes of generation and assessment of an idea or hypothesis became distinct, and the view that the merit of a new idea does not depend on the way in which it was arrived at became widely accepted.

More recent research in history of philosophy of science has shown, however, that there was no such sharp contrast. Consequentialist ideas were advanced throughout the 18th century, and the early modern generative theories of scientific method and knowledge were more pragmatic than previously assumed. Early modern scholars did not assume that these generative procedures would lead to absolute certainty; one could only obtain moral certainty for the propositions thus secured.

During the 18th and 19th centuries, the different elements of discovery gradually became separated and discussed in more detail. Discussions concerned the nature of observations and experiments, the act of having an insight, and the processes of articulating, developing, and testing the novel insight. Philosophical discussion focused on the question of whether and to what extent rules could be devised to guide each of these processes.

Numerous 19th-century scholars contributed to these discussions, including Claude Bernard, Auguste Comte, George Gore, John Herschel, W. Stanley Jevons, Justus von Liebig, John Stuart Mill, and Charles Sanders Peirce, to name only a few. William Whewell’s work, especially the two volumes of Philosophy of the Inductive Sciences of 1840, is a noteworthy and, later, much discussed contribution to the philosophical debates about scientific discovery because he explicitly distinguished the creative moment, or “happy thought” as he called it, from other elements of scientific inquiry and because he offered a detailed analysis of the “discoverer’s induction”, i.e., the pursuit and evaluation of the new insight. Whewell’s approach is not unique, but for late 20th-century philosophers of science, his comprehensive, historically informed philosophy of discovery became a point of orientation in the revival of interest in scientific discovery processes.

For Whewell, discovery comprised three elements: the happy thought, the articulation and development of that thought, and the testing or verification of it. His account was in part a description of the psychological makeup of the discoverer. For instance, he held that only geniuses could have those happy thoughts that are essential to discovery. In part, his account was an account of the methods by which happy thoughts are integrated into the system of knowledge. According to Whewell, the initial step in every discovery is what he called “some happy thought, of which we cannot trace the origin; some fortunate cast of intellect, rising above all rules. No maxims can be given which inevitably lead to discovery” (Whewell 1996 [1840]: 186). An “art of discovery” in the sense of a teachable and learnable skill does not exist according to Whewell. The happy thought builds on the known facts, but according to Whewell it is impossible to prescribe a method for having happy thoughts.

In this sense, happy thoughts are accidental. But in an important sense, scientific discoveries are not accidental. The happy thought is not a wild guess. Only the person whose mind is prepared to see things will actually notice them. The “previous condition of the intellect, and not the single fact, is really the main and peculiar cause of the success. The fact is merely the occasion by which the engine of discovery is brought into play sooner or later. It is, as I have elsewhere said, only the spark which discharges a gun already loaded and pointed; and there is little propriety in speaking of such an accident as the cause why the bullet hits its mark.” (Whewell 1996 [1840]: 189).

Having a happy thought is not yet a discovery, however. The second element of a scientific discovery consists in binding together—“colligating”, as Whewell called it—a set of facts by bringing them under a general conception. Not only does the colligation produce something new, but it also shows the previously known facts in a new light. Colligation involves, on the one hand, the specification of facts through systematic observation, measurements and experiment, and on the other hand, the clarification of ideas through the exposition of the definitions and axioms that are tacitly implied in those ideas. This process is extended and iterative. The scientists go back and forth between binding together the facts, clarifying the idea, rendering the facts more exact, and so forth.

The final part of the discovery is the verification of the colligation involving the happy thought. This means, first and foremost, that the outcome of the colligation must be sufficient to explain the data at hand. Verification also involves judging the predictive power, simplicity, and “consilience” of the outcome of the colligation. “Consilience” refers to a higher range of generality (broader applicability) of the theory (the articulated and clarified happy thought) that the actual colligation produced. Whewell’s account of discovery is not a deductivist system. It is essential that the outcome of the colligation be inferable from the data prior to any testing (Snyder 1997).

Whewell’s theory of discovery clearly separates three elements: the non-analyzable happy thought or eureka moment; the process of colligation, which includes the clarification and explication of facts and ideas; and the verification of the outcome of the colligation. His position that the philosophy of discovery cannot prescribe how to think happy thoughts has been a key element of 20th-century philosophical reflection on discovery. In contrast to many 20th-century approaches, Whewell’s philosophical conception of discovery also comprises the processes by which the happy thoughts are articulated. Similarly, the process of verification is an integral part of discovery. The procedures of articulation and test are both analyzable according to Whewell, and his conceptions of colligation and verification serve as guidelines for how the discoverer should proceed. To verify a hypothesis, the investigator needs to show that it accounts for the known facts, that it foretells new, previously unobserved phenomena, and that it can explain and predict phenomena which are explained and predicted by a hypothesis that was obtained through an independent happy thought-cum-colligation (Ducasse 1951).

Whewell’s conceptualization of scientific discovery offers a useful framework for mapping the philosophical debates about discovery and for identifying major issues of concern in 20th-century philosophical debates. Until the late 20th century, most philosophers operated with a notion of discovery that is narrower than Whewell’s. In these narrower treatments, the scope of the term “discovery” is limited to either the first of these elements, the “happy thought”, or to the happy thought and its initial articulation. In the narrower conception, what Whewell called “verification” is not part of discovery proper. Secondly, until the late 20th century, there was wide agreement that the eureka moment, narrowly construed, is an unanalyzable, even mysterious leap of insight. The main disagreements concerned the question of whether the process of developing a hypothesis (the “colligation” in Whewell’s terms) is, or is not, a part of discovery proper – and if it is, whether and how this process is guided by rules. Much of the 20th-century controversy about the possibility of a philosophy of discovery can be understood against the background of this disagreement about whether the process of discovery does or does not include the articulation and development of a novel thought. Philosophers also disagreed on the issue of whether it is a philosophical task to explicate these rules.

In early 20th-century logical empiricism, the view that discovery is or at least crucially involves a non-analyzable creative act of a gifted genius was widespread. Alternative conceptions of discovery, especially in the pragmatist tradition, emphasize that discovery is an extended process, i.e., that the discovery process includes the reasoning processes through which a new insight is articulated and further developed.

In the pragmatist tradition, the term “logic” is used in the broad sense to refer to strategies of human reasoning and inquiry. While the reasoning involved does not proceed according to the principles of demonstrative logic, it is systematic enough to deserve the label “logical”. Proponents of this view argued that traditional (here: syllogistic) logic is an inadequate model of scientific discovery because it misrepresents the process of knowledge generation as grossly as the notion of an “aha moment”.

Early 20 th -century pragmatic logics of discovery can best be described as comprehensive theories of the mental and physical-practical operations involved in knowledge generation, as theories of “how we think” (Dewey 1910). Among the mental operations are classification, determination of what is relevant to an inquiry, and the conditions of communication of meaning; among the physical operations are observation and (laboratory) experiments. These features of scientific discovery are either not or only insufficiently represented by traditional syllogistic logic (Schiller 1917: 236–7).

Philosophers advocating this approach agree that the logic of discovery should be characterized as a set of heuristic principles rather than as a process of applying inductive or deductive logic to a set of propositions. These heuristic principles are not understood to show the path to secure knowledge. Heuristic principles are suggestive rather than demonstrative (Carmichael 1922, 1930). One recurrent feature in these accounts of the reasoning strategies leading to new ideas is analogical reasoning (Schiller 1917; Benjamin 1934, see also section 9.2). However, in academic philosophy of science, endeavors to develop more systematically the heuristics guiding discovery processes were soon eclipsed by the advance of the distinction between contexts of discovery and justification.

The distinction between “context of discovery” and “context of justification” dominated and shaped the discussions about discovery in 20th-century philosophy of science. The context distinction marks the distinction between the generation of a new idea or hypothesis and the defense (test, verification) of it. As the previous sections have shown, the distinction among different elements of scientific inquiry has a long history, but in the first half of the 20th century the distinction between the different features of scientific inquiry turned into a powerful demarcation criterion between “genuine” philosophy and other fields of science studies. The boundary between context of discovery (the de facto thinking processes) and context of justification (the de jure defense of the correctness of these thoughts) was now understood to determine the scope of philosophy of science, whereby philosophy of science is conceived as a normative endeavor. Advocates of the context distinction argue that the generation of a new idea is an intuitive, nonrational process; it cannot be subject to normative analysis. Therefore, the study of scientists’ actual thinking can only be the subject of psychology, sociology, and other empirical sciences. Philosophy of science, by contrast, is exclusively concerned with the context of justification.

The terms “context of discovery” and “context of justification” are often associated with Hans Reichenbach’s work. Reichenbach’s original conception of the context distinction is quite complex, however (Howard 2006; Richardson 2006). It does not map easily on to the disciplinary distinction mentioned above, because for Reichenbach, philosophy of science proper is partly descriptive. Reichenbach maintains that philosophy of science includes a description of knowledge as it really is. Descriptive philosophy of science reconstructs scientists’ thinking processes in such a way that logical analysis can be performed on them, and it thus prepares the ground for the evaluation of these thoughts (Reichenbach 1938: § 1). Discovery, by contrast, is the object of empirical—psychological, sociological—study. According to Reichenbach, the empirical study of discoveries shows that processes of discovery often correspond to the principle of induction, but this is simply a psychological fact (Reichenbach 1938: 403).

While the terms “context of discovery” and “context of justification” are widely used, there has been ample discussion about how the distinction should be drawn and what its philosophical significance is (cf. Kordig 1978; Gutting 1980; Zahar 1983; Leplin 1987; Hoyningen-Huene 1987; Weber 2005: chapter 3; Schickore and Steinle 2006). Most commonly, the distinction is interpreted as a distinction between the process of conceiving a theory and the assessment of that theory, specifically the assessment of the theory’s epistemic support. This version of the distinction is not necessarily interpreted as a temporal distinction. In other words, it is not usually assumed that a theory is first fully developed and then assessed. Rather, generation and assessment are two different epistemic approaches to theory: the endeavor to articulate, flesh out, and develop its potential and the endeavor to assess its epistemic worth. Within the framework of the context distinction, there are two main ways of conceptualizing the process of conceiving a theory. The first option is to characterize the generation of new knowledge as an irrational act, a mysterious creative intuition, a “eureka moment”. The second option is to conceptualize the generation of new knowledge as an extended process that includes a creative act as well as some process of articulating and developing the creative idea.

Both of these accounts of knowledge generation served as starting points for arguments against the possibility of a philosophy of discovery. In line with the first option, philosophers have argued that neither is it possible to prescribe a logical method that produces new ideas nor is it possible to reconstruct logically the process of discovery. Only the process of testing is amenable to logical investigation. This objection to philosophies of discovery has been called the “discovery machine objection” (Curd 1980: 207). It is usually associated with Karl Popper’s Logic of Scientific Discovery .

The initial stage, the act of conceiving or inventing a theory, seems to me neither to call for logical analysis nor to be susceptible of it. The question how it happens that a new idea occurs to a man—whether it is a musical theme, a dramatic conflict, or a scientific theory—may be of great interest to empirical psychology; but it is irrelevant to the logical analysis of scientific knowledge. This latter is concerned not with questions of fact (Kant’s quid facti?), but only with questions of justification or validity (Kant’s quid juris?). Its questions are of the following kind. Can a statement be justified? And if so, how? Is it testable? Is it logically dependent on certain other statements? Or does it perhaps contradict them? […] Accordingly I shall distinguish sharply between the process of conceiving a new idea, and the methods and results of examining it logically. As to the task of the logic of knowledge—in contradistinction to the psychology of knowledge—I shall proceed on the assumption that it consists solely in investigating the methods employed in those systematic tests to which every new idea must be subjected if it is to be seriously entertained. (Popper 2002 [1934/1959]: 7–8)

With respect to the second way of conceptualizing knowledge generation, many philosophers argue in a similar fashion that because the process of discovery involves an irrational, intuitive process, which cannot be examined logically, a logic of discovery cannot be construed. Other philosophers turn against the philosophy of discovery even though they explicitly acknowledge that discovery is an extended, reasoned process. They present a meta-philosophical objection, arguing that a theory of articulating and developing ideas is not a philosophical but a psychological or sociological theory. In this perspective, “discovery” is understood as a retrospective label, which is attributed as a sign of accomplishment to some scientific endeavors. Sociological theories acknowledge that discovery is a collective achievement and the outcome of a process of negotiation through which “discovery stories” are constructed and certain knowledge claims are granted discovery status (Brannigan 1981; Schaffer 1986, 1994).

The impact of the context distinction on 20th-century studies of scientific discovery and on philosophy of science more generally can hardly be overestimated. The view that the process of discovery (however construed) is outside the scope of philosophy of science proper was widely shared amongst philosophers of science for most of the 20th century. The previous section showed that there were some attempts to develop logics of discovery in the 1920s and 1930s, especially in the pragmatist tradition. But for several decades, the context distinction dictated what philosophy of science should be about and how it should proceed. The dominant view was that theories of mental operations or heuristics had no place in philosophy of science and that, therefore, discovery was not a legitimate topic for philosophy of science. Until the last decades of the 20th century, there were few attempts to challenge the disciplinary distinction tied to the context distinction. Only during the 1970s did the interest in philosophical approaches to discovery begin to increase again. But the context distinction remained a challenge for philosophies of discovery.

There are several lines of response to the disciplinary distinction tied to the context distinction. Each of these lines of response opens a philosophical perspective on discovery. Each proceeds on the assumption that philosophy of science may legitimately include some form of analysis of actual reasoning patterns as well as information from empirical sciences such as cognitive science, psychology, and sociology. All of these responses reject the idea that discovery is nothing but a mystical event. Discovery is conceived as an analyzable reasoning process, not just as a creative leap by which novel ideas spring into being fully formed. All of these responses agree that the procedures and methods for arriving at new hypotheses and ideas are no guarantee that the hypothesis or idea that is thus formed is necessarily the best or the correct one. Nonetheless, it is the task of philosophy of science to provide rules for making this process better. All of these responses can be described as theories of problem solving, whose ultimate goal is to make the generation of new ideas and theories more efficient.

But the different approaches to scientific discovery employ different terminologies. In particular, the term “logic” of discovery is sometimes used in a narrow sense and sometimes broadly understood. In the narrow sense, “logic” of discovery is understood to refer to a set of formal, generally applicable rules by which novel ideas can be mechanically derived from existing data. In the broad sense, “logic” of discovery refers to the schematic representation of reasoning procedures. “Logical” is just another term for “rational”. Moreover, while each of these responses combines philosophical analyses of scientific discovery with empirical research on actual human cognition, different sets of resources are mobilized, ranging from AI research and cognitive science to historical studies of problem-solving procedures. Also, the responses parse the process of scientific inquiry differently. Often, scientific inquiry is regarded as having two aspects, viz. generation and assessment of new ideas. At times, however, scientific inquiry is regarded as having three aspects, namely generation, pursuit or articulation, and assessment of knowledge. In the latter framework, the label “discovery” is sometimes used to refer just to generation and sometimes to refer to both generation and pursuit.

One response to the challenge of the context distinction draws on a broad understanding of the term “logic” to argue that we cannot but admit a general, domain-neutral logic if we do not want to assume that the success of science is a miracle (Jantzen 2016) and that a logic of scientific discovery can be developed ( section 6 ). Another response, drawing on a narrow understanding of the term “logic”, is to concede that there is no logic of discovery, i.e., no algorithm for generating new knowledge, but that the process of discovery follows an identifiable, analyzable pattern ( section 7 ).

Others argue that discovery is governed by a methodology. The methodology of discovery is a legitimate topic for philosophical analysis ( section 8 ). Yet another response assumes that discovery is, or at least involves, a creative act. Drawing on resources from cognitive science, neuroscience, computational research, and environmental and social psychology, philosophers have sought to demystify the cognitive processes involved in the generation of new ideas. Philosophers who take this approach argue that scientific creativity is amenable to philosophical analysis ( section 9.1 ).

All these responses assume that there is more to discovery than a eureka moment. Discovery comprises processes of articulating, developing, and assessing the creative thought, as well as the scientific community’s adjudication of what does, and does not, count as “discovery” (Arabatzis 1996). These are the processes that can be examined with the tools of philosophical analysis, augmented by input from other fields of science studies such as sociology, history, or cognitive science.

6. Logics of discovery after the context distinction

One way of responding to the demarcation criterion described above is to argue that discovery is a topic for philosophy of science because it is a logical process after all. Advocates of this approach to the logic of discovery usually accept the overall distinction between the two processes of conceiving and testing a hypothesis. They also agree that it is impossible to put together a manual that provides a formal, mechanical procedure through which innovative concepts or hypotheses can be derived: There is no discovery machine. But they reject the view that the process of conceiving a theory is a creative act, a mysterious guess, a hunch, a more or less instantaneous and random process. Instead, they insist that both conceiving and testing hypotheses are processes of reasoning and systematic inference, that both of these processes can be represented schematically, and that it is possible to distinguish better and worse paths to new knowledge.

This line of argument has much in common with the logics of discovery described in section 4 above but it is now explicitly pitched against the disciplinary distinction tied to the context distinction. There are two main ways of developing this argument. The first is to conceive of discovery in terms of abductive reasoning ( section 6.1 ). The second is to conceive of discovery in terms of problem-solving algorithms, whereby heuristic rules aid the processing of available data and enhance the success in finding solutions to problems ( section 6.2 ). Both lines of argument rely on a broad conception of logic, whereby the “logic” of discovery amounts to a schematic account of the reasoning processes involved in knowledge generation.

One argument, elaborated prominently by Norwood R. Hanson, is that the act of discovery—here, the act of suggesting a new hypothesis—follows a distinctive logical pattern, which is different from both inductive logic and the logic of hypothetico-deductive reasoning. The special logic of discovery is the logic of abductive or “retroductive” inferences (Hanson 1958). The argument that plausible, promising scientific hypotheses are devised through acts of abductive inference goes back to C.S. Peirce. This version of the logic of discovery characterizes reasoning processes that take place before a new hypothesis is ultimately justified. The abductive mode of reasoning that leads to plausible hypotheses is conceptualized as an inference beginning with data or, more specifically, with surprising or anomalous phenomena.

In this view, discovery is primarily a process of explaining anomalies or surprising, astonishing phenomena. The scientists’ reasoning proceeds abductively from an anomaly to an explanatory hypothesis in light of which the phenomena would no longer be surprising or anomalous. The outcome of this reasoning process is not one single specific hypothesis but the delineation of a type of hypothesis that is worthy of further attention (Hanson 1965: 64). According to Hanson, the abductive argument has the following schematic form (Hanson 1960: 104):

  • Some surprising, astonishing phenomena p₁, p₂, p₃ … are encountered.
  • But p₁, p₂, p₃ … would not be surprising were an hypothesis of H’s type to obtain. They would follow as a matter of course from something like H and would be explained by it.
  • Therefore there is good reason for elaborating an hypothesis of type H—for proposing it as a possible hypothesis from whose assumption p₁, p₂, p₃ … might be explained.

Drawing on the historical record, Hanson argues that several important discoveries were made relying on abductive reasoning, such as Kepler’s discovery of the elliptic orbit of Mars (Hanson 1958). It is now widely agreed, however, that Hanson’s reconstruction of the episode is not a historically adequate account of Kepler’s discovery (Lugg 1985). More importantly, while there is general agreement that abductive inferences are frequent in both everyday and scientific reasoning, these inferences are no longer considered as logical inferences. Even if one accepts Hanson’s schematic representation of the process of identifying plausible hypotheses, this process is a “logical” process only in the widest sense whereby the term “logical” is understood as synonymous with “rational”. Notably, some philosophers have even questioned the rationality of abductive inferences (Koehler 1991; Brem and Rips 2000).

Another argument against the above schema is that it is too permissive. There will typically be several hypotheses that explain phenomena p₁, p₂, p₃ …, so the fact that a particular hypothesis explains the phenomena is not a decisive criterion for developing that hypothesis (Harman 1965; see also Blackwell 1969). Additional criteria are required to evaluate the hypothesis yielded by abductive inferences.

Finally, it is worth noting that the schema of abductive reasoning does not explain the very act of conceiving a hypothesis or hypothesis-type. The processes by which a new idea is first articulated remain unanalyzed in the above schema. The schema focuses on the reasoning processes by which an exploratory hypothesis is assessed in terms of its merits and promise (Laudan 1980; Schaffner 1993).

In more recent work on abduction and discovery, two notions of abduction are sometimes distinguished: the common notion of abduction as inference to the best explanation (selective abduction) and creative abduction (Magnani 2000, 2009). Selective abduction—the inference to the best explanation—involves selecting a hypothesis from a set of known hypotheses. Medical diagnosis exemplifies this kind of abduction. Creative abduction, by contrast, involves generating a new, plausible hypothesis. This happens, for instance, in medical research, when the notion of a new disease is articulated. However, it is still an open question whether this distinction can be sharply drawn, or whether there is instead a gradual transition from selecting an explanatory hypothesis from a familiar domain (selective abduction), through selecting a hypothesis that is slightly modified from the familiar set, to identifying a more drastically modified or altered assumption.
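The selective variant lends itself to a toy formalization. In the sketch below, the hypothesis names and symptom sets are entirely invented; the point is only that selective abduction chooses among hypotheses that are already on the table:

```python
# Toy sketch of selective abduction as inference to the best explanation:
# pick, from a fixed set of already-known hypotheses, the one that would
# explain the most of the observed phenomena. All hypothesis names and
# symptom sets here are invented for illustration.
known_hypotheses = {
    "flu":     {"fever", "cough", "fatigue"},
    "cold":    {"cough", "sneezing"},
    "allergy": {"sneezing", "itchy eyes"},
}

def select_best(observations):
    # Score each hypothesis by how many observations it would explain.
    return max(known_hypotheses,
               key=lambda h: len(known_hypotheses[h] & observations))

print(select_best({"fever", "cough"}))  # → flu
```

Note that the sketch inherits the permissiveness worry raised against Hanson's schema: when several hypotheses explain the observations equally well, the selection is arbitrary without further criteria.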

Another recent suggestion, due to Paul Thagard, is to broaden Peirce’s original account of abduction to include not only verbal information but also non-verbal mental representations, such as visual, auditory, or motor representations. In Thagard’s approach, representations are characterized as patterns of activity in neural populations (see also section 9.3 below). The advantage of this neural account of human reasoning is that it covers features such as the surprise that accompanies the generation of new insights or the visual and auditory representations that contribute to it. Surprise, for instance, could be characterized as resulting from rapid changes in activation of the node in a neural network representing the “surprising” element (Thagard and Stewart 2011). If all mental representations can be characterized as patterns of firing in neural populations, abduction can be analyzed as the combination or “convolution” (Thagard) of patterns of neural activity from disjoint or overlapping patterns of activity (Thagard 2010).

The concern with the logic of discovery has also motivated research on artificial intelligence at the intersection of philosophy of science and cognitive science. In this approach, scientific discovery is treated as a form of problem-solving activity (Simon 1973; see also Newell and Simon 1971), whereby the systematic aspects of problem solving are studied within an information-processing framework. The aim is to clarify, with the help of computational tools, the nature of the methods used to discover scientific hypotheses. These hypotheses are regarded as solutions to problems. Philosophers working in this tradition build computer programs employing methods of heuristic selective search (e.g., Langley et al. 1987). In computational heuristics, search programs can be described as searches for solutions in a so-called “problem space” in a certain domain. The problem space comprises all possible configurations in that domain (e.g., for chess problems, all possible arrangements of pieces on a chessboard). Each configuration is a “state” of the problem space. There are two special states, namely the goal state, i.e., the state to be reached, and the initial state, i.e., the configuration at the starting point from which the search begins. There are operators, which determine the moves that generate new states from the current state. There are path constraints, which limit the permitted moves. Problem solving is the process of searching for a solution to the problem of how to generate the goal state from an initial state. In principle, all states can be generated by applying the operators to the initial state, then to the resulting state, until the goal state is reached (Langley et al. 1987: chapter 9). A problem solution is a sequence of operations leading from the initial to the goal state.
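The problem-space vocabulary just introduced can be made concrete in a few lines of code. The sketch below is a minimal illustration, assuming an invented toy domain (integer states, two arithmetic operators, a simple path constraint); it corresponds to no particular program from the discovery literature:

```python
from collections import deque

# A toy problem space, purely illustrative: states are integers, the
# operators are "+3" and "*2", and a path constraint keeps states small.
initial_state, goal_state = 1, 10
operators = [lambda s: s + 3, lambda s: s * 2]

def solve(initial, goal, ops, constraint=lambda s: 0 <= s <= 50):
    """Exhaustively generate states by applying operators (breadth-first)
    until the goal state is reached; return the solution path."""
    frontier = deque([(initial, [initial])])
    seen = {initial}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path  # a sequence of states from initial to goal
        for op in ops:
            nxt = op(state)
            if constraint(nxt) and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None  # goal state unreachable under the path constraints

print(solve(initial_state, goal_state, operators))  # → [1, 4, 7, 10]
```

Because the expansion here is exhaustive (breadth-first), this is precisely the kind of blind search that heuristic rules are meant to improve upon.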

The basic idea behind computational heuristics is that rules can be identified that serve as guidelines for finding a solution to a given problem quickly and efficiently by avoiding undesired states of the problem space. These rules are best described as rules of thumb. The aim of constructing a logic of discovery thus becomes the aim of constructing a heuristics for the efficient search for solutions to problems. The term “heuristic search” indicates that in contrast to algorithms, problem-solving procedures lead to results that are merely provisional and plausible. A solution is not guaranteed, but heuristic searches are advantageous because they are more efficient than exhaustive random trial and error searches. Insofar as it is possible to evaluate whether one set of heuristics is better—more efficacious—than another, the logic of discovery turns into a normative theory of discovery.

Arguably, because it is possible to reconstruct important scientific discovery processes with sets of computational heuristics, the scientific discovery process can be considered as a special case of the general mechanism of information processing. In this context, the term “logic” is not used in the narrow sense of a set of formal, generally applicable rules to draw inferences but again in a broad sense as a label for a set of procedural rules.

The computer programs that embody the principles of heuristic searches in scientific inquiry simulate the paths that scientists followed when they searched for new theoretical hypotheses. Computer programs such as BACON (Simon et al. 1981) and KEKADA (Kulkarni and Simon 1988) utilize sets of problem-solving heuristics to detect regularities in given data sets. The program would note, for instance, that the values of a dependent term are constant or that a set of values for a term x and a set of values for a term y are linearly related. It would thus “infer” that the dependent term always has that value or that a linear relation exists between x and y. These programs can “make discoveries” in the sense that they can simulate successful discoveries such as Kepler’s third law (BACON) or the Krebs cycle (KEKADA).
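For illustration, the kind of data-driven regularity detection just described can be sketched as follows. This is a hypothetical reconstruction of the style of heuristic attributed to BACON, not the actual program, and the tolerance parameter is an invented simplification:

```python
# An illustrative reconstruction of BACON-style regularity detection:
# given paired measurements, the heuristic checks whether y is constant,
# or linearly related to x, and "infers" the corresponding law.
def detect_regularity(xs, ys, tol=1e-9):
    if max(ys) - min(ys) <= tol:
        return f"y is constant at {ys[0]}"
    # Check linearity: successive pairwise slopes should all agree.
    slopes = [(y2 - y1) / (x2 - x1)
              for (x1, y1), (x2, y2) in zip(zip(xs, ys), zip(xs[1:], ys[1:]))]
    if max(slopes) - min(slopes) <= tol:
        a = slopes[0]
        b = ys[0] - a * xs[0]
        return f"linear law: y = {a}*x + {b}"
    return "no regularity found by these heuristics"

print(detect_regularity([1, 2, 3, 4], [3, 5, 7, 9]))  # → linear law: y = 2.0*x + 1.0
```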

Computational theories of scientific discovery have helped identify and clarify a number of problem-solving strategies. An example of such a strategy is heuristic means-ends analysis, which involves identifying specific differences between the present situation and the goal situation and searching for operators (processes that will change the situation) that are associated with the differences detected. Another important heuristic is to divide the problem into sub-problems and to begin solving the one with the smallest number of unknowns to be determined (Simon 1977). Computational approaches have also highlighted the extent to which the generation of new knowledge draws on existing knowledge that constrains the development of new hypotheses.
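Means-ends analysis can likewise be illustrated with a deliberately simple numeric toy (our construction, not Simon's own formulation), in which the "situation" is a number and each kind of difference is associated with its own operator:

```python
# A minimal numeric illustration of means-ends analysis: repeatedly
# measure the difference between the current and the goal situation
# and apply the operator associated with that kind of difference.
def means_ends(current, goal):
    trace = [current]
    while current != goal:
        diff = goal - current
        if abs(diff) >= 10:              # large difference: big-step operator
            current += 10 if diff > 0 else -10
        else:                            # small difference: unit-step operator
            current += 1 if diff > 0 else -1
        trace.append(current)
    return trace

print(means_ends(3, 25))  # → [3, 13, 23, 24, 25]
```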

As accounts of scientific discovery, the early computational heuristics have some limitations. Compared to the well-defined problem spaces of computational heuristics, the problem spaces of scientific problems are often ill defined, and the relevant search space and goal state must be delineated before heuristic assumptions can be formulated (Bechtel and Richardson 1993: chapter 1). Moreover, because these programs require data from actual experiments as input, the simulations cover only certain aspects of scientific discovery; in particular, a program cannot determine by itself which data are relevant, which data to relate, and what form of law it should look for (Gillies 1996). However, as a consequence of the rise of so-called “deep learning” methods in data-intensive science, there is renewed philosophical interest in the question of whether machines can make discoveries ( section 10 ).

7. Anomalies and the structure of discovery

Many philosophers maintain that discovery is a legitimate topic for philosophy of science while abandoning the notion that there is a logic of discovery. One very influential approach is Thomas Kuhn’s analysis of the emergence of novel facts and theories (Kuhn 1970 [1962]: chapter 6). Kuhn identifies a general pattern of discovery as part of his account of scientific change. A discovery is not a simple act, but an extended, complex process, which culminates in paradigm changes. Paradigms are the symbolic generalizations, metaphysical commitments, values, and exemplars that are shared by a community of scientists and that guide the research of that community. Paradigm-based, normal science does not aim at novelty but instead at the development, extension, and articulation of accepted paradigms. A discovery begins with an anomaly, that is, with the recognition that the expectations induced by an established paradigm are being violated. The process of discovery involves several aspects: observations of an anomalous phenomenon, attempts to conceptualize it, and changes in the paradigm so that the anomaly can be accommodated.

It is the mark of success of normal science that it does not make transformative discoveries, and yet such discoveries come about as a consequence of normal, paradigm-guided science. The more detailed and the better developed a paradigm, the more precise are its predictions. The more precisely the researchers know what to expect, the better they are able to recognize anomalous results and violations of expectations:

novelty ordinarily emerges only for the man who, knowing with precision what he should expect, is able to recognize that something has gone wrong. Anomaly appears only against the background provided by the paradigm. (Kuhn 1970 [1962]: 65)

Drawing on several historical examples, Kuhn argues that it is usually impossible to identify the very moment when something was discovered, or even the individual who made the discovery. Kuhn illustrates these points with the discovery of oxygen (see Kuhn 1970 [1962]: 53–56). Oxygen had not been discovered before 1774 and had been discovered by 1777. Even before 1774, Lavoisier had noticed that something was wrong with phlogiston theory, but he was unable to move forward. Two other investigators, C. W. Scheele and Joseph Priestley, independently identified a gas obtained from heating solid substances. But Scheele’s work remained unpublished until after 1777, and Priestley did not identify his substance as a new sort of gas. In 1777, Lavoisier presented the oxygen theory of combustion, which gave rise to a fundamental reconceptualization of chemistry. But according to this theory as Lavoisier first presented it, oxygen was not a chemical element. It was an atomic “principle of acidity”, and oxygen gas was a combination of that principle with caloric. According to Kuhn, all of these developments are part of the discovery of oxygen, but none of them can be singled out as “the” act of discovery.

In pre-paradigmatic periods or in times of paradigm crisis, theory-induced discoveries may happen. In these periods, scientists speculate and develop tentative theories, which may lead to novel expectations, and to experiments and observations that test whether these expectations can be confirmed. Even though no precise predictions can be made, the phenomena that are thus uncovered are often not quite what had been expected. In these situations, the simultaneous exploration of the new phenomena and articulation of the tentative hypotheses together bring about discovery.

In cases like the discovery of oxygen, by contrast, which took place while a paradigm was already in place, the unexpected becomes apparent only slowly, with difficulty, and against some resistance. Only gradually do the anomalies become visible as such. It takes time for the investigators to recognize “both that something is and what it is” (Kuhn 1970 [1962]: 55). Eventually, a new paradigm becomes established and the anomalous phenomena become the expected phenomena.

Recent studies in cognitive neuroscience of brain activity during periods of conceptual change support Kuhn’s view that conceptual change is hard to achieve. These studies examine the neural processes that are involved in the recognition of anomalies and compare them with the brain activity involved in the processing of information that is consistent with preferred theories. The studies suggest that the two types of data are processed differently (Dunbar et al. 2007).

8. Methodologies of discovery

Advocates of the view that there are methodologies of discovery use the term “logic” in the narrow sense of an algorithmic procedure to generate new ideas. But like the AI-based theories of scientific discovery described in section 6 , methodologies of scientific discovery interpret the concept “discovery” as a label for an extended process of generating and articulating new ideas and often describe the process in terms of problem solving. In these approaches, the distinction between the context of discovery and the context of justification is challenged because the methodology of discovery is understood to play a justificatory role. Advocates of a methodology of discovery usually rely on a distinction between different justification procedures: justification involved in the process of generating new knowledge and justification involved in testing it. Consequential or “strong” justifications are methods of testing. The justification involved in discovery, by contrast, is conceived as generative (as opposed to consequential) justification ( section 8.1 ) or as weak (as opposed to strong) justification ( section 8.2 ). Again, some terminological ambiguity exists because according to some philosophers, there are three contexts, not two: Only the initial conception of a new idea (the creative act) is the context of discovery proper, and between it and justification there exists a separate context of pursuit (Laudan 1980). But many advocates of methodologies of discovery regard the context of pursuit as an integral part of the process of justification. They retain the notion of two contexts and re-draw the boundaries between the contexts of discovery and justification as they were drawn in the early 20th century.

The methodology of discovery has sometimes been characterized as a form of justification that is complementary to the methodology of testing (Nickles 1984, 1985, 1989). According to the methodology of testing, empirical support for a theory results from successfully testing the predictive consequences derived from that theory (and appropriate auxiliary assumptions). In light of this methodology, justification for a theory is “consequential justification”: the notion that a hypothesis is confirmed if the novel predictions derived from it prove successful. Generative justification complements consequential justification. Advocates of generative justification hold that there exists an important form of justification in science that involves reasoning to a claim from data, or from previously established results more generally.

One classic example of a generative methodology is Newton’s set of rules for the study of natural philosophy. According to these rules, general propositions are established by deducing them from the phenomena. The notion of generative justification seeks to preserve the intuition behind this classic conception of justification by deduction. Generative justification amounts to the rational reconstruction of the discovery path in order to establish the claim’s discoverability, regardless of how it was in fact first thought of (Nickles 1985, 1989). The reconstruction demonstrates in hindsight that the claim could have been discovered in this manner had the necessary information and techniques been available. In other words, generative justification—justification as “discoverability” or “potential discovery”—justifies a knowledge claim by deriving it from results that are already established. While generative justification does not retrace exactly those steps of the discovery path that were actually taken, it is a better representation of scientists’ actual practices than consequential justification because scientists tend to construct new claims from available knowledge. Generative justification is a weaker version of the traditional ideal of justification by deduction from the phenomena. Justification by deduction from the phenomena is complete if a theory or claim is completely determined by what we already know. The demonstration of discoverability results from the successful derivation of a claim or theory from the most basic and most solidly established empirical information.

Discoverability as described in the previous paragraphs is a mode of justification. Like the testing of novel predictions derived from a hypothesis, generative justification begins when the phase of finding and articulating a hypothesis worthy of assessing is drawing to a close. Other approaches to the methodology of discovery are directly concerned with the procedures involved in devising new hypotheses. The argument in favor of this kind of methodology is that the procedures of devising new hypotheses already include elements of appraisal. These preliminary assessments have been termed “weak” evaluation procedures (Schaffner 1993). Weak evaluations are relevant during the process of devising a new hypothesis. They provide reasons for accepting a hypothesis as promising and worthy of further attention. Strong evaluations, by contrast, provide reasons for accepting a hypothesis as (approximately) true or confirmed. Both “generative” and “consequential” testing as discussed above are strong evaluation procedures. Strong evaluation procedures are rigorous and systematically organized according to the principles of hypothesis derivation or hypothetico-deductive (H-D) testing. A methodology of preliminary appraisal, by contrast, articulates criteria for the evaluation of a hypothesis prior to rigorous derivation or testing. It aids the decision about whether to take that hypothesis seriously enough to develop it further and test it. For advocates of this version of the methodology of discovery, it is the task of philosophy of science to characterize sets of constraints and methodological rules guiding the complex process of prior-to-test evaluation of hypotheses.

In contrast to the computational approaches discussed above, strategies of preliminary appraisal are not regarded as subject-neutral but as specific to particular fields of study. Philosophers of biology, for instance, have developed a fine-grained framework to account for the generation and preliminary evaluation of biological mechanisms (Darden 2002; Craver 2002; Bechtel and Richardson 1993; Craver and Darden 2013). Some philosophers have suggested that the phase of preliminary appraisal be further divided into two phases, the phase of appraising and the phase of revising. According to Lindley Darden, the phases of generation, appraisal, and revision of descriptions of mechanisms can be characterized as reasoning processes governed by reasoning strategies, with different strategies governing the different phases (Darden 1991, 2002, 2009; Craver 2002). The generation of hypotheses about mechanisms, for instance, is governed by the strategy of “schema instantiation” (see Darden 2002). The discovery of the mechanism of protein synthesis involved the instantiation of an abstract schema for chemical reactions: reactant 1 + reactant 2 = product. The actual mechanism of protein synthesis was found through specification and modification of this schema.
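Schema instantiation can be pictured as filling the placeholders of an abstract schema. The sketch below is a deliberately simple illustration; the placeholder names and the filled-in instance are our choices, echoing one step of protein synthesis (aminoacyl-tRNA formation):

```python
# Schema instantiation, sketched: an abstract schema with placeholders
# is specialized into a concrete mechanism description. The placeholder
# names and the filled-in instance are illustrative choices only.
schema = "{reactant1} + {reactant2} = {product}"
instance = schema.format(reactant1="amino acid",
                         reactant2="tRNA",
                         product="aminoacyl-tRNA")
print(instance)  # → amino acid + tRNA = aminoacyl-tRNA
```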

None of these strategies is deemed necessary for discovery, and they are not prescriptions for biological research. Rather, these strategies are deemed sufficient for the discovery of mechanisms. The methodology of the discovery of mechanisms is an extrapolation from past episodes of research on mechanisms and the result of a synthesis of rational reconstructions of several of these historical episodes. The methodology of discovery is weakly normative in the sense that the strategies for the discovery of mechanisms that were successful in the past may prove useful in future biological research (Darden 2002).

As philosophers of science have again become more attuned to actual scientific practices, interest in heuristic strategies has also been revived. Many analysts now agree that discovery processes can be regarded as problem-solving activities, whereby a discovery is a solution to a problem. Heuristics-based methodologies of discovery are neither purely subjective and intuitive nor algorithmic and formalizable; the point is that reasons can be given for pursuing one or the other problem-solving strategy. These rules are open-ended and do not guarantee a solution to a problem when applied (Ippoliti 2018). On this view, scientific researchers are no longer seen as Kuhnian “puzzle solvers” but as problem solvers and decision makers in complex, variable, and changing environments (Wimsatt 2007).

Philosophers of discovery working in this tradition draw on a growing body of literature in cognitive psychology, management science, operations research, and economics on human reasoning and decision making in contexts with limited information, under time constraints, and with sub-optimal means (Gigerenzer & Sturm 2012). Heuristic strategies characterized in these studies, such as Gigerenzer’s “tools-to-theories” heuristic, are then applied to understand scientific knowledge generation (Gigerenzer 1992; Nickles 2018). Other analysts specify heuristic strategies in a range of scientific fields, including climate science, neurobiology, and clinical medicine (Gramelsberger 2011; Schaffner 2008; Gillies 2018). Finally, in analytic epistemology, formal methods are developed to identify and assess distinct heuristic strategies currently in use, such as Bayesian reverse engineering in cognitive science (Zednik and Jäkel 2016).

As the literature on heuristics continues to grow, it has become clear that the term “heuristics” is itself used in a variety of different ways. (For a valuable taxonomy of meanings of “heuristic”, see Chow 2015; see also Ippoliti 2018.) Moreover, as in the context of earlier debates about computational heuristics, debates continue about the limitations of heuristics. The use of heuristics may come at a cost if heuristics introduce systematic biases (Wimsatt 2007). Some philosophers thus call for general principles for the evaluation of heuristic strategies (Hey 2016).

9. Cognitive perspectives on discovery

The approaches to scientific discovery presented in the previous sections focus on the adoption, articulation, and preliminary evaluation of ideas or hypotheses prior to rigorous testing, not on how a novel hypothesis or idea is first thought up. For a long time, the predominant view among philosophers of discovery was that the initial step of discovery is a mysterious intuitive leap of the human mind that cannot be analyzed further. More recent accounts of discovery informed by evolutionary biology also do not explicate how new ideas are formed. The generation of new ideas is akin to random, blind variations of thought processes, which have to be inspected by the critical mind and assessed as neutral, productive, or useless (Campbell 1960; see also Hull 1988), but the key processes by which new ideas are generated are left unanalyzed.

With the recent rapprochement among philosophy of mind, cognitive science, and psychology, and the increased integration of empirical research into philosophy of science, these processes have been submitted to closer analysis, and philosophical studies of creativity have seen a surge of interest (e.g. Paul & Kaufman 2014a). The distinctive feature of these studies is that they integrate philosophical analyses with empirical work from cognitive science, psychology, evolutionary biology, and computational neuroscience (Thagard 2012). Analysts have distinguished different kinds and different features of creative thinking and have examined certain features in depth and from new angles. Recent philosophical research on creativity comprises conceptual analyses and integrated approaches based on the assumption that creativity can be analyzed and that empirical research can contribute to the analysis (Paul & Kaufman 2014b). Two key elements of the cognitive processes involved in creative thinking that have been the focus of philosophical analysis are analogies ( section 9.2 ) and mental models ( section 9.3 ).

General definitions of creativity highlight novelty or originality and significance or value as distinctive features of a creative act or product (Sternberg & Lubart 1999; Kieran 2014; Paul & Kaufman 2014b; although see Hills & Bird 2019). Different kinds of creativity can be distinguished depending on whether the act or product is novel for a particular individual or entirely novel. Psychologist Margaret Boden distinguishes between psychological creativity (P-creativity) and historical creativity (H-creativity). P-creativity is a development that is new, surprising, and important to the particular person who comes up with it. H-creativity, by contrast, is radically novel, surprising, and important—it is generated for the first time (Boden 2004). Further distinctions have been proposed, such as anthropological creativity (construed as a human condition) and metaphysical creativity, a radically new thought or action in the sense that it is unaccounted for by antecedents and available knowledge, and thus constitutes a radical break with the past (Kronfeldner 2009, drawing on Hausman 1984).

Psychological studies analyze the personality traits and behavioral dispositions of creative individuals that are conducive to creative thinking. They suggest that creative scientists share certain distinctive personality traits, including confidence, openness, dominance, independence, introversion, as well as arrogance and hostility (for overviews of recent studies on the personality traits of creative scientists, see Feist 1999, 2006: chapter 5).

Recent work on creativity in philosophy of mind and cognitive science offers substantive analyses of the cognitive and neural mechanisms involved in creative thinking (Abraham 2019, Minai et al. 2022) and critical scrutiny of the romantic idea of genius creativity as something deeply mysterious (Blackburn 2014). Some of this research aims to characterize features that are common to all creative processes, such as Thagard and Stewart’s account according to which creativity results from combinations of representations (Thagard & Stewart 2011, but see Pasquale and Poirier 2016). Other research aims to identify the features that distinguish scientific creativity from other forms of creativity, such as artistic creativity or creative technological invention (Simonton 2014).

Many philosophers of science highlight the role of analogy in the development of new knowledge, whereby analogy is understood as a process of bringing ideas that are well understood in one domain to bear on a new domain (Thagard 1984; Holyoak and Thagard 1996). An important source for philosophical thought about analogy is Mary Hesse’s conception of models and analogies in theory construction and development. In this approach, analogies are similarities between different domains. Hesse introduces the distinction between positive, negative, and neutral analogies (Hesse 1966: 8). If we consider the relation between gas molecules and a model for gas, namely a collection of billiard balls in random motion, we will find properties that are common to both domains (positive analogy) as well as properties that can only be ascribed to the model but not to the target domain (negative analogy). There is a positive analogy between gas molecules and a collection of billiard balls because both the balls and the molecules move randomly. There is a negative analogy between the domains because billiard balls are colored, hard, and shiny but gas molecules do not have these properties. The most interesting properties are those properties of the model about which we do not know whether they are positive or negative analogies. This set of properties is the neutral analogy. These properties are the significant properties because they might lead to new insights about the less familiar domain. From our knowledge about the familiar billiard balls, we may be able to derive new predictions about the behavior of gas molecules, which we could then test.
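Hesse’s three-way partition lends itself to a simple set-theoretic illustration. The sketch below is an invented toy (the property lists and the `classify_analogy` helper are not drawn from Hesse’s text); it merely shows how the neutral analogy falls out as the remainder once the known positive and negative analogies are set aside:

```python
# A toy set-theoretic illustration of Hesse's positive, negative, and
# neutral analogies. The property lists and the helper function are
# invented for this example; they are not drawn from Hesse's text.

def classify_analogy(model_props, target_known_true, target_known_false):
    """Partition the model's properties relative to the target domain."""
    positive = model_props & target_known_true    # shared with the target
    negative = model_props & target_known_false   # known to fail for the target
    neutral = model_props - positive - negative   # status unknown: worth testing
    return positive, negative, neutral

# The familiar model: billiard balls in random motion.
billiard_balls = {"moves randomly", "collides elastically",
                  "is colored", "is hard", "is shiny"}
# What we take ourselves to know about gas molecules.
known_true_of_gas = {"moves randomly"}
known_false_of_gas = {"is colored", "is shiny"}

pos, neg, neut = classify_analogy(billiard_balls, known_true_of_gas,
                                  known_false_of_gas)
# neut now holds the neutral analogy: candidate properties to project
# onto gas molecules and then test, e.g. elastic collisions.
```

On this toy partition, the neutral analogy is exactly the set from which new, testable predictions about the less familiar domain would be drawn.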

Hesse offers a more detailed analysis of the structure of analogical reasoning through the distinction between horizontal and vertical analogies between domains. Horizontal analogies between two domains concern the sameness or similarity between properties of both domains. If we consider sound and light waves, there are similarities between them: sound echoes, light reflects; sound is loud, light is bright; and both sound and light are detectable by our senses. There are also relations among the properties within one domain, such as the causal relation between sound and the loud tone we hear and, analogously, between physical light and the bright light we see. These relations are vertical analogies. For Hesse, vertical analogies hold the key to the construction of new theories.

Analogies play several roles in science. Not only do they contribute to discovery but they also play a role in the development and evaluation of scientific theories. Current discussions about analogy and discovery have expanded and refined Hesse’s approach in various ways. Some philosophers have developed criteria for evaluating analogy arguments (Bartha 2010). Other work has identified highly significant analogies that were particularly fruitful for the advancement of science (Holyoak and Thagard 1996: 186–188; Thagard 1999: chapter 9). The majority of analysts explore the features of the cognitive mechanisms through which aspects of a familiar domain or source are applied to an unknown target domain in order to understand what is unknown. According to the influential multi-constraint theory of analogical reasoning developed by Holyoak and Thagard, the transfer processes involved in analogical reasoning (scientific and otherwise) are guided or constrained in three main ways: (1) by the direct similarity between the elements involved; (2) by the structural parallels between source and target domain; and (3) by the purposes of the investigators, i.e., the reasons why the analogy is considered. Discovery, the formulation of a new hypothesis, is one such purpose.
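The three constraints can be conveyed with a deliberately simple scoring sketch. This is an invented toy, not Holyoak and Thagard’s actual constraint-satisfaction implementation; the weighted sum, the candidate sources, and all numerical values are assumptions made for the example:

```python
# A toy scoring sketch of the three constraints in the multi-constraint
# theory of analogical reasoning: similarity, structure, and purpose.
# The weighted sum and all numbers are invented for illustration; this
# is not Holyoak and Thagard's actual implementation.

def mapping_score(similarity, structural_parallel, serves_purpose,
                  weights=(1.0, 1.0, 1.0)):
    """Combine the three constraints (each in [0, 1]) into one score."""
    w_sim, w_str, w_pur = weights
    return (w_sim * similarity
            + w_str * structural_parallel
            + w_pur * serves_purpose)

# Two hypothetical source domains for understanding sound:
candidates = {
    "water waves": mapping_score(similarity=0.4, structural_parallel=0.9,
                                 serves_purpose=0.8),
    "billiard balls": mapping_score(similarity=0.6, structural_parallel=0.3,
                                    serves_purpose=0.5),
}
best_source = max(candidates, key=candidates.get)
```

Even this crude sketch captures the point of the theory: a structurally apt source can win out over a superficially more similar one once the investigator’s purpose is factored in.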

“In vivo” investigations of scientists reasoning in their laboratories have not only shown that analogical reasoning is a key component of scientific practice, but also that the distance between source and target depends on the purpose for which analogies are sought. Scientists trying to fix experimental problems draw analogies between targets and sources from highly similar domains. In contrast, scientists attempting to formulate new models or concepts draw analogies between less similar domains. Analogies between radically different domains, however, are rare (Dunbar 1997, 2001).

In current cognitive science, human cognition is often explored in terms of model-based reasoning. The starting point of this approach is the notion that much of human reasoning, including probabilistic and causal reasoning as well as problem solving, takes place through mental modeling rather than through the application of logic or methodological criteria to a set of propositions (Johnson-Laird 1983; Magnani et al. 1999; Magnani and Nersessian 2002). In model-based reasoning, the mind constructs a structural representation of a real-world or imaginary situation and manipulates this structure. In this perspective, conceptual structures are viewed as models, and conceptual innovation as the construction of new models through various modeling operations. Analogical reasoning (analogical modeling) is regarded as one of three main forms of model-based reasoning that appear to be relevant for conceptual innovation in science. Besides analogical modeling, visual modeling and simulative modeling, or thought experiments, also play key roles (Nersessian 1992, 1999, 2009). These modeling practices are constructive in that they aid the development of novel mental models. The key elements of model-based reasoning are the recruitment of knowledge of generative principles and constraints for physical models in a source domain and the use of various forms of abstraction. Conceptual innovation results from the creation of new concepts through processes that abstract and integrate source and target domains into new models (Nersessian 2009).

Some critics have argued that despite the large amount of work on the topic, the notion of mental model is not sufficiently clear. Thagard seeks to clarify the concept by characterizing mental models in terms of neural processes (Thagard 2010). In his approach, mental models are produced through complex patterns of neural firing, whereby the neurons and the interconnections between them are dynamic and changing. A pattern of firing neurons is a representation when there is a stable causal correlation between the pattern of activation and the thing that is represented. In this research, questions about the nature of model-based reasoning are transformed into questions about the brain mechanisms that produce mental representations.

The above sections again show that the study of scientific discovery integrates different approaches, combining conceptual analysis of processes of knowledge generation with empirical work on creativity, drawing heavily and explicitly on current research in psychology and cognitive science, and on in vivo laboratory observations, as well as brain imaging techniques (Kounios & Beeman 2009, Thagard & Stewart 2011).

Earlier critics of AI-based theories of scientific discovery argued that a computer cannot devise new concepts but is confined to the concepts included in the given computer language (Hempel 1985: 119–120); nor can it design new experiments, instruments, or methods. Subsequent computational research on scientific discovery was driven by the motivation to contribute computational tools that aid scientists in their research (Addis et al. 2016). It appears that computational methods can be used to generate new results leading to refereed scientific publications in astrophysics, cancer research, ecology, and other fields (Langley 2000). However, the philosophical discussion has continued over whether these methods really generate new knowledge or merely speed up data processing. It is also still an open question whether data-intensive science is fundamentally different from traditional research, for instance regarding the status of hypotheses and theories in data-intensive research (Pietsch 2015).

In the wake of recent developments in machine learning, some older discussions about automated discovery have been revived. The availability of vastly improved computational tools and software for data analysis has stimulated new discussions about computer-generated discovery (see Leonelli 2020). It is largely uncontroversial that machine learning tools can aid discovery, for instance in research on antibiotics (Stokes et al. 2020). The notion of a “robot scientist” is mostly used metaphorically, and the vision that human scientists may one day be replaced by computers (by successors of the laboratory automation systems “Adam” and “Eve”, allegedly the first “robot scientists”) is evoked mainly in writings for broader audiences (see King et al. 2009 and Williams et al. 2015 for popularized descriptions of these systems), although some interesting ethical challenges do arise from “superhuman AI” (see Russell 2021). It also appears that, on the definition of creativity as the production of novel and valuable items, AI systems would qualify as “creative”, an implication that not all analysts find plausible (Boden 2014).

Philosophical analyses focus on various questions arising from the processes involving human-machine complexes. One issue relevant to the problem of scientific discovery arises from the opacity of machine learning. If machine learning indeed escapes human understanding, how can we be warranted in saying that knowledge or understanding is generated by deep learning tools? Might we have reason to say that humans and machines are “co-developers” of knowledge (Tamaddoni-Nezhad et al. 2021)?

New perspectives on scientific discovery have also opened up in the context of social epistemology (see Goldman & O’Connor 2021). Social epistemology investigates knowledge production as a group process, specifically the epistemic effects of group composition (in terms of cognitive diversity and unity) and of social interactions within groups or institutions, such as testimony and trust, peer disagreement and critique, and group justification, among others. On this view, discovery is a collective achievement, and the task is to explore how assorted social-epistemic activities or practices affect the knowledge generated by the groups in question. Recent research in the different branches of social epistemology has obvious implications for debates about scientific discovery. Social epistemologists have examined individual cognitive agents in their roles as group members (as providers of information or as critics) and the interactions among these members (Longino 2001), groups as aggregates of diverse agents, and the entire group as epistemic agent (e.g., Koons 2021, Dragos 2019).

Standpoint theory, for instance, explores the role of outsiders in knowledge generation, considering how the sociocultural structures and practices in which individuals are embedded aid or obstruct the generation of creative ideas. According to standpoint theorists, people with standpoint are politically aware and politically engaged, and stand outside the mainstream. Because people with standpoint have different experiences and access to different domains of expertise than most members of a culture, they can draw on rich conceptual resources for creative thinking (Solomon 2007).

Social epistemologists examining groups as aggregates of agents consider to what extent diversity among group members is conducive to knowledge production, and whether and to what extent beliefs and attitudes must be shared among group members to make collective knowledge possible (Bird 2014). This is still an open question. Some formal approaches to modeling the influence of diversity on knowledge generation suggest that cognitive diversity is beneficial to collective knowledge generation (Weisberg and Muldoon 2009), but others have criticized the model (Alexander et al. 2015; see also Thoma 2015 and Pöyhönen 2017 for further discussion).
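The flavor of such formal models can be conveyed with a minimal agent-based sketch. Everything below (the one-dimensional landscape, the two search strategies, the parameters) is an invented simplification in the spirit of Weisberg and Muldoon’s epistemic-landscape approach, not their actual model:

```python
# A minimal agent-based sketch in the spirit of Weisberg and Muldoon's
# epistemic-landscape models. The landscape, both search strategies, and
# all parameters are invented simplifications, not the original model.
import random

random.seed(0)

def significance(x):
    # Invented landscape: a low peak near x = 10, a higher peak near x = 40.
    return max(0.0, 5 - abs(x - 10)) + max(0.0, 9 - abs(x - 40))

def hill_climber(start, steps=100):
    """A conservative agent: moves only to strictly better neighbors."""
    x = start
    for _ in range(steps):
        best = max([x - 1, x, x + 1], key=significance)
        if significance(best) <= significance(x):
            break  # local peak reached
        x = best
    return x

def explorer(start, steps=100):
    """A 'maverick' agent: usually steps locally, sometimes jumps afar."""
    x, best_x = start, start
    for _ in range(steps):
        if random.random() < 0.7:
            x += random.choice([-1, 1])
        else:
            x = random.randint(0, 50)  # jump to an unexplored region
        if significance(x) > significance(best_x):
            best_x = x
    return best_x

# A homogeneous group of hill-climbers starting near x = 10 all converge
# on the low peak; mixing in explorers makes finding the higher peak likely.
```

The toy exhibits the qualitative claim under discussion: a cognitively uniform group of conservative searchers can get stuck on a minor peak of epistemic significance, while a diverse group has a better chance of locating the major one.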

This essay has illustrated that philosophy of discovery has come full circle: it has once again become a thriving field of philosophical study, now intersecting with, and drawing on, philosophical and empirical studies of creative thinking, problem solving under uncertainty, collective knowledge production, and machine learning. Recent approaches to discovery are typically explicitly interdisciplinary and integrative, cutting across previous distinctions among hypothesis generation and theory building, data collection, assessment, and selection, as well as across descriptive-analytic, historical, and normative perspectives (Danks & Ippoliti 2018, Michel 2021). The goal is no longer to provide one overarching account of scientific discovery but to produce multifaceted analyses of past and present activities of knowledge generation in all their complexity and heterogeneity that are illuminating to the non-scientist and the scientific researcher alike.

  • Abraham, A. 2019, The Neuroscience of Creativity, Cambridge: Cambridge University Press.
  • Addis, M., Sozou, P.D., Gobet, F. and Lane, P. R., 2016, “Computational scientific discovery and cognitive science theories”, in Mueller, V. C. (ed.) Computing and Philosophy , Springer, 83–87.
  • Alexander, J., Himmelreich, J., and Thompson, C. 2015, Epistemic Landscapes, Optimal Search, and the Division of Cognitive Labor, Philosophy of Science 82: 424–453.
  • Arabatzis, T. 1996, “Rethinking the ‘Discovery’ of the Electron,” Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics , 27: 405–435.
  • Bartha, P., 2010, By Parallel Reasoning: The Construction and Evaluation of Analogical Arguments , New York: Oxford University Press.
  • Bechtel, W. and R. Richardson, 1993, Discovering Complexity , Princeton: Princeton University Press.
  • Benjamin, A.C., 1934, “The Mystery of Scientific Discovery”, Philosophy of Science , 1: 224–36.
  • Bird, A. 2014, “When is There a Group that Knows? Distributed Cognition, Scientific Knowledge, and the Social Epistemic Subject”, in J. Lackey (ed.), Essays in Collective Epistemology , Oxford: Oxford University Press, 42–63.
  • Blackburn, S. 2014, “Creativity and Not-So-Dumb Luck”, in Paul, E. S. and Kaufman, S. B. (eds.), The Philosophy of Creativity: New Essays , New York: Oxford Academic online edn. https://doi.org/10.1093/acprof:oso/9780199836963.003.0008.
  • Blackwell, R.J., 1969, Discovery in the Physical Sciences , Notre Dame: University of Notre Dame Press.
  • Boden, M.A., 2004, The Creative Mind: Myths and Mechanisms , London: Routledge.
  • –––, 2014, “Creativity and Artificial Intelligence: A Contradiction in Terms?”, in Paul, E. S. and Kaufman, S. B. (eds.), The Philosophy of Creativity: New Essays , New York: Oxford Academic online edn., https://doi.org/10.1093/acprof:oso/9780199836963.003.0012.
  • Brannigan, A., 1981, The Social Basis of Scientific Discoveries , Cambridge: Cambridge University Press.
  • Brem, S. and L.J. Rips, 2000, “Explanation and Evidence in Informal Argument”, Cognitive Science , 24: 573–604.
  • Campbell, D., 1960, “Blind Variation and Selective Retention in Creative Thought as in Other Knowledge Processes”, Psychological Review , 67: 380–400.
  • Carmichael, R.D., 1922, “The Logic of Discovery”, The Monist , 32: 569–608.
  • –––, 1930, The Logic of Discovery , Chicago: Open Court.
  • Chow, S. 2015, “Many Meanings of ‘Heuristic’”, British Journal for the Philosophy of Science , 66: 977–1016
  • Craver, C.F., 2002, “Interlevel Experiments, Multilevel Mechanisms in the Neuroscience of Memory”, Philosophy of Science Supplement , 69: 83–97.
  • Craver, C.F. and L. Darden, 2013, In Search of Mechanisms: Discoveries across the Life Sciences , Chicago: University of Chicago Press.
  • Curd, M., 1980, “The Logic of Discovery: An Analysis of Three Approaches”, in T. Nickles (ed.) Scientific Discovery, Logic, and Rationality , Dordrecht: D. Reidel, 201–19.
  • Danks, D. & Ippoliti, E. (eds.) 2018, Building Theories: Heuristics and Hypotheses in Sciences , Cham: Springer.
  • Darden, L., 1991, Theory Change in Science: Strategies from Mendelian Genetics , New York: Oxford University Press.
  • –––, 2002, “Strategies for Discovering Mechanisms: Schema Instantiation, Modular Subassembly, Forward/Backward Chaining”, Philosophy of Science , 69: S354-S65.
  • –––, 2009, “Discovering Mechanisms in Molecular Biology: Finding and Fixing Incompleteness and Incorrectness”, in J. Meheus and T. Nickles (eds.), Models of Discovery and Creativity , Dordrecht: Springer, 43–55.
  • Dewey, J. 1910, How We Think . Boston: D.C. Heath
  • Dragos, C., 2019, “Groups Can Know How”, American Philosophical Quarterly , 56: 265–276.
  • Ducasse, C.J., 1951, “Whewell’s Philosophy of Scientific Discovery II”, The Philosophical Review , 60(2): 213–34.
  • Dunbar, K., 1997, “How scientists think: On-line creativity and conceptual change in science”, in T.B. Ward, S.M. Smith, and J. Vaid (eds.), Conceptual Structures and Processes: Emergence, Discovery, and Change , Washington, DC: American Psychological Association Press, 461–493.
  • –––, 2001, “The Analogical Paradox: Why Analogy is so Easy in Naturalistic Settings Yet so Difficult in Psychological Laboratories”, in D. Gentner, K.J. Holyoak, and B.N. Kokinov (eds.), The Analogical Mind: Perspectives from Cognitive Science , Cambridge, MA: MIT Press.
  • Dunbar, K, J. Fugelsang, and C Stein, 2007, “Do Naïve Theories Ever Go Away? Using Brain and Behavior to Understand Changes in Concepts”, in M. Lovett and P. Shah (eds.), Thinking with Data: 33rd Carnegie Symposium on Cognition , Mahwah: Erlbaum, 193–205.
  • Feist, G.J., 1999, “The Influence of Personality on Artistic and Scientific Creativity”, in R.J. Sternberg (ed.), Handbook of Creativity , New York: Cambridge University Press, 273–96.
  • –––, 2006, The psychology of science and the origins of the scientific mind , New Haven: Yale University Press.
  • Gillies D., 1996, Artificial intelligence and scientific method . Oxford: Oxford University Press.
  • –––, 2018 “Discovering Cures in Medicine” in Danks, D. & Ippoliti, E. (eds.), Building Theories: Heuristics and Hypotheses in Sciences , Cham: Springer, 83–100.
  • Goldman, Alvin & O’Connor, C., 2021, “Social Epistemology”, The Stanford Encyclopedia of Philosophy (Winter 2021 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/win2021/entries/epistemology-social/>.
  • Gramelsberger, G. 2011, “What Do Numerical (Climate) Models Really Represent?” Studies in History and Philosophy of Science 42: 296–302.
  • Gutting, G., 1980, “Science as Discovery”, Revue internationale de philosophie , 131: 26–48.
  • Hanson, N.R., 1958, Patterns of Discovery , Cambridge: Cambridge University Press.
  • –––, 1960, “Is there a Logic of Scientific Discovery?”, Australasian Journal of Philosophy , 38: 91–106.
  • –––, 1965, “Notes Toward a Logic of Discovery”, in R.J. Bernstein (ed.), Perspectives on Peirce. Critical Essays on Charles Sanders Peirce , New Haven and London: Yale University Press, 42–65.
  • Harman, G.H., 1965, “The Inference to the Best Explanation”, Philosophical Review , 74.
  • Hausman, C. R. 1984, A Discourse on Novelty and Creation , New York: SUNY Press.
  • Hempel, C.G., 1985, “Thoughts in the Limitations of Discovery by Computer”, in K. Schaffner (ed.), Logic of Discovery and Diagnosis in Medicine , Berkeley: University of California Press, 115–22.
  • Hesse, M., 1966, Models and Analogies in Science , Notre Dame: University of Notre Dame Press.
  • Hey, S. 2016 “Heuristics and Meta-heuristics in Scientific Judgement”, British Journal for the Philosophy of Science , 67: 471–495
  • Hills, A., Bird, A. 2019, “Against Creativity”, Philosophy and Phenomenological Research , 99: 694–713.
  • Holyoak, K.J. and P. Thagard, 1996, Mental Leaps: Analogy in Creative Thought , Cambridge, MA: MIT Press.
  • Howard, D., 2006, “Lost Wanderers in the Forest of Knowledge: Some Thoughts on the Discovery-Justification Distinction”, in J. Schickore and F. Steinle (eds.), Revisiting Discovery and Justification. Historical and Philosophical Perspectives on the Context Distinction , Dordrecht: Springer, 3–22.
  • Hoyningen-Huene, P., 1987, “Context of Discovery and Context of Justification”, Studies in History and Philosophy of Science , 18: 501–15.
  • Hull, D.L., 1988, Science as Practice: An Evolutionary Account of the Social and Conceptual Development of Science , Chicago: University of Chicago Press.
  • Ippoliti, E. 2018, “Heuristic Logic. A Kernel” in Danks, D. & Ippoliti, E. (eds.) Building Theories: Heuristics and Hypotheses in Sciences , Cham: Springer, 191–212
  • Jantzen, B.C., 2016, “Discovery without a ‘Logic’ would be a Miracle”, Synthese , 193: 3209–3238.
  • Johnson-Laird, P., 1983, Mental Models , Cambridge: Cambridge University Press.
  • Kieran, M., 2014, “Creativity as a Virtue of Character,” in E. Paul and S. B. Kaufman (eds.), The Philosophy of Creativity: New Essays . Oxford: Oxford University Press, 125–44
  • King, R. D. et al. 2009, “The Automation of Science”, Science 324: 85–89.
  • Koehler, D.J., 1991, “Explanation, Imagination, and Confidence in Judgment”, Psychological Bulletin , 110: 499–519.
  • Koertge, N. 1980, “Analysis as a Method of Discovery during the Scientific Revolution” in Nickles, T. (ed.) Scientific Discovery, Logic, and Rationality vol. I, Dordrecht: Reidel, 139–157
  • Koons, J.R. 2021, “Knowledge as a Collective Status”, Analytic Philosophy , https://doi.org/10.1111/phib.12224
  • Kounios, J. and Beeman, M. 2009, “The Aha! Moment: The Cognitive Neuroscience of Insight”, Current Directions in Psychological Science , 18: 210–16.
  • Kordig, C., 1978, “Discovery and Justification”, Philosophy of Science , 45: 110–17.
  • Kronfeldner, M. 2009, “Creativity Naturalized”, The Philosophical Quarterly 59: 577–592.
  • Kuhn, T.S., 1970 [1962], The Structure of Scientific Revolutions , 2 nd edition, Chicago: The University of Chicago Press; first edition, 1962.
  • Kulkarni, D. and H.A. Simon, 1988, “The processes of scientific discovery: The strategy of experimentation”, Cognitive Science , 12: 139–76.
  • Langley, P., 2000, “The Computational Support of Scientific Discovery”, International Journal of Human-Computer Studies , 53: 393–410.
  • Langley, P., H.A. Simon, G.L. Bradshaw, and J.M. Zytkow, 1987, Scientific Discovery: Computational Explorations of the Creative Processes , Cambridge, MA: MIT Press.
  • Laudan, L., 1980, “Why Was the Logic of Discovery Abandoned?” in T. Nickles (ed.), Scientific Discovery (Volume I), Dordrecht: D. Reidel, 173–83.
  • Leonelli, S. 2020, “Scientific Research and Big Data”, The Stanford Encyclopedia of Philosophy (Summer 2020 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/sum2020/entries/science-big-data/>
  • Leplin, J., 1987, “The Bearing of Discovery on Justification”, Canadian Journal of Philosophy , 17: 805–14.
  • Longino, H. 2001, The Fate of Knowledge , Princeton: Princeton University Press
  • Lugg, A., 1985, “The Process of Discovery”, Philosophy of Science , 52: 207–20.
  • Magnani, L., 2000, Abduction, Reason, and Science: Processes of Discovery and Explanation , Dordrecht: Kluwer.
  • –––, 2009, “Creative Abduction and Hypothesis Withdrawal”, in J. Meheus and T. Nickles (eds.), Models of Discovery and Creativity , Dordrecht: Springer.
  • Magnani, L. and N.J. Nersessian, 2002, Model-Based Reasoning: Science, Technology, and Values , Dordrecht: Kluwer.
  • Magnani, L., N.J. Nersessian, and P. Thagard, 1999, Model-Based Reasoning in Scientific Discovery , Dordrecht: Kluwer.
  • Michel, J. (ed.) 2021, Making Scientific Discoveries. Interdisciplinary Reflections , Brill | mentis.
  • Minai, A., Doboli, S., Iyer, L. 2022 “Models of Creativity and Ideation: An Overview” in Ali A. Minai, Jared B. Kenworthy, Paul B. Paulus, Simona Doboli (eds.), Creativity and Innovation. Cognitive, Social, and Computational Approaches , Springer, 21–46.
  • Nersessian, N.J., 1992, “How do scientists think? Capturing the dynamics of conceptual change in science”, in R. Giere (ed.), Cognitive Models of Science , Minneapolis: University of Minnesota Press, 3–45.
  • –––, 1999, “Model-based reasoning in conceptual change”, in L. Magnani, N.J. Nersessian and P. Thagard (eds.), Model-Based Reasoning in Scientific Discovery , New York: Kluwer, 5–22.
  • –––, 2009, “Conceptual Change: Creativity, Cognition, and Culture ” in J. Meheus and T. Nickles (eds.), Models of Discovery and Creativity , Dordrecht: Springer, 127–66.
  • Newell, A. and H. A Simon, 1971, “Human Problem Solving: The State of the Theory in 1970”, American Psychologist , 26: 145–59.
  • Newton, I. 1718, Opticks; or, A Treatise of the Reflections, Inflections and Colours of Light , London: Printed for W. and J. Innys, Printers to the Royal Society.
  • Nickles, T., 1984, “Positive Science and Discoverability”, PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association , 1984: 13–27.
  • –––, 1985, “Beyond Divorce: Current Status of the Discovery Debate”, Philosophy of Science , 52: 177–206.
  • –––, 1989, “Truth or Consequences? Generative versus Consequential Justification in Science”, PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association , 1988, 393–405.
  • –––, 2018, “TTT: A Fast Heuristic to New Theories?” in Danks, D. & Ippoliti, E. (eds.) Building Theories: Heuristics and Hypotheses in Sciences , Cham: Springer, 213–244.
  • Pasquale, J.-F. de and Poirier, P. 2016, “Convolution and Modal Representations in Thagard and Stewart’s Neural Theory of Creativity: A Critical Analysis ”, Synthese , 193: 1535–1560
  • Paul, E. S. and Kaufman, S. B. (eds.), 2014a, The Philosophy of Creativity: New Essays , New York: Oxford Academic online edn., https://doi.org/10.1093/acprof:oso/9780199836963.001.0001.
  • –––, 2014b, “Introducing: The Philosophy of Creativity”, in Paul, E. S. and Kaufman, S. B. (eds.), The Philosophy of Creativity: New Essays , New York: Oxford Academic online edn., https://doi.org/10.1093/acprof:oso/9780199836963.003.0001.
  • Pietsch, W. 2015, “Aspects of Theory-Ladenness in Data-Intensive Science”, Philosophy of Science 82: 905–916.
  • Popper, K., 2002 [1934/1959], The Logic of Scientific Discovery , London and New York: Routledge; original published in German in 1934; first English translation in 1959.
  • Pöyhönen, S. 2017, “Value of Cognitive Diversity in Science”, Synthese , 194(11): 4519–4540. doi:10.1007/s11229-016-1147-4
  • Pulte, H. 2019, “‘‘Tis Much Better to Do a Little with Certainty’: On the Reception of Newton’s Methodology”, in The Reception of Isaac Newton in Europe , Pulte, H, and Mandelbrote, S. (eds.), Continuum Publishing Corporation, 355–84.
  • Reichenbach, H., 1938, Experience and Prediction. An Analysis of the Foundations and the Structure of Knowledge , Chicago: The University of Chicago Press.
  • Richardson, A., 2006, “Freedom in a Scientific Society: Reading the Context of Reichenbach’s Contexts”, in J. Schickore and F. Steinle (eds.), Revisiting Discovery and Justification. Historical and Philosophical Perspectives on the Context Distinction , Dordrecht: Springer, 41–54.
  • Russell, S. 2021, “Human-Compatible Artificial Intelligence”, in Human Like Machine Intelligence , Muggleton, S. and Charter, N. (eds.), Oxford: Oxford University Press, 4–23
  • Schaffer, S., 1986, “Scientific Discoveries and the End of Natural Philosophy”, Social Studies of Science , 16: 387–420.
  • –––, 1994, “Making Up Discovery”, in M.A. Boden (ed.), Dimensions of Creativity , Cambridge, MA: MIT Press, 13–51.
  • Schaffner, K., 1993, Discovery and Explanation in Biology and Medicine , Chicago: University of Chicago Press.
  • –––, 2008 “Theories, Models, and Equations in Biology: The Heuristic Search for Emergent Simplifications in Neurobiology”, Philosophy of Science , 75: 1008–21.
  • Schickore, J. and F. Steinle, 2006, Revisiting Discovery and Justification. Historical and Philosophical Perspectives on the Context Distinction , Dordrecht: Springer.
  • Schiller, F.C.S., 1917, “Scientific Discovery and Logical Proof”, in C.J. Singer (ed.), Studies in the History and Method of Science (Volume 1), Oxford: Clarendon, 235–89.
  • Simon, H.A., 1973, “Does Scientific Discovery Have a Logic?”, Philosophy of Science , 40: 471–80.
  • –––, 1977, Models of Discovery and Other Topics in the Methods of Science , Dordrecht: D. Reidel.
  • Simon, H.A., P.W. Langley, and G.L. Bradshaw, 1981, “Scientific Discovery as Problem Solving”, Synthese , 47: 1–28.
  • Smith, G.E., 2002, “The Methodology of the Principia ”, in G.E. Smith and I.B. Cohen (eds), The Cambridge Companion to Newton , Cambridge: Cambridge University Press, 138–73.
  • Simonton, D. K., 2014, “Hierarchies of Creative Domains: Disciplinary Constraints on Blind Variation and Selective Retention”, in Paul, E. S. and Kaufman, S. B. (eds), The Philosophy of Creativity: New Essays , New York: Oxford Academic online edn. https://doi.org/10.1093/acprof:oso/9780199836963.003.0013
  • Snyder, L.J., 1997, “Discoverers’ Induction”, Philosophy of Science , 64: 580–604.
  • Solomon, M., 2009, “Standpoint and Creativity”, Hypatia : 226–37.
  • Sternberg, R J. and T. I. Lubart, 1999, “The concept of creativity: Prospects and paradigms,” in R. J. Sternberg (ed.) Handbook of Creativity , Cambridge: Cambridge University Press, 3–15.
  • Stokes, D., 2011, “Minimally Creative Thought”, Metaphilosophy , 42: 658–81.
  • Tamaddoni-Nezhad, A., Bohan, D., Afroozi Milani, G., Raybould, A., Muggleton, S., 2021, “Human–Machine Scientific Discovery”, in Human Like Machine Intelligence , Muggleton, S. and Charter, N., (eds.), Oxford: Oxford University Press, 297–315
  • Thagard, P., 1984, “Conceptual Combination and Scientific Discovery”, PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association , 1984(1): 3–12.
  • –––, 1999, How Scientists Explain Disease , Princeton: Princeton University Press.
  • –––, 2010, “How Brains Make Mental Models”, in L. Magnani, N.J. Nersessian and P. Thagard (eds.), Model-Based Reasoning in Science & Technology , Berlin and Heidelberg: Springer, 447–61.
  • –––, 2012, The Cognitive Science of Science , Cambridge, MA: MIT Press.
  • Thagard, P. and Stewart, T. C., 2011, “The AHA! Experience: Creativity Through Emergent Binding in Neural Networks”, Cognitive Science , 35: 1–33.
  • Thoma, Johanna, 2015, “The Epistemic Division of Labor Revisited”, Philosophy of Science , 82: 454–472. doi:10.1086/681768
  • Weber, M., 2005, Philosophy of Experimental Biology , Cambridge: Cambridge University Press.
  • Whewell, W., 1996 [1840], The Philosophy of the Inductive Sciences (Volume II), London: Routledge/Thoemmes.
  • Weisberg, M. and Muldoon, R., 2009, “Epistemic Landscapes and the Division of Cognitive Labor”, Philosophy of Science , 76: 225–252. doi:10.1086/644786
  • Williams, K. et al. 2015, “Cheaper Faster Drug Development Validated by the Repositioning of Drugs against Neglected Tropical Diseases”, Journal of the Royal Society Interface 12: 20141289. http://dx.doi.org/10.1098/rsif.2014.1289.
  • Zahar, E., 1983, “Logic of Discovery or Psychology of Invention?”, British Journal for the Philosophy of Science , 34: 243–61.
  • Zednik, C. and Jäkel, F. 2016 “Bayesian Reverse-Engineering Considered as a Research Strategy for Cognitive Science”, Synthese , 193, 3951–3985.

Copyright © 2022 by Jutta Schickore <jschicko@indiana.edu>



Identifying problems and solutions in scientific text

Kevin Heffernan

Department of Computer Science and Technology, University of Cambridge, 15 JJ Thomson Avenue, Cambridge, CB3 0FD UK

Simone Teufel

Abstract

Research is often described as a problem-solving activity, and as a result, descriptions of problems and solutions are an essential part of the scientific discourse used to describe research activity. We present an automatic classifier that, given a phrase that may or may not be a description of a scientific problem or a solution, makes a binary decision about the problemhood and solutionhood of that phrase. We recast the task as a supervised machine learning problem, define a set of 15 features correlated with the target categories, and use several machine learning algorithms on this task. We also create our own corpus of 2000 positive and negative examples of problems and solutions. We find that we can distinguish problems from non-problems with an accuracy of 82.3%, and solutions from non-solutions with an accuracy of 79.7%. Our three most helpful features for the task are syntactic information (POS tags), document embeddings, and word embeddings.

Introduction

Problem solving is generally regarded as the most important cognitive activity in everyday and professional contexts (Jonassen 2000 ). Many studies exist on formalising the cognitive process behind problem-solving, for instance Chandrasekaran ( 1983 ). Jordan ( 1980 ) argues that we all share knowledge of the thought/action problem-solution process involved in real life, and so our writings will often reflect this order. There is general agreement amongst theorists that the nature of the research process can be viewed as a problem-solving activity (Strübing 2007 ; Van Dijk 1980 ; Hutchins 1977 ; Grimes 1975 ).

One of the best-documented problem-solving patterns was established by Winter ( 1968 ). Winter analysed thousands of examples of technical texts, and noted that these texts can largely be described in terms of a four-part pattern consisting of Situation, Problem, Solution and Evaluation. This is very similar to the pattern described by Van Dijk ( 1980 ), which consists of Introduction-Theory, Problem-Experiment-Comment and Conclusion. The difference is that in Winter’s view, a solution only becomes a solution after it has been evaluated positively. Hoey changes Winter’s pattern by introducing the concept of Response in place of Solution (Hoey 2001 ). This seems to describe the situation in science better, where evaluation is mandatory for research solutions to be accepted by the community. In Hoey’s pattern, the Situation (which is generally treated as optional) provides background information; the Problem describes an issue which requires attention; the Response provides a way to deal with the issue, and the Evaluation assesses how effective the response is.

An example of this pattern in the context of the Goldilocks story can be seen in Fig.  1 . In this text, there is a preamble providing the setting of the story (i.e. Goldilocks is lost in the woods), which is called the Situation in Hoey’s system. A Problem is encountered when Goldilocks becomes hungry. Her first Response is to try the porridge in big bear’s bowl, but she gives this a negative Evaluation (“too hot!”), and so the pattern returns to the Problem. This continues in a cyclic fashion until the Problem is finally resolved when Goldilocks gives a positive Evaluation to a particular Response, baby bear’s porridge (“it’s just right”).


Example of problem-solving pattern when applied to the Goldilocks story.

Reproduced with permission from Hoey ( 2001 )

It would be attractive to detect problem and solution statements automatically in text. This holds true both from a theoretical and a practical viewpoint. Theoretically, we know that sentiment detection is related to problem-solving activity, because of the perception that “bad” situations are transformed into “better” ones via problem-solving. The exact mechanism of how this can be detected would advance the state of the art in text understanding. In terms of linguistic realisation, problem and solution statements come in many variants and reformulations, often in the form of positive or negated statements about the conditions, results and causes of problem–solution pairs. Detecting and interpreting those would give us a reasonably objective manner to test a system’s understanding capacity. Practically, being able to detect any mention of a problem is a first step towards detecting a paper’s specific research goal. Being able to do this has been a goal for scientific information retrieval for some time, and if successful, it would improve the effectiveness of scientific search immensely. Detecting problem and solution statements of papers would also enable us to compare similar papers and eventually even lead to automatic generation of review articles in a field.

There has been some computational effort on the task of identifying problem-solving patterns in text. However, most of the prior work has not gone beyond the usage of keyword analysis and some simple contextual examination of the pattern. Flowerdew ( 2008 ) presents a corpus-based analysis of lexico-grammatical patterns for problem and solution clauses using articles from professional and student reports. Problem and solution keywords were used to search their corpora, and each occurrence was analysed to determine grammatical usage of the keyword. More interestingly, the causal category associated with each keyword in its context was also analysed. For example, Reason–Result or Means–Purpose were common causal categories found to be associated with problem keywords.

The goal of the work by Scott ( 2001 ) was to determine words which are semantically similar to problem and solution, and to determine how these words are used to signal problem-solution patterns. However, their corpus-based analysis used articles from the Guardian newspaper. Since the domain of newspaper text is very different from that of scientific text, we decided not to consider those keywords associated with problem-solving patterns for use in our work.

Instead of a keyword-based approach, Charles ( 2011 ) used discourse markers to examine how the problem-solution pattern was signalled in text. In particular, they examined how adverbials associated with a result such as “thus, therefore, then, hence” are used to signal a problem-solving pattern.

Problem solving also has been studied in the framework of discourse theories such as Rhetorical Structure Theory (Mann and Thompson 1988 ) and Argumentative Zoning (Teufel et al. 2000 ). Problem- and solutionhood constitute two of the original 23 relations in RST (Mann and Thompson 1988 ). While we concentrate solely on this aspect, RST is a general theory of discourse structure which covers many intentional and informational relations. The relationship to Argumentative Zoning is more complicated. The status of certain statements as problem or solutions is one important dimension in the definitions of AZ categories. AZ additionally models dimensions other than problem-solution hood (such as who a scientific idea belongs to, or which intention the authors might have had in stating a particular negative or positive statement). When forming categories, AZ combines aspects of these dimensions, and “flattens” them out into only 7 categories. In AZ it is crucial who it is that experiences the problems or contributes a solution. For instance, the definition of category “CONTRAST” includes statements that some research runs into problems, but only if that research is previous work (i.e., not if it is the work contributed in the paper itself). Similarly, “BASIS” includes statements of successful problem-solving activities, but only if they are achieved by previous work that the current paper bases itself on. Our definition is simpler in that we are interested only in problem solution structure, not in the other dimensions covered in AZ. Our definition is also more far-reaching than AZ, in that we are interested in all problems mentioned in the text, no matter whose problems they are. Problem-solution recognition can therefore be seen as one aspect of AZ which can be independently modelled as a “service task”. This means that good problem solution structure recognition should theoretically improve AZ recognition.

In this work, we approach the task of identifying problem-solving patterns in scientific text. We choose to use the model of problem-solving described by Hoey ( 2001 ). This pattern comprises four parts: Situation, Problem, Response and Evaluation. The Situation element is considered optional to the pattern, and so our focus centres on the core pattern elements.

Goal statement and task

Many surface features in the text offer themselves up as potential signals for detecting problem-solving patterns in text. However, since Situation is an optional element, we decided to focus on either Problem or Response and Evaluation as signals of the pattern. Moreover, we decide to look for each type in isolation. Our reasons for this are as follows: It is quite rare for an author to introduce a problem without resolving it using some sort of response, and so this is a good starting point in identifying the pattern. There are exceptions to this, as authors will sometimes introduce a problem and then leave it to future work, but overall there should be enough signal in the Problem element to make our method of looking for it in isolation worthwhile. The second signal we look for is the use of Response and Evaluation within the same sentence. Similar to Problem elements, we hypothesise that this formulation is well enough signalled externally to help us in detecting the pattern. For example, consider the following Response and Evaluation: “One solution is to use smoothing”. In this statement, the author is explicitly stating that smoothing is a solution to a problem which must have been mentioned in a prior statement. In scientific text, we often observe that solutions implicitly contain both Response and Evaluation (positive) elements. Therefore, due to these reasons there should be sufficient external signals for the two pattern elements we concentrate on here.

When attempting to find Problem elements in text, we run into the issue that the word “problem” actually has at least two word senses that need to be distinguished. There is a word sense of “problem” that means something which must be undertaken (i.e. task), while another sense is the core sense of the word, something that is problematic and negative. Only the latter sense is aligned with our sense of problemhood. This is because the simple description of a task does not predispose problemhood, just a wish to perform some act. Consider the following examples, where the non-desired word sense is being used:

  • “Das and Petrov (2011) also consider the problem of unsupervised bilingual POS induction”. (Chen et al. 2011 ).
  • “In this paper, we describe advances on the problem of NER in Arabic Wikipedia”. (Mohit et al. 2012 ).

Here, although the authors explicitly state that these phrases are problems, the phrases align with our definition of research tasks and not with what we call here ‘problematic problems’. We will now give some examples from our corpus of the desired, core word sense:

  • “The major limitation of supervised approaches is that they require annotations for example sentences.” (Poon and Domingos 2009 ).
  • “To solve the problem of high dimensionality we use clustering to group the words present in the corpus into much smaller number of clusters”. (Saha et al. 2008 ).

When creating our corpus of positive and negative examples, we took care to select only problem strings that satisfy our definition of problemhood; “ Corpus creation ” section will explain how we did that.

Corpus creation

Our new corpus is a subset of the latest version of the ACL anthology, released in March 2016, which contains 22,878 articles in the form of PDFs and OCRed text.

The 2016 version was also parsed using ParsCit (Councill et al. 2008 ). ParsCit recognises not only document structure, but also bibliography lists as well as references within running text. A random subset of 2500 papers was collected covering the entire ACL timeline. In order to disregard non-article publications such as introductions to conference proceedings or letters to the editor, only documents containing abstracts were considered. The corpus was preprocessed using tokenisation, lemmatisation and dependency parsing with the Rasp Parser (Briscoe et al. 2006 ).

Definition of ground truth

Our goal was to define a ground truth for problem and solution strings, while covering as wide a range as possible of syntactic variations in which such strings naturally occur. We also want this ground truth to cover phenomena of problem and solution status which are applicable whether or not the problem or solution status is explicitly mentioned in the text.

To simplify the task, we only consider here problem and solution descriptions that are at most one sentence long. In reality, of course, many problem and solution descriptions go beyond a single sentence and require, for instance, an entire paragraph. However, we also know that short summaries of problems and solutions are very prevalent in science, and that these tend to occur in the most prominent places in a paper. This is because scientists are trained to express their contribution, and the obstacles possibly hindering their success, in an informative, succinct manner. That is the reason why we can afford to look only for shorter problem and solution descriptions, ignoring those that cross sentence boundaries.

To define our ground truth, we examined the parsed dependencies and looked for a target word (“problem/solution”) in subject position, and then chose its syntactic argument as our candidate problem or solution phrase. To increase the variation, i.e., to find as many different-worded problem and solution descriptions as possible, we additionally used semantically similar words (near-synonyms) of the target words “problem” or “solution” for the search. Semantic similarity was defined as cosine in a deep learning distributional vector space, trained using Word2Vec (Mikolov et al. 2013 ) on 18,753,472 sentences from a biomedical corpus based on all full-text Pubmed articles (McKeown et al. 2016 ). From the 200 words which were semantically closest to “problem”, we manually selected 28 clear synonyms. These are listed in Table  1 . From the 200 semantically closest words to “solution” we similarly chose 19 (Table  2 ). Of the sentences matching our dependency search, a subset of problem and solution candidate sentences were randomly selected.
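The near-synonym selection step above ranks the vocabulary by cosine similarity to a target word. A minimal sketch of that ranking follows, using invented three-dimensional toy vectors in place of the trained Word2Vec model; the `nearest_words` helper and the `toy` vocabulary are illustrative assumptions, not artefacts of the paper:

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length dense vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nearest_words(target, embeddings, k=200):
    """Rank vocabulary words by cosine similarity to `target`'s vector.

    `embeddings` maps word -> vector; this mirrors querying a trained
    Word2Vec model for the k most similar words.
    """
    t = embeddings[target]
    scored = [(w, cosine(t, v)) for w, v in embeddings.items() if w != target]
    scored.sort(key=lambda x: x[1], reverse=True)
    return scored[:k]

# Toy vocabulary: "drawback" points in nearly the same direction as "problem".
toy = {
    "problem":  [1.0, 0.9, 0.1],
    "drawback": [0.9, 1.0, 0.2],
    "solution": [-0.8, -0.9, 0.3],
}
print(nearest_words("problem", toy, k=1)[0][0])  # -> drawback
```

The top-k list from such a query is then pruned by hand, as described above, to keep only clear synonyms.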

Selected words for use in problem candidate phrase extraction

Selected words for use in solution candidate phrase extraction

An example of this is shown in Fig.  2 . Here, the target word “drawback” is in subject position (highlighted in red), and its clausal argument (ccomp) is “(that) it achieves low performance” (highlighted in purple). Examples of other arguments we searched for included copula constructions and direct/indirect objects.


Example of our extraction method for problems using dependencies. (Color figure online)
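The extraction rule, a target lemma in subject position whose head verb takes a clausal argument, can be sketched over pre-parsed tokens. The tuple encoding, the `PROBLEM_WORDS` set, and the helper names below are assumptions for illustration, not the RASP-based pipeline actually used:

```python
# Each token: (index, text, dependency label, head index). This mimics the
# output of a dependency parser; labels follow common conventions
# (nsubj = subject, ccomp = clausal complement).
PROBLEM_WORDS = {"problem", "drawback", "limitation"}

def collect_subtree(tokens, root):
    # Gather `root` and all tokens transitively headed by it, in word order.
    keep = {root}
    changed = True
    while changed:
        changed = False
        for idx, _, _, head in tokens:
            if head in keep and idx not in keep:
                keep.add(idx)
                changed = True
    return " ".join(t for i, t, _, _ in sorted(tokens) if i in keep)

def extract_candidate(tokens):
    """If a problem word appears as a subject, return the clausal-complement
    argument of its head verb as the candidate problem phrase."""
    by_head = {}
    for idx, text, dep, head in tokens:
        by_head.setdefault(head, []).append((idx, text, dep))
    for idx, text, dep, head in tokens:
        if text.lower() in PROBLEM_WORDS and dep == "nsubj":
            # look among siblings (same head verb) for a clausal argument
            for cidx, ctext, cdep in by_head.get(head, []):
                if cdep == "ccomp":
                    return collect_subtree(tokens, cidx)
    return None

# "The drawback is that it achieves low performance"
sent = [
    (0, "The", "det", 1),
    (1, "drawback", "nsubj", 2),
    (2, "is", "ROOT", 2),
    (3, "that", "mark", 5),
    (4, "it", "nsubj", 5),
    (5, "achieves", "ccomp", 2),
    (6, "low", "amod", 7),
    (7, "performance", "dobj", 5),
]
print(extract_candidate(sent))  # -> that it achieves low performance
```

The real search additionally covers copula constructions and direct/indirect objects, as noted above.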

If more than one candidate was found in a sentence, one was chosen at random. Non-grammatical sentences were excluded; these might appear in the corpus as a result of its source being OCRed text.

800 candidate phrases expressing problems and solutions were automatically extracted (1600 total) and then independently checked for correctness by two annotators (the two authors of this paper). Both authors found the task simple and straightforward. Correctness was defined by two criteria:

  • The phrase must describe one of the following: an unexplained phenomenon or a problematic state in science; a research question; or an artifact that does not fulfil its stated specification.
  • The phrase must not lexically give away its status as a problem or solution phrase.

The second criterion saves us from machine learning cues that are too obvious. If for instance, the phrase itself contained the words “lack of” or “problematic” or “drawback”, our manual check rejected it, because it would be too easy for the machine learner to learn such cues, at the expense of many other, more generally occurring cues.

Sampling of negative examples

We next needed to find negative examples for both cases. We wanted them not to stand out on the surface as negative examples, so we chose them to mimic the obvious characteristics of the positive examples as closely as possible. We call the negative examples ‘non-problems’ and ‘non-solutions’ respectively. We wanted the only differences between problems and non-problems to be of a semantic nature, with nothing that could be read off the surface. We therefore sampled a population of phrases that obey the same statistical distribution as our problem and solution strings while making sure they really are negative examples. We started from sentences not containing any problem/solution words (i.e. those used as target words). From each such sentence, we randomly selected one syntactic subtree contained in it. From these, we randomly selected a subset of negative examples of problems and solutions that satisfy the following conditions:

  • The distribution of the head POS tags of the negative strings should perfectly match the head POS tags 3 of the positive strings. This has the purpose of achieving the same proportion of surface syntactic constructions as observed in the positive cases.
  • The average lengths of the negative strings must be within a tolerance of the average length of their respective positive candidates, e.g., non-solutions must have an average length very similar (i.e. within a small tolerance) to solutions. We chose a tolerance value of 3 characters.
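The two sampling constraints above can be sketched as a simple validity check; the `valid_negative_sample` helper and the toy phrases are invented for illustration:

```python
from collections import Counter

def valid_negative_sample(positives, negatives, tol=3):
    """Check the two constraints from the text: the head-POS distribution of
    the negatives must exactly match that of the positives, and the average
    phrase lengths must agree within `tol` characters.
    Each item is a (head_pos_tag, phrase) pair."""
    pos_tags = Counter(tag for tag, _ in positives)
    neg_tags = Counter(tag for tag, _ in negatives)
    if pos_tags != neg_tags:
        return False
    avg = lambda items: sum(len(p) for _, p in items) / len(items)
    return abs(avg(positives) - avg(negatives)) <= tol

positives = [("NN", "lack of annotated data"), ("VB", "overgenerates badly")]
negatives = [("NN", "a corpus of news text"), ("VB", "parses the input")]
print(valid_negative_sample(positives, negatives))  # -> True
```

In practice one would keep drawing random subtrees until a candidate set passes such a check.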

Again, a human quality check was performed on non-problems and non-solutions. For each candidate non-problem statement, the candidate was accepted if it did not contain a phenomenon, a problematic state, a research question or a non-functioning artefact. If the string expressed a research task, without explicit statement that there was anything problematic about it (i.e., the ‘wrong’ sense of “problem”, as described above), it was allowed as a non-problem. A clause was confirmed as a non-solution if the string did not represent both a response and positive evaluation.

If the annotator found that the sentence had been slightly mis-parsed, but did contain a candidate, they were allowed to move the boundaries for the candidate clause. This resulted in cleaner text, e.g., in the frequent case of coordination, when non-relevant constituents could be removed.

From the set of sentences which passed the quality-test for both independent assessors, 500 instances of positive and negative problems/solutions were randomly chosen (i.e. 2000 instances in total). When checking for correctness we found that most of the automatically extracted phrases which did not pass the quality test for problem-/solution-hood were either due to obvious learning cues or instances where the sense of problem-hood used is relating to tasks (cf. “ Goal statement and task ” section).

Experimental design

In our experiments, we used three classifiers, namely Naïve Bayes, Logistic Regression and a Support Vector Machine. For all classifiers an implementation from the WEKA machine learning library (Hall et al. 2009 ) was chosen. Given that our dataset is small, tenfold cross-validation was used instead of a held out test set. All significance tests were conducted using the (two-tailed) Sign Test (Siegel 1956 ).
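The two-tailed sign test used for significance can be computed exactly from the binomial distribution. This standalone sketch (the `sign_test_p` helper is ours, not part of WEKA) discards ties and doubles the lower tail:

```python
from math import comb

def sign_test_p(wins_a, wins_b):
    """Two-tailed sign test: given the number of items on which system A
    beats B and vice versa (ties already discarded), return the exact
    binomial p-value under H0: P(A beats B) = 0.5."""
    n = wins_a + wins_b
    k = min(wins_a, wins_b)
    # P(X <= k) for X ~ Binomial(n, 0.5), doubled for two tails, capped at 1
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# 15 wins vs 3 losses out of 18 informative pairs
p = sign_test_p(15, 3)
print(round(p, 4))  # -> 0.0075
```

A p-value below the chosen threshold (0.05, 0.01 or 0.001 in the tables below) marks the difference between two classifiers as significant.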

Linguistic correlates of problem- and solution-hood

We first define a set of features without taking the phrase’s context into account. This will tell us about the disambiguation ability of the problem/solution description’s semantics alone. In particular, we cut out the rest of the sentence other than the phrase and never use it for classification. This is done for similar reasons to excluding certain ‘give-away’ phrases inside the phrases themselves (as explained above). As the phrases were found using templates, we know that the machine learner would simply pick up on the semantics of the template, which always contains a synonym of “problem” or “solution”, thus drowning out the more hidden features hopefully inherent in the semantics of the phrases themselves. If we allowed the machine learner to use these stronger features, it would suffer in its ability to generalise to the real task.

ngrams Bags of words are traditionally successfully used for classification tasks in NLP, so we included bags of words (lemmas) within the candidate phrases as one of our features (and treat it as a baseline later on). We also include bigrams and trigrams as multi-word combinations can be indicative of problems and solutions e.g., “combinatorial explosion”.
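Extracting unigram, bigram and trigram features from a lemmatised candidate phrase can be sketched as follows; `ngram_features` is an illustrative helper, not the feature extractor actually used in the paper:

```python
def ngram_features(lemmas, n_max=3):
    """Unigram, bigram and trigram features for a candidate phrase, as a set
    of space-joined strings (a bag-of-ngrams representation)."""
    feats = set()
    for n in range(1, n_max + 1):
        for i in range(len(lemmas) - n + 1):
            feats.add(" ".join(lemmas[i:i + n]))
    return feats

feats = ngram_features(["combinatorial", "explosion", "of", "parse"])
print("combinatorial explosion" in feats)  # the bigram cue is captured
```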

Polarity Our second feature concerns the polarity of each word in the candidate strings. Consider the following example of a problem taken from our dataset: “very conservative approaches to exact and partial string matches overgenerate badly”. In this sentence, words such as “badly” will be associated with negative polarity, therefore being useful in determining problem-hood. Similarly, solutions will often be associated with a positive sentiment e.g. “smoothing is a good way to overcome data sparsity” . To do this, we perform word sense disambiguation of each word using the Lesk algorithm (Lesk 1986 ). The polarity of the resulting synset in SentiWordNet (Baccianella et al. 2010 ) was then looked up and used as a feature.

Syntax Next, a set of syntactic features were defined by using the presence of POS tags in each candidate. This feature could be helpful in finding syntactic patterns in problems and solutions. We were careful not to base the model directly on the head POS tag and the length of each candidate phrase, as these are defining characteristics used for determining the non-problem and non-solution candidate set.

Negation Negation is an important property that can often greatly affect the polarity of a phrase. For example, a phrase containing a keyword pertinent to solution-hood may be a good indicator but with the presence of negation may flip the polarity to problem-hood e.g., “this can’t work as a solution”. Therefore, presence of negation is determined.
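A minimal sketch of the negation-presence feature; the cue list here is an illustrative assumption rather than the paper's actual lexicon:

```python
NEGATION_CUES = {"not", "n't", "no", "never", "cannot", "without"}

def has_negation(tokens):
    """Binary feature: does the candidate phrase contain a negation cue?"""
    return any(t.lower() in NEGATION_CUES for t in tokens)

print(has_negation("this ca n't work as a solution".split()))  # -> True
```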

Exemplification and contrast Problems and solutions are often found to be coupled with examples, as they allow the author to elucidate their point. For instance, consider the following solution: “Once the translations are generated, an obvious solution is to pick the most fluent alternative, e.g., using an n-gram language model”. (Madnani et al. 2012 ). To acknowledge this, we check for the presence of exemplification. In addition to examples, problems in particular are often found where contrast is signalled by the author (e.g. “however”, “but”); therefore we also check for the presence of contrast, in the problem and non-problem candidates only.

Discourse Problems and solutions have also been found to have a correlation with discourse properties. For example, problem-solving patterns often occur in the background sections of a paper. The rationale behind this is that the author is conventionally asked to objectively criticise other work in the background (e.g. describing research gaps which motivate the current paper). To take this in account, we examine the context of each string and capture the section header under which it is contained (e.g. Introduction, Future work). In addition, problems and solutions are often found following the Situation element in the problem-solving pattern (cf. “ Introduction ” section). This preamble setting up the problem or solution means that these elements are likely not to be found occurring at the beginning of a section (i.e. it will usually take some sort of introduction to detail how something is problematic and why a solution is needed). Therefore we record the distance from the candidate string to the nearest section header.
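The two discourse features just described, the enclosing section header and the distance from it, can be sketched as a backwards scan over the document's sentences; the header list and the `header_features` helper are illustrative assumptions:

```python
SECTION_HEADERS = {"introduction", "background", "method", "results",
                   "discussion", "conclusion", "future work"}

def header_features(sentences, idx):
    """For sentence `idx`, return the most recent section header above it
    and the distance (in sentences) from that header."""
    for back in range(idx, -1, -1):
        if sentences[back].strip().lower() in SECTION_HEADERS:
            return sentences[back], idx - back
    return None, idx + 1  # no header found above

doc = ["Introduction",
       "Parsing is hard.",
       "A major drawback is data sparsity.",
       "Method",
       "We use clustering."]
print(header_features(doc, 2))  # -> ('Introduction', 2)
```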

Subcategorisation and adverbials Solutions often involve an activity (e.g. a task). We also model the subcategorisation properties of the verbs involved. Our intuition was that since problematic situations are often described as non-actions, then these are more likely to be intransitive. Conversely solutions are often actions and are likely to have at least one argument. This feature was calculated by running the C&C parser (Curran et al. 2007 ) on each sentence. C&C is a supertagger and parser that has access to subcategorisation information. Solutions are also associated with resultative adverbial modification (e.g. “thus, therefore, consequently”) as it expresses the solutionhood relation between the problem and the solution. It has been seen to occur frequently in problem-solving patterns, as studied by Charles ( 2011 ). Therefore, we check for presence of resultative adverbial modification in the solution and non-solution candidate only.

Embeddings We also wanted to add more information using word embeddings. This was done in two different ways. Firstly, we created a Doc2Vec model (Le and Mikolov 2014 ), which was trained on  ∼  19  million sentences from scientific text (no overlap with our data set). An embedding was created for each candidate sentence. Secondly, word embeddings were calculated using the Word2Vec model (cf. “ Corpus creation ” section). For each candidate head, the full word embedding was included as a feature. Lastly, when creating our polarity feature we query SentiWordNet using synsets assigned by the Lesk algorithm. However, not all words are assigned a sense by Lesk, so we need to take care when that happens. In those cases, the distributional semantic similarity of the word is compared to two words with a known polarity, namely “poor” and “excellent”. These particular words have traditionally been consistently good indicators of polarity status in many studies (Turney 2002 ; Mullen and Collier 2004 ). Semantic similarity was defined as cosine similarity on the embeddings of the Word2Vec model (cf. “ Corpus creation ” section).

Modality Responses to problems in scientific writing often express possibility and necessity, and so have a close connection with modality. Modality can be broken into three main categories, as described by Kratzer ( 1991 ), namely epistemic (possibility), deontic (permission / request / wish) and dynamic (expressing ability).

Problems have a strong relationship to modality within scientific writing. Often, this is due to a tactic called “hedging” (Medlock and Briscoe 2007 ), where the author uses speculative language, often epistemic modality, in an attempt to make noncommittal or vague statements. This has the effect of allowing the author to distance themselves from the statement, and is often employed when discussing negative or problematic topics. Consider the following example of epistemic modality from Nakov and Hearst ( 2008 ): “A potential drawback is that it might not work well for low-frequency words”.

To take this linguistic correlate into account as a feature, we replicated the modality classifier described by Ruppenhofer and Rehbein ( 2012 ). More sophisticated modality classifiers have been introduced recently, for instance using a wide range of features and convolutional neural networks, e.g., Zhou et al. ( 2015 ) and Marasović and Frank ( 2016 ). However, we wanted to check the effect of a simpler method of modality classification on the final outcome before investing heavily in their implementation. We trained three classifiers using the subset of features which Ruppenhofer et al. reported as performing best, and evaluated them on the gold-standard dataset provided by the authors. The results are shown in Table  3 . The dataset contains annotations of English modal verbs on the 535 documents of the first MPQA corpus release (Wiebe et al. 2005 ).

Modality classifier results (precision/recall/f-measure) using Naïve Bayes (NB), logistic regression, and a support vector machine (SVM)

Italicized results reflect highest f-measure reported per modal category

Logistic Regression performed best overall, and so this model was chosen for our upcoming experiments. The optative and concessive modal categories perform extremely poorly, with the optative category receiving a null score across all three classifiers. This is due to a limitation of the dataset, which is unbalanced and contains very few instances of these two categories. This unbalanced data is also the reason behind our decision to report results in terms of recall, precision and f-measure in Table  3 .
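Per-category precision, recall and f-measure, with a null-score convention for empty categories such as the optative class, can be computed from raw counts; the `prf` helper is a sketch of the standard definitions, not code from the paper:

```python
def prf(tp, fp, fn):
    """Precision, recall and F1 from true-positive, false-positive and
    false-negative counts; returns zeros rather than dividing by zero
    when a category receives no predictions or has no gold instances."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

print(prf(8, 2, 2))   # balanced case: precision = recall = 0.8
print(prf(0, 0, 5))   # category never predicted -> null score
```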

The modality classifier was then retrained on the entirety of the dataset used by Ruppenhofer and Rehbein ( 2012 ) using the best performing model from training (Logistic Regression). This new model was then used in the upcoming experiment to predict modality labels for each instance in our dataset.

As can be seen from Table  4 , we are able to achieve good results for distinguishing a problematic statement from non-problematic one. The bag-of-words baseline achieves a very good performance of 71.0% for the Logistic Regression classifier, showing that there is enough signal in the candidate phrases alone to distinguish them much better than random chance.

Results distinguishing problems from non-problems using Naïve Bayes (NB), logistic regression (LR) and a support vector machine (SVM)

Each feature set’s performance is shown in isolation, followed by combinations with other features. Tenfold stratified cross-validation was used across all experiments. Statistical significance with respect to the baseline at the p < 0.05, 0.01 and 0.001 levels is denoted by *, ** and ***, respectively

Table 5 shows the information gain for the top lemmas.

Information gain (IG) in bits of top lemmas from the bag-of-words baseline in Table 4

We can see that the top lemmas are indeed indicative of problemhood (e.g. “limit”, “explosion”). Bigrams achieved good performance on their own (as did negation and discourse), but performance deteriorated when using trigrams, particularly with the SVM and LR. The subcategorisation feature was the worst-performing feature in isolation. Upon taking a closer look at our data, we saw that our hypothesis that intransitive verbs are commonly used in problematic statements was borne out, with over 30% of our problems (153) using them. However, due to our sampling method for the negative cases, we also picked up many intransitive verbs (163). This explains the near-chance performance (i.e. approximately 50%), given that the distribution of intransitive verbs amongst the positive and negative candidates was almost even.
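The lemma ranking in Table 5 uses the standard definition IG(class; feature) = H(class) - H(class | feature), measured in bits. A minimal sketch, assuming binary lemma-presence features; the toy counts below are invented:

```python
import math

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum((labels.count(c) / n) * math.log2(labels.count(c) / n)
                for c in set(labels))

def information_gain(feature_present, labels):
    """IG(class; feature) = H(class) - H(class | feature)."""
    n = len(labels)
    ig = entropy(labels)
    for value in (True, False):
        subset = [c for f, c in zip(feature_present, labels) if f is value]
        if subset:
            ig -= len(subset) / n * entropy(subset)
    return ig

# toy data: the lemma "limit" appears only in problem candidates
labels = ["problem", "problem", "non-problem", "non-problem"]
limit  = [True, True, False, False]   # perfectly predictive -> IG = 1.0 bit
the    = [True, True, True, True]     # uninformative        -> IG = 0.0
```

A lemma that splits the classes cleanly carries one full bit on balanced data, while a lemma present in every candidate carries none, which is why words like “limit” rise to the top of the ranking.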

The modality feature was the most expensive to produce, but it also did not perform very well in isolation. This surprising result may be partly due to a data sparsity issue: only a small portion (169) of our instances contained modal verbs. The breakdown of the modal senses that occurred is displayed in Table 6. The most dominant modal sense was epistemic. This is a good indicator of problemhood (e.g. hedging, cf. “ Linguistic correlates of problem- and solution-hood ” section), and if additional data could be accumulated, we think this feature has the potential to be much more valuable in determining problemhood. Another reason for the weak performance may be the domain dependence of the classifier, since it was trained on text from other domains (e.g. news). Additionally, modality has also been shown to be helpful in determining contextual polarity (Wilson et al. 2005) and argumentation (Becker et al. 2016), so the output of this modality classifier may prove useful for further feature engineering in future work.

Number of instances of modal senses

Polarity performed well, but not as well as we had hoped. This feature also suffers from a sparsity issue, arising in cases where the Lesk algorithm (Lesk 1986) is unable to resolve the synset of the syntactic head.
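The Lesk step can be pictured as gloss/context overlap: each candidate sense’s dictionary gloss is compared against the words around the head, and the sense with the largest overlap wins. A simplified sketch with an invented two-sense inventory (the paper uses WordNet synsets, and when no sense can be resolved the feature stays empty):

```python
# Simplified Lesk-style disambiguation. The sense inventory below is
# hypothetical, for illustration only.

SENSES = {  # invented glosses for "bank"
    "bank.financial": "institution that accepts deposits and lends money",
    "bank.river": "sloping land beside a body of water",
}

def simplified_lesk(context, senses):
    """Return the sense whose gloss overlaps most with the context words."""
    context_words = set(context.lower().split())
    best, best_overlap = None, 0
    for sense, gloss in senses.items():
        overlap = len(context_words & set(gloss.split()))
        if overlap > best_overlap:
            best, best_overlap = sense, overlap
    return best  # None when no gloss overlaps, mirroring the sparsity issue

sense = simplified_lesk("the bank charges money for deposits", senses=SENSES)
```

When no gloss shares a word with the context, the function returns None, which is the failure mode that leaves the polarity feature without a value for some candidates.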

Knowledge of syntax provides a big improvement with a significant increase over the baseline results from two of the classifiers.

Examining this in greater detail, the POS tags with high information gain mostly belonged to open classes (i.e. VB-, JJ-, NN- and RB-). These tags are more often associated with determining polarity status than tags such as prepositions and conjunctions (i.e. adverbs and adjectives are more likely to describe something from a non-neutral viewpoint).

The embeddings from Doc2Vec allowed us to obtain another significant increase in performance (72.9% with Naïve Bayes) over the baseline and polarity using Word2Vec provided the best individual feature result (77.2% with SVM).

Combining all features together, each classifier achieved a significant result over the baseline, with the best result coming from the SVM (81.8%). Problems were also better classified than non-problems, as shown in the confusion matrix in Table 7. The addition of the Word2Vec vectors may be seen as a form of smoothing in cases where previous linguistic features suffered from sparsity, i.e., instead of a NULL entry, the embeddings provide a value for every candidate. With regard to the polarity feature in particular, cases where Lesk was unable to resolve a synset meant that a zero entry was added to the vector supplied to the machine learner. Amongst the possible combinations, the best subset of features was found by combining all features except bigrams, trigrams, subcategorisation and modality. This subset improved results for both the Naïve Bayes and SVM classifiers, with the highest overall result coming from the SVM (82.3%).
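The zero-filling described above can be sketched as follows; the extractors and dimensionalities here are hypothetical stand-ins for the real feature pipeline:

```python
# Sketch of feature combination with zero entries for unavailable features.
# Sparse features (e.g. polarity when Lesk fails) contribute zeros, while
# embedding-style features always supply a value.

def combine_features(candidate, extractors, dims):
    """Concatenate per-extractor vectors; missing features become zeros."""
    vector = []
    for name, extract in extractors.items():
        values = extract(candidate)
        if values is None:            # feature unavailable for this candidate
            values = [0.0] * dims[name]
        vector.extend(values)
    return vector

# toy extractors: polarity may fail, a (fake) embedding never does
extractors = {
    "polarity": lambda c: None if "unknown" in c else [0.5],
    "embedding": lambda c: [float(len(c)), float(c.count(" "))],
}
dims = {"polarity": 1, "embedding": 2}

v1 = combine_features("the well-known problem", extractors, dims)
v2 = combine_features("an unknown issue", extractors, dims)
```

Every candidate ends up with a fixed-length vector, so the machine learner never sees a missing dimension; the embedding block supplies signal even where the linguistic features fall back to zeros.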

Confusion matrix for problems

The results for disambiguating solutions from non-solutions can be seen in Table 8. The bag-of-words baseline performs much better than random, with performance being quite high for the SVM (this result was also higher than any of the baseline performances of the problem classifiers). As shown in Table 9, the top-ranked lemmas from the best-performing model (using information gain) included “use” and “method”. These lemmas are very indicative of solutionhood and give some insight into the high baseline returned by the machine learners. Subcategorisation and result adverbials were the two worst-performing features. However, the low performance of subcategorisation is due to the sampling of the non-solutions (the same reason behind the low performance of the problem transitivity feature). When fitting the POS-tag distribution for the negative samples, we noticed that over 80% of the head POS tags were verbs (much higher than for the problem heads), the most frequent verb type being the infinitive form.

Results distinguishing solutions from non-solutions using Naïve Bayes (NB), logistic regression (LR) and a support vector machine (SVM)

Each feature set’s performance is shown in isolation followed by combinations with other features. Tenfold stratified cross-validation was used across all experiments

Information gain (IG) in bits of top lemmas from the bag-of-words baseline in Table  8

This is not surprising, given that a very common way to describe a solution is to use the infinitive “TO”, since it often describes a task, e.g., “One solution is to find the singletons and remove them”. Therefore, since the head POS tags of the non-solutions had to match this high distribution of infinitive verbs in the solutions, the subcategorisation feature is not particularly discriminative. Polarity, negation, exemplification and syntactic features were slightly more discriminative and provided comparable results. However, as in the problem experiment, the embeddings from Word2Vec and Doc2Vec proved to be the best features, with polarity using Word2Vec providing the best individual result (73.4% with SVM).

Combining all features together improved over each feature in isolation and beat the baseline with all three classifiers. Furthermore, as the confusion matrix in Table 10 shows, solutions were classified more accurately than non-solutions. The best subset of features was found by combining all features except adverbials of result, bigrams, exemplification, negation, polarity and subcategorisation. The best result using this subset was achieved by the SVM with 79.7%. It greatly improved upon the baseline but fell just short of statistical significance (p = 0.057).

Confusion matrix for solutions

In this work, we have presented new supervised classifiers for the task of identifying problem and solution statements in scientific text. We have also introduced a new corpus for this task and used it to evaluate our classifiers. Great care was taken in constructing the corpus by ensuring that the negative and positive samples were closely matched in terms of syntactic shape. Had we simply selected random subtrees as negative samples, without regard for syntactic similarity with our positive samples, the machine learner might have exploited superficial signals such as sentence length. Additionally, since we did not allow the machine learner to see the surroundings of the candidate string within the sentence, the task was made even harder. Our performance on the corpus shows promise for this task, and demonstrates that there are strong signals for determining both the problem and solution parts of the problem-solving pattern independently.

With regard to classifying problems from non-problems, features such as the POS tag, document and word embeddings provide the best features, with polarity using the Word2Vec embeddings achieving the highest feature performance. The best overall result was achieved using an SVM with a subset of features (82.3%). Classifying solutions from non-solutions also performs well using the embedding features, with the best feature also being polarity using the Word2Vec embeddings, and the highest result also coming from the SVM with a feature subset (79.7%).

In future work, we plan to link problem and solution statements which were found independently during our corpus creation. Given that our classifiers were trained on data solely from the ACL anthology, we also hope to investigate their domain specificity and see how well they generalise to domains other than ACL (e.g. bioinformatics). Since we took great care to remove the classifiers’ knowledge of the explicit statements of problem and solution (i.e. the classifiers were trained only on the syntactic argument of the explicit statement of problem-/solution-hood), they should in principle be in a good position to generalise, i.e., to find implicit statements too. In future work, we will measure to what degree this is the case.

To facilitate further research on this topic, all code and data used in our experiments can be found here: www.cl.cam.ac.uk/~kh562/identifying-problems-and-solutions.html

Acknowledgements

The first author has been supported by an EPSRC studentship (Award Ref: 1641528). We thank the reviewers for their helpful comments.

1 http://acl-arc.comp.nus.edu.sg/ .

2 The corpus comprises 3,391,198 sentences, 71,149,169 words and 451,996,332 characters.

3 The head POS tags were found using a modification of the Collins’ Head Finder. This modified algorithm addresses some of the limitations of the head finding heuristics described by Collins ( 2003 ) and can be found here: http://nlp.stanford.edu/nlp/javadoc/javanlp/edu/stanford/nlp/trees/ModCollinsHeadFinder.html .

4 https://www.uni-hildesheim.de/ruppenhofer/data/modalia_release1.0.tgz.

Contributor Information

Kevin Heffernan, Email: [email protected] .

Simone Teufel, Email: [email protected] .

  • Baccianella S, Esuli A, Sebastiani F. Sentiwordnet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining. LREC. 2010; 10 :2200–2204. [ Google Scholar ]
  • Becker, M., Palmer, A., & Frank, A. (2016). Clause types and modality in argumentative microtexts. In Workshop on foundations of the language of argumentation (in conjunction with COMMA) .
  • Briscoe, T., Carroll, J., & Watson, R. (2006). The second release of the rasp system. In Proceedings of the COLING/ACL on interactive presentation sessions, association for computational linguistics pp. 77–80.
  • Chandrasekaran B. Towards a taxonomy of problem solving types. AI Magazine. 1983; 4 (1):9. [ Google Scholar ]
  • Charles M. Adverbials of result: Phraseology and functions in the problem-solution pattern. Journal of English for Academic Purposes. 2011; 10 (1):47–60. doi: 10.1016/j.jeap.2011.01.002. [ CrossRef ] [ Google Scholar ]
  • Chen, D., Dyer, C., Cohen, S. B., & Smith, N. A. (2011). Unsupervised bilingual pos tagging with markov random fields. In Proceedings of the first workshop on unsupervised learning in NLP, association for computational linguistics pp. 64–71.
  • Collins M. Head-driven statistical models for natural language parsing. Computational Linguistics. 2003; 29 (4):589–637. doi: 10.1162/089120103322753356. [ CrossRef ] [ Google Scholar ]
  • Councill, I. G., Giles, C. L., & Kan, M. Y. (2008). Parscit: An open-source CRF reference string parsing package. In LREC .
  • Curran, J. R., Clark, S., & Bos, J. (2007). Linguistically motivated large-scale NLP with C&C and boxer. In Proceedings of the 45th annual meeting of the ACL on interactive poster and demonstration sessions, association for computational linguistics pp. 33–36.
  • Flowerdew L. Corpus-based analyses of the problem-solution pattern: A phraseological approach. Amsterdam: John Benjamins Publishing; 2008. [ Google Scholar ]
  • Grimes JE. The thread of discourse. Berlin: Walter de Gruyter; 1975. [ Google Scholar ]
  • Hall M, Frank E, Holmes G, Pfahringer B, Reutemann P, Witten IH. The weka data mining software: An update. ACM SIGKDD Explorations Newsletter. 2009; 11 (1):10–18. doi: 10.1145/1656274.1656278. [ CrossRef ] [ Google Scholar ]
  • Hoey M. Textual interaction: An introduction to written discourse analysis. Portland: Psychology Press; 2001. [ Google Scholar ]
  • Hutchins J. On the structure of scientific texts. UEA Papers in Linguistics. 1977; 5 (3):18–39. [ Google Scholar ]
  • Jonassen DH. Toward a design theory of problem solving. Educational Technology Research and Development. 2000; 48 (4):63–85. doi: 10.1007/BF02300500. [ CrossRef ] [ Google Scholar ]
  • Jordan MP. Short texts to explain problem-solution structures-and vice versa. Instructional Science. 1980; 9 (3):221–252. doi: 10.1007/BF00177328. [ CrossRef ] [ Google Scholar ]
  • Kratzer, A. (1991). Modality. In von Stechow & Wunderlich (Eds.), Semantics: An international handbook of contemporary research .
  • Le QV, Mikolov T. Distributed representations of sentences and documents. ICML. 2014; 14 :1188–1196. [ Google Scholar ]
  • Lesk, M. (1986). Automatic sense disambiguation using machine readable dictionaries: How to tell a pine cone from an ice cream cone. In Proceedings of the 5th annual international conference on Systems documentation, ACM (pp. 24–26).
  • Madnani, N., Tetreault, J., & Chodorow, M. (2012). Exploring grammatical error correction with not-so-crummy machine translation. In Proceedings of the seventh workshop on building educational applications using NLP, association for computational linguistics pp. 44–53.
  • Mann WC, Thompson SA. Rhetorical structure theory: Toward a functional theory of text organization. Text-Interdisciplinary Journal for the Study of Discourse. 1988; 8 (3):243–281. doi: 10.1515/text.1.1988.8.3.243. [ CrossRef ] [ Google Scholar ]
  • Marasović, A., & Frank, A. (2016). Multilingual modal sense classification using a convolutional neural network. In Proceedings of the 1st Workshop on Representation Learning for NLP .
  • McKeown K, Daume H, Chaturvedi S, Paparrizos J, Thadani K, Barrio P, Biran O, Bothe S, Collins M, Fleischmann KR, et al. Predicting the impact of scientific concepts using full-text features. Journal of the Association for Information Science and Technology. 2016; 67 :2684–2696. doi: 10.1002/asi.23612. [ CrossRef ] [ Google Scholar ]
  • Medlock B, Briscoe T. Weakly supervised learning for hedge classification in scientific literature. ACL, Citeseer. 2007; 2007 :992–999. [ Google Scholar ]
  • Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., & Dean, J. (2013). Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems (pp. 3111–3119).
  • Mohit, B., Schneider, N., Bhowmick, R., Oflazer, K., & Smith, N. A. (2012). Recall-oriented learning of named entities in arabic wikipedia. In Proceedings of the 13th conference of the European chapter of the association for computational linguistics, association for computational linguistics (pp. 162–173).
  • Mullen T, Collier N. Sentiment analysis using support vector machines with diverse information sources. EMNLP. 2004; 4 :412–418. [ Google Scholar ]
  • Nakov, P., Hearst, M. A. (2008). Solving relational similarity problems using the web as a corpus. In: ACL (pp. 452–460).
  • Poon, H., & Domingos, P. (2009). Unsupervised semantic parsing. In Proceedings of the 2009 conference on empirical methods in natural language processing: Volume 1-association for computational linguistics (pp. 1–10).
  • Ruppenhofer, J., & Rehbein, I. (2012). Yes we can!? Annotating the senses of English modal verbs. In Proceedings of the 8th international conference on language resources and evaluation (LREC), Citeseer (pp. 24–26).
  • Saha, S. K., Mitra, P., & Sarkar, S. (2008). Word clustering and word selection based feature reduction for maxent based hindi ner. In ACL (pp. 488–495).
  • Scott, M. (2001). Mapping key words to problem and solution. In Patterns of text: In honour of Michael Hoey Benjamins, Amsterdam (pp. 109–127).
  • Siegel S. Nonparametric statistics for the behavioral sciences. New York: McGraw-hill; 1956. [ Google Scholar ]
  • Strübing, J. (2007). Research as pragmatic problem-solving: The pragmatist roots of empirically-grounded theorizing. In The Sage handbook of grounded theory (pp. 580–602).
  • Teufel, S., et al. (2000). Argumentative zoning: Information extraction from scientific text. PhD Thesis, Citeseer .
  • Turney, P. D. (2002). Thumbs up or thumbs down?: Semantic orientation applied to unsupervised classification of reviews. In Proceedings of the 40th annual meeting on association for computational linguistics, association for computational linguistics (pp. 417–424).
  • Van Dijk TA. Text and context explorations in the semantics and pragmatics of discourse. London: Longman; 1980. [ Google Scholar ]
  • Wiebe J, Wilson T, Cardie C. Annotating expressions of opinions and emotions in language. Language Resources and Evaluation. 2005; 39 (2):165–210. doi: 10.1007/s10579-005-7880-9. [ CrossRef ] [ Google Scholar ]
  • Wilson, T., Wiebe, J., & Hoffmann, P. (2005). Recognizing contextual polarity in phrase-level sentiment analysis. In Proceedings of the conference on human language technology and empirical methods in natural language processing, association for computational linguistics (pp. 347–354).
  • Winter, E. O. (1968). Some aspects of cohesion. In Sentence and clause in scientific English . University College London.
  • Zhou, M., Frank, A., Friedrich, A., & Palmer, A. (2015). Semantically enriched models for modal sense classification. In Workshop on linking models of lexical, sentential and discourse-level semantics (LSDSem) (p. 44).


1 Thinking Like a Scientist

Learning Objectives

After studying this chapter, you should be able to:

  • Identify the shared characteristics of the natural sciences
  • Compare inductive reasoning with deductive reasoning
  • Illustrate the steps in the scientific method
  • Explain how to design a controlled experiment
  • Describe the goals of descriptive science and hypothesis-based science
  • Apply the Claim-Evidence-Reasoning process to a scientific investigation

The Nature of Science

Environmental science (also known as environmental biology) is a field of study that focuses on the earth and its many complex systems. It is an interdisciplinary field that brings together elements of biology, geology, chemistry, and other natural sciences. It may even include elements of social sciences such as economics and political science. The discoveries of environmental science are made by a community of researchers who work individually and together using agreed-on methods. In this sense, environmental science, like all sciences, is a social enterprise like politics or the arts. The methods of science include careful observation, record keeping, logical and mathematical reasoning, experimentation, and submitting conclusions to the scrutiny of others. Science also requires considerable imagination and creativity; a well-designed experiment is commonly described as elegant, or beautiful. Like politics, science has considerable practical implications, and some science is dedicated to practical applications, such as improvements to farming practices (Figure 1). Other science proceeds largely motivated by curiosity. Whatever its goal, there is no doubt that science has transformed human existence and will continue to do so.

Figure 1. George Washington Carver working in a laboratory.

What exactly is science? What does the study of environmental science share with other scientific disciplines? Science   (from the Latin  scientia, meaning “knowledge”) can be defined as knowledge about the natural world. But science is not just a collection of facts and theories, it is also a process used to gain that knowledge.

Science is a very specific way of learning, or knowing, about the world. The history of the past 500 years demonstrates that science is a very powerful way of knowing about the world; it is largely responsible for the technological revolutions that have taken place during this time. There are, however, areas of knowledge and human experience to which the methods of science cannot be applied. These include such things as answering purely moral questions, aesthetic questions, or what can generally be categorized as spiritual questions. Science cannot investigate these areas because they are outside the realm of material phenomena (the phenomena of matter and energy) and cannot be observed and measured.

The  scientific method  is a method of research with defined steps that include experiments and careful observation. The steps of the scientific method will be examined in detail later, but one of the most important aspects of this method is the testing of hypotheses. A  hypothesis   is a suggested explanation for an event, which can be tested. Hypotheses, or tentative explanations, are generally produced within the context of a  scientific theory . A scientific theory is a generally accepted, thoroughly tested, and confirmed explanation for a set of observations or phenomena. Scientific theory is the foundation of scientific knowledge. In addition, in many scientific disciplines (less so in biology) there are scientific laws , often expressed in mathematical formulas, which describe how elements of nature will behave under certain specific conditions.

A common misconception is that a hypothesis is elevated to the level of theory after being confirmed, then a theory is promoted to a scientific law after it is confirmed. However, there is no evolution of hypotheses through theories to laws as if they represent some increase in certainty about the world. Hypotheses are the day-to-day material that scientists work with and they are developed within the context of theories. You can think of theories as being “bigger” than hypotheses because a theory incorporates many hypotheses and facts. Laws, on the other hand, are concise descriptions of natural events that can usually be described mathematically. For example, Newton’s Law of Gravity explains how objects in the universe attract other objects differently depending on their mass.

Natural Sciences

What would you expect to see in a museum of natural sciences? Frogs? Plants? Dinosaur skeletons? Exhibits about how the brain functions? A planetarium? Gems and minerals? Or maybe all of the above? Science includes such diverse fields as astronomy, biology, computer sciences, geology, logic, physics, chemistry, and mathematics. However, those fields of science related to the physical world and its phenomena and processes are considered natural sciences . Thus, a museum of natural sciences might contain any of the items listed above (Figure 2).

Figure 2. The Natural History Museum of Los Angeles County.

There is no complete agreement when it comes to defining what the natural sciences include. For some experts, the natural sciences are astronomy, biology, chemistry, earth science, and physics. Other scholars choose to divide natural sciences into  life sciences , which study living things and include biology, and  physical sciences , which study nonliving matter and include astronomy, physics, and chemistry. Some disciplines such as biophysics and biochemistry build on two sciences and are interdisciplinary.

Knowledge Check

Scientific Inquiry

One thing is common to all forms of science: an ultimate goal “to know.” Curiosity and inquiry are the driving forces for the development of science. Scientists seek to understand the world and the way it operates by using one of two main pathways of scientific study: descriptive science and hypothesis-based science.  Descriptive  (or discovery)  science  aims to observe, explore, and discover, while  hypothesis-based science  begins with a specific question or problem and a potential answer or solution that can be tested. The boundary between these two forms of study is often blurred, because most scientific endeavors combine both approaches. Observations lead to questions, questions lead to forming a hypothesis as a possible answer to those questions, and then the hypothesis is tested. Thus, descriptive science and hypothesis-based science are in continuous dialogue.

Hypothesis Testing

Biologists study the living world by posing questions about it and seeking science-based responses. This approach is common to other sciences as well and is often referred to as the scientific method (Figure 3). The scientific method was used even in ancient times, but it was first documented by England’s Sir Francis Bacon (1561–1626), who set up inductive methods for scientific inquiry. The scientific method is not exclusively used by biologists but can be applied to almost anything as a logical problem-solving method.

A flow chart shows the steps in the scientific method. In step 1, an observation is made. In step 2, a question is asked about the observation. In step 3, an answer to the question, called a hypothesis, is proposed. In step 4, a prediction is made based on the hypothesis. In step 5, an experiment is done to test the prediction. In step 6, the results are analyzed to determine whether or not the hypothesis is supported. If the hypothesis is not supported, another hypothesis is made. In either case, the results are reported.

The scientific process typically starts with an observation (often a problem to be solved) that leads to a question . Let’s think about a simple problem that starts with an observation and apply the scientific method to solve the problem. One Monday morning, a student arrives at class and quickly discovers that the classroom is too warm. That is an observation that also describes a problem: the classroom is too warm. The student then asks a question: “Why is the classroom so warm?”

Recall that a hypothesis is a suggested explanation that can be tested. To solve a problem, several hypotheses may be proposed. For example, one hypothesis might be, “The classroom is warm because no one turned on the air conditioning.” But there could be other responses to the question, and therefore other hypotheses may be proposed. A second hypothesis might be, “The classroom is warm because there is a power failure, and so the air conditioning doesn’t work.”

Once a hypothesis has been selected, a prediction may be made. A prediction is similar to a hypothesis but it typically has the format “If . . . then . . . .” For example, the prediction for the first hypothesis might be, “ If  the student turns on the air conditioning,  then  the classroom will no longer be too warm.”

A hypothesis must be testable to ensure that it is valid. For example, a hypothesis that depends on what a bear thinks is not testable, because it can never be known what a bear thinks. It should also be falsifiable , meaning that it can be disproven by experimental results. An example of an unfalsifiable hypothesis is “Botticelli’s  Birth of Venus  painting is beautiful.” There is no experiment that might show this statement to be false. To test a hypothesis, a researcher will conduct one or more experiments designed to eliminate one or more of the hypotheses. This is important. A hypothesis can be disproven, or eliminated, but it can never be proven. Science does not deal in proofs like mathematics. If an experiment fails to disprove a hypothesis, then we find support for that explanation, but this does not mean that a better explanation will never be found, or that a more carefully designed experiment will not eventually falsify the hypothesis.

The best way to test a hypothesis is to conduct a controlled experiment. A controlled experiment is a scientific test performed under controlled conditions, meaning just one (or a few) variables are changed at a time, while all other factors are kept constant. A  variable  is any part of the experiment that can vary or change during the experiment.

What are the key components of a controlled experiment? Let’s say you want to know what it takes to grow the healthiest tomatoes. Your hypothesis is that tomato plants will grow better if given fertilizer. To test this hypothesis, you give fertilizer to some of your tomato plants and give others only water. Your prediction might be, “If I give fertilizer to a group of tomato plants, they will grow better than tomato plants without fertilizer.” In this example, the tomatoes with fertilizer are known as the experimental group , and the ones without fertilizer are the control group because they did not receive the treatment.

The factor that is different between the experimental and control group is known as the  independent variable (in this case, the fertilizer). It can also be thought of as the variable that is directly manipulated by the experimenter. The dependent variable is the response that is measured to determine if the experimental treatment had any effect. In this case, the dependent variable is the growth of the tomato plants.

Experimental results or data are the observations made in the course of an experiment. In this case, the height, number of leaves, and other signs of plant growth are the data you would collect in your experiment. Looking at Figure 4, we can conclude that the hypothesis was supported . If the fertilized plants did not grow better than the unfertilized plants, we would conclude that the hypothesis was not supported , and we may need to generate a new hypothesis.

Figure 4. Tomato plants grown with fertilizer are larger and healthier than tomato plants grown without fertilizer.

Note that in the tomato experiment, three plants were used in each group. This is because there may have been an unhealthy or slow-growing plant that would affect the results. Having a larger sample size helps eliminate the effects of random factors like this.

Not all scientific questions can be answered using controlled experiments. It may be unethical to test the effects of a virus on humans, or impractical to see how changing rainfall affects plants in the desert. In such cases, a scientist may simply collect data from the real world to test a hypothesis. In recent years, a new approach to testing hypotheses has developed as a result of the exponential growth of data deposited in various databases. Using computer algorithms and statistical analyses of data in databases, the new field of so-called “data research” (also referred to as “in silico” research) provides new methods of data analysis and interpretation. This will increase the demand for specialists in both biology and computer science, a promising career opportunity.

In the example below, the scientific method is used to solve an everyday problem. Which part in the example below is the hypothesis? Which is the prediction? Based on the results of the experiment, is the hypothesis supported? If it is not supported, propose some alternative hypotheses.

  • My toaster doesn’t toast my bread.
  • Why doesn’t my toaster work?
  • There is something wrong with the electrical outlet.
  • If something is wrong with the outlet, my coffeemaker also won’t work when plugged into it.
  • I plug my coffeemaker into the outlet.
  • My coffeemaker works.

In practice, the scientific method is not as rigid and structured as it might at first appear. Sometimes an experiment leads to conclusions that favor a change in approach; often, an experiment brings entirely new scientific questions to the puzzle. Many times, science does not operate in a linear fashion; instead, scientists continually draw inferences and make generalizations, finding patterns as their research proceeds. Scientific reasoning is more complex than the scientific method alone suggests.

Basic and Applied Science

The scientific community has been debating for the last few decades about the value of different types of science. Is it valuable to pursue science for the sake of simply gaining knowledge, or does scientific knowledge only have worth if we can apply it to solving a specific problem or bettering our lives? This question focuses on the differences between two types of science: basic science and applied science.

Basic science  or “pure” science seeks to expand knowledge regardless of the short-term application of that knowledge. It is not focused on developing a product or a service of immediate public or commercial value. The immediate goal of basic science is knowledge for knowledge’s sake, though this does not mean that in the end it may not result in an application.

In contrast,  applied science  or “technology,” aims to use science to solve real-world problems, making it possible, for example, to improve a crop yield, find a cure for a particular disease, or save animals threatened by a natural disaster. In applied science, the problem is usually defined for the researcher.

Some individuals may perceive applied science as “useful” and basic science as “useless.” A question these people might pose to a scientist advocating knowledge acquisition would be, “What for?” A careful look at the history of science, however, reveals that basic knowledge has resulted in many remarkable applications of great value. Many scientists think that a basic understanding of science is necessary before an application is developed; therefore, applied science relies on the results generated through basic science. Other scientists think that it is time to move on from basic science and instead find solutions to actual problems. Both approaches are valid. It is true that there are problems that demand immediate attention; however, few solutions would be found without the help of the knowledge generated through basic science.

One example of how basic and applied science can work together to solve practical problems occurred after the discovery of DNA structure led to an understanding of the molecular mechanisms governing DNA replication. Strands of DNA, unique in every human, are found in our cells, where they provide the instructions necessary for life. During DNA replication, new copies of DNA are made, shortly before a cell divides to form new cells. Understanding the mechanisms of DNA replication enabled scientists to develop laboratory techniques that are now used to identify genetic diseases, pinpoint individuals who were at a crime scene, and determine paternity. Without basic science, it is unlikely that applied science would exist.

Illustration shows some of the letters in the DNA sequence of humans

Another example of the link between basic and applied research is the Human Genome Project, a study in which each human chromosome was analyzed and mapped to determine the precise sequence of DNA subunits and the exact location of each gene. (The gene is the basic unit of heredity; an individual’s complete collection of genes is his or her genome.) Other organisms have also been studied as part of this project to gain a better understanding of human chromosomes. The Human Genome Project (Figure 5) relied on basic research carried out with non-human organisms and, later, with the human genome. An important end goal eventually became using the data for applied research seeking cures for genetically related diseases.

While research efforts in both basic science and applied science are usually carefully planned, it is important to note that some discoveries are made by serendipity, that is, by means of a fortunate accident or a lucky surprise. Penicillin was discovered when biologist Alexander Fleming accidentally left a petri dish of Staphylococcus  bacteria open. An unwanted mold grew, killing the bacteria. The mold turned out to be  Penicillium , and a new antibiotic was discovered. Even in the highly organized world of science, luck—when combined with an observant, curious mind—can lead to unexpected breakthroughs.

Communicating Scientific Work

Whether scientific research is basic science or applied science, scientists must share their findings for other researchers to expand and build upon their discoveries. Communication and collaboration within and between subdisciplines of science are key to the advancement of knowledge in science. For this reason, an important aspect of a scientist’s work is disseminating results and communicating with peers. Scientists can share results by presenting them at a scientific meeting or conference, but this approach can reach only the limited few who are present. Instead, most scientists present their results in peer-reviewed articles that are published in scientific journals.  Peer-reviewed articles  are scientific papers that are reviewed, usually anonymously, by a scientist’s colleagues, or peers. These colleagues are qualified individuals, often experts in the same research area, who judge whether or not the scientist’s work is suitable for publication. The process of peer review helps to ensure that the research described in a scientific paper or grant proposal is original, significant, logical, and thorough. Grant proposals, which are requests for research funding, are also subject to peer review. Scientists publish their work so other scientists can reproduce their experiments under similar or different conditions to expand on the findings. The experimental results must be consistent with the findings of other scientists.

Many journals, as well as the popular press, do not use a peer-review system. A large number of online open-access journals (journals whose articles are available without cost) are now available, many of which use rigorous peer-review systems, but some of which do not. Results of studies published in these forums without peer review are not reliable and should not form the basis for other scientific work. As one exception, journals may allow a researcher to cite a personal communication from another researcher about unpublished results, with the cited author’s permission.

Claim-Evidence-Reasoning

Ultimately, the goal of science is to understand and explain how things work in the natural world. One of the tools scientists use to achieve this goal is the Claim-Evidence-Reasoning process. A claim is a statement that answers a scientific question. It can be an explanation of a natural phenomenon or a conclusion that can be drawn after conducting a scientific investigation. Evidence is the scientific data that supports the claim that is being made. The evidence must be sufficient , meaning there must be enough data to fully support the claim, and it must be appropriate , leaving out any unnecessary information. Reasoning is a justification that connects the evidence to the claim. It shows why the data count as evidence to support this specific claim by using appropriate and sufficient scientific principles.

Attribution

Concepts in Biology by OpenStax, modified by Sean Whitcomb. License: CC-BY

Media Attributions

  • George Washington Carver is licensed under a Public Domain license
  • Natural History Museum of Los Angeles County © Matthew Dillon is licensed under a CC BY (Attribution) license
  • Scientific_method © OpenStax is licensed under a CC BY (Attribution) license
  • Fertilizer_tomatoes © SuSanA Secretariat is licensed under a CC BY (Attribution) license
  • Human Genome Reference Sequence © National Human Genome Research Institute is licensed under a CC BY-NC (Attribution NonCommercial) license

Environmental Science Copyright © by Sean Whitcomb is licensed under a Creative Commons Attribution 4.0 International License , except where otherwise noted.

Share This Book

The Systematic Problem-Solving (SPS) Method:

Make Better Decisions. By Tom G. Stevens, PhD

Solving problems is important in every area of human thinking. Learning general problem-solving skills can therefore help you improve your ability to cope with every area of your life. All disciplines of philosophy, business, science, and humanities have developed their own approach to solving problems. Remarkably, the problem-solving models developed by each of these areas are strikingly similar. I describe a simple problem-solving process that you can use to solve almost all problems.

Stages of the problem-solving process. The famous psychologist, Dr. Carl Rogers, was one of the first to help us understand how important self-exploration and problem-solving are for overcoming all types of personal, psychological, and daily-living problems.(1)

Consciously going through each of these five stages when solving any complex problem can be very useful. Following are the five stages of the problem-solving method.

STAGE 1: EXPLORATION OF THE PROBLEM

STAGE 2: EXPLORING ALTERNATIVE SOLUTIONS (Routes to Happiness)

STAGE 3: CHOOSING THE BEST ALTERNATIVE

STAGE 4: PLANNING AND ACTION

STAGE 5: EXPERIMENTING AND GATHERING FEEDBACK

During this stage, we gather all of the information we can about both the external and internal aspects of the problem. Good information gathering is not an easy process. Scientists spend their whole lives trying to learn about some very small piece of the world. The type of information-gathering process we use will depend upon the type of problem we are trying to solve. For information about the world, the following are powerful skills to use.

  •  Library reference skills
  •  Observational skills
  •  Informational interviewing skills
  •  Critical thinking skills
  •  Scientific method skills
  •  Data analysis and statistical skills

Learning how to become an expert at identifying problems and finding causes is essential to become an expert in any field. The above skills are useful in solving many types of problems--even intra-personal ones. However, the focus of this book is how to be happy; and the key to happiness almost always involves not just external causes but internal ones as well.

It is usually much easier for most of us to observe an external event than an internal one. We have our external sensory organs to see and hear external events, but not internal ones. How do we observe that which we cannot see? We can learn to be better observers of our emotions, self-talk, and images.

The self-exploration process described above provides enough information to make you an expert at self-exploration. That is one of the most essential parts of developing your own inner therapist.

STAGE 2: EXPLORING ALTERNATIVE SOLUTIONS OR ROUTES TO HAPPINESS

Gather all the best information you can about possible solutions. Use brainstorming techniques, observe and consult with people who have overcome similar problems, read relevant material, consult experts, and recall your own relevant past experience. Look at both internal and external solutions.

Once you learn so many different routes to happiness, you will be truly free to choose to be happy in almost any situation you face in life. The actual choice is made in stage 3 of the problem-solving process. The appendix contains a very useful decision-making model for helping you make complex choices, such as choosing a career or relationship. The following is a simple approach to deciding between alternatives. (See the Carkhuff Decision-Making Model, below, for a method for making complex decisions--for career or life planning.)

(1) List all the alternatives you are considering.

(2) List all of the values or criteria that will be affected by the decision.

(3) Evaluate each alternative by each criterion or value.

(4) Choose the alternative which you predict will best satisfy the criteria and lead to your greatest overall happiness.
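The four steps above can be sketched as a small weighted-scoring routine. Everything concrete here (the alternatives, criteria, weights, and ratings) is invented purely for illustration:

```python
# Weighted-criteria decision sketch: score = sum(weight * rating) per alternative.
# All names and numbers below are hypothetical examples, not values from the text.

def score(weights, ratings):
    """Sum of each criterion's weight times the alternative's rating on it."""
    return sum(weights[c] * ratings[c] for c in weights)

def choose(weights, alternatives):
    """Return (best_name, scores): the alternative maximizing the weighted sum."""
    scores = {name: score(weights, ratings) for name, ratings in alternatives.items()}
    return max(scores, key=scores.get), scores

weights = {"income": 9, "enjoyment": 10, "job security": 6}   # step 2: criteria and weights
alternatives = {                                              # steps 1 and 3: ratings
    "psychology":       {"income": 2, "enjoyment": 5, "job security": 1},
    "computer science": {"income": 4, "enjoyment": 3, "job security": 4},
}
best, scores = choose(weights, alternatives)   # step 4: pick the highest score
```

The alternative with the highest weighted sum is the predicted best choice; changing a single weight or rating immediately shows how sensitive the decision is to that criterion.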

STAGE 4: PLANNING AND ACTION (Experimenting)

Many decisions are made but never implemented. See that you follow up with good planning. Once you have made your choice, you can use some of the planning methods suggested in the O-PATSM method from chapter 11 to make sure that you follow through with your decision.

This is the stage of acting on your decision. Many people fear mistakes and failure as if these were terrible sins that they should never commit. That view of life makes every decision and action seem very serious; such people often become timid, lack creativity, and are plagued by guilt and fear of failure. Instead, we can view every action as an experiment. If one of our overall goals in life is learning and growth, then we can never fail to learn. All people who have accomplished great happiness for themselves and contributed to others have shared the courage to act on their beliefs.

STAGE 5: GATHERING FEEDBACK

Many people hate to be evaluated and dread finding out the results of what they have done out of fear that the feedback will be negative. These fears can be serious impediments to the growth that can only happen through getting open, accurate feedback.

However, once learning and growth are important goals, then getting feedback becomes highly desirable. How else can we learn? Even negative outcomes can provide valuable information. Of course, almost everyone would rather have outcomes that maximize happiness; but when we don't, we can at least try to maximize our learning. Learning can help maximize happiness in the future.

We can also make the mistake of dwelling on past mistakes beyond constructive learning and reasonable reparations to victims, punishing ourselves unnecessarily. Normally, there is no value to punishment once a lesson has been learned.(2) Keep clear at all times that this problem-solving process is only a tool to serve the overall life goals of increased health, growth, and happiness.

CARKHUFF DECISION-MAKING MODEL:   This particular decision-making model is based upon one by Dr. Robert Carkhuff and follows the general guidelines of a considerable amount of research on how people can make more effective decisions. It can also be used for any other type of decision--from buying a new car to choosing a mate.

EXAMPLE OF USING THE DECISION-MAKING MODEL

The decision-making model will be illustrated in a way that you can use as an analogy for making your own career decision. In this example, Henry is trying to decide whether to major in psychology or in computer science. Thus he has narrowed his alternatives to the following two:

1) majoring in psychology, with a career goal of going into high school counseling or teaching, or 2) majoring in computer science, with a possible goal of working as a computer programmer.

These are represented along the top axis of the following matrix.

  ** is the WINNER: it has the most points of the two alternatives

STEPS TO USING THE DECISION-MAKING MODEL (using the example above)

STEP 1--LIST YOUR CAREER ALTERNATIVES. This is your refined list of the majors or occupations you are trying to decide between. Remember that you can list as many as you want, including unusual combinations of simpler alternatives. For Henry, those narrowed alternatives were psychology and computer science.

STEP 2--CAREER SELECTION CRITERIA. Review your Career Selection Criteria list and write all the important career selection criteria in the far left column. Note that repeating the same idea or leaving out an important idea can affect the decision outcome.

STEP 3--CRITERIA WEIGHTS. Evaluate the relative importance to you of each of your Career Selection Criteria on a scale of 1 to 10 (10 being the most important). Write your answer in the column next to the selection criteria.

STEP 4--ALTERNATIVE EVALUATION SCALE. Each alternative is to be evaluated from the point of view of each selection criterion. You need to think about what this means for each selection criterion. For example, Henry determined that for the selection criterion of income, a "minimally acceptable" income would be a starting salary of $25,000, with prospects of making up to $50,000 eventually. An outstanding salary would start at about $40,000, with prospects of making up to $100,000.

+5 = Maximum evaluation: outstanding (example: income begins at $40,000, rising to $100,000)

+4, +3, +2, +1 = intermediate values

0 = Minimally acceptable value (example: income begins at $25,000, rising to $50,000)

-1, -2, -3, -4 = intermediate values

-5 = Minimum evaluation: worst possible (example: income below $10,000)

STEP 5--EVALUATE EACH ALTERNATIVE BY EACH SELECTION CRITERION. Use the evaluation scale from step 4 to evaluate each alternative from the point of view of each Career Selection Criterion. Give it a rating from -5 to +5. In the example above, both alternatives were evaluated on the criterion of "income": Henry gave the psychology income an evaluation of "+2" and the computer science income an evaluation of "+4."

STEP 6--MULTIPLY THE CRITERIA WEIGHTS TIMES THE EVALUATIONS. In the example above for the selection criterion of "income," Henry multiplied the criterion weight of "9" times the evaluation of "+2" for "PSYCH" to get a result of "18." That is its score, or points, for psychology on the criterion of income. Put it inside the parentheses. This score of 18 is an overall prediction of how much Henry's income in psychology will contribute to his overall happiness. Since he had a score of 36 in computer science, he is predicting that he will be much happier with his income in that field.

STEP 7--FIND THE OVERALL SUM OF THE SCORES FOR EACH ALTERNATIVE. Add together the numbers inside the parentheses for each alternative. In the example above, the overall sum for the "PSYCH" alternative is "405."

STEP 8--COMPARE THE ALTERNATIVES WITH EACH OTHER AND WITH THE "IDEAL." The "ideal" is the maximum possible number of points. Once you have determined all the totals and compared them to each other, try to figure out why one alternative came out ahead of another--where it got its points. Play with the points until you think the points match your true feelings and values.

* The alternative with the most points is the one you are predicting will make you the happiest person.
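Using only the numbers the example gives for Henry's "income" criterion (a weight of 9, and evaluations of +2 and +4), steps 5 and 6 reduce to a multiply-and-sum. This sketch just reproduces that arithmetic:

```python
# Steps 5 and 6 for Henry's "income" criterion, using the numbers from the example.
weight = 9                                               # criterion weight (step 3)
evaluations = {"psychology": 2, "computer science": 4}   # ratings (step 5)

# Step 6: multiply the criterion weight by each alternative's evaluation.
points = {alt: weight * rating for alt, rating in evaluations.items()}
# psychology: 9 * 2 = 18; computer science: 9 * 4 = 36, matching the text.
```

Step 7 is simply this same computation repeated for every criterion, with the per-criterion points summed per alternative.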

1. Some might argue that Freud was the first. He clearly did describe many helpful techniques, and I think some of his free-association techniques are still very useful for helping to find underlying beliefs, images, or cognitive systems related to a problem. However, Rogers was the one who more clearly described the stages of self-exploration and problem-solving, and the conditions of unconditional positive regard, empathetic understanding, and genuineness on the part of the therapist, which seem to be important to the therapeutic process and to any person attempting to feel better.

Robert Carkhuff (one of Rogers's pupils) has developed a structured training system for helping people learn these skills. Robert Cash, a personal friend, has further elaborated these skills in his own courses and introduced me to this process. There is a good deal of research supporting the effectiveness of these techniques.

2. This statement does not address the use of punishment as a deterrent to prevent some persons from profiting from their dysfunctional behaviors. For example, if behaviors such as murder, robbery, or selling drugs are not given sufficient punishment, some people will engage in them. A person whose ultimate concern is money and pleasure may deal drugs to make money with little regard to how it affects others. Increasing the cost for a person with those beliefs can reduce the chances that they will sell drugs.

Self-Help and other resources on this website (and site map)

Copyright 2021 Tom G. Stevens PhD  

Exploring the Problem Solving Cycle in Computer Science – Strategies, Techniques, and Tools

  • Post author: bicycle-u
  • Post date: 08.12.2023

The world of computer science is built on the foundation of problem solving. Whether it’s finding a solution to a complex algorithm or analyzing data to make informed decisions, the problem solving cycle is at the core of every computer science endeavor.

At its essence, problem solving in computer science involves breaking down a complex problem into smaller, more manageable parts. This allows for a systematic approach to finding a solution by analyzing each part individually. The process typically starts with gathering and understanding the data or information related to the problem at hand.

Once the data is collected, computer scientists use various techniques and algorithms to analyze and explore possible solutions. This involves evaluating different approaches and considering factors such as efficiency, accuracy, and scalability. During this analysis phase, it is crucial to think critically and creatively to come up with innovative solutions.

After a thorough analysis, the next step in the problem solving cycle is designing and implementing a solution. This involves creating a detailed plan of action, selecting the appropriate tools and technologies, and writing the necessary code to bring the solution to life. Attention to detail and precision are key in this stage to ensure that the solution functions as intended.

The final step in the problem solving cycle is evaluating the solution and its effectiveness. This includes testing the solution against different scenarios and data sets to ensure its reliability and performance. If any issues or limitations are discovered, adjustments and optimizations are made to improve the solution.

In conclusion, the problem solving cycle is a fundamental process in computer science, involving analysis, data exploration, algorithm development, solution implementation, and evaluation. It is through this cycle that computer scientists are able to tackle complex problems and create innovative solutions that drive progress in the field of computer science.
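The stages of this cycle can be sketched as a minimal skeleton. The toy problem here (finding the largest value in a data set) and all of the function names are placeholders, not a prescribed structure:

```python
# A minimal sketch of the problem-solving cycle: each stage is a function,
# illustrated on a toy problem (find the largest value in a data set).

def identify():
    # Stage 1: state the problem precisely.
    return "find the largest value in the data"

def analyze(data):
    # Stage 2: examine the data: how much is there, and is it all numeric?
    return {"n": len(data), "numeric": all(isinstance(x, (int, float)) for x in data)}

def design():
    # Stage 3: design the algorithm: single pass, track the best value so far.
    def largest(data):
        best = data[0]
        for x in data[1:]:
            if x > best:
                best = x
        return best
    return largest

def evaluate(solution, data):
    # Stage 5: test the implementation against a trusted reference.
    return solution(data) == max(data)

data = [3, 41, 7, 19]
problem = identify()
facts = analyze(data)
solution = design()          # Stage 4: the returned function is the implementation
ok = evaluate(solution, data)
```

Each stage's output feeds the next, which is what makes the cycle systematic rather than ad hoc.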

Understanding the Importance

In computer science, problem solving is a crucial skill that is at the core of the problem solving cycle. The problem solving cycle is a systematic approach to analyzing and solving problems, involving various stages such as problem identification, analysis, algorithm design, implementation, and evaluation. Understanding the importance of this cycle is essential for any computer scientist or programmer.

Data Analysis and Algorithm Design

The first step in the problem solving cycle is problem identification, which involves recognizing and defining the issue at hand. Once the problem is identified, the next crucial step is data analysis. This involves gathering and examining relevant data to gain insights and understand the problem better. Data analysis helps in identifying patterns, trends, and potential solutions.

After data analysis, the next step is algorithm design. An algorithm is a step-by-step procedure or set of rules to solve a problem. Designing an efficient algorithm is crucial as it determines the effectiveness and efficiency of the solution. A well-designed algorithm takes into consideration the constraints, resources, and desired outcomes while implementing the solution.

Implementation and Evaluation

Once the algorithm is designed, the next step in the problem solving cycle is implementation. This involves translating the algorithm into a computer program using a programming language. The implementation phase requires coding skills and expertise in a specific programming language.

After implementation, the solution needs to be evaluated to ensure that it solves the problem effectively. Evaluation involves testing the program and verifying its correctness and efficiency. This step is critical to identify any errors or issues and to make necessary improvements or adjustments.

In conclusion, understanding the importance of the problem solving cycle in computer science is essential for any computer scientist or programmer. It provides a systematic and structured approach to analyze and solve problems, ensuring efficient and effective solutions. By following the problem solving cycle, computer scientists can develop robust algorithms, implement them in efficient programs, and evaluate their solutions to ensure their correctness and efficiency.

Identifying the Problem

In the problem solving cycle in computer science, the first step is to identify the problem that needs to be solved. This step is crucial because without a clear understanding of the problem, it is impossible to find a solution.

Identification of the problem involves a thorough analysis of the given data and understanding the goals of the task at hand. It requires careful examination of the problem statement and any constraints or limitations that may affect the solution.

During the identification phase, the problem is broken down into smaller, more manageable parts. This can involve breaking the problem down into sub-problems or identifying the different aspects or components that need to be addressed.

Identifying the problem also involves considering the resources and tools available for solving it. This may include considering the specific tools and programming languages that are best suited for the problem at hand.

By properly identifying the problem, computer scientists can ensure that they are focused on the right goals and are better equipped to find an effective and efficient solution. It sets the stage for the rest of the problem solving cycle, including the analysis, design, implementation, and evaluation phases.

Gathering the Necessary Data

Before finding a solution to a computer science problem, it is essential to gather the necessary data. Whether it’s writing a program or developing an algorithm, data serves as the backbone of any solution. Without proper data collection and analysis, the problem-solving process can become inefficient and ineffective.

The Importance of Data

In computer science, data is crucial for a variety of reasons. First and foremost, it provides the information needed to understand and define the problem at hand. By analyzing the available data, developers and programmers can gain insights into the nature of the problem and determine the most efficient approach for solving it.

Additionally, data allows for the evaluation of potential solutions. By collecting and organizing relevant data, it becomes possible to compare different algorithms or strategies and select the most suitable one. Data also helps in tracking progress and measuring the effectiveness of the chosen solution.

Data Gathering Process

The process of gathering data involves several steps. Firstly, it is necessary to identify the type of data needed for the particular problem. This may include numerical values, textual information, or other types of data. It is important to determine the sources of data and assess their reliability.

Once the required data has been identified, it needs to be collected. This can be done through various methods, such as surveys, experiments, observations, or by accessing existing data sets. The collected data should be properly organized, ensuring its accuracy and validity.

Data cleaning and preprocessing are vital steps in the data gathering process. This involves removing any irrelevant or erroneous data and transforming it into a suitable format for analysis. Properly cleaned and preprocessed data will help in generating reliable and meaningful insights.
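A minimal cleaning pass might look like the following; the raw records and the rule applied (discard anything that cannot be parsed as a number) are invented for illustration:

```python
# Data cleaning sketch: drop unusable records, normalize the rest to floats.
raw = ["12.5", "13.1", "n/a", "", "14.0", "oops"]  # hypothetical raw readings

def clean(records):
    cleaned = []
    for r in records:
        try:
            cleaned.append(float(r))   # coerce strings like "12.5" to numbers
        except ValueError:
            continue                   # discard entries that cannot be parsed
    return cleaned

values = clean(raw)   # only the three parseable readings survive
```

Real pipelines add richer rules (range checks, deduplication, unit conversion), but the shape is the same: validate, transform, and discard what cannot be repaired.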

Data Analysis and Interpretation

After gathering and preprocessing the data, the next step is data analysis and interpretation. This involves applying various statistical and analytical methods to uncover patterns, trends, and relationships within the data. By analyzing the data, programmers can gain valuable insights that can inform the development of an effective solution.

During the data analysis process, it is crucial to remain objective and unbiased. The analysis should be based on sound reasoning and logical thinking. It is also important to communicate the findings effectively, using visualizations or summaries to convey the information to stakeholders or fellow developers.

In conclusion, gathering the necessary data is a fundamental step in solving computer science problems. It provides the foundation for understanding the problem, evaluating potential solutions, and tracking progress. By following a systematic and rigorous approach to data gathering and analysis, developers can ensure that their solutions are efficient, effective, and well-informed.

Analyzing the Data

Once you have collected the necessary data, the next step in the problem-solving cycle is to analyze it. Data analysis is a crucial component of computer science, as it helps us understand the problem at hand and develop effective solutions.

To analyze the data, you need to break it down into manageable pieces and examine each piece closely. This process involves identifying patterns, trends, and outliers that may be present in the data. By doing so, you can gain insights into the problem and make informed decisions about the best course of action.

There are several techniques and tools available for data analysis in computer science. Some common methods include statistical analysis, data visualization, and machine learning algorithms. Each approach has its own strengths and limitations, so it’s essential to choose the most appropriate method for the problem you are solving.

Statistical Analysis

Statistical analysis involves using mathematical models and techniques to analyze data. It helps in identifying correlations, distributions, and other statistical properties of the data. By applying statistical tests, you can determine the significance and validity of your findings.

Data Visualization

Data visualization is the process of presenting data in a visual format, such as charts, graphs, or maps. It allows for a better understanding of complex data sets and facilitates the communication of findings. Through data visualization, patterns and trends can become more apparent, making it easier to derive meaningful insights.

Machine Learning Algorithms

Machine learning algorithms are powerful tools for analyzing large and complex data sets. These algorithms can automatically detect patterns and relationships in the data, leading to the development of predictive models and solutions. By training the algorithm on a labeled dataset, it can learn from the data and make accurate predictions or classifications.
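As a minimal, dependency-free illustration of learning from a labeled dataset, here is a one-nearest-neighbor classifier: it predicts the label of whichever training point lies closest. The points and labels are made up:

```python
import math

# 1-nearest-neighbor: classify a point by the label of its closest training example.
training = [
    ((1.0, 1.0), "A"),
    ((1.2, 0.8), "A"),
    ((5.0, 5.0), "B"),
    ((4.8, 5.2), "B"),
]

def predict(point):
    # Find the training example at minimum Euclidean distance from the query.
    nearest = min(training, key=lambda ex: math.dist(ex[0], point))
    return nearest[1]

label = predict((4.5, 4.9))   # falls near the "B" cluster
```

Real systems use far richer models and libraries, but the core idea, generalizing from labeled examples, is the same.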

In conclusion, analyzing the data is a critical step in the problem-solving cycle in computer science. It helps us gain a deeper understanding of the problem and develop effective solutions. Whether through statistical analysis, data visualization, or machine learning algorithms, data analysis plays a vital role in transforming raw data into actionable insights.

Exploring Possible Solutions

Once you have gathered data and completed the analysis, the next step in the problem-solving cycle is to explore possible solutions. This is where the true power of computer science comes into play. With the use of algorithms and the application of scientific principles, computer scientists can develop innovative solutions to complex problems.

During this stage, it is important to consider a variety of potential solutions. This involves brainstorming different ideas and considering their feasibility and potential effectiveness. It may be helpful to consult with colleagues or experts in the field to gather additional insights and perspectives.

Developing an Algorithm

One key aspect of exploring possible solutions is the development of an algorithm. An algorithm is a step-by-step set of instructions that outlines a specific process or procedure. In the context of problem solving in computer science, an algorithm provides a clear roadmap for implementing a solution.

The development of an algorithm requires careful thought and consideration. It is important to break down the problem into smaller, manageable steps and clearly define the inputs and outputs of each step. This allows for the creation of a logical and efficient solution.
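A classic illustration of this kind of decomposition is binary search: every step has a well-defined input (the current search bounds) and output (the narrowed bounds, or the found index):

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent.
    Input: a list sorted in ascending order. Output: an index or -1."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2              # step input: bounds; output: midpoint
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1                  # narrow to the upper half
        else:
            hi = mid - 1                  # narrow to the lower half
    return -1
```

Because each iteration halves the search range, the algorithm is logical and efficient: it needs only about log2(n) comparisons for n items.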

Evaluating the Solutions

Once you have developed potential solutions and corresponding algorithms, the next step is to evaluate them. This involves analyzing each solution to determine its strengths, weaknesses, and potential impact. Consider factors such as efficiency, scalability, and resource requirements.

It may be helpful to conduct experiments or simulations to further assess the effectiveness of each solution. This can provide valuable insights and data to support the decision-making process.
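
One simple kind of experiment is to time two candidate solutions on identical input. This sketch compares membership tests on a Python list versus a set; the data size and repetition count are arbitrary choices:

```python
# Timing experiment: how long does "x in collection" take for a list
# (linear scan) versus a set (hash lookup)?
import timeit

data_list = list(range(10_000))
data_set = set(data_list)

# Look up the worst-case element for the list (the last one), 1,000 times.
t_list = timeit.timeit(lambda: 9_999 in data_list, number=1_000)
t_set = timeit.timeit(lambda: 9_999 in data_set, number=1_000)

print(f"list lookup: {t_list:.4f}s, set lookup: {t_set:.4f}s")
```

Even a rough experiment like this can turn a vague preference into measurable evidence for the decision-making step.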

Ultimately, the goal of exploring possible solutions is to find the most effective and efficient solution to the problem at hand. By leveraging the power of data, analysis, algorithms, and scientific principles, computer scientists can develop innovative solutions that drive progress and solve complex problems in the world of technology.

Evaluating the Options

Once you have identified potential solutions and algorithms for a problem, the next step in the problem-solving cycle in computer science is to evaluate the options. This evaluation process involves analyzing the potential solutions and algorithms based on various criteria to determine the best course of action.

Consider the Problem

Before evaluating the options, it is important to take a step back and consider the problem at hand. Understand the requirements, constraints, and desired outcomes of the problem. This analysis will help guide the evaluation process.

Analyze the Options

Next, it is crucial to analyze each solution or algorithm option individually. Look at factors such as efficiency, accuracy, ease of implementation, and scalability. Consider whether the solution or algorithm meets the specific requirements of the problem, and if it can be applied to related problems in the future.

Additionally, evaluate the potential risks and drawbacks associated with each option. Consider factors such as cost, time, and resources required for implementation. Assess any potential limitations or trade-offs that may impact the overall effectiveness of the solution or algorithm.

Select the Best Option

Based on the analysis, select the best option that aligns with the specific problem-solving goals. This may involve prioritizing certain criteria or making compromises based on the limitations identified during the evaluation process.

Remember that the best option may not always be the most technically complex or advanced solution. Consider the practicality and feasibility of implementation, as well as the potential impact on the overall system or project.

In conclusion, evaluating the options is a critical step in the problem-solving cycle in computer science. By carefully analyzing the potential solutions and algorithms, weighing the problem requirements against the limitations and trade-offs, you can select the best option to solve the problem at hand.

Making a Decision

Decision-making is a critical component in the problem-solving process in computer science. Once you have analyzed the problem, identified the relevant data, and generated a potential solution, it is important to evaluate your options and choose the best course of action.

Consider All Factors

When making a decision, it is important to consider all relevant factors. This includes evaluating the potential benefits and drawbacks of each option, as well as understanding any constraints or limitations that may impact your choice.

In computer science, this may involve analyzing the efficiency of different algorithms or considering the scalability of a proposed solution. It is important to take into account both the short-term and long-term impacts of your decision.

Weigh the Options

Once you have considered all the factors, it is important to weigh the options and determine the best approach. This may involve assigning weights or priorities to different factors based on their importance.

Using techniques such as decision matrices or cost-benefit analysis can help you systematically compare and evaluate different options. By quantifying and assessing the potential risks and rewards, you can make a more informed decision.
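
A weighted decision matrix can be expressed in a few lines of code. Everything below, the options, criteria, weights, and scores, is made up purely to show the mechanics:

```python
# A sketch of a weighted decision matrix: each option gets a score per
# criterion, and criteria are weighted by importance (weights sum to 1).

criteria_weights = {"efficiency": 0.5, "cost": 0.3, "ease": 0.2}

options = {
    "rewrite in-house": {"efficiency": 9, "cost": 3, "ease": 4},
    "buy a library":    {"efficiency": 7, "cost": 6, "ease": 9},
    "manual process":   {"efficiency": 2, "cost": 9, "ease": 8},
}

def weighted_score(scores):
    """Sum of (score * weight) across all criteria."""
    return sum(scores[c] * w for c, w in criteria_weights.items())

best = max(options, key=lambda name: weighted_score(options[name]))
for name, scores in options.items():
    print(f"{name}: {weighted_score(scores):.1f}")
print("best option:", best)
```

The numbers are subjective inputs, but making them explicit forces the trade-offs into the open and makes the decision reproducible.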

Remember: Decision-making in computer science is not purely subjective or based on personal preference. It is crucial to use analytical and logical thinking to select the optimal solution.

In conclusion, making a decision is a crucial step in the problem-solving process in computer science. By considering all relevant factors and weighing the options using logical analysis, you can choose the best possible solution to a given problem.

Implementing the Solution

Once the problem has been analyzed and a solution has been proposed, the next step in the problem-solving cycle in computer science is implementing the solution. This involves turning the proposed solution into an actual computer program or algorithm that can solve the problem.

In order to implement the solution, computer science professionals need to have a strong understanding of various programming languages and data structures. They need to be able to write code that can manipulate and process data in order to solve the problem at hand.

During the implementation phase, the proposed solution is translated into a series of steps or instructions that a computer can understand and execute. This involves breaking down the problem into smaller sub-problems and designing algorithms to solve each sub-problem.

Computer scientists also need to consider the efficiency of their solution during the implementation phase. They need to ensure that the algorithm they design is able to handle large amounts of data and solve the problem in a reasonable amount of time. This often requires optimization techniques and careful consideration of the data structures used.
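
To make the efficiency point concrete, here are two correct solutions to the same problem, detecting duplicates in a list, that differ only in how they scale:

```python
# Two ways to answer "does this list contain a duplicate?". Both are
# correct; only their time complexity differs.

def has_duplicates_quadratic(items):
    """O(n^2): compare every pair of elements."""
    return any(items[i] == items[j]
               for i in range(len(items))
               for j in range(i + 1, len(items)))

def has_duplicates_linear(items):
    """O(n): remember what we have already seen in a set."""
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False

sample = [3, 1, 4, 1, 5]
print(has_duplicates_quadratic(sample), has_duplicates_linear(sample))
```

On a short list the difference is invisible; on a million elements, the quadratic version does on the order of a trillion comparisons while the linear one does a million set lookups. Choosing the right data structure is what makes the second version possible.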

Once the code has been written and the algorithm has been implemented, it is important to test and debug the solution. This involves running test cases and checking the output to ensure that the program is working correctly. If any errors or bugs are found, they need to be fixed before the solution can be considered complete.

In conclusion, implementing the solution is a crucial step in the problem-solving cycle in computer science. It requires strong programming skills and a deep understanding of algorithms and data structures. By carefully designing and implementing the solution, computer scientists can solve problems efficiently and effectively.

Testing and Debugging

In computer science, testing and debugging are critical steps in the problem-solving cycle. Testing helps ensure that a program or algorithm is functioning correctly, while debugging identifies and resolves any issues or bugs that arise.

Testing involves running a program with specific input data to evaluate its output. This process helps verify that the program produces the expected results and handles different scenarios correctly. It is important to test both the normal and edge cases to ensure the program’s reliability.
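
Here is a minimal sketch of such tests using Python's standard unittest module; `safe_divide` is a made-up function standing in for the program under test:

```python
# Testing both normal and edge cases with the standard unittest module.
import unittest

def safe_divide(a, b):
    """Return a / b, or None when b is zero instead of raising."""
    if b == 0:
        return None
    return a / b

class TestSafeDivide(unittest.TestCase):
    def test_normal_case(self):
        self.assertEqual(safe_divide(10, 2), 5)

    def test_edge_case_zero_divisor(self):
        # The edge case: division by zero must not crash the program.
        self.assertIsNone(safe_divide(1, 0))

    def test_negative_operands(self):
        self.assertEqual(safe_divide(-9, 3), -3)

# Run with: python -m unittest <this_module>
```

Each test names the scenario it covers, so a failure immediately tells you which case, normal or edge, the program mishandles.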

Debugging is the process of identifying and fixing errors or bugs in a program. When a program does not produce the expected results or crashes, it is necessary to go through the code to find and fix the problem. This can involve analyzing the program’s logic, checking for syntax errors, and using debugging tools to trace the flow of data and identify the source of the issue.

Data analysis plays a crucial role in both testing and debugging. It helps to identify patterns, anomalies, or inconsistencies in the program’s behavior. By analyzing the data, developers can gain insights into potential issues and make informed decisions on how to improve the program’s performance.

In conclusion, testing and debugging are integral parts of the problem-solving cycle in computer science. Through testing and data analysis, developers can verify the correctness of their programs and identify and resolve any issues that may arise. This ensures that the algorithms and programs developed in computer science are robust, reliable, and efficient.

Iterating for Improvement

In computer science, problem solving often involves iterating through multiple cycles of analysis, solution development, and evaluation. This iterative process allows for continuous improvement in finding the most effective solution to a given problem.

The problem solving cycle starts with problem analysis, where the specific problem is identified and its requirements are understood. This step involves examining the problem from various angles and gathering all relevant information.

Once the problem is properly understood, the next step is to develop an algorithm or a step-by-step plan to solve the problem. This algorithm is a set of instructions that, when followed correctly, will lead to the solution.

After the algorithm is developed, it is implemented in a computer program. This step involves translating the algorithm into a programming language that a computer can understand and execute.

Once the program is implemented, it is then tested and evaluated to ensure that it produces the correct solution. This evaluation step is crucial in identifying any errors or inefficiencies in the program and allows for further improvement.

If any issues or problems are found during testing, the cycle iterates, starting from problem analysis again. This iterative process allows for refinement and improvement of the solution until the desired results are achieved.

Iterating for improvement is a fundamental concept in computer science problem solving. By continually analyzing, developing, and evaluating solutions, computer scientists are able to find the most effective and efficient approaches to solving problems.

Documenting the Process

Documenting the problem-solving process in computer science is an essential step to ensure that the cycle is repeated successfully. The process involves gathering information, analyzing the problem, and designing a solution.

During the analysis phase, it is crucial to identify the specific problem at hand and break it down into smaller components. This allows for a more targeted approach to finding the solution. Additionally, analyzing the data involved in the problem can provide valuable insights and help in designing an effective solution.

Once the analysis is complete, it is important to document the findings. This documentation can take various forms, such as written reports, diagrams, or even code comments. The goal is to create a record that captures the problem, the analysis, and the proposed solution.

Documenting the process serves several purposes. Firstly, it allows for easy communication and collaboration between team members or future developers. By documenting the problem, analysis, and solution, others can easily understand the thought process behind the solution and potentially build upon it.

Secondly, documenting the process provides an opportunity for reflection and improvement. By reviewing the documentation, developers can identify areas where the problem-solving cycle can be strengthened or optimized. This continuous improvement is crucial in the field of computer science, as new challenges and technologies emerge rapidly.

In conclusion, documenting the problem-solving process is an integral part of the computer science cycle. It allows for effective communication, collaboration, and reflection on the solutions devised. By taking the time to document the process, developers can ensure a more efficient and successful problem-solving experience.

Communicating the Solution

Once the problem solving cycle is complete, it is important to effectively communicate the solution. This involves explaining the analysis, data, and steps taken to arrive at the solution.

Analyzing the Problem

During the problem solving cycle, a thorough analysis of the problem is conducted. This includes understanding the problem statement, gathering relevant data, and identifying any constraints or limitations. It is important to clearly communicate this analysis to ensure that others understand the problem at hand.

Presenting the Solution

The next step in communicating the solution is presenting the actual solution. This should include a detailed explanation of the steps taken to solve the problem, as well as any algorithms or data structures used. It is important to provide clear and concise descriptions of the solution, so that others can understand and reproduce the results.

Overall, effective communication of the solution in computer science is essential to ensure that others can understand and replicate the problem solving process. By clearly explaining the analysis, data, and steps taken, the solution can be communicated in a way that promotes understanding and collaboration within the field of computer science.

Reflecting and Learning

Reflecting and learning are crucial steps in the problem solving cycle in computer science. Once a problem has been solved, it is essential to reflect on the entire process and learn from the experience. This allows for continuous improvement and growth in the field of computer science.

During the reflecting phase, one must analyze and evaluate the problem solving process. This involves reviewing the initial problem statement, understanding the constraints and requirements, and assessing the effectiveness of the chosen algorithm and solution. It is important to consider the efficiency and accuracy of the solution, as well as any potential limitations or areas for optimization.

By reflecting on the problem solving cycle, computer scientists can gain valuable insights into their own strengths and weaknesses. They can identify areas where they excelled and areas where improvement is needed. This self-analysis helps in honing problem solving skills and becoming a better problem solver.

Learning from Mistakes

Mistakes are an integral part of the problem solving cycle, and they provide valuable learning opportunities. When a problem is not successfully solved, it is essential to analyze the reasons behind the failure and learn from them. This involves identifying errors in the algorithm or solution, understanding the underlying concepts or principles that were misunderstood, and finding alternative approaches or strategies.

Failure should not be seen as a setback, but rather as an opportunity for growth. By learning from mistakes, computer scientists can improve their problem solving abilities and expand their knowledge and understanding of computer science. It is through these failures and the subsequent learning process that new ideas and innovations are often born.

Continuous Improvement

Reflecting and learning should not be limited to individual problem solving experiences, but should be an ongoing practice. As computer science is a rapidly evolving field, it is crucial to stay updated with new technologies, algorithms, and problem solving techniques. Continuous learning and improvement contribute to staying competitive and relevant in the field.

Computer scientists can engage in continuous improvement by seeking feedback from peers, participating in research and development activities, attending conferences and workshops, and actively seeking new challenges and problem solving opportunities. This dedication to learning and improvement ensures that one’s problem solving skills remain sharp and effective.

In conclusion, reflecting and learning are integral parts of the problem solving cycle in computer science. They enable computer scientists to refine their problem solving abilities, learn from mistakes, and continuously improve their skills and knowledge. By embracing these steps, computer scientists can stay at the forefront of the ever-changing world of computer science and contribute to its advancements.

Applying Problem Solving in Real Life

In computer science, problem solving is not limited to the realm of programming and algorithms. It is a skill that can be applied to various aspects of our daily lives, helping us to solve problems efficiently and effectively. By working through the problem-solving cycle of analysis, data gathering, solution design, and iteration, we can tackle real-life challenges with confidence and success.

The first step in problem-solving is to analyze the problem at hand. This involves breaking it down into smaller, more manageable parts and identifying the key issues or goals. By understanding the problem thoroughly, we can gain insights into its root causes and potential solutions.

For example, let’s say you’re facing a recurring issue in your daily commute – traffic congestion. By analyzing the problem, you may discover that the main causes are a lack of alternative routes and a lack of communication between drivers. This analysis helps you identify potential solutions such as using navigation apps to find alternate routes or promoting carpooling to reduce the number of vehicles on the road.

Gathering and Analyzing Data

Once we have identified the problem, it is important to gather relevant data to support our analysis. This may involve conducting surveys, collecting statistics, or reviewing existing research. By gathering data, we can make informed decisions and prioritize potential solutions based on their impact and feasibility.

Continuing with the traffic congestion example, you may gather data on the average commute time, the number of vehicles on the road, and the impact of carpooling on congestion levels. This data can help you analyze the problem more accurately and determine the most effective solutions.
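
A toy version of that analysis, using invented commute-time samples and the standard statistics module, might look like this:

```python
# Made-up commute-time samples (minutes) for two candidate routes,
# compared by their average to support the decision.
import statistics

commute_minutes = {
    "highway": [45, 50, 48, 52, 47],
    "back roads": [38, 41, 39, 40, 42],
}

for route, samples in commute_minutes.items():
    print(route, "mean:", statistics.mean(samples))

faster = min(commute_minutes, key=lambda r: statistics.mean(commute_minutes[r]))
print("faster on average:", faster)
```

Even five samples per route turn "I feel like the highway is slow" into a quantified comparison, which is exactly the role data plays in the cycle.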

Generating and Evaluating Solutions

After analyzing the problem and gathering data, the next step is to generate potential solutions. This can be done through brainstorming, researching best practices, or seeking input from experts. It is important to consider multiple options and think outside the box to find innovative and effective solutions.

For our traffic congestion problem, potential solutions can include implementing a smart traffic management system that optimizes traffic flow or investing in public transportation to incentivize people to leave their cars at home. By evaluating each solution’s potential impact, cost, and feasibility, you can make an informed decision on the best course of action.

Implementing and Iterating

Once a solution has been chosen, it is time to implement it in real life. This may involve developing a plan, allocating resources, and executing the solution. It is important to monitor the progress and collect feedback to learn from the implementation and make necessary adjustments.

For example, if the chosen solution to address traffic congestion is implementing a smart traffic management system, you would work with engineers and transportation authorities to develop and deploy the system. Regular evaluation and iteration of the system’s performance would ensure that it is effective and making a positive impact on reducing congestion.

By applying the problem-solving cycle derived from computer science to real-life situations, we can approach challenges with a systematic and analytical mindset. This can help us make better decisions, improve our problem-solving skills, and ultimately achieve more efficient and effective solutions.

Building Problem Solving Skills

In the field of computer science, problem-solving is a fundamental skill that is crucial for success. Whether you are a computer scientist, programmer, or student, developing strong problem-solving skills will greatly benefit your work and studies. It allows you to approach challenges with a logical and systematic approach, leading to efficient and effective problem resolution.

The Problem Solving Cycle

Problem-solving in computer science involves a cyclical process known as the problem-solving cycle. This cycle consists of several stages, including problem identification, data analysis, solution development, implementation, and evaluation. By following this cycle, computer scientists are able to tackle complex problems and arrive at optimal solutions.

Importance of Data Analysis

Data analysis is a critical step in the problem-solving cycle. It involves gathering and examining relevant data to gain insights and identify patterns that can inform the development of a solution. Without proper data analysis, computer scientists may overlook important information or make unfounded assumptions, leading to subpar solutions.

To effectively analyze data, computer scientists can employ various techniques such as data visualization, statistical analysis, and machine learning algorithms. These tools enable them to extract meaningful information from large datasets and make informed decisions during the problem-solving process.
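
As a small illustration of statistical analysis, this sketch flags anomalous values with a z-score cutoff; the data and the threshold of 2 are illustrative choices, not universal rules:

```python
# Flagging outliers in a (made-up) set of response-time measurements:
# any value more than 2 standard deviations from the mean is suspicious.
import statistics

response_times_ms = [102, 98, 110, 95, 105, 99, 480, 101]

mean = statistics.mean(response_times_ms)
stdev = statistics.stdev(response_times_ms)

anomalies = [x for x in response_times_ms if abs(x - mean) / stdev > 2]
print("mean:", round(mean, 1), "stdev:", round(stdev, 1))
print("anomalies:", anomalies)
```

Simple summary statistics like these are often the first pass of data analysis, pointing out exactly which observations deserve a closer look.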

Developing Effective Solutions

Developing effective solutions requires creativity, critical thinking, and logical reasoning. Computer scientists must evaluate multiple approaches, consider various factors, and assess the feasibility of different solutions. They should also consider potential limitations and trade-offs to ensure that the chosen solution addresses the problem effectively.

Furthermore, collaboration and communication skills are vital when building problem-solving skills. Computer scientists often work in teams and need to effectively communicate their ideas, propose solutions, and address any challenges that arise during the problem-solving process. Strong interpersonal skills facilitate collaboration and enhance problem-solving outcomes.

  • Mastering programming languages and algorithms
  • Staying updated with technological advancements in the field
  • Practicing problem solving through coding challenges and projects
  • Seeking feedback and learning from mistakes
  • Continuing to learn and improve problem-solving skills

By following these strategies, individuals can strengthen their problem-solving abilities and become more effective computer scientists or programmers. Problem-solving is an essential skill in computer science and plays a central role in driving innovation and advancing the field.

Questions and answers:

What is the problem solving cycle in computer science?

The problem solving cycle in computer science refers to a systematic approach that programmers use to solve problems. It involves several steps, including problem definition, algorithm design, implementation, testing, and debugging.

How important is the problem solving cycle in computer science?

The problem solving cycle is extremely important in computer science as it allows programmers to effectively tackle complex problems and develop efficient solutions. It helps in organizing the thought process and ensures that the problem is approached in a logical and systematic manner.

What are the steps involved in the problem solving cycle?

The problem solving cycle typically consists of the following steps: problem definition and analysis, algorithm design, implementation, testing, and debugging. These steps are repeated as necessary until a satisfactory solution is achieved.

Can you explain the problem definition and analysis step in the problem solving cycle?

During the problem definition and analysis step, the programmer identifies and thoroughly understands the problem that needs to be solved. This involves analyzing the requirements, constraints, and possible inputs and outputs. It is important to have a clear understanding of the problem before proceeding to the next steps.

Why is testing and debugging an important step in the problem solving cycle?

Testing and debugging are important steps in the problem solving cycle because they ensure that the implemented solution functions as intended and is free from errors. Through testing, the programmer can identify and fix any issues or bugs in the code, thereby improving the quality and reliability of the solution.

What is the problem-solving cycle in computer science?

The problem-solving cycle in computer science refers to the systematic approach that computer scientists use to solve problems. It involves various steps, including problem analysis, algorithm design, coding, testing, and debugging.



Self-Assessment • 20 min read

How Good Is Your Problem Solving?

Use a systematic approach.

By the Mind Tools Content Team


Good problem solving skills are fundamentally important if you're going to be successful in your career.

But problems are something that we don't particularly like.

They're time-consuming.

They muscle their way into already packed schedules.

They force us to think about an uncertain future.

And they never seem to go away!

That's why, when faced with problems, most of us try to eliminate them as quickly as possible. But have you ever chosen the easiest or most obvious solution – and then realized that you have entirely missed a much better solution? Or have you found yourself fixing just the symptoms of a problem, only for the situation to get much worse?

To be an effective problem-solver, you need to be systematic and logical in your approach. This quiz helps you assess your current approach to problem solving. By improving this, you'll make better overall decisions. And as you increase your confidence with solving problems, you'll be less likely to rush to the first solution – which may not necessarily be the best one.

Once you've completed the quiz, we'll direct you to tools and resources that can help you make the most of your problem-solving skills.

How Good Are You at Solving Problems?

Instructions.

For each statement, click the button in the column that best describes you. Please answer questions as you actually are (rather than how you think you should be), and don't worry if some questions seem to score in the 'wrong direction'. When you are finished, please click the 'Calculate My Total' button at the bottom of the test.

Answering these questions should have helped you recognize the key steps associated with effective problem solving.

This quiz is based on Dr Min Basadur's Simplexity Thinking problem-solving model. This eight-step process follows the circular pattern shown below, within which current problems are solved and new problems are identified on an ongoing basis. This assessment has not been validated and is intended for illustrative purposes only.

Below, we outline the tools and strategies you can use for each stage of the problem-solving process. Enjoy exploring these stages!

Step 1: Find the Problem (Questions 7, 12)

Some problems are very obvious, however others are not so easily identified. As part of an effective problem-solving process, you need to look actively for problems – even when things seem to be running fine. Proactive problem solving helps you avoid emergencies and allows you to be calm and in control when issues arise.

These techniques can help you do this:

PEST Analysis helps you pick up changes to your environment that you should be paying attention to. Make sure too that you're watching changes in customer needs and market dynamics, and that you're monitoring trends that are relevant to your industry.

Risk Analysis helps you identify significant business risks.

Failure Modes and Effects Analysis helps you identify possible points of failure in your business process, so that you can fix these before problems arise.

After Action Reviews help you scan recent performance to identify things that can be done better in the future.

Where you have several problems to solve, our articles on Prioritization and Pareto Analysis help you think about which ones you should focus on first.
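
The core of a Pareto Analysis can be sketched in a few lines: rank causes by impact and keep the "vital few" that account for roughly 80% of the total. The complaint data below is invented for illustration:

```python
# Pareto-style prioritization: sort causes by impact, then accumulate
# until roughly 80% of the total is covered.

complaint_counts = {
    "late delivery": 52,
    "wrong item": 21,
    "damaged packaging": 14,
    "billing error": 8,
    "other": 5,
}

total = sum(complaint_counts.values())
ranked = sorted(complaint_counts.items(), key=lambda kv: kv[1], reverse=True)

vital_few, cumulative = [], 0
for cause, count in ranked:
    if cumulative / total >= 0.8:
        break
    vital_few.append(cause)
    cumulative += count

print("focus on:", vital_few)
```

The 80% threshold is a convention, not a law; the point is that a handful of causes usually dominate, and this is where problem-solving effort pays off first.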

Step 2: Find the Facts (Questions 10, 14)

After identifying a potential problem, you need information. What factors contribute to the problem? Who is involved with it? What solutions have been tried before? What do others think about the problem?

If you move forward to find a solution too quickly, you risk relying on imperfect information that's based on assumptions and limited perspectives, so make sure that you research the problem thoroughly.

Step 3: Define the Problem (Questions 3, 9)

Now that you understand the problem, define it clearly and completely. Writing a clear problem definition forces you to establish specific boundaries for the problem. This keeps the scope from growing too large, and it helps you stay focused on the main issues.

A great tool to use at this stage is CATWOE. With this process, you analyze potential problems by looking at them from six perspectives, those of its Customers; Actors (people within the organization); the Transformation, or business process; the World-view, or top-down view of what's going on; the Owner; and the wider organizational Environment. By looking at a situation from these perspectives, you can open your mind and come to a much sharper and more comprehensive definition of the problem.

Cause and Effect Analysis is another good tool to use here, as it helps you think about the many different factors that can contribute to a problem. It helps you separate the symptoms of a problem from its fundamental causes.

Step 4: Find Ideas (Questions 4, 13)

With a clear problem definition, start generating ideas for a solution. The key here is to be flexible in the way you approach a problem. You want to be able to see it from as many perspectives as possible. Looking for patterns or common elements in different parts of the problem can sometimes help. You can also use metaphors and analogies to help analyze the problem, discover similarities to other issues, and think of solutions based on those similarities.

Traditional brainstorming and reverse brainstorming are very useful here. By taking the time to generate a range of creative solutions to the problem, you'll significantly increase the likelihood that you'll find the best possible solution, not just a semi-adequate one. Where appropriate, involve people with different viewpoints to expand the volume of ideas generated.

Tip: Don't evaluate your ideas until step 5. If you do, this will limit your creativity at too early a stage.

Step 5: Select and Evaluate (Questions 6, 15)

After finding ideas, you'll have many options that must be evaluated. It's tempting at this stage to charge in and start discarding ideas immediately. However, if you do this without first determining the criteria for a good solution, you risk rejecting an alternative that has real potential.

Decide what elements are needed for a realistic and practical solution, and think about the criteria you'll use to choose between potential solutions.

Paired Comparison Analysis, Decision Matrix Analysis and Risk Analysis are useful techniques here, as are many of the specialist resources available within our Decision-Making section. Enjoy exploring these!

Step 6: Plan (Questions 1, 16)

You might think that choosing a solution is the end of a problem-solving process. In fact, it's simply the start of the next phase in problem solving: implementation. This involves lots of planning and preparation. If you haven't already developed a full Risk Analysis in the evaluation phase, do so now. It's important to know what to be prepared for as you begin to roll out your proposed solution.

The type of planning that you need to do depends on the size of the implementation project that you need to set up. For small projects, all you'll often need are Action Plans that outline who will do what, when, and how. Larger projects need more sophisticated approaches – you'll find out more about these in the article What is Project Management? And for projects that affect many other people, you'll need to think about Change Management as well.

Here, it can be useful to conduct an Impact Analysis to help you identify potential resistance, as well as alert you to problems you may not have anticipated. Force Field Analysis will also help you uncover the various pressures for and against your proposed solution. Once you've done the detailed planning, it can also be useful at this stage to make a final Go/No-Go Decision, making sure that it's actually worth going ahead with the selected option.

Step 7: Sell the Idea (Questions 5, 8)

As part of the planning process, you must convince other stakeholders that your solution is the best one. You'll likely meet with resistance, so before you try to “sell” your idea, make sure you've considered all the consequences.

As you begin communicating your plan, listen to what people say, and make changes as necessary. The better the overall solution meets everyone's needs, the greater its positive impact will be! For more tips on selling your idea, read our article on Creating a Value Proposition and use our Sell Your Idea Skillbook.

Step 8: Act (Questions 2, 11)

Finally, once you've convinced your key stakeholders that your proposed solution is worth running with, you can move on to the implementation stage. This is the exciting and rewarding part of problem solving, which makes the whole process seem worthwhile.

This action stage is an end, but it's also a beginning: once you've completed your implementation, it's time to move into the next cycle of problem solving by returning to the scanning stage. By doing this, you'll continue improving your organization as you move into the future.

Problem solving is an exceptionally important workplace skill.

Being a competent and confident problem solver will create many opportunities for you. By using a well-developed model like Simplexity Thinking for solving problems, you can approach the process systematically, and be comfortable that the decisions you make are solid.

Given the unpredictable nature of problems, it's very reassuring to know that, by following a structured plan, you've done everything you can to resolve the problem to the best of your ability.

This assessment has not been validated and is intended for illustrative purposes only. It is just one of many Mind Tools quizzes that can help you to evaluate your abilities in a wide range of important career skills.

If you want to reproduce this quiz, you can purchase downloadable copies in our Store.
