Beauty sleep: experimental study on the perceived health and attractiveness of sleep deprived people

  • John Axelsson, researcher 1,2
  • Tina Sundelin, research assistant and MSc student 2
  • Michael Ingre, statistician and PhD student 3
  • Eus J W Van Someren, researcher 4
  • Andreas Olsson, researcher 2
  • Mats Lekander, researcher 1,3
  • 1 Osher Center for Integrative Medicine, Department of Clinical Neuroscience, Karolinska Institutet, 17177 Stockholm, Sweden
  • 2 Division for Psychology, Department of Clinical Neuroscience, Karolinska Institutet
  • 3 Stress Research Institute, Stockholm University, Stockholm
  • 4 Netherlands Institute for Neuroscience, an Institute of the Royal Netherlands Academy of Arts and Sciences, and VU Medical Center, Amsterdam, Netherlands
  • Correspondence to: J Axelsson john.axelsson{at}ki.se
  • Accepted 22 October 2010

Objective To investigate whether sleep deprived people are perceived as less healthy, less attractive, and more tired than after a normal night’s sleep.

Design Experimental study.

Setting Sleep laboratory in Stockholm, Sweden.

Participants 23 healthy, sleep deprived adults (age 18-31) who were photographed and 65 untrained observers (age 18-61) who rated the photographs.

Intervention Participants were photographed after a normal night’s sleep (eight hours) and after sleep deprivation (31 hours of wakefulness after a night of reduced sleep). The photographs were presented in a randomised order and rated by untrained observers.

Main outcome measure Difference in observer ratings of perceived health, attractiveness, and tiredness between sleep deprived and well rested participants using a visual analogue scale (100 mm).

Results Sleep deprived people were rated as less healthy (visual analogue scale scores, mean 63 (SE 2) v 68 (SE 2), P<0.001), more tired (53 (SE 3) v 44 (SE 3), P<0.001), and less attractive (38 (SE 2) v 40 (SE 2), P<0.001) than after a normal night’s sleep. The decrease in rated health was associated with ratings of increased tiredness and decreased attractiveness.

Conclusion Our findings show that sleep deprived people appear less healthy, less attractive, and more tired compared with when they are well rested. This suggests that humans are sensitive to sleep related facial cues, with potential implications for social and clinical judgments and behaviour. Further studies are warranted to understand how these effects may influence clinical decision making and to add knowledge with direct implications in a medical context.

Introduction

The recognition [of the case] depends in great measure on the accurate and rapid appreciation of small points in which the diseased differs from the healthy state Joseph Bell (1837-1911)

Good clinical judgment is an important skill in medical practice. This is well illustrated in the quote by Joseph Bell, 1 who demonstrated impressive observational and deductive skills. Bell was one of Sir Arthur Conan Doyle’s teachers and served as a model for the fictitious detective Sherlock Holmes. 2 Generally, human judgment involves complex processes, whereby ingrained, often less consciously deliberated responses from perceptual cues are mixed with semantic calculations to affect decision making. 3 Thus all social interactions, including diagnosis in clinical practice, are influenced by reflexive as well as reflective processes in human cognition and communication.

Sleep is an essential homeostatic process with well established effects on an individual’s physiological, cognitive, and behavioural functionality 4 5 6 7 and long term health, 8 but with only anecdotal support of a role in social perception, such as that underlying judgments of attractiveness and health. As illustrated by the common expression “beauty sleep,” an individual’s sleep history may play an integral part in the perception and judgments of his or her attractiveness and health. To date, the concept of beauty sleep has lacked scientific support, but the biological importance of sleep may have favoured a sensitivity to perceive sleep related cues in others. It seems warranted to explore such sensitivity, as sleep disorders and disturbed sleep are increasingly common in today’s 24 hour society and often coexist with some of the most common health problems, such as hypertension 9 10 and inflammatory conditions. 11

To describe the relation between sleep deprivation and perceived health and attractiveness we asked untrained observers to rate the faces of people who had been photographed after a normal night’s sleep and after a night of sleep deprivation. We chose facial photographs as the human face is the primary source of information in social communication. 12 A perceiver’s response to facial cues, signalling the bearer’s emotional state, intentions, and potential mate value, serves to guide actions in social contexts and may ultimately promote survival. 13 14 15 We hypothesised that untrained observers would perceive sleep deprived people as more tired, less healthy, and less attractive compared with after a normal night’s sleep.

Methods

Using an experimental design we photographed the faces of 23 adults (mean age 23, range 18-31 years, 11 women) between 14.00 and 15.00 under two conditions in a balanced design: after a normal night’s sleep (at least eight hours of sleep between 23.00-07.00 and seven hours of wakefulness) and after sleep deprivation (sleep 02.00-07.00 and 31 hours of wakefulness). We advertised for participants at four universities in the Stockholm area. Twenty of 44 potentially eligible people were excluded. Reasons for exclusion were reported sleep disturbances, abnormal sleep requirements (for example, sleep need outside the 7-9 hour range), health problems, or lack of availability on study days (the main reason). We also excluded smokers and those who had consumed alcohol within two days of the protocol. Overall, we enrolled 12 women and 12 men; one woman failed to participate in both conditions.

The participants slept in their own homes. Sleep times were confirmed with sleep diaries and text messages. The sleep diaries (Karolinska sleep diary) included information on sleep latency, quality, duration, and sleepiness. Participants sent a text message (SMS) to the research assistant at bedtime and when they got up on the night before sleep deprivation. They had been instructed not to nap. During the normal sleep condition the participants’ mean duration of sleep, estimated from sleep diaries, was 8.45 (SE 0.20) hours.

The sleep deprivation condition started with a restriction of sleep to five hours in bed; the participants sent text messages (SMS) when they went to sleep and when they woke up. The mean duration of sleep during this night, estimated from sleep diaries and text messages, was 5.06 (SE 0.04) hours. For the following night of total sleep deprivation, the participants were monitored in the sleep laboratory at all times. Thus, for the sleep deprivation condition, participants came to the laboratory at 22.00 (after 15 hours of wakefulness) to be monitored, and stayed awake for a further 16 hours. We therefore did not observe the participants during the first 15 hours of wakefulness, when they had had slightly restricted sleep, but had good control over the last 16 hours of wakefulness, when sleepiness increased in magnitude. For the sleep condition, participants came to the laboratory at 12.00 (after five hours of wakefulness). They were kept indoors for two hours before being photographed to avoid the effects of exposure to sunlight and the weather.

We took a series of five or six photographs of each participant (resolution 3872×2592 pixels) in a well lit room, with a constant white balance setting (×900l; colour temperature 4200 K; Nikon D80, Nikon, Tokyo). The white balance was set differently during two days of the study and affected seven photographs (four taken during sleep deprivation and three during a normal night’s sleep); removing these participants from the analyses did not affect the results. The distance from camera to head was fixed, as was the focal length, within 14 mm (between 44 and 58 mm). To ensure a fixed surface area of each face on the photograph, the focal length was adapted to the head size of each participant.

For the photo shoot, participants wore no makeup, had their hair loose (combed backwards if long), underwent similar cleaning or shaving procedures for both conditions, and were instructed to “sit with a straight back and look straight into the camera with a neutral, relaxed facial expression.” Although the photographer was not blinded to the sleep conditions, she followed a highly standardised procedure during each photo shoot, including minimal interaction with the participants. A blinded rater chose the most typical photograph from each series of photographs. This process resulted in 46 photographs; two (one from each sleep condition) of each of the 23 participants. This part of the study took place between June and September 2007.

In October 2007 the photographs were presented at a fixed interval of six seconds in a randomised order to 65 observers (mainly students at the Karolinska Institute, mean age 30 (range 18-61) years, 40 women), who were unaware of the conditions of the study. They rated the faces for attractiveness (very unattractive to very attractive), health (very sick to very healthy), and tiredness (not at all tired to very tired) on a 100 mm visual analogue scale. After every 23 photographs a brief intermission was allowed, including a working memory task lasting 23 seconds to prevent the faces being memorised. To ensure that the observers were not primed to tiredness when rating health and attractiveness they rated the photographs for attractiveness and health in the first two sessions and tiredness in the last. To avoid the influence of possible order effects we presented the photographs in a balanced order between conditions for each session.

Statistical analyses

Data were analysed using multilevel mixed effects linear regression, with two crossed independent random effects accounting for random variation between observers and participants using the xtmixed procedure in Stata 9.2. We present the effect of condition as a percentage of change from the baseline condition as the reference using the absolute value in millimetres (rated on the visual analogue scale). No data were missing in the analyses.
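
The paper reports fitting this model with the xtmixed procedure in Stata 9.2. For readers who want to reproduce the general approach, the sketch below shows one way to fit a comparable crossed random-effects model in Python with statsmodels; the column names (rating, condition, observer, participant) and the long data layout are assumptions for illustration, not the authors’ code or data.

```python
# Hedged sketch: a crossed random-effects fit in Python/statsmodels, analogous in
# spirit to Stata's xtmixed. Column names are assumed, not taken from the paper.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ratings.csv")   # long format: one row per observer x photograph
df["all"] = 1                     # single grouping column so both random effects are crossed

# Variance components give one random intercept per observer and one per participant.
model = smf.mixedlm(
    "rating ~ condition",         # fixed effect of sleep condition
    data=df,
    groups="all",
    re_formula="0",               # no random effect for the dummy group itself
    vc_formula={"observer": "0 + C(observer)",
                "participant": "0 + C(participant)"},
)
result = model.fit()
print(result.summary())
```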

Results

Sixty-five observers rated each of the 46 photographs for attractiveness, health, and tiredness: 138 ratings by each observer and 2990 ratings for each of the three factors rated. When sleep deprived, people were rated as less healthy (visual analogue scale scores, mean 63 (SE 2) v 68 (SE 2)), more tired (53 (SE 3) v 44 (SE 3)), and less attractive (38 (SE 2) v 40 (SE 2); P<0.001 for all) than after a normal night’s sleep (table 1). Compared with the normal sleep condition, perceptions of health and attractiveness in the sleep deprived condition decreased on average by 6% and 4%, and tiredness increased by 19%.

Table 1  Multilevel mixed effects regression on how sleep deprived people are perceived with respect to attractiveness, health, and tiredness

A 10 mm increase in tiredness was associated with a −3.0 mm change in health, a 10 mm increase in health increased attractiveness by 2.4 mm, and a 10 mm increase in tiredness reduced attractiveness by 1.2 mm (table 2). These findings were also expressed as correlations, suggesting that perceived attractiveness is positively associated with perceived health (r=0.42, fig 1) and negatively with perceived tiredness (r=−0.28, fig 1). In addition, the average decrease (for each face) in attractiveness as a result of sleep deprivation was associated with changes in tiredness (−0.53, n=23, P=0.03) and in health (0.50, n=23, P=0.01). Moreover, a strong negative association was found between the respective perceptions of tiredness and health (r=−0.54, fig 1). Figure 2 shows an example of observer rated faces.

Table 2  Associations between health, tiredness, and attractiveness

Fig 1  Relations between health, tiredness, and attractiveness of 46 photographs (two each of 23 participants) rated by 65 observers on 100 mm visual analogue scales, with variation between observers removed using empirical Bayes’ estimates


Fig 2  Participant after a normal night’s sleep (left) and after sleep deprivation (right). Faces were presented in a counterbalanced order

To evaluate the mediation effects of sleep loss on attractiveness and health, tiredness was added to the models presented in table 1, following established recommendations. 16 The effect of sleep loss was significantly mediated by tiredness for both health (P<0.001) and attractiveness (P<0.001). When tiredness was added to the model (table 1) with an estimated coefficient of −2.9 (SE 0.1; P<0.001), the independent effect of sleep loss on health decreased from −4.2 to −1.8 (SE 0.5; P<0.001). The effect of sleep loss on attractiveness decreased from −1.6 (table 1) to −0.62 (SE 0.4; P=0.133), with tiredness estimated at −1.1 (SE 0.1; P<0.001). The same approach was applied to the model of attractiveness and health (table 2), with the association decreasing from 2.4 to 2.1 (SE 0.1; P<0.001) when tiredness, estimated at −0.56 (SE 0.1; P<0.001), was included.
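
The mediation check described above amounts to refitting the health (or attractiveness) model with tiredness added and comparing the sleep-loss coefficient before and after. A minimal sketch of that comparison, reusing the assumed columns from the previous example (this is not the authors’ Stata code, and a 0/1 coding of the condition variable is assumed):

```python
# Hedged sketch of the mediation step: compare the sleep-loss coefficient with and
# without tiredness in the model. Column names and 0/1 coding of `condition` are assumed.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ratings.csv")   # long format, as in the previous sketch
df["all"] = 1
vcf = {"observer": "0 + C(observer)", "participant": "0 + C(participant)"}

def fit(formula):
    return smf.mixedlm(formula, data=df, groups="all",
                       re_formula="0", vc_formula=vcf).fit()

total = fit("health ~ condition")               # total effect of sleep deprivation
direct = fit("health ~ condition + tiredness")  # direct effect, tiredness held constant

print("total effect of sleep loss: ", total.params["condition"])
print("direct effect of sleep loss:", direct.params["condition"])
# A substantial drop from the total to the direct effect is consistent with mediation
# by tiredness, mirroring the reported change from -4.2 to -1.8.
```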

Discussion

Sleep deprived people are perceived as less attractive, less healthy, and more tired than when they are well rested. Apparent tiredness was strongly related to looking less healthy and less attractive, which was also supported by the mediation analyses, indicating that a large part of the observed effects on appearing healthy and attractive was mediated by looking tired. The fact that untrained observers detected the effects of sleep loss in others not only provides evidence for a perceptual ability not previously subjected to experimental control, but also supports the notion that sleep history gives rise to socially relevant signals that provide information about the bearer. The adaptiveness of an ability to detect sleep related facial cues resonates well with other research showing that small long term deviations from the average sleep duration are associated with an increased risk of health problems and decreased longevity. 8 17 Indeed, even a few hours of sleep deprivation inflict an array of physiological changes, including changes in neural, endocrinological, immunological, and cellular functioning, that if sustained are relevant for long term health. 7 18 19 20 Here, we show that such physiological changes are paralleled by detectable facial changes.

These results relate to photographs taken in an artificial setting and presented to the observers for only six seconds. It is likely that the effects reported here would be larger in real life person to person situations, when overt behaviour and interactions add further information. Blink interval and blink duration are known to be indicators of sleepiness, 21 and trained observers are able to reliably evaluate the drowsiness of drivers by watching their videotaped faces. 22 In addition, a few of the people were perceived as healthier, less tired, and more attractive in the sleep deprived condition. It remains to be evaluated in follow-up research whether this is due to random error in judgments or is associated with specific characteristics of the observers or of the sleep deprived people they judge. Nevertheless, we believe that the present findings can be generalised to a wide variety of settings, although further studies will have to investigate the impact in clinical and other social situations.

Importantly, our findings suggest a prominent role of sleep history in several domains of interpersonal perception and judgment in which it has previously not been considered important, such as clinical judgment. In addition, because attractiveness motivates sexual behaviour, collaboration, and superior treatment, 13 sleep loss may have consequences in other social contexts. For example, it has been proposed that facial cues perceived as attractive are signals of good health and that this recognition has been selected evolutionarily to guide choice of mate and successful transmission of genes. 13 The fact that good sleep supports a healthy look, and poor sleep the reverse, may be of particular relevance in the medical setting, where health assessments are an essential part of practice. It is possible that people with sleep disturbances, clinical or otherwise, would be judged as less healthy, whereas those who have had an unusually good night’s sleep may be perceived as rather healthy. Further studies are needed to investigate the effects of acute reductions of sleep less drastic than the deprivation used in the present investigation, as well as long term clinical effects.

Conclusions

People are capable of detecting sleep loss related facial cues, and these cues modify judgments of another’s health and attractiveness. These conclusions agree well with existing models describing a link between sleep and good health, 18 23 as well as a link between attractiveness and health. 13 Future studies should focus on the relevance of these facial cues in clinical settings. These could investigate whether clinicians are better than the average population at detecting sleep or health related facial cues, and whether patients with a clinical diagnosis exhibit more tiredness and are less healthy looking than healthy people. Perhaps the more successful doctors are those who pick up on these details and act accordingly.

Taken together, our results provide important insights into judgments about health and attractiveness that are reminiscent of the anecdotal wisdom harboured in Bell’s words, and in the colloquial notion of “beauty sleep.”

What is already known on this topic

Short or disturbed sleep and fatigue constitute major risk factors for health and safety

Complaints of short or disturbed sleep are common among patients seeking healthcare

The human face is the main source of information for social signalling

What this study adds

The facial cues of sleep deprived people are sufficient for others to judge them as more tired, less healthy, and less attractive, lending the first scientific support to the concept of “beauty sleep”

By affecting doctors’ general perception of health, the sleep history of a patient may affect clinical decisions and diagnostic precision

Cite this as: BMJ 2010;341:c6614

We thank B Karshikoff for support with data acquisition and M Ingvar for comments on an earlier draft of the manuscript, both without compensation and working at the Department for Clinical Neuroscience, Karolinska Institutet, Sweden.

Contributors: JA designed the data collection, supervised and monitored data collection, wrote the statistical analysis plan, carried out the statistical analyses, obtained funding, drafted and revised the manuscript, and is guarantor. TS designed and carried out the data collection, cleaned the data, drafted, revised the manuscript, and had final approval of the manuscript. JA and TS contributed equally to the work. MI wrote the statistical analysis plan, carried out the statistical analyses, drafted the manuscript, and critically revised the manuscript. EJWVS provided statistical advice, advised on data handling, and critically revised the manuscript. AO provided advice on the methods and critically revised the manuscript. ML provided administrative support, drafted the manuscript, and critically revised the manuscript. All authors approved the final version of the manuscript.

Funding: This study was funded by the Swedish Society for Medical Research, Rut and Arvid Wolff’s Memory Fund, and the Osher Center for Integrative Medicine.

Competing interests: All authors have completed the Unified Competing Interest form at www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declare: no support from any company for the submitted work; no financial relationships with any companies that might have an interest in the submitted work in the previous 3 years; no other relationships or activities that could appear to have influenced the submitted work.

Ethical approval: This study was approved by the Karolinska Institutet’s ethical committee. Participants were compensated for their participation.

Participant consent: Participants’ consent obtained.

Data sharing: Statistical code and dataset of ratings are available from the corresponding author at john.axelsson{at}ki.se .

This is an open-access article distributed under the terms of the Creative Commons Attribution Non-commercial License, which permits use, distribution, and reproduction in any medium, provided the original work is properly cited, the use is non commercial and is otherwise in compliance with the license. See: http://creativecommons.org/licenses/by-nc/2.0/ and http://creativecommons.org/licenses/by-nc/2.0/legalcode .

References

1. Deten A, Volz HC, Clamors S, Leiblein S, Briest W, Marx G, et al. Hematopoietic stem cells do not repair the infarcted mouse heart. Cardiovasc Res 2005;65:52-63.
2. Doyle AC. The case-book of Sherlock Holmes: selected stories. Wordsworth, 1993.
3. Lieberman MD, Gaunt R, Gilbert DT, Trope Y. Reflection and reflexion: a social cognitive neuroscience approach to attributional inference. Adv Exp Soc Psychol 2002;34:199-249.
4. Drummond SPA, Brown GG, Gillin JC, Stricker JL, Wong EC, Buxton RB. Altered brain response to verbal learning following sleep deprivation. Nature 2000;403:655-7.
5. Harrison Y, Horne JA. The impact of sleep deprivation on decision making: a review. J Exp Psychol Appl 2000;6:236-49.
6. Huber R, Ghilardi MF, Massimini M, Tononi G. Local sleep and learning. Nature 2004;430:78-81.
7. Spiegel K, Leproult R, Van Cauter E. Impact of sleep debt on metabolic and endocrine function. Lancet 1999;354:1435-9.
8. Kripke DF, Garfinkel L, Wingard DL, Klauber MR, Marler MR. Mortality associated with sleep duration and insomnia. Arch Gen Psychiatry 2002;59:131-6.
9. Olson LG, Ambrogetti A. Waking up to sleep disorders. Br J Hosp Med (Lond) 2006;67:118-20.
10. Rajaratnam SM, Arendt J. Health in a 24-h society. Lancet 2001;358:999-1005.
11. Ranjbaran Z, Keefer L, Stepanski E, Farhadi A, Keshavarzian A. The relevance of sleep abnormalities to chronic inflammatory conditions. Inflamm Res 2007;56:51-7.
12. Haxby JV, Hoffman EA, Gobbini MI. The distributed human neural system for face perception. Trends Cogn Sci 2000;4:223-33.
13. Rhodes G. The evolutionary psychology of facial beauty. Annu Rev Psychol 2006;57:199-226.
14. Todorov A, Mandisodza AN, Goren A, Hall CC. Inferences of competence from faces predict election outcomes. Science 2005;308:1623-6.
15. Willis J, Todorov A. First impressions: making up your mind after a 100-ms exposure to a face. Psychol Sci 2006;17:592-8.
16. Krull JL, MacKinnon DP. Multilevel modeling of individual and group level mediated effects. Multivariate Behav Res 2001;36:249-77.
17. Ayas NT, White DP, Manson JE, Stampfer MJ, Speizer FE, Malhotra A, et al. A prospective study of sleep duration and coronary heart disease in women. Arch Intern Med 2003;163:205-9.
18. Bryant PA, Trinder J, Curtis N. Sick and tired: does sleep have a vital role in the immune system? Nat Rev Immunol 2004;4:457-67.
19. Cirelli C. Cellular consequences of sleep deprivation in the brain. Sleep Med Rev 2006;10:307-21.
20. Irwin MR, Wang M, Campomayor CO, Collado-Hidalgo A, Cole S. Sleep deprivation and activation of morning levels of cellular and genomic markers of inflammation. Arch Intern Med 2006;166:1756-62.
21. Schleicher R, Galley N, Briest S, Galley L. Blinks and saccades as indicators of fatigue in sleepiness warnings: looking tired? Ergonomics 2008;51:982-1010.
22. Wierwille WW, Ellsworth LA. Evaluation of driver drowsiness by trained raters. Accid Anal Prev 1994;26:571-81.
23. Horne J. Why we sleep—the functions of sleep in humans and other mammals. Oxford University Press, 1988.


Guide to Experimental Design | Overview, 5 steps & Examples

Published on December 3, 2019 by Rebecca Bevans. Revised on June 21, 2023.

Experiments are used to study causal relationships. You manipulate one or more independent variables and measure their effect on one or more dependent variables.

Experimental design means creating a set of procedures to systematically test a hypothesis. A good experimental design requires a strong understanding of the system you are studying.

There are five key steps in designing an experiment:

  • Consider your variables and how they are related
  • Write a specific, testable hypothesis
  • Design experimental treatments to manipulate your independent variable
  • Assign subjects to groups, either between-subjects or within-subjects
  • Plan how you will measure your dependent variable

For valid conclusions, you also need to select a representative sample and control any extraneous variables that might influence your results. This minimizes several types of research bias, particularly sampling bias, survivorship bias, and attrition bias. If random assignment of participants to control and treatment groups is impossible, unethical, or highly difficult, consider an observational study instead.

Table of contents

  • Step 1: Define your variables
  • Step 2: Write your hypothesis
  • Step 3: Design your experimental treatments
  • Step 4: Assign your subjects to treatment groups
  • Step 5: Measure your dependent variable
  • Other interesting articles
  • Frequently asked questions about experiments

Step 1: Define your variables

You should begin with a specific research question. We will work with two research question examples, one from health sciences and one from ecology.

To translate your research question into an experimental hypothesis, you need to define the main variables and make predictions about how they are related.

Start by simply listing the independent and dependent variables .

Then you need to think about possible extraneous and confounding variables and consider how you might control  them in your experiment.

Finally, you can put these variables together into a diagram. Use arrows to show the possible relationships between variables and include signs to show the expected direction of the relationships.

Diagram of the relationship between variables in a sleep experiment

Here we predict that increasing temperature will increase soil respiration and decrease soil moisture, while decreasing soil moisture will lead to decreased soil respiration.


Step 2: Write your hypothesis

Now that you have a strong conceptual understanding of the system you are studying, you should be able to write a specific, testable hypothesis that addresses your research question.

The next steps will describe how to design a controlled experiment . In a controlled experiment, you must be able to:

  • Systematically and precisely manipulate the independent variable(s).
  • Precisely measure the dependent variable(s).
  • Control any potential confounding variables.

If your study system doesn’t match these criteria, there are other types of research you can use to answer your research question.

Step 3: Design your experimental treatments

How you manipulate the independent variable can affect the experiment’s external validity – that is, the extent to which the results can be generalized and applied to the broader world.

First, you may need to decide how widely to vary your independent variable. In the soil-warming example, for instance, you could increase temperature:

  • just slightly above the natural range for your study region.
  • over a wider range of temperatures to mimic future warming.
  • over an extreme range that is beyond any possible natural variation.

Second, you may need to choose how finely to vary your independent variable. Sometimes this choice is made for you by your experimental system, but often you will need to decide, and this will affect how much you can infer from your results. In the phone use example, for instance, you could treat phone use as:

  • a categorical variable : either as binary (yes/no) or as levels of a factor (no phone use, low phone use, high phone use).
  • a continuous variable (minutes of phone use measured every night).

Step 4: Assign your subjects to treatment groups

How you apply your experimental treatments to your test subjects is crucial for obtaining valid and reliable results.

First, you need to consider the study size : how many individuals will be included in the experiment? In general, the more subjects you include, the greater your experiment’s statistical power , which determines how much confidence you can have in your results.
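
As a rough illustration of how study size and power interact, the sketch below uses statsmodels to ask how many subjects per group a simple two-group comparison would need; the assumed effect size (Cohen's d = 0.5) and targets (alpha = 0.05, power = 0.8) are illustrative choices, not recommendations from this guide.

```python
# Hedged sketch: required sample size for a two-group t test at a given power.
# The effect size of 0.5 is an assumed, illustrative value.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Roughly {n_per_group:.0f} subjects per group for 80% power")  # about 64
```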

Then you need to randomly assign your subjects to treatment groups . Each group receives a different level of the treatment (e.g. no phone use, low phone use, high phone use).

You should also include a control group , which receives no treatment. The control group tells us what would have happened to your test subjects without any experimental intervention.

When assigning your subjects to groups, there are two main choices you need to make:

  • A completely randomized design vs a randomized block design .
  • A between-subjects design vs a within-subjects design .

Randomization

An experiment can be completely randomized or randomized within blocks (aka strata):

  • In a completely randomized design , every subject is assigned to a treatment group at random.
  • In a randomized block design (aka stratified random design), subjects are first grouped according to a characteristic they share, and then randomly assigned to treatments within those groups.

Sometimes randomization isn’t practical or ethical , so researchers create partially-random or even non-random designs. An experimental design where treatments aren’t randomly assigned is called a quasi-experimental design .
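
To make the difference between the two fully randomized options concrete, here is a small sketch of assigning subjects to the phone-use groups mentioned above, first completely at random and then within blocks; the subject list, the blocking variable (sex), and the group labels are invented for illustration.

```python
# Hedged sketch contrasting completely randomized and randomized block assignment.
# Subject IDs, the blocking variable, and group labels are invented for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
subjects = pd.DataFrame({
    "id": range(12),
    "sex": ["F", "M"] * 6,   # blocking characteristic
})
groups = ["no_phone", "low_phone", "high_phone"]

# Completely randomized design: shuffle group labels across all subjects at once.
subjects["complete"] = rng.permutation(np.tile(groups, len(subjects) // len(groups)))

# Randomized block design: repeat the same shuffle separately within each block.
def assign_within_block(block):
    block = block.copy()
    block["blocked"] = rng.permutation(np.tile(groups, len(block) // len(groups)))
    return block

subjects = subjects.groupby("sex", group_keys=False).apply(assign_within_block)
print(subjects.sort_values("id"))
```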

Between-subjects vs. within-subjects

In a between-subjects design (also known as an independent measures design or classic ANOVA design), individuals receive only one of the possible levels of an experimental treatment.

In medical or social research, you might also use matched pairs within your between-subjects design to make sure that each treatment group contains the same variety of test subjects in the same proportions.

In a within-subjects design (also known as a repeated measures design), every individual receives each of the experimental treatments consecutively, and their responses to each treatment are measured.

Within-subjects or repeated measures can also refer to an experimental design where an effect emerges over time, and individual responses are measured over time in order to measure this effect as it emerges.

Counterbalancing (randomizing or reversing the order of treatments among subjects) is often used in within-subjects designs to ensure that the order of treatment application doesn’t influence the results of the experiment.

Here's why students love Scribbr's proofreading services

Discover proofreading & editing

Step 5: Measure your dependent variable

Finally, you need to decide how you’ll collect data on your dependent variable outcomes. You should aim for reliable and valid measurements that minimize research bias or error.

Some variables, like temperature, can be objectively measured with scientific instruments. Others may need to be operationalized to turn them into measurable observations.

  • Ask participants to record what time they go to sleep and get up each day.
  • Ask participants to wear a sleep tracker.
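
For instance, a self-report sleep diary only becomes a measurable dependent variable once you turn the recorded times into a number. A minimal sketch of that operationalization step (the column names and diary format are assumptions, not part of this guide):

```python
# Hedged sketch: computing hours slept from self-reported bed and wake times.
import pandas as pd

diary = pd.DataFrame({
    "bedtime": pd.to_datetime(["2024-01-01 23:30", "2024-01-03 00:15"]),
    "wake":    pd.to_datetime(["2024-01-02 07:00", "2024-01-03 08:05"]),
})
diary["hours_slept"] = (diary["wake"] - diary["bedtime"]).dt.total_seconds() / 3600
print(diary)   # hours_slept becomes the measured dependent variable
```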

How precisely you measure your dependent variable also affects the kinds of statistical analysis you can use on your data.

Experiments are always context-dependent, and a good experimental design will take into account all of the unique considerations of your study system to produce information that is both valid and relevant to your research question.

Other interesting articles

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Student’s  t -distribution
  • Normal distribution
  • Null and Alternative Hypotheses
  • Chi square tests
  • Confidence interval
  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Likert scale

Research bias

  • Implicit bias
  • Framing effect
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hindsight bias
  • Affect heuristic

Frequently asked questions about experiments

Experimental design means planning a set of procedures to investigate a relationship between variables. To design a controlled experiment, you need:

  • A testable hypothesis
  • At least one independent variable that can be precisely manipulated
  • At least one dependent variable that can be precisely measured

When designing the experiment, you decide:

  • How you will manipulate the variable(s)
  • How you will control for any potential confounding variables
  • How many subjects or samples will be included in the study
  • How subjects will be assigned to treatment levels

Experimental design is essential to the internal and external validity of your experiment.

The key difference between observational studies and experimental designs is that a well-done observational study does not influence the responses of participants, while experiments do have some sort of treatment condition applied to at least some participants by random assignment .

A confounding variable , also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design , it’s important to identify potential confounding variables and plan how you will reduce their impact.

In a between-subjects design , every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design , each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word “between” means that you’re comparing different conditions between groups, while the word “within” means you’re comparing different conditions within the same group.

An experimental group, also known as a treatment group, receives the treatment whose effect researchers wish to study, whereas a control group does not. They should be identical in all other ways.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator.

Bevans, R. (2023, June 21). Guide to Experimental Design | Overview, 5 steps & Examples. Scribbr. Retrieved April 2, 2024, from https://www.scribbr.com/methodology/experimental-design/


Writing Center: Experimental Research Papers


FAQs About Experimental Research Papers (APA)

What is a research paper? 

A researcher uses a research paper to explain how they conducted a research study to answer a question or test a hypothesis. They explain why they conducted the study, the research question or hypothesis they tested, how they conducted the study, the results of their study, and the implications of these results. 

What is the purpose of an experimental research paper? 

A research paper is intended to inform others about advances in a particular field of study. The researcher identified a gap in the existing research and conducted a study to help fill it; the paper communicates the new knowledge that the study’s results contribute.

What sections are included in an experimental research paper?

A typical research paper contains a Title Page, Abstract, Introduction, Methods, Results, Discussion, and References section. Some also contain a Table and Figures section and Appendix section. 

What citation style is used for experimental research papers? 

APA (American Psychological Association) style is most commonly used for research papers. 

Structure Of Experimental Research Papers (APA)

Title Page

  • Answers the question of “What is this paper about and who wrote it?”
  • Located on the first page of the paper
  • The author’s note acknowledges any support that the authors received from others
  • A student paper also includes the course number and name, instructor’s name, and assignment due date
  • Contains a title that summarizes the purpose and content of the research study and engages the audience

Abstract

  • No longer than 250 words
  • Summarizes important background information, the research questions and/or hypothesis, methods, key findings, and implications of the findings

Introduction

  • Explains what the topic of the research is and why the topic is worth studying
  • Summarizes and discusses prior research conducted on the topic
  • Identifies unresolved issues and gaps in past research that the current research will address
  • Ends with an overview of the current research study, including the independent and dependent variables, the research questions or hypotheses, and the objective of the research

Methods

  • Explains how the research study was conducted
  • Typically includes 3 sections: Participants, Materials, and Procedure
  • Includes characteristics of the subjects, how the subjects were selected and recruited, how their anonymity was protected, and what feedback was provided to the participants
  • Describes any equipment, surveys, tests, questionnaires, informed consent forms, and observational techniques
  • Describes the independent and dependent variables, the type of research design, and how the data was collected

Results

  • Explains what results were found in the research study
  • Describes the data that was collected and the results of statistical tests

Discussion

  • Explains the significance of the results
  • Accepts or rejects the hypotheses
  • Details the implications of these findings
  • Addresses the limitations of the study and areas for future research

References

  • Includes all sources that were cited in the research study
  • Adheres to APA citation style

Table and Figures

  • Includes all tables and/or figures that were used in the research study
  • Each table and figure is placed on a separate page
  • Tables are included before figures
  • Each begins with a bolded, centered header such as “ Table 1 ”

Appendix

  • Appends all forms, surveys, tests, etc. that were used in the study
  • Only includes documents that were referenced in the Methods section
  • Each entry is placed on a separate page
  • Each begins with a bolded, centered header such as “ Appendix A ”

Tips For Experimental Research Papers (APA)

  • Initial interest will motivate you to complete your study 
  • Your entire study will be centered around this question or statement 
  • Use only verifiable sources that provide accurate information about your topic 
  • You need to thoroughly understand the field of study your topic is on to help you recognize the gap your research will fill and the significance of your results
  • This will help you identify what you should study and what the significance of your study will be 
  • Create an outline before you begin writing to help organize your thoughts and direct you in your writing 
  • This will prevent you from losing the source or forgetting to cite the source 
  • Work on one section at a time, rather than trying to complete multiple sections at once
  • This information can be easily referred to as you write your various sections 
  • When conducting your research, working general to specific will help you narrow your topic and fully understand the field your topic is in 
  • When writing your literature review, writing from general to specific will help the audience understand your overall topic and the narrow focus of your research 
  • This will prevent you from losing sources you may need later 
  • Incorporate correct APA formatting as you write, rather than changing the formatting at the end of the writing process 

Checklist For Experimental Research Papers (APA)

  • If the paper is a student paper, it contains the title of the project, the author’s name(s), the instructor's name, course number and name, and assignment due date
  • If the paper is a professional paper, it includes the title of the paper, the author’s name(s), the institutional affiliation, and the author note
  • Begins on the first page of the paper
  • The title is typed in upper and lowercase letters, four spaces below the top of the paper, and written in boldface 
  • Other information is separated by a space from the title

Title (found on title page)

  • Informs the audience about the purpose of the paper 
  • Captures the attention of the audience 
  • Accurately reflects the purpose and content of the research paper 

Abstract 

  • Labeled as “ Abstract ”
  • Begins on the second page 
  • Provides a short, concise summary of the content of the research paper 
  • Includes background information necessary to understand the topic 
  • Background information demonstrates the purpose of the paper
  • Contains the hypothesis and/or research questions addressed in the paper
  • Has a brief description of the methods used 
  • Details the key findings and significance of the results
  • Illustrates the implications of the research study 
  • Contains less than 250 words

Introduction 

  • Starts on the third page 
  • Includes the title of the paper in bold at the top of the page
  • Contains a clear statement of the problem that the paper sets out to address 
  • Places the research paper within the context of previous research on the topic 
  • Explains the purpose of the research study and what you hope to find
  • Describes the significance of the study 
  • Details what new insights the research will contribute
  • Concludes with a brief description of what information will be mentioned in the literature review

Literature Review

  • Labeled as “ Literature Review”
  • Presents a general description of the problem area 
  • Defines any necessary terms 
  • Discusses and summarizes prior research on the selected topic 
  • Identifies any unresolved issues or gaps in research that the current research plans to address
  • Concludes with a summary of the current research study, including the independent and dependent variables, the research questions or hypotheses, and the objective of the research  
Methods

  • Labeled as “ Methods ”
  • Efficiently explains how the research study was conducted 
  • Appropriately divided into sections
  • Describes the characteristics of the participants 
  • Explains how the participants were selected 
  • Details how the anonymity of the participants was protected 
  • Notes what feedback the participants will be provided 
  • Describes all materials and instruments that were used 
  • Mentions how the procedure was conducted and data collected
  • Notes the independent and dependent variables 
  • Includes enough information that another researcher could duplicate the research 

Results 

  • Labeled as “ Results ”
  • Describes the data that was collected
  • Explains the results of statistical tests that were performed
  • Omits any analysis or discussion of the implications of the study 

Discussion 

  • Labeled as “ Discussion ”
  • Describes the significance of the results 
  • Relates the results to the research questions and/or hypotheses
  • States whether the hypotheses should be rejected or accepted 
  • Addresses limitations of the study, including potential bias, confounds, imprecision of measures, and limits to generalizability
  • Explains how the study adds to the knowledge base and expands upon past research
References

  • Labeled as “ References ”
  • Correctly cites sources according to APA formatting 
  • Orders sources alphabetically
  • All sources included in the study are cited in the reference section 

Table and Figures (optional)

  •  Each table and each figure is placed on a separate page 
  • Tables and figures are included after the reference page
  • Tables and figures are correctly labeled
  • Each table and figure begins with a bolded, centered header such as “ Table 1 ” or “ Table 2 ”

Appendix (optional) 

  • Any forms, surveys, tests, etc. are placed in the Appendix
  • All appendix entries are mentioned in the Methods section 
  • Each appendix begins on a new page
  • Each appendix begins with a bolded, centered header such as “ Appendix A, ” “ Appendix B ”

Additional Resources For Experimental Research Papers (APA)

  • https://www.mcwritingcenterblog.org/single-post/how-to-conduct-research-using-the-library-s-resources
  • https://www.mcwritingcenterblog.org/single-post/how-to-read-academic-articles
  • https://researchguides.ben.edu/source-evaluation   
  • https://researchguides.library.brocku.ca/external-analysis/evaluating-sources
  • https://writing.wisc.edu/handbook/assignments/planresearchpaper/
  • https://nmu.edu/writingcenter/tips-writing-research-paper
  • https://writingcenter.gmu.edu/guides/how-to-write-a-research-question
  • https://www.unr.edu/writing-speaking-center/student-resources/writing-speaking-resources/guide-to-writing-research-papers
  • https://drive.google.com/drive/folders/1F4DFWf85zEH4aZvm10i8Ahm_3xnAekal?usp=sharing
  • https://owl.purdue.edu/owl/research_and_citation/apa_style/apa_formatting_and_style_guide/general_format.html
  • https://libguides.elmira.edu/research
  • https://www.nhcc.edu/academics/library/doing-library-research/basic-steps-research-process
  • https://libguides.wustl.edu/research

Have a language expert improve your writing

Run a free plagiarism check in 10 minutes, automatically generate references for free.

  • Knowledge Base
  • Methodology
  • A Quick Guide to Experimental Design | 5 Steps & Examples

A Quick Guide to Experimental Design | 5 Steps & Examples

Published on 11 April 2022 by Rebecca Bevans . Revised on 5 December 2022.

Experiments are used to study causal relationships . You manipulate one or more independent variables and measure their effect on one or more dependent variables.

Experimental design means creating a set of procedures to systematically test a hypothesis . A good experimental design requires a strong understanding of the system you are studying. 

There are five key steps in designing an experiment:

  • Consider your variables and how they are related
  • Write a specific, testable hypothesis
  • Design experimental treatments to manipulate your independent variable
  • Assign subjects to groups, either between-subjects or within-subjects
  • Plan how you will measure your dependent variable

For valid conclusions, you also need to select a representative sample and control any  extraneous variables that might influence your results. If if random assignment of participants to control and treatment groups is impossible, unethical, or highly difficult, consider an observational study instead.

Table of contents

Step 1: define your variables, step 2: write your hypothesis, step 3: design your experimental treatments, step 4: assign your subjects to treatment groups, step 5: measure your dependent variable, frequently asked questions about experimental design.

You should begin with a specific research question . We will work with two research question examples, one from health sciences and one from ecology:

To translate your research question into an experimental hypothesis, you need to define the main variables and make predictions about how they are related.

Start by simply listing the independent and dependent variables .

Then you need to think about possible extraneous and confounding variables and consider how you might control  them in your experiment.

Finally, you can put these variables together into a diagram. Use arrows to show the possible relationships between variables and include signs to show the expected direction of the relationships.

Diagram of the relationship between variables in a sleep experiment

Here we predict that increasing temperature will increase soil respiration and decrease soil moisture, while decreasing soil moisture will lead to decreased soil respiration.

Prevent plagiarism, run a free check.

Now that you have a strong conceptual understanding of the system you are studying, you should be able to write a specific, testable hypothesis that addresses your research question.

The next steps will describe how to design a controlled experiment . In a controlled experiment, you must be able to:

  • Systematically and precisely manipulate the independent variable(s).
  • Precisely measure the dependent variable(s).
  • Control any potential confounding variables.

If your study system doesn’t match these criteria, there are other types of research you can use to answer your research question.

How you manipulate the independent variable can affect the experiment’s external validity – that is, the extent to which the results can be generalised and applied to the broader world.

First, you may need to decide how widely to vary your independent variable.

  • just slightly above the natural range for your study region.
  • over a wider range of temperatures to mimic future warming.
  • over an extreme range that is beyond any possible natural variation.

Second, you may need to choose how finely to vary your independent variable. Sometimes this choice is made for you by your experimental system, but often you will need to decide, and this will affect how much you can infer from your results.

  • a categorical variable : either as binary (yes/no) or as levels of a factor (no phone use, low phone use, high phone use).
  • a continuous variable (minutes of phone use measured every night).

How you apply your experimental treatments to your test subjects is crucial for obtaining valid and reliable results.

First, you need to consider the study size : how many individuals will be included in the experiment? In general, the more subjects you include, the greater your experiment’s statistical power , which determines how much confidence you can have in your results.

Then you need to randomly assign your subjects to treatment groups . Each group receives a different level of the treatment (e.g. no phone use, low phone use, high phone use).

You should also include a control group , which receives no treatment. The control group tells us what would have happened to your test subjects without any experimental intervention.

When assigning your subjects to groups, there are two main choices you need to make:

  • A completely randomised design vs a randomised block design .
  • A between-subjects design vs a within-subjects design .

Randomisation

An experiment can be completely randomised or randomised within blocks (aka strata):

  • In a completely randomised design , every subject is assigned to a treatment group at random.
  • In a randomised block design (aka stratified random design), subjects are first grouped according to a characteristic they share, and then randomly assigned to treatments within those groups.

Sometimes randomisation isn’t practical or ethical , so researchers create partially-random or even non-random designs. An experimental design where treatments aren’t randomly assigned is called a quasi-experimental design .

Between-subjects vs within-subjects

In a between-subjects design (also known as an independent measures design or classic ANOVA design), individuals receive only one of the possible levels of an experimental treatment.

In medical or social research, you might also use matched pairs within your between-subjects design to make sure that each treatment group contains the same variety of test subjects in the same proportions.

In a within-subjects design (also known as a repeated measures design), every individual receives each of the experimental treatments consecutively, and their responses to each treatment are measured.

Within-subjects or repeated measures can also refer to an experimental design where an effect emerges over time, and individual responses are measured over time in order to measure this effect as it emerges.

Counterbalancing (randomising or reversing the order of treatments among subjects) is often used in within-subjects designs to ensure that the order of treatment application doesn’t influence the results of the experiment.
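The sketch below illustrates one common way to counterbalance a within-subjects design: cycling through every possible treatment order so each order is used about equally often. The subject IDs are hypothetical and the treatment labels reuse the phone-use example from above.

```python
# A minimal counterbalancing sketch: assign each subject one of the possible
# treatment orders so that order effects are spread across subjects.
from itertools import permutations, cycle

treatments = ["no phone use", "low phone use", "high phone use"]
subjects = [f"S{i:02d}" for i in range(1, 7)]  # hypothetical subject IDs

orders = cycle(permutations(treatments))       # 3! = 6 possible orders
schedule = {subject: next(orders) for subject in subjects}

for subject, order in schedule.items():
    print(subject, "->", " then ".join(order))
```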

Finally, you need to decide how you’ll collect data on your dependent variable outcomes. You should aim for reliable and valid measurements that minimise bias or error.

Some variables, like temperature, can be objectively measured with scientific instruments. Others may need to be operationalised to turn them into measurable observations. For example, to measure sleep you could:

  • Ask participants to record what time they go to sleep and get up each day.
  • Ask participants to wear a sleep tracker.

How precisely you measure your dependent variable also affects the kinds of statistical analysis you can use on your data.

Experiments are always context-dependent, and a good experimental design will take into account all of the unique considerations of your study system to produce information that is both valid and relevant to your research question.

Experimental designs are a set of procedures that you plan in order to examine the relationship between variables that interest you.

To design a successful experiment, first identify:

  • A testable hypothesis
  • One or more independent variables that you will manipulate
  • One or more dependent variables that you will measure

When designing the experiment, first decide:

  • How your variable(s) will be manipulated
  • How you will control for any potential confounding or lurking variables
  • How many subjects you will include
  • How you will assign treatments to your subjects

The key difference between observational studies and experiments is that, done correctly, an observational study will never influence the responses or behaviours of participants. Experimental designs will have a treatment condition applied to at least a portion of participants.

A confounding variable , also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design , it’s important to identify potential confounding variables and plan how you will reduce their impact.

In a between-subjects design , every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design , each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word ‘between’ means that you’re comparing different conditions between groups, while the word ‘within’ means you’re comparing different conditions within the same group.


Experimental Research

Experimental Research Examples

Humans are born curious. As babies, we quench our questions by navigating our surroundings with the senses available to us. Our fascination with the unknown lingers into adulthood, so much so that some of us build careers out of trying to uncover the mysteries of the universe. To learn about a point of interest, one of the things we do is isolate and replicate the phenomenon in laboratories and controlled environments. Experimental research is a causal investigation of cause-effect relationships, carried out by manipulating the variable of interest while controlling all other factors.

Experimental research is generally quantitative research centered on validating or refuting claims about causal relationships. We use this method in the natural, applied, theoretical, and social sciences, to name a few. This research follows a scientific design that places weight on replicability. When the methodology and results are replicable, the study can be verified by reviewers and critics.

Notable Experiments

We have been conducting experiments for the longest time. Experimental studies done thousands of years ago show that, even with unrefined apparatus and limited knowledge, we were already trying to answer the questions of the universe. We had to start somewhere.

Anatomical Anomaly

Throughout history, societal beliefs have restricted scientific development. This is especially true for modern medicine. For a long time, studying and opening cadavers was a punishable crime, so physicians based their knowledge of the human body on animal dissections. Because animals have a different body organization than humans, this limited what we knew about ourselves. It took actual studies and experiments on the human body to curtail the misinformation and improve medical knowledge.

Reviewing Resemblance

A garden of fuchsias and peas helped change our understanding of heredity and inheritable traits. Mendel was curious about why fuchsia plants produce flower colors the way they do. He crossed varieties of the plant and obtained consistent results. He then crossed pea plants and again found repeatable results: the characteristics of the parent plants are passed down to their offspring to a certain degree of similarity. He also worked out the predictability with which certain traits appear in the offspring. Mendelian genetics explains the laws of inheritance that are still relevant today.

Canine Conditioning 

In the history of psychology research, one experiment will always ring a bell. Pavlov conditioned a dog to expect food when a bell was rung. After repetitions of this approach, the dog started to salivate at the sound of the bell, even when Pavlov didn’t introduce the food. His work on training the reflexes and the mind is in line with the plasticity of the brain to learn and unlearn relationships based on stimuli.

Correlation Vs. Causation

We can opt for an experimental approach to research when we want to determine whether the hypothesized cause produces the expected effect. We do this by following a scientific research method and design that emphasizes the replicability of results to limit and reduce biases. By isolating the variables and manipulating treatments, we can establish causation. This is important if we are to find out the relationship between A and B.

In our experiments, we will encounter two or more phenomena, and we might mislabel their connection. There are instances where that relationship is both correlative and causative. What we need to remember is that correlation is not causation. We can say that A causes B when event B is an explicit product of and entirely dependent on event A. Events A and B are merely correlated when they appear together but, after experimentation, A does not necessarily result in B.

However, it is not enough to say A caused B. Our results are still subject to statistical treatment to determine the validity of the findings and the degree of causation. We still have to ask how much A influences B. Only then can we accept or reject our hypothesis .

Experimental Research Value

Experimental research is trial and error with an educated basis. It lets us determine what works, what doesn't, and the underlying relationship. In our daily life, we engage in pseudo-experiments. While cooking, for instance, you taste the dish before you decide to pour in additional seasoning: you test first whether the food is fine without additives.

In some fields of science, the results of an experiment can be used to generalize a relationship as true for similar, if not all, cases. Experimental research papers pave the way for the formation of theories. When those theories remain unrefuted for a long time, they can become laws that explain universal phenomena.

10+ Experimental Research Examples

Go over the following examples of experimental research papers. They may help you gain a head start on your study or get you unstuck in your experiment.

1. Experimental Research Design Example
2. Experimental Data Quality Research Example
3. Experimental Research on Labor Market Discrimination
4. Experimental Studies Research Example
5. Short Description of Experimental Research Example
6. Sample Experimental Design Research Example
7. Experimental Research on Democracy Example
8. Standards for Experimental Research Example
9. Experimental Research for Evaluation Example
10. Defense Experimental Research Example
11. Formal Experimental Research in DOC

How To Start Your Experiment

The best scientists and researchers started with the basics, too. Here are reminders on how you can improve your research writing skills. Who knows, one day you may join the ranks of world-changers with your experimental research report.

1. Identify the Problem

To solve a problem, you need to define it first. You can begin by identifying the field of research you wish to investigate, then finding gaps in knowledge in the related literature. An original work on a timely and relevant issue will help with the approval of your research proposal. After you have read scholarly articles about the topic, you can start narrowing the focus of your research to a specific question.

2. Design the Experiment

Create a research plan for your intended study with the following notes in mind. An experimental research design ideally employs a probabilistic sampling method so that biases do not influence the validity of your work; however, certain experiments call for non-probabilistic sampling techniques. Your experiment should also include a control group that receives ambient conditions or blank treatments. This setup helps you objectively quantify the relationship between A and B.

3. Test the Hypothesis

In performing your experiment, you manipulate one variable (the independent variable), and the effect of the manipulation is reflected in the dependent variable. By manipulating the factors hypothesized to cause event B, you can determine whether A does, in fact, cause B. You can then input the raw data into statistical analysis software and tools to see whether a valid conclusion about the relationship between A and B can be drawn. Correlation or causation, and their degree, can also be assessed with different statistical tests.
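As a minimal illustration of this step, the sketch below compares a hypothetical treatment group with a control group using an independent-samples t-test. The numbers are invented, and in practice you would choose the test that matches your design and data.

```python
# A minimal hypothesis-test sketch with hypothetical outcome data.
from scipy import stats

control   = [4.1, 3.8, 4.5, 4.0, 3.9, 4.2]   # outcome B without treatment A
treatment = [4.9, 5.1, 4.7, 5.3, 4.8, 5.0]   # outcome B with treatment A

t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# If p falls below the chosen significance level (e.g. 0.05), we reject the
# null hypothesis that A has no effect on B.
```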

4. Publish the Findings

After you have gone through all the effort of conducting your research, the next step is communicating the findings to the academic community and the public, especially if public and government entities funded the study. You do this by submitting your paper to journals and academic conferences. What use is the new knowledge you have worked for if you keep the results to yourself?

Experimental research separates science from fiction. Despite criticisms that this method exists in an ideal world removed from reality, we cannot downplay its merits in the search for knowledge. Because the results are observable, replicable, and appreciable in a real-world sense, this research type will always have a place in the development of scientific knowledge and the improvement of humankind. For as long as we are curious, science will keep growing.


J Athl Train. v.45(1); Jan-Feb 2010

Study/Experimental/Research Design: Much More Than Statistics

Kenneth L. Knight

Brigham Young University, Provo, UT

The purpose of study, experimental, or research design in scientific manuscripts has changed significantly over the years. It has evolved from an explanation of the design of the experiment (ie, data gathering or acquisition) to an explanation of the statistical analysis. This practice makes “Methods” sections hard to read and understand.

Objective:

To clarify the difference between study design and statistical analysis, to show the advantages of a properly written study design on article comprehension, and to encourage authors to correctly describe study designs.

Description:

The role of study design is explored from the introduction of the concept by Fisher through modern-day scientists and the AMA Manual of Style . At one time, when experiments were simpler, the study design and statistical design were identical or very similar. With the complex research that is common today, which often includes manipulating variables to create new variables and the multiple (and different) analyses of a single data set, data collection is very different than statistical design. Thus, both a study design and a statistical design are necessary.

Advantages:

Scientific manuscripts will be much easier to read and comprehend. A proper experimental design serves as a road map to the study methods, helping readers to understand more clearly how the data were obtained and, therefore, assisting them in properly analyzing the results.

Study, experimental, or research design is the backbone of good research. It directs the experiment by orchestrating data collection, defines the statistical analysis of the resultant data, and guides the interpretation of the results. When properly described in the written report of the experiment, it serves as a road map to readers, 1 helping them negotiate the “Methods” section, and, thus, it improves the clarity of communication between authors and readers.

A growing trend is to equate study design with only the statistical analysis of the data. The design statement typically is placed at the end of the “Methods” section as a subsection called “Experimental Design” or as part of a subsection called “Data Analysis.” This placement, however, equates experimental design and statistical analysis, minimizing the effect of experimental design on the planning and reporting of an experiment. This linkage is inappropriate, because some of the elements of the study design that should be described at the beginning of the “Methods” section are instead placed in the “Statistical Analysis” section or, worse, are absent from the manuscript entirely.

Have you ever interrupted your reading of the “Methods” to sketch out the variables in the margins of the paper as you attempt to understand how they all fit together? Or have you jumped back and forth from the early paragraphs of the “Methods” section to the “Statistics” section to try to understand which variables were collected and when? These efforts would be unnecessary if a road map at the beginning of the “Methods” section outlined how the independent variables were related, which dependent variables were measured, and when they were measured. When they were measured is especially important if the variables used in the statistical analysis were a subset of the measured variables or were computed from measured variables (such as change scores).

The purpose of this Communications article is to clarify the purpose and placement of study design elements in an experimental manuscript. Adopting these ideas may improve your science and surely will enhance the communication of that science. These ideas will make experimental manuscripts easier to read and understand and, therefore, will allow them to become part of readers' clinical decision making.

WHAT IS A STUDY (OR EXPERIMENTAL OR RESEARCH) DESIGN?

The terms study design, experimental design, and research design are often thought to be synonymous and are sometimes used interchangeably in a single paper. Avoid doing so. Use the term that is preferred by the style manual of the journal for which you are writing. Study design is the preferred term in the AMA Manual of Style , 2 so I will use it here.

A study design is the architecture of an experimental study 3 and a description of how the study was conducted, 4 including all elements of how the data were obtained. 5 The study design should be the first subsection of the “Methods” section in an experimental manuscript (see the Table ). “Statistical Design” or, preferably, “Statistical Analysis” or “Data Analysis” should be the last subsection of the “Methods” section.

Table. Elements of a “Methods” Section


The “Study Design” subsection describes how the variables and participants interacted. It begins with a general statement of how the study was conducted (eg, crossover trials, parallel, or observational study). 2 The second element, which usually begins with the second sentence, details the number of independent variables or factors, the levels of each variable, and their names. A shorthand way of doing so is with a statement such as “A 2 × 4 × 8 factorial guided data collection.” This tells us that there were 3 independent variables (factors), with 2 levels of the first factor, 4 levels of the second factor, and 8 levels of the third factor. Following is a sentence that names the levels of each factor: for example, “The independent variables were sex (male or female), training program (eg, walking, running, weight lifting, or plyometrics), and time (2, 4, 6, 8, 10, 15, 20, or 30 weeks).” Such an approach clearly outlines for readers how the various procedures fit into the overall structure and, therefore, enhances their understanding of how the data were collected. Thus, the design statement is a road map of the methods.
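To see how compact such a design statement is, the short sketch below enumerates the cells implied by the 2 × 4 × 8 factorial described above, using the factor levels named in the example.

```python
# Enumerate every combination of factor levels in the 2 x 4 x 8 factorial.
from itertools import product

sex      = ["male", "female"]
training = ["walking", "running", "weight lifting", "plyometrics"]
weeks    = [2, 4, 6, 8, 10, 15, 20, 30]

cells = list(product(sex, training, weeks))
print(len(cells))   # 2 * 4 * 8 = 64 design cells
print(cells[0])     # ('male', 'walking', 2)
```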

The dependent (or measurement or outcome) variables are then named. Details of how they were measured are not given at this point in the manuscript but are explained later in the “Instruments” and “Procedures” subsections.

Next is a paragraph detailing who the participants were and how they were selected, placed into groups, and assigned to a particular treatment order, if the experiment was a repeated-measures design. And although not a part of the design per se, a statement about obtaining written informed consent from participants and institutional review board approval is usually included in this subsection.

The nuts and bolts of the “Methods” section follow, including such things as equipment, materials, protocols, etc. These are beyond the scope of this commentary, however, and so will not be discussed.

The last part of the “Methods” section and last part of the “Study Design” section is the “Data Analysis” subsection. It begins with an explanation of any data manipulation, such as how data were combined or how new variables (eg, ratios or differences between collected variables) were calculated. Next, readers are told of the statistical measures used to analyze the data, such as a mixed 2 × 4 × 8 analysis of variance (ANOVA) with 2 between-groups factors (sex and training program) and 1 within-groups factor (time of measurement). Researchers should state and reference the statistical package and procedure(s) within the package used to compute the statistics. (Various statistical packages perform analyses slightly differently, so it is important to know the package and specific procedure used.) This detail allows readers to judge the appropriateness of the statistical measures and the conclusions drawn from the data.

STATISTICAL DESIGN VERSUS STATISTICAL ANALYSIS

Avoid using the term statistical design . Statistical methods are only part of the overall design. The term gives too much emphasis to the statistics, which are important, but only one of many tools used in interpreting data and only part of the study design:

The most important issues in biostatistics are not expressed with statistical procedures. The issues are inherently scientific, rather than purely statistical, and relate to the architectural design of the research, not the numbers with which the data are cited and interpreted. 6

Stated another way, “The justification for the analysis lies not in the data collected but in the manner in which the data were collected.” 3 “Without the solid foundation of a good design, the edifice of statistical analysis is unsafe.” 7 (pp4–5)

The intertwining of study design and statistical analysis may have been caused (unintentionally) by R.A. Fisher, “… a genius who almost single-handedly created the foundations for modern statistical science.” 8 Most research did not involve statistics until Fisher invented the concepts and procedures of ANOVA (in 1921) 9 , 10 and experimental design (in 1935). 11 His books became standard references for scientists in many disciplines. As a result, many ANOVA books were titled Experimental Design (see, for example, Edwards 12 ), and ANOVA courses taught in psychology and education departments included the words experimental design in their course titles.

Before the widespread use of computers to analyze data, designs were much simpler, and often there was little difference between study design and statistical analysis. So combining the 2 elements did not cause serious problems. This is no longer true, however, for 3 reasons: (1) Research studies are becoming more complex, with multiple independent and dependent variables. The procedures sections of these complex studies can be difficult to understand if your only reference point is the statistical analysis and design. (2) Dependent variables are frequently measured at different times. (3) How the data were collected is often not directly correlated with the statistical design.

For example, assume the goal is to determine the strength gain in novice and experienced athletes as a result of 3 strength training programs. Rate of change in strength is not a measurable variable; rather, it is calculated from strength measurements taken at various time intervals during the training. So the study design would be a 2 × 2 × 3 factorial with independent variables of time (pretest or posttest), experience (novice or advanced), and training (isokinetic, isotonic, or isometric) and a dependent variable of strength. The statistical design , however, would be a 2 × 3 factorial with independent variables of experience (novice or advanced) and training (isokinetic, isotonic, or isometric) and a dependent variable of strength gain. Note that data were collected according to a 3-factor design but were analyzed according to a 2-factor design and that the dependent variables were different. So a single design statement, usually a statistical design statement, would not communicate which data were collected or how. Readers would be left to figure out on their own how the data were collected.
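A small sketch of this distinction follows: the data are laid out as collected (pretest and posttest strength for each combination of experience and training), the strength gain is computed as a change score, and the 2 × 3 statistical design is then analysed on that gain. The data values and column names are hypothetical, and the ANOVA shown is an ordinary two-way ANOVA offered only as an illustration, not the original authors' analysis.

```python
# Data collected per the 2 (time) x 2 (experience) x 3 (training) study design,
# analysed per the 2 x 3 statistical design on the computed strength gain.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.DataFrame({
    "experience": ["novice", "novice", "advanced", "advanced"] * 3,
    "training":   ["isokinetic"] * 4 + ["isotonic"] * 4 + ["isometric"] * 4,
    "pre":  [50, 55, 80, 78, 52, 54, 81, 79, 51, 56, 83, 77],   # hypothetical
    "post": [60, 64, 88, 85, 58, 61, 86, 84, 57, 63, 90, 83],   # hypothetical
})

# The dependent variable of the statistical design is the change score.
df["gain"] = df["post"] - df["pre"]

model = ols("gain ~ C(experience) * C(training)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```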

MULTIVARIATE RESEARCH AND THE NEED FOR STUDY DESIGNS

With the advent of electronic data gathering and computerized data handling and analysis, research projects have increased in complexity. Many projects involve multiple dependent variables measured at different times, and, therefore, multiple design statements may be needed for both data collection and statistical analysis. Consider, for example, a study of the effects of heat and cold on neural inhibition. The variables H_max and M_max are measured 3 times each: before, immediately after, and 30 minutes after a 20-minute treatment with heat or cold. Muscle temperature might be measured each minute before, during, and after the treatment. Although the minute-by-minute data are important for graphing temperature fluctuations during the procedure, only 3 temperatures (time 0, time 20, and time 50) are used for statistical analysis. A single dependent variable, the H_max:M_max ratio, is computed to illustrate neural inhibition. Again, a single statistical design statement would tell little about how the data were obtained. And in this example, separate design statements would be needed for the temperature measurements and the H_max:M_max measurements.

As stated earlier, drawing conclusions from the data depends more on how the data were measured than on how they were analyzed. 3 , 6 , 7 , 13 So a single study design statement (or multiple such statements) at the beginning of the “Methods” section acts as a road map to the study and, thus, increases scientists' and readers' comprehension of how the experiment was conducted (ie, how the data were collected). Appropriate study design statements also increase the accuracy of conclusions drawn from the study.

CONCLUSIONS

The goal of scientific writing, or any writing, for that matter, is to communicate information. Including 2 design statements or subsections in scientific papers—one to explain how the data were collected and another to explain how they were statistically analyzed—will improve the clarity of communication and bring praise from readers. To summarize:

  • Purge from your thoughts and vocabulary the idea that experimental design and statistical design are synonymous.
  • Study or experimental design plays a much broader role than simply defining and directing the statistical analysis of an experiment.
  • A properly written study design serves as a road map to the “Methods” section of an experiment and, therefore, improves communication with the reader.
  • Study design should include a description of the type of design used, each factor (and each level) involved in the experiment, and the time at which each measurement was made.
  • Clarify when the variables involved in data collection and data analysis are different, such as when data analysis involves only a subset of a collected variable or a resultant variable from the mathematical manipulation of 2 or more collected variables.

Acknowledgments

Thanks to Thomas A. Cappaert, PhD, ATC, CSCS, CSE, for suggesting the link between R.A. Fisher and the melding of the concepts of research design and statistics.

ORIGINAL RESEARCH article

Impact of weekly physical activity on stress response: an experimental study.

Ricardo de la Vega

  • 1 Department of Physical Education, Sport and Human Movement, Autonomous University of Madrid, Madrid, Spain
  • 2 Didactic and Behavioral Analysis in Sport Research Group, Faculty of Sport Sciences, University of Extremadura, Cáceres, Spain
  • 3 Sport of Studies Center, Rey Juan Carlos University, Madrid, Spain

The aim of this research is to analyze the alteration of the psychophysiological and cognitive response to an objective computerized stress test (Determination Test, DT; Vienna Test System®) when the behavioral response is controlled. The sample consisted of sports science students (N = 22) with a mean age of 22.82 years (SD = 3.67) and a mean of 7.77 hours of physical activity per week (SD = 3.32). A quasi-experimental design was used in which the response of each participant to the DT test was evaluated. The variables "number of hours of physical activity per week" and "level of behavioral response to stress" were controlled. Before and after this test, the following parameters were measured: activation and central fatigue (Critical Flicker Fusion Threshold, CFF, ascending and descending; DC potential) and perceived exertion (Central and Peripheral Rating of Perceived Exertion). Significant differences were found in all of the measures indicated. The usefulness of this protocol and the measures used to analyze the stress response capacity of the study subjects are discussed.

Introduction

The analysis of psychophysiological fatigue is considered very important in different contexts ( Lohani et al., 2019 ). In this sense, the study of humans' response to external and internal loads ( Wijesuriya et al., 2007 ; Wilson et al., 2007 ) has become one of the most important research topics. The external loads exerted on the individual combine with their skills and coping strategies, resulting in a level of tolerance and adaptation to each situation ( Folkman and Lazarus, 1988 ). Over recent decades, distinctions have often been made between the roles of physical and mental fatigue, with clear methodologies for the analysis of physiological fatigue but clear limitations in the study of central fatigue, because the latter is measurable only indirectly; this emphasizes the importance of developing new procedures for analyzing central fatigue ( Bittner et al., 2000 ).

Throughout the decades of research on this topic, different strategies have been used to evaluate adaptation to these external and internal loads ( Lazarus, 1990 ; Amann, 2011 ). Thus, for example, a multitude of self-reports and standardized tests have been used ( Britner et al., 2003 ), to which physiological and biological measures have been added ( Arza et al., 2019 ). However, relatively little attention is usually given to the Central Nervous System (CNS)-related mechanisms, which play a major role in the development of fatigue ( Tarvainen et al., 2014 ) but are rarely monitored in the sport and physical activity field ( Valenzuela et al., 2020 ). Most studies of central fatigue to date have focused on the effect it has on performing strenuous physical tasks ( Amann and Dempsey, 2008 ), although over the last few years there has been a notable increase in interest in the role of central fatigue in explaining human performance ( Inzlicht and Marcora, 2016 ). In this sense, the psychobiological model based on motivational intensity theory has gained special strength ( Gendolla and Richter, 2010 ). This model emphasizes that perception of effort and potential motivation are the central determinants of task engagement. Both variables are taken into consideration in our research: involvement in the task (motivation) is controlled by applying a computerized test, and the perception of both central and peripheral effort is analyzed as detailed in the methods section.

Two of these measures, which are the methodological focus of this research because of their great potential for the study of this topic, are the Critical Flicker Fusion Threshold (CFFT), evaluated using a Flicker Fusion instrument ( Vicente-Rodríguez et al., 2020 ), and the DC potential, evaluated using OmegaWave technology. The neuro-physiological basis of flicker perception is complex but well established ( Görtelmeyer and Zimmermann, 1982 ). In particular, flickering light directly influences cortical activity. The CFFT was measured using two red light-emitting diodes in binocular foveal fixation, and the continuous psychophysical method of limits was employed to determine it ( Woodworth and Schlosberg, 1954 ). The utility of CFFT in sport has centered on the relationship between arousal level and the CNS ( Görtelmeyer and Zimmermann, 1982 ). An increase in CFFT suggests an increase in cortical arousal and sensory sensitivity; by contrast, a decrease in CFFT suggests a reduction in the efficiency of the system to process information ( Li et al., 2004 ; Clemente and Díaz, 2019 ). On the other hand, for the evaluation of the brain's direct current (DC) potentials - slow potentials that reflect alterations in brain excitability - OmegaWave technology has gained strength in recent years ( Naranjo-Orellana et al., 2020 ; Valenzuela et al., 2020 ). This device not only measures Heart Rate Variability (HRV) but also simultaneously records a brainwave signal (DC potential) to complement the information obtained from HRV in assessing the athlete's functional state ( Naranjo-Orellana et al., 2020 ). DC potentials, in frequency ranges between 0 and 0.5 Hz, are correlated with different brain processes, such as awareness during decision making ( Guggisberg and Mottaz, 2013 ), high alertness states ( Bachmann, 1984 ), arousal state ( Haider et al., 1981 ), and attention ( Rösler et al., 1997 ).

To date, most studies on the evaluation of central fatigue have shown that the greatest disturbances are produced by tasks that require efforts at maximum speed involving a large amount of force ( Davranche and Pichon, 2005 ; Clemente and Díaz, 2019 ). However, very few studies have analyzed central fatigue through the controlled analysis of a task that primarily involves central fatigue ( Fuentes et al., 2019 ). In this sense, the aim is to apply a computerized test (DT, Vienna Test System) that allows people's tolerance to stress and central fatigue to be evaluated through a standardized protocol in physical activity practitioners. Knowledge in this field is very limited; for this reason we developed the present research with the aim of studying the modifications in CFFT and DC potentials in a sample of regular physical activity practitioners. The first hypothesis establishes that the computerized stress task increases the participants' perception of central fatigue while keeping the perception of peripheral fatigue stable. As a consequence, the second hypothesis establishes that differences will be found in the "post" situation in the CFFT measures and in the central physiological indicators, which would indicate a relationship between the subjective and objective measures of central fatigue.

Materials and Methods

This study followed a quasi-experimental design ( Montero and León, 2007 ) and it received the approval of the University Ethical Commission in compliance with the Helsinki Declaration. All subjects were informed about the procedure and gave their written consent to participate. This study was carried out complying with the Standards for Ethics in Sport and Exercise Science Research ( Harriss et al., 2019 ).

Participants

The participants were 22 individuals from Madrid (Spain), 18 male and 4 female, aged between 18 and 36 years (M = 22.82, SD = 3.67). All of the participants regularly engaged in physical activity, between 4 and 14 h per week (M = 7.77, SD = 3.32). The inclusion criterion was performing physical activity at least 3 times a week, amounting to at least 150 min of moderate/vigorous physical activity. The exclusion criterion was not performing the proposed measurements correctly; four participants were excluded from the study on this basis. Intentional sampling methods were used ( Montero and León, 2007 ). Because data collection could not continue under the Alert State decreed by the Spanish Government as a result of COVID-19, the sample had to be closed with the participants who had completed all the tests before March 2020.

Instrumentation and Study Variables

The number of hours of physical activity per week and the scores obtained on the DT test were used as controlled variables. This allows us to establish that any differences found are not due to the ability to respond to stress or to the weekly amount of physical exercise performed. Therefore, only subjects for whom there were no statistically significant differences in weekly level of physical exercise or in DT test scores were included.

To carry out this research, three measurement systems were used: the OmegaWave device, the Flicker Fusion unit (Vienna Test System), and the Determination Test (Vienna Test System). OmegaWave is a device that assesses the physiological readiness of athletes by examining autonomic balance through HRV and the brain's energy balance via the DC potential ( Gómez-Oliva et al., 2019 ); an elastic MEDITRACE chest band (dominant hand and forehead) and the Coach+ application (OmegaWave Ltd, Espoo, Finland) on an iPad mini 2 32GB were used. The Vienna Test System is an instrument for computerized psychological assessment that allows the objective evaluation of different psychological parameters. The Determination Test (DT, Vienna Test System) ( Whiteside, 2002 ; Whiteside et al., 2003 ) was used to determine neuropsychological fatigue. The test assesses attentional capacity, reactive stress tolerance, and reaction speed in response to continuously and quickly changing acoustic and visual stimuli. The test is simple; the difficulty of the task lies in the different modalities of the arriving stimuli and their speed. In this way we measure the cognitive abilities needed for distinguishing colors and sounds, perceiving the characteristics of stimuli, memorizing them, and finally selecting the adequate answer. The stimuli arriving during the test are not predictable, so the subjects must react to them as they occur ( Schuhfried, 2013 ). We study four key variables: the average reaction speed (sec); the number of correct answers (raw score), which reflects the ability of the respondent to precisely and quickly select the adequate answer even under pressure; the number of incorrect answers (raw score), which shows how likely the respondent is to become confused under stress and pressure; and the number of missed answers (raw score), a high value of which reveals that the respondent is not capable of maintaining attention under stress and is prone to giving up in these situations ( Neuwirth and Benesch, 2012 ). The duration of this test was 6 min.

Before and after the stress test the following parameters were analyzed in this order:

Parameters analyzed through OmegaWave Coach + device ® (OmegaWave Ltd, Espoo, Finland):

– Heart Rate Variability (HRV): root mean square of successive RR interval differences (RMSSD), standard deviation of all normal-to-normal RR intervals (SDNN), and standard deviation of successive differences of RR intervals (SDSD). OmegaWave assesses the physiological readiness of athletes by examining autonomic balance through HRV and the brain's metabolic state via the DC potential ( Ilyukhina and Zabolotskikh, 2020 ). An elastic MEDITRACE chest band (dominant hand and forehead) and the Coach+ application (OmegaWave Ltd., Espoo, Finland) on an iPad mini 2 32GB were used. HRV was summarized using the Root Mean Square of Successive Differences (RMSSD) score ( Ilyukhina et al., 1982 ). This measurement was taken before and after the stress test. (A computational sketch of these indices appears after this parameter list.)

– DC potential dynamics. DC potentials represent changes in the brain's metabolic balance in response to increased exercise intensity or psychological challenges and are linked to cognitive and mental load ( Wagshul et al., 2011 ; Ilyukhina, 2015 ).

– CNS System Readiness ( Ilyukhina, 1986 ). It is indicated by a floating grade from 1.0 to 7.0, where 7.0 is the optimal state. This index represents the state of the brain's energy level and is composed of three factors (in order of significance): the stabilization point of the DC potential (mV), the stabilization time (which reduces the system readiness grade of 1.0-7.0 if not optimal), and the curve shape (which likewise reduces the grade if not optimal).

– Stabilization point of DC Potential (mV) ( Ilyukhina et al., 1982 ; Ilyukhina, 2013 ): The first priority in DC analysis is the stabilization point of the DC potential. In the literature, especially by Ilyukhina, this point is defined as the Level of Operational Rest (LOR). In 1982, the combined work of Ilyukhina and Sychev was published, which outlined quantitative parameters of LOR for assessing the healthy human's adaptation and compensatory-adaptive abilities to physical and mental loads in sports.

– Stabilization time ( Ilyukhina and Zabolotskikh, 1997 ). The second priority of analysis is the stabilization time, measured in minutes. The spontaneous relaxation speed represents the neuroreflex reactivity (neural control of the baroreflex arch) of the cardiovascular and respiratory systems. This measure is associated with psycho-emotional dynamics and stability. Normal stabilization occurs within 2 min and represents optimal balance within the stress-regulation systems.

– Curve Shape: The curve shape is composed of two elements: the difference between the measurement's start and end mV values ( Table 1 ) and the form of the transition between them. The optimal curve shows a smooth transition from a higher initial value (active wakefulness) to a lower stabilization value (operational rest). The form of the DC potential represents the dynamic interaction within the stress-regulation systems and can indicate the level of CNS activation balance.
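As referenced in the HRV item above, the sketch below computes the three time-domain HRV indices named there (RMSSD, SDNN, and SDSD) from a short, hypothetical series of RR intervals. It is a schematic calculation for illustration, not the OmegaWave device's internal processing.

```python
# Minimal time-domain HRV computation from hypothetical RR intervals (ms).
import numpy as np

rr_ms = np.array([812, 798, 805, 820, 790, 802, 815, 808])  # hypothetical
diffs = np.diff(rr_ms)

rmssd = np.sqrt(np.mean(diffs ** 2))   # root mean square of successive differences
sdnn  = np.std(rr_ms, ddof=1)          # SD of all normal-to-normal intervals
sdsd  = np.std(diffs, ddof=1)          # SD of successive differences

print(f"RMSSD = {rmssd:.1f} ms, SDNN = {sdnn:.1f} ms, SDSD = {sdsd:.1f} ms")
```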

Parameters analyzed through the Flicker Fusion unit (Vienna Test System ® ):

– Critical flicker fusion ascending (Hz) (CFFA) and Critical flicker fusion descending (Hz) (CFFD). Cortical arousal was measured using the critical flicker fusion threshold (Hz) (CFFT) in a viewing chamber (Vienna Test System ® ), following the procedure of previous studies ( Clemente et al., 2016 ). An increase in CFFT suggests an increase in cortical arousal and information processing; a decrease in CFFT values below the baseline reflects a reduction in the efficiency of information processing and central nervous system fatigue ( Whiteside, 2002 ). It was used before and after the stress test.

Parameters analyzed through the DT test (Vienna Test System ® ):

– We study four key variables: the average reaction speed (ms); the number of correct answers (raw score), which reflects the ability of the respondent to precisely and quickly select the adequate answer even under pressure; the number of incorrect answers (raw score), which shows how likely the respondent is to become confused under stress and pressure; and the number of missed answers (raw score), a high value of which reveals that the respondent is not capable of maintaining attention under stress and is prone to giving up in these situations ( Neuwirth and Benesch, 2012 ). The duration of this test was 6 min, excluding instructions.

Parameters analyzed by self-report instruments:

– Central Rating of Perceived Exertion (RPEC) and Peripheral Rating of Perceived Exertion (RPEP). The Rating of Perceived Exertion ( Borg, 1998 ) was used as a measure of central (cardiorespiratory) and peripheral (local-muscular, metabolic) exertion before and after the stress test ( Bolgar et al., 2010 ; Cárdenas et al., 2017 ). The RPE is a 15-point category scale in which the odd-numbered categories have verbal anchors, beginning at 6, "no exertion at all," and going up to 20, "maximal exertion." Before testing, subjects were instructed in the use of the RPE scale ( Noble and Robertson, 1996 ). We used the scale with a clear differentiation between central and peripheral perceived exertion, following the recommendations of the medical staff and Borg's guidelines for applied studies ( Borg, 1982 ).


Table 1. Simplified curve change mV reduction algorithm.

The participants were contacted and informed about the measurement protocol and of the date and time of the data collection. All of the measurements were collected during the same day. The total data collection time per participant was approximately 45 min. The order of measurements was the following: CFFT, DC Potential, RPE, DT test, RPE, CFFT, and DC Potential.

Data were analyzed using the Statistical Package for the Social Sciences (SPSS) version 21 (SPSS Inc., Chicago, IL, United States). Means and SDs were calculated using traditional statistical techniques. Normality was tested with the Shapiro-Wilk test. As the distributions were not normal, non-parametric tests were used. A Wilcoxon signed-rank test was conducted for intragroup comparisons to analyze differences between pre- and post-test. Spearman's rho coefficient was used to assess correlations between variables. The effect size was estimated using the formula r = Z/√N for non-parametric tests ( Tomczak and Tomczak, 2014 ). Following the considerations of Cohen (1988) , the effect size is considered small when the value is below 0.10, medium when it lies between 0.10 and 0.30, and large when it exceeds 0.50. The significance level was set at p < 0.05.
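As a minimal sketch of the analysis described above, the code below runs a Wilcoxon signed-rank test on hypothetical pre/post scores and derives the effect size r = Z/√N from the normal approximation of the test statistic. The data are invented, and the procedure only approximates what SPSS reports.

```python
# Wilcoxon signed-rank test with effect size r = Z / sqrt(N) on hypothetical data.
import numpy as np
from scipy import stats

pre  = np.array([63, 58, 71, 66, 60, 69, 64, 62, 67, 65])  # hypothetical pre-test
post = np.array([55, 54, 65, 61, 57, 62, 55, 60, 57, 66])  # hypothetical post-test

w_stat, p_value = stats.wilcoxon(pre, post)

# Normal approximation of W to recover Z (assumes no zero differences).
n = len(pre)
mu_w = n * (n + 1) / 4
sigma_w = np.sqrt(n * (n + 1) * (2 * n + 1) / 24)
z = (w_stat - mu_w) / sigma_w
r = abs(z) / np.sqrt(n)

print(f"W = {w_stat:.1f}, p = {p_value:.3f}, effect size r = {r:.2f}")
```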

Descriptive Analysis, Normality Tests, Wilcoxon Test, and Effect Sizes

First, normality tests were performed with the Shapiro-Wilk test. Most of the variables were found not to be normally distributed, so non-parametric statistical tests were applied. Regarding the descriptive analyses of the study variables, shown in Table 2 , after applying the stressor via the DT test, worse values were obtained in all the variables measured, reflecting the alterations in the central response evaluated. Regarding the Wilcoxon rank test used to analyze whether there were differences between the scores obtained before and after applying the stressor (DT test), significant differences were found in the variables OverallDC ( p < 0.05), Flicker ascending ( p < 0.01), Flicker descending ( p < 0.01), Central RPE ( p < 0.01), and Physical RPE ( p < 0.01), while no significant differences were found in the rest of the variables ( Table 2 ).


Table 2. Descriptive analysis of the measured variables.

Correlation Analysis

A Spearman bivariate correlation analysis was performed using Spearman's rho coefficient, since the distributions were non-parametric. Significant correlations were found ( Table 3 ) between OverallDC and DCStabilizationLevel ( p = 0.000, r = 0.791), OWCNS ( p = 0.005, r = 0.581), OWDCC ( p = 0.013, r = 0.522), and Flicker descending ( p = 0.044, r = 0.432); between DCStabilizationLevel and OWCNS ( p = 0.000, r = 0.766) and Flicker descending ( p = 0.049, r = 0.424); between DCStabilizationTime and OWCNS ( p = 0.005, r = 0.572), OWDCC ( p = 0.046, r = 0.430), and Flicker ascending ( p = 0.006, r = 0.563); between OWCNS and Flicker ascending ( p = 0.018, r = 0.499); and between SDSD and the Flicker descending score ( p = 0.046, r = −0.430).


Table 3. Rho Spearman coefficient.

The objective of the present research was to study the modification of DC potentials and CFFT scores after the computerized stress test (DT). The analysis of the subjective cognitive responses about fatigue after the DT test reveals significant differences in the participants, both at a physical and a central level. The first hypothesis is partially fulfilled: there are significant differences in central perceived fatigue, with a very high effect size, which supports the hypothesis and emphasizes the usefulness of the established research protocol. However, significant differences also appear in peripheral perceived fatigue, which goes beyond the initial expectations. This result is of special interest because it allows us to consider the relationship between both types of perceived fatigue ( Bittner et al., 2000 ; Clemente et al., 2016 ). Given that the participants did the test sitting down, these results emphasize the stress-generating effect of the protocol used, without significant differences in the performance achieved in the task. Previous research carried out with the DT test already points in this same direction ( Ong, 2015 ). The differences found in the perception of physical fatigue, even without previous movement, are interesting. Similar results are found in studies carried out in contexts such as chess ( Fuentes et al., 2019 ), where central fatigue due to the demands of each game also leads to physical fatigue in the players. This seems relevant insofar as future studies should incorporate measures of both dimensions in order to explain a higher percentage of the variance in the results.

As regards the second hypothesis, the decrease in CFFD values indicates a negative effect, generating central fatigue and an alteration in cortical activation ( Li et al., 2004 ; Clemente, 2016 ). These results confirm the alterations in cortical activation found after physiological efforts of high intensity and short duration, such as sprints at maximum speed ( Clemente et al., 2011 ). The same trend is also observed in research focused on generating a high level of stress in soldiers, which emphasizes the usefulness of the DT test for creating stress in participants ( Clemente et al., 2016 ). In line with the ideas defended by Clemente (2016) , decreases in CFFD scores seem to be linked to high sympathetic autonomic nervous system activation, which could also affect higher cognitive functions, such as executive processes (i.e., making complex decisions, memory, and attention processes) ( Shields et al., 2016 ). The same considerations apply to the significant differences found in CFFA scores: higher scores are found after the stress test, which implies that the participants needed more time to respond to the flicker task as a consequence of central fatigue ( Fuentes et al., 2019 ; Lohani et al., 2019 ).

Regarding the results obtained in the Overall DC scores, the significant differences show a pattern of alteration as a consequence of the stress test. As Naranjo-Orellana et al. (2020) point out, the OW test obtains good reliability and validity values when heart rate variability is used as a measure in conjunction with the DC potential (stabilization level, stabilization time, and curve shape). Changes in the DC potentials have been reported to reflect performance in different brain processes ( Haider et al., 1981 ; Valenzuela et al., 2020 ). The lower scores obtained after the stress test could indicate, as with the CFF scores, an increase in central fatigue detected by the OmegaWave system ( Valenzuela et al., 2020 ). This result, in any case, needs to be analyzed in detail in future research.

Therefore, monitoring the DC potentials and the CFF scores could be useful for controlling the cognitive load of tasks that have a high mental demand.

Due to the exceptional circumstances of data collection in the present study, its main limitations were the sample size and the small number of women who participated. Future research should expand the sample, as well as determine the effect in a sedentary sample.

To conclude, this is the first study to jointly analyze the scores obtained in the analysis of low-frequency brain waves (DC potentials) and those obtained in the Flicker test. Although performance in a specific task may appear similar, the demand it places on the person must be evaluated, and research protocols similar to the one used here are useful for this purpose. The results open a new field in which both measurements could be interesting and useful for assessing people's cognitive demands.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation, to any qualified researcher.

Ethics Statement

The studies involving human participants were reviewed and approved by the University Ethical Commission in compliance with the Helsinki Declaration. The patients/participants provided their written informed consent to participate in this study.

Author Contributions

RV: conceptualization, investigation, resources, writing—review and editing, and project administration. RV, ML-R, and RJ-C: methodology, data curation, writing—original draft preparation, visualization, supervision, and formal analysis. ML-R and RJ-C: software and validation.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Amann, M. (2011). Central and peripheral fatigue: interaction during cycling exercise in humans. Med. Sci. Sports Exerc. 43, 2039–2045. doi: 10.1249/MSS.0b013e31821f59ab


Amann, M., and Dempsey, J. A. (2008). Locomotor muscle fatigue modifies central motor drive in healthy humans and imposes a limitation to exercise performance. J. Physiol. 586, 161–173. doi: 10.1113/jphysiol.2007.141838

Arza, A., Garzón-Rey, J. M., Lázaro, J., Gil, E., López-Anton, R., de la Cámara, C., et al. (2019). Measuring acute stress response through physiological signals: towards a quantitative assessment of stress. Med. Biol. Eng. Comput. 57, 271–287. doi: 10.1007/s11517-018-1879-z

Bachmann, T. (1984). The process of perceptual retouch: nonspecific afferent activation dynamics in explaining visual masking. Percept. Psychophys. 35, 69–84. doi: 10.3758/BF03205926

Bittner, R., Hána, K., Pousek, L., Smrka, P., Schreib, P., and Vysoky, P. (2000). “Detecting of fatigue states of a car driver,” in Medical Data Analysis. ISMDA 2000. Lecture Notes in Computer Science , Vol. 1933, eds R. W. Brause and E. Hanisch (Berlin: Springer).


Bolgar, M. R., Baker, C. E., Goss, F. L., Nagle, E., and Robertson, R. J. (2010). Effect of exercise intensity on differentiated and undifferentiated ratings of perceived exertion during cycle and treadmill exercise in recreationally active and trained women. J. Sports Sci. Med. 9, 557–563.

Borg, G. (1982). Psychophysical bases of perceived exertion. Med. Sci. Sports Exerc. 14, 377–381. doi: 10.1249/00005768-198205000-00012

Borg, G. (1998). Perceived Exertion and Pain Scale. Champaign, IL: Human Kinetics.

Britner, P. A., Morog, M. C., Pianta, R. C., and Marvin, R. S. (2003). Stress and coping: a comparison of self-report measures of functioning in families of young children with cerebral palsy or no medical diagnosis. J. Child Fam. Stud. 12, 335–348. doi: 10.1023/A:1023943928358


Cárdenas, D., Conde-Gonzáles, J., and Perales, J. C. (2017). La fatiga como estado motivacional subjetivo. Rev. Andaluza Med. Deporte 10, 31–41. doi: 10.1016/j.ramd.2016.04.001

Clemente, V. (2016). Cortical arousal and central nervous system fatigue after a mountain marathon. Cult. Ciencia Deporte 12, 143–148. doi: 10.12800/ccd.v12i35.886

Clemente, V., De la Vega, R., Robles, J. J., Lautenschlaeger, M., and Fernández-Lucas, J. (2016). Experience modulates the psychophysiological response of airborne warfighters during a tactical combat parachute jump. Int. J. Psychophysiol. 110, 212–216. doi: 10.1016/j.ijpsycho.2016.07.502

Clemente, V., and Díaz, M. (2019). Evaluation of central fatigue by the critical flicker fusion threshold in cyclist. J. Med. Syst. 43:61. doi: 10.1007/s10916-019-1170-3

Clemente, V., Muñoz, V., and Melús, M. (2011). Fatiga del sistema nervio-so después de realizar un test de capacidad de sprints repetidos (RSA) en jugadores de futbol profesionales. Arch. Med. Deporte 143, 103–112.

Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences , 2nd Edn. New York, NY: Academic Press.

Davranche, K., and Pichon, A. (2005). Critical flicker frequency threshold increment after an exhausting exercise. J. Sport Exerc. Psychol. 27:515. doi: 10.1123/jsep.27.4.515

Folkman, S., and Lazarus, R. S. (1988). Coping as a mediator of emotion. J. Pers. Soc. Psychol. 54, 466–475. doi: 10.1037/0022-3514.54.3.466

Fuentes, J. P., Villafaina, S., Collado, D., De la Vega, R., Olivares, P., and Clemente, V. (2019). Differences between high vs. low performance chess players in heart rate variability during chess problems. Front. Psychol. 10:409. doi: 10.3389/fpsyg.2019.00409

Gendolla, G. H. E., and Richter, M. (2010). Effort mobilization when the self is involved: some lessons from the cardiovascular system. Rev. Gen. Psychol. 14, 212–226. doi: 10.1037/a0019742

Gómez-Oliva, E., Robles-Pérez, J. J., Ruiz-Barquín, R., Hidalgo-Bellota, F., and de la Vega, R. (2019). Psychophysiological response to the use of nuclear, biological and chemical equipment with military tasks. Physiol. Behav. 204, 186–190. doi: 10.1016/j.physbeh.2019.02.019

Görtelmeyer, R., and Zimmermann, H. (1982). Retest reliability and construct validity of critical flicker fusion frequency. Pharmacopsychiatry 15, 24–28. doi: 10.1055/s-2007-1019545

Guggisberg, A., and Mottaz, A. (2013). Timing and awareness of movement decisions: does consciousness really come too late? Front. Hum. Neurosci. 7:385. doi: 10.3389/fnhum.2013.00385

Haider, M., Groll-Knapp, E., and Ganglberger, J. A. (1981). Event-related slow (DC) potentials in the human brain. Rev. Physiol. Biochem. Pharmacol. 88, 125–195. doi: 10.1007/BFb0034537

Harriss, D. J., Macsween, A., and Atkinson, G. (2019). Ethical standards in sport and exercise science research: 2020 update. Int. J. Sports Med. 40, 813–817. doi: 10.1055/a-1015-3123

Ilyukhina, V. A. (1986). Neirofiziologiya funktsional’nykh sostoyanii cheloveka (Neurophysiology of Human Functional States). Nauka: Leningrad.

Ilyukhina, V. A. (2015). Contributions of academicians A. A. Ukhtomsky and N. P. Bechtereva to multidisciplinary human brain science. Cogn. Syst. Monogr. 25, 81–100. doi: 10.1007/978-3-319-19446-2_5

Ilyukhina, V. (2013). Ultraslow information control systems in the integration of life activity processes in the brain and body. Hum. Physiol. 39, 323–333. doi: 10.1134/S0362119713030092

Ilyukhina, V., Sychev, A., Shcherbakova, N., Baryshev, G., and Denisova, V. (1982). The omegapotential: a quantitative parameter of the state of brain structures and of the individual: II. Possibilities and limitations of the use of the omega-potential for rapid assessment of the state of the individual. Hum. Physiol. 8, 328–339.

Ilyukhina, V. A., and Zabolotskikh, I. B. (1997). The typology of spontaneous and induced dynamics of superslow physiological processes recorded from the surface of the head and the body of a healthy and sick man. Kuban Sci. Med. Bull. 4:12.

Ilyukhina, V. A., and Zabolotskikh, I. B. (2020). Physiological basis of differences in the body tolerance to submaximal physical load to capacity in healthy young individuals. Hum. Physiol. 26, 330–336. doi: 10.1007/BF02760195

Inzlicht, M., and Marcora, S. M. (2016). The central governor model of exercise regulation teaches us precious little about the nature of mental fatigue and self-control failure. Front. Psychol. 7:656. doi: 10.3389/fpsyg.2016.00656

Lazarus, R. S. (1990). Theory-based stress measurement. Psychol. Inq. 1, 3–13. doi: 10.1207/s15327965pli0101_1

Li, Z., Jiao, K., Chen, M., and Wang, C. (2004). Reducing the effects of driving fatigue with magnitopuncture stimulation. Accident Anal. Prevent. 36, 501–505. doi: 10.1016/S0001-4575(03)00044-7

Lohani, M., Payne, B. R., and Strayer, D. L. (2019). A review of psychophysiological measures to assess cognitive states in real-world driving. Front. Hum. Neurosci. 19:57. doi: 10.3389/fnhum.2019.00057

Montero, I., and León, O. G. (2007). A guide for naming research studies in psychology. Int. J. Clin. Health Psychol. 7, 847–862.

Naranjo-Orellana, J., Ruso-Álvarez, J. F., and Rojo-Álvarez, J. L. (2020). Comparison of Omegawave device and an ambulatory ECG for RR interval measurement at rest. Int. J. Sport Med. [Epub ahead of print]. doi: 10.1055/a-1157-9220

Neuwirth, W., and Benesch, M. (2012). Vienna Test System Manual: Determination Test, (Version 35). Moedling: Schuhfried.

Noble, R. J., and Robertson, R. J. (1996). Perceived Exertion. Champaign, IL: Human Kinetics, 77–81.

Ong, N. C. H. (2015). The use of the Vienna Test System in sport psychology research: a review. Int. Rev. Sport Exerc. Psychol. 8, 204–223. doi: 10.1080/1750984X.2015.106158

Rösler, F., Heil, M., and Ridder, B. (1997). Slow negative brain potentials as reflections of specific modular resources of cognition. Biol. Psychol. 45, 109–141. doi: 10.1016/S0301-0511(96)05225-8

Schuhfried, G. (2013). Vienna Test System: Psychological Assessment. Moedling: Schuhfried.

Shields, G. S., Sazma, M. A., and Yonelinas, A. P. (2016). The effects of acute stress on core executive functions: a meta-analysis and comparison with cortisol. Neurosci. Biobehav. Rev. 68, 661–668. doi: 10.1016/j.neubiorev.2016.06.038

Tarvainen, M. P., Niskanen, J. P., Lipponen, J. A., Ranta-aho, P. O., and Karjalainen, P. A. (2014). Kubios HRV - Heart rate variability analysis software. Comput. Methods Progr. Biomed. 113, 210–220. doi: 10.1016/j.cmpb.2013.07.024

Tomczak, M., and Tomcak, E. (2014). The need to report effect size estimates revisited. An overwiew of some recommended measures of effect size. Trends Sport Sci. 1, 19–25.

Valenzuela, P. L., Sánchez-Martínez, G., Torrontegi, E., Vázquez-Carrión, J., Montalvo, Z., and Kara, O. (2020). Validity, reliability, and sensitivity to exercise-induced fatigue of a customer-friendly device for the measurement of the brain’s direct current potencial. J. Strength Condition. Res. [Epub ahead of print]. doi: 10.1519/JSC.0000000000003695

Vicente-Rodríguez, M., Fuentes-García, J. P., and Clemente-Suárez, V. J. (2020). Psychophysiological stress response in an underwater evacuation training. Int. J. Environ. Res. Public Health 17:2307. doi: 10.3390/ijerph17072307

Wagshul, M. E., Eide, P. K., and Madsen, J. R. (2011). The pulsating brain: a review of experimental and clinical studies of intracranial pulsatility. Fluids Barriers CNS 8, 1–23. doi: 10.1186/2045-8118-8-5

Whiteside, A. (2002). A synopsis of the vienna test system: a computer aided psychological diagnosis. J. Occup. Psychol. Employment Disabil. 5, 41–50.

Whiteside, A., Parker, G., and Snodgrass, R. (2003). A review of selected tests from the Vienna test system. Select. Dev. Rev. 19, 7–11.

Wijesuriya, N., Tran, Y., and Craig, A. (2007). The psychophysiological determinants of fatigue. Int. J. Psychophysiol. 63, 77–86. doi: 10.1016/j.ijppsycho.2006.08.005

Wilson, G. F., Caldwell, J. A., and Russell, C. A. (2007). Performance and psychophysiological measures of fatigue effects on aviation related tasks of varying difficulty. Int. J. Aviation Psychol. 17, 219–247. doi: 10.1080/10508410701328839

Woodworth, R. S., and Schlosberg, H. (1954). Experimental Psychology. New York, NY: Holt.

Keywords : central fatigue, omega wave, cognitive response, psychophysiology, stress

Citation: de la Vega R, Jiménez-Castuera R and Leyton-Román M (2021) Impact of Weekly Physical Activity on Stress Response: An Experimental Study. Front. Psychol. 11:608217. doi: 10.3389/fpsyg.2020.608217

Received: 19 September 2020; Accepted: 04 December 2020; Published: 12 January 2021.


Copyright © 2021 de la Vega, Jiménez-Castuera and Leyton-Román. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Marta Leyton-Román, [email protected]



An experimental study of effects on schoolchildren of exposure to point-of-sale cigarette advertising and pack displays


Melanie Wakefield, Daniella Germain, Sarah Durkin, Lisa Henriksen, An experimental study of effects on schoolchildren of exposure to point-of-sale cigarette advertising and pack displays, Health Education Research , Volume 21, Issue 3, July 2006, Pages 338–347, https://doi.org/10.1093/her/cyl005


By creating a sense of familiarity with tobacco, cigarette advertising and bold pack displays in stores that children often visit may help to predispose them to smoking. A total of 605 ninth-grade students were randomly allocated to view a photograph of a typical convenience store point-of-sale that had been digitally manipulated to show either cigarette advertising and pack displays, pack displays only, or no cigarettes. Students then completed a self-administered questionnaire. Compared with those who viewed the no-cigarettes condition, students in either the display-only or the cigarette advertising condition perceived it would be easier to purchase tobacco from these stores. Those who saw the cigarette advertising perceived it would be less likely that they would be asked for proof of age, and tended to think a greater number of stores would sell cigarettes to them, compared with respondents who saw no tobacco products. Respondents in the display-only condition tended to recall displayed cigarette brands more often than respondents who saw no cigarettes. Cigarette advertising influenced students in similar ways, and also tended to weaken students' resolve not to smoke in the future. Retail tobacco advertising as well as cigarette pack displays may have adverse influences on youth, suggesting that tighter tobacco marketing restrictions are needed.

As usual avenues for tobacco advertising have become increasingly unavailable, the visual presence of the cigarette pack and the in-store pack display has become an essential means of communicating brand imagery for tobacco companies [ 1, 2 ]. Tobacco industry documents indicate that tobacco companies understood the importance of the cigarette pack display as a means of promoting brand awareness: ‘The aim of the exercise is instant recognition: (Horizon) along with Benson & Hedges, that's given us full gold and blue blocks on display and that helps our brands stand out’ [ 3 ].

It has been demonstrated that widespread in-store tobacco advertising can influence and distort adolescents' perceptions regarding popularity, use and availability of tobacco. Experimental research has shown that adolescents exposed to retail tobacco advertising perceived significantly easier access to cigarettes than a control group [ 4 ]. Advertising exposure also influenced perceptions about smoking prevalence, peer approval for smoking and support for tobacco control policies [ 4 ]. Another study [ 5 ] found that schoolchildren exposed to point-of-sale advertisements were more likely than those exposed to a photograph of a pack of cigarettes to report positive attributes of users of the brand of cigarettes. Further research has shown that adolescents who reported at least weekly exposure to retail tobacco marketing were more likely to have experimented with smoking [ 6 ] and that in-store branded tobacco advertising and promotion are strongly associated with choice of cigarette brands by adolescents [ 7 ].

The presence of tobacco in stores alongside everyday items such as confectionery, soft drinks and magazines helps to create a sense of familiarity with tobacco products. This familiarity may act to de-emphasize the serious health consequences of tobacco consumption and increase youth perceptions of the prevalence of smoking, as well as their perceived access to tobacco products [ 8 ]. The presence of tobacco products in neighbourhood retail outlets conveys to young people that tobacco use is desirable, socially acceptable and prevalent in society [ 9 ].

In Victoria, Australia, point-of-sale tobacco advertising has been banned since January 2002, and cigarette pack displays are limited to one pack face per brand variant. An observational study conducted after the implementation of this law found that, although compliance was evident, displays emerged that tilted packs towards the floor, providing maximum viewing of the tops of all the packs queued in the display and a consequently greater visual and colourful presence for each brand variant [ 10 ]. Efforts to enhance the displays to achieve maximum ‘standout’ for cigarette brands have led researchers to be concerned that cigarette displays at the point-of-sale may be just as influential as traditional advertising, acting as a promotional tool for cigarette brands.

The present study aims to examine the effect of cigarette packaging displays and advertising at the point-of-sale on students' smoking-related perceptions, beliefs and intentions. Given previous research, we hypothesized that exposure to retail tobacco advertising and cigarette pack displays at the point-of-sale would influence students' perceptions about ease of access to cigarettes, normative beliefs about smoking, perceived harms of smoking, perceived popularity of cigarette brands and future intentions to smoke.

Participants

Data collection took place in late 2003 and early 2004 from a convenience sample of ninth-grade students (aged 14–15 years) from five secondary schools in Victoria, Australia: two Catholic boys' schools, a private co-educational school, a public co-educational school and a Catholic girls' school. Three of the schools were located in areas with an above-average level of socio-economic advantage for Victoria, while the other two were in areas with a below-average level of socio-economic advantage [ 11 ].

Schools were approached by a research assistant to determine willingness to have their students involved in the study. Schools were informed that the study would be an investigation into product advertising in convenience stores. Specific detail about examining tobacco marketing was not disclosed, to avoid the risk of priming students' responses. Information was sent home to students' parents, along with a consent form, to obtain parental permission for involvement in the study. Of 886 ninth-grade students approached, active parental permission was obtained for 605, an overall response rate of 68%.

The between-subjects experimental study design was adapted from that developed by Henriksen et al. [ 4 ]. Within each classroom, participants were randomly exposed to one of the three point-of-sale conditions under the guise of pre-testing a news story written for teenagers.

No cigarettes

A convenience store's point-of-sale area with no visible tobacco presence.

Cigarette display

A convenience store's point-of-sale area with a cigarette pack display, but no cigarette advertising (as required by the current law in Victoria).

Cigarette advertising

A point-of-sale area with both cigarette advertising and cigarette pack displays.

A colour photograph of a point-of-sale section of a convenience store was digitally altered to create the three versions of the same retail environment. Adobe Photoshop was used to eliminate cigarette advertising and cigarette pack displays and to replace these with other non-tobacco product advertising or displays. No retailers or customers were visible in the photographs and references to store names were removed.

Trained research assistants visited schools to administer the study. Before the experimental manipulation, all students took part in a discussion designed to increase the salience of general brand advertising and display. Following the discussion of brand advertising, students within classrooms were randomly assigned to see photographs of one of the three conditions. A research assistant then read aloud a fictional news story about teen eating habits and visits to convenience stores. Students were told to look carefully at the photograph they were given of the point-of-sale, and asked to imagine walking around the shop noticing what to buy, while they listened to the story.

After the news story had been read out, the research assistant collected all point-of-sale photographs to ensure students did not subsequently refer back to them. Students then completed a brief questionnaire. The entire data collection session was completed during a class period of ∼45 min.

Dependent variables

Perceived difficulty of access

Students were asked about the likelihood that they, and students their age, would be able to purchase tobacco from the pictured stores, using a Likert scale ranging from ‘1 = very easy’ to ‘5 = very hard’. These two questions were combined and averaged to create an overall measure of perceived difficulty of purchasing tobacco (α > 0.70). Students were also asked about the likelihood that they would be asked for proof of age if they tried to purchase cigarettes at the store, using a Likert scale ranging from ‘1 = very likely’ to ‘5 = very unlikely’. Finally, students were asked to estimate how many stores in their neighbourhood would sell tobacco to them, and to other students their age.
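The combined measure is simply the mean of the two access items, retained because their internal consistency exceeded the stated threshold. A minimal Python sketch of that step, for illustration only (not the authors' analysis code), using hypothetical column names `access_self` and `access_peers`:

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert items (rows = respondents, columns = items)."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Toy data with hypothetical column names; scale 1 = very easy ... 5 = very hard.
df = pd.DataFrame({
    "access_self":  [2, 3, 1, 4, 2, 5, 3, 2],
    "access_peers": [2, 2, 1, 5, 3, 4, 3, 1],
})

items = df[["access_self", "access_peers"]]
alpha = cronbach_alpha(items)
if alpha > 0.70:  # consistency threshold reported for the combined measure
    df["perceived_difficulty"] = items.mean(axis=1)  # averaged composite score
print(round(alpha, 2))
```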

Normative beliefs

Perceived prevalence of smoking was assessed by asking how many out of 100 classmates in their year level, 100 high school students and 100 adults they thought smoked cigarettes at least once a week. Perceived approval of smoking was measured by asking students how much they agreed or disagreed, on a Likert scale ranging from ‘1 = strongly agree’ to ‘5 = strongly disagree’, with a range of attributes used to describe smokers (‘A teenager who smokes cigarettes seems … cool; successful; smart; healthy; athletic; and popular’). Perceived peer approval was measured by asking students whether most students their age, and most high school students, ‘think it's ok to smoke cigarettes once in a while’. These two questions, measured on a Likert scale from ‘1 = strongly agree’ to ‘5 = strongly disagree’, were combined and averaged to create an overall ‘peer approval of smoking’ measure (α > 0.70).

Perceived harm

Students were asked whether they agreed or disagreed that ‘Smoking can harm your health’, and how dangerous they thought it was to smoke <10 cigarettes a day, and one or two cigarettes occasionally, on a Likert scale ranging from ‘1 = not dangerous’ to ‘3 = very dangerous’.

Perceived brand popularity

We asked students to nominate the brand they would be likely to smoke if they were a smoker, and then nominate what they thought were the most popular brands smoked by students their age and adults. In order to examine whether cigarette displays and advertising influenced which brands students thought were the most popular, the cigarette brands that were clearly advertised in the cigarette advertising condition (Benson & Hedges, Lucky Strike, Horizon, Marlboro and Winfield) were coded as ‘advertised brands’. Similarly, those cigarette brands that were the most prominent in the cigarette display and cigarette advertising conditions were coded as ‘prominently displayed brands’. These brands were determined by their visual presentation in the display, based on the criteria of being presented by a block of colour or a block with a distinctive feature of the pack (e.g. the prominent stripe on Alpine and Winfield packs). Prominently displayed brands included Horizon, Dunhill, Winfield, Benson & Hedges and Alpine.

Intention to smoke

To gauge students' future intentions to smoke, students were asked whether they thought they would smoke a cigarette at any time during the next year, with responses being ‘definitely not, probably not, probably yes or definitely yes’. Students who had not tried smoking were also asked if they thought they would try a cigarette soon, and also ‘If one of your best friends were to offer you a cigarette, would you smoke it?’ with responses also being ‘definitely not, probably not, probably yes or definitely yes’.

Descriptive variables

Students indicated their sex, whether they had any older brothers or sisters, or a parent or guardian who smoked and how many, if any, of their five best friends smoked. Students were also asked to indicate their frequency of visiting a convenience store, with response options being ‘practically every day, a few times a week, about once a week, about once a month or hardly ever’.

Following the method of Pierce et al. [ 12 ], students were categorized as non-susceptible never smokers, susceptible never smokers or experimenters. Students who reported trying smoking (even just a few puffs) were coded as ‘experimenters’. Students who had never smoked and indicated they would definitely not try smoking cigarettes ‘soon’ and ‘in the next year’, and would definitely not smoke a cigarette if one of their best friends were to offer them one, were coded as ‘non-susceptible never smokers’. Students who did not answer ‘definitely no’ to each circumstance were considered ‘susceptible never smokers’.

Finally, an ‘others smoking’ variable was created by combining students' responses to whether they had at least one parent who smokes, a sibling who smoked and how many of the respondent's best friends smoked. This was a continuous variable, where a lower value indicated less exposure to cigarettes from family and friends.
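As a rough illustration of this coding scheme (with hypothetical variable names; the study's exact questionnaire coding may differ in detail), the smoking-status categories and the ‘others smoking’ score could be implemented as follows:

```python
import pandas as pd

def smoking_status(row) -> str:
    """Categorise a respondent following the Pierce et al. logic described above."""
    if row["ever_tried_smoking"]:
        return "experimenter"
    firm_no = (row["try_soon"] == "definitely not"
               and row["try_next_year"] == "definitely not"
               and row["accept_from_friend"] == "definitely not")
    return "non-susceptible never smoker" if firm_no else "susceptible never smoker"

def others_smoking(row) -> int:
    """Exposure score: parent smokes + sibling smokes + number of smoking best friends."""
    return int(row["parent_smokes"]) + int(row["sibling_smokes"]) + row["n_best_friends_smoking"]

# Hypothetical toy respondent
student = pd.Series({
    "ever_tried_smoking": False,
    "try_soon": "definitely not",
    "try_next_year": "probably not",
    "accept_from_friend": "definitely not",
    "parent_smokes": True,
    "sibling_smokes": False,
    "n_best_friends_smoking": 2,
})
print(smoking_status(student), others_smoking(student))
```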

Chi-square analysis was used to determine whether random assignment produced equivalent groups in relation to tobacco use and other characteristics. To test hypotheses, generalized estimating equations (GEEs) with random effects were used to determine the effects of exposure to the three point-of-sale conditions, controlling for sex, smoking susceptibility and social and familial exposure to smoking. The school attended by respondents was treated as a random effect to account for clustering by school.
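The original analysis was presumably run in dedicated statistical software; as a hedged sketch of an analogous model in Python, the condition effect could be estimated with statsmodels, clustering respondents by school through an exchangeable working correlation (a mixed model with a random intercept for school, e.g. `smf.mixedlm`, would be a close alternative). The variable names and the simulated data below are placeholders, not the study data:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 605
# Simulated stand-in data with hypothetical variable names.
df = pd.DataFrame({
    "difficulty":     rng.normal(3.0, 1.0, n),            # 1 (very easy) .. 5 (very hard)
    "condition":      rng.choice(["no_cigs", "display", "advertising"], n),
    "female":         rng.integers(0, 2, n),
    "susceptibility": rng.choice(["non_susceptible", "susceptible", "experimenter"], n),
    "others_smoking": rng.integers(0, 8, n),
    "school":         rng.integers(1, 6, n),               # five schools -> cluster id
})

# Exposure condition as the predictor of interest, covariates as in the paper,
# with respondents clustered within school via an exchangeable working correlation.
model = smf.gee(
    "difficulty ~ C(condition, Treatment('no_cigs')) + female + C(susceptibility) + others_smoking",
    groups="school",
    data=df,
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(model.fit().summary())
```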

Logistic regression analyses were used to examine the relationship between the cigarette brands respondents thought were most popular among students and adults, and those cigarette brands that were advertised or displayed in the pictured stores.
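A comparable sketch for the brand analysis, again on placeholder data: the brand a student names is coded as ‘advertised’ or not (using the advertised brands listed in the Methods), and a logistic regression relates that indicator to the exposure condition. This illustrates the approach described above rather than reproducing the authors' code:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Brands advertised in the pictured store, as listed in the Methods section.
ADVERTISED = {"Winfield", "Horizon", "Benson & Hedges", "Marlboro", "Lucky Strike"}

rng = np.random.default_rng(1)
brands = ["Winfield", "Horizon", "Benson & Hedges", "Marlboro",
          "Lucky Strike", "Alpine", "Dunhill", "Other"]
df = pd.DataFrame({
    "popular_adult_brand": rng.choice(brands, 605),
    "condition": rng.choice(["no_cigs", "display", "advertising"], 605),
})

# Outcome: did the student name one of the brands advertised in the pictured store?
df["named_advertised"] = df["popular_adult_brand"].isin(ADVERTISED).astype(int)

fit = smf.logit("named_advertised ~ C(condition, Treatment('no_cigs'))", data=df).fit()
print(fit.summary())
```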

Sample characteristics

The sample of 605 students was 51% female; 41% of students had tried smoking cigarettes and 9% currently smoked. Of those who had not yet tried smoking, 11% said they would probably or definitely try a cigarette soon and 8% reported they would probably or definitely try smoking during the next year. Over one-third (36%) of students had at least one parent or guardian who smoked, 21% said they had at least one older brother or sister who smoked and 45% reported that at least one of their best friends smoked.

Table I shows that the characteristics of students were equally distributed by condition in relation to demographic characteristics and peer and family exposure to smoking.

Table I. Student characteristics, by exposure condition

Regardless of experimental condition, students who were experimenting with smoking visited convenience stores more often (mean = 3.1, on a scale of 1 = practically every day to 5 = hardly ever) than students who were susceptible non-smokers (mean = 3.5; P = 0.002) and non-susceptible non-smokers (mean = 3.7; P < 0.01). There was no significant difference between the latter two groups.

Perceived access to tobacco

Table II indicates that students who were exposed to either the cigarette display or the cigarette advertising condition perceived it would be less difficult for either themselves or students their age to purchase tobacco than did students who saw the no cigarettes condition (P < 0.001). In addition, students who saw the cigarette advertising condition were less likely than respondents in the no cigarettes condition to report that they would be asked for proof of age if they tried to buy cigarettes (P = 0.01).

Table II. Perceived access to cigarettes, by exposure condition. Covariates: sex, susceptibility, and others smoking; random effect: school. Purchase difficulty scale: 1 (very easy) to 5 (very hard); proof-of-age scale: 1 (very likely) to 5 (very unlikely). Table entries marked as significant or as trends are compared with the no cigarettes condition (P < 0.01, P < 0.05, or trend at P < 0.1).

On average, students reported that at least one store in their neighbourhood would sell cigarettes to them (mean = 1.5 stores) or to students their age (mean = 1.8 stores). Students who saw the no cigarettes condition tended to report a lower number of neighbourhood stores that would sell them cigarettes (mean = 1.4 stores), compared with those who saw the cigarette advertising point-of-sale (mean = 1.7 stores) (P = 0.07). There was no exposure effect for the number of neighbourhood stores that participants thought would sell to ‘students their age’.

Normative beliefs about smoking

On average, students thought ∼30% of students their age smoke cigarettes at least once a week, with no significant differences between the experimental conditions ( Table III ).

Table III. Perceived smoking prevalence, by exposure condition. Covariates: sex, susceptibility, and others smoking; random effect: school. Agreement scale: 1 (strongly agree) to 5 (strongly disagree). Table entries marked as significant or as trends are compared with the cigarette display condition (P < 0.05, or trend at P < 0.1).

However, those in the cigarette advertising condition reported on average that ∼52% of high school students smoked at least once a week, compared with those who saw the cigarette display condition, who estimated ∼48% of high school students smoke ( P = 0.03).

Respondents who saw the cigarette advertising condition also thought a higher proportion of adults smoke (63%) than did those who saw the cigarette display condition (59%).

There was little variation in students' approval of smoking across experimental conditions (P > 0.10). Students also tended to disagree with statements attributing positive characteristics to teenagers who smoked, with no significant differences between experimental conditions ( Table III ).

Perceived harm of smoking

Regardless of survey condition, most students agreed that smoking can harm your health (mean = 1.3, SD = 0.81). Over half of the students (52%) considered smoking <10 cigarettes a day ‘very dangerous’. However, only 15% of students thought smoking one or two cigarettes occasionally was ‘very dangerous’, with a further 55% considering it ‘a little dangerous’ and 25% ‘not dangerous’. Students who saw the cigarette advertising condition rated smoking one or two cigarettes occasionally as significantly less dangerous (mean = 1.9) than did respondents who saw the cigarette display condition (mean = 2.1) (P = 0.02) ( Table IV ).

Table IV. Perceived harm of smoking, by exposure condition. Covariates: sex, susceptibility, and others smoking; random effect: school. Danger scale: 1 (not dangerous) to 3 (very dangerous).

Future intentions to smoke

Students who saw the cigarette advertising condition tended to be more likely to suggest that they would smoke a cigarette at any time during the following year (mean = 2.0, on a scale of 1 = definitely not to 4 = definitely yes), compared with those who saw the cigarette display condition (mean = 1.9) (P = 0.07).

Examining only students who had not yet tried smoking ( n = 348), those who had been exposed to the cigarette advertising condition were more likely to suggest that they would smoke a cigarette if one of their best friends offered them one, compared with those who saw the cigarette display condition (P = 0.039). However, no significant exposure effects existed for never smokers' intentions to try a cigarette ‘soon’ or during the following year (P > 0.1).

Perceived popularity of cigarette brands and brand preferences

As shown in Table V , when asked to name cigarette brands that were most popular among adult smokers, students exposed to the cigarette advertising condition were more likely to report a cigarette brand that was advertised (Winfield, Horizon, Benson & Hedges, Marlboro or Lucky Strike), compared with those exposed to the no cigarettes condition ( P = 0.049). There was also a trend for respondents exposed to the cigarette advertising condition to report one of the advertised brands, more than those who saw the cigarette display condition ( P = 0.057).

Table V. Perceived cigarette brand popularity and brand preferences, by exposure condition

Tobacco brands that were prominently visible in the displays of the cigarette display and cigarette advertising conditions (i.e. Winfield, Horizon, Benson & Hedges, Alpine and Dunhill) were also related to which brands students thought were most popular among adults. There was a trend for those respondents who saw the cigarette display condition to report brands that were prominently displayed, more than students who saw the no cigarettes condition ( P = 0.052).

There were no significant differences between conditions in relation to the brands respondents thought were popular among students their age who smoke. However, when respondents were asked which cigarette brand they would try if they smoked, those exposed to the cigarette advertising condition also tended to report an advertised brand more than those who saw the cigarette display condition ( P = 0.09).

This experimental study aimed to assess whether cigarette pack displays in retail stores influenced students' perceptions about smoking in ways similar to those previously found for retail tobacco advertising [ 4 ].

Overall, our results suggest that the presence of cigarettes at the point-of-sale (whether cigarette display only or display plus tobacco advertising) increased students' perceptions about the ease of purchasing cigarettes. In addition, the presence of tobacco advertising decreased students' perceived likelihood of being asked for proof of age and tended to increase perceptions of the number of stores that would sell them cigarettes. This pattern of findings suggests that the presence of displays in retail stores serves to create the perception among students that cigarettes are easily available and accessible in their community, while the presence of tobacco advertising further strengthens the perceived ease of access to cigarettes.

Our study findings also suggest that, like advertising, the cigarette pack display is an effective vehicle for promoting brand recall, as evidenced by the cigarette brands reported by students to be the most popular among adult smokers. High recall of cigarette brand names that were advertised in the pictured store, as well as cigarette brands that were prominent in the displays, suggests that tobacco companies are effectively using cigarette packaging displays as a communication device for creating and reinforcing brand awareness and recognition [ 7 ]. Cigarette brand names that were advertised in the pictured store also tended to affect the brands of cigarettes students reported they might try if they did smoke.

Exposure to point-of-sale advertising, but not to displays alone, tended to weaken students' resolve not to smoke in the following year. Findings also indicate that exposure to advertising, as opposed to a pack display on its own, influenced whether students would accept a cigarette if one of their friends offered them one. In countries such as the United States, where point-of-sale tobacco advertising has continued to proliferate, this is cause for concern. US Federal Trade Commission figures indicate that in 2002 tobacco companies spent $12.47 billion on tobacco promotion, a considerable amount of which was focused on the point-of-sale [ 13 ].

No effects were observed for most variables measuring perceived harm from smoking, except the perceived danger of smoking one or two cigarettes occasionally, which was rated significantly lower by those in the cigarette advertising condition than by those in the cigarette display condition. Overall, we found no consistent effects of cigarette advertising or display on peer approval for smoking, the likelihood of positive attributes being ascribed to smokers, or overall harm from smoking. Several of the perceived harm variables and all of the smoker attribute variables were highly skewed in a desirable direction, suggesting established views about smoking that may not be easy to shift with a single experimental exposure.

Results from this study support some of the findings of the experimental study of Henriksen et al. [ 4 ]. Like Henriksen et al. , we found that retail cigarette advertising induced significantly easier perceived access to cigarettes and increased perceived smoking prevalence of high school students and adults. However, unlike Henriksen et al. , cigarette advertising did not influence perceived prevalence of smoking among students their own age. We also did not find advertising to induce more positive appraisals of smokers. There were differences between our study and that of Henriksen et al. that may have accounted for differences in some findings. These include the fact that students are no longer routinely exposed to retail tobacco advertising in Australia, that Australian students were in Grade 9 (aged 14–15 years) only, rather than Grades 8 and 9 (aged 13–15 years), that Australian students were recruited by active, rather than passive consent, and that Australian students were exposed to only one photograph in each condition, rather than two. However, given these methodological and contextual differences, the fact that we did find experimental effects for most variables used in both studies suggests that the effects are relatively robust.

There were several study limitations, not the least of which was that the stimulus conditions were artificial. Students briefly viewed one of the three manipulated point-of-sale photographs in a classroom setting, rather than visiting a real store environment, so they may have perceived the photographs to be unrealistic, and may not have responded in the same way to a real life situation. However, the fact we did observe effects of the different point-of-sale photographs on students' perceptions about smoking even with a brief exposure suggests that the influence of cigarette advertising as well as pack displays in the actual store environment is probably considerable.

In addition, we cannot be certain that the responses of students who saw the store with no cigarettes were not influenced by their own memory of what a convenience store ‘usually’ looks like (i.e. in Victoria, with tobacco displays present). Over one-third (35%, n = 74) of students who saw the store with no cigarettes reported that they had seen tobacco products, even though none were present, and this false recognition was positively related to being a current smoker (P < 0.05). It is possible that, owing to students' misperceptions of what they had seen during the experimental manipulation, we did not achieve a ‘clean’ measure of students' exposure to a store with no tobacco products, and condition effects may therefore have been diluted. This also suggests that cigarette displays may be extremely salient to smoking teenagers and can potentially influence their recollections of this type of marketing.

We confirmed the finding of Henriksen et al. [ 6 ] that frequency of student visits to convenience stores was associated with a higher likelihood of experimenting with cigarettes, one interpretation being that there may be long-term cumulative effects of point-of-sale exposure. Future research might aim to study students' brand recall and perceptions about smoking immediately after exiting real stores that vary in dominance of cigarette displays at the point-of-sale.

A strength of the study was that we were able to randomize students to conditions within classrooms, rather than randomizing whole classrooms, as in the study of Henriksen et al. [ 4 ]. However, since data collection occurred within classrooms and only five schools were involved, there may still be clustering of respondents, so to be conservative, we analysed the data using GEEs with random effects, where the school attended by respondents was treated as a random effect. We also controlled for sex, smoking susceptibility and social and familial exposures to cigarette smoking. Thus, the effects observed in this study are independent of these other well-known influences on smoking perceptions.

This study suggests that the presence of cigarette displays at the point-of-sale, even in the absence of cigarette advertising, has adverse effects on students' perceptions about ease of access to cigarettes and on brand recall, both factors that increase the risk of taking up smoking [ 14, 15 ]. Furthermore, the study suggests that cigarette advertising has similar effects, and may also weaken students' firm intentions not to smoke in future, a measure that strongly predicts smoking uptake [ 16 ]. These findings make a case for eliminating cigarette advertising at the point-of-sale, and also for placing cigarettes out of sight in the retail environment, as has happened in Saskatchewan, Canada [ 17 ]. Such a move may help to curb the alarming rate of smoking uptake among adolescents.


Enago Academy

Experimental Research Design — 6 mistakes you should never make!


Since school days’ students perform scientific experiments that provide results that define and prove the laws and theorems in science. These experiments are laid on a strong foundation of experimental research designs.

An experimental research design helps researchers execute their research objectives with more clarity and transparency.

In this article, we will not only discuss the key aspects of experimental research designs but also the issues to avoid and problems to resolve while designing your research study.


What Is Experimental Research Design?

Experimental research design is a framework of protocols and procedures created to conduct experimental research with a scientific approach using two sets of variables. The first set of variables is held constant and serves as the benchmark against which differences in the second set are measured. Quantitative research is the clearest example of an experimental research method.

Experimental research helps a researcher gather the necessary data for making better research decisions and determining the facts of a research study.

When Can a Researcher Conduct Experimental Research?

A researcher can conduct experimental research in the following situations —

  • When time is an important factor in establishing a relationship between the cause and effect.
  • When there is an invariable or never-changing behavior between the cause and effect.
  • Finally, when the researcher wishes to understand the importance of the cause and effect.

Importance of Experimental Research Design

To publish significant results, choosing a quality research design forms the foundation on which the research study is built. Moreover, an effective research design helps establish quality decision-making procedures, structures the research for easier data analysis, and addresses the main research question. It is therefore essential to devote undivided attention and time to creating an experimental research design before beginning the practical experiment.

By creating a research design, a researcher is also giving oneself time to organize the research, set up relevant boundaries for the study, and increase the reliability of the results. Through all these efforts, one could also avoid inconclusive results. If any part of the research design is flawed, it will reflect on the quality of the results derived.

Types of Experimental Research Designs

Based on the methods used to collect data in experimental studies, the experimental research designs are of three primary types:

1. Pre-experimental Research Design

A pre-experimental research design is used when a group, or several groups, are observed after the factors of cause and effect have been applied. The pre-experimental design helps researchers decide whether further investigation of the groups under observation is warranted.

Pre-experimental research is of three types —

  • One-shot Case Study Research Design
  • One-group Pretest-posttest Research Design
  • Static-group Comparison

2. True Experimental Research Design

A true experimental research design relies on statistical analysis to prove or disprove a researcher’s hypothesis. It is one of the most accurate forms of research because it provides specific scientific evidence. Furthermore, out of all the types of experimental designs, only a true experimental design can establish a cause-effect relationship within a group. However, in a true experiment, a researcher must satisfy these three factors —

  • There is a control group that is not subjected to changes and an experimental group that will experience the changed variables
  • A variable that can be manipulated by the researcher
  • Random distribution of the variables

This type of experimental research is commonly observed in the physical sciences.
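As a small illustration of the third requirement, random distribution of participants between the control and experimental groups can be automated so that assignment does not depend on the researcher's judgment. A minimal sketch with illustrative names only:

```python
import random

def randomly_assign(participants, seed=42):
    """Shuffle participants and split them into control and experimental groups."""
    rng = random.Random(seed)            # fixed seed so the assignment is reproducible
    shuffled = participants[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"control": shuffled[:half], "experimental": shuffled[half:]}

groups = randomly_assign([f"P{i:02d}" for i in range(1, 21)])
print(groups["control"])
print(groups["experimental"])
```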

3. Quasi-experimental Research Design

The word “quasi” means “resembling”. A quasi-experimental design is similar to a true experimental design, but the two differ in how the control group is assigned. In this research design, an independent variable is manipulated, but the participants are not randomly assigned to groups. This type of research design is used in field settings where random assignment is not feasible or not required.

The classification of the research subjects, conditions, or groups determines the type of research design to be used.


Advantages of Experimental Research

Experimental research allows you to test your idea in a controlled environment before taking the research to clinical trials. Moreover, it provides the best method to test your theory because of the following advantages:

  • Researchers have firm control over variables to obtain results.
  • The effectiveness of experimental research does not depend on the subject area; it can be implemented in any field of research.
  • The results are specific.
  • After the results are analysed, findings from the same dataset can be repurposed for similar research ideas.
  • Researchers can identify the cause and effect of the hypothesis and further analyze this relationship to determine in-depth ideas.
  • Experimental research makes an ideal starting point. The collected data could be used as a foundation to build new research ideas for further studies.

6 Mistakes to Avoid While Designing Your Research

There is no order to this list, and any one of these issues can seriously compromise the quality of your research. You could refer to the list as a checklist of what to avoid while designing your research.

1. Invalid Theoretical Framework

Researchers often fail to check whether their hypothesis can logically be tested. If your research design does not rest on basic assumptions or postulates, it is fundamentally flawed and you need to rework your research framework.

2. Inadequate Literature Study

Without a comprehensive research literature review , it is difficult to identify and fill the knowledge and information gaps. Furthermore, you need to clearly state how your research will contribute to the research field, either by adding value to the pertinent literature or challenging previous findings and assumptions.

3. Insufficient or Incorrect Statistical Analysis

Statistical results are one of the most trusted scientific evidence. The ultimate goal of a research experiment is to gain valid and sustainable evidence. Therefore, incorrect statistical analysis could affect the quality of any quantitative research.

4. Undefined Research Problem

This is one of the most basic aspects of research design. The research problem statement must be clear and to do that, you must set the framework for the development of research questions that address the core problems.

5. Research Limitations

Every study has limitations. You should anticipate them and incorporate them into your conclusion, as well as into the basic research design. Include a statement in your manuscript about any perceived limitations, and how you considered them while designing your experiment and drawing your conclusions.

6. Ethical Implications

The most important yet least discussed aspect is ethics. Your research design must include ways to minimize any risk to your participants while still addressing the research problem or question at hand. If ethical norms cannot be upheld alongside the research study, the research objectives and validity could be questioned.

Experimental Research Design Example

In an experimental design, a researcher gathers plant samples and then randomly assigns half the samples to photosynthesize in sunlight and the other half to be kept in a dark box without sunlight, while controlling all the other variables (nutrients, water, soil, etc.)

By comparing their outcomes in biochemical tests, the researcher can confirm that the changes in the plants were due to the sunlight and not the other variables.
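To make the comparison concrete, the outcomes of the two groups could be compared with a two-sample t-test once the biochemical measurements are in. A short sketch on simulated numbers (the values below are illustrative, not real measurements):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Simulated outcome (e.g., a biochemical measurement) for randomly assigned samples.
sunlight = rng.normal(loc=12.0, scale=2.0, size=15)   # plants kept in sunlight
dark_box = rng.normal(loc=8.0,  scale=2.0, size=15)   # plants kept in the dark box

t_stat, p_value = stats.ttest_ind(sunlight, dark_box)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value is consistent with sunlight, the manipulated variable,
# rather than the controlled variables driving the difference between groups.
```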

Experimental research is often the final stage of the research process and is considered to provide conclusive, specific results. But it is not suited to every research question: it requires substantial resources, time, and money, and is not easy to conduct unless a foundation of prior research has been built. Yet it is widely used in research institutes and commercial industries because, within the scientific approach, it yields the most conclusive results.

Have you worked on research designs? How was your experience creating an experimental design? What difficulties did you face? Do write to us or comment below and share your insights on experimental research designs!

Frequently Asked Questions

Randomization is important in experimental research because it helps ensure unbiased results. It also allows the cause-effect relationship to be measured in the particular group of interest.

Experimental research design lays the foundation of a study and structures the research to establish a quality decision-making process.

There are 3 types of experimental research designs. These are pre-experimental research design, true experimental research design, and quasi experimental research design.

The differences between an experimental and a quasi-experimental design are: 1. The assignment of the control group in quasi-experimental research is non-random, unlike in a true experimental design, where it is randomly assigned. 2. A true experiment always has a control group; a quasi-experiment may not.

Experimental research establishes a cause-effect relationship by testing a theory or hypothesis using experimental and control groups. In contrast, descriptive research describes a study or a topic by defining its variables and answering the questions related to them.



Computer Science > Computer Vision and Pattern Recognition

Title: Negative Label Guided OOD Detection with Pretrained Vision-Language Models

Abstract: Out-of-distribution (OOD) detection aims at identifying samples from unknown classes, playing a crucial role in trustworthy models against errors on unexpected inputs. Extensive research has been dedicated to exploring OOD detection in the vision modality. Vision-language models (VLMs) can leverage both textual and visual information for various multi-modal applications, whereas few OOD detection methods take into account information from the text modality. In this paper, we propose a novel post hoc OOD detection method, called NegLabel, which takes a vast number of negative labels from extensive corpus databases. We design a novel scheme for the OOD score collaborated with negative labels. Theoretical analysis helps to understand the mechanism of negative labels. Extensive experiments demonstrate that our method NegLabel achieves state-of-the-art performance on various OOD detection benchmarks and generalizes well on multiple VLM architectures. Furthermore, our method NegLabel exhibits remarkable robustness against diverse domain shifts. The codes are available at this https URL .
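The abstract does not spell out the scoring formula, but one plausible reading of a negative-label scheme is to measure how much of an image's similarity-softmax mass falls on the in-distribution class names rather than on a large pool of negative words. A hedged sketch of that idea with stand-in embeddings (in practice the embeddings would come from a CLIP-style vision-language model); this illustrates the concept only and is not the paper's exact method:

```python
import torch
import torch.nn.functional as F

def neg_label_ood_score(image_emb, id_label_embs, neg_label_embs, temperature=0.01):
    """
    Illustrative negative-label OOD score: the share of softmax mass that the image's
    text-image similarities place on in-distribution labels versus negative labels.
    Higher scores suggest in-distribution; lower scores suggest OOD.
    """
    image_emb = F.normalize(image_emb, dim=-1)
    all_labels = F.normalize(torch.cat([id_label_embs, neg_label_embs], dim=0), dim=-1)
    sims = image_emb @ all_labels.T / temperature          # scaled cosine similarities
    probs = sims.softmax(dim=-1)
    return probs[: id_label_embs.shape[0]].sum()           # mass on ID labels

# Stand-in embeddings; real ones would come from a CLIP-style text/image encoder.
dim = 512
image_emb = torch.randn(dim)
id_label_embs = torch.randn(10, dim)       # e.g., the task's class names
neg_label_embs = torch.randn(1000, dim)    # e.g., words mined from a large corpus
print(float(neg_label_ood_score(image_emb, id_label_embs, neg_label_embs)))
```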


COMMENTS

  1. APA Sample Paper: Experimental Psychology

    Writing the Experimental Report: Methods, Results, and Discussion. Tables, Appendices, Footnotes and Endnotes. References and Sources for More Information. APA Sample Paper: Experimental Psychology. Style Guide Overview MLA Guide APA Guide Chicago Guide OWL Exercises. Purdue OWL. Subject-Specific Writing.

  2. PDF Sample Paper: One-Experiment Paper

    Sample One-Experiment Paper (continued): emotional detection than young adults, or older adults could show a greater facilitation than young adults only for the detection of positive information. The results lent some support to the first two alternatives, but no evidence was found to support the third alternative.

  3. Beauty sleep: experimental study on the perceived health and

    Methods. Using an experimental design we photographed the faces of 23 adults (mean age 23, range 18-31 years, 11 women) between 14.00 and 15.00 under two conditions in a balanced design: after a normal night's sleep (at least eight hours of sleep between 23.00-07.00 and seven hours of wakefulness) and after sleep deprivation (sleep 02.00-07.00 and 31 hours of wakefulness).

  4. Guide to Experimental Design

    Step 1: Define your variables. You should begin with a specific research question. We will work with two research question examples, one from health sciences and one from ecology: Example question 1: Phone use and sleep. You want to know how phone use before bedtime affects sleep patterns.

  5. A Quantitative Study of the Impact of Social Media Reviews on Brand

    Table 1 Categories and their examples of social media platforms ... the 2010 Pew Research report, the millennial is defined as having been born between 1977 and 1992 (Norén, L. 2011). The reviewers of the millennial generation have a high power of influence on the audience that thinks and acts like them. ...

  6. Journal of Experimental Psychology: General: Sample articles

    February 2011. by Jeff Galak and Tom Meyvis. The Nature of Gestures' Beneficial Role in Spatial Problem Solving (PDF, 181KB) February 2011. by Mingyuan Chu and Sotaro Kita. Date created: 2009. Sample articles from APA's Journal of Experimental Psychology: General.

  7. PDF An Experimental Study on the Effectiveness of Multimedia

    The results have been fed into SPSS (12.0) and analyzed using independent sample T-test analysis. Table 2 shows that in Test 1, Group 1 and Group 2 are quite similar in the means (Group 1 is 69.33, while Group 2 is 70.92), this means both groups have nearly the same English proficiency, and though experimental group is a little

  8. Experimental Research Papers

    A research paper is intended to inform others about advancement in a particular field of study. The researcher who wrote the paper identified a gap in the research in a field of study and used their research to help fill this gap. The researcher uses their paper to inform others about the knowledge that the results of their study contribute ...

  9. A Quick Guide to Experimental Design

    Step 1: Define your variables. You should begin with a specific research question. We will work with two research question examples, one from health sciences and one from ecology: Example question 1: Phone use and sleep. You want to know how phone use before bedtime affects sleep patterns.

  10. PDF B.S. Research Paper Example (Empirical Research Paper)

    B.S. Research Paper Example (Empirical Research Paper) This is an example of a research paper that was written in fulfillment of the B.S. research paper requirement. It uses APA style for all aspects except the cover sheet (this page; the cover sheet is required by the department). It describes research that the author was involved in while ...

  11. Experimental Research

    10+ Experimental Research Examples. Go over the following examples of experimental research papers. They may be able to help you gain a head start in your study or uproot you from where you're stuck in your experiment. 1. Experimental Research Design Example. onlinelibrary.wiley.com.

  12. Exploring Experimental Research: Methodologies, Designs, and

    Experimental research serves as a fundamental scientific method aimed at unraveling cause-and-effect relationships between variables across various disciplines. This paper delineates the key ...

  13. (PDF) AN EXPERIMENTAL STUDY ON THE EFFECT OF PARTS ...

    The population was nine classes (420 students) of grade XI at SMA Negeri 5 Denpasar in the 2012/2013 academic year, from which two classes were sampled and assigned to two groups, i.e. experimental ...

  14. How to Write an Experimental Research Paper

    This article aims to present general guidelines for one of the many roles of a neurosurgeon: writing an experimental research paper. Every research report must follow the IMRAD formula: introduction, methods, results, and discussion. After the IMRAD sections are finished, the abstract should be written and the title created.

  15. Study/Experimental/Research Design: Much More Than Statistics

    Study, experimental, or research design is the backbone of good research. It directs the experiment by orchestrating data collection, defines the statistical analysis of the resultant data, and guides the interpretation of the results. When properly described in the written report of the experiment, it serves as a road map to readers [1], helping ...

  16. Impact of Weekly Physical Activity on Stress Response: An Experimental

    The sample consisted of sports science students (N = 22) with a mean age of 22.82 years (SD = 3.67) and a mean of 7.77 hours of physical activity per week (SD = 3.32). A quasi-experimental design was used in which each participant's response to the DT test was evaluated. The variable "number of hours of physical activity per week ...
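    For readers unfamiliar with the descriptives quoted above (M, SD), the sketch below shows how a mean and sample standard deviation of that kind are computed; the age values are invented, not the study's data.

      import statistics

      ages = [20, 22, 19, 25, 23, 21, 28, 24, 22, 26, 21,
              23, 20, 27, 19, 25, 22, 24, 23, 21, 26, 22]  # 22 placeholder ages

      print(f"M = {statistics.mean(ages):.2f}, SD = {statistics.stdev(ages):.2f}")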

  17. (PDF) Experimental Research Methods

    On Jan 1, 2003, S.M. Ross and others published Experimental Research Methods (PDF).

  18. experimental study of effects on schoolchildren of exposure to point-of

    Experimental research has shown that adolescents exposed to retail tobacco advertising perceived significantly easier access to cigarettes than a control group. ... The sample of 605 students was 51% female; 41% of the students had tried smoking cigarettes and 9% currently smoked. Of those who had not yet tried smoking, 11% said they ...

  19. PDF A Sample Research Paper/Thesis/Dissertation on Aspects of Elementary

    Theorem 1.2.1. A homogeneous system of linear equations with more unknowns than equations always has infinitely many solutions. The definition of matrix multiplication requires that the number of columns of the first factor A be the same as the number of rows of the second factor B in order to form the product AB.
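    A worked example of the theorem, constructed here rather than taken from the cited thesis: with more unknowns than equations, a free parameter is forced, so there are infinitely many solutions; the dimension rule for the product AB is also restated.

      % Constructed example: 2 homogeneous equations in 3 unknowns.
      \[
      \begin{cases}
      x_1 + x_2 - x_3 = 0\\
      2x_1 - x_2 + x_3 = 0
      \end{cases}
      \quad\Longrightarrow\quad
      x_1 = 0,\quad x_2 = x_3 = t \quad (t \in \mathbb{R}),
      \]
      % so every choice of t gives a solution: infinitely many in total.
      % Dimension rule for the product AB:
      \[
      A \in \mathbb{R}^{m \times n},\; B \in \mathbb{R}^{n \times p}
      \;\Longrightarrow\; AB \in \mathbb{R}^{m \times p}.
      \]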

  20. Experimental Research Designs: Types, Examples & Advantages

    There are three types of experimental research design: pre-experimental, true experimental, and quasi-experimental. 1. The assignment of the control group in quasi-experimental research is non-random, unlike in true experimental design, where it is randomly assigned. 2. ...
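    A minimal sketch of the distinction in point 1: random assignment in a true experiment versus grouping by a pre-existing attribute in a quasi-experiment. The names, group sizes, and seed are illustrative assumptions.

      import random

      participants = [f"p{i}" for i in range(8)]

      # True experimental design: the experimenter assigns each participant at random.
      rng = random.Random(42)
      randomised = {p: rng.choice(["treatment", "control"]) for p in participants}

      # Quasi-experimental design: groups follow an existing attribute (e.g. intact
      # classes), so assignment is outside the experimenter's control.
      by_existing_class = {p: ("class_A" if i < 4 else "class_B")
                           for i, p in enumerate(participants)}

      print(randomised)
      print(by_existing_class)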

  21. PDF CHAPTER 4: ANALYSIS AND INTERPRETATION OF RESULTS

    The analysis and interpretation of data is carried out in two phases. The first part, which is based on the results of the questionnaire, deals with a quantitative analysis of data. The second, which is based on the results of the interview and focus group discussions, is a qualitative interpretation.

  22. An Experimental Study on the Effects of the Study Skills Course Learn


  23. Negative Label Guided OOD Detection with Pretrained Vision-Language Models

    Abstract: Out-of-distribution (OOD) detection aims at identifying samples from unknown classes, playing a crucial role in keeping models trustworthy against errors on unexpected inputs. Extensive research has been dedicated to exploring OOD detection in the vision modality. Vision-language models (VLMs) can leverage both textual and visual information for various multi ...
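    A hypothetical sketch of the general idea of scoring inputs by comparing similarity to in-distribution label embeddings against a pool of negative labels. This is an assumption-laden illustration, not the paper's actual algorithm, and all arrays below are random placeholders.

      import numpy as np

      rng = np.random.default_rng(0)
      image_emb = rng.normal(size=512)             # placeholder image embedding
      id_label_embs = rng.normal(size=(10, 512))   # placeholder in-distribution label embeddings
      neg_label_embs = rng.normal(size=(100, 512)) # placeholder "negative" label embeddings

      def cosine(vec, mat):
          # Cosine similarity of one vector against each row of a matrix.
          return mat @ vec / (np.linalg.norm(mat, axis=1) * np.linalg.norm(vec))

      id_sims = np.exp(cosine(image_emb, id_label_embs))
      neg_sims = np.exp(cosine(image_emb, neg_label_embs))
      in_dist_score = id_sims.sum() / (id_sims.sum() + neg_sims.sum())
      print(in_dist_score)  # higher values suggest the input is in-distribution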