Korean J Anesthesiol. 2017 Feb; 70(1)

Understanding one-way ANOVA using conceptual figures

Tae Kyun Kim

Department of Anesthesia and Pain Medicine, Pusan National University Yangsan Hospital and School of Medicine, Yangsan, Korea.

Analysis of variance (ANOVA) is one of the most frequently used statistical methods in medical research. The need for ANOVA arises from alpha-level inflation: the increase in the probability of a Type I error (false positive) caused by multiple comparisons. ANOVA uses the F statistic, the ratio of the between-group variance to the within-group variance. Although the main interest of the analysis lies in the differences among group means, ANOVA assesses them through the difference of variances. The conceptual figures presented here serve as a guide to understanding how ANOVA resolves questions about mean differences by using between-group and within-group variances.

Introduction

The difference in the means of two mutually independent groups that satisfy both the normality and equal-variance assumptions can be tested with Student's t-test. However, we often need to determine whether differences exist among the means of three or more groups. Most readers already know that the most common analytical method for this is the one-way analysis of variance (ANOVA). The present article examines why a one-way ANOVA should be used instead of simply repeating comparisons with Student's t-test. ANOVA literally means analysis of variance, and this article uses conceptual illustrations to explain how a difference in means can be assessed by comparing variances rather than the means themselves.

Significance Level Inflation

In the comparison of the means of three groups that are mutually independent and satisfy the normality and equal variance assumptions, when each group is paired with another to make three pairwise comparisons 1), inflation of the Type I error rate commonly occurs. In other words, even though the null hypothesis is true, the probability of rejecting it increases, and with it the probability of concluding that the alternative hypothesis (research hypothesis) is significant when in fact it is not.

Let us assume that the distribution of differences in the means of two groups is as shown in Fig. 1. The maximum allowable probability of wrongly claiming that "differences in means exist" is defined as the significance level (α). This is the maximum probability of a Type I error, that is, of rejecting the null hypothesis that "differences in means do not exist" in a comparison between two mutually independent groups obtained from one experiment. When the null hypothesis is true, the probability of correctly accepting it is 1 − α.

[Figure 1]

Now, let us compare the means of three groups. The null hypothesis in the comparison of three groups is usually "the population means of the three groups are all the same"; however, the alternative hypothesis is not "the population means of the three groups are all different," but rather "at least one of the population means of the three groups is different." In other words, the null hypothesis (H0) and the alternative hypothesis (H1) are as follows:

H0: μ1 = μ2 = μ3
H1: not all of μ1, μ2, μ3 are equal (at least one differs)

Therefore, among the three groups, if the means of any two groups are different from each other, the null hypothesis can be rejected.

In that case, let us examine whether the probability of rejecting the entire null hypothesis remains consistent when two successive comparisons are made on hypotheses that are not mutually independent. When the null hypothesis is true, if it is rejected in even a single comparison, the entire null hypothesis is rejected. Accordingly, the probability of rejecting the entire null hypothesis from two comparisons can be derived by first calculating the probability of accepting the null hypothesis in both comparisons and then subtracting that value from 1. Therefore, the probability of rejecting the entire null hypothesis from two comparisons is:

1 − (1 − α)(1 − α) = 1 − (1 − α)^2

If the comparisons are made n times, the probability of rejecting the entire null hypothesis can be expressed as:

1 − (1 − α)^n

It can be seen that as the number of comparisons increases, the probability of rejecting the entire null hypothesis also increases. Assuming the significance level for a single comparison to be 0.05, the increases in the probability of rejecting the entire null hypothesis according to the number of comparisons are shown in Table 1 .
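As a quick numerical check of this formula, the familywise error rate 1 − (1 − α)^n can be computed directly; a minimal R sketch, assuming a per-comparison significance level of 0.05 (the values are of the kind reported in Table 1):

    alpha <- 0.05
    n     <- 1:10
    fwer  <- 1 - (1 - alpha)^n   # probability of at least one false rejection in n comparisons
    round(data.frame(comparisons = n, familywise_alpha = fwer), 3)
    # e.g. 3 comparisons give about 0.143, and 10 comparisons about 0.401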

ANOVA Table

Although various methods have been used to avoid hypothesis-testing errors due to significance-level inflation, such as adjusting the significance level by the number of comparisons, the ideal method for resolving this problem with a single statistic is ANOVA. ANOVA is an acronym for analysis of variance, and as the name implies, it analyzes variances. Let us examine why differences in means can be assessed by analyzing variances, even though the core of the problem we want to solve lies in comparing means.

For example, let us examine whether there are differences in the heights of students according to their grade (Table 2). First, let us examine the ANOVA table (Table 3) that is commonly obtained as the product of an ANOVA. In Table 3, significance is ultimately determined using a significance probability (P value), and to obtain this value, the statistic and its position within the distribution to which it belongs must be known. In other words, there has to be a reference distribution, and that distribution is called the F distribution. The F is named after the statistician Ronald Fisher. The ANOVA test is also referred to as the F test, and the F distribution is a distribution formed by ratios of variances. Accordingly, the F statistic is expressed as a variance ratio, as shown below:

F = [Σ n_i(Ȳ_i − Ȳ)^2 / (K − 1)] / [Σ Σ (Y_ij − Ȳ_i)^2 / (N − K)]

Table 2. Raw data of students' heights in three different classes. Each class consists of thirty students.

The F statistic is the ratio of the intergroup (between-group) mean sum of squares to the intragroup (within-group) mean sum of squares.

Here, Ȳ i is the mean of the group i; n i is the number of observations of the group i; Ȳ is the overall mean; K is the number of groups; Y ij is the j th observational value of group i; and N is the number of all observational values.
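To make the formula concrete, the between- and within-group sums of squares and the resulting F value can be computed by hand; a minimal R sketch with simulated heights for three classes of 30 students (the numbers are invented for illustration and are not the data in Table 2):

    set.seed(1)
    # Simulated heights (cm) for three classes of 30 students each (illustrative only)
    heights <- data.frame(
      class  = rep(c("A", "B", "C"), each = 30),
      height = c(rnorm(30, 170, 6), rnorm(30, 172, 6), rnorm(30, 174, 6))
    )

    grand.mean <- mean(heights$height)
    group.mean <- tapply(heights$height, heights$class, mean)
    group.n    <- tapply(heights$height, heights$class, length)

    K <- length(group.mean)   # number of groups
    N <- nrow(heights)        # total number of observations

    SSB <- sum(group.n * (group.mean - grand.mean)^2)           # between-group sum of squares
    SSW <- sum((heights$height - group.mean[heights$class])^2)  # within-group sum of squares

    F.stat <- (SSB / (K - 1)) / (SSW / (N - K))
    F.stat
    # The same value appears in summary(aov(height ~ class, data = heights))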

It is not easy to look at this complex equation and understand ANOVA at a single glance, so its meaning is explained below with an illustration. Statistics can be regarded as a field that tries to express data which are otherwise difficult to grasp in a brief and simple form. In other words, instead of observing the scattered points independently, as shown in Fig. 2A, we can describe them with a single representative value. Values commonly referred to as the mean, median, and mode can serve as this representative value. Here, let us assume that the black rectangle in the middle represents the overall mean. However, a closer look shows that the points inside the circle have different shapes and that points of the same shape appear to be gathered together. Explaining all the points with just the overall mean would therefore be inappropriate, and the points should be divided into groups so that points of the same shape belong to the same group. Although this is more cumbersome than explaining the entire population with the overall mean alone, it is more reasonable to first form groups of points with the same shape, establish the mean of each group, and then explain the population with the three group means. Therefore, as shown in Fig. 2B, the points were divided into three groups and a mean was established at the center of each group, in an effort to explain the entire population with these three points. The question now is how to evaluate whether explaining the data with the three group means differs from explaining them with the single overall mean.

[Figure 2]

First, let us measure the distance between the overall mean and the mean of each group, and the distance from the mean of each group to each data point within that group. The distance between the overall mean and the mean of each group is shown as a solid arrow (Fig. 2C). This distance is expressed as (Ȳ_i − Ȳ)^2, which appears in the numerator of the equation for the F statistic. Here, it is multiplied by the number of data points in each group, giving n_i(Ȳ_i − Ȳ)^2, because explaining a group with its representative value is equivalent to treating all the data in that group as if they were accumulated at that value. This term therefore measures how much is gained by explaining the data with the group means rather than the overall mean alone, and it represents the between-group (intergroup) variance.

Let us return to the equation for the F statistic. The meaning of (Y_ij − Ȳ_i)^2 in the denominator is illustrated in Fig. 2C, where the distance from the mean of each group to each data point is shown by the dotted arrows. This distance, from the group mean to the data within that group, represents the within-group (intragroup) variance.

Looking at the equation for the F statistic, it can be seen that each of these between- and within-group sums of squares is divided by its degrees of freedom. As an analogy, suppose that when all the fingers of one hand are stretched out, the mean finger length is represented by the index finger. If the differences in finger lengths are compared to obtain the variance, then although there are 5 fingers, the number of gaps between them is 4. Likewise, to obtain the mean squares, the between-group sum of squares is divided by its degrees of freedom, K − 1 = 2, while the within-group sum of squares is divided by N − K = 87, the total obtained by subtracting 1 from each of the three groups of 30.

What is gained by deriving these variances can be described as follows. Figs. 3A and 3B illustrate two different situations. Although the data are divided into three groups, the within-group variance may be so large (Fig. 3A) that nothing is gained by the division: the boundaries become ambiguous and the group means are not far from the overall mean, so it would have been more efficient to explain the entire population with the overall mean alone. Alternatively, when the between-group variance is relatively larger than the within-group variance, in other words, when the distance from the overall mean to the mean of each group is large (Fig. 3B), the boundaries between the groups become clearer, and explaining the data by dividing them into three groups is more reasonable than lumping them together under the overall mean.

[Figure 3]

Ultimately, the position of the statistic derived in this manner from the ratio of the between-group to the within-group variance can be located in the F distribution (Fig. 4). Since the statistic 3.629 in the ANOVA table lies to the right of 3.101, the value corresponding to a significance level of 0.05 in the F distribution with 2 and 87 degrees of freedom (i.e., it is larger than 3.101), the null hypothesis can be rejected.
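Both numbers can be reproduced from the F distribution directly; a minimal R sketch using the degrees of freedom (2 and 87) and the statistic (3.629) quoted above:

    qf(0.95, df1 = 2, df2 = 87)                       # critical value, approximately 3.101
    pf(3.629, df1 = 2, df2 = 87, lower.tail = FALSE)  # P value of the observed statistic, approximately 0.03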

[Figure 4]

Post-hoc Test

Anyone who has performed an ANOVA has heard of the term post-hoc test. It refers to "analysis after the fact" and is derived from the Latin for "after this." The reason for performing a post-hoc test is that the conclusions that can be drawn from the ANOVA itself are limited. When the null hypothesis that the population means of three mutually independent groups are equal is rejected, what we learn is not that all three groups differ from one another, but only that at least one group mean differs; the test does not tell us which group differs from which (Fig. 5). As a result, the groups are compared in pairs in an additional step to verify which groups differ, and this process is referred to as the post-hoc test.

[Figure 5]

The significance level is adjusted by various methods [1], such as dividing it by the number of comparisons made, and depending on the adjustment method, various post-hoc tests can be conducted. Whichever method is used, there is no major problem as long as the method is clearly described. One of the best-known methods is the Bonferroni correction. Briefly, the significance level is divided by the number of comparisons and applied to each pairwise comparison. For example, when comparing the population means of three mutually independent groups A, B, and C at a significance level of 0.05, the significance level used for the comparisons A vs. B, A vs. C, and B vs. C would be 0.05/3 ≈ 0.017. Other methods include the Tukey, Scheffé, and Holm methods, all of which are applicable only when the equal-variance assumption is satisfied; when it is not, the Games-Howell method can be applied. These post-hoc tests can produce different results, so it is advisable to decide on the post-hoc tests (for example, at least three of them) before carrying out the study and to interpret the differences in population means based on the results that appear most consistently.
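As an illustration, Bonferroni-adjusted pairwise comparisons can be obtained with a single call in R; a minimal sketch, reusing the simulated heights data frame from the earlier example:

    # Pairwise t-tests between classes with Bonferroni-adjusted P values
    pairwise.t.test(heights$height, heights$class, p.adjust.method = "bonferroni")
    # Equivalently, each unadjusted P value can be compared against 0.05 / 3, i.e. about 0.017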

Conclusions

A wide variety of approaches and explanatory methods are available for explaining ANOVA. The illustrations in this manuscript are offered as a tool for those encountering statistics for the first time. As the author, who prepared these illustrations, is not a statistician, there may be some errors in them; nevertheless, they should be sufficient for grasping the basic concept of ANOVA at a glance.

ANOVA is a parametric method, meaning that the analysis is performed after the distribution of the population has been specified in advance. Therefore, the samples must satisfy normality, independence, and equal variance. Before the results are derived, it must be verified that the samples were drawn independently of each other, and tests such as Levene's test for homogeneity of variance and the Shapiro-Wilk or Kolmogorov-Smirnov test for normality must be conducted [2, 3, 4].
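A minimal R sketch of these checks, again using the simulated heights data from above (Levene's test requires the car package; bartlett.test() is a base-R alternative that assumes normality):

    # Normality within each group (Shapiro-Wilk)
    by(heights$height, heights$class, shapiro.test)

    # Homogeneity of variance
    bartlett.test(height ~ class, data = heights)               # base R
    # car::leveneTest(height ~ factor(class), data = heights)   # Levene's test, if car is installed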

1) A, B, C three paired comparisons: A vs B, A vs C and B vs C.


One-way ANOVA | When and How to Use It (With Examples)

Published on March 6, 2020 by Rebecca Bevans . Revised on June 22, 2023.

ANOVA , which stands for Analysis of Variance, is a statistical test used to analyze the difference between the means of more than two groups.

A one-way ANOVA uses one independent variable , while a two-way ANOVA uses two independent variables.

Table of contents

  • When to use a one-way ANOVA
  • How does an ANOVA test work
  • Assumptions of ANOVA
  • Performing a one-way ANOVA
  • Interpreting the results
  • Post-hoc testing
  • Reporting the results of ANOVA
  • Other interesting articles
  • Frequently asked questions about one-way ANOVA

Use a one-way ANOVA when you have collected data about one categorical independent variable and one quantitative dependent variable . The independent variable should have at least three levels (i.e. at least three different groups or categories).

ANOVA tells you if the dependent variable changes according to the level of the independent variable. For example:

  • Your independent variable is social media use , and you assign groups to low , medium , and high levels of social media use to find out if there is a difference in hours of sleep per night .
  • Your independent variable is brand of soda , and you collect data on Coke , Pepsi , Sprite , and Fanta to find out if there is a difference in the price per 100ml .
  • Your independent variable is type of fertilizer, and you treat crop fields with mixtures 1, 2, and 3 to find out if there is a difference in crop yield.

The null hypothesis ( H 0 ) of ANOVA is that there is no difference among group means. The alternative hypothesis ( H a ) is that at least one group differs significantly from the overall mean of the dependent variable.

If you only want to compare two groups, use a t test instead.


ANOVA determines whether the groups created by the levels of the independent variable are statistically different by calculating whether the means of the treatment levels are different from the overall mean of the dependent variable.

If any of the group means is significantly different from the overall mean, then the null hypothesis is rejected.

ANOVA uses the F test for statistical significance . This allows for comparison of multiple means at once, because the error is calculated for the whole set of comparisons rather than for each individual two-way comparison (which would happen with a t test).

The F test compares the variance between the group means with the variance within the groups. If the variance within groups is smaller than the variance between groups, the F test will yield a higher F value, and therefore a higher likelihood that the observed difference is real and not due to chance.

The assumptions of the ANOVA test are the same as the general assumptions for any parametric test:

  • Independence of observations : the data were collected using statistically valid sampling methods , and there are no hidden relationships among observations. If your data fail to meet this assumption because you have a confounding variable that you need to control for statistically, use an ANOVA with blocking variables.
  • Normally-distributed response variable : The values of the dependent variable follow a normal distribution .
  • Homogeneity of variance : The variation within each group being compared is similar for every group. If the variances are different among the groups, then ANOVA probably isn’t the right fit for the data.

While you can perform an ANOVA by hand , it is difficult to do so with more than a few observations. We will perform our analysis in the R statistical program because it is free, powerful, and widely available. For a full walkthrough of this ANOVA example, see our guide to performing ANOVA in R .

The sample dataset from our imaginary crop yield experiment contains data about:

  • fertilizer type (type 1, 2, or 3)
  • planting density (1 = low density, 2 = high density)
  • planting location in the field (blocks 1, 2, 3, or 4)
  • final crop yield (in bushels per acre).

This gives us enough information to run various different ANOVA tests and see which model is the best fit for the data.

For the one-way ANOVA, we will only analyze the effect of fertilizer type on crop yield.

Sample dataset for ANOVA

After loading the dataset into our R environment, we can use the command aov() to run an ANOVA. In this example we will model the differences in the mean of the response variable , crop yield, as a function of type of fertilizer.


To view the summary of a statistical model in R, use the summary() function.
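A minimal sketch of both calls (the object, column, and data-frame names here are placeholders that depend on how the dataset was imported):

    # One-way ANOVA: crop yield modelled as a function of fertilizer type
    crop.data$fertilizer <- as.factor(crop.data$fertilizer)  # treat fertilizer as categorical
    one.way <- aov(yield ~ fertilizer, data = crop.data)
    summary(one.way)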

The summary of an ANOVA test (in R) looks like this:

One-way ANOVA summary

The ANOVA output provides an estimate of how much of the variation in the dependent variable can be explained by the independent variable.

  • The first column lists the independent variable along with the model residuals (aka the model error).
  • The Df column displays the degrees of freedom for the independent variable (the number of levels in the variable minus 1) and the degrees of freedom for the residuals (the total number of observations minus 1, minus the degrees of freedom used by each independent variable).
  • The Sum Sq column displays the sum of squares (a.k.a. the total variation) between the group means and the overall mean explained by that variable. The sum of squares for the fertilizer variable is 6.07, while the sum of squares of the residuals is 35.89.
  • The Mean Sq column is the mean of the sum of squares, which is calculated by dividing the sum of squares by the degrees of freedom.
  • The F value column is the test statistic from the F test: the mean square of each independent variable divided by the mean square of the residuals. The larger the F value, the more likely it is that the variation associated with the independent variable is real and not due to chance.
  • The Pr(>F) column is the p value of the F statistic. This shows how likely it is that the F value calculated from the test would have occurred if the null hypothesis of no difference among group means were true.

Because the p value of the independent variable, fertilizer, is statistically significant ( p < 0.05), it is likely that fertilizer type does have a significant effect on average crop yield.

ANOVA will tell you if there are differences among the levels of the independent variable, but not which differences are significant. To find how the treatment levels differ from one another, perform a TukeyHSD (Tukey’s Honestly-Significant Difference) post-hoc test.

The Tukey test runs pairwise comparisons among each of the groups, and uses a conservative error estimate to find the groups which are statistically different from one another.
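A minimal sketch, assuming the fitted one.way model object from the earlier sketch:

    # Tukey's Honestly Significant Difference test on the fitted one-way ANOVA model
    TukeyHSD(one.way)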

The output of the TukeyHSD looks like this:

Tukey summary one-way ANOVA

First, the table reports the model being tested (‘Fit’). Next it lists the pairwise differences among groups for the independent variable.

Under the ‘$fertilizer’ section, we see the mean difference between each fertilizer treatment (‘diff’), the lower and upper bounds of the 95% confidence interval (‘lwr’ and ‘upr’), and the p value , adjusted for multiple pairwise comparisons.

The pairwise comparisons show that fertilizer type 3 has a significantly higher mean yield than both fertilizer 2 and fertilizer 1, but the difference between the mean yields of fertilizers 2 and 1 is not statistically significant.

When reporting the results of an ANOVA, include a brief description of the variables you tested, the  F value, degrees of freedom, and p values for each independent variable, and explain what the results mean.

If you want to provide more detailed information about the differences found in your test, you can also include a graph of the ANOVA results , with grouping letters above each level of the independent variable to show which groups are statistically different from one another:

One-way ANOVA graph

If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Chi square test of independence
  • Statistical power
  • Descriptive statistics
  • Degrees of freedom
  • Pearson correlation
  • Null hypothesis

Methodology

  • Double-blind study
  • Case-control study
  • Research ethics
  • Data collection
  • Hypothesis testing
  • Structured interviews

Research bias

  • Hawthorne effect
  • Unconscious bias
  • Recall bias
  • Halo effect
  • Self-serving bias
  • Information bias

The only difference between one-way and two-way ANOVA is the number of independent variables . A one-way ANOVA has one independent variable, while a two-way ANOVA has two.

  • One-way ANOVA : Testing the relationship between shoe brand (Nike, Adidas, Saucony, Hoka) and race finish times in a marathon.
  • Two-way ANOVA : Testing the relationship between shoe brand (Nike, Adidas, Saucony, Hoka), runner age group (junior, senior, master’s), and race finishing times in a marathon.

All ANOVAs are designed to test for differences among three or more groups. If you are only testing for a difference between two groups, use a t-test instead.

A factorial ANOVA is any ANOVA that uses more than one categorical independent variable . A two-way ANOVA is a type of factorial ANOVA.

Some examples of factorial ANOVAs include:

  • Testing the combined effects of vaccination (vaccinated or not vaccinated) and health status (healthy or pre-existing condition) on the rate of flu infection in a population.
  • Testing the effects of marital status (married, single, divorced, widowed), job status (employed, self-employed, unemployed, retired), and family history (no family history, some family history) on the incidence of depression in a population.
  • Testing the effects of feed type (type A, B, or C) and barn crowding (not crowded, somewhat crowded, very crowded) on the final weight of chickens in a commercial farming operation.

In ANOVA, the null hypothesis is that there is no difference among group means. If any group differs significantly from the overall group mean, then the ANOVA will report a statistically significant result.

Significant differences among group means are calculated using the F statistic, which is the ratio of the mean sum of squares (the variance explained by the independent variable) to the mean square error (the variance left over).

If the F statistic is higher than the critical value (the value of F that corresponds with your alpha value, usually 0.05), then the difference among groups is deemed statistically significant.

Quantitative variables are any variables where the data represent amounts (e.g. height, weight, or age).

Categorical variables are any variables where the data represent groups. This includes rankings (e.g. finishing places in a race), classifications (e.g. brands of cereal), and binary outcomes (e.g. coin flips).

You need to know what type of variables you are working with to choose the right statistical test for your data and interpret your results .


Journal of Research in Educational Sciences


Reporting and Interpreting One-Way Analysis of Variance (ANOVA) Using a Data-Driven Example: A Practical Guide for Social Science Researchers

  • Simon NTUMI, University of Education, Winneba, Ghana (West Africa)

One-way (between-groups) analysis of variance (ANOVA) is a statistical procedure used to analyse variation in a response variable (a continuous random variable) measured under conditions defined by discrete factors (classification variables, often with nominal levels). The tool is used to detect differences in the means of three or more independent groups: it compares the means of the samples or groups in order to make inferences about the population means, and it can be construed as an extension of the independent t-test. Given the omnibus nature of ANOVA, it appears that many researchers in the social sciences and related fields have difficulty reporting and interpreting ANOVA results in their studies. This paper provides detailed processes and steps showing how researchers can practically analyse and interpret ANOVA in their research work. In applying ANOVA, a researcher must first formulate the null and, where appropriate, the alternative hypothesis. After the data have been gathered and cleaned, the researcher must check the statistical assumptions to see whether the data meet them. The researcher then performs the necessary computations and obtains the F-ratio (the ANOVA result) using statistical software. Finally, the researcher compares the calculated F-ratio with the critical (table) value, or simply examines the p-value against the established alpha. If the calculated F-ratio is greater than the table value, the null hypothesis is rejected and the alternative hypothesis is upheld.



Open access | Published: 16 March 2018

An ANOVA approach for statistical comparisons of brain networks

Daniel Fraiman (ORCID: orcid.org/0000-0002-0482-9137) & Ricardo Fraiman

Scientific Reports, volume 8, Article number: 4746 (2018)


Subjects: Data processing, Statistical methods

The study of brain networks has developed extensively over the last couple of decades. By contrast, techniques for the statistical analysis of these networks are less developed. In this paper, we focus on the statistical comparison of brain networks in a nonparametric framework and discuss the associated detection and identification problems. We tested network differences between groups with an analysis of variance (ANOVA) test we developed specifically for networks. We also propose and analyse the behaviour of a new statistical procedure designed to identify different subnetworks. As an example, we show the application of this tool to resting-state fMRI data obtained from the Human Connectome Project. We identify, among other variables, that the amount of sleep in the days before the scan is a relevant variable that must be controlled. Finally, we discuss the potential bias in neuroimaging findings that is generated by some behavioural and brain-structure variables. Our method can also be applied to other kinds of networks, such as protein interaction networks, gene networks, or social networks.

Introduction

Understanding how individual neurons, groups of neurons and brain regions connect is a fundamental issue in neuroscience. Imaging and electrophysiology have allowed researchers to investigate this issue at different brain scales. At the macroscale, the study of brain connectivity is dominated by MRI, which is the main technique used to study how different brain regions connect and communicate. Researchers use different experimental protocols in an attempt to describe the true brain networks of individuals with disorders as well as those of healthy individuals. Understanding resting state networks is crucial for understanding modified networks, such as those involved in emotion, pain, motor learning, memory, reward processing, and cognitive development, among others. Comparing brain networks accurately can also lead to the precise early diagnosis of neuropsychiatric and neurological disorders 1 , 2 . Rigorous mathematical methods are needed to conduct such comparisons.

Currently, the two main techniques used to measure brain networks at the whole brain scale are Diffusion Tensor Imaging (DTI) and resting-state functional magnetic resonance imaging (rs-fMRI). In DTI, large white-matter fibres are measured to create a connectional neuroanatomy brain network, while in rs-fMRI, functional connections are inferred by measuring the BOLD activity at each voxel and creating a whole brain functional network based on functionally-connected voxels (i.e., those with similar behaviour). Despite technical limitations, both techniques are routinely used to provide a structural and dynamic explanation for some aspects of human brain function. These magnetic resonance neuroimages are typically analysed by applying network theory 3 , 4 , which has gained considerable attention for the analysis of brain data over the last 10 years.

The space of networks with as few as 10 nodes (brain regions) contains as many as 10^13 different networks. Thus, one can imagine the number of networks if one analyses brain network populations (e.g. healthy and unhealthy) with, say, 1000 nodes. However, most studies currently report data with few subjects, and the neuroscience community has recently begun to address this issue 5, 6, 7 and question the reproducibility of such findings 8, 9, 10. In this work, we present a tool for comparing samples of brain networks. This study contributes to a fast-growing area of research: network statistics of network samples 11, 12, 13, 14.

We organized the paper as follows: In the Results section, we first present a discussion about the type of differences that can be observed when comparing brain networks. Second, we present the method for comparing brain networks and identifying network differences that works well even with small samples. Third, we present an example that illustrates in greater detail the concept of comparing networks. Next, we apply the method to resting-state fMRI data from the Human Connectome Project and discuss the potential biases generated by some behavioural and brain structural variables. Finally, in the Discussion section, we discuss possible improvements, the impact of sample size, and the effects of confounding variables.

Preliminaries

Most studies that compare brain networks (e.g., in healthy controls vs. patients) try to identify the subnetworks, hubs, modules, etc. that are affected in the particular disease. There is a widespread belief (largely supported by data) that the brain network modifications induced by the factor studied (disease, age, sex, stimulus) are specific . This means that the factor will similarly affect the brains of different people.

On the other hand, labeled networks can be modified in many different ways while preserving the nodes, and these modifications can be grouped into three categories. In the first, called here localized modifications, specific, identifiable links are changed by the factor. In the second, called unlocalized modifications, some links change, but the changed links differ among subjects. For example, the degree of interconnection of some nodes may decrease/increase by 50%, but in some individuals this happens in the frontal lobe, in others in the right parietal lobe or the occipital lobe, and so on. In this case, the localization of the links/nodes affected by the factor can be considered random. In the third category, called here global modifications, some links (not the same across subjects) are changed, and these changes produce a global alteration of the network. For example, they can notably decrease/increase the average path length, the average degree, or the number of modules, or simply produce more heterogeneous networks in a population of homogeneous ones. This last category is similar to the unlocalized modifications case, except that here an important global change in the network occurs.

In all cases, there are changes in the links influenced by the "factor", while the nodes are fixed. How to detect whether any of these changes have occurred (hereinafter called detection) is one of the main challenges of this work. Once their occurrence has been determined, we aim to identify where they occurred (hereinafter called identification). The difficulty lies in statistically asserting that the factor produced true modifications in the huge space of labeled networks. We aim to detect all three types of network modifications. Clearly, as is always true in statistics, more precise methods can be proposed when the hypotheses regarding the data are more specific (e.g., that the differences belong to the global modifications category). However, this last approach requires making many more assumptions about the brain's behaviour. Such assumptions are generally unverifiable; for this reason, we use a nonparametric approach, following the adage "less is more", which is often very useful in statistics. For the detection problem, we developed an analysis of variance (ANOVA) test specifically for networks. As is well known, ANOVA is designed to test differences among the means of subpopulations, and one may observe that populations with equal means have different distributions. However, we propose a definition of the mean that will differ in the presence of any of the three modification categories mentioned above. As is well known, the identification stage is computationally far more complicated, and we address it partially by looking at the subset of links or the subnetwork that presents the largest differences between groups.

Network Theory Framework

A network (or graph), denoted by G = (V, E), is an object described by a set V of nodes (vertices) and a set E ⊂ V × V of links (edges) between them. In what follows, we consider families of networks defined over the same fixed finite set of n nodes (brain regions). A network is completely described by its adjacency matrix A ∈ {0, 1}^(n × n), where A(i, j) = 1 if and only if the link (i, j) ∈ E. If the matrix A is symmetric, then the graph is undirected; otherwise, we have a directed graph.

Let us suppose we are interested in studying the brain network of a given population, where brain networks most likely differ from each other to some extent. If we randomly choose a person from this population and study his/her brain network, what we obtain is a random network. This random network, G, will have a given probability of being network G1, another probability of being network G2, and so on up to \({G}_{\tilde{n}}\). Therefore, a random network is completely characterized by its probability law,

\(P({\bf{G}}={G}_{k})={p}_{k}\), k = 1, …, \(\tilde{n}\), with \({\sum }_{k}{p}_{k}=1\).   (1)

Likewise, a random variable is also completely characterized by its probability law. In this case, the most common test for comparing many subpopulations is the analysis of variance test (ANOVA). This test rejects the null hypothesis of equal means if the averages are statistically different. Here, we propose an ANOVA test designed specifically to compare networks.

To develop this test, we first need to specify the null assumption in terms of some notion of mean network and a statistic to base the test on. We only have at hand two main tools for that: the adjacency matrices of the networks and a notion of distance between networks.

The first step in comparing networks is to define a distance or metric between them. Given two networks G1, G2, we consider the most classical distance, the edit distance 15, defined as

\(d({G}_{1},{G}_{2})={\sum }_{i < j}|{A}_{{G}_{1}}(i,j)-{A}_{{G}_{2}}(i,j)|\).   (2)

This distance corresponds to the minimum number of links that must be added or removed to transform G1 into G2 (i.e. the number of links that differ), and is the L1 distance between the two adjacency matrices. We will also use equation (2) for the case of weighted networks, i.e. for matrices with A(i, j) taking values between 0 and 1. It is important to mention that the results presented here remain valid under other metrics 16, 17, 18.

Next, we consider the average weighted network - hereinafter called the average network - defined as the network whose adjacency matrix is the average of the adjacency matrices in the sample of networks. More precisely, we consider the following definitions.

Definition 1

Given a sample of networks {G1, …, Gl} with the same distribution as the random network G:

The average network \( {\mathcal M} \) is the network that has as adjacency matrix the average of the adjacency matrices,

\({A}_{ {\mathcal M} }(i,j)=\frac{1}{l}{\sum }_{k=1}^{l}{A}_{{G}_{k}}(i,j)\),   (3)

which in terms of the population version corresponds to the mean matrix \( {\mathcal M} (i,\,j)={\mathbb{E}}({A}_{{\bf{G}}}(i,\,j))=:{p}_{ij}\).

The average distance around a graph H is defined as

\({\bar{d}}_{G}(H)=\frac{1}{l}{\sum }_{k=1}^{l}d({G}_{k},H)\),   (4)

which corresponds to the mean population distance \({\mathbb{E}}(d({\bf{G}},H))\).

With these definitions in mind, the natural way to define a measure of network variability is

\(\sigma ={\bar{d}}_{G}( {\mathcal M} )\),

which measures the average distance (variability) of the networks around the average weighted network.

Given m subpopulations G 1 , …, G m the null assumption for our ANOVA test will be that the means of the m subpopulations \({\tilde{{ {\mathcal M} }}}_{1},\,\ldots ,\,{\tilde{{ {\mathcal M} }}}_{m}\) are the same. The test statistic will be based on a normalized version of the sum of the differences between \({\bar{d}}_{{G}^{i}}({{ {\mathcal M} }}_{i})\) and \({\bar{d}}_{G}({{ {\mathcal M} }}_{i})\) , where \({\bar{d}}_{{G}^{i}}\) and \({\bar{d}}_{G}\) are calculated according to (4) using the i –sample and the pooled sample respectively. This is developed in more detail in the next section.
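As a rough illustration of these definitions (not the authors' code), the edit distance, the average network, and the variability can be computed directly from a list of adjacency matrices; a minimal R sketch for undirected networks on a common set of nodes:

    # Edit distance between two undirected networks given as adjacency matrices:
    # the number of differing links, counting each link once (upper triangle)
    edit.dist <- function(A1, A2) sum(abs(A1 - A2)[upper.tri(A1)])

    # Average (weighted) network of a sample: entrywise mean of the adjacency matrices
    average.network <- function(nets) Reduce(`+`, nets) / length(nets)

    # Variability: mean edit distance of the sample networks around the average network
    variability <- function(nets) {
      M <- average.network(nets)
      mean(sapply(nets, edit.dist, A2 = M))
    }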

Detecting and identifying network differences

Now we address the testing problem. Let \({G}_{1}^{1},{G}_{2}^{1},\ldots ,{G}_{{n}_{1}}^{1}\) denote the networks from subpopulation 1, \({G}_{1}^{2},{G}_{2}^{2},\ldots ,{G}_{{n}_{2}}^{2}\) the ones from subpopulation 2, and so on until \({G}_{1}^{m},{G}_{2}^{m},\ldots ,{G}_{{n}_{m}}^{m}\) the networks of subpopulation m . Let G 1 , G 2 , …, G n denote, without superscript, the complete pooled sample of networks, where \(n={\sum }_{i\mathrm{=1}}^{m}{n}_{i}\) . And finally, let \({{ {\mathcal M} }}_{i}\) and σ i denote the average network and the variability of the i -subpopulation of networks. We want to test (H 0 )

\({{ {\mathcal M} }}_{1}={{ {\mathcal M} }}_{2}=\cdots ={{ {\mathcal M} }}_{m}\), that is, that all the subpopulations have the same mean network, under the alternative that at least one subpopulation has a different mean network.

It is interesting to note that for objects that are networks, the average network (\({ {\mathcal M} }\)) and the variability (σ) are not independent summary measures. In fact, the relationship between them is given by

\(\sigma ={\sum }_{i < j}2\,{p}_{ij}(1-{p}_{ij})\).

Therefore, the proposed test can also be considered a test for equal variability. The proposed statistic for testing the null hypothesis is:

where a is a normalization constant given in Supplementary Information  1.3 . This statistic measures the difference between the network variability of each specific subpopulation and the average distance between all the populations and the specific average network. Theorem 1 states that under the null hypothesis (items (i) and (ii)) T is asymptotically Normal(0, 1), and if H 0 is false (item (iii)) T will be smaller than some negative constant c . This specific value is obtained by the following theorem (see the Supplementary Information  1 for the proof).

Theorem 1. Under the null hypothesis, the T statistic fulfills (i) and (ii), while T is sensitive to the alternative hypothesis and (iii) holds true.

(i) \({\mathbb{E}}(T)=0\).

(ii) T is asymptotically (K := min{n1, n2, …, nm} → ∞) Normal(0, 1).

(iii) Under the alternative hypothesis, T will be smaller than any negative value if K is large enough (the test is consistent).

This theorem provides a procedure for testing whether two or more groups of networks are different. Although having a procedure like the one described is important, we not only want to detect network differences, we also want to identify the specific network changes or differences. We discuss this issue next.

Identification

Let us suppose that the ANOVA test for networks rejects the null hypothesis, and now the main goal is to identify the network differences. Two main objectives are discussed:

Identification of all the links that show statistical differences between groups.

Identification of a set of nodes (a subnetwork) that present the highest network differences between groups.

The identification procedure we describe below aims to eliminate the noise (links or nodes without differences between subpopulations) while keeping the signal (links or nodes with differences between subpopulations).

Given a network G = ( V , E ) and a subset of links \(\tilde{E}\subset E\) , let us generically denote \({G}_{\tilde{E}}\) the subnetwork with the same nodes but with links identified by the set \(\tilde{E}\) . The rest of the links are erased. Given a subset of nodes \(\tilde{V}\subset V\) let us denote \({G}_{\tilde{V}}\) the subnetwork that only has the nodes (with the links between them) identified by the set \(\tilde{V}\) . The T statistic for the sample of networks with only the set of \(\tilde{E}\) links is denoted by \({T}_{\tilde{E}}\) , and the T statistic computed for all the sample networks with only the nodes that belong to \(\tilde{V}\) is denoted by \({T}_{\tilde{V}}\) .

The procedure we propose for identifying all the links that show statistical differences between groups is based on the minimization of \({T}_{\tilde{E}}\) over \(\tilde{E}\subset E\). The set of links \(\bar{E}\), defined by

\(\bar{E}=\mathop{\mathrm{argmin}}_{\tilde{E}\subset E}\,{T}_{\tilde{E}}\),

contains all the links that show statistical differences between subpopulations. One limitation of this identification procedure is that the search space is huge (\(\#E={2}^{n(n-1)/2}\), where n is the number of nodes) and an efficient algorithm is needed to find the minimum. That is why we focus on identifying a group of nodes (or a subnetwork) expressing the largest differences.

The procedure proposed for identifying the subnetwork with the highest statistical differences between groups is similar to the previous one. It is based on the minimization of \({T}_{\tilde{V}}\). The set of nodes N, defined by

\(N=\mathop{\mathrm{argmin}}_{\tilde{V}\subset V}\,{T}_{\tilde{V}}\),

contains all relevant nodes. These nodes make up the subnetwork with the largest difference between groups. In this case, the complexity is smaller, since the search space is not so big (\(\#V={2}^{n}-n-1\)).

As in other well-known statistical procedures, such as cluster analysis or variable selection in regression models, finding the size \(\tilde{j}:=\#N\), the number of nodes in the true subnetwork, is a difficult problem due to possible overestimation caused by noisy data. The advantage of knowing \(\tilde{j}\) is that it reduces the computational complexity of finding the minimum to the order of \({n}^{\tilde{j}}\) instead of \({2}^{n}\), which a search over all possible sizes would require. However, the problem in our setup is less severe than in other cases, since the objective function (\({T}_{\tilde{V}}\)) is not monotonic when the size of the space increases. To solve this problem, we suggest the following algorithm.

Let \({V}_{\{j\}}\) be the space of networks with j distinguishable nodes, j ∈ {2, 3, …, n}, and \(V=\mathop{\cup }\limits_{j}{V}_{\{j\}}\). The nodes \({N}_{j}\), given by

\({N}_{j}=\mathop{\mathrm{argmin}}_{\tilde{V}\in {V}_{\{j\}}}\,{T}_{\tilde{V}}\),

define a subnetwork. In order to find the true subnetwork with differences between the groups, we now study the sequence \({T}_{2},{T}_{3},\ldots ,{T}_{n}\), where \({T}_{j}:={T}_{{N}_{j}}\). We continue the search (increasing j) until we find \(\tilde{j}\) fulfilling

where g is a positive function that decreases together with the sample size (in practice, a real value). \({N}_{\tilde{j}}\) are the nodes that make up the subnetwork with the largest differences among the groups or subpopulations studied.

It is important to mention that the procedures described above do not impose any assumption regarding the real connectivity differences between the populations. With additional hypotheses, the procedure can be improved. For instance, in 14, 19 the authors proposed a methodology for the edge-identification problem that is powerful only when the real connectivity differences between the populations form a single large connected component.

Examples and Applications

A relevant problem in the current neuroimaging research agenda is how to compare populations based on their brain networks. The ANOVA test presented above deals with this problem. Moreover, the ANOVA procedure allows the identification of the variables related to brain network structure. In this section, we show an example and an application of this procedure in neuroimaging (EEG, MEG, fMRI, eCoG). In the example, we show the robustness of the procedures for testing and identification under different sample sizes. In the application, we analyze fMRI data to understand which variables in the dataset are dependent on brain network structure. Identifying these variables is also very important, because any fair comparison between two or more populations requires that these variables be controlled (i.e., have similar values).

Let us suppose we have three groups of subjects with equal sample size, K , and the brain network of each subject is studied using 16 regions (electrodes or voxels). Studies show connectivity between certain brain regions is different in certain neuropathologies, in aging, under the influence of psychedelic drugs, and more recently, in motor learning 20 , 21 . Recently, we have shown that a simple way to study connectivity is by what the physics community calls “the correlation function” 22 . This function describes the correlation between regions as a function of the distance between them. Although there exist long range connections, on average, regions (voxels or electrodes) closer to each other interact strongly, while distant ones interact more weakly. We have shown that the way in which this function decays with distance is a marker of certain diseases 23 , 24 , 25 . For example, patients with a traumatic brachial plexus lesion with root avulsions revealed a faster correlation decay as a function of distance in the primary motor cortex region corresponding to the arm 24 .

Next we present a toy model that analyses the method's performance. In a network context, the behaviour described above can be modeled in the following way: since the probability that two regions are connected is a monotonic function of the correlation between them (i.e. on average, distant regions share fewer links than nearby regions), we skip the correlations and directly model the link probability as an exponential function that decays with distance. We assume that the probability that region i is connected with region j is defined as

\(P(i\leftrightarrow j)={e}^{-\lambda d(i,j)}\),

where d ( i , j ) is the distance between regions i and j . For the alternative hypothesis, we consider that there are six frontal brain regions (see Fig.  1 Panel A) that interact with a different decay rate in each of the three subpopulations. Figure  1 panel (A) shows the 16 regions analysed on an x-y scale. Panel (B) shows the link probability function for all electrodes and for each subpopulation. As shown, there is a slight difference between the decay of the interactions between the frontal electrodes in each subpopulation ( λ 1 = 1, λ 2 = 0.8 and λ 3 = 0.6 for groups 1, 2 and 3, respectively). The aim is to determine whether the ANOVA test for networks detects the network differences that are induced by the link probability function.
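A rough R sketch of this generative model (the electrode coordinates below are arbitrary placeholders, not the layout of Fig. 1):

    # Simulate one random network whose link probability decays exponentially
    # with distance: P(i <-> j) = exp(-lambda * d(i, j))
    simulate.network <- function(coords, lambda = 1) {
      n <- nrow(coords)
      D <- as.matrix(dist(coords))   # pairwise distances between regions
      P <- exp(-lambda * D)          # link probabilities
      A <- matrix(0, n, n)
      A[upper.tri(A)] <- rbinom(n * (n - 1) / 2, 1, P[upper.tri(P)])
      A + t(A)                       # symmetric adjacency matrix, no self-loops
    }

    # Example: 16 regions placed on a 4 x 4 grid (placeholder coordinates)
    coords <- expand.grid(x = 1:4, y = 1:4)
    net <- simulate.network(coords, lambda = 1)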

Figure 1. Detection problem. (A) Diagram of the scalp (each node represents an EEG electrode) on an x-y scale and the link probability. The three groups satisfy the equation P(○ ↔ •) = P(• ↔ •) = e^(−d). (B) Link probability of frontal electrodes, P(○ ↔ ○), as a function of the distance for the three subpopulations. (C) Power of the tests as a function of sample size, K. Both tests are presented.

Here we investigated the power of the proposed test by simulating the model under different sample sizes ( K ). K networks were computed for each of the three subpopulations and the T statistic was computed for each of 10,000 replicates. The proportion of replicates with a T value smaller than −1.65 is an estimation of the power of the test for a significance level of 0.05 (unilateral hypothesis testing). Star symbols in Fig.  1C represent the power of the test for the different sample sizes. For example, for a sample size of 100, the test detects this small difference between the networks 100% of the time. As expected, the test has less power for small sample sizes, and if we change the values λ 2 and λ 3 in the model to 0.66 and 0.5, respectively, power increases. In this last case, the power changed from 64% to 96% for a sample size of 30 (see Supplementary Fig.  S1 for the complete behaviour).

To the best of our knowledge, the T statistic is the first proposal of an ANOVA test for networks. Thus, here we compare it with a naive test in which each individual link is compared among the subpopulations. The naive test works as follows: for each link, we calculate a test for equal proportions between the three groups to obtain a p-value. Since we are conducting multiple comparisons, we apply the Benjamini-Hochberg procedure, controlling at a significance level of α = 0.05, as follows:

1. Compute the p-value of each link comparison, pv 1 , pv 2 , …, pv m .

2. Find the largest j such that \(p{v}_{(j)}\le \frac{j}{m}\alpha \mathrm{.}\)

3. Declare that the link probability is different for all links that have a p-value ≤  pv ( j ) .

This procedure detects differences in the individual links while controlling for multiple comparisons. Finally, we consider the networks as being different if at least one link (of the 15 that have real differences) was detected to have significant differences. We will call this procedure the “Links Test”. Crosses in Fig.  1C correspond to the power of this test as a function of the sample size. As can be observed, the test proposed for testing equal mean networks is much more powerful than the previous test.
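The Benjamini-Hochberg step itself is available in base R; a minimal sketch assuming a vector pv of per-link p-values (placeholder values):

    pv <- runif(120)                       # placeholder p-values, one per link
    pv.adj <- p.adjust(pv, method = "BH")  # Benjamini-Hochberg adjustment
    sig.links <- which(pv.adj <= 0.05)     # links declared different between the groups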

Theorem 1 states that T is asymptotically (sample size → ∞) Normal(0, 1) under the null hypothesis. Next we investigated how large the sample size must be for this approximation to be good. In the simulations above we applied Theorem 1 for K = {30, 50, 70, 100}, but we did not show that the approximation is valid for, say, K = 30. Here, we show that the normal approximation is valid even for K = 30 in the case of 16-node networks. We simulated 10,000 replicates of the model assuming that all three groups have exactly the same probability law as group 1, i.e. all brain connections satisfy \(P(i\leftrightarrow j)={e}^{-{\lambda }_{1}d(i,j)}\) for the three groups (the H 0 hypothesis). The T value was computed for each replicate of sample size K = 30, and the distribution is shown in Fig. 2(A). The histogram shows that the distribution is very close to normal. Moreover, the Kolmogorov-Smirnov test did not reject the hypothesis of a normal distribution for the T statistic (p-value = 0.52). For sample sizes smaller than 30, the distribution has a larger variance; for example, for K = 10, the standard deviation of T is 1.1 instead of 1 (see Supplementary Fig. S2). This deviation from the normal distribution can also be observed in Panel B, which shows the percentage of Type I errors as a function of the sample size (K). For sample sizes smaller than 30, this percentage is slightly greater than 5%, consistent with a variance greater than 1. The Links Test procedure yielded a Type I error percentage smaller than 5% for small sample sizes.
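The calibration check under H0 can be reproduced with a sketch like the one below, reusing the placeholder helpers from the previous snippets: simulate all three groups from the same law, collect the T values, and compare them with a standard normal.

```python
import numpy as np
from scipy.stats import kstest

def check_null_calibration(draw_h0_groups, anova_network_T, n_rep=10_000):
    """Empirical null distribution of T plus two calibration summaries:
    the Kolmogorov-Smirnov p-value against N(0, 1) and the empirical
    Type I error rate of the one-sided 0.05-level test."""
    t_vals = np.array([anova_network_T(draw_h0_groups()) for _ in range(n_rep)])
    ks_pvalue = kstest(t_vals, 'norm').pvalue     # H0 distribution vs. standard normal
    type1_rate = float(np.mean(t_vals < -1.65))   # should be close to 0.05
    return t_vals, ks_pvalue, type1_rate
```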

figure 2

Null hypothesis. ( A ) Histogram of T statistics for K = 30. ( B ) Percentage of Type I Error as a function of sample size, K . Both tests are presented.

Finally, we applied the subnetwork identification procedure described above to this example. Fifty simulations were performed for the model with a sample size of K = 100. For each replication, the minimum statistic T j was studied as a function of the number of nodes j in the subnetwork. Figure 3A and B show, for two of the 50 simulations, T j as a function of the number of nodes j. Panel A shows that as nodes are incorporated into the subnetwork the statistic decreases sharply up to six nodes, incorporating further nodes produces only a very small decay in T j between six and nine nodes, and adding even more nodes makes the statistic increase again. A similar behaviour is observed in the simulation shown in Panel B, but the “change point” appears at five nodes. If we define the number of nodes with differences, \(\tilde{j}\), as the value satisfying

we obtain the circled values. For each of the 50 simulations we recorded \(\tilde{j}\); a histogram of the results is shown in Panel C. With this criterion, most of the simulations (85%) yield a subnetwork of 6 nodes, as expected. Moreover, these 6 nodes correspond to the real subnetwork with differences between subpopulations (white nodes in Fig. 1A); this was the case in 100% of the simulations with \(\tilde{j}\) = 6 (blue circles in Panel D). In the simulations where \(\tilde{j}\) = 5, five of the six true nodes were identified, and which five were identified varied between simulations (grey circles in Panel D). In the simulations where \(\tilde{j}\) = 7, all six real nodes were identified together with one spurious node (grey circle) that varied between simulations.

figure 3

Identification problem. ( A , B ) Statistic T j as a function of the number of nodes of the subnetwork ( j ) for two simulations. Blue circles represent the value \(\tilde{j}\) following the criteria described in the text. ( C ) Histogram of the number of subnetwork nodes showing differences, \(\tilde{j}\) . ( D ) Identification of the nodes. Blue and grey circles represent the nodes identified from the set \({N}_{\tilde{j}}\) . Circled blue nodes are those identified 100% of the time. Grey circles represent nodes that are identified some of the time. On the left, grey circles alternate between the six white nodes. On the right, the grey circle alternates between the black nodes.

The identification procedure was also studied for a smaller sample size of K = 30; in this case, the real subnetwork was identified only 28% of the time (see Supplementary Fig. S3 for more details). Identifying the correct subnetwork is more demanding (larger sample sizes are needed) than detecting global differences between the group networks.

Resting-state fMRI functional networks

In this section, we analysed resting-state fMRI data from the 900-subject 2015 release of the Human Connectome Project (HCP 26 ). We included data from the 812 healthy participants who had four complete 15-minute rs-fMRI runs, for a total of one hour of brain activity. We partitioned the 812 participants into three subgroups and studied the differences between the subgroups' brain networks. Clearly, if the participants are divided into groups at random, no differences are expected, but if the participants are divided in an intentional way, differences may appear. For example, if we divide the 812 participants by the number of hours slept before the scan (G 1 : less than 6 hours, G 2 : between 6 and 7 hours, and G 3 : more than 7 hours), differences in brain connectivity on the day of the scan might be expected 27 , 28 . Moreover, as a by-product, we learn that this variable is an important factor to control before the scan. Fortunately, HCP provides rich individual socio-demographic, behavioural and structural brain data to facilitate this analysis. Using a previous release of the HCP data (461 subjects), Smith et al . 29 showed with a multivariate analysis (canonical correlation) that a linear combination of demographic and behavioural variables correlates strongly with a linear combination of functional interactions between brain parcellations (obtained by Independent Component Analysis). Our approach has the same spirit but differs in several respects. Our main objective is to identify variables that “explain” (that are statistically dependent on) the individual brain network. We do not impose a linear relationship between non-imaging and imaging variables, we study the brain network as a whole object without assigning different “loads” to each edge, and the method detects both linear and non-linear dependence structures.

Data were pre-processed by HCP 30 , 31 , 32 (details can be found in 30 ), yielding the following outputs:

Group-average brain regional parcellations obtained by means of group-Independent Component Analysis (ICA 33 ). Fifteen components are described.

Subject-specific time series per ICA component.

Figure 4(A) shows three of the 15 ICA components with the corresponding one-hour time series for a particular subject. These signals were used to construct an association matrix between pairs of ICA components for each subject. This matrix represents the strength of the association between each pair of components, which can be quantified by different functional coupling metrics; here we adopted the Pearson correlation coefficient between the component signals (Panel (B)). For each of the 812 subjects, we studied functional connectivity by transforming each correlation matrix, Σ, into a binary matrix or network, G (Panel (C)). Two criteria for this transformation were used 34 , 35 , 36 : a fixed correlation threshold and a fixed number of links. In the first, the matrix is thresholded at a value ρ, yielding networks with varying numbers of links. In the second, the number of links is fixed and a subject-specific threshold is chosen accordingly.
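Both network-construction criteria reduce to a few lines of code. The sketch below assumes each subject's ICA time series are stored as a (components × time points) array; `np.corrcoef` gives the correlation matrix Σ, and the two helpers implement the fixed-ρ and fixed-number-of-links rules for the positive-threshold case.

```python
import numpy as np

def correlation_matrix(time_series):
    """time_series: array of shape (n_components, n_timepoints)."""
    return np.corrcoef(time_series)

def threshold_by_rho(sigma, rho):
    """Fixed correlation threshold: keep links whose correlation exceeds rho."""
    adj = (sigma > rho).astype(int)
    np.fill_diagonal(adj, 0)
    return adj

def threshold_by_links(sigma, n_links):
    """Fixed number of links: keep the n_links strongest correlations,
    which amounts to a subject-specific threshold."""
    n = sigma.shape[0]
    rows, cols = np.triu_indices(n, k=1)
    keep = np.argsort(sigma[rows, cols])[::-1][:n_links]   # strongest pairs first
    adj = np.zeros((n, n), dtype=int)
    adj[rows[keep], cols[keep]] = 1
    return adj + adj.T
```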

figure 4

( A ) ICA components and their corresponding time series. ( B ) Correlation matrix of the time series. ( C ) Network representation. The links correspond to the nine highest correlations.

As already mentioned, HCP provides rich individual socio-demographic, behavioural and structural brain data. Variables are grouped into seven main categories: alertness, motor response, cognition, emotion, personality, sensory, and brain anatomy. Volumes, thicknesses and areas of different brain regions were computed from the T1-weighted images of each subject with FreeSurfer 37 . Thus, for each subject, we obtained a brain functional network, G , and a multivariate vector X containing these individual measures.

The main focus of this section is to analyse the “impact” of each of these variables ( X ) on the brain networks (i.e., on brain activity). To this end, we first selected a variable, say X k , and assigned each subject to one of three categories (Low, Medium, or High) by sorting the values in ascending order and splitting at the 33.3% percentiles. In this way, we obtained three groups of subjects, each identified by its correlation matrices \({{\rm{\Sigma }}}_{1}^{L},\,\ldots ,\,{{\rm{\Sigma }}}_{{n}_{L}}^{L}\) , \({{\rm{\Sigma }}}_{1}^{M},\,\ldots ,\,{{\rm{\Sigma }}}_{{n}_{M}}^{M}\) , and \({{\rm{\Sigma }}}_{1}^{H},\,\ldots ,\,{{\rm{\Sigma }}}_{{n}_{H}}^{H}\) , or by the corresponding networks (once the criterion and its parameter are chosen) \({G}_{1}^{L},\,\ldots ,\,{G}_{{n}_{L}}^{L},\,\,\,{G}_{1}^{M},\,\ldots ,\,{G}_{{n}_{M}}^{M}\) , and \({G}_{1}^{H},\,\ldots ,\,{G}_{{n}_{H}}^{H}\) . The sample size of each group ( n L , n M , and n H ) is approximately one third of 812, except where there are ties. Once we obtained these three sets of networks, we applied the proposed test. If differences exist between the three groups, then there is a dependence between the grouping variable and the functional networks. However, we cannot yet elucidate directionality (e.g., do different networks lead to different sleeping patterns, or vice versa?).
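The grouping step is a plain tertile split; a minimal sketch (with our own variable names) is shown below.

```python
import numpy as np

def tertile_labels(x):
    """Assign each subject to 'Low', 'Medium' or 'High' using the 33.3% and
    66.7% percentiles of variable x. Ties can make the groups slightly
    unequal, as noted in the text."""
    x = np.asarray(x, dtype=float)
    lo, hi = np.percentile(x, [100 / 3, 200 / 3])
    return np.where(x <= lo, 'Low', np.where(x <= hi, 'Medium', 'High'))

# Example: group the subjects' networks by the k-th non-imaging variable X[:, k]
# labels = tertile_labels(X[:, k])
# networks_low = [networks[s] for s in range(len(networks)) if labels[s] == 'Low']
```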

After filtering the data, we identified 221 variables with 100% complete information for the 812 subjects, and 90 other variables with almost complete information, giving a total of 311 variables. We applied the network ANOVA test to each of these 311 variables and report the T statistic. Figure 5(A) shows the T statistic for the variable Thickness of the right Inferior Parietal region. All T values lie between −2 and 2 for all ρ values when the fixed-correlation criterion is used to construct the networks (left panel), and the same occurs when the fixed-number-of-links criterion is used (right panel). According to Theorem 1, when there are no differences between groups T is asymptotically Normal(0, 1), and therefore a value smaller than −3 is very unlikely (p-value = 0.00135). Since all T values lie between −2 and 2, we conclude that Thickness of the right Inferior Parietal region is not associated with the resting-state functional interactions. Panel (B) shows the T statistic for the variable Amount of hours spent sleeping during the 30 nights prior to the scan (“During the past month, how many hours of actual sleep did you get at night? (This may be different than the number of hours you spent in bed.)”), which belongs to the alertness category. Most T values are far below −3, rejecting the hypothesis of equal mean networks. Importantly, this shows that the number of hours a person sleeps is associated with their brain functional networks (or brain activity). However, as explained above, we do not know whether the number of hours slept on the nights before represents these individuals’ habitual sleeping pattern, which complicates any effort to infer causation. In other words, six hours of sleep for an individual who habitually sleeps six hours may not produce the same network pattern as six hours for an individual who normally sleeps eight hours (and is likely tired during the scan). Alternatively, different activity during waking hours may “produce” different sleep behaviours. Nevertheless, we conclude that the number of hours slept before the scan should be measured and controlled when scanning a subject. Panel (C) shows that brain volumetric variables can also influence resting-state fMRI networks: the T values for the variable Area of the left Middle Temporal region show significant differences under both network-construction criteria.

figure 5

(A–C) T statistics as a function of (left panels) ρ and (right panels) the number of links for three variables: (A) Right Inferior Parietal thickness, (B) number of hours slept on the nights prior to the scan, (C) left Middle Temporal area. (D) W-statistic distribution (black bars) based on a bootstrap strategy; the W statistics of the three variables studied are depicted with dots.

Under the hypothesis of equal mean networks between groups, we do not expect to obtain a T statistic smaller than −3 when comparing the sample networks. We tested several different thresholds and numbers of links in order to present a more robust methodology. However, in doing so we generate sets of networks that are dependent within each criterion and between criteria, similar to what happens when studying dynamic networks with overlapping sliding windows, and this makes the statistical inference more difficult. To address this problem, we defined a new statistic based on T, denoted W 3 , and studied its distribution using the bootstrap resampling technique. The new statistic is defined as

\({W}_{3}=\,{\rm{\min }}\,\{{{\rm{\Delta }}}_{+}^{\rho },\,{{\rm{\Delta }}}_{-}^{\rho },\,{{\rm{\Delta }}}_{+}^{L},\,{{\rm{\Delta }}}_{-}^{L}\},\)
where Δ is the number of T values smaller than −3 over the resolution (grid of thresholds) studied. The superscript of Δ indicates the criterion (correlation threshold ρ, or fixed number of links L ) and the subscript indicates whether positive or negative parameter values (ρ or number of links) are considered. For example, Fig. 5(C) shows that the variable Area of the left Middle Temporal region has \({{\rm{\Delta }}}_{+}^{\rho }=10\) , \({{\rm{\Delta }}}_{-}^{\rho }=10\) , \({{\rm{\Delta }}}_{+}^{L}=9\) , and \({{\rm{\Delta }}}_{-}^{L}=9\) , and therefore W 3 = 9. The distribution of W 3 under the null hypothesis was studied numerically: ten thousand random resamplings of the real networks were drawn and the W 3 statistic was computed for each one. Figure 5(D) shows the empirical distribution of W 3 (under the null hypothesis) as black bars. Most W 3 values are zero, as expected. The W 3 values of the three variables described above are also represented by dots. The extreme W 3 values for the variables Amount of sleep and left Middle Temporal area show that these differences are not a matter of chance; both variables are related to brain network connectivity.
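Under the reading of W 3 used above (the minimum of the four Δ counts, which reproduces the worked example Δ = 10, 10, 9, 9 → W 3 = 9), the statistic and a bootstrap null for it can be sketched as follows. `t_over_grid` is a hypothetical helper that returns the four grids of T values for a given assignment of subjects to groups; treating the resampling as a random relabelling of subjects is one plausible reading of the procedure described in the text.

```python
import numpy as np

def w3(t_rho_pos, t_rho_neg, t_links_pos, t_links_neg, cut=-3.0):
    """W3 = min over the four criteria of the number of T values below -3
    (assumed reading of the display equation, consistent with the example)."""
    return min(int(np.sum(np.asarray(t) < cut))
               for t in (t_rho_pos, t_rho_neg, t_links_pos, t_links_neg))

def bootstrap_w3_null(networks, group_sizes, t_over_grid, n_boot=10_000, seed=0):
    """Null distribution of W3: reassign subjects to the three groups at random
    and recompute W3 each time."""
    rng = np.random.default_rng(seed)
    cuts = np.cumsum(group_sizes)[:-1]
    out = []
    for _ in range(n_boot):
        perm = rng.permutation(len(networks))
        groups = np.split(perm, cuts)               # random Low / Medium / High relabelling
        out.append(w3(*t_over_grid(networks, groups)))
    return np.array(out)
```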

So far we have shown, among other things, that functional networks differ between individuals who sleep more or fewer hours, but how exactly do these networks differ? Figure 6(A) shows the average networks for the three groups of subjects. There are differences in connectivity strength between some of the nodes (ICA components). These differences are more evident in Panel (B), which presents a weighted network Ψ whose links show the variability among the subpopulations' average networks. This weighted network is defined as

where \(\bar{ {\mathcal M} }(i,j)=\frac{1}{3}{\sum }_{s=1}^{3}{ {\mathcal M} }^{{\rm{grp}}\,s}(i,j)\) is the grand mean of the three subgroup mean networks. The role of Ψ is to highlight the differences between the mean networks. The greatest difference is observed between nodes 1 and 11: individuals who sleep 6.5 hours or less show the strongest connection between ICA component 1 (which corresponds to the occipital pole and the cuneal cortex in the occipital lobe) and ICA component 11 (which includes the middle and superior frontal gyri in the frontal lobe, and the superior parietal lobule and the angular gyrus in the parietal lobe). Another important connection that differs between groups is the one between ICA components 1 and 8, the latter corresponding to the anterior and posterior lobes of the cerebellum. Using the subnetwork identification procedure described previously (see Fig. 6C), we identified a 7-node subnetwork as the most significant for the network differences. The nodes that make up this subnetwork are presented in Panel D.
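A simple way to build such a variability network is to average the adjacency matrices within each subgroup and then measure, link by link, how much the three subgroup means deviate from their grand mean. The root-mean-square deviation used in the sketch below is one natural choice; the paper's exact definition of Ψ is given by its display equation, which is not reproduced here.

```python
import numpy as np

def mean_network(nets):
    """Average adjacency matrix of one subgroup (entries in [0, 1])."""
    return np.mean(np.stack(nets), axis=0)

def variability_network(nets_low, nets_mid, nets_high):
    """Per-link spread of the three subgroup mean networks around their grand
    mean (an RMS deviation, used here as a stand-in for Psi)."""
    means = np.stack([mean_network(g) for g in (nets_low, nets_mid, nets_high)])
    grand = means.mean(axis=0)                      # grand mean \bar{M}(i, j)
    return np.sqrt(np.mean((means - grand) ** 2, axis=0))
```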

figure 6

(A) Average network for each subgroup defined by hours of sleep. (B) Weighted network with links representing the differences among the subpopulation mean networks. (C) T j statistic as a function of the number of nodes in the subnetwork ( j ). The nodes identified by the minimum T j are listed in the boxes, and the number of nodes selected by the procedure is marked with a red circle. (D) Nodes from the identified subnetwork are circled in blue. The nodes identified in (D) correspond to those in Panel (B).

The results described above refer to only three of the 311 variables we analysed. Among the remaining variables, we found further variables that partition the subjects into groups with statistically different brain networks. Two more behavioural variables were identified: Dimensional Change Card Sort (CardSort_AgeAdj and CardSort_Unadj), a measure of cognitive flexibility, and motor strength (Strength_AgeAdj and Strength_Unadj). In addition, 20 different brain volumetric variables were identified; the complete list is shown in Suppl. Table S1. It is important to note that these brain volumetric variables are largely dependent on each other; for example, individuals with larger inferior-temporal areas often have a greater supratentorial volume, and so on (see Suppl. Fig. S4).

We have reported only those variables for which there is very strong statistical evidence of dependence between the functional networks and the “behavioural” variables, irrespective of the threshold used to build the networks. Other variables show this dependence only for some values of the threshold parameter, but we do not report them, to avoid reporting results that may not be significant. Our results complement those observed in 29 . In particular, Smith et al . report that the variable Picture Vocabulary test is the most significant. With a less restrictive criterion, this variable can also be considered significant with our methodology: its W 3 value equals 3 (see Supplementary Fig. S5 for details), which supports the notion (see Panel D in Fig. 5) that the Picture Vocabulary test is also relevant for explaining the functional networks. On the other hand, the variable we found to be most strongly associated ( W 3  = 9), the Amount of sleep, is not reported by Smith et al . Perhaps the canonical correlation cannot find this variable because it looks for linear correlations in a high-dimensional space, and it is well known that non-linearities typically appear in high-dimensional statistical problems (see, for instance, 38 ). To capture non-linear associations, kernel CCA methods have been introduced; see 39 , 40 and the references therein. By contrast, our method does not impose any kind of linearity and detects both linear and non-linear dependence structures. The variable “Cognitive flexibility” (Card Sort) found here was also reported in 29 . Finally, the brain volumetric variables we found to be relevant here were not analysed in 29 .

So far, we have applied the methodology presented here to the brain data using only 15 ICA dimensions (those provided by HCP). What is the impact of working with more ICA components? Do we identify more covariables? Fortunately, we can address these questions, since parcellations with more ICA dimensions were recently made available on the HCP webpage. Three new cognitive variables, Working memory , Relational processing and Self-regulation/Impulsivity , were identified at the higher network dimensions (50 and 300 ICA dimensions; see Suppl. Table S2 for details).

Performing statistical inference on brain networks is important in neuroimaging. In this paper, we presented a new method for comparing the anatomical or functional brain networks of two or more subgroups of subjects. Two problems were studied: the detection of differences between the groups and the identification of the specific network differences. For the first problem, we developed an ANOVA test based on the distance between networks; this test performed well in terms of detecting existing differences (high statistical power). For the second, based on the statistics developed for the testing problem, we proposed a way of solving the identification problem. Next, we discuss our findings.

Based on the minimization of the T statistic, we proposed a method for identifying the subnetwork that differs among the subgroups. Identifying this subnetwork is useful for two reasons: it reveals which brain regions are involved in the specific comparison (neurobiological interpretation), and it allows new subjects to be identified or diagnosed with greater accuracy.

The relationship between the minimum T value for a fixed number of nodes and the number of nodes ( T j vs. j ) is very informative. A large decrease in T j when a new node is incorporated into the subnetwork ( T j + 1 ≪ T j ) means that the new node and its connections explain much of the difference between groups. A very small decrease indicates that the new node explains little additional difference, either because the subgroup differences in the connections of the new node are small or because of overestimation.

The correct number of nodes in the subnetwork must satisfy

In this paper, we used ad hoc criteria in each example (a particular constant for g ( sample size )) and did not give a general formula for g ( sample size ). We believe this could be improved theoretically, but in practice one can define the upper bound in a natural way and then identify the subnetwork, as we showed in the example and in the application, by observing T j as a function of j . Statistical methods such as those developed for change-point detection may be useful for this problem.
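In practice, the "observe T j as a function of j and stop" rule can be written as a simple change-point scan over the T j curve: stop adding nodes once the next node improves T j by less than a chosen bound. The tolerance below plays the role of the ad hoc constant g ( sample size ) discussed above and is an assumption of this sketch, not a formula from the paper.

```python
import numpy as np

def choose_subnetwork_size(t_curve, tol):
    """t_curve[j - 1] holds T_j for subnetworks of j = 1 .. len(t_curve) nodes.
    Returns the first size after which the decrease in T_j falls below tol."""
    t = np.asarray(t_curve, dtype=float)
    for j in range(len(t) - 1):
        if t[j] - t[j + 1] < tol:    # the next node no longer explains much difference
            return j + 1             # number of nodes selected
    return len(t)
```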

Sample size

What is an adequate sample size for comparing brain networks? This is typically the first question in any comparison study. Clearly, the answer depends on the magnitude of the network differences between the groups and on the power of the test. If the subpopulations differ greatly, then a moderate number of networks per group is enough; if the differences are small, a larger sample size is required to achieve reasonable detection power. The problem becomes harder when it comes to identification. We showed in Example 1 that we obtain a good identification rate when 100 networks are sampled from each subgroup, whereas the rate of correct identification is low for a sample size of, say, 30.

Confounding variables in Neuroimaging

Humans are highly variable in their brain activity, which can be influenced by their level of alertness, mood, motivation, health and many other factors. Even the amount of coffee drunk prior to the scan can greatly influence resting-state neural activity. Which variables must be controlled to make a fair comparison between two or more groups? Certainly age, gender, and education are among them, and in this study we found that the number of hours slept on the nights prior to the scan is also relevant. Although this may seem obvious, to the best of our knowledge most studies do not control for this variable. Five other variables were identified, each related to some dimension of cognitive flexibility, self-regulation/impulsivity, relational processing, working memory or motor strength. Finally, we identified a set of 20 highly interdependent brain volumetric variables as relevant. In principle, the role of these variables is not surprising, since comparing brain activity between individuals requires pre-processing the images by realigning and normalizing them to a standard brain; the relevance of specific area volumes may simply be a by-product of this standardization process. However, if our finding that brain volumetric variables affect functional networks is replicated in other studies, this poses a problem for future experimental designs. Specifically, groups will have to be matched not only on variables such as age, gender and education level, but also on volumetric variables, which can only be measured in the scanner. Therefore, several individuals would have to be scanned before selecting the final study groups.

In sum, a large number of subjects in each group must be tested to obtain highly reproducible findings when analysing resting-state data with network methodologies. Also, whenever possible, the same participants should be tested both as controls and as the treatment group (paired samples) in order to minimize the impact of brain volumetric variables.

Deco, G. & Kringelbach, M. L. Great expectations: using whole-brain computational connectomics for understanding neuropsychiatric disorders. Neuron 84 , 892–905 (2014).


Stephan, K. E., Iglesias, S., Heinzle, J. & Diaconescu, A. O. Translational perspectives for computational neuroimaging. Neuron 87 , 716–732 (2015).

Bullmore, E. & Sporns, O. Complex brain networks: network theoretical analysis of structural and functional systems. Nature Reviews Neuroscience 10 , 186–196 (2009).

Fornito, A., Zalesky, A. & Bullmore, E. Fundamentals of Brain Network Analysis. Elsevier (2016).

Anonymous. Focus on human brain mapping. Nat. Neurosci. 20, 297–298 (2017).


Button, K. S. et al . Power failure: why small sample size undermines the reliability of neuroscience. Nat. Rev. Neurosci. 14 , 365–376 (2013).

Poldrack, R. Scanning the horizon: towards transparent and reproducible neuroimaging research. Nat. Rev. Neurosci. 18 , 115–126 (2017).

Nichols, T. E. et al . Best Practices in Data Analysis and Sharing in Neuroimaging using MRI. Nat. Neurosci. 20 , 299–303 (2016).

Bennett, C. M. & Miller, M. B. How reliable are the results from functional magnetic resonance imaging? Annals of the New York Academy of Sciences 1191 , 133–155 (2010).


Brown, E. N. & Behrmann, M. Controversy in statistical analysis of functional magnetic resonance imaging data. Proc Natl Acad Sci USA 114 , E3368–E3369 (2017).


Fraiman, D., Fraiman, N. & Fraiman, R. Non parametric statistics of dynamic networks with distinguishable nodes. Test 26, 546–573 (2017).


Cerqueira, A., Fraiman, D., Vargas, C. & Leonardi, F. A test of hypotheses for random graph distributions built from EEG data. IEEE Transactions on Network Science and Engineering 4, 75–82 (2017).


Kolar, M., Song, L., Ahmed, A. & Xing, E. Estimating time-varying networks. Ann. Appl. Stat. 4, 94–123 (2010).


Zalesky, A., Fornito, A. & Bullmore, E. Network-based statistic: identifying differences in brain networks. Neuroimage 53 , 1197–1207 (2010).


Sanfeliu, A. & Fu, K. A distance measure between attributed relational graphs. IEEE T. Sys. Man. Cyb. 13 , 353–363 (1983).

Schieber, T. et al . Quantification of network structural dissimilarities. Nature communications 8 , 13928 (2017).


Shimada, Y., Hirata, Y., Ikeguchi, T. & Aihara, K. Graph distance for complex networks. Scientific reports 6 , 34944 (2016).

Gao, X., Xiao, B., Tao, D. & Li, X. A survey of graph edit distance. Pattern Anal Appl 13 , 113–129 (2010).

Zalesky, A., Cocchi, L., Fornito, A., Murray, M. & Bullmore, E. Connectivity differences in brain networks. Neuroimage 60 , 1055–1062 (2012).

Della-Maggiore, V., Villalta, J. I., Kovacevic, N. & McIntosh, A. R. Functional Evidence for Memory Stabilization in Sensorimotor Adaptation: A 24-h Resting-State fMRI Study. Cerebral Cortex 27 , 1748–1757 (2015).

Mawase, F., Bar-Haim, S. & Shmuelof, L. Formation of Long-Term Locomotor Memories Is Associated with Functional Connectivity Changes in the Cerebellar–Thalamic–Cortical Network. Journal of Neuroscience 37, 349–361 (2017).

Fraiman, D. & Chialvo, D. What kind of noise is brain noise: anomalous scaling behavior of the resting brain activity fluctuations. Frontiers in Physiology 3 , 307 (2012).


Garcia-Cordero, I. et al . Stroke and neurodegeneration induce different connectivity aberrations in the insula. Stroke 46 , 2673–2677 (2015).

Fraiman, D. et al . Reduced functional connectivity within the primary motor cortex of patients with brachial plexus injury. Neuroimage Clinical 12 , 277–284 (2016).

Dottori, M. et al . Towards affordable biomarkers of frontotemporal dementia: A classification study via network’s information sharing. Scientific Reports 7 , 3822 (2017).


Human Connectome Project. http://www.humanconnectomeproject.org/

Kaufmann, T. et al . The brain functional connectome is robustly altered by lack of sleep. NeuroImage 127 , 324–332 (2016).

Krause, A. et al . The sleep-deprived human brain. Nature Reviews Neuroscience 18 , 404–418 (2017).

Smith, S. et al . A positive-negative mode of population covariation links brain connectivity, demographics and behavior. Nature neuroscience 18 , 1565–1567 (2015).

Human Connectome Project. WU-Minn HCP 900 Subjects Data Release: Reference Manual. 67–87 (2015).

Griffanti, L. et al . ICA-based artefact removal and accelerated fMRI acquisition for improved resting state network imaging. Neuroimage 95 , 232–247 (2014).

Smith, S. M. et al . Resting-state fMRI in the Human Connectome Project. Neuroimage 80 , 144–168 (2013).

Beckmann, C., DeLuca, M., Devlin, J. & Smith, S. Investigations into resting-state connectivity using independent component analysis. Philosophical Transactions of the Royal Society of London B: Biological Sciences 360 , 1001–1013 (2005).

Fraiman, D., Saunier, G., Martins, E. & Vargas, C. Biological Motion Coding in the Brain: Analysis of Visually Driven EEG Functional Networks. PLoS ONE 9, e84612 (2014).

Amoruso, L. et al . Brain network organization predicts style-specific expertise during Tango dance observation. Neuroimage 146 , 690–700 (2017).

van den Heuvel, M. P. et al . Proportional thresholding in resting-state fMRI functional connectivity networks and consequences for patient-control connectome studies: Issues and recommendations. Neuroimage 152 , 437–449 (2017).

Fischl, B. FreeSurfer. Neuroimage 62 , 774–781 (2012).

Buhlmann, P. & van der Geer, S. Statistics for High-Dimensional Data: Methods, Theory and Applications. Springer (2011).

Yoshida, K., Yoshimoto, J. & Doya, K. Sparse kernel canonical correlation analysis for discovery of nonlinear interactions in high-dimensional data. BMC Bioinformatics 18 , 108 (2017).

Yamanishi, Y., Vert, J. P., Nakaya, A. & Kanehisa, M. Extraction of correlated gene clusters from multiple genomic data by generalized kernel canonical correlation analysis. Bioinformatics 19 , 323–330 (2003).


Acknowledgements

We thank two anonymous reviewers for extensive comments that helped improve the manuscript significantly. Data were provided by the Human Connectome Project, WU-Minn Consortium (Principal Investigators: David Van Essen and Kamil Ugurbil; 1U54MH091657) funded by the 16 NIH Institutes and Centers that support the NIH Blueprint for Neuroscience Research; and by the McDonnell Center for Systems Neuroscience at Washington University. This paper was produced as part of the activities of FAPESP Research, Innovation and Dissemination Center for Neuromathematics (Grant No. 2013/07699-0, S. Paulo Research Foundation). This work was partially supported by PAI UdeSA.

Author information

Authors and Affiliations

Departamento de Matemática y Ciencias, Universidad de San Andrés, Buenos Aires, Argentina

Daniel Fraiman

Consejo Nacional de Investigaciones Científicas y Tecnológicas, Buenos Aires, Argentina

Centro de Matemática, Facultad de Ciencias, Universidad de la República, Montevideo, Uruguay

Ricardo Fraiman

Instituto Pasteur de Montevideo, Montevideo, Uruguay


Contributions

D.F. and R.F. conceived the research, analysed the data and wrote the manuscript.

Corresponding author

Correspondence to Daniel Fraiman .

Ethics declarations

Competing interests.

The authors declare no competing interests.

Additional information

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article.

Fraiman, D., Fraiman, R. An ANOVA approach for statistical comparisons of brain networks. Sci Rep 8 , 4746 (2018). https://doi.org/10.1038/s41598-018-23152-5


Received : 13 November 2017

Accepted : 06 March 2018

Published : 16 March 2018

DOI : https://doi.org/10.1038/s41598-018-23152-5



Basic and Advanced Statistical Tests, pp. 21–24

One-Way Anova

  • Amanda Ross &
  • Victor L. Willson


A one-way ANOVA (analysis of variance) compares the means of two or more groups on one dependent variable. A one-way ANOVA is required when the study includes more than two groups (in other words, a t-test cannot be used). As with t-tests, there is one independent variable and one dependent variable; the dependent variable should be interval-level and the groups nominal. The dependent variable is also assumed to be approximately normally distributed within each group, although the test is reasonably robust to violations of this assumption.


Author information

Authors and Affiliations

A. A. Ross Consulting and Research, USA

Amanda Ross

Texas A&M University, USA

Victor L. Willson



Copyright information

© 2017 Sense Publishers

About this chapter

Cite this chapter.

Ross, A., Willson, V.L. (2017). One-Way Anova. In: Basic and Advanced Statistical Tests. SensePublishers, Rotterdam. https://doi.org/10.1007/978-94-6351-086-8_5


DOI : https://doi.org/10.1007/978-94-6351-086-8_5

Publisher Name : SensePublishers, Rotterdam

Online ISBN : 978-94-6351-086-8

eBook Packages : Education Education (R0)



    A one-way ANOVA (analysis of variance) compares the means of two or more groups for one dependent variable. A one-way ANOVA is required when the study includes more than two groups. (In other words, a t -test cannot be used.) As with t -tests, there is one independent variable and one dependent variable. Interval dependent variables for nominal ...