## ANOVA

ANOVA (ANalysis Of VAriance) is a powerful statistical tool for determining whether different groups of observations differ significantly from one another. The analysis of variance was introduced by Ronald Fisher, an English scientist who made an enormous contribution to the development of statistics.

### Example

Suppose you want to conduct an empirical study of gasoline quality. You fill up the tank at one gas station and drive n kilometers, repeating the experiment, say, five times; then you conduct the same experiment at a different gas station. You now have two sets of data: station A and station B. Certainly, the figures are scattered, but there may still be some dependence, so to determine whether the choice of station affects gasoline consumption (or whether the data are unrelated), you use analysis of variance.

The analysis of variance allows you to determine which source of variation is larger: the within-group or the between-group variation. In the example above, you will be able to determine how much the choice of gas station affects gasoline consumption. This is the essence of the analysis of variance: to find out whether the selected factor is significant for the selected observations.

In a sense, the analysis of variance is similar to regression and correlation analysis, because it allows you to determine the influence of variables on one another.

## Analysis

In theory, a simple model is built to analyze the variance, similar to the one studied in time series analysis.

### Model

The model of the analysis of variance includes the overall mean, the effect of the treatment, and a random error:

y_{ij}= μ + τ_{i}+ ε_{ij}

μ - overall mean, τ_{i}- effect of treatment i, ε_{ij}- random error
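This model can be illustrated with a short simulation; a minimal sketch in Python, where the group effects τ and the noise level are hypothetical values chosen only for illustration:

```python
import random

random.seed(1)

mu = 50.0               # overall mean
tau = [0.0, 2.0, -3.0]  # hypothetical treatment effects, one per group
n = 5                   # observations per group

# y_ij = mu + tau_i + eps_ij: each observation is the overall mean,
# shifted by its group's effect, plus Gaussian noise
data = [[mu + t + random.gauss(0, 1.0) for _ in range(n)] for t in tau]
```

Each row of `data` is one group; ANOVA asks whether the differences between the row means are larger than the random noise can explain.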

### Single-factor

One-factor analysis of variance considers the influence of a single criterion. It is done this way: we conduct several series of experiments, varying one factor between them, and analyze whether this factor has made a difference. As initial data, consider the results of a number of experiments:

| N | E_{1} | E_{2} | E_{3} | E_{4} |
|---|---|---|---|---|
| 1 | 57 | 54 | 121 | 59 |
| 2 | 47 | 50 | 85 | 49 |
| 3 | 38 | 57 | 136 | 59 |
| 4 | 55 | 51 | 85 | 32 |
| 5 | 43 | 47 | 108 | 38 |
| μ_{i} | 48 | 51.8 | 107 | 47.4 |

μ = (48 + 51.8 + 107 + 47.4) / 4 = 63.55

The sum of squared errors within groups (within-group sum of squares):

SS_{w}= Σ_{i}Σ_{j}(y_{ij}- μ_{i})^{2}= 2918

The sum of squared errors between groups (between-group sum of squares), weighted by the group size n = 5:

SS_{b}= n·Σ_{i}(μ_{i}- μ)^{2}= 5 · 2528.59 = 12642.95

Given the degrees of freedom (a = 4 groups, N = a·n = 20 observations), the mean squares are:

MS_{w}= SS_{w}/(a(n-1)) = 2918/16 = 182.38

MS_{b}= SS_{b}/(a-1) = 12642.95/3 = 4214.32

The observed value of the F statistic:

F_{0}= MS_{b}/MS_{w}= 23.11

Fisher's test: if the value of F_{0} turns out to be greater than the critical value F_{α, a-1, N-a}, then the factor has an impact.

For N = 20 and a = 4 at significance level α = 0.05, F_{α, a-1, N-a}= F_{0.05, 3, 16}= 3.24

Since F_{0}= 23.11 > 3.24, we conclude that the introduced factor did have an effect on the results of the experiment (the E_{3} group stands out sharply).
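The one-factor computation can be reproduced in plain Python; a sketch using the standard textbook conventions (the between-group sum of squares carries the group size n, with a - 1 and N - a degrees of freedom):

```python
# The four groups of five observations from the table above
groups = [
    [57, 47, 38, 55, 43],    # E1
    [54, 50, 57, 51, 47],    # E2
    [121, 85, 136, 85, 108], # E3
    [59, 49, 59, 32, 38],    # E4
]

a = len(groups)     # number of groups
n = len(groups[0])  # observations per group
N = a * n

group_means = [sum(g) / n for g in groups]
grand_mean = sum(group_means) / a

# within-group sum of squares: spread of each observation around its group mean
ss_w = sum((y - m) ** 2 for g, m in zip(groups, group_means) for y in g)
# between-group sum of squares: spread of the group means around the grand mean
ss_b = n * sum((m - grand_mean) ** 2 for m in group_means)

ms_w = ss_w / (a * (n - 1))  # 16 degrees of freedom
ms_b = ss_b / (a - 1)        # 3 degrees of freedom
f0 = ms_b / ms_w
```

Comparing `f0` against the critical value of the F distribution with (3, 16) degrees of freedom completes the test.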

### Two-factor

In two-factor analysis, three null hypotheses are put forward for verification:

- Factor A does not affect the result
- Factor B does not affect the result
- The interaction of factors A and B does not affect the result

To carry out a two-factor analysis, it is necessary to group the results: several measurements for every combination of the levels of the two factors, i.e.:

| | A_{1} | A_{2} |
|---|---|---|
| B_{1} | X1_{a1,b1}...XN_{a1,b1} | X1_{a2,b1}...XN_{a2,b1} |
| B_{2} | X1_{a1,b2}...XN_{a1,b2} | X1_{a2,b2}...XN_{a2,b2} |

Next, the average value for each factor level is calculated, i.e. the mean for A_{1}, the mean for B_{1}, etc., followed by the overall mean of all results. Let k = 2 be the number of levels of factor A and m = 2 the number of levels of factor B; n is the number of measurements per cell, and N = n·m·k is the total number of observations.

The total sum of all observations:

T = ΣΣΣx_{ijk}

The sum of the elements at level i of factor A (the last index runs over the n repeated measurements):

T_{Ai}= Σ_{j}Σ_{k}x_{ijk}

The sum of the elements at level j of factor B:

T_{Bj}= Σ_{i}Σ_{k}x_{ijk}

The sum of the elements in cell (i, j) of the factor combination AB:

T_{AiBj}= Σ_{k}x_{ijk}

SST = Σx^{2}_{ijk}- T^{2}/N

SSA = ΣT^{2}_{Ai}/(n·m) - T^{2}/N

SSB = ΣT^{2}_{Bj}/(n·k) - T^{2}/N

SSAB = ΣΣT^{2}_{AiBj}/n - SSA - SSB - T^{2}/N

SSE = ΣΣΣx^{2}_{ijk}- ΣΣT^{2}_{AiBj}/n

SST = SSA + SSB + SSAB + SSE

MSE = SSE/(m·k·(n-1))

MSA = SSA/(k-1)

MSB = SSB/(m-1)

MSAB = SSAB/((m-1)·(k-1))

Test of the hypothesis "Factor A does not affect the result", ν_{1}= k-1:

F_{A}= MS_{A}/MS_{E}

Test of the hypothesis "Factor B does not affect the result", ν_{1}= m-1:

F_{B}= MS_{B}/MS_{E}

Test of the hypothesis "The interaction of factors A and B does not affect the result", ν_{1}= (k-1)(m-1):

F_{int}= MS_{AB}/MS_{E}

For each F, if F > F_{α,ν1,ν2}, then the corresponding hypothesis is rejected; here ν_{2}= N-mk.
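This bookkeeping is easy to get wrong by hand, so it helps to verify the decomposition SST = SSA + SSB + SSAB + SSE numerically; a sketch in Python on a small hypothetical data set (the numbers are invented purely for illustration):

```python
# data[i][j] holds the n replicates for level i of factor A and level j of B
data = [
    [[3.1, 2.9, 3.0], [4.2, 4.0, 4.1]],  # A1: cells (B1, B2)
    [[2.8, 3.2, 3.0], [5.1, 4.9, 5.0]],  # A2: cells (B1, B2)
]

k = len(data)        # levels of factor A
m = len(data[0])     # levels of factor B
n = len(data[0][0])  # replicates per cell
N = n * m * k

flat = [x for row in data for cell in row for x in cell]
T = sum(flat)
C = T ** 2 / N  # correction term T^2/N

T_A = [sum(x for cell in row for x in cell) for row in data]      # totals per A level
T_B = [sum(sum(data[i][j]) for i in range(k)) for j in range(m)]  # totals per B level
T_AB = [[sum(cell) for cell in row] for row in data]              # totals per cell

SST = sum(x ** 2 for x in flat) - C
SSA = sum(t ** 2 for t in T_A) / (n * m) - C
SSB = sum(t ** 2 for t in T_B) / (n * k) - C
SSAB = sum(t ** 2 for row in T_AB for t in row) / n - SSA - SSB - C
SSE = sum(x ** 2 for x in flat) - sum(t ** 2 for row in T_AB for t in row) / n
```

The identity SST = SSA + SSB + SSAB + SSE holds for any data, so it serves as a built-in check on the arithmetic.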

### Multifactorial

Multifactor analysis is similar to two-factor analysis: the same operations are performed, but the criteria are grouped and the influence of each factor is found iteratively.

### With repeated measurements

Analysis of variance with repeated measurements means that, for each criterion, several measurements of the random variable were taken to obtain a more accurate result, since ANOVA relies on the within-group sum of squares.

### Application

Analysis of variance is used in a wide variety of branches of science and industry whenever it is necessary to study how criteria depend on differences between groups, comparing not the mean values themselves but the spread of the results around the mean, i.e. the variance.

## Solving problems

As an example, let's take a problem from metrology. A plant houses five machines that produce shafts. It is necessary to determine whether the choice of machine or the training of an employee affects the production result. For the analysis, measurements are made for each machine and each employee, giving the following table:

Operator 1

| Machine | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| M1 | 30.345 | 30.594 | 30.358 | 30.692 | 30.656 | 30.692 | 30.317 | 30.474 | 30.557 | 30.455 |
| M2 | 30.389 | 30.327 | 30.319 | 30.365 | 30.354 | 30.353 | 30.377 | 30.379 | 30.303 | 30.339 |
| M3 | 30.37 | 30.529 | 30.454 | 30.458 | 30.511 | 30.439 | 30.545 | 30.688 | 30.571 | 30.359 |
| M4 | 30.389 | 30.38 | 30.353 | 30.333 | 30.339 | 30.334 | 30.378 | 30.361 | 30.309 | 30.347 |
| M5 | 30.988 | 31.279 | 30.703 | 30.506 | 31.26 | 30.801 | 31.012 | 30.853 | 30.537 | 30.303 |

Operator 2

| Machine | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| M1 | 30.128 | 30.288 | 30.299 | 30.163 | 30.195 | 30.219 | 30.133 | 30.277 | 30.124 | 30.261 |
| M2 | 30.356 | 30.395 | 30.382 | 30.387 | 30.368 | 30.304 | 30.319 | 30.331 | 30.394 | 30.325 |
| M3 | 30.371 | 30.793 | 30.636 | 30.642 | 30.529 | 30.761 | 30.584 | 30.672 | 30.635 | 30.457 |
| M4 | 30.596 | 30.768 | 30.785 | 30.984 | 30.684 | 30.796 | 30.632 | 30.829 | 30.945 | 30.479 |
| M5 | 30.314 | 30.367 | 30.376 | 30.395 | 30.31 | 30.389 | 30.339 | 30.374 | 30.392 | 30.353 |

Let's use the method of two-factor analysis: factor A is the operator, factor B is the machine. To calculate the sums of squares, first compute the total for each group:

| T | T_{A1} | T_{A2} | T_{B1} | T_{B2} | T_{B3} | T_{B4} | T_{B5} |
|---|---|---|---|---|---|---|---|
| 3048.169 | 1525.334 | 1522.835 | 607.227 | 607.066 | 611.004 | 611.021 | 611.851 |

SSA = 0.062

SSB = 1.055

SSAB = 2.334

SSE = 1.65

MSA = 0.062

MSB = 0.264

MSAB = 0.584

MSE = 1.65/90 = 0.0183

F_{A}= 3.39

F_{B}= 14.4

F_{AB}= 31.9

Critical values for the Fisher test:

F_{crit A}= F_{0.1, 1, 90}= 2.77

F_{crit B}= F_{0.1, 4, 90}= 2.01

F_{crit AB}= F_{0.1, 4, 90}= 2.01

Results table:

| Hypothesis tested | Effect? | Comparison |
|---|---|---|
| The operator (factor A) affects the result | Yes | 3.39 > 2.77 |
| The machine (factor B) affects the result | Yes | 14.4 > 2.01 |
| The interaction of operator and machine affects the result | Yes | 31.9 > 2.01 |
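The worked solution can be double-checked by recomputing the sums of squares directly from the raw measurements; a sketch in plain Python following the formulas from the two-factor section:

```python
# Factor A: operator (k = 2 levels), factor B: machine (m = 5 levels),
# n = 10 replicates per cell, N = 100 observations.
op1 = [
    [30.345, 30.594, 30.358, 30.692, 30.656, 30.692, 30.317, 30.474, 30.557, 30.455],
    [30.389, 30.327, 30.319, 30.365, 30.354, 30.353, 30.377, 30.379, 30.303, 30.339],
    [30.37, 30.529, 30.454, 30.458, 30.511, 30.439, 30.545, 30.688, 30.571, 30.359],
    [30.389, 30.38, 30.353, 30.333, 30.339, 30.334, 30.378, 30.361, 30.309, 30.347],
    [30.988, 31.279, 30.703, 30.506, 31.26, 30.801, 31.012, 30.853, 30.537, 30.303],
]
op2 = [
    [30.128, 30.288, 30.299, 30.163, 30.195, 30.219, 30.133, 30.277, 30.124, 30.261],
    [30.356, 30.395, 30.382, 30.387, 30.368, 30.304, 30.319, 30.331, 30.394, 30.325],
    [30.371, 30.793, 30.636, 30.642, 30.529, 30.761, 30.584, 30.672, 30.635, 30.457],
    [30.596, 30.768, 30.785, 30.984, 30.684, 30.796, 30.632, 30.829, 30.945, 30.479],
    [30.314, 30.367, 30.376, 30.395, 30.31, 30.389, 30.339, 30.374, 30.392, 30.353],
]

data = [op1, op2]  # data[i][j]: operator i, machine j
k, m, n = 2, 5, 10
N = k * m * n

flat = [x for op in data for machine in op for x in machine]
T = sum(flat)
C = T ** 2 / N  # correction term T^2/N

T_A = [sum(x for mach in op for x in mach) for op in data]        # per operator
T_B = [sum(sum(data[i][j]) for i in range(k)) for j in range(m)]  # per machine
T_AB = [[sum(cell) for cell in op] for op in data]                # per cell

SSA = sum(t ** 2 for t in T_A) / (n * m) - C
SSB = sum(t ** 2 for t in T_B) / (n * k) - C
SSAB = sum(t ** 2 for op in T_AB for t in op) / n - SSA - SSB - C
SSE = sum(x ** 2 for x in flat) - sum(t ** 2 for op in T_AB for t in op) / n

MSE = SSE / (m * k * (n - 1))  # 90 degrees of freedom
F_A = (SSA / (k - 1)) / MSE
F_B = (SSB / (m - 1)) / MSE
F_AB = (SSAB / ((k - 1) * (m - 1))) / MSE
```

Comparing `F_A`, `F_B`, and `F_AB` against the tabulated critical values completes the test.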

### In Excel/OpenOffice Calc

To carry out the analysis of variance in a spreadsheet, you will need the following functions:

| Function | Purpose |
|---|---|
| SUMPRODUCT | Sum of products; used to find the sums of squares |
| FINV | Inverse of the F distribution (Fisher criterion) |
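Outside a spreadsheet, the same critical values can be obtained programmatically; a sketch using SciPy, whose `scipy.stats.f.ppf` is the counterpart of FINV (ppf takes the lower-tail probability, hence 1 - α):

```python
from scipy.stats import f

alpha = 0.1  # significance level used in the problem above

# FINV(alpha, df1, df2) returns the upper-tail critical value;
# ppf takes the lower-tail probability, so pass 1 - alpha
f_crit_A = f.ppf(1 - alpha, 1, 90)  # ≈ 2.77
f_crit_B = f.ppf(1 - alpha, 4, 90)  # ≈ 2.01
```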