## What is it and who needs it?

Statistical hypothesis testing is a way of mathematically determining the validity of some statement based on a
distribution law. Having mastered this method, you will be able to draw mathematically sound conclusions, for example:

### Example #1

You manufacture dice for a dice game, and to make sure a die is perfectly balanced, you run a test: roll the die 600 times and decide that if each number comes up 100±10 times, the die is balanced.
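This acceptance rule is easy to try out in code. Below is a minimal simulation sketch: the die is simulated as fair, and the fixed seed is only for reproducibility.

```python
import random

random.seed(42)

# Roll a fair die 600 times and count how often each face appears.
counts = {face: 0 for face in range(1, 7)}
for _ in range(600):
    counts[random.randint(1, 6)] += 1

# The rule from the text: the die passes if every face fell 100±10 times.
balanced = all(90 <= c <= 110 for c in counts.values())
print(counts, "balanced" if balanced else "suspicious")
```

Run it a few times without the seed: even a perfectly fair die occasionally fails the 100±10 rule, which is exactly the kind of error the rest of this article quantifies.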

### Example #2

In production, 5% of products are rejected. You have developed a new technology and want to check whether it reduces the defect rate.

## Basic terms, definitions and formulas

### Null and alternative statistical hypotheses

Mathematically, the condition of a statistical test is written as a main (null) hypothesis H_{0} and
an alternative (competing) hypothesis H_{1}. The main hypothesis asserts a specific parameter value.
The alternative hypothesis indicates the region of values we may also be interested in.

#### Back to the examples:

In the first example, we want to find out whether each number comes up 100±10 times; for us, both more than 110 and less than 90 count as failures:

H_{0}: μ = 100±10

H_{1}: μ ≠ 100±10

In formal notation this is written as:

H_{0}: μ = 100

H_{1}: μ ≠ 100

α = 0.1
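One standard way to test this kind of hypothesis is a chi-square goodness-of-fit test. The sketch below uses made-up roll counts; the critical value 9.236 is the tabulated chi-square quantile for 6 − 1 = 5 degrees of freedom at α = 0.1.

```python
# Chi-square goodness-of-fit test for the die.
observed = [95, 108, 103, 92, 110, 92]   # hypothetical results of 600 rolls
expected = 600 / 6                        # 100 rolls per face under H0

# The statistic: sum of squared deviations, scaled by the expected count.
chi2 = sum((o - expected) ** 2 / expected for o in observed)

# Tabulated critical value of the chi-square distribution, df = 5, alpha = 0.1.
CRITICAL = 9.236
print(f"chi2 = {chi2:.2f}:",
      "reject H0" if chi2 > CRITICAL else "do not reject H0")
```

Here chi2 ≈ 3.26, well below the critical value, so these hypothetical counts give no reason to doubt that the die is balanced.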

In the second example, we want to find out whether the new technology is better than the old one. We are not interested in whether it has become worse, only in whether there is an improvement. Suppose that if the defect rate remains at the level of 5%±0.25%, the process has not improved, and if the defect rate falls below 4.75%, there is an improvement:

H_{0}: p = 5±0.25%

H_{1}: p < 4.75%

In formal notation this is written as:

H_{0}: p = 0.05

H_{1}: p < 0.05

α = 0.05

### Critical area and two errors

The region of values in which the main hypothesis is rejected is called the critical region; the size of this region is set by the significance level.

If the sample statistic falls into the critical region, we reject the main hypothesis; if that rejection turns out to be wrong, we have made a
**type I error**. We can also wrongly retain the main hypothesis when the alternative is true; such an error is called a **type II error**.

#### Why?

We formulate the hypothesis so that wrongly rejecting the main hypothesis has more serious consequences for our decision than wrongly accepting the alternative. Here is an example:

A study examines whether there is a link between smoking and cancer; the main hypothesis is stated as follows: smoking causes cancer. If we reject this statement and it turns out to be true, we endanger human lives (a type I error). On the other hand, if smoking does not cause cancer but the experiment led us to conclude that it does, the consequences are far less serious (a type II error).

In terms of decision making, we want to control the rate of **type I errors**: if we need to make a decision about a certain statement,
we must set a **significance level** α, and all subsequent calculations depend on this parameter.

### Significance level, statistical power

**The significance level** α is the probability of making a type I error.
**Statistical power** is associated with the type II error (β): power is the probability of rejecting the main hypothesis when the
alternative is true. The probability of a type II error and the statistical power sum to 100%, so the greater the statistical power,
the lower the chance of making a type II error.
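The relationship power = 1 − β can be made concrete with a small calculation. The sketch below estimates the power of a one-sided z-test for the defect-rate example; all numbers (n = 1000, a true defect rate of 4% under the alternative) are hypothetical.

```python
from statistics import NormalDist
from math import sqrt

# Power of a one-sided z-test (hypothetical numbers):
# H0: p = 0.05 vs H1: p = 0.04, sample of n = 1000, alpha = 0.05.
n, p0, p1, alpha = 1000, 0.05, 0.04, 0.05

se0 = sqrt(p0 * (1 - p0) / n)   # standard error of the proportion under H0
se1 = sqrt(p1 * (1 - p1) / n)   # standard error under H1

# Reject H0 when the observed proportion falls below this threshold.
threshold = p0 + NormalDist().inv_cdf(alpha) * se0

power = NormalDist(p1, se1).cdf(threshold)  # P(reject H0 | H1 is true)
beta = 1 - power                            # probability of a type II error
print(f"power = {power:.3f}, beta = {beta:.3f}")
```

For these numbers the power is only about 0.41: even a real improvement from 5% to 4% would be detected less than half the time, which is why power calculations are done before choosing a sample size.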

### So we have:

**Statistical hypothesis testing** is the mathematical verification of a certain statement

**Null hypothesis** (H_{0}) - an assumption about a certain parameter θ, H_{0}: θ = θ_{0}

**Alternative hypothesis** (H_{1}) - an assumption about the same parameter θ, H_{1}: θ ≠ θ_{0}

**Critical region** - the region of values in which the main hypothesis H_{0} is rejected

**Type I error** - rejecting the main hypothesis when it is true

**Type II error** - accepting the main hypothesis when it is false

#### Example

Mathematical record of the hypothesis that the average value of the general population is 2

H_{0} : μ = 2

H_{1} : μ ≠ 2

#### Another example

Mathematical record of the hypothesis that the average value of sample A and the average value of sample B are equal

H_{0} : μ_{A} = μ_{B}

H_{1} : μ_{A} ≠ μ_{B}
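A hypothesis such as H_{0}: μ_{A} = μ_{B} can be tested, for example, with a two-sample z-test. The data below are hypothetical, and for samples this small a t-test would be more appropriate; the z-test is used only to keep the sketch simple.

```python
from statistics import NormalDist, mean, stdev
from math import sqrt

# Two-sample z-test sketch for H0: mu_A = mu_B (hypothetical data).
a = [5.1, 4.9, 5.3, 5.0, 5.2, 4.8, 5.1, 5.0]
b = [5.6, 5.4, 5.7, 5.5, 5.3, 5.6, 5.5, 5.4]

# Standard error of the difference of the two sample means.
se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
z = (mean(a) - mean(b)) / se

# Two-sided p-value: how extreme is this z under H0?
p_value = 2 * NormalDist().cdf(-abs(z))
print(f"z = {z:.2f}, p-value = {p_value:.4f}")
```

Here the means differ by about 0.45 against a standard error of roughly 0.07, so the p-value is tiny and H_{0}: μ_{A} = μ_{B} would be rejected for these made-up samples.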

#### And of course

Mathematical record of the hypothesis that the average value of sample A is less than the average value of sample B (by convention, the claim being tested goes into the alternative hypothesis):

H_{0} : μ_{A} ≥ μ_{B}

H_{1} : μ_{A} < μ_{B}

## Significance level α

The significance level (its complement, 1 − α, is the confidence level) is a parameter that gives the probability
that a true hypothesis will be rejected. This parameter can either be obtained from the data or fixed in advance by the problem statement; here are two examples:

- Can we be 90% sure (a significance level of 10%) that the car will not need repair within a year? After testing the hypothesis, we get the answer "yes" or "no"
- How sure can we be that the car will not need repair within a year? After testing the hypothesis, we get the result as a percentage

## Hypothesis errors

When we make a statement about a certain hypothesis, we can make two mistakes:

### Error of the first kind α

For example, we tested a certain sample and concluded from the results that the parameter X does not correspond to the general population. If the sampling
was done incorrectly and the parameter X does in fact describe the general population, then we have made a type I error: we rejected the main hypothesis
when it is true.

α = P(type I error) = P(reject H_{0} | H_{0} is true)

The probability of a type I error and the significance level are the same quantity.

### Example

We weighed 10 rabbits; their average weight is 5.1±0.5 kg.

Suppose that rabbit weight follows a normal law; then the standard error of the mean is:

σ_{x̄} = 0.5/√10 ≈ 0.16

μ = 5.1

The hypothesis condition:

α = P(H_{0} rejected | H_{0} is true) = P(x̄ < …)
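The numbers from the rabbit example can be checked directly. The 4.8 kg threshold below is hypothetical, chosen only to illustrate how α would be computed for a concrete critical region.

```python
from statistics import NormalDist
from math import sqrt

# Standard error of the mean for the rabbit example above.
sigma, n, mu = 0.5, 10, 5.1
se = sigma / sqrt(n)          # ~0.158, rounded to 0.16 in the text

# If H0: mu = 5.1 is true, how likely is a sample mean below some
# threshold? (4.8 kg is a hypothetical cut-off, not from the text.)
alpha = NormalDist(mu, se).cdf(4.8)
print(f"se = {se:.3f}, P(sample mean < 4.8) = {alpha:.4f}")
```

With this cut-off, α comes out at about 3%: if the true mean weight really is 5.1 kg, only about 3% of samples of 10 rabbits would produce a mean below 4.8 kg.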

### Error of the second kind β

The reverse of a type I error: we accepted the main hypothesis, but it turned out to be false.

β = P(type II error) = P(accept H_{0} | H_{0} is false)

## Statistical hypothesis testing

Checking a statistical hypothesis means performing the following steps:

1. Draw a random sample

2. Compute the parameter X of the sample

3. Test the hypothesis using the obtained value of X
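The three steps above can be sketched as a one-sample z-test. All numbers are hypothetical: the sample is simulated, and σ is assumed known.

```python
from statistics import NormalDist, mean
from math import sqrt
import random

random.seed(0)

# 1. Draw a random sample (here: simulated, true mean 2.1, sigma 0.5).
sample = [random.gauss(2.1, 0.5) for _ in range(50)]

# 2. Compute the sample statistic: the z-score of the mean under H0: mu = 2.
x_bar = mean(sample)
z = (x_bar - 2.0) / (0.5 / sqrt(50))

# 3. Test the hypothesis at significance level alpha.
alpha = 0.05
p_value = 2 * NormalDist().cdf(-abs(z))
print("reject H0" if p_value < alpha else "do not reject H0")
```

This mirrors the μ = 2 example from the glossary section: the same three steps apply whatever the distribution or the specific test statistic.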