# Tests of statistical significance – How do you decide which to use?

The most perplexing aspect of statistics for someone without a statistical background is knowing which basic statistical test to use, and when.

The purpose of this article is to differentiate between the most common statistical tests, describe how null value hypotheses are applied in practice, and outline which statistical test to use under what conditions.

**Hypothesis testing and null hypotheses**

Beyond knowing how to write a hypothesis, it is important to define a null hypothesis before moving on to the differences between the various tests. The null hypothesis is the assumption that there is no meaningful difference between two sets of data. For the tests discussed here, there are two hypotheses:

Null hypothesis:

The means of the two samples are equivalent.

Alternate hypothesis:

The means of the two samples differ significantly from one another.


Calculating a test statistic allows one to decide whether or not to reject the null hypothesis. If the test statistic falls beyond a predetermined threshold value, the null hypothesis is rejected. The critical region is the range of values within which a test statistic must fall in order for the null hypothesis to be rejected, and its limits are given by the critical values. In certain circumstances (such as a two-sided t-test) there will be two critical values, whereas in others (such as a chi-square test or a one-sided t-test) there will be only one.
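As a minimal sketch of this idea, the snippet below derives the critical values for a two-sided and a one-sided test at a significance level of 0.05 (the test statistic value of 2.1 is made up for illustration):

```python
from statistics import NormalDist

alpha = 0.05

# Two-sided test: two critical values, symmetric about zero (±1.96 for alpha = 0.05)
two_sided = NormalDist().inv_cdf(1 - alpha / 2)

# One-sided test: a single critical value (≈ 1.645 for alpha = 0.05)
one_sided = NormalDist().inv_cdf(1 - alpha)

# A hypothetical test statistic: reject the null if it lies in the critical region
test_statistic = 2.1
reject = abs(test_statistic) > two_sided
print(two_sided, one_sided, reject)
```

Here 2.1 exceeds 1.96, so the null hypothesis would be rejected at the 5% level in a two-sided test.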

**Some Common Statistical Tests**

The following are a few tests:

**Z-test**

A z-test assumes that the sample is normally distributed. To test the claim that the sample is representative of the population, a z-score is computed using population parameters such as the population mean and the population standard deviation.

Null:

The mean of the sample coincides with the mean of the whole population.

Alternate:

The sample mean and the population mean are not the same.

The test statistic for this hypothesis test is the z-statistic, computed as:

z = (x̄ − μ) / (σ / √n)
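As a minimal sketch, the one-sample z-test can be computed directly from this formula (the sample mean, population parameters, and sample size below are made up for illustration):

```python
from math import sqrt
from statistics import NormalDist

def z_test(sample_mean, pop_mean, pop_std, n):
    """One-sample z-test: returns (z-statistic, two-sided p-value)."""
    z = (sample_mean - pop_mean) / (pop_std / sqrt(n))
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p

# Hypothetical numbers: sample of 50 with mean 105, drawn from a
# population with mean 100 and standard deviation 15
z, p = z_test(sample_mean=105, pop_mean=100, pop_std=15, n=50)
print(z, p)
```

With these illustrative numbers, z ≈ 2.36 and p < 0.05, so the null hypothesis that the sample mean equals the population mean would be rejected.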

**T-test**

To compare the means of two provided samples, a t-test is used.

A t-test makes the same assumption about the sample’s normal distribution that a z-test does.

A t-test is used in situations where the population parameters (mean and standard deviation) are unknown.

There are 3 versions of the t-test.

● Independent samples t-test: compares the means of two independent groups.

● Paired samples t-test: tests for differences in means between two samples taken from the same population at different times.

● One-sample t-test: compares the mean of a single sample to an already established mean.

Hypothesis testing using the t-statistic is performed by computing

t = (x̄1 − x̄2) / √(s1²/n1 + s2²/n2)
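As a sketch, an independent samples t-test can be run with `scipy.stats.ttest_ind` (the two small groups below are made-up measurements for illustration; `equal_var=False` gives Welch's version, which does not assume equal variances):

```python
from scipy import stats

# Hypothetical measurements from two independent groups
group_a = [12, 14, 15, 13, 16]
group_b = [18, 19, 17, 20, 21]

# Welch's independent samples t-test
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
print(t_stat, p_value)
```

For these illustrative data the group means (14 and 19) are clearly separated, giving t = −5.0 and a p-value well below 0.01, so the null hypothesis of equal means would be rejected.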

**ANOVA**

Analysis of variance (ANOVA) is a statistical technique designed to compare the means of three or more groups. ANOVA can be divided into two types.

1. One-way ANOVA may be utilised to examine the differences between three or more groups on a single independent variable.

2. Multivariate analysis of variance (MANOVA) extends ANOVA to two or more dependent variables, testing whether the independent factors influence them jointly. MANOVA can therefore detect effects of the independent factors across dependent variables that are correlated with one another.

The hypothesis being tested in ANOVA is

Null:

The means of all samples are equal; no pair of samples differs.

Alternate:

At least one sample mean differs significantly from the others.

The measure of statistical significance in this context is the F-statistic, computed as:

F = ((SSE1 − SSE2) / m) / (SSE2 / (n − k))
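As a minimal sketch, a one-way ANOVA can be run with `scipy.stats.f_oneway` (the three small groups below are made-up values for illustration):

```python
from scipy import stats

# Hypothetical measurements from three groups
group1 = [4, 5, 6]
group2 = [7, 8, 9]
group3 = [10, 11, 12]

# One-way ANOVA: tests whether all group means are equal
f_stat, p_value = stats.f_oneway(group1, group2, group3)
print(f_stat, p_value)
```

For these illustrative data the between-group variation is much larger than the within-group variation, giving F = 27 and p < 0.01, so the null hypothesis that all means are equal would be rejected.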

**The Chi-Square Test**

Often, categorical variables are compared with the chi-square statistic. Chi-square tests can be performed in one of two ways.

● The chi-square goodness-of-fit test compares an observed frequency distribution against an expected one, to determine whether the data fit the proposed model well.

● The chi-square test of independence uses a contingency table to determine whether two categorical variables are related.

In either case, a small chi-square statistic indicates that the data fit the model; when the statistic exceeds the critical value, there is a significant mismatch between the data and the model.
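As a sketch of the contingency-table case, a chi-square test of independence can be run with `scipy.stats.chi2_contingency` (the 2×2 counts below are made up for illustration, e.g. treatment vs control against outcome yes/no):

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table:
# rows = treatment / control, columns = outcome yes / no
observed = [[30, 10],
            [10, 30]]

# Returns the chi-square statistic, p-value, degrees of freedom,
# and the expected counts under independence
chi2, p, dof, expected = chi2_contingency(observed)
print(chi2, p, dof)
```

For these illustrative counts the chi-square statistic is large and p is far below 0.001, so the null hypothesis that the two variables are independent would be rejected. (For a 2×2 table, `chi2_contingency` applies Yates' continuity correction by default.)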
