
# Mann-Whitney Test

This online calculator performs the Mann–Whitney U test (also called the Mann–Whitney–Wilcoxon (MWW), Wilcoxon rank-sum test, or Wilcoxon–Mann–Whitney test).

As stated in the Two Sample t-Test article, you can apply the t-test if the following assumptions are met:

• That the two samples are independently and randomly drawn from the source population(s).
• That the scale of measurement for both samples has the properties of an equal interval scale.
• That the source population(s) can be reasonably supposed to have a normal distribution.

Sometimes, however, your data fails to meet the second and/or third requirement. For example, there may be nothing to indicate that the data has a normal distribution, or you may not have an equal interval scale - that is, the spacing between adjacent values cannot be assumed to be constant. But you still want to find out whether the difference between the two samples is significant. In such cases, you can use the Mann–Whitney U test, the non-parametric alternative to the t-test.

In statistics, the Mann–Whitney U test (also called the Mann–Whitney–Wilcoxon (MWW), Wilcoxon rank-sum test, or Wilcoxon–Mann–Whitney (WMW) test) is a nonparametric test of the null hypothesis that it is equally likely that a randomly selected value from one sample will be less than or greater than a randomly selected value from a second sample[1], or $p(X<Y)=0.5$. However, it is also used as a substitute for the independent groups t-test, with the null hypothesis that the two population medians are equal.

Note that there are actually two tests - the Mann-Whitney U test and the Wilcoxon rank-sum test. They were developed independently and use different measures, but are statistically equivalent.

The assumptions of the Mann-Whitney test are:

• That the two samples are randomly and independently drawn;
• That the dependent variable is intrinsically continuous, capable in principle, if not in practice, of producing measures carried out to the nth decimal place;
• That the measures within the two samples have the properties of at least an ordinal scale of measurement, so that it is meaningful to speak of "greater than," "less than," and "equal to."[2]

As you can see, this nonparametric test does not assume (or require) samples from normally distributed populations. Such tests are also called distribution-free tests.

## Word of caution

It has been known for some time that the Wilcoxon-Mann-Whitney test is adversely affected by heterogeneity of variance when the sample sizes are not equal. However, even when sample sizes are equal, very small differences between the population variances cause the large-sample Wilcoxon-Mann-Whitney test to become too liberal; that is, the actual Type I error rate of the large-sample Wilcoxon-Mann-Whitney test increases as the sample size increases.[3]

Hence you must remember that this test is valid only if the two population distributions are identical (including homogeneity of variance) apart from a shift in location.

## The method

The method replaces raw values with their corresponding ranks. With ranks, some results follow from simple math. For example, the total sum of ranks depends only on the total size N and equals $\frac{N*(N+1)}{2}$. Hence, the average rank is $\frac{N*(N+1)}{2}*\frac{1}{N}=\frac{N+1}{2}$.

The general idea is that if the null hypothesis is true and the samples are not significantly different, then the ranks are roughly balanced between samples A and B: the average rank of each sample should approximate the overall average rank, and the rank-sums should approximate $\frac{n_A*(N+1)}{2}$ and $\frac{n_B*(N+1)}{2}$ respectively.
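These rank-sum identities can be checked with simple arithmetic. A minimal sketch, using hypothetical sample sizes $n_A=4$ and $n_B=6$:

```python
# Hypothetical sizes, purely to illustrate the rank-sum identities above.
n_A, n_B = 4, 6
N = n_A + n_B

total_rank_sum = N * (N + 1) // 2      # sum of ranks 1..N
average_rank = (N + 1) / 2             # total_rank_sum / N

# Expected rank-sums of each sample under the null hypothesis:
expected_R_A = n_A * (N + 1) / 2
expected_R_B = n_B * (N + 1) / 2

# They must add up to the total rank sum regardless of the data.
assert expected_R_A + expected_R_B == total_rank_sum
```

Whatever the data, the two observed rank-sums also add up to $\frac{N*(N+1)}{2}$; only their split between the samples carries information.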

## The calculation

To perform the test, first you need to calculate a measure known as U for each sample.

You start by combining all values from both samples into a single set, sorting them by value, and assigning a rank to each value (in case of ties, each tied value receives the average of the ranks it spans). Ranks go from 1 to N, where N is the sum of the sample sizes $n_A$ and $n_B$. Then you calculate the sums of ranks $R_A$ and $R_B$ for the values of each sample.
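The ranking step can be sketched in Python. This is a minimal illustration; the helper `ranks_with_ties` and the sample values are hypothetical:

```python
def ranks_with_ties(values):
    """Assign ranks 1..N; tied values receive the average of their ranks."""
    sorted_vals = sorted(values)
    rank_of = {}
    i = 0
    while i < len(sorted_vals):
        # Find the run of equal values starting at position i.
        j = i
        while j < len(sorted_vals) and sorted_vals[j] == sorted_vals[i]:
            j += 1
        # Average of the 1-based ranks i+1 .. j spanned by the tied group.
        rank_of[sorted_vals[i]] = (i + 1 + j) / 2
        i = j
    return [rank_of[v] for v in values]

# Hypothetical samples; note the tied value 5 appears in both.
A = [3, 5, 8]
B = [5, 6, 7]
ranks = ranks_with_ties(A + B)
R_A = sum(ranks[:len(A)])   # rank-sum of sample A
R_B = sum(ranks[len(A):])   # rank-sum of sample B
```

Here the two values of 5 occupy ranks 2 and 3, so each receives rank 2.5, and the rank-sums still add up to $\frac{N*(N+1)}{2}=21$.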

Now you can calculate U as
$U_A=n_A*n_B+\frac{n_A*(n_A+1)}{2}-R_A\\U_B=n_A*n_B+\frac{n_B*(n_B+1)}{2}-R_B$
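A minimal sketch of these formulas, using hypothetical sample sizes and rank-sums:

```python
# Hypothetical sample sizes and rank-sums for illustration.
n_A, n_B = 3, 3
R_A, R_B = 9.5, 11.5   # rank-sums; they add up to N*(N+1)/2 = 21

U_A = n_A * n_B + n_A * (n_A + 1) / 2 - R_A
U_B = n_A * n_B + n_B * (n_B + 1) / 2 - R_B

# Handy consistency check: the two U's always sum to n_A * n_B.
assert U_A + U_B == n_A * n_B
```

The identity $U_A + U_B = n_A*n_B$ means you only need to compute one of the two U's directly.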

For small sample sizes you can use tabulated values. You take the minimum of two U's, and then compare it with the critical value corresponding to sample sizes and chosen significance level. Statistics textbooks usually list critical values in tables for sample sizes up to 20.

For large sample sizes you can use the z-test. It has been shown that U is approximately normally distributed if both sample sizes are equal to or greater than 5 (some sources say if $n_A*n_B>20$[4]).

$z=\frac{U-\mu_U}{\sigma_U}$,
where
$\mu_U=\frac{n_A*n_B}{2}\\ \sigma_U=\sqrt{\frac{n_A*n_B*(N+1)}{12}}$
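A minimal sketch of the z-score computation, with hypothetical values (both sample sizes at least 5):

```python
import math

# Hypothetical values for illustration.
n_A, n_B = 6, 6
N = n_A + n_B
U = 10.0                # the smaller of the two U statistics, say

mu_U = n_A * n_B / 2                            # mean of U under H0
sigma_U = math.sqrt(n_A * n_B * (N + 1) / 12)   # std. dev. of U (no ties)
z = (U - mu_U) / sigma_U
```

The resulting z is then compared against the standard normal distribution at the chosen significance level.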

In case of ties, formula for standard deviation becomes
$\sigma_U=\sqrt{\frac{n_A*n_B}{N*(N-1)}*[\frac{N^3-N}{12}-\sum_{j=1}^g\frac{t_{j}^3-t_j}{12}]}$
where $g$ is the number of groups of ties and $t_j$ is the number of tied ranks in group $j$.
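The tie correction can be sketched as follows; the combined sample here is hypothetical, and `collections.Counter` is used to find the tie groups:

```python
import math
from collections import Counter

# Hypothetical combined sample (n_A = n_B = 6) containing three tie groups:
# two 2's, three 3's, and two 8's.
combined = [1, 2, 2, 3, 3, 3, 4, 5, 6, 7, 8, 8]
n_A = n_B = 6
N = n_A + n_B

# t_j = size of each group of tied values; groups of size 1 contribute 0.
tie_sizes = [t for t in Counter(combined).values() if t > 1]
correction = sum((t**3 - t) / 12 for t in tie_sizes)

sigma_U = math.sqrt(n_A * n_B / (N * (N - 1)) * ((N**3 - N) / 12 - correction))
```

When there are no ties, the correction term is zero and the formula reduces to the simpler $\sigma_U$ above (since $\frac{N^3-N}{12} = \frac{N*(N-1)*(N+1)}{12}$).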

The calculator below uses the z-test. Of course, there is a limitation on sample sizes (both should be equal to or greater than 5), but this is probably not much of a limitation in real cases.
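Putting the steps together, here is a minimal end-to-end sketch of the large-sample test. The function and sample data are hypothetical, and the tie correction to $\sigma_U$ is omitted for brevity:

```python
import math

def mann_whitney_z_test(A, B):
    """Large-sample Mann-Whitney U test; returns (U, z, two-sided p).

    Minimal sketch: ties receive average ranks, but the tie correction
    to sigma_U is omitted for brevity.
    """
    combined = sorted(A + B)

    def avg_rank(v):
        lo = combined.index(v) + 1          # first 1-based position of v
        hi = lo + combined.count(v) - 1     # last 1-based position of v
        return (lo + hi) / 2                # average rank over the tie group

    n_A, n_B = len(A), len(B)
    N = n_A + n_B
    R_A = sum(avg_rank(v) for v in A)
    U_A = n_A * n_B + n_A * (n_A + 1) / 2 - R_A
    U = min(U_A, n_A * n_B - U_A)           # U_B = n_A*n_B - U_A

    mu_U = n_A * n_B / 2
    sigma_U = math.sqrt(n_A * n_B * (N + 1) / 12)
    z = (U - mu_U) / sigma_U                # z <= 0 since U <= mu_U
    p = 2 * 0.5 * (1 + math.erf(z / math.sqrt(2)))  # two-sided p-value
    return U, z, p

# Hypothetical, completely separated samples: U = 0, small p-value.
U, z, p = mann_whitney_z_test([1, 2, 3, 4, 5], [6, 7, 8, 9, 10])
```

Since the samples do not overlap at all, U comes out as 0 and the two-sided p-value falls below common significance levels.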

### Mann-Whitney U test

The calculator reports the ranks table, U for sample A, U for sample B, the U mean and standard deviation, the absolute value of the z-score, and the level of confidence for both the non-directional and directional hypotheses (results shown with 2 digits after the decimal point).