Adventures in Machine Learning

Mastering Kruskal-Wallis and Dunn’s Test: A Comprehensive Guide


The Kruskal-Wallis test is a statistical test used to compare the medians of three or more independent groups. It is particularly useful when the data do not meet the assumptions of normality required by traditional parametric tests, such as One-Way ANOVA.

In this article, we will explore the Kruskal-Wallis test, compare it to One-Way ANOVA, and learn about Dunn’s test, a post-hoc test used to determine which groups are significantly different from one another.

I. Introduction to the Kruskal-Wallis Test

The Kruskal-Wallis test is a non-parametric statistical test that can be used to analyze the differences between multiple groups.

Essentially, it evaluates whether or not there are statistically significant differences between the medians of groups. It is useful when data does not meet the assumptions for normality and equal variances that are required for One-Way ANOVA.

The primary difference between the Kruskal-Wallis test and One-Way ANOVA is that Kruskal-Wallis compares medians rather than means. This makes it more resistant to outliers that might skew the data.

II. Comparison to One-Way ANOVA

One-Way ANOVA is a parametric statistical test used to determine if there are any significant differences between the means of two or more independent groups.

In contrast, Kruskal-Wallis is a non-parametric test that tests for differences in medians. One-Way ANOVA is appropriate when the data sets in each group can be assumed to be normally distributed, and have equal variances.
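To make the contrast concrete, here is a minimal sketch (the group values are hypothetical) that runs both tests on the same data using SciPy:

```python
from scipy import stats

# Three groups; group_c contains an extreme outlier that inflates its mean
group_a = [2, 4, 6, 8, 10]
group_b = [1, 3, 5, 7, 9]
group_c = [2, 3, 5, 8, 100]

# Parametric: compares means, so it is sensitive to the outlier
f_stat, anova_p = stats.f_oneway(group_a, group_b, group_c)

# Non-parametric: rank-based, so the outlier has limited influence
h_stat, kw_p = stats.kruskal(group_a, group_b, group_c)

print(f"One-Way ANOVA p-value:  {anova_p:.4f}")
print(f"Kruskal-Wallis p-value: {kw_p:.4f}")
```

Because Kruskal-Wallis operates on ranks, replacing the outlier 100 with an even larger value would leave its H statistic unchanged, whereas the ANOVA F statistic would shift.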

When these assumptions are not met, a non-parametric test such as Kruskal-Wallis may be used instead.

III. Introduction to Dunn’s Test

Suppose a Kruskal-Wallis test concludes that there is a statistically significant difference among three or more groups, and you want to determine which specific groups differ from each other. In that case, a post-hoc Dunn’s test can be conducted.

Dunn’s test compares each group to every other group and identifies which pairs are statistically different from each other. It can accommodate groups of unequal sizes and assumes only that the observations are independent; it does not require the data to be normally distributed.
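Under the hood, Dunn’s test compares mean ranks between pairs of groups. The sketch below (hypothetical data, simplified to a single pair and ignoring the tie correction for clarity; the full test ranks all groups together) computes the z statistic for one comparison:

```python
import numpy as np
from scipy.stats import rankdata

# Two hypothetical independent groups
a = [2, 4, 6, 8, 10]
c = [11, 12, 13, 14, 15]

# Rank all observations together, then split the ranks back per group
pooled = np.concatenate([a, c])
ranks = rankdata(pooled)
r_a, r_c = ranks[:len(a)], ranks[len(a):]

# Dunn's z statistic: difference in mean ranks over its standard error
n = len(pooled)
se = np.sqrt(n * (n + 1) / 12 * (1 / len(a) + 1 / len(c)))
z = (r_a.mean() - r_c.mean()) / se
print(f"z = {z:.3f}")
```

A large |z| indicates that the two groups occupy very different parts of the pooled ranking; in practice a library handles this computation, including the tie correction, for every pair at once.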

IV. Example of Dunn’s Test in Python

Suppose we want to perform a Dunn’s test to determine whether certain fertilizers have a significant impact on plant growth.

We can separate 15 plants into three groups of five: plants using fertilizer A, plants using fertilizer B, and plants using fertilizer C. After measuring each plant’s growth, we obtain the following data:

```
Fertilizer A: 2, 4, 6, 8, 10
Fertilizer B: 1, 3, 5, 7, 9
Fertilizer C: 11, 12, 13, 14, 15
```

The posthoc_dunn() function from the scikit-posthocs library can be used to perform Dunn’s test in Python:

```python
import scipy.stats
from scikit_posthocs import posthoc_dunn

data = [
    [2, 4, 6, 8, 10],      # Fertilizer A
    [1, 3, 5, 7, 9],       # Fertilizer B
    [11, 12, 13, 14, 15],  # Fertilizer C
]

# Run the Kruskal-Wallis test first
stat, p = scipy.stats.kruskal(*data)
print(f"Kruskal-Wallis H = {stat:.4f}, p = {p:.4f}")

# Perform Dunn's test with Bonferroni-adjusted p-values
p_values = posthoc_dunn(data, p_adjust='bonferroni')
print(p_values)
```

V. P-value Adjustment Choices

Dunn’s test produces a matrix of p-values, one for each pairwise group comparison.

In our fertilizer example, we will have three p-values, one for each pairwise comparison between the three groups. P-values are adjusted for multiple comparisons using various approaches, one of which is the Bonferroni correction.
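The mechanics of the Bonferroni adjustment are simple enough to reproduce by hand; the raw p-values below are hypothetical:

```python
# Raw (unadjusted) p-values from three hypothetical pairwise comparisons
raw_p = [0.0276, 0.0041, 0.00033]

# Bonferroni: multiply each p-value by the number of comparisons, capping at 1
m = len(raw_p)
adjusted = [min(p * m, 1.0) for p in raw_p]
print(adjusted)
```

Multiplying by the number of comparisons is equivalent to testing each pair at a stricter significance level of 0.05 / m, which is what keeps the family-wise error rate at or below 0.05.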

Bonferroni correction is a conservative correction, but it controls the family-wise error rate: the probability of committing at least one Type I error across all hypothesis tests.

VI. Interpreting the Results of Dunn’s Test

Consider the adjusted p-values for our fertilizer example:

|   | A      | B      | C      |
|---|--------|--------|--------|
| A | 1.0000 | 0.0829 | 0.0010 |
| B | 0.0829 | 1.0000 | 0.0122 |
| C | 0.0010 | 0.0122 | 1.0000 |

Each adjusted p-value is the probability, under the null hypothesis of no difference, of observing a difference between two groups at least as extreme as the one measured. If the p-value is less than the significance level, typically 0.05, we can reject the null hypothesis and conclude that there is a statistically significant difference between the two groups.

Reading off the off-diagonal entries, the p-values below 0.05 are A vs. C (0.0010) and B vs. C (0.0122), so there is a statistically significant difference between Fertilizer A and Fertilizer C, as well as between Fertilizer B and Fertilizer C. The A vs. B comparison (0.0829) is not significant at the 0.05 level.
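Since posthoc_dunn() returns its matrix as a pandas DataFrame, the significant pairs can also be read off programmatically; the sketch below rebuilds the matrix from the table above:

```python
import pandas as pd

# Adjusted p-value matrix reproduced from the table above
labels = ['A', 'B', 'C']
p = pd.DataFrame(
    [[1.0000, 0.0829, 0.0010],
     [0.0829, 1.0000, 0.0122],
     [0.0010, 0.0122, 1.0000]],
    index=labels, columns=labels,
)

# Report every pair whose adjusted p-value falls below the 0.05 threshold
alpha = 0.05
for i, a in enumerate(labels):
    for b in labels[i + 1:]:
        if p.loc[a, b] < alpha:
            print(f"{a} vs {b}: p = {p.loc[a, b]:.4f} (significant)")
```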

Concluding Thoughts

The Kruskal-Wallis test is a non-parametric test used to evaluate differences in medians across three or more independent groups, and it is particularly useful when data do not meet the normality and equal-variance assumptions required by parametric tests such as One-Way ANOVA. Dunn’s test is a post-hoc test used to determine which specific groups are significantly different from each other following a significant Kruskal-Wallis result, and the Bonferroni correction provides a conservative way to adjust its p-values for multiple comparisons.

Overall, Kruskal-Wallis and Dunn’s test are essential tools for analyzing data when parametric assumptions are not met, enabling researchers to better understand their data and make informed decisions while controlling the risk of false positives.
