## Measuring Dependencies between Nominal/Categorical Variables

Have you ever wondered how you can measure the relationship between categorical variables? When dealing with continuous variables, we often use measures such as covariance or correlation to measure the strength of the relationship between two variables.

However, when we deal with nominal or categorical data, the methods to measure dependency become a bit more complex. In this article, we will explore two different methods – contingency tables and the chi-squared test – that are often used to measure dependencies between nominal/categorical variables.

## Covariance and Correlation for Continuous Variables

Let’s start with a quick refresher on measures of correlation for continuous variables. Correlation tells us the strength and direction of the linear relationship between two continuous variables.

The measure of correlation that we are all familiar with is Pearson’s correlation coefficient. However, nominal/categorical values have no natural numeric scale or ordering, so Pearson’s correlation coefficient cannot be applied to nominal/categorical data.

## Contingency Tables and the Chi-Squared Test for Nominal/Categorical Variables

Contingency tables are used when we have two categorical variables and want to examine the relationship between them. They provide us with a visual representation of how the frequencies of categories in one variable are related to those in another variable.

The contingency table will show us how many occurrences there are for each of the possible combinations of the two variables. A very simple example of this is when we have two variables: gender and favorite color.

To create a contingency table, we would count the frequency of males and females in each category of the favorite color. The table would look something like this:

|        | Blue | Red | Green |
|--------|------|-----|-------|
| Male   | 12   | 8   | 15    |
| Female | 6    | 9   | 14    |

Here, we can see that green is the most common favorite color for both males and females.

This method of creating contingency tables allows us to very easily and quickly see how the variables are related. To measure the dependency between the variables, we use a chi-squared test.
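The counting step can be sketched in a few lines of Python; the records below are a small made-up sample, not the data behind the table above.

```python
from collections import Counter

# Hypothetical raw observations: one (gender, favorite_color) pair per person.
records = [
    ("Male", "Blue"), ("Male", "Blue"), ("Male", "Green"),
    ("Female", "Red"), ("Female", "Green"), ("Female", "Green"),
]

# Each cell of the contingency table is the count of one combination.
table = Counter(records)

print(table[("Male", "Blue")])     # 2
print(table[("Female", "Green")])  # 2
```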

The test helps us to determine whether the frequency of a given observation in the table is statistically significant or simply due to chance. If the chi-squared test returns a very low p-value – typically p < 0.05 – we can conclude that there is a significant association between the two variables.

## Actual and Expected Distributions

To perform the chi-squared test, we compare the actual distribution – the counts from the contingency table – with the distribution we would expect if there were no association between the variables. The expected distribution is calculated from the marginal totals under the assumption of independence.

## Marginal Probability and Independence

Marginal totals are simply the sums of the rows and columns of the contingency table. In the previous example, the marginal totals for gender were 35 for males and 29 for females, and the marginal totals for favorite color were 18 for blue, 17 for red, and 29 for green. Dividing each total by the grand total of 64 observations turns it into a marginal probability.

The independence assumption states that the occurrence of a category in one variable does not depend on the occurrence of a category in the other variable. If the two variables are independent, the expected probability of each cell is simply the product of the marginal probabilities of the respective categories; multiplying by the grand total gives the expected count, which works out to row total × column total / grand total.

For example, if we assume that gender and favorite color are independent, we would expect the following distribution:

|        | Blue | Red  | Green |
|--------|------|------|-------|
| Male   | 9.84 | 9.30 | 15.86 |
| Female | 8.16 | 7.70 | 13.14 |

The expected distribution shows us what we would expect to observe if the two variables were independent. We can then compare the expected distribution to the actual distribution and perform the chi-squared test.
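The whole comparison can be sketched in plain Python, using the observed counts from the gender/color table above:

```python
# Observed counts: rows are Male and Female; columns are Blue, Red, Green.
observed = [[12, 8, 15],
            [6, 9, 14]]

row_totals = [sum(row) for row in observed]        # [35, 29]
col_totals = [sum(col) for col in zip(*observed)]  # [18, 17, 29]
grand_total = sum(row_totals)                      # 64

# Expected count under independence: row total * column total / grand total.
expected = [[r * c / grand_total for c in col_totals] for r in row_totals]

# Pearson chi-squared statistic: sum of (O - E)^2 / E over all cells.
chi2 = sum((o - e) ** 2 / e
           for obs_row, exp_row in zip(observed, expected)
           for o, e in zip(obs_row, exp_row))

print(round(chi2, 3))  # 1.544
```

With a statistic of roughly 1.54 against a 5% critical value of 5.99 (for 2 degrees of freedom), this particular table does not show a significant association.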

## Conclusion

Using contingency tables and the chi-squared test is a straightforward method for measuring association between nominal/categorical variables. By creating a contingency table and comparing the observed distribution to the expected distribution, we can determine whether there is a significant association between two variables.

This method can be applied in various areas, such as social sciences, medical research, marketing, and more. Remember, whenever we deal with nominal/categorical data, the chi-squared test is a valuable method to test dependencies and relationships between variables.

## Chi-Squared Test

The chi-squared test is widely used to determine the association between two categorical variables. It is a statistical hypothesis test used to determine whether there is a significant difference between the observed and expected frequencies of the two nominal variables.

In this article, we will dive deeper into the computation of the test’s chi-squared value, the null hypothesis, critical points, degrees of freedom, calculation of probability and a naive approach to obtaining expected frequencies.

## Null Hypothesis and Critical Points

The test’s null hypothesis states that the two variables are independent. The chi-squared test is a one-tailed test that rejects the null hypothesis if the calculated chi-squared value exceeds the critical point.

The critical point is determined based on the degrees of freedom and the significance level, typically set at 5% (0.05). The chi-squared distribution table is used to determine the critical point.

## Calculation of Chi-Squared Value

The chi-squared value is calculated using the Pearson chi-squared formula. The formula involves taking the square of the difference between the observed and expected frequencies, dividing each value by the expected frequency, and summing up all the results.

The formula is as follows:

χ² = Σ (O – E)² / E

where O represents the observed frequency, E represents the expected frequency, and the sum runs over every cell of the contingency table.

## Degrees of Freedom and Probability

For a contingency table, the degrees of freedom (df) equal (number of rows − 1) × (number of columns − 1). This is the number of cell counts that can still vary freely once the marginal totals are fixed.

In turn, degrees of freedom determines the critical point, as mentioned earlier.

The p-value of the chi-squared test is the probability, under the null hypothesis, of observing a statistic at least as large as the one calculated; it is obtained from the chi-squared distribution with the appropriate degrees of freedom.

The significance level is typically set at 5% unless otherwise stated.
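For the 2×3 gender/color table, df = (2 − 1) × (3 − 1) = 2. For df = 2 specifically, the survival function of the chi-squared distribution has the closed form P(X > x) = exp(−x/2), so the p-value can be checked without a statistics library (for other df values you would use a lookup table or a function such as scipy.stats.chi2.sf):

```python
import math

chi2_value = 1.544      # statistic from the gender/color example
df = (2 - 1) * (3 - 1)  # (rows - 1) * (columns - 1) = 2

# Exact survival function of the chi-squared distribution for df = 2.
p_value = math.exp(-chi2_value / 2)

print(round(p_value, 3))  # 0.462
print(p_value < 0.05)     # False: we fail to reject independence
```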

## Chi-Squared Calculation: a Naive Approach

A common problem that arises when working with the chi-squared test is obtaining the expected frequencies, especially if our data set has many categories or missing data.

A naive approach is to take the grand total of observations, the row totals, and the column totals directly from the data, recomputing them wherever they are needed. Each expected frequency then follows the formula below:

E = (row total * column total) / grand total

However, this method repeats the same total computations over and over, and it offers no safeguard when the data set has few observations: small expected frequencies make the chi-squared approximation unreliable (a common rule of thumb requires every expected frequency to be at least 5).

## Calculation Steps using Common Table Expressions (CTE)

A better approach to obtaining the expected frequencies is using common table expressions (CTEs). CTEs behave like temporary tables: they store an intermediate result that subsequent parts of the query can reference.

In the case of the chi-squared test, the CTE can help create a query that calculates the expected frequencies.

To begin with, we calculate the marginal frequencies with the SUM aggregate: one CTE totals the counts for each category of the first variable (the row totals) and another totals the counts for each category of the second variable (the column totals).

```
WITH
row_totals AS (
    SELECT col1, SUM(num) AS total
    FROM data_table
    GROUP BY col1
),
col_totals AS (
    SELECT col2, SUM(num) AS total
    FROM data_table
    GROUP BY col2
),
```

We then join these marginal CTEs back onto the data table to calculate the expected frequency for each cell.

```
expected AS (
    SELECT d.col1, d.col2, d.num AS observed,
        -- 1.0 forces floating-point division
        1.0 * r.total * c.total
            / (SELECT SUM(num) FROM data_table) AS expected
    FROM data_table AS d
    JOIN row_totals AS r ON d.col1 = r.col1
    JOIN col_totals AS c ON d.col2 = c.col2
)
```

Finally, we use the observed and expected values from that CTE to calculate the chi-squared statistic, just as in the conventional method.

```
SELECT SUM(POW(observed - expected, 2) / expected) AS chi2
FROM expected;
```
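Assuming a table data_table(col1, col2, num) that holds the cell counts, the complete statement can be tried end to end with Python’s built-in sqlite3 module (the data is the gender/color example from earlier; multiplication stands in for POW, whose availability varies across SQLite versions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data_table (col1 TEXT, col2 TEXT, num INTEGER)")
conn.executemany(
    "INSERT INTO data_table VALUES (?, ?, ?)",
    [("Male", "Blue", 12), ("Male", "Red", 8), ("Male", "Green", 15),
     ("Female", "Blue", 6), ("Female", "Red", 9), ("Female", "Green", 14)],
)

query = """
WITH
row_totals AS (
    SELECT col1, SUM(num) AS total FROM data_table GROUP BY col1
),
col_totals AS (
    SELECT col2, SUM(num) AS total FROM data_table GROUP BY col2
),
expected AS (
    SELECT d.num AS observed,
           1.0 * r.total * c.total
               / (SELECT SUM(num) FROM data_table) AS expected
    FROM data_table AS d
    JOIN row_totals AS r ON d.col1 = r.col1
    JOIN col_totals AS c ON d.col2 = c.col2
)
SELECT SUM((observed - expected) * (observed - expected) / expected)
FROM expected
"""
chi2 = conn.execute(query).fetchone()[0]
print(round(chi2, 3))  # 1.544
```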

The CTE method is computationally more costly because of the extra joins, but it provides a reliable way to obtain the expected frequencies, since it accounts for every combination of categories in the dataset.

## Conclusion

The chi-squared test is a widely used statistical method to determine the association between categorical variables. The test’s value is calculated using the Pearson chi-squared formula, and its p-value is obtained from the chi-squared distribution with the appropriate degrees of freedom.

While obtaining expected frequencies using the naive approach can be a solution, it is better to use CTEs as they account for the various combinations of categories in a dataset. Understanding the steps to calculate the chi-squared test is essential when working with categorical data, as it helps us infer the relationship between categorical variables.

## Chi-Squared Calculation: an Efficient Approach

In the previous section, we introduced the naive approach as a back-up solution for obtaining expected frequencies but also presented CTEs and self-joins as a more reliable method. However, this approach is computationally costly.

In this section, we will introduce Window Aggregate Functions, which are even more efficient than the previous method. We will also discuss the steps to calculate the chi-squared value using CTEs and Window Aggregate Functions.

## Window Aggregate Functions

A window aggregate function computes a value over a group of rows related to the current row, called the window, without collapsing those rows the way GROUP BY does. The window definition can also specify an ordering, which controls the order in which rows enter the calculation.

In other words, a window function lets every row see an aggregate of its window while keeping the row itself. The following are some common window functions:

- ROW_NUMBER: assigns a unique integer value to each row within its window.
- LAG and LEAD: return the value of a given column from the previous or next row, respectively, within the same window.
- SUM, AVG, MIN, MAX: computes the sum, average, minimum, or maximum value of the rows within the defined window.
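A minimal illustration using Python’s sqlite3 (window functions need SQLite 3.25 or newer, which ships with all recent Python builds); the sales table and its values are made up for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("north", 10), ("north", 20), ("south", 5)])

# SUM(...) OVER (PARTITION BY ...) repeats the per-group total on every row
# instead of collapsing the group to a single row as GROUP BY would.
rows = conn.execute("""
    SELECT region, amount,
           SUM(amount) OVER (PARTITION BY region) AS region_total,
           SUM(amount) OVER () AS grand_total
    FROM sales
    ORDER BY region, amount
""").fetchall()

for row in rows:
    print(row)
# ('north', 10, 30, 35)
# ('north', 20, 30, 35)
# ('south', 5, 5, 35)
```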

## Steps to Calculate Chi-Squared Value using CTE and Window Aggregate Functions

To calculate the chi-squared value using Window Aggregate Functions, we again compute the expected values in a CTE, just as in the previous section. However, instead of self-joins, we use window functions whose PARTITION BY clauses define which rows form the frame for each total.

The final query would be as follows:

```
WITH
expected_cte AS (
    SELECT
        category1,
        category2,
        count AS observed,
        -- Row total, column total, and grand total in a single pass;
        -- 1.0 forces floating-point division.
        1.0 * SUM(count) OVER (PARTITION BY category1)
            * SUM(count) OVER (PARTITION BY category2)
            / SUM(count) OVER () AS expected
    FROM example_table
)
SELECT
    SUM(POW(observed - expected, 2) / expected) AS chi_squared
FROM expected_cte;
```
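Feeding the same gender/color counts used earlier into this query confirms that the window-function version produces the same statistic (again via sqlite3; "count" is quoted here because it collides with the SQL keyword, and multiplication stands in for POW):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    'CREATE TABLE example_table (category1 TEXT, category2 TEXT, "count" INTEGER)')
conn.executemany(
    "INSERT INTO example_table VALUES (?, ?, ?)",
    [("Male", "Blue", 12), ("Male", "Red", 8), ("Male", "Green", 15),
     ("Female", "Blue", 6), ("Female", "Red", 9), ("Female", "Green", 14)],
)

chi2 = conn.execute("""
    WITH expected_cte AS (
        SELECT "count" AS observed,
               1.0 * SUM("count") OVER (PARTITION BY category1)
                   * SUM("count") OVER (PARTITION BY category2)
                   / SUM("count") OVER () AS expected
        FROM example_table
    )
    SELECT SUM((observed - expected) * (observed - expected) / expected)
    FROM expected_cte
""").fetchone()[0]

print(round(chi2, 3))  # 1.544
```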

The query computes the observed and expected frequency for every cell inside a single CTE. Each SUM(count) OVER (PARTITION BY ...) window yields the marginal total for the current row’s category, and SUM(count) OVER () yields the grand total over the whole table, so no self-joins are required.

The chi-squared value is then calculated by summing the per-cell contributions over the whole table.

## Conclusion and Future Articles

In this article, we have examined the chi-squared test, an essential statistical tool for determining the association between categorical variables. We discussed how to obtain expected frequencies using two different methods: self-joins and Window Aggregate Functions.

The latter provided a more efficient approach that didn’t rely on self-joins. In future articles, we will explore how we can measure the level of association between one continuous and one discrete variable, a field that can offer invaluable insights into a diverse range of fields such as social sciences, medical research, business and applied mathematics.

We will also look into practical uses of Window Aggregate Functions and Common Table Expressions in database management systems.

To recap: the chi-squared test is a statistical tool for measuring the association between two categorical variables, and obtaining the expected frequencies is the essential step of the calculation. This article covered two ways to compute the statistic with Common Table Expressions, one based on self-joins and a more efficient one based on Window Aggregate Functions, both of which account for every combination of categories in a dataset.