How Does the Chi-Square Test (e.g. Goodness-of-Fit) Really Work?

One thing that always puzzled me, when I first started learning statistics, was the family of Chi-squared tests.  Adding up rescaled squared differences of course gives us a distance measure, but why would the denominator be E? After all, the variance of a binomial, for example, would be

np(1-p) = E(1-p),

so for this to be Chi-squared distributed, should we not be dividing by E(1-p)? And, if so, is our \chi^2 statistic not out by a factor of (1-p)? For any reasonable-sized p, that could be quite a large discrepancy.

The web turned out to be full of nasty red herrings trying to fool me into the wrong conclusion. I trawled through pages that claim we have a Poisson distribution when we don’t (and which have no way of justifying the degrees of freedom we are using), and pages that admit that their derivations are out by (1-p) but simply suggest that it’s “preferable to omit the factors (1 – pi) in the denominators” (!).

It’s all in the covariance matrix

Eventually, I found some good resources, and the way it works is really surprisingly nifty! I have detailed some good and bad derivations, along with my own more concise version of the good derivations here, but all you really need to know is that the covariance matrix of your \frac{o_i-e_i}{\sqrt{e_i}} terms looks like this:

\Sigma = I - \sqrt{\mathbf{p}}\sqrt{\mathbf{p}}^T,

where elements of the (notation-abusing) vector \sqrt{\mathbf p}=\begin{pmatrix}\sqrt{p_1},\sqrt{p_2},…,\sqrt{p_m}\end{pmatrix}^T are the square roots of the probabilities p_i of each respective outcome in the multinomial distribution that we are modelling. I should, of course, stress that the goodness-of-fit test is designed for testing multinomial outcomes, and should not be confused with tests for other kinds of count data, for which similar, but subtly different, \chi^2 tests can sometimes be derived. Crucially, note that \sqrt{\mathbf{p}} is a unit vector (since the probabilities must all sum to one).
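If you want to convince yourself of this result numerically before reading on, here is a minimal simulation sketch (the probabilities, number of trials and number of repetitions are arbitrary choices of mine, and numpy is assumed): it draws many multinomial count vectors, forms the \frac{o_i-e_i}{\sqrt{e_i}} terms, and compares their sample covariance with I - \sqrt{\mathbf{p}}\sqrt{\mathbf{p}}^T.

```python
import numpy as np

rng = np.random.default_rng(0)

p = np.array([0.5, 0.3, 0.2])   # arbitrary example probabilities
n = 200                         # trials per multinomial sample
reps = 100_000                  # number of simulated count vectors

e = n * p                                   # expected counts
o = rng.multinomial(n, p, size=reps)        # observed counts, shape (reps, m)
z = (o - e) / np.sqrt(e)                    # the (o_i - e_i) / sqrt(e_i) terms

empirical = np.cov(z, rowvar=False)         # sample covariance of the rescaled terms
theoretical = np.eye(len(p)) - np.outer(np.sqrt(p), np.sqrt(p))

print(np.round(empirical, 3))
print(np.round(theoretical, 3))
```

The two printed matrices should agree to within Monte Carlo error, with (1-p_i) on the diagonal and -\sqrt{p_ip_j} off it.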

What does this covariance matrix tell me?

If you’re super happy with linear algebra and covariance matrices, you can skip to the next subsection. If you’re new to linear algebra (the vectors and matrices I’ve used above), I’ll show you a diagram of it in a minute, and underneath the first diagram is a simpler summary. If you have studied some basic linear algebra, but are not sure how to understand this matrix, think of it this way.

This matrix tells us how much variance we have in any given direction: multiply a vector that points in that direction into the matrix, and see how much it gets stretched. For example, if I multiply the vector \sqrt{\mathbf{p}} into this matrix, I will get a vector full of zeros as a result (bear in mind that \sqrt{\mathbf{p}} is a unit vector, so \sqrt{\mathbf{p}}^T\sqrt{\mathbf{p}}=1). So, we can see that in the direction \sqrt{\mathbf{p}}, our vector gets multiplied by zero, and there is zero variance.

Now, imagine we multiplied a new vector \mathbf{x} into this matrix that is orthogonal to \sqrt{\mathbf{p}} (that is, \sqrt{\mathbf{p}}^T\mathbf x = 0). Then the second term in the covariance matrix will give us zeros, and the matrix acts like the identity matrix: that is, our vector \mathbf{x} gets stretched by a factor of 1 (i.e. not at all!). And the variance in every direction that is orthogonal to \sqrt{\mathbf{p}} is 1. Furthermore, since it acts as the identity matrix in these orthogonal directions, there is no correlation in the distributions along these vectors.
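Here is a quick numerical illustration of those two facts (only a sketch, with made-up probabilities): \Sigma sends \sqrt{\mathbf{p}} to the zero vector, leaves any vector orthogonal to \sqrt{\mathbf{p}} untouched, and has one zero eigenvalue with the remaining (m-1) eigenvalues equal to one.

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])          # arbitrary example probabilities
sqrt_p = np.sqrt(p)                    # a unit vector, since the p_i sum to 1
Sigma = np.eye(len(p)) - np.outer(sqrt_p, sqrt_p)

print(Sigma @ sqrt_p)                  # ~[0, 0, 0]: zero variance along sqrt(p)

x = np.array([1.0, 0.0, 0.0])
x = x - (sqrt_p @ x) * sqrt_p          # project x to be orthogonal to sqrt(p)
print(np.allclose(Sigma @ x, x))       # True: Sigma acts as the identity on x

print(np.linalg.eigvalsh(Sigma))       # one eigenvalue ~0, the rest ~1
```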

So, why is this Chi-squared distributed?

So, our covariance matrix explains everything!! If our count table has m elements, then we have one direction (\sqrt{\mathbf{p}}) that has zero variance (effectively does not exist), and (m-1) orthogonal directions in which we have the variance we wanted of 1, and zero covariance with each other. To visualize this, we have a unit sphere that has been squashed flat in the direction \sqrt{\mathbf{p}}. When we add up our m correlated, non-unit-variance, squared distances \frac{(o_i-e_i)^2}{e_i}, we end up with the same squared distance as if we had added up the (m-1) uncorrelated, unit-variance squared distances orthogonal to \sqrt{\mathbf{p}}, because rotating our axes does not change the squared length of a vector.

Assuming that this distribution tends towards being multivariate normal (and the central limit theorem tells us that it does), this squared distance will be \chi^2_{(m-1)}-distributed (based on the rotational symmetry of the normal distribution).
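If you would rather see that claim empirically than take the rotational-symmetry argument on trust, a rough simulation sketch like the following (the probabilities and sample sizes are again arbitrary choices of mine, and scipy is assumed) compares a few quantiles of the simulated statistic against those of \chi^2_{(m-1)}.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)

p = np.array([0.5, 0.3, 0.2])   # arbitrary example probabilities
m = len(p)
n = 500                         # trials per multinomial sample
reps = 100_000                  # number of simulated count vectors

e = n * p
o = rng.multinomial(n, p, size=reps)
stat = np.sum((o - e) ** 2 / e, axis=1)     # the chi-squared statistic, one per sample

qs = [0.5, 0.9, 0.95, 0.99]
print(np.quantile(stat, qs))                # empirical quantiles of the statistic
print(chi2.ppf(qs, df=m - 1))               # quantiles of chi^2 with m-1 degrees of freedom
```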

Visualising the covariance matrix

To see that we have unit variance, we need to rotate our perspective to look along the direction of the vector \sqrt{\mathbf{p}}, as illustrated in the figure below (which shows the distribution for a binomial with probability of success p_1=0.5). Looking along the axes, for example, if we consider only the number of successes in the binomial case, and ignore the number of fails, we do have the wrong variance, but that is because we are looking at our distribution from the wrong angle. Aligning ourselves to look at 45 degrees (in this case, since p(\mathrm{success})=0.5 and \sqrt{\mathbf{p}}=\begin{pmatrix}\sqrt{.5}\\\sqrt{.5}\end{pmatrix}), we are suddenly looking at a normal distribution with unit variance.

[Figure: the distribution for p(\mathrm{success}) = 0.5, with the \sqrt{\mathbf{p}} direction shown as an arrow]

If you’re struggling to follow the technical lingo, think of the above diagrams this way. The total number of trials is fixed, so every extra success means one fewer fail. That is why our distribution cuts across the middle of the graph in a straight line; we can’t go anywhere but that line; gaining a success must lose us a fail. If we look at just the successes or the fails in isolation, we have a variance of (1-p_i), just as I was pondering at the top. But when we rotate ourselves to look along this line we are tracing between the two, we can see, just by a simple application of Pythagoras, that our two “wrong” normal distributions collapse into a single normal distribution that has a variance of 1. Once we have unit-variance normal distributions, we can compare the sum of their squares against a \chi^2 distribution.
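To spell out that application of Pythagoras for the binomial case (m=2): since the counts must sum to n, we have o_2-e_2 = -(o_1-e_1), and so

\frac{(o_1-e_1)^2}{e_1} + \frac{(o_2-e_2)^2}{e_2} = (o_1-e_1)^2\left(\frac{1}{np_1}+\frac{1}{np_2}\right) = \frac{(o_1-e_1)^2}{np_1(1-p_1)},

where np_1(1-p_1) is exactly the binomial variance of o_1. So the sum of the two “wrong” terms is (asymptotically) the square of a single standard normal, i.e. \chi^2_{(1)}-distributed.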

Changing the probability of success

As the probability of success decreases, \sqrt{\mathbf{p}} rotates, and the proportion of variance contributed by each axis changes:

[Figure: the distribution for p(\mathrm{success}) = 0.25, with the \sqrt{\mathbf{p}} direction shown as an arrow]

As p(\mathrm{success}) gets closer and closer to zero, we get closer and closer to a Poisson, and (in this special case) the “Poisson” argument looks more credible; this can be seen as \sqrt{\mathbf{p}} starts to align with the “failures” axis as p(\mathrm{success}) \rightarrow 0 and the variance on the successes axis heads towards unity:

[Figure: the distribution for p(\mathrm{success}) = 0.01, with the \sqrt{\mathbf{p}} direction shown as an arrow]
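As a tiny numerical aside (n and the p values here are just my own picks), this is easy to see from the variances themselves: the binomial variance np(1-p) only approaches the Poisson variance np when p is small.

```python
# Binomial variance np(1-p) versus the Poisson variance np, for a shrinking p
n = 1000
for p in (0.5, 0.25, 0.01):
    print(f"p={p}: binomial variance = {n * p * (1 - p):.1f}, Poisson variance = {n * p:.1f}")
```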

A Concise Derivation of the Chi-Squared Tests

On this page, I will derive the covariance matrix for the terms \frac{o_i-e_i}{\sqrt{e_i}} in a Chi-Squared Goodness-of-Fit test. It is not strictly necessary to digest this whole derivation to understand why Chi-Squared tests work; the main point of interest in this post is the final result. For a discussion of this result that avoids the derivation, but tries to build intuitions and visualizations instead, see my other post here.

Preamble/Moan

I wrote this post (in fact, created this blog) out of a frustration with the quality of materials on this topic on the web. Many derivations out there are outright wrong, or gloss over the most important facts. This page, for example, claims that we are using E as the denominator in \frac{(O-E)^2}{E} because E is the variance of a Poisson distribution. But, in general, we do not have a Poisson distribution; if the probability p of a success on any given trial is nontrivial, then the variance differs (often substantially) from this, by a factor of (1-p). They also cannot account for why we lose any degrees of freedom. This page recognises that the (apparent!) error of (1-p) does exist, but waves it away by saying “There are theoretical reasons, beyond the scope of this book, that make it preferable to omit the factors (1 – pi)”. Seriously?!?! Why would it be preferable to omit a factor that could be arbitrarily far from 1?? And according to that derivation, we still haven’t lost any degrees of freedom anywhere.

Then I found this page and started to realise that the reason there aren’t clear explanations is that the real explanation is quite complicated, and those other pages I referenced were trying to avoid falling down the hole of making sense of it! That page is absolutely correct, but quite terse, and takes some reading. Finally, I stumbled across this wonderful, clear derivation, and the penny dropped!

However, looking over the derivation, I felt that it would be possible to capture the underlying gist in far less space, using a little bit of linear algebra. So, what follows is my as-concise-as-possible derivation of why the Chi-Squared test works. If any of it does not make sense to you, I refer you to the very clear derivation linked at the end of the last paragraph, on which this one was based.

The Derivation

To make this clean, we will need to use vector notation: I’ll define \mathbf{o} to be the vector of m observed counts for each multinomial category, and \mathbf{e} to be the vector of expected values. In the goodness-of-fit test, we are modelling the observed values as the counts from a multinomial distribution. So \mathbb{E}(\mathbf{o}) = \mathbf{e}, and the observed and expected values will sum to the same value (\mathbf{o}^T\mathbf{1} = \mathbf{e}^T\mathbf{1} = n, the total number of trials).

Our main obstacle is that of finding \mathbb{E}(\mathbf{oo}^T). For this part, I will use the analogy (and similar notation) from the derivation here. Our multinomial is like throwing n balls X_1,…,X_n into m buckets B_1,…,B_m, and our observed values o_i are given by the number of balls in each given bucket:

o_i = \sum_{l=1}^n I(X_l \in B_i).

We can use this analogy to find \mathbb{E}(\mathbf{oo}^T):

\mathbb{E}(o_io_j)
= \mathbb{E}\left(\left[\sum\limits_{l=1}^n I(X_l \in B_i)\right]\left[\sum\limits_{l'=1}^n I(X_{l'} \in B_j)\right]\right)
= \mathbb{E}\left(\sum_{l,l'} I(X_l \in B_i)I(X_{l'} \in B_j)\right)
= \mathbb{E}\left(\sum_{l=l'} I(X_l \in B_i)I(X_{l'} \in B_j) + \sum_{l \neq l'} I(X_l \in B_i)I(X_{l'} \in B_j)\right)
= \sum_{l=l'} \underbrace{\mathbb{E}\left(I(X_l \in B_i)I(X_{l'} \in B_j)\right)}_{=I(i=j)p_i} + \sum_{l \neq l'} \underbrace{\mathbb{E}\left(I(X_l \in B_i)I(X_{l'} \in B_j)\right)}_{=p_ip_j}
= n \left[ I(i=j)p_i \right] + n(n-1) \left[ p_ip_j \right]

= I(i=j)e_i + \frac{n-1}{n}e_ie_j,

where the result in the first underbrace above comes from the fact that a ball cannot be in two different buckets at the same time (so the \mathbb{E} there is zero for i \neq j, and is simply p_i when i = j), and the result in the second underbrace follows because distinct balls land in their buckets independently.

From this we have:

\mathbb{E}(\mathbf{oo}^T) = \mathrm{diag}(\mathbf{e})+\frac{n-1}{n}\mathbf{ee}^T,

where I have used \mathrm{diag} to denote the placing of the elements of the vector onto the diagonal of an otherwise zero matrix.
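This intermediate result is also easy to sanity-check by simulation before moving on (a sketch only, with an arbitrary p and n of my own choosing, and numpy assumed): average \mathbf{oo}^T over many multinomial draws and compare with \mathrm{diag}(\mathbf{e})+\frac{n-1}{n}\mathbf{ee}^T.

```python
import numpy as np

rng = np.random.default_rng(2)

p = np.array([0.5, 0.3, 0.2])   # arbitrary example probabilities
n = 50                          # balls thrown per experiment
reps = 200_000                  # number of simulated experiments

e = n * p
o = rng.multinomial(n, p, size=reps).astype(float)

empirical = np.einsum('ri,rj->ij', o, o) / reps        # Monte Carlo estimate of E(o o^T)
theoretical = np.diag(e) + (n - 1) / n * np.outer(e, e)

print(np.round(empirical, 1))
print(np.round(theoretical, 1))
```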

From here, deriving the covariance matrix of \mathbf{o}-\mathbf{e} is pretty easy, making use of the facts that \mathbb{E}(\mathbf{o})=\mathbf{e} and, therefore, \mathbb{E}(\mathbf{o}-\mathbf{e})=\mathbf{0}:

\mathrm{cov}(\mathbf{o}-\mathbf{e})
= \mathbb{E}\left[(\mathbf{o}-\mathbf{e})(\mathbf{o}-\mathbf{e})^T\right]
= \mathbb{E}(\mathbf{oo}^T) - \mathbb{E}(\mathbf{o})\mathbf{e}^T - \mathbf{e}\mathbb{E}(\mathbf{o})^T + \mathbf{ee}^T
= \mathbb{E}(\mathbf{oo}^T) - \mathbf{ee}^T
= \left[ \mathrm{diag}(\mathbf{e}) + \frac{n-1}{n}\mathbf{ee}^T \right] - \mathbf{ee}^T
= \mathrm{diag}(\mathbf{e}) -\frac{1}{n}\mathbf{ee}^T

Now, in the chi-squared formula, for each cell, we are calculating \left( \frac{o_i-e_i}{\sqrt{e_i}} \right)^2. What happens when we divide by \sqrt{e_i}? Here, for succinctness, I will use the slight abuse of notation that \sqrt{\mathbf{e}} is calculated by taking the square root of each element of \mathbf{e} independently:

\mathrm{cov} \left(  \left[\mathrm{diag}(\sqrt{\mathbf{e}})\right]^{-1} (\mathbf{o}-\mathbf{e})  \right)
= \left[\mathrm{diag}(\sqrt{\mathbf{e}})\right]^{-1} \mathrm{cov} (\mathbf{o}-\mathbf{e}) \left[\mathrm{diag}(\sqrt{\mathbf{e}}) \right]^{-1}
= \left[\mathrm{diag}(\sqrt{\mathbf{e}})\right]^{-1} \left( \mathrm{diag}(\mathbf{e}) -\frac{1}{n}\mathbf{ee}^T \right) \left[\mathrm{diag}(\sqrt{\mathbf{e}}) \right]^{-1}
= I - \frac{1}{n}\sqrt{\mathbf{e}}\sqrt{\mathbf{e}}^T
= I - \sqrt{\mathbf{p}}\sqrt{\mathbf{p}}^T

where the vector \sqrt{\mathbf{p}} is the vector of the square roots of the probabilities of each multinomial outcome. Importantly, this vector is a unit vector, so the covariance matrix is singular, having a zero eigenvalue in the direction \sqrt{\mathbf{p}}, and unit eigenvalues in (m-1) directions orthogonal to it. So, our covariance matrix tells us that our \frac{o_i-e_i}{\sqrt{e_i}} terms are distributed according to a unit sphere that has been squashed flat in one dimension. Looking along the diagonal, we see that each variance in isolation is indeed (1-p_i); it is when we consider the shape of all the terms together that the \chi^2-distributed nature emerges. I go into more intuitions, and present some visualizations based on this, here.
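Finally, to connect the result back to how the test is actually used, here is a minimal sketch (the observed counts and probabilities are invented purely for illustration, and scipy is assumed): the statistic \sum_i \frac{(o_i-e_i)^2}{e_i} is compared against \chi^2_{(m-1)}, with scipy’s chisquare as a cross-check.

```python
import numpy as np
from scipy.stats import chi2, chisquare

o = np.array([48, 35, 17])          # invented observed counts
p = np.array([0.5, 0.3, 0.2])       # hypothesised multinomial probabilities
n = o.sum()
e = n * p                           # expected counts under the null hypothesis

stat = np.sum((o - e) ** 2 / e)                # sum of (o_i - e_i)^2 / e_i
p_value = chi2.sf(stat, df=len(o) - 1)         # compare against chi^2 with m-1 degrees of freedom
print(stat, p_value)

# Cross-check against scipy's built-in goodness-of-fit test
print(chisquare(o, f_exp=e))
```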