[Q] Why does √(p(1-p)) measure stdev? How is it analogous to the “squared differences” formula?

I am generally quite comfortable with the concepts of variance and stdev in the abstract: they quantify the amount of dispersion of values in a sample or population from the central tendency. This interpretation falls right out of the formula. Take stdev, which is roughly the average dispersion:

√( (1/n) ∑(xᵢ - x̄)² )

where (xᵢ - x̄)² gives us the dispersion (= squared distance from the mean), and (1/n) ∑ basically gives us the average. So to me, that formula is quite simple to pick apart and transparent regarding what it represents.
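To make that concrete for myself, here’s a tiny Python sketch of the “squared differences” formula (the sample values are made up, chosen only so the numbers come out round):

```python
# The "squared differences" formula on an illustrative sample.
import math

xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
mean = sum(xs) / len(xs)                          # x-bar = 5.0
var = sum((x - mean) ** 2 for x in xs) / len(xs)  # average squared dispersion = 4.0
stdev = math.sqrt(var)                            # 2.0
print(mean, var, stdev)
```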

However, I guess it only applies to quantitative (or maybe continuous?) variables, because I’ve just learned that variance is calculated differently when dealing with categorical/Bernoulli/binomial variables. Specifically, stdev for these distributions is calculated as

√(p(1-p)) (or √(np(1-p)) if there is more than one trial)

and simply squaring the above gets you the variance.

But this is where I get confused: How does the above expression give us the same information about our distribution as the “squared differences” formula up top? I really don’t see any shared pieces or parallels between the “squared differences” and p(1-p) formulas: Where does the “average” come from in the second? Where does the “difference from mean” come from in the second? Where is the analog to “1-p” in the first? Etc. (They do seem to agree numerically; see the quick check below.)
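For what it’s worth, here is a minimal check on a small, hypothetical 0/1 sample, applying the “squared differences” formula directly and comparing it with p(1-p), where p is the sample proportion of 1s:

```python
# Compare the "squared differences" formula with the Bernoulli shortcut
# p(1-p) on a made-up sample of 0/1 outcomes.
import math

xs = [1, 0, 0, 1, 1, 0, 1, 1, 0, 1]   # 6 ones out of 10, so p = 0.6
p = sum(xs) / len(xs)

var_sq_diff = sum((x - p) ** 2 for x in xs) / len(xs)  # squared-differences formula
var_bernoulli = p * (1 - p)                            # Bernoulli shortcut

print(var_sq_diff, var_bernoulli)                        # both 0.24
print(math.sqrt(var_sq_diff), math.sqrt(var_bernoulli))  # both ~0.49
```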

Overall these two formulas just seem completely different, so I’m really struggling to connect them in my mind and understand how, despite their different inputs, their outputs are equivalent. Because I am quite comfortable with the first, I can tell that my weak understanding mostly stems from the second. Please help, thanks.

submitted by /u/synthphreak
