When should biased sample variance be used?

I feel like every example I find talks about a sample from a population. My understanding from my lecture notes is that the unbiased sample variance (dividing by n-1) should be used for normal distributions? This may sound like a stupid question, but I'm kind of cramming a few weeks of material into a few days.
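For what it's worth, the divide-by-(n-1) correction isn't specific to the normal distribution: it makes the sample variance an unbiased estimator of the population variance for any distribution with finite variance. A minimal numpy sketch with made-up numbers, showing both versions side by side:

```python
import numpy as np

# Hypothetical sample of n = 5 measurements (made-up data).
x = np.array([2.0, 4.0, 4.0, 4.0, 6.0])
n = len(x)

mean = x.sum() / n
biased = ((x - mean) ** 2).sum() / n          # divide by n   (population / MLE formula)
unbiased = ((x - mean) ** 2).sum() / (n - 1)  # divide by n-1 (Bessel's correction)

# numpy equivalents: np.var(x) is biased, np.var(x, ddof=1) is unbiased.
print(biased, unbiased)  # → 1.6 2.0
```

Roughly: use the biased version when your data *is* the whole population (or you want the MLE); use the unbiased version when you're estimating the variance of a larger population from a sample.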

submitted by /u/icoolio22

Anniversary, 1 owner, manual, bone-stock MKIV

Local grocery store. Owner came out mid-pic, maybe a mid-50s woman.

Huge smile at the attention I was giving it.

Said she'll be buried in it. One mechanic at the local dealer has serviced it since she bought it new. 144k miles on it now. Looked rather daily-driven. Chrome wheels peeling, but otherwise perfect.

submitted by /u/WTFNameIsntTaken

I heard there was a conference on inference?

One of our lecturers once mentioned that there was a big conference on inference some time ago, where a lot of different statisticians debated general questions about inference. In the end they produced a document summarizing what they could agree on. Many of the participants also published additional statements adding their personal opinions. I would be really interested in reading some of these. Does anybody know what this conference was called? And if you have already studied some of it, are there any statements you would really recommend checking out?

submitted by /u/PythonicParseltongue

Are Weighted Least Squares and GLM the same?

I had a dataset with 3 repeats (Ys) at each X level. I used those to calculate SD(Y) and Ybar at each level. Then I fit this SD(Ybar) vs Ybar. It was linear with an insignificant intercept, so SD is proportional to Ybar, the mean.

I used that SD to weight a linear fit of Y vs X. I got a slope around 100 (110 for unweighted OLS). The SE of this was around 3.

The second method was a Gamma GLM fit. I also got a value of approx. 100 with SE approx. 3.

This got me wondering: are these the same thing?

I also noticed that for the Gamma distribution specifically, the SD is proportional to the mean. Are the 'weights' for a GLM taken from the specified distribution?

Or is weighting OLS the same thing as using a distribution to fit a GLM (which R does via MLE)?

In this case, does my "SD function" found via fitting just confirm that my data follows a Gamma distribution, because the mean and SD are proportional?

All this stuff is really interesting to me.
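The connection you've noticed can be sketched numerically. Below is a minimal Python example with my own made-up data (the slope of 100 and the replicate structure are taken from your description, everything else is assumed), comparing an unweighted slope with a WLS slope whose weights are 1/SD², where SD ∝ mean as in your fit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 3 repeats of Y at each X level,
# with SD(Y) proportional to the mean, as in your fit.
x_levels = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
X = np.repeat(x_levels, 3)
mu = 100.0 * X                               # true mean, slope = 100
Y = rng.gamma(shape=25.0, scale=mu / 25.0)   # Gamma: SD = mu/sqrt(shape), so SD ∝ mean

# WLS weights = 1 / Var(Y_i); with SD ∝ mean this is 1 / mu_i^2.
# (In practice mu_i would be estimated, e.g. from the replicate means.)
w = 1.0 / mu**2

# Closed-form slope for a through-origin fit, weighted vs unweighted.
slope_wls = (w * X * Y).sum() / (w * X * X).sum()
slope_ols = (X * Y).sum() / (X * X).sum()
print(slope_wls, slope_ols)
```

The two approaches are closely related but not identical: a GLM fit by maximum likelihood is computed via iteratively reweighted least squares, with working weights built from the distribution's variance function (V(μ) = μ² for the Gamma, i.e. SD ∝ mean). So the GLM effectively does WLS, but it re-estimates the weights from the fitted model at each iteration, rather than taking them from a separate SD-vs-mean fit as you did. That shared variance structure is why your two slopes and SEs came out so close.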

submitted by /u/ice_shadow

Proof of statistical significance

Hey Reddit.

I just did a project with 8 participants (one with some censored data), so obviously there'll be no statistical significance. However, I was looking for a way to show this formally. I got a p-value of 0.001 for the intercept and 0.226 for the variable x. Can I deduce from this that there is no statistical significance, or do I have to perform a further calculation?

submitted by /u/NoticedTriangularity

No more than 3

So, any positive integer can be written as the sum of at most 4 squares or 9 cubes or 19 fourth powers and so on.

But I have a question: can every sufficiently large number be written as a sum of no more than 3 perfect powers (any integer exponent, 2 or higher)?

I think the answer is yes. But I don't know if this is provable (in any sense accessible to me, anyway), and it could be that there will always be an infinite number of rare exceptions.

To be clear what I mean: using only squares, the number 31 takes 4:

5² + 2² + 1² + 1² = 31

But if you allow any integer power:

3³ + 2² = 31

So 7, 15, 23 take 4, because the new higher powers don't help yet compared to squares.

But after that the exceptions get rare fairly quickly.

I don't code, so I am trying these all by hand.
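Since the hand-checking gets tedious fast, here is a brute-force sketch in Python (the function names are my own) that searches for numbers needing more than 3 perfect powers. It generates every perfect power b^k ≤ n (with b ≥ 1, k ≥ 2, so 1 counts as 1²) and tries all sums of 1, 2, or 3 of them:

```python
# Brute-force search: can n be written as a sum of at most
# 3 perfect powers b**k with b >= 1 and k >= 2?

def perfect_powers(limit):
    """All perfect powers <= limit, including 1 (= 1**2)."""
    powers = {1}
    b = 2
    while b * b <= limit:
        p = b * b
        while p <= limit:
            powers.add(p)
            p *= b
        b += 1
    return sorted(powers)

def needs_more_than_three(n):
    """True if n is NOT a sum of 1, 2, or 3 perfect powers."""
    pw = perfect_powers(n)
    pset = set(pw)
    if n in pset:                      # 1 term suffices
        return False
    for i, a in enumerate(pw):
        if n - a in pset:              # 2 terms suffice
            return False
        for b in pw[i:]:               # b >= a avoids duplicate pairs
            if a + b > n:
                break
            if n - a - b in pset:      # 3 terms suffice
                return False
    return True

exceptions = [n for n in range(1, 200) if needs_more_than_three(n)]
print(exceptions)  # starts 7, 15, 23, matching your hand results
```

This only confirms exceptions up to whatever bound you run it to; it can't settle whether the exceptions eventually stop, which is the hard part of your question.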

But anyway, just wanted to see what you ladies and gentlemen thought.

Thanks for reading.

submitted by /u/paashpointo