So especially in the bio field I see papers with bad statistics published all the time: people running ANOVAs on clearly heteroscedastic data, forgetting random effects, doing multiple t-tests without any correction, etc.
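To make the multiple-testing point concrete, here is a toy simulation (invented numbers, nothing from a real study): run 10 two-group comparisons per "experiment" with the null true everywhere, and see how often at least one comes out "significant" at 0.05.

```python
# Sketch: uncorrected multiple t-tests inflate the family-wise error rate.
# All groups are drawn from the SAME distribution, so every rejection is false.
import numpy as np

rng = np.random.default_rng(0)
n, tests, experiments = 50, 10, 2000
crit = 1.96  # large-sample two-sided critical value, so t ~ z with n = 50

any_false_positive = 0
for _ in range(experiments):
    hit = False
    for _ in range(tests):
        a = rng.normal(size=n)
        b = rng.normal(size=n)
        se = np.sqrt(a.var(ddof=1) / n + b.var(ddof=1) / n)
        if abs((a.mean() - b.mean()) / se) > crit:
            hit = True
    any_false_positive += hit

fwer = any_false_positive / experiments
print(f"family-wise error rate ~ {fwer:.2f}")
# Analytically, 1 - 0.95**10 ~ 0.40, despite alpha = 0.05 per test.
```

The point that sometimes lands with collaborators: with 10 uncorrected tests, you have roughly a 40% chance of at least one false "discovery" even when nothing is going on.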
These people are clearly doing things wrong, but when you talk to them in the lab they don't seem to care when you show them the correct method (a log transform or whatever), or point out the diagnostic plots for their lm/aov models that are clearly off.
The main thing just seems to be that you stuck in some stat method and it gave significance.
They don't care about details like "p-values aren't uniformly distributed under the null hypothesis when assumptions are violated, e.g. by nonconstant variance".
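That particular detail can at least be shown rather than told. A simulation sketch (made-up group sizes and variances, chosen to be the worst case for pooling): when the smaller group has the larger variance, the pooled t-test rejects a true null far more often than 5%, while Welch's t-test stays close to nominal.

```python
# Sketch: heteroscedasticity makes pooled-variance t-test p-values invalid.
import numpy as np

rng = np.random.default_rng(1)
nA, nB = 10, 50        # small group...
sdA, sdB = 3.0, 1.0    # ...with the big variance: worst case for pooling
sims, crit = 4000, 1.96

pooled_rej = welch_rej = 0
for _ in range(sims):
    a = rng.normal(0, sdA, nA)   # both means are 0: the null is true
    b = rng.normal(0, sdB, nB)
    diff = a.mean() - b.mean()
    sp2 = ((nA - 1) * a.var(ddof=1) + (nB - 1) * b.var(ddof=1)) / (nA + nB - 2)
    t_pooled = diff / np.sqrt(sp2 * (1 / nA + 1 / nB))
    t_welch = diff / np.sqrt(a.var(ddof=1) / nA + b.var(ddof=1) / nB)
    pooled_rej += abs(t_pooled) > crit
    welch_rej += abs(t_welch) > crit

print(f"pooled type I rate ~ {pooled_rej / sims:.2f}")  # far above 0.05
print(f"Welch  type I rate ~ {welch_rej / sims:.2f}")   # near 0.05
```

A plot of the simulated p-values (strongly non-uniform for the pooled test) is often more persuasive to non-stats collaborators than the rejection rates alone.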
In a lot of publications that use stats, these problems will only ever be noticed by stats people.
So how exactly do you make your work more accessible and relevant to non-stats people? Why should they care about this? Abstractly explaining that, say, the standard errors are biased or the p-values aren't valid p-values means nothing to them. Even if the analysis is incorrect, it's not the main focus of their work.
Especially in biostats I have to deal with this. I have re-analyzed people's data using correct frameworks, but they still use their old one because they do not understand what I did (even when it was as simple as a log transform).
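Even the log-transform case can be demonstrated in a few lines. A toy sketch (invented multiplicative-noise data, just for illustration): on the raw scale the group with the bigger mean has the bigger spread, violating the constant-variance assumption; after logging, the spreads match.

```python
# Sketch: a log transform stabilizes variance for multiplicative noise.
import numpy as np

rng = np.random.default_rng(3)
means = [1.0, 3.0]
# Multiplicative noise: observation = mean * lognormal error,
# so the standard deviation grows in proportion to the mean.
groups = [m * rng.lognormal(0, 0.5, 200) for m in means]

raw_sds = [g.std(ddof=1) for g in groups]
log_sds = [np.log(g).std(ddof=1) for g in groups]
print(raw_sds)  # unequal: heteroscedastic on the raw scale
print(log_sds)  # roughly equal (~0.5 each): fine on the log scale
```

Side-by-side boxplots of the raw and logged groups tend to communicate this faster to biologists than the numbers do.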
I've seen some very bad things, like people "combining" all the factors of a 3-factor design into a single factor and running a 1-factor ANOVA, not realizing that this fits the cell-means model: it is equivalent to including only the full interaction term, and the omnibus F test can no longer say anything about any individual factor.
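A minimal linear-algebra sketch of that equivalence, using made-up ±1 contrasts for a 2x2x2 design: the one-dummy-per-combined-level coding spans exactly the same column space as the saturated model (intercept, main effects, and every interaction), while a main-effects-only model is strictly smaller.

```python
# Sketch: "combining" factors into one factor = the saturated factorial model.
import numpy as np
from itertools import product

# +/-1 contrast coding for the 8 cells of a 2x2x2 design.
levels = np.array(list(product([-1, 1], repeat=3)))
A, B, C = levels.T
full = np.column_stack([np.ones(8), A, B, C, A*B, A*C, B*C, A*B*C])

# "Combined" one-factor coding: one dummy per cell (cell-means model).
combined = np.eye(8)

r_full = np.linalg.matrix_rank(full)
r_comb = np.linalg.matrix_rank(combined)
r_both = np.linalg.matrix_rank(np.hstack([full, combined]))
print(r_full, r_comb, r_both)  # 8 8 8: same column space, same fitted values

# Drop the interactions and the model genuinely shrinks:
r_mains = np.linalg.matrix_rank(full[:, :4])
print(r_mains)  # 4: main effects alone are a strictly smaller model
```

So the combined one-way ANOVA spends all 7 degrees of freedom on one omnibus test of "any cell differs", which is exactly what makes its "significance" uninterpretable factor by factor.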
And then if I try to show the regression framework of ANOVA, and why this combined coding is bad because of the collinearity involved, people cannot follow it. There is no simple way to explain these things, though. Biologists don't even know that ANOVA is a regression, for example.
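The ANOVA-is-regression point can at least be shown concretely. A toy sketch with simulated data: the one-way ANOVA F statistic computed from the textbook sums of squares is exactly the F test of an OLS regression on group dummies.

```python
# Sketch: one-way ANOVA and dummy-coded OLS regression give the same F.
import numpy as np

rng = np.random.default_rng(2)
k, n = 3, 20  # 3 groups, 20 observations each (made-up group means below)
y = np.concatenate([rng.normal(m, 1, n) for m in (0.0, 0.5, 1.0)])
g = np.repeat(np.arange(k), n)

# One-way ANOVA F from the textbook decomposition.
grand = y.mean()
ss_between = sum(n * (y[g == j].mean() - grand) ** 2 for j in range(k))
ss_within = sum(((y[g == j] - y[g == j].mean()) ** 2).sum() for j in range(k))
F_anova = (ss_between / (k - 1)) / (ss_within / (k * n - k))

# The same F from OLS with an intercept and k-1 group dummies.
X = np.column_stack([np.ones(k * n)] + [(g == j).astype(float) for j in range(1, k)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
ss_res = ((y - X @ beta) ** 2).sum()
ss_tot = ((y - grand) ** 2).sum()
F_ols = ((ss_tot - ss_res) / (k - 1)) / (ss_res / (k * n - k))

print(np.isclose(F_anova, F_ols))  # True: same model, same test
```

In R the same demonstration is `anova(lm(y ~ g))` versus `summary(aov(y ~ g))` with `g` a factor, since aov is itself a wrapper around lm.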