# Statistical inference in practice

Hey team, I'm a student of statistics (albeit not a great one). I've taken courses on classical and Bayesian inference, plus one intro to applied statistics course, and I plan to do more next year.

Now, in my inference courses we always assume we're dealing with some given family of distributions: either we judge something to be from a certain family, or we say it is because…reasons. Using the assumption that the likelihood comes from some family, or judging the prior to be from some family, we can do a whole bunch of nice statistical inference: confidence intervals, parameter estimation, and much more! But it seems this whole theory rests on knowing/judging which family the PDFs come from. We've gone deep into the theory of statistical inference, which I love, but can anyone tell me how it's used in practice? My main questions:

1) Does it ever make sense to guess a likelihood distribution by looking at a frequency graph? For binary events I can say Bernoulli, and for waiting times I can say Gamma, but these are just guesses. And when delving into the realm of truly messy random processes, the best I can think of is to say "that looks Normal!" Can anyone shed some insight into this?

2) Judging priors is whack. Perhaps after looking at more applied content I can read more about how this is done, but so far my knowledge is basically "choose a Beta distribution and adjust the parameters to suit your uncertainty and location."

Thanks for your help, and I hope this isn't a totally nooby question, as it certainly feels like it. Love y'all.

submitted by /u/CivcraftPlayer123
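For question 1, one common practical workflow is exactly the "guess, fit, then check" loop the post describes: guess a family, fit its parameters by maximum likelihood, and test whether the data are consistent with the fit. Here is a minimal sketch using `scipy.stats`; the waiting-time data is simulated purely for illustration (with real data you would load your observations instead), and the choice of Gamma as the candidate family is the assumption being checked.

```python
# A sketch of checking a guessed likelihood family against data.
# The data here is simulated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Pretend these are 500 observed waiting times (secretly drawn from a Gamma).
waits = rng.gamma(shape=2.0, scale=1.5, size=500)

# Guess a family (Gamma) and fit its parameters by maximum likelihood.
# floc=0 pins the location at zero, since waiting times start at 0.
shape, loc, scale = stats.gamma.fit(waits, floc=0)

# Check the guess with a goodness-of-fit test: a tiny p-value suggests
# the chosen family is a poor description of the data. (A Q-Q plot is
# the visual version of the same check.)
ks = stats.kstest(waits, "gamma", args=(shape, loc, scale))
print(f"fitted shape={shape:.2f}, scale={scale:.2f}, KS p-value={ks.pvalue:.3f}")
```

One caveat worth knowing: when the parameters are estimated from the same data you test, the plain KS p-value is only approximate, so treat it as a sanity check rather than a formal verdict.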
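For question 2, the "choose a Beta and adjust the parameters" recipe has a concrete mechanical payoff with a Bernoulli likelihood, because the Beta is conjugate: the posterior is Beta again with updated counts. A minimal sketch, where the prior counts and the observed 30-out-of-50 data are made-up numbers for illustration:

```python
# Beta-Bernoulli conjugate update: prior Beta(a0, b0), binary data,
# posterior Beta(a0 + successes, b0 + failures).
from scipy import stats

# Prior: a0 acts roughly like prior successes and b0 like prior failures,
# so a0 + b0 sets how confident the prior is ("uncertainty") and
# a0 / (a0 + b0) sets where it is centred ("location").
a0, b0 = 2.0, 2.0  # weakly informative, centred at 0.5 (illustrative choice)

# Observed data: 30 successes in 50 Bernoulli trials (made up).
successes, trials = 30, 50

# Conjugacy means the update is just adding counts.
a_post = a0 + successes
b_post = b0 + (trials - successes)

posterior = stats.beta(a_post, b_post)
print(f"posterior mean = {posterior.mean():.3f}")
lo, hi = posterior.ppf(0.025), posterior.ppf(0.975)
print(f"95% credible interval = ({lo:.3f}, {hi:.3f})")
```

This is why Beta shows up so often in intro material: "adjusting the parameters" is literally adding pseudo-counts, and you can read the prior's strength directly off `a0 + b0`.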