Bayesian analysis made easy(ier)

I’m going to be estimating the effect of an intervention for a client, and we’re woefully underpowered for the question at hand. As is company policy in such situations, we’d rather hand the client statements about a posterior distribution (“there’s a 50% chance this impact was positive”) than a statistical significance finding (p > 0.05).

The structure of our data is complicated, so the likelihood P(y|theta_0), where theta_0 is the treatment effect, is complicated too. A colleague is suggesting an easy way out: first find the impact estimate theta_0 using standard methods, then find estimated treatment effects theta_1, theta_2, …, theta_J from reasonably similar interventions, then form the likelihood using these estimates as the data, as in theta_j ~ normal(mu, sigma). We would then state priors on the theta_j, mu, and sigma, and voilà: a posterior distribution for theta_0.

This almost seems like cheating to me. Is it valid? It’s so much easier than using our own data for the likelihood. It’s essentially a hierarchical Bayes method. Any thoughts would be appreciated.

submitted by /u/foogeeman
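A minimal sketch of what this proposal looks like in practice, under strong simplifying assumptions: a normal approximation to each study’s likelihood, an empirical-Bayes plug-in for mu and sigma in place of full priors, and entirely made-up numbers for the estimates and standard errors:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical data: our study's estimate first, then five similar interventions.
# theta_hat[j] is the point estimate, se[j] its standard error (all invented).
theta_hat = np.array([0.8, 2.0, -0.5, 1.5, 0.2, 1.0])
se = np.array([1.0, 0.4, 0.5, 0.6, 0.3, 0.5])

# Between-intervention variance sigma^2 via the DerSimonian-Laird moment
# estimator (a plug-in standing in for a full prior on mu and sigma).
w = 1.0 / se**2
mu_fixed = np.sum(w * theta_hat) / np.sum(w)
Q = np.sum(w * (theta_hat - mu_fixed) ** 2)
k = len(theta_hat)
sigma2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

# Pooled mean mu under the random-effects model theta_j ~ normal(mu, sigma).
w_re = 1.0 / (se**2 + sigma2)
mu_hat = np.sum(w_re * theta_hat) / np.sum(w_re)

# Shrinkage posterior for theta_0, conditional on (mu_hat, sigma2); a full
# Bayesian fit would also propagate the uncertainty in mu and sigma.
shrink = se[0] ** 2 / (se[0] ** 2 + sigma2)
post_mean = (1 - shrink) * theta_hat[0] + shrink * mu_hat
post_sd = np.sqrt(1.0 / (1.0 / se[0] ** 2 + 1.0 / sigma2))

# The kind of statement the client would receive.
p_positive = 1 - norm.cdf(0, loc=post_mean, scale=post_sd)
print(post_mean, post_sd, p_positive)
```

Note that if the moment estimate of sigma^2 comes out at zero, the shrinkage factor hits one and theta_0 collapses onto the pooled mean, so the behavior of the whole exercise hinges on how much heterogeneity the borrowed estimates exhibit.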

Published by Nevin Manimala
