Bootstrapping vs. Simulating Data

I’ve done quite a bit of non-parametric bootstrapping (simply resampling rows with replacement) and feel increasingly comfortable with it. Lately I’ve been paying more attention to other methods. In particular, I’ve found a few instances of researchers taking the Cholesky decomposition of a dataset’s correlation matrix, then multiplying the factor L by a matrix of independent standard normal draws to simulate new data.
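A minimal sketch of that simulation recipe, assuming a hypothetical 3×3 target correlation matrix R standing in for the correlation matrix of a real dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target correlation matrix (a stand-in for the
# estimated correlation matrix of a real dataset).
R = np.array([[1.0, 0.6, 0.3],
              [0.6, 1.0, 0.5],
              [0.3, 0.5, 1.0]])

L = np.linalg.cholesky(R)            # lower-triangular factor with R = L @ L.T
z = rng.standard_normal((1000, 3))   # independent N(0, 1) draws
simulated = z @ L.T                  # rows ~ multivariate normal with corr ≈ R
```

Because each row of `z` has identity covariance, the transformed rows `z @ L.T` have covariance L Lᵀ = R, so the simulated data reproduce the target correlations (up to sampling noise) while being entirely new draws.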

So my questions:

1. Is simulating data via a Cholesky decomposition considered a Monte Carlo simulation, a parametric bootstrap, or something else?
2. The simulation approach is appealing in that we get brand-new data with similar relational properties (i.e., correlations). What advantages does the bootstrap have over this approach, and vice versa?

I’ve tried both approaches on a few problems, and the conclusions don’t differ much. I feel I’m missing some major principled differences between the two methods.
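One place the two methods can diverge, sketched on hypothetical data with skewed (lognormal) marginals: the bootstrap resamples real rows, so it keeps the marginal shapes, while the Cholesky-plus-normal simulation matches the correlations but imposes normal marginals.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical dataset: two correlated, right-skewed (lognormal) columns.
z = rng.standard_normal((500, 2))
z[:, 1] = 0.7 * z[:, 0] + np.sqrt(1 - 0.7**2) * z[:, 1]
data = np.exp(z)

# Non-parametric bootstrap: resample rows with replacement.
idx = rng.integers(0, data.shape[0], size=data.shape[0])
boot = data[idx]

# Cholesky simulation: fresh normal draws matching the estimated correlations.
L = np.linalg.cholesky(np.corrcoef(data, rowvar=False))
sim = rng.standard_normal((500, 2)) @ L.T

def skew(x):
    """Sample skewness of a 1-D array."""
    c = x - x.mean()
    return (c**3).mean() / (c**2).mean() ** 1.5

# The bootstrap sample inherits the strong right skew of the data;
# the simulated data are (multivariate) normal, with skewness near 0.
```

If the statistic of interest depends only on the correlation structure, the two approaches will tend to agree, which may explain why the conclusions looked similar; they separate once the statistic is sensitive to marginal shape or other non-normal features.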

submitted by /u/iconoclaus

