This recurring thread will be for questions that might not warrant their own thread. We would like to see more conceptual-based questions posted in this thread, rather than “what is the answer to this problem?”. For example, here are some kinds of questions that we’d like to see in this thread:

Can someone explain the concept of manifolds to me?
What are the applications of Representation Theory?
What’s a good starter book for Numerical Analysis?
What can I do to prepare for college/grad school/getting a job?

Including a brief description of your mathematical background and the context for your question can help others give you an appropriate answer. submitted by /u/AutoModerator
Just wondering how well you actually need to know measure theory to get use out of it (in relation to statistics, and maybe stochastic calculus and stochastic differential equations). I’m currently taking real analysis, and the exercises are pretty hard. At this point it feels like I’m understanding the theory, but it’s difficult to create the theory (via doing exercises), if that makes sense. I believe I’ll be in a similar boat in measure theory. Let me frame that another way: I will be able to understand measure theory, but I won’t be able to “create” measure theory. I will be able to read statistics papers that use measure theory, but I won’t be able to write those papers. I could pass an oral exam in measure theory, but not a complicated written exam, etc. So I’m losing out on one side of the course, the part where you do exercises on your own, but is that really so important in relation to statistics? Will even just having been through measure theory give me a leg up on the competition? I guess I’ll at least become more mathematically aware, and have a bit of a clue about what is going on when reading advanced papers/books? submitted by /u/mathstudent137
I’m trying to understand some papers on estimating mutual information (e.g. https://arxiv.org/pdf/cond-mat/0305641v1.pdf), but I’m having trouble filling in their derivations or building any intuition about these estimators. I seem to be missing something. Can anyone suggest some references on just this area? submitted by /u/ov3rsight
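For what it’s worth, the estimator in that paper (Kraskov–Stögbauer–Grassberger) is short enough to implement directly, which can help with intuition. Below is a minimal brute-force sketch in Python; the function name and the O(N²) distance computation are my own choices for readability, not from the paper:

```python
import numpy as np
from scipy.special import digamma

def ksg_mi(x, y, k=3):
    """KSG-style k-nearest-neighbour estimate of I(X;Y) for 1-D samples."""
    n = len(x)
    dx = np.abs(x[:, None] - x[None, :])   # pairwise distances in x
    dy = np.abs(y[:, None] - y[None, :])   # pairwise distances in y
    dz = np.maximum(dx, dy)                # Chebyshev distance in the joint space
    np.fill_diagonal(dz, np.inf)           # a point is not its own neighbour
    eps = np.sort(dz, axis=1)[:, k - 1]    # distance to the k-th nearest neighbour
    # count marginal neighbours strictly inside eps, excluding the point itself
    nx = np.sum(dx < eps[:, None], axis=1) - 1
    ny = np.sum(dy < eps[:, None], axis=1) - 1
    return digamma(k) + digamma(n) - np.mean(digamma(nx + 1) + digamma(ny + 1))
```

On independent samples the estimate should hover near zero, and it grows with dependence; that contrast is a useful sanity check before trusting it on real data.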
How does a site like playoffstatus.com calculate the chances of each team finishing in each position? If anyone has any resources or advice on how this is done, it would be greatly appreciated. I’m a huge fan of this site but would like to make a more modern version of it. submitted by /u/lIlIllIIllII
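I don’t know what playoffstatus.com does internally, but the usual approach is Monte Carlo: simulate the remaining schedule many times under some win-probability model and count how often each team lands in each position. A toy sketch; the team names, records, coin-flip win model, and alphabetical tiebreak are all made up for illustration:

```python
import random
from collections import Counter

# hypothetical current win totals and remaining head-to-head games
wins = {"A": 10, "B": 8, "C": 7}
remaining = [("A", "B"), ("B", "C"), ("A", "C"), ("B", "C")]

def simulate_finish(wins, remaining, n_sims=20_000, p_first=0.5, seed=42):
    """Estimate P(team finishes in each position) by simulating the season."""
    rng = random.Random(seed)
    finishes = {t: Counter() for t in wins}
    for _ in range(n_sims):
        w = dict(wins)
        for t1, t2 in remaining:
            w[t1 if rng.random() < p_first else t2] += 1  # coin-flip game result
        ranked = sorted(w, key=lambda t: (-w[t], t))       # ties broken alphabetically
        for pos, t in enumerate(ranked, start=1):
            finishes[t][pos] += 1
    return {t: {pos: c / n_sims for pos, c in cnt.items()}
            for t, cnt in finishes.items()}
```

A real version would replace the coin flip with something like Elo- or record-based win probabilities and implement the league’s actual tiebreaker rules.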
I’ve been looking into probabilistic programming etc., but haven’t really managed to wrap my head around how message passing algorithms work. Online searches for example code mostly lead to optimized packages which are painful to parse. Is anyone aware of tutorial-like implementations (preferably in a relatively high-level language) of belief propagation? Thanks! submitted by /u/Arisngr
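If it helps, sum-product belief propagation on a small chain is only a few lines, and on a chain (or any tree) it is exact, so you can check it against brute-force enumeration. A minimal sketch; the potentials below are arbitrary made-up numbers:

```python
import numpy as np
from itertools import product

# chain of three binary variables x0 - x1 - x2 with unary potentials phi
# and a shared pairwise potential psi (all values arbitrary)
phi = [np.array([0.7, 0.3]), np.array([0.4, 0.6]), np.array([0.5, 0.5])]
psi = np.array([[1.2, 0.4],
                [0.4, 1.2]])

def bp_marginals(phi, psi):
    """Sum-product message passing along a chain; exact because a chain is a tree."""
    n = len(phi)
    fwd = [np.ones(2) for _ in range(n)]  # message arriving at node i from the left
    bwd = [np.ones(2) for _ in range(n)]  # message arriving at node i from the right
    for i in range(1, n):
        fwd[i] = psi.T @ (phi[i - 1] * fwd[i - 1])
    for i in range(n - 2, -1, -1):
        bwd[i] = psi @ (phi[i + 1] * bwd[i + 1])
    margs = []
    for i in range(n):
        m = phi[i] * fwd[i] * bwd[i]      # combine local evidence with both messages
        margs.append(m / m.sum())
    return margs

def brute_marginals(phi, psi):
    """Reference answer by summing over all joint configurations."""
    n = len(phi)
    p = np.zeros((n, 2))
    for xs in product([0, 1], repeat=n):
        w = np.prod([phi[i][xs[i]] for i in range(n)])
        w *= np.prod([psi[xs[i], xs[i + 1]] for i in range(n - 1)])
        for i in range(n):
            p[i, xs[i]] += w
    return [row / row.sum() for row in p]
```

The same message-update idea generalises to arbitrary trees (pass messages inward from the leaves, then outward), and running it with max instead of sum gives max-product/Viterbi.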
For context: I’m analyzing a series of microbiological results, more specifically plate counts. A plate count can return values of 0 if no microorganisms grow, 1, 2, yada yada yada, up to 300. A plate that has more than 300 colonies is only counted until 300 is reached, and then it’s marked as >300. The same goes for other test methods, which cap out at 2000. So I need to calculate an average value of microorganisms, but obviously >300 and 300 values or all
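Values recorded as >300 (or >2000) are right-censored observations, so rather than dropping or capping them, one standard option is to fit a distribution by maximum likelihood, using the survival function for the censored plates. A sketch assuming the counts are roughly lognormal; the distribution choice and function name are my own assumptions, and it ignores exact zeros:

```python
import numpy as np
from scipy import stats, optimize

def censored_lognorm_mean(values, censored):
    """MLE of the mean assuming lognormal counts with right-censoring.

    values:   observed counts; censored plates carry their detection limit (e.g. 300)
    censored: boolean array, True where the count exceeded the limit
    """
    values = np.asarray(values, float)
    censored = np.asarray(censored, bool)

    def nll(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)          # keep sigma positive
        # exact observations contribute the density ...
        ll = stats.norm.logpdf(np.log(values[~censored]), mu, sigma).sum()
        # ... censored ones contribute P(count > limit), the survival function
        ll += stats.norm.logsf(np.log(values[censored]), mu, sigma).sum()
        return -ll

    res = optimize.minimize(nll, x0=[np.log(values.mean()), 0.0])
    mu, sigma = res.x[0], np.exp(res.x[1])
    return np.exp(mu + sigma ** 2 / 2)     # mean of the fitted lognormal
```

A naive average that just plugs in 300 for the censored plates is biased low; the censored MLE recovers the true mean much more closely when the lognormal assumption is reasonable.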
Hi guys, I am doing a Bayesian multiple regression with 4 metric predictors and 1 nominal predictor. The nominal value can either be 0 or 1 for the type of thing it is, where 1 will add 150,000 to the predicted value (very strong prior).
Per my lecturer’s advice I am standardising the values of the predictors and beta parameters. I’ve gotten by fine with the metric parameters using this formula, so I can set the priors for the coefficients:
zbeta_i = beta_i / (SD_y / SD_x) = beta_i * SD_x / SD_y
Where y is the predicted variable and x is the predictor variable, beta is the coefficient.
However, I have no idea how to standardise the beta coefficient for the nominal predictor, since it doesn’t make sense for a 0/1 predictor to have a standard deviation in the usual sense. If I put 150,000 straight in as the mean of the prior distribution of the beta coefficient, it causes the predicted value to jump waaaaaaaaaay out of range of any acceptable answer; if I leave the prior vague with a mean of 0, everything is happy.
I’ve tried to ask him via our course forum; he’s replied to other people but not me, so I’m not sure what’s going on there. Can anyone offer some advice? Please let me know if this is enough information; I can try to provide more if needed.
BTW I’m using R with the runjags library.
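One common convention for a 0/1 predictor is to leave it unstandardised and standardise only y, so its prior mean simply gets divided by SD_y rather than multiplied by SD_x/SD_y. A quick numeric sketch of both scalings; the SDs and prior means below are hypothetical stand-ins for your data’s scales:

```python
# hypothetical data scales (assumed for illustration only)
sd_y = 80_000.0               # SD of the predicted variable y
sd_x = 12.5                   # SD of one metric predictor

beta_metric_prior = 2_000.0   # prior mean for a metric coefficient, raw scale
beta_dummy_prior = 150_000.0  # the "adds 150,000" prior for the 0/1 predictor

# metric predictor: both y and x are standardised
zbeta_metric = beta_metric_prior * sd_x / sd_y

# 0/1 predictor: only y is standardised (a dummy has no meaningful SD),
# so the raw prior mean is scaled by SD_y alone
zbeta_dummy = beta_dummy_prior / sd_y
```

If the model standardises y only, putting 150,000 in unscaled is off by a factor of SD_y, which would explain the predictions jumping far out of range.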
I was reading through my analysis lessons and found a theorem I never really understood, stating that the sum of a convergent series is unchanged under rearrangement of its terms only if the terms have constant sign (more generally, if the series converges absolutely; this is the content of the Riemann rearrangement theorem). I have already read that when you compute a series, what you really do is a finite sum followed by a limit, which “breaks” the commutativity of the sum, but I’d still like to see an example of a series whose sum changes when you reorder some terms.
(Sorry for the remaining grammar mistakes)
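The standard example is the alternating harmonic series: it converges to ln 2, but rearranging it as one positive term followed by two negative terms converges to (ln 2)/2. A quick numerical check in Python:

```python
import math

# alternating harmonic series: 1 - 1/2 + 1/3 - 1/4 + ... = ln 2
s = sum((-1) ** (k + 1) / k for k in range(1, 200_001))

# rearrangement with one positive term, then two negative terms:
# 1 - 1/2 - 1/4 + 1/3 - 1/6 - 1/8 + ...  ->  (ln 2) / 2
r = 0.0
for k in range(1, 100_001):
    r += 1 / (2 * k - 1) - 1 / (4 * k - 2) - 1 / (4 * k)
```

Both partial sums use exactly the same terms, just in a different order, yet they settle near 0.693 and 0.347 respectively; Riemann’s theorem says a conditionally convergent series can be rearranged to reach any target value at all.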
Hey statistics community,
I’m currently working on a study about postmortem liver weights and have measured organ weights, body weight and height, age, and CT radiodensity of different organs.
I have a group of 200 cases, and I already know that liver weight depends on blood loss (I compared liver weights with a t-test). That’s really neat, but it is not the only thing.
Liver weight of course also depends on, for example, body height and weight.
When I do a multiple linear regression to account for this, the best model I can find (by backwards elimination) also includes [spleen weight] and [kidney radiodensity].
The question is:
Does it make sense to include spleen weight and kidney radiodensity in the multiple regression? I know that spleen weight also correlates with body weight and height, and I am afraid that the significance of kidney radiodensity is somewhat random (at least I cannot explain it).
I checked the VIF but none of them was above 3.
Help is much appreciated.
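Since you mention checking VIFs: it’s easy to compute them from scratch and to see how a nearly redundant predictor (your spleen-weight worry) shows up. A self-contained sketch with simulated data; nothing here uses your actual measurements:

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of X (n samples x p predictors)."""
    X = np.asarray(X, float)
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(len(y)), others])  # regress x_j on the rest
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        r2 = 1 - ((y - A @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
        out.append(1.0 / (1.0 - r2))                    # VIF_j = 1 / (1 - R^2_j)
    return np.array(out)

# simulated example: x2 is almost a copy of x0, while x1 is independent
rng = np.random.default_rng(0)
x0 = rng.normal(size=500)
x1 = rng.normal(size=500)
x2 = x0 + 0.1 * rng.normal(size=500)
vifs = vif(np.column_stack([x0, x1, x2]))
```

With VIFs below 3, as in your model, plain collinearity probably isn’t the problem; whether kidney radiodensity belongs in the model is more a question of plausibility and out-of-sample validation than of VIF.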
This mode of thought is, of course, self-defeating. I know that yes, there is a level of mathematical ability I will never come close to, and that dwelling on it only hinders the ability I do have, which is insufficient only by my own unrealistic standards, but the thought has been bothering me and making it hard to pursue math and science stuff lately.
It started when I began reading some of the literature on genetics, which happened to be around the time I had my IQ tested (no reveal, because I’m not trying to be an arrogant ass, sorry if I seem like one; a deviation or two above average, as you would probably expect). It’s become increasingly clear that while nurture has a great impact on our lives, genes matter just as much or more. And even so, why can we say that nurture is less oppressive than genetics? Most of the socio-economic and environmental influences exerted on you were not your choice either, at least not at the time. Does anyone really achieve success, or is the purported hard work supported by an innate mechanism?
Before this devolves into a tangent, I guess I should cut to the point. Do any of you deal with this? I feel like (best analogy I can think of) a guy with a penis that’s slightly below average whenever I go to a math competition or math club. Again, I know how self-defeating this mode of thought is, but somehow I can’t help it. I always come back to it and it’s been disturbing my academics lately. Does anybody else deal with this or know how to deal with this?