Simple Questions – September 21, 2018

This recurring thread will be for questions that might not warrant their own thread. We would like to see more conceptual-based questions posted in this thread, rather than “what is the answer to this problem?”. For example, here are some kinds of questions that we’d like to see in this thread: Can someone explain the concept of manifolds to me? What are the applications of Representation Theory? What’s a good starter book for Numerical Analysis? What can I do to prepare for college/grad school/getting a job? Including a brief description of your mathematical background and the context for your question can help others give you an appropriate answer. submitted by /u/AutoModerator [link] [comments]

Question about Measure Theory

Just wondering how well you actually need to know measure theory to get use out of it (in relation to statistics, and maybe stochastic calculus and stochastic differential equations). I’m currently taking real analysis, and the exercises are pretty hard. At this point it feels like I understand the theory, but it’s difficult to create the theory (via doing exercises), if that makes sense. I expect I’ll be in a similar boat in measure theory. To frame it another way: I’ll be able to understand measure theory, but I won’t be able to “create” measure theory. I’ll be able to read statistics papers that use measure theory, but I won’t be able to write those papers. I could pass an oral exam in measure theory, but not a complicated written exam. So I’m losing out on one side of the course, the part where you do exercises on your own, but is that really so important in relation to statistics etc.? Will even just having been through measure theory give me a leg up on the competition? I guess I’ll become more mathematically aware, and have a bit of a clue about what is going on when reading advanced papers/books? submitted by /u/mathstudent137 [link] [comments]

Where to learn about estimators/estimation theory?

I’m trying to understand some papers on estimating mutual information, but I’m having trouble filling in their derivations or getting any intuition about these estimators. I seem to be missing something. Can anyone suggest some references on just this area? submitted by /u/ov3rsight [link] [comments]
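For orientation, the simplest baseline is the naive “plug-in” estimator: estimate the joint distribution from empirical counts and evaluate the mutual-information formula directly. It is known to be biased, and the papers in this area largely exist to improve on it, but it pins down exactly what is being estimated. A minimal sketch for discrete samples (function name and test data are made up for illustration):

```python
import numpy as np

def plugin_mutual_information(x, y):
    """Naive plug-in estimator of I(X;Y) in nats for discrete samples:
    estimate the joint and marginal distributions from empirical counts,
    then evaluate I = sum_{x,y} p(x,y) * log(p(x,y) / (p(x) p(y)))."""
    x, y = np.asarray(x), np.asarray(y)
    n = len(x)
    xs, x_idx = np.unique(x, return_inverse=True)
    ys, y_idx = np.unique(y, return_inverse=True)
    joint = np.zeros((len(xs), len(ys)))
    np.add.at(joint, (x_idx, y_idx), 1.0)  # count co-occurrences
    joint /= n
    px = joint.sum(axis=1, keepdims=True)  # marginal of X (column)
    py = joint.sum(axis=0, keepdims=True)  # marginal of Y (row)
    mask = joint > 0                       # 0 * log 0 = 0 by convention
    return float(np.sum(joint[mask] * np.log(joint[mask] / (px @ py)[mask])))

# X and Y identical: I(X;Y) = H(X) = log 2 for a fair coin
x = np.array([0, 1] * 500)
print(plugin_mutual_information(x, x))  # ~0.693 (log 2)
```

The bias comes from plugging noisy empirical frequencies into a nonlinear formula; the corrections in the literature (and the continuous-variable estimators) are where it gets hard.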

Interested in learning how to calculate sports playoff probabilities, looking for any guidance I can get.

How does a site like this calculate the chances of each team finishing in each position? If anyone has any resources or advice on how this is done, it would be greatly appreciated. I’m a huge fan of this site but would like to make a more modern version of it. submitted by /u/lIlIllIIllII [link] [comments]
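Sites like this generally don’t have a closed-form answer; the common approach is Monte Carlo: simulate the remaining schedule many times under some game-outcome model and count how often each team lands in each final position. A toy sketch, assuming a soccer-style 3/1/0 points system and a made-up coin-flip game model with alphabetical tiebreaks (a real site would plug in team-strength ratings and the league’s actual tiebreak rules):

```python
import random

def playoff_odds(points, remaining_games, n_sims=10_000, p_home_win=0.5):
    """Monte Carlo estimate of each team's chance of finishing in each
    final-standings position.
    points: dict team -> current points
    remaining_games: list of (home, away) fixtures still to be played."""
    teams = sorted(points)
    finish = {t: [0] * len(teams) for t in teams}
    for _ in range(n_sims):
        pts = dict(points)
        for home, away in remaining_games:
            r = random.random()
            if r < p_home_win:                 # assumed home-win probability
                pts[home] += 3
            elif r < p_home_win + 0.25:        # assumed draw probability
                pts[home] += 1
                pts[away] += 1
            else:
                pts[away] += 3
        # sort by points, ties broken alphabetically (a simplification)
        table = sorted(teams, key=lambda t: (-pts[t], t))
        for pos, t in enumerate(table):
            finish[t][pos] += 1
    return {t: [c / n_sims for c in counts] for t, counts in finish.items()}

odds = playoff_odds({"A": 10, "B": 8, "C": 3}, [("A", "B"), ("B", "C")])
print(odds["A"])  # probability A finishes 1st, 2nd, 3rd
```

The modeling effort all goes into the per-game win probabilities (Elo ratings, Poisson goal models, etc.); the simulation loop itself stays this simple.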

Code (tutorial) implementation of Belief Propagation?

I’ve been looking into probabilistic programming etc., but haven’t really managed to wrap my head around how message-passing algorithms work. Online searches for example code mostly lead to optimized packages which are painful to parse. Is anyone aware of tutorial-like implementations (preferably in a relatively high-level language) of belief propagation? Thanks! submitted by /u/Arisngr [link] [comments]
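For what it’s worth, on a tree-structured graph (a chain being the simplest case) sum-product belief propagation reduces to two message sweeps and fits in a few lines. Here is a minimal illustrative sketch, not any particular package’s API, with made-up toy potentials:

```python
import numpy as np

def chain_bp(unary, pairwise):
    """Sum-product belief propagation on a chain x1 - x2 - ... - xn.
    unary:    list of length-k arrays, node potentials phi_i(x_i)
    pairwise: list of k x k arrays, edge potentials psi_i(x_i, x_{i+1})
    Returns the exact marginal p(x_i) for every node; on a tree,
    one forward and one backward sweep of messages suffice."""
    n, k = len(unary), len(unary[0])
    # forward messages: m_fwd[i](x_i) = sum over x_{i-1} of
    #   phi(x_{i-1}) * m_fwd[i-1](x_{i-1}) * psi(x_{i-1}, x_i)
    m_fwd = [np.ones(k) for _ in range(n)]
    for i in range(1, n):
        m_fwd[i] = pairwise[i - 1].T @ (unary[i - 1] * m_fwd[i - 1])
    # backward messages, same thing from the right
    m_bwd = [np.ones(k) for _ in range(n)]
    for i in range(n - 2, -1, -1):
        m_bwd[i] = pairwise[i] @ (unary[i + 1] * m_bwd[i + 1])
    # belief at a node = its own potential times all incoming messages
    marginals = []
    for i in range(n):
        b = unary[i] * m_fwd[i] * m_bwd[i]
        marginals.append(b / b.sum())
    return marginals

# tiny 3-node binary chain with arbitrary potentials
phi = [np.array([0.9, 0.1]), np.array([0.5, 0.5]), np.array([0.2, 0.8])]
psi = [np.array([[0.8, 0.2], [0.2, 0.8]])] * 2
for m in chain_bp(phi, psi):
    print(m)
```

On graphs with loops the same message updates are iterated to (hopefully) a fixed point, which is “loopy” BP; the chain case is where the mechanics are easiest to see, and you can check the output against brute-force enumeration of all 8 configurations.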

How do I calculate the average, median, and deviation for a set that has “>X” values?

For context: I’m analyzing a series of microbiological results, more specifically plate counts. A plate count can return values of 0 if no microorganisms grow, then 1, 2, and so on up to 300. A plate that has more than 300 colonies is only counted until 300 is reached and is then marked as “>300”. The same goes for other test methods that cap at 2000. So I need to calculate an average value of microorganisms, but obviously >300 and 300 values or all
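For what it’s worth, counts recorded as “>300” are right-censored observations. One useful fact is that the median is unaffected by the censoring as long as fewer than half the plates are censored, while the mean is only bounded from below (substituting 300 understates it); survival-analysis methods (Kaplan-Meier-type estimators, Tobit-style models) are the standard way to go further. A small sketch with made-up counts, where “>300” is stored as 300:

```python
import numpy as np

def censored_summary(values, cap=300):
    """Summary statistics for right-censored plate counts.
    values: raw counts, with censored plates recorded at the cap
    (">300" stored as 300). Returns the median (exact as long as
    fewer than half the plates are censored), a lower bound on the
    mean (the true mean is unknown because ">300" could be anything
    above 300), and the censored fraction."""
    v = np.asarray(values, dtype=float)
    censored_frac = np.mean(v >= cap)
    # the sample median only depends on the ORDER of the upper half,
    # so censoring above the cap cannot move it if < 50% are censored
    median = np.median(v) if censored_frac < 0.5 else None
    mean_lower_bound = v.mean()
    return median, mean_lower_bound, censored_frac

counts = [12, 45, 300, 88, 150, 300, 30]  # two plates read ">300"
med, mean_lb, frac = censored_summary(counts)
print(med, mean_lb, frac)
```

The standard deviation has the same problem as the mean: any number computed after substituting the cap is a biased placeholder, so it should be reported as such.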

Prerequisites for Stochastic Differential Equations?

I plan on taking Stochastic Differential Equations next summer and need to know what prerequisites I should have under my belt. I’m in an Applied Math Master’s based in the engineering school. In undergrad I took Multivariable Calculus, Calculus-based Probability and Statistics, and Applied Linear Algebra. Right now (first semester in the M.S.): Statistical Regression and Computational Statistics (both require Calc 3, Linear Algebra, and programming, and are about 50% proof-based each). Spring of next year: Matrix Theory (advanced proof-based linear algebra), plus either Monte Carlo Methods or Ordinary Differential Equations. The ODE class is a graduate-level class that dives deeper than a typical undergraduate course. I’m wondering if you guys think I can skip ODE, take Monte Carlo Methods (very applicable to my future in Financial Engineering), self-study the ODE material, and still be ready for stochastic calc? I can post the syllabi for the courses if needed. This is the SDE book: by Øksendal submitted by /u/tkfriend89 [link] [comments]

Seeking Minitab assistance

Hi, I would love some Minitab assistance with just some simple boxplots/histograms using data that looks like the following: When I attempt to do boxplots sorted by groups of different categorical variables, it always plots one chart with the three groups as one predictor, but I want three different boxplot groups in the same graph. I don’t know if that makes much sense, but any help would be appreciated. submitted by /u/boardgameguy123 [link] [comments]

Should one assume equal likelihood of all possibilities in the absence of information?

I got into a debate with someone online and it eventually came down to this question. I was saying that in the absence of information about the likelihood of an event, we should assume equal probability of all outcomes. For example, suppose we’re shown a house and told that behind it is either a fox, a bear, or a dog, but we’re given no information about, say, the populations of foxes, dogs, or bears in that area. Then, I claimed, it would be correct to say that, given the information we have, there is a 1/3 probability that there is a bear behind that house. It may be that there are actually a hundred times as many dogs as bears or foxes in the area, making the probability of the animal being a bear, given that information, much lower than 1/3. But since we’re not given that information, I said it would be correct to say the probability is 1/3, since all probabilities are contingent on the information given. The person I was talking to said that would be inaccurate and that you would have to call the probability “undefined”, so to speak, since we’re not given enough information to come up with a meaningful probability. I was hoping someone with a statistics degree could tell me who’s right, and whether there’s some theorem or postulate in statistics we can point to to settle it. submitted by /u/mddrill [link] [comments]
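For reference, the 1/3 position has a standard name: the principle of indifference (equivalently, a uniform or maximum-entropy prior over the three outcomes). Whichever side one takes on the philosophy, the arithmetic in the post checks out: both numbers fall out of normalizing whatever abundance information you condition on. A tiny sketch with hypothetical counts:

```python
from fractions import Fraction

def prob_bear(counts):
    """P(the animal is a bear) given relative abundances for
    {'fox', 'bear', 'dog'}: just normalize the counts."""
    return Fraction(counts['bear'], sum(counts.values()))

# no information: the principle of indifference assigns equal weight
print(prob_bear({'fox': 1, 'bear': 1, 'dog': 1}))    # 1/3
# informed: dogs 100 times as common as foxes or bears
print(prob_bear({'fox': 1, 'bear': 1, 'dog': 100}))  # 1/102
```

Both answers are conditional probabilities; they differ only in what information is being conditioned on.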

Why is Kullback-Leibler divergence a good way to compare two probability distributions? I’m unable to understand the intuition behind the advantage/appropriateness it has over, say, mean squared error or cross entropy.

Let’s say you have a bag with 100 balls in 4 colors: red, white, green, and blue. The actual probability for each color is [0.36438797, 0.12192962, 0.19189483, 0.32178759].

But let’s say my model comes up with two predictions for this distribution, with the KLD and MSE values as follows:

P(x): [0.37300551, 0.12188121, 0.18509246, 0.32002081], KLD: 0.0002, MSE: 0.688

Q(x): [0.33014522, 0.03053611, 0.30264458, 0.33667409], KLD: 0.1028, MSE: 0.1459

Why is P(x) better than Q(x), or vice versa?

submitted by /u/bizzarebrains [link] [comments]
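One way to build intuition is to compute both quantities from their definitions on the numbers above. Assumptions in this sketch: KLD means KL(actual || model) with natural logs, which reproduces the quoted KLD values; MSE is taken as the plain mean squared difference of the probability vectors, which gives much smaller numbers than those quoted, so the post’s MSE figures are presumably on some other scale. The point the comparison makes: KL weighs each error by its ratio to the true probability, so Q predicting 0.03 for a color whose true probability is 0.12 (off by a factor of 4, but only 0.09 in absolute terms) dominates its KLD.

```python
import numpy as np

p_true = np.array([0.36438797, 0.12192962, 0.19189483, 0.32178759])
P = np.array([0.37300551, 0.12188121, 0.18509246, 0.32002081])
Q = np.array([0.33014522, 0.03053611, 0.30264458, 0.33667409])

def kld(p, q):
    """KL(p || q) in nats: expected log-likelihood ratio under the true p.
    Errors on low-probability outcomes are penalized by their ratio,
    not their absolute difference."""
    return float(np.sum(p * np.log(p / q)))

def mse(p, q):
    """Plain mean squared difference between the probability vectors."""
    return float(np.mean((p - q) ** 2))

print(kld(p_true, P), mse(p_true, P))  # KLD ~ 0.0002
print(kld(p_true, Q), mse(p_true, Q))  # KLD ~ 0.1028
```

KL divergence is also, up to a constant, the expected extra log-loss (or wasted code length) from using the model instead of the truth, which is why it, rather than squared error on probabilities, is the natural fit for comparing distributions; cross entropy differs from it only by the entropy of p, a constant in the model.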