Is it possible to go from an online master's to a PhD? I've been accepted to NC State's online program (and hopefully TAMU's as well). I'm working full-time right now at a good job, and my employer has agreed to let me work part-time while I study.
In a few years, would it be possible to take my master's from NC State or TAMU and use it to get into a PhD program? I understand that research is important, and outside of some consulting work through the schools, I may not have many options. I have no prior research experience (econ undergrad degree from a top-10 public school). NC State's degree would also be a Master of Statistics rather than an MS (I'm not sure how much that matters).
The deadlines for a PhD have long since passed, and I fully intend to start school this fall or next spring. I know that Rice shortens its PhD program if you arrive with a master's, and I'm hoping to do something like that. I feel like a master's will help me better identify what I might enjoy researching.
Thank you for any help!
I understand posts like these might be nauseating at times, and I’m sorry about that.
I'm comparing May's data to April's for some work, and something very curious has happened. We are looking at the average time spent on one process. It's the same process every time, but we can split it into two (almost equal) subsets.
When split this way, both subsets trend upwards from April to May, yet when combined, the entire set trends downwards. How can that be?
I googled around, and the only thing that came up was Simpson's Paradox (https://en.wikipedia.org/wiki/Simpson%27s_paradox), but I don't think that applies here.
Any ideas? This is truly baffling to me.
Edit: Here’s the plot for April and May: https://imgur.com/U2gLjOh
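A toy sketch (entirely made-up numbers, not the real data) of how both subsets can rise while the combined average falls, when the mix between the subsets shifts from one month to the next:

```python
# Made-up numbers: average time per process run, April vs May.
# Subset A is slow, subset B is fast; both get slower in May,
# but May has far more of the fast subset B, so the overall mean drops.
april = {"A": [10.0] * 80, "B": [2.0] * 20}   # subset -> list of times
may   = {"A": [11.0] * 20, "B": [3.0] * 80}

def mean(xs):
    return sum(xs) / len(xs)

for name in ("A", "B"):
    # Each subset's average increases from April to May.
    print(name, mean(april[name]), "->", mean(may[name]))

overall_april = mean(april["A"] + april["B"])  # (800 + 40) / 100 = 8.4
overall_may   = mean(may["A"] + may["B"])      # (220 + 240) / 100 = 4.6
print("overall:", overall_april, "->", overall_may)  # overall decreases
```

Whether this matches the real data depends on whether the relative sizes of the two subsets changed between the two months.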
I have my econ stats final tomorrow, so I'm just going over some past exams. Most of the material I'm okay with, but this one question has confused me.
It's a random sample of 16, and the question asks to test the validity of a statement at the 5% significance level and at 1%.
Because the sample is less than 30, I use t with n − 1 degrees of freedom, so at 5% I'm testing against 2.131 and at 1% against 2.947. However, the mark scheme uses the critical values from the 10% and 2% columns.
The only thing I can think of is that it's a one-tailed test rather than two-tailed, but in that case do I just double the significance level to make it one-tailed?
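For reference, the critical values involved can be checked directly. This is a minimal sketch assuming n = 16 (so 15 degrees of freedom) and that SciPy is available:

```python
from scipy.stats import t

df = 16 - 1  # n = 16 -> 15 degrees of freedom

# Two-tailed critical values: alpha is split between the two tails.
two_tail_5 = t.ppf(1 - 0.05 / 2, df)   # ~2.131
two_tail_1 = t.ppf(1 - 0.01 / 2, df)   # ~2.947

# One-tailed critical values: all of alpha sits in one tail.
one_tail_5 = t.ppf(1 - 0.05, df)       # ~1.753
one_tail_1 = t.ppf(1 - 0.01, df)       # ~2.602

# In a two-tailed table, the 10% column gives the one-tailed 5% value,
# and the 2% column gives the one-tailed 1% value:
assert abs(one_tail_5 - t.ppf(1 - 0.10 / 2, df)) < 1e-12
assert abs(one_tail_1 - t.ppf(1 - 0.02 / 2, df)) < 1e-12
```

This is consistent with a mark scheme reading a two-tailed table for a one-tailed test: you look up 2×alpha in the two-tailed columns rather than doubling the significance level of the test itself.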
Any help? I can upload some pictures of the question and mark scheme if needed.
I need help so desperately. Here's my problem: I need to use three independent variables and one dependent variable to give insight into a research question, using SPSS. However, ALL my variables are Likert scales. I figured I might just use chi-square tests for all of them since they are categorical, but since this is a very big data set, everything turns out significant with very high standardized residuals, so I basically get no useful results.
My question is: could I treat them as interval/continuous and run a regression analysis? Would I need to make all of the independent variables binary? What about the dependent variable: would that also have to be binary? Since they are all Likert scales, I could, for example, recode them as 0 = strongly agree or agree, 1 = neither agree nor disagree, disagree, or strongly disagree.
Would ANOVA be better? But it seems like those also all turn out significant. With regression analysis I would at least get the R² value, which tells me how well we can explain the result. Or is there another way to see how strong an association is in ANOVA, other than significance?
What would you do? I would appreciate your help so much.
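A minimal sketch of the dichotomization described above, using the same 0/1 coding (the response strings are hypothetical labels for a 5-point item):

```python
# Collapse a 5-point Likert item into two categories:
# 0 = agreement (strongly agree, agree),
# 1 = everything else (neutral or disagreement).
recode = {
    "strongly agree": 0, "agree": 0,
    "neither agree nor disagree": 1, "disagree": 1, "strongly disagree": 1,
}

responses = ["agree", "disagree", "strongly agree", "neither agree nor disagree"]
binary = [recode[r] for r in responses]
print(binary)  # [0, 1, 0, 1]
```

Note that this throws away the ordering information within each half of the scale, which is part of the trade-off being asked about.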
Apologies in advance if I sound naive.
Second-year undergrad pursuing a stats + CS major here. I'm hoping to get an internship/placement after my third year and was wondering if there are any certifications that would give me a competitive edge.
A friend of mine who is doing stats & econ is taking some SOA exams (P and the VEE requirements) and encouraged me to do the same, but I'm not sure whether that is necessary for my field. How doable is the SAS certification for an undergrad? Is there any other accreditation I can attain by exam?
Thanks a lot!
I'm having trouble grasping the "distribution" part. I understand how the inference process works and how we use a prior to find a posterior and then repeat the process. I was hoping for an explanation of how a posterior distribution is represented (is it just a graph with points on it?) and how a posterior is different from a posterior distribution (is it just one vs. many?).
I'm kind of shooting in the dark here, but would it be correct to imagine that a posterior distribution is just ALL the recorded posterior values throughout the inference process? If so, how is it represented on a graph (what are the x and y axes?).
Sorry if any of these questions sound obvious; I've never taken a statistics class before, so all the terminology goes a bit over my head.
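For what it's worth, here is a minimal grid-approximation sketch (made-up coin-flip data) of what a posterior distribution looks like on a graph: the x-axis is the candidate parameter values, and the y-axis is the probability assigned to each candidate.

```python
# Posterior distribution for a coin's heads-probability p,
# after observing 7 heads in 10 flips, starting from a flat prior.
# The posterior is a whole curve over p, not a single number.
grid = [i / 100 for i in range(101)]              # candidate values of p (x-axis)
prior = [1.0] * len(grid)                          # flat prior over the grid
like = [p**7 * (1 - p)**3 for p in grid]           # likelihood of 7 heads, 3 tails
unnorm = [pr * li for pr, li in zip(prior, like)]  # prior x likelihood
total = sum(unnorm)
posterior = [u / total for u in unnorm]            # normalized: sums to 1 (y-axis)

best = grid[posterior.index(max(posterior))]       # most probable candidate
print(best)  # 0.7
```

Here the single number 0.7 is just a summary (the mode); the "posterior distribution" is the whole `posterior` list paired with `grid`.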
I got really confused when learning how to calculate "at least one" probabilities.
My first question is about this website: https://study.com/academy/lesson/the-at-least-one-rule-for-independent-events.html.
They talk about the probability of at least one person winning the grand prize, but does that really count as an "at least one" probability when there can only be one winner? Those two events don't seem independent, because one person winning definitely influences whether the other person wins (there's only one winner).
Whereas, in a problem like this:
"Example 1: Defective chips
A manufacturer of processing chips knows that 2% of its chips are defective in some way. Suppose an inspector randomly selects 4 chips for an inspection. Assuming the chips are independent, what is the probability that at least one of the selected chips is defective?"
It’s clear to me that those events are independent.
My second question: looking at the "defective chips" example above, .98^4 is the probability that across all 4 trials of selecting chips, none are defective. So you compute 1 − .98^4 to get the probability that at least one is defective. However, if .98^4 is the probability that all 4 chips are NOT defective, then shouldn't 1 − .98^4 be the probability that ALL 4 chips ARE defective (as opposed to at least one chip being defective), since it's the complement?
Also, a different question using that same "defective chips" example: why can't one find the probability of trial 1 or trial 2 or trial 3 or trial 4 having a defective chip by computing .02 + .02 + .02 + .02? In probability, "or" means you add up the probabilities, so can't you just add them up to get the probability of at least one of those occurring?
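Working the quantities from the defective-chips example directly:

```python
# Each chip is independently defective with probability 0.02; inspect 4 chips.
p_def = 0.02
n = 4

p_none = (1 - p_def) ** n      # all 4 NOT defective: 0.98**4
p_at_least_one = 1 - p_none    # complement of "none defective"

# The complement of "all 4 NOT defective" is "one or more defective",
# not "all 4 defective" -- "all 4 defective" is a much smaller event:
p_all_def = p_def ** n         # 0.02**4

# Plain addition double-counts the overlaps (e.g. chip 1 AND chip 2 both
# defective is counted twice), so it overshoots the true union probability:
naive = p_def * n              # 0.08
print(round(p_at_least_one, 8), round(naive, 8))
```

The gap between the two printed numbers is exactly the double-counted overlap terms that inclusion-exclusion would subtract back out.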
Thanks for the help.