Like what common issues are known for your particular model/brand? Have you experienced any of these issues yourself? How much did it cost to fix? Any issues you experienced that aren’t common? Any warnings to future owners of that car/brand? Would you buy the car again knowing what you now know? How on top of maintenance and repairs are you?
I am currently working on a confirmatory factor analysis using AMOS. I need some help understanding the concept of the ‘Estimates’. In my output, I have the Regression Weights, all with p < .001. The next section is “Standardized Regression Weights”, where I get an ‘Estimate’ for every factor. What can I infer from those numbers? I get that the p-value needs to be significant – but what does the estimate value itself mean? Are there any limits I have to consider (should it be negative/positive, or above/below some cutoff like .xx)? I haven’t found any literature yet that explains the values more in-depth.
I would be very happy if someone could explain this concept to me.
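For what it’s worth, a standardized loading is just the unstandardized regression weight rescaled by the standard deviations of the latent factor and the observed indicator; when an indicator loads on only one factor, it can be read as the correlation between the two. A minimal sketch of that rescaling, with made-up numbers that are not from any real AMOS output:

```python
import math

# Hedged sketch: how a standardized loading relates to an unstandardized
# regression weight in a CFA. All numbers here are hypothetical.
b = 1.25              # unstandardized regression weight (loading)
var_factor = 0.64     # estimated variance of the latent factor
var_indicator = 1.44  # model-implied variance of the observed indicator

# standardized loading = b * SD(factor) / SD(indicator)
std_loading = b * math.sqrt(var_factor) / math.sqrt(var_indicator)
print(round(std_loading, 4))  # 0.8333
```

Because it is on a correlation-like scale, commonly cited rules of thumb treat standardized loadings of roughly .5 (or, more strictly, .7) and above as acceptable, but those cutoffs are conventions, not hard limits.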
Reposting because last time I got automodded and the only response I got was some troll.
I’m thinking of channels like 3blue1brown and numberphile/computerphile that put out videos showing fun and interesting things in math. I have very little grasp of “pure” math and more abstract stuff, so while I like to think I know quite a bit more than the average guy, I’ve been unable to discern what is legitimate from what I’ve been told is crank stuff, like this one in particular. So I’m kinda wondering: what are some good videos/books and such for someone interested in math but without a background beyond general college calculus?
What else is out there, in any form of media, that covers interesting things in math intended for a lay audience and doesn’t dive into crank territory?
I want to know if this is the correct way to calculate probability.
The event has 2 options: A happens or B happens. A has a 0.03% chance, B 99.97%. But the event happens multiple times, so I used this formula to calculate the total chance of A happening at least once after Y attempts:
C = 1 - (1 - A)^Y
But on every 10th event the chance of A is increased to 0.045% (by 50%); let’s say this chance is D, but it still gives the same result. So here I tried two formulas, one I called realistic and the other simplified.
C = 1 - (1 - A)^9 * (1 - D)^1 (this is the chance after 10 attempts).
Simplified (I just said it is the same as if the chance increased by 5% on every attempt instead of by 50% once in 10, so we have E = 0.0315%):
C = 1 - (1 - E)^10 (again the chance after 10 attempts).
The results I got were very similar but not the same.
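Plugging the post’s numbers into the two formulas shows just how small the gap is – they agree to roughly the 8th decimal place:

```python
# Numeric comparison of the "realistic" and "simplified" formulas above.
a = 0.0003    # 0.03% base chance of A
d = 0.00045   # 0.045% chance on every 10th attempt
e = 0.000315  # 0.0315% averaged chance

c_realistic = 1 - (1 - a) ** 9 * (1 - d) ** 1
c_simplified = 1 - (1 - e) ** 10

print(c_realistic, c_simplified)  # both ~0.003146, differing far past the 6th decimal
```

The discrepancy comes from the nonlinearity of 1 - (1 - p)^n: averaging the per-attempt chance before compounding is not exactly the same as compounding the mixed chances, though at probabilities this small the two are nearly indistinguishable.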
This event happens a lot, like 10,000 times, so I am more interested in the average chance per event of A/D/E happening (same result, just a different chance). So I divided the previous two formulas by 10 to get the average chance of A happening per event, like this:
C = (1 - (1 - A)^9 * (1 - D)^1) / 10
C = (1 - (1 - E)^10) / 10
So basically I am back where I started, knowing A has a 0.03% chance of happening in an event, only that this A now has a slightly higher chance on average because it is increased to 0.045% on every 10th attempt.
So how correct is that, and why is there a small difference between the results of the two formulas?
I am assuming it is because of rounding rather than the formula itself – algebraically C = 1 - (1 - A)^1 should give exactly A, yet if you calculate it with 0.03 you don’t get exactly 0.03 as you should but rather 0.029999997…
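That stray 0.029999997 is indeed floating-point rounding in whatever tool did the arithmetic, not an error in the formula. Even in double precision the identity doesn’t come out bit-exact:

```python
# Algebraically 1 - (1 - a)**1 equals a exactly; any discrepancy is
# floating-point rounding, not a flaw in the formula itself.
a = 0.03
c = 1 - (1 - a) ** 1
print(c)  # e.g. 0.030000000000000027 in double precision, not exactly 0.03
```

The error is on the order of 1e-17 here, far too small to matter for the probabilities in this post.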
Edit: My main question would actually be: can I assume that, in the long run, a 50% increase to 0.045% on every 10th repetition is on average the same as a 5% increase to 0.0315% on every repetition? Like, will the event happen the same number of times in those two scenarios?
I’m going to try to describe the problem without diving too much into the biology.
I did a pilot study (N of 5) and discovered that my treatment had a pretty high effect size (Cohen’s d was about 0.9-1.2 for 3 different genes from a gene expression assay). From the sample size calculation at 80% power, I determined that a final sample size of 10-12 would be required to reach statistical significance for this data.
So I repeated the experiment to get a final N of 12, and the data are not significant, though two of the three genes are relatively close to statistical significance, as they were in the pilot study. I’m wondering what the cause of this could be: is this just more biological variability that wasn’t captured in the pilot study, or was the pilot result a type I error?
Would it be advisable to repeat the experiment from scratch, requiring 24 more animals and burning more reagents, or would it be better to conclude that the effect isn’t really there and move on to something else?
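As a sanity check on the quoted sample size, here is a rough normal-approximation power calculation, assuming a one-sample/paired design, d = 0.9, two-sided α = 0.05, and 80% power (a t-based calculation would give a slightly larger n; the exact design in the post isn’t stated, so this is only a sketch):

```python
from math import ceil
from statistics import NormalDist

# Normal-approximation sample size for a one-sample / paired test:
# n = ((z_{1-alpha/2} + z_{power}) / d)^2
d = 0.9       # smallest pilot effect size
alpha = 0.05  # two-sided
power = 0.80

z_a = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96
z_b = NormalDist().inv_cdf(power)          # ~0.84
n = ceil(((z_a + z_b) / d) ** 2)
print(n)  # 10, consistent with the 10-12 quoted above
```

Note that a power calculation fed an effect size estimated from N = 5 inherits the huge uncertainty of that estimate; if the true d is smaller than the pilot suggested, an experiment sized this way is underpowered, which would also produce the “close but not significant” pattern described.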
Hey folks! Just wanted to ask a question that’s often on my mind. When working math problems, especially proof-type questions, how do I know when I should give up?
There are clearly two extremes to this. The first is that if you don’t see a way to solve the problem after looking at it for 10 seconds, you quit and look at the answer. Obviously this isn’t a good approach, because you’re not actually trying the problem, just looking at the answer. On the other hand, you could struggle with one problem for an entire day or more before looking for outside help. I also think this isn’t an optimal strategy, because it can waste a lot of time as you just bang your head against a wall.
So my thinking is that there must be some happy medium, and I was hoping to hear some of your thoughts on how to find it.