Hi guys, just checking if I’m able to do a regression analysis using my survey data. I have 60 surveys, 30 from one location and 30 from another. I asked the respondents how much time and how much money they usually spend when visiting the location. Then I asked them to fill out a 12-question semantic differential table to gauge their feelings towards the atmosphere of that location. In a basic sense, the hypothesis would be that one location’s atmosphere makes people feel better and thus spend more time and money there. Is this enough? Have you any ideas of how else I could present my data? Thanks a lot. submitted by /u/Wagamamamany
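For a setup like this, one simple starting point is a least-squares fit of spend on the respondent’s mean atmosphere score, plus a per-location mean comparison. A minimal stdlib sketch — all numbers below are made-up placeholders, not the poster’s survey data:

```python
from statistics import mean

# (mean atmosphere score, money spent) -- hypothetical respondents
site_a = [(5.2, 30.0), (6.1, 45.0), (4.8, 25.0), (5.9, 40.0)]
site_b = [(3.1, 12.0), (2.8, 10.0), (3.9, 18.0), (3.5, 15.0)]

def ols_slope_intercept(pairs):
    """Least-squares fit y = a + b*x from (x, y) pairs."""
    xs = [p[0] for p in pairs]
    ys = [p[1] for p in pairs]
    xbar, ybar = mean(xs), mean(ys)
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in pairs)
    b = sxy / sxx
    return ybar - b * xbar, b

a, b = ols_slope_intercept(site_a + site_b)
print(f"spend = {a:.2f} + {b:.2f} * atmosphere")
print("mean spend, site A:", mean(y for _, y in site_a))
print("mean spend, site B:", mean(y for _, y in site_b))
```

With 30 respondents per site you could also add location as a dummy predictor, or simply pair the regression with a two-sample comparison of time/money between locations.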
Hey guys, so for my research I translated validated scales and piloted them to improve the translation. However, I’m wondering if there’s a way to check the reliability of these translated scales before the main study. Most sources recommend 5 respondents per item (I have about 30 items in total across ~8 scales), which I’m not going to reach by far; I have about 35 respondents. I can’t find a clear source on minimum requirements to calculate Cronbach’s alpha, which would be my next move. Many people seem to say “if it’s just a pilot, go ahead”, but I couldn’t find any articles supporting this. Does anyone have an idea how to best approach this? It’s for a small project and I don’t have time to gather many more participants for the pilot. Greatly appreciated! submitted by /u/Eu4iaa
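Whatever the sample-size question resolves to, Cronbach’s alpha itself is mechanically computable on any pilot sample. A minimal sketch in plain Python — rows are respondents, columns are items of one hypothetical translated scale, and the numbers are invented:

```python
from statistics import variance

scale = [  # 5 respondents x 4 items (made-up pilot data)
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
]

def cronbach_alpha(rows):
    k = len(rows[0])                      # number of items
    items = list(zip(*rows))              # columns = per-item scores
    item_vars = sum(variance(col) for col in items)
    total_var = variance([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - item_vars / total_var)

print(round(cronbach_alpha(scale), 3))
```

The caveat is precision, not computability: with ~35 respondents the confidence interval around alpha will be wide, so it is worth reporting the interval rather than the point estimate alone.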
I apologize if this is a very simplistic question but I just can’t seem to find a clear answer anywhere. I am wondering if there is a way to determine the likelihood that a single score or value along a continuum is part of one distribution or another, given the means and standard deviations for each distribution. To elaborate a bit more: I’m a clinical neuropsychologist and am looking to enhance my diagnostic impressions, mostly in determining whether someone has dementia or not. The research literature is full of studies showing means and standard deviations for healthy people and for people with dementia on standard tests. I’d like to take a patient’s single score on a test and be able to write in a report something like, “Given this person’s score on X test, there is a Y likelihood of belonging to a healthy group and a Z likelihood of belonging to a dementia group.” I don’t think that I’m looking for a likelihood ratio because that’s associated with a cutoff score and the sensitivity/specificity values associated with that cutoff. I’m looking for probabilities associated with a single score that doesn’t depend on a cutoff. I guess I may be able to use just a simple z-score or percentile, which I already do all the time, but that speaks to the single score and all scores above or below it. I really want a method that can take two different means/standard deviations into account. In other words, if an effect size is thought to be pretty big, I should be able to take advantage of that discrepancy between groups and utilize it clinically. Hope that makes sense, thanks in advance for your help. submitted by /u/NPDoc
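One way to get exactly the Y/Z numbers described above is to treat each group’s published mean/SD as a normal distribution and apply Bayes’ rule to the two densities at the observed score. A stdlib sketch — the means, SDs, and base rate below are placeholders, not real norms, and the approach assumes scores are roughly normal within each group:

```python
from statistics import NormalDist

healthy = NormalDist(mu=27, sigma=2)    # healthy-group norms (assumed)
dementia = NormalDist(mu=20, sigma=4)   # dementia-group norms (assumed)
prior_dementia = 0.30                   # base rate in your setting (assumed)

def p_dementia(score):
    """P(dementia | score) via Bayes' rule on two normal densities."""
    d = prior_dementia * dementia.pdf(score)
    h = (1 - prior_dementia) * healthy.pdf(score)
    return d / (d + h)

for s in (18, 23, 27):
    print(s, round(p_dementia(s), 3))
```

Note that the answer depends on the base rate you assume, so it is worth stating that assumption in the report; the larger the effect size between groups, the less sensitive the result is to it.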
Hi guys, sorry if this is really simple, but I haven’t used a statistical program for a few years and I’m very rusty. After performing a standard regression analysis on some product data I scraped from the web, I found it to be suffering from heteroscedasticity. How would I go about correcting this? submitted by /u/kinkwik
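One common fix is to keep the OLS coefficients but report heteroscedasticity-robust (White/HC0) standard errors instead of the classical ones. A sketch for the simple-regression case, with illustrative data whose spread grows with x:

```python
from statistics import mean

xs = [1, 2, 3, 4, 5, 6, 7, 8]
ys = [2.1, 4.3, 5.8, 8.9, 9.4, 13.8, 12.9, 17.5]

xbar = mean(xs)
sxx = sum((x - xbar) ** 2 for x in xs)
b = sum((x - xbar) * (y - mean(ys)) for x, y in zip(xs, ys)) / sxx
a = mean(ys) - b * xbar
resid = [y - (a + b * x) for x, y in zip(xs, ys)]
n = len(xs)

# Classical OLS standard error assumes constant error variance:
s2 = sum(e * e for e in resid) / (n - 2)
se_classical = (s2 / sxx) ** 0.5

# HC0: weight each squared residual by its own contribution to the slope:
se_robust = (sum(((x - xbar) ** 2) * e * e
                 for x, e in zip(xs, resid)) / sxx ** 2) ** 0.5

print(f"slope={b:.3f}  SE classical={se_classical:.3f}  robust={se_robust:.3f}")
```

In practice you would let a package do this (e.g. statsmodels supports `fit(cov_type='HC3')` on an OLS model); weighted least squares or transforming the response (e.g. log prices) are the other standard remedies when the variance pattern is known.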
So a question wants to figure out the z critical score. Online I found a formula: z = 1 − (alpha / 2), i.e. z = 1 − (0.05/2) → z = 1 − 0.025 → z = 0.975. But according to the calculator on the same page, and to the answers others are getting off a chart, the correct answer is 1.645. What gives? Why am I not getting this number?
- I’ve found a basic/most common chart that gives the values, but it doesn’t have one for 0.1, which I need. Is it the same as 0.01? I can’t find a chart that goes up to 0.1. Thanks.
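For both questions above: 1 − α/2 is the *cumulative probability* (0.975), not z itself; z is the inverse normal CDF of that probability, and 0.1 is not the same as 0.01. Python’s standard library can compute the critical values directly:

```python
from statistics import NormalDist

def z_critical(alpha, two_tailed=True):
    """Critical z as the inverse normal CDF of the tail probability."""
    p = 1 - alpha / 2 if two_tailed else 1 - alpha
    return NormalDist().inv_cdf(p)

print(round(z_critical(0.05), 3))                    # two-tailed alpha=0.05: 1.96
print(round(z_critical(0.05, two_tailed=False), 3))  # one-tailed alpha=0.05: 1.645
print(round(z_critical(0.10), 3))                    # two-tailed alpha=0.10: 1.645
```

So 1.645 is the one-tailed value for α = 0.05 (equivalently, the two-tailed value for α = 0.10), while the two-tailed α = 0.05 value is 1.96.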
Here’s a short “build” progression over the year. Now that I’ve owned my LS430 for a year, I think I can give a pretty good analysis of the car, which we’ve named Old Man Tan.
The numbers: In the past year I’ve owned it I put 24,000 miles on the car (bought it at 63k, now it has 87k), drove it from FL to CT, and also from CT to Arkansas and back. The only thing I have had to replace was the bank 1 O2 sensor, which of course was original and after 14 years was due anyway. I also sent my ECM out to SIA Electronics to get the trans circuitry rebuilt, which may or may not have even been necessary. I have averaged 21 mpg over these 24,000 miles and routinely average 26 mpg on highway trips at 80 mph.
The 4.3 3UZ V8 is so smooth you can balance a nickel on it at idle and rev the car to its redline without it falling over. In Drive, say when you’re sitting at a red light, there isn’t a bit of discernible vibration, to the point where if you have a cup of water in the cupholder the surface stays completely undisturbed.
I’ve always been a gearhead but this was my first ever Toyota product. I’ve come to admire and respect the engineering behind these cars so much I’ve found myself marvelling at random things as I replace them (read: fix things that aren’t broken). I did the timing belt myself in my garage and Toyota makes it so easy it’s almost foolproof. The oil filter is the same one that’s used in my Ford Ranger and is so easily accessible I can do an oil change in 5 min (not an exaggeration). Interior trim pieces are superbly crafted but held together so simply that my sound system installation was a joke and all the pieces fit back together with no squeaks or rattles.
I have a black ’98 LS400 as well (story for another day), but in my opinion the LS400 and LS430 have to be among the best utilitarian vehicles ever produced. They probably aren’t the best at any single thing, but they are competent in everything. There isn’t another car I’ve come across that can easily last 300k miles with just basic hand tools and a few spare parts.
I am working with a set of data that consists of a set of products, each with a set of “star” ratings, like Amazon. Each user reviews an item by assigning it a score from 0 to 5, and the dataset is the (nonnegative integer) count of each score for each item.
I am trying to fit a distribution to this dataset for use as a prior in a Bayesian ranking system of these items. Does it make more sense to fit the data to a Dirichlet distribution or a Dirichlet-multinomial distribution?
If I am omitting something (or if you think my thought process is completely wrong), let me know and I will update this post.
EDIT 1: I should note that there is a great degree of variance in the number of reviews for each item; they all have at least 50, but some have over 100 and a few have over 500! Additionally, there seems to be a positive correlation between number of reviews and review score. Just wanted to add this as I felt it was relevant.
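On the question above: the Dirichlet is the distribution over the latent probability vector of star categories, while the observed count vectors (with varying totals per item, as in the edit) are draws from a Dirichlet-multinomial; so fitting the prior across items is a Dirichlet-multinomial fit, but the prior itself is a Dirichlet. Either way, the conjugate update that makes it usable for ranking is simple. A sketch with an illustrative symmetric prior:

```python
# Symmetric Dirichlet prior over the six star categories 0..5
# (the strength 1.0 is illustrative, not a fitted value).
prior = [1.0] * 6

def posterior_mean_rating(counts, prior=prior):
    """Posterior expected star score: Dirichlet(prior + counts) mean."""
    post = [a + c for a, c in zip(prior, counts)]
    total = sum(post)
    return sum(score * a / total for score, a in enumerate(post))

few_reviews = [0, 0, 0, 0, 0, 4]      # 4 five-star reviews
many_reviews = [0, 0, 0, 0, 0, 400]   # 400 five-star reviews
print(posterior_mean_rating(few_reviews))   # shrunk toward the prior mean
print(posterior_mean_rating(many_reviews))  # close to 5
```

The shrinkage is what makes the item with 4 reviews rank below the item with 400 identical reviews, which is usually the behavior wanted in a ranking system; the reviews-vs-score correlation noted in the edit could additionally be handled by fitting the prior empirically across items.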
Sorry if this is the wrong place to ask; I haven’t delved into this side of reddit yet.
I drive a 2006 CVPI and have a laptop set up on the console. I just bought a RIDGID 100W power inverter/adapter to keep the laptop charged. It has an on/off switch; if I turn it off at the end of the night without unplugging it, is it still going to drain my battery? The last couple of days I’ve just been unplugging it, but if it’s possible to leave it plugged in it’d be much easier, as the plug is up and under the console. Thanks!