Logistic Regression: Negative Intercept & Negative IV Coefficient

Hello,

I fit a simple logistic regression and my intercept is -1.349, while my IV coefficient (the IV is binary as well) is -0.527.

  • Can I interpret this as a positive relationship between my DV & IV since they’re both negative?
  • I was advised to turn log odds into odds by exp(log odds). Do I do this for both my intercept and IV or just my IV?
  • I read on Wikipedia that to turn log odds into probability I would use this formula: 1/(1+exp(-(intercept + coefficient*IV))). Is this correct? I would prefer to work with probabilities rather than odds (see the sketch below).
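Here's a minimal sketch of what I think those conversions look like, using the coefficients above (and assuming my IV is coded 0/1); please correct me if this is wrong:

```python
import math

# Coefficients quoted above, on the log-odds scale
intercept = -1.349   # log odds of the outcome when IV = 0
iv_coef = -0.527     # change in log odds when the IV goes from 0 to 1

# Exponentiating the IV coefficient gives an odds ratio (a multiplicative effect);
# exponentiating the intercept gives the baseline odds when IV = 0.
baseline_odds = math.exp(intercept)   # ~0.26
odds_ratio = math.exp(iv_coef)        # ~0.59, i.e. the odds are lower when IV = 1

# Probability via the inverse logit: 1 / (1 + exp(-(intercept + coefficient * IV)))
def predicted_probability(iv):
    return 1 / (1 + math.exp(-(intercept + iv_coef * iv)))

print(predicted_probability(0))   # ~0.21, predicted probability when IV = 0
print(predicted_probability(1))   # ~0.13, predicted probability when IV = 1
```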

Additional information: I’m using Excel with the XLSTAT plug-in.

I just started learning regression so I apologize if this question is elementary.

submitted by /u/Calvin_klein_2593

Which statistical test should I use?

I’m confused about which statistical test to run. It’s a within-subjects design that used 3 crayfish. The crayfish were first given an injection of saline. A stimulus was introduced, and their response (a tail flip, pincer raising, retreating, or freezing) was recorded. This was done 10 times per crayfish. The same crayfish were then injected with caffeine, and the procedure was repeated.

My partner and I are stuck on whether it’s a repeated-measures ANOVA or a repeated-measures t-test, or something else.
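In case it helps, here’s roughly how we’ve laid out the raw trials (the response values below are placeholders, not our real data), just to show the structure we have to work with:

```python
import pandas as pd

# One row per trial: 3 crayfish x 2 conditions (saline, caffeine) x 10 trials each.
# The responses below are invented placeholders, not real observations.
trials = pd.DataFrame({
    "crayfish":  ["A"] * 20 + ["B"] * 20 + ["C"] * 20,
    "condition": (["saline"] * 10 + ["caffeine"] * 10) * 3,
    "response":  ["tail_flip", "freeze", "retreat", "pincer_raise", "tail_flip",
                  "freeze", "tail_flip", "retreat", "freeze", "tail_flip"] * 6,
})

# Behaviour counts per condition, collapsed across crayfish
print(pd.crosstab(trials["condition"], trials["response"]))

# Behaviour counts per crayfish and condition (keeps the within-subjects pairing)
print(pd.crosstab([trials["crayfish"], trials["condition"]], trials["response"]))
```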

submitted by /u/M1KETHEGR3AT

Statistical significance, “chance”, probability, and randomness

So I was having a debate with a researcher about the definition of “statistically significant”, which was given as:

“Highly unlikely to have occurred by chance.”

My issue with it was the use of the word “chance”. I was arguing that there are many reasons why you would get significant results that have nothing to do with “chance”. To me, “chance” is a fuzzy word.

They brought up a sample-size example using male vs. female. They also pointed out that I need to think about what chance means in statistics vs. in layman’s terms, which I agree with, but I have also seen non-laymen conflate chance with randomness. I did some digging and found this paper:

Probability and Chance in the Theory of Statistics

It focuses on how chance differs from probability, which I think starts to get toward a better definition:

p(data | null hypothesis or whatever assumption(s)/model)

From the paper:

Once given a particular premiss, it may be possible to give a probability a value with which everyone readily agrees; this value is, moreover, independent of further data or premisses unless these contradict the initial premiss, that is, all other knowledge is irrelevant. The probability may then be said to be a chance. This invariant character of a chance depends always on the recognition that the particular premiss essential to it is inserted in the data.
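I’m not sure this is the right way to think about it, but here’s a small simulation (all numbers invented) of how I read p(data | model): the probability only exists once the premiss/null model is fixed, and it says nothing about the other reasons a result could come out significant:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented example: we observed 60 heads in 100 flips.
observed_heads = 60
n_flips = 100

# Premiss / null model: the coin is fair (p = 0.5).
# Estimate p(data at least this extreme | fair coin) by simulating the null model.
sims = rng.binomial(n_flips, 0.5, size=100_000)
p_value = np.mean(sims >= observed_heads)
print(p_value)   # ~0.03, one-sided

# The value is conditional on the premiss. It says nothing about bias,
# measurement error, or any other non-"chance" reason the data could
# have come out looking "significant".
```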

I also found this discussing chance vs randomness:

Chance versus Randomness

Am I just being nitpicky? To what degree is the original definition “right”? They were adamant that it was right and that there was nothing really controversial about it.

Does anyone know of good papers/books that discuss how “chance” is defined or used in statistics?

Any other considerations?

submitted by /u/slimuser98

Need Help Comparing Two Qualitative Lab Tests

I’m working on validating a diagnostic test for use in a clinical laboratory. We are replacing one US Food and Drug Administration (FDA)-approved test with another. We have to do a small study to evaluate our ability to use the test to obtain accurate results. Our study does not need to be anywhere near as complex as something submitted to the FDA; we just need to show that, in our hands, the performance of the test is acceptably close to what the manufacturer reported to the FDA. The manufacturer provides information on the studies they conducted and the results. Our lab director decides what constitutes “acceptably close” performance.

I’ve done these qualitative test comparisons before but could never find anyone who could answer a question I had.

We do not have access to information on patient diagnoses. The best we can do when trying to determine if one test is accurate is comparing the results we obtain using the test being evaluated with the results obtained when the same specimens are tested using another method. This is not an optimal means of evaluating accuracy (both tests could be wrong) but we don’t have the data necessary to connect a test result to whether or not a patient actually has a disease.

When doing these comparisons I usually follow the method suggested by the Clinical Laboratory Standards Institute (CLSI) in their document User Protocol for Evaluation of Qualitative Test Performance (EP12-A2).

It uses a 2×2 contingency table with the columns designated as positive and negative for the comparison method and the rows designated as positive and negative for the method being evaluated. The number of specimens that are positive by both methods is entered into the top-left cell. The number of specimens that are positive with the method being evaluated but negative with the other method is entered into the top-right cell. The bottom two cells are filled with the numbers of negative/positive and negative/negative results.

Positive percent agreement and negative percent agreement between the two methods are then calculated. I could post those calculations in full, but this post is getting pretty long; a sketch of them is below.
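For reference, this is roughly how I do those calculations (the cell counts are invented for illustration; rows are the evaluated method, columns the comparison method, as described above):

```python
# 2x2 agreement table (counts are made up for illustration)
a = 45   # evaluated positive, comparator positive  (top-left)
b = 3    # evaluated positive, comparator negative  (top-right)
c = 2    # evaluated negative, comparator positive  (bottom-left)
d = 50   # evaluated negative, comparator negative  (bottom-right)

# Positive percent agreement: of the comparator's positives,
# the fraction the evaluated method also calls positive.
ppa = 100 * a / (a + c)

# Negative percent agreement: of the comparator's negatives,
# the fraction the evaluated method also calls negative.
npa = 100 * d / (b + d)

print(f"PPA = {ppa:.1f}%")   # 95.7%
print(f"NPA = {npa:.1f}%")   # 94.3%
```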

My question, which no one in the clinical lab field seems able to answer, is:

What do I do when one or both tests can yield an indeterminate result, necessitating a 3×2 or 3×3 contingency table?

submitted by /u/lastmile780

Biostatistics protocol – if you do subgroup analysis to show nothing goes wrong for certain subgroups, can you point out the need for p-value correction?

This is my first time helping out with protocol writing. They want to do subgroup analysis with their test to show that it doesn’t perform especially poorly for certain subgroups (gender, race, age, and several others).

We all know subgroup analysis is poor practice when trying to see where a test or therapy performs well, so I’m a bit concerned about plans to do subgroup analysis to show that things don’t perform poorly. It’s entirely possible that the test will perform “significantly worse” (or better) for one of those groups purely due to chance. Should/can I state in the protocol that we will correct for multiple testing by dividing alpha by the number of subgroups (see the sketch below)?
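To make the proposed correction concrete, something like this is what I have in mind (the subgroup names and p-values are invented; it’s just alpha divided by the number of subgroup comparisons, i.e. a Bonferroni correction):

```python
# Invented p-values from the planned subgroup comparisons
subgroup_pvals = {
    "gender": 0.04,
    "race":   0.20,
    "age":    0.62,
    "other":  0.01,
}

alpha = 0.05
n_comparisons = len(subgroup_pvals)
adjusted_alpha = alpha / n_comparisons   # Bonferroni: 0.05 / 4 = 0.0125

for group, p in subgroup_pvals.items():
    verdict = "flag" if p < adjusted_alpha else "consistent with chance"
    print(f"{group}: p = {p:.2f} -> {verdict} at corrected alpha = {adjusted_alpha}")
```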

submitted by /u/Jmzwck