I have a randomized clinical trial comparing the effectiveness of two antiretrovirals (ARVs) in patients co-infected with TB and HIV. A sample size of 340 per group was chosen to give 90% power to detect a difference of 25 in mean CD4 count at 1 year, based on an estimated standard deviation of 100.
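As a sanity check, the quoted 90% figure can be reproduced with a standard normal-approximation power calculation for a two-sample comparison of means. This is only a sketch: I am assuming a two-sided test at α = 0.05 (the protocol does not state the significance level), and the function name is my own.

```python
from math import sqrt, erf

def norm_cdf(x):
    # Standard normal CDF via the error function (stdlib only).
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def two_sample_power(n_per_group, delta, sd, alpha_z=1.959963985):
    # Standard error of the difference in means under equal group sizes.
    se = sd * sqrt(2.0 / n_per_group)
    # Noncentrality: expected z-statistic if the true difference is delta.
    ncp = delta / se
    # Probability of rejecting a two-sided test at the given critical value.
    return norm_cdf(ncp - alpha_z) + norm_cdf(-ncp - alpha_z)

print(two_sample_power(340, 25, 100))  # ≈ 0.90
```

With n = 340 per group, δ = 25, and σ = 100, this returns roughly 0.90, matching the stated design.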
The correct interpretation of the 90% power of this study is:
i. We will have a 90% chance of finding a significant difference if the observed means of the two treatment groups differ by 25 and the standard deviation is 100.
ii. We will have a 90% chance of finding a significant difference if the true means of the two groups differ by 25 and the standard deviation is 100.
iii. If there is no true difference between the groups and the standard deviation is 100, we have a 90% chance of correctly not rejecting the null hypothesis.
iv. At the end of the study, if the standard deviation is 100 then on average 90% of the data will be contained in a 90% confidence interval.
And why? Thank you!