Borderline BE study Failure [Study Assessment]
Hi Samaya,
❝ Power ~100
Irrelevant in BE. If you insist on post hoc power, your number is wrong. Do you really believe in obtaining ~100% power in a failed study? For Cmax:
```r
library(PowerTOST)
CLlo <- 0.7989            # reported lower 90% confidence limit (Cmax)
CLhi <- 0.8900            # reported upper 90% confidence limit (Cmax)
PE   <- sqrt(CLlo*CLhi)   # point estimate recovered from the CI
n    <- 56
des  <- "2x2x4"           # full replicate design
CV   <- CI2CV(lower=CLlo, upper=CLhi, n=n, design=des)
# post hoc power with the observed point estimate
cat(sprintf("%s %.2f%%", "Power:", 100*power.TOST(CV=CV, n=n, theta0=PE,
                                                  design=des)), "\n")
# Power: 48.43%
# "power" with theta0 = 1 (i.e., assuming T = R) gives the claimed ~100%
cat(sprintf("%s %.2f%%", "Power:", 100*power.TOST(CV=CV, n=n, theta0=1,
                                                  design=des)), "\n")
# Power: 100.00%
```
❝ No outlier detected.
Irrelevant again (EU submission…).
❝ Is there any possibility to get through by any means…
What do you mean by “get through by any means”? Do you want us to teach you dirty tricks?
First of all, please answer Ohlbe’s questions. Why didn’t you stop the study after the interim analysis? Or did you dose all 56 subjects already and just wanted to “have a look”?
For me your evaluation smells of Pocock’s group sequential design: fixed sample size N, one interim analysis at N/2, parallel groups, normally distributed data, known variance, testing for a significant difference.
But yours is a different cup of tea: a full replicate cross-over, lognormal data, unknown variance, and testing for equivalence by CI-inclusion.
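To illustrate why the choice of α matters once there is an interim look, here is a deliberately simplified simulation sketch: a paired design with lognormal data, a hypothetical CV of 20% and n = 24, and a naive rule of claiming BE if either the interim or the final unadjusted 90% CI lies within 0.80–1.25. This is not your full replicate design, only the principle: at a true GMR of 0.80 the chance of falsely concluding BE ends up clearly above the nominal 5%.

```r
set.seed(123456)
nsims <- 1e5           # number of simulated studies
GMR   <- 0.80          # true ratio at the border (null hypothesis of the TOST)
CV    <- 0.20          # hypothetical within-subject CV
n     <- 24            # hypothetical final sample size
n1    <- n / 2         # interim look after half of the subjects
sw    <- sqrt(log(CV^2 + 1))   # within-subject SD on the log scale
sd.d  <- sqrt(2) * sw          # SD of a subject's log(T) - log(R)

pass90 <- function(d) {        # unadjusted 90% CI inclusion test
  ci <- mean(d) + c(-1, 1) * qt(0.95, length(d) - 1) * sd(d) / sqrt(length(d))
  ci[1] > log(0.80) & ci[2] < log(1.25)
}

passed <- logical(nsims)
for (i in seq_len(nsims)) {
  d <- rnorm(n, mean = log(GMR), sd = sd.d)    # log-differences of all n subjects
  # naive rule: claim BE if either the interim or the final look passes
  passed[i] <- pass90(d[1:n1]) || pass90(d)
}
cat(sprintf("Empiric patient's risk: %.2f%%\n", 100 * mean(passed)))
# comes out clearly above the nominal 5%
```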
Which α did you use?
- If 0.05 (90% CI): wrong – the patient’s risk is inflated to ~8.1%.
- If 0.0294 (94.12% CI): did you explore the overall patient’s risk before the study? If yes, please enlighten us as to how you did that.
- Rule of thumb: if one of the confidence limits lies exactly at the border of the acceptance range and the other CL is 1 (i.e., a PE of 0.8944 or 1.1180), power is ~50% in any design and at any sample size. Try it (a quick check follows below).
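A quick check of that rule of thumb with PowerTOST, reusing the approach of the code above with a hypothetical CI of exactly 0.80–1.00 and two arbitrary sample sizes:

```r
library(PowerTOST)
CLlo <- 0.80            # lower CL exactly at the border of the acceptance range
CLhi <- 1.00            # upper CL at 1
PE   <- sqrt(CLlo*CLhi) # 0.8944
for (n in c(24, 56)) {  # any sample size gives the same picture
  CV <- CI2CV(lower=CLlo, upper=CLhi, n=n, design="2x2x4")
  cat(sprintf("n = %2d, CV = %5.2f%%, power = %.2f%%\n", n, 100*CV,
              100*power.TOST(CV=CV, n=n, theta0=PE, design="2x2x4")))
}
# power comes out at ~50% regardless of the sample size (and of the design)
```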
—
Dif-tor heh smusma 🖖🏼 Довге життя Україна!
![[image]](https://static.bebac.at/pics/Blue_and_yellow_ribbon_UA.png)
Helmut Schütz
![[image]](https://static.bebac.at/img/CC by.png)
The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes