kms.srinivas (India)
2017-12-26 08:39 – Posting: # 18082

 Power is getting high [Power / Sample Size]

Dear All, good morning
I'm always getting surprising results: I estimate the sample size for 80% power, but when the results come in, the power always reaches about 98%. How is this possible every time? Please let me know. Thank you.
d_labes (Berlin, Germany)
2017-12-26 13:08 – @ kms.srinivas – Posting: # 18083

 Power is getting high?

Dear kms.srinivas,

please give us more details.

Regards,

Detlew
Helmut (Vienna, Austria)
2017-12-26 13:22 – @ kms.srinivas – Posting: # 18084

 Stop estimating post hoc power!

Hi kms,

❝ I'm always getting surprising results: I estimate the sample size for 80% power, but when the results come in, the power always reaches about 98%. How is this possible every time?


If the irrelevant post hoc power is always 98%, I suspect a flaw in the software (or SAS macro).
If it is always higher than the one used in sample size estimation it may be a miscalculation – ineradicably “popular” in Indian CROs (see this post). Revise your procedures.
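For reference, a minimal sketch (with made-up placeholder numbers) of how post hoc power of a 2×2×2 crossover can be estimated with PowerTOST – the observed CV, point estimate, and number of eligible subjects are assumptions, not data from any real study:

library(PowerTOST)
# hypothetical observed results of a 2x2x2 crossover (placeholder numbers)
CV.obs  <- 0.22 # observed intra-subject CV
GMR.obs <- 0.97 # observed point estimate (T/R)
n.obs   <- 28   # eligible subjects
# post hoc ("a posteriori") power at the observed values
power.TOST(CV = CV.obs, theta0 = GMR.obs, n = n.obs)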

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
kms.srinivas (India)
2017-12-26 14:00 – @ Helmut – Posting: # 18085

 Stop estimating post hoc power!

❝ If the irrelevant post hoc power is always 98%, I suspect a flaw in the software (or SAS macro).

❝ If it is always higher than the one used in sample size estimation it may be a miscalculation – ineradicably “popular” in Indian CROs (see this post). Revise your procedures


I'm sorry, I mean to say that it almost always comes out in the interval 95 to 99%, and once it was even 100%. Sometimes we get lower power, in the 80s, but very rarely.


Edit: Full quote removed. Please delete everything from the text of the original poster which is not necessary in understanding your answer; see also this post #5! [Helmut]
ElMaestro (Denmark)
2017-12-26 15:11 – @ kms.srinivas – Posting: # 18087

 Stop estimating post hoc power!

Hi kms.srinivas,


I know this post-hoc power business can be very tricky.

Try and ask yourself which question post-hoc power actually answers. Try and formulate it in a very specific sentence.

Generally, [given a statistical model] power is the chance of showing BE in a trial if your expectations about the GMR and CV are correct when you use N subjects.
The choice of GMR, CV and N to plug into a power calculation is up to the user, but in the specific case of post-hoc power the stats software may make that choice for you without asking. Often it comes out as 95% or 100% regardless of what you observed in the previous trial.

So with this in mind, try to look back at your figures and tell us which question the post-hoc calculation in your case gave an answer to. Feel free to paste your numbers here; then I am sure someone can help work it out. You might be surprised. :-)
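One way to read that question (a sketch with made-up placeholder numbers; the CV, GMR, and n below are assumptions): post hoc power is the chance that another study with the same number of subjects would pass, if the observed CV and GMR were the true values.

library(PowerTOST)
# planning assumptions vs. hypothetical observed results (placeholder numbers)
CV.plan <- 0.25; GMR.plan <- 0.95          # used for the sample size
CV.obs  <- 0.20; GMR.obs  <- 0.98; n <- 28 # what the study actually showed
# power the study was planned with
power.TOST(CV = CV.plan, theta0 = GMR.plan, n = n)
# "post hoc power": chance that another study of n subjects would pass
# if the observed CV and GMR were the true values
power.TOST(CV = CV.obs, theta0 = GMR.obs, n = n)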

Pass or fail!
ElMaestro
Helmut (Vienna, Austria)
2017-12-26 16:51 – @ kms.srinivas – Posting: # 18088

 Simulations

Hi kms,

❝ I mean to say that it almost always comes out in the interval 95 to 99%, and once it was even 100%. Sometimes we get lower power, in the 80s, but very rarely.


It might be that in the actual study the CV is lower and/or the GMR closer to 1 than assumed. Power is especially sensitive to the GMR. We can also simulate studies and assess the outcome. The GMR follows the lognormal distribution and the variance the χ² distribution with n–2 degrees of freedom in a 2×2×2 crossover. Try this R-code (the library PowerTOST will be installed if not available):

### housekeeping ###
rm(list=ls(all=TRUE))
packages <- c("PowerTOST", "MASS")
inst     <- packages %in% installed.packages()
if (length(packages[!inst]) > 0) install.packages(packages[!inst])
invisible(lapply(packages, require, character.only=TRUE))
### give the relevant data below ###
CV      <- 0.20 # assumed CV
theta0  <- 0.95 # assumed T/R-ratio
target  <- 0.90 # target (desired) power
nsims   <- 1e4  # number of simulated studies
### change below this line only if you know what you are doing ###
plan    <- sampleN.TOST(CV=CV, theta0=theta0, targetpower=target, print=FALSE)
n       <- plan[["Sample size"]]
sim.pwr <- power.TOST.sim(CV=CV, theta0=theta0, n=n, nsims=nsims)
cat(paste0("Assumed CV = ", CV, ", GMR = ", theta0, "; target power = ", target,
    ";\nestimated sample size = ", n, " (power = ", signif(plan[["Achieved power"]], 4),
    ").\nAverage power in ", nsims, " simulated studies: ", signif(sim.pwr, 4), "\n"))


Assumed CV = 0.2, GMR = 0.95; target power = 0.9;
estimated sample size = 26 (power = 0.9176).
Average power in 10000 simulated studies: 0.9208


If you are interested in distributions of the GMR, CVs, and powers in the simulated studies, continue:

df  <- n-2
se  <- CV2se(CV)*sqrt(0.5/n) # Cave: simple formula for balanced sequences
mse <- CV2mse(CV)
res <- data.frame(GMR=rep(NA, nsims), CV=NA, power=NA)
names(res)[3] <- "post hoc power"
set.seed(123456)
res$GMR <- exp(rnorm(n=nsims, mean=log(theta0), sd=sqrt(0.5/n)*sqrt(mse)))
res$CV  <- mse2CV(mse*rchisq(n=nsims, df=df)/df)
for (j in 1:nsims) {
  res[j, 3] <- power.TOST(CV=res$CV[j], theta0=res$GMR[j], n=n)
}
cat("Results of", nsims, "simulated studies:\n"); summary(res)


Results of 10000 simulated studies:
      GMR               CV          post hoc power
 Min.   :0.8454   Min.   :0.09836   Min.   :0.2635
 1st Qu.:0.9326   1st Qu.:0.17808   1st Qu.:0.8374
 Median :0.9505   Median :0.19758   Median :0.9189
 Mean   :0.9508   Mean   :0.19850   Mean   :0.8887
 3rd Qu.:0.9679   3rd Qu.:0.21801   3rd Qu.:0.9683
 Max.   :1.0493   Max.   :0.31336   Max.   :1.0000


Explore also boxplots and histograms:

col <- "bisque"
boxplot(res, col=col, boxwex=0.65, las=1)
grid()
ylab <- "relative frequency density"
op <- par(no.readonly=TRUE)
np <- c(4.5, 4.5, 1, 0.3) + 0.1
par(mar=np)
split.screen(c(1, 3))
screen(1)
truehist(res[, 1], col=col, lty=0, xlab="GMR", las=1, bty="o", ylab=ylab, log="x")
screen(2)
truehist(res[, 2], col=col, lty=0, xlab="CV", las=1, bty="o", ylab=ylab)
screen(3)
truehist(res[, 3], col=col, lty=0, xlab="post hoc power", las=1, bty="o", ylab=ylab)
abline(v=plan[["Achieved power"]], col="blue")
close.screen(all=TRUE)
par(op)

Especially the histogram of the post hoc power is telling.

Even if the study was planned for 90% power (like in the example) I strongly doubt that you will always get a post hoc power of 95–99%:

cat("Simulated studies with post hoc power [0.95, 0.99]:", sprintf("%.2f%%",
    100*length(res[, 3][res[, 3] >= 0.95 & res[, 3] <=0.99])/nsims), "\n")

Simulated studies with post hoc power [0.95, 0.99]: 25.28%


If the study was planned for 80% power it will be even less likely:

Simulated studies with post hoc power [0.95, 0.99]: 12.63%


As ElMaestro suggested: Can you give us one example? We need the CV, the GMR, the number of eligible subjects, and your “power”. If the study was imbalanced, please give the number of subjects in each sequence.

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
BE-proff
2017-12-27 07:53 – @ Helmut – Posting: # 18091

 Simulations

Hi Helmut,

Is it worth calculating the sample size for 95% power?
Let's say I am a rich client with a big wallet :-D

Are there any risks?
ElMaestro (Denmark)
2017-12-27 08:25 – @ BE-proff – Posting: # 18093

 Simulations

Hi BE-proff,

❝ Is it worth calculating the sample size for 95% power?


Calculation is relatively cheap. Execution is relatively expensive.
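A quick sketch of what the extra power costs in subjects – the CV of 0.25 and GMR of 0.95 are assumed here purely for illustration:

library(PowerTOST)
# sample sizes for the same assumptions at different target powers
sapply(c(0.80, 0.90, 0.95), function(pwr)
  sampleN.TOST(CV = 0.25, theta0 = 0.95, targetpower = pwr,
               print = FALSE)[["Sample size"]])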

Pass or fail!
ElMaestro
Yura (Belarus)
2017-12-27 09:23 – @ BE-proff – Posting: # 18094

 Simulations

Hi BE-proff

❝ Is it worth calculating the sample size for 95% power?


forced bioequivalence
kms.srinivas (India)
2017-12-27 10:11 – @ Yura – Posting: # 18095

 Simulations

forced bioequivalence


Hi yura,
what about regulatory queries on "unintentional forced bioequivalence"?
Yura (Belarus)
2017-12-27 11:07 – @ kms.srinivas – Posting: # 18096

 Simulations

Hi kms.srinivas
If for GMR=0.95 and CV=0.25 the calculation calls for N = 28–30 subjects and you carry out the study in 50 subjects, there can be questions about forced bioequivalence :cool:
Regards
Helmut (Vienna, Austria)
2017-12-27 13:57 – @ Yura – Posting: # 18099

 α and 1–β

Hi Yura,

❝ If for GMR=0.95 and CV=0.25 the calculation calls for N = 28–30 subjects and you carry out the study in 50 subjects, there can be questions about forced bioequivalence :cool:


If I got your example correctly, 28–30 subjects mean 81–83% power. In some guidelines 80–90% power is recommended. 90% power would require 38 subjects; 50 subjects would mean 96% power (BE-proff’s “big wallet”).
However, such a sample size should concern only the IEC (assessing the risk for the study participants). If the protocol was approved by the IEC and the competent regulatory agency, the study was performed as planned, and it by chance shows an even higher “power”, an assessor’s eyebrow might be raised, but technically (the Type I Error is controlled) there is no reason to question the study.
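These figures can be cross-checked with PowerTOST (2×2 crossover, GMR 0.95 and CV 0.25 as in Yura’s example):

library(PowerTOST)
# power for GMR 0.95, CV 0.25 at various total sample sizes
sapply(c(28, 30, 38, 50), function(n) power.TOST(CV = 0.25, theta0 = 0.95, n = n))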

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
Yura (Belarus)
2017-12-27 14:33 – @ Helmut – Posting: # 18101

 α and 1–β

Hi Helmut,

❝ If the protocol was approved by the IEC and the competent regulatory agency


yes, explanations are needed for the sample size estimation and the design (in order to reduce it)
Helmut (Vienna, Austria)
2017-12-27 15:31 – @ Yura – Posting: # 18102

 α and 1–β

Hi Yura,

❝ ❝ If the protocol was approved by the IEC and the competent regulatory agency


❝ yes, explanations are needed for the sample size estimation and the design (in order to reduce it)


Sure. But once explanations (better: assumptions) were accepted (i.e., the protocol approved) that’s the end of the story. That’s why I’m extremely wary of using the term “forced bioequivalence” in a regulatory context (see also this post and followings).
IMHO, post hoc power is crap. The chance to get one which is higher than planned is ~50%. Run my simulation code and at the end:

cat(nsims, "simulated studies with \u201Cpost hoc\u201D power of",
    "\n  \u2265 target    :", sprintf("%5.2f%%",
    100*length(res[, 3][res[, 3] >= target])/nsims),
    "\n  \u2265 achieved  :", sprintf("%5.2f%%",
    100*length(res[, 3][res[, 3] >= plan[["Achieved power"]]])/nsims),
    "\n  \u2265 0.90      :", sprintf("%5.2f%%",
    100*length(res[, 3][res[, 3] >= 0.9])/nsims),
    "\n  [0.95, 0.99]:", sprintf("%5.2f%%",
    100*length(res[, 3][res[, 3] >= 0.95 & res[, 3] <=0.99])/nsims),
    "\n  \u2265 0.95      :", sprintf("%5.2f%%",
    100*length(res[, 3][res[, 3] >= 0.95])/nsims), "\n")


With your example (CV 0.25, GMR 0.95, target power 80%, n 28, 10⁵ simulations):

1e+05 simulated studies with “post hoc” power of
  ≥ target    : 49.92%
  ≥ achieved  : 48.02%
  ≥ 0.90      : 22.93%
  [0.95, 0.99]:  8.40%
  ≥ 0.95      :  9.70%

Although the study was performed with 28 subjects for ~81% power, the chance to get a post hoc power of ≥ 90% is ~23% and ≥ 95% is ~10%. That’s clearly not “forced BE” and none of these studies should be questioned by regulators.*
I suggest reserving this term for the context of study planning.


  • To quote ElMaestro: Being lucky is not a crime.
    Let’s reverse the question. Would you mistrust a study which demonstrated BE despite its low post hoc power? I hope not. We endured this game for ages and it went where it belongs: the statistical trash bin.

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
Yura (Belarus)
2017-12-28 07:50 – @ Helmut – Posting: # 18108

 α and 1–β

Hi Helmut,

❝ Although the study was performed with 28 subjects for ~81% power, the chance to get a post hoc power of ≥ 90% is ~23% and ≥ 95% is ~10%. That’s clearly not “forced BE” and none of these studies should be questioned by regulators.


You are considering the power after the study with n = 28 (which was calculated before the study: GMR=0.95, CV=0.25, power 1–β=0.80). The question is whether it is possible to carry out a study with n = 50, and would this be forced bioequivalence?
Regards
Helmut (Vienna, Austria)
2017-12-28 12:30 – @ Yura – Posting: # 18111

 Educate the IEC and regulators

Hi Yura,

❝ You are considering the power after the study with n = 28 (which was calculated before the study: GMR=0.95, CV=0.25, power 1–β=0.80). The question is whether it is possible to carry out a study with n = 50, and would this be forced bioequivalence?


As I wrote above the IEC and the authority should judge this before the study is done. I agree that in many cases the statistical knowledge of IECs is limited. However, once the protocol was approved by both, I don’t see a reason to talk about “forced BE” any more.

BTW, I don’t see a problem if a study is designed for 90% power (80% is not carved in stone). Let’s assume a dropout rate of 15% and we will already end up with 46 subjects:

library(PowerTOST)
CV     <- 0.25 # CV-intra
theta0 <- 0.95 # T/R-ratio
target <- 0.90 # desired (target) power
dor    <- 15   # expected dropout rate in percent
n      <- sampleN.TOST(CV=CV, theta0=theta0, targetpower=target,
                       print=FALSE)[["Sample size"]]
ceiling(n/(1-dor/100)/2)*2 # round up to next even


Considering your example and assuming that the GMR and CV turn out exactly as assumed, with no dropouts (n=50): the 90% CI will be 87.47–103.18%. Fine with me. Not even a significant difference (100% included). If the dropout rate is as expected (n=38) the 90% CI will be 86.36–104.51%. If the assessor is not happy with that, he should have a chat with his colleague who approved the protocol and enlighten him about potential “over-powering” in study planning.
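The two confidence intervals can be reproduced with PowerTOST’s CI.BE() (a sketch, assuming balanced sequences):

library(PowerTOST)
# 90% CIs for a point estimate of 0.95 and CV 0.25 with 50 and with 38 eligible subjects
round(100 * CI.BE(pe = 0.95, CV = 0.25, n = 50), 2) # ~ 87.47-103.18%
round(100 * CI.BE(pe = 0.95, CV = 0.25, n = 38), 2) # ~ 86.36-104.51%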
According to all guidelines (CI within the acceptance range) I can’t imagine a justification to reject the study. If the study is not accepted only due to the high sample size in the EEA the applicant might go for a referral (with extremely high chances of success) and in the USA the FDA will be sued right away. :-D

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
Helmut (Vienna, Austria)
2017-12-27 13:23 – @ kms.srinivas – Posting: # 18097

 “Forced BE” 101

❝ what about regulatory queries on "unintentional forced bioequivalence"?


What do you mean by unintentional?
Again: Stop estimating post hoc power! Either the study demonstrated BE or not.*
Going back to my example (study planned for 90% power): The chance to obtain a post hoc power of ≥95% is ~35%. Now what?
It only means that
  • the CV was lower and/or
  • the GMR closer to 1 and/or
  • you had less dropouts than assumed.
However, the patient’s risk (α = probability of the Type I Error) is independent of the producer’s risk (β = probability of the Type II Error). The latter might be of concern for the IEC in study planning (see Yura’s example) whereas only the former is of regulatory concern – and it is not affected by power.

I still think that your calculations are wrong. Therefore, you are facing high values more often. Would you mind giving us the data ElMaestro and I asked for?


  • If the CV, GMR, number of dropouts are “worse” than assumed, there is still a chance to show BE – though with less than the desired power. If you succeeded, fine.
    If post hoc power is correctly (!) estimated, it will always be low for a failed study.

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
kms.srinivas (India)
2017-12-27 13:41 – @ Helmut – Posting: # 18098

 “Forced BE” 101

Dear Helmut,
Thank you very much for your reply.

❝ study planned for 90% power: The chance to obtain a post hoc power of ≥95% is ~35%.


Is it a rule of thumb? How should it be calculated?

❝ It only means that

❝    – the CV was lower and/or

❝    – the GMR closer to 1 and/or

❝    – you had less dropouts than assumed.


Yes, in the post hoc analysis I found the three aspects you mentioned.
After the experiment it turned out that:
1. the CV was lower,
2. the GMR was close to 1,
3. there were no dropouts/withdrawals (although 10% dropouts were considered beforehand).
Helmut (Vienna, Austria)
2017-12-27 14:02 – @ kms.srinivas – Posting: # 18100

 Would you be so kind answering our questions?

Hi kms,

❝ ❝ study planned for 90% power: The chance to obtain a post hoc power of ≥95% is ~35%.


❝ Is it a rule of thumb?


No; obtained in simulations by the R-code I posted above. There is only one – rather trivial – rule of thumb: The chance to get a post hoc power which is either lower or higher than the target is ~50%.

❝ How should it be calculated?


Once you performed the simulations, use

cat("Simulated studies with post hoc power \u22650.95:", sprintf("%.2f%%",
    100*length(res[, 3][res[, 3] >= 0.95])/nsims), "\n")

You should get

Simulated studies with post hoc power ≥0.95: 34.97%


Adapt the relevant data according to your needs. For CV 0.30, T/R 0.90, and target power 80% you would get only 7.77% in the range 0.95–0.99 and 8.64% ≥0.95.
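That is, in the simulation code above only the lines in the “relevant data” section change, for example:

### give the relevant data below ###
CV      <- 0.30 # assumed CV
theta0  <- 0.90 # assumed T/R-ratio
target  <- 0.80 # target (desired) power
# then re-run the remaining lines of the simulation code unchanged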

❝ Yes, in the post hoc analysis I found the three aspects you mentioned.

❝ After the experiment it turned out that:

❝ 1. the CV was lower,

❝ 2. the GMR was close to 1,

❝ 3. there were no dropouts/withdrawals (although 10% dropouts were considered beforehand).


Fine. Can you explain to us why you performed a “posthoc analysis” at all? What did you want to achieve? To repeat ElMaestro:

❝ ❝ ❝ Try and ask yourself which question post-hoc power actually answers. Try and formulate it in a very specific sentence.


For the 5th time (already asked #1, #2, #3, #4): An example would help.
We tried to answer your questions. It would be nice if you answer ours as well.

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
kms.srinivas (India)
2017-12-28 06:53 – @ Helmut – Posting: # 18107

 Would you be so kind answering our questions?

Dear Helmut,

❝ For the 5th time (already asked #1, #2, #3, #4): An example would help.

❝ We tried to answer your questions. It would be nice if you answer ours as well.


These are the results I'm finding maximum times.
Please find the example:

Before the study:
Desired power: 80%
Alpha: 5%
ISCV: 20%
GMR: 110%
No. of subjects: 36 (32 + 4 by considering 10% dropouts)
No. of subjects who completed the study: 34

After the study:
Achieved power: 99%
Alpha: 5%
ISCV: 18.36%
GMR: 96%

BE limits are, as usual, 80 to 125%.
Kindly clarify.
Helmut (Vienna, Austria)
2017-12-28 12:47 – @ kms.srinivas – Posting: # 18112

 Yes, but why?

Hi kms,

❝ These are the results I'm finding maximum times.


THX for providing the data. I guess you mean maximum concentrations?
As for doubting your calculations: I stand corrected!

library(PowerTOST)
# assumed
CV.0     <- 0.2 # CV-intra
theta0.0 <- 1.1 # T/R-ratio
target   <- 0.8 # desired (target) power
dor      <- 10  # expected dropout rate in percent
# observed
CV.1     <- 0.1836
theta0.1 <- 0.96
n        <- 34
# sample size and power
plan     <- sampleN.TOST(CV=CV.0, theta0=theta0.0, targetpower=target)
# Adjust for expected dropout rate and round up to next even
N.ad     <- ceiling(plan[["Sample size"]]/(1-dor/100)/2)*2
# power for various number of dropouts #
elig     <- seq(N.ad, 2*plan[["Sample size"]]-N.ad, -1)
pwr1     <- data.frame(dosed=rep(N.ad, length(elig)),
                       dropouts=N.ad-elig, eligible=elig, power=NA)
for (j in seq_along(elig)) {
  pwr1$power[j] <- suppressMessages(power.TOST(CV=CV.0, theta0=theta0.0,
                                               n=elig[j]))
}
cat("Ante hoc power for various number of dropouts:\n"); print(signif(pwr1, 4), row.names=FALSE)
cat("Post hoc power:", signif(power.TOST(CV=CV.1, theta0=theta0.1, n=n), 4), "\n")


Gives:

+++++++++++ Equivalence test - TOST +++++++++++
            Sample size estimation
-----------------------------------------------
Study design:  2x2 crossover
log-transformed data (multiplicative model)

alpha = 0.05, target power = 0.8
BE margins = 0.8 ... 1.25
True ratio = 1.1,  CV = 0.2

Sample size (total)
 n     power
32   0.810068


Ante hoc power for various number of dropouts:
 dosed dropouts eligible  power
    36        0       36 0.8505
    36        1       35 0.8409
    36        2       34 0.8314
    36        3       33 0.8207
    36        4       32 0.8101
    36        5       31 0.7982
    36        6       30 0.7864
    36        7       29 0.7732
    36        8       28 0.7601

Post hoc power: 0.9917


Modifying my simulation code for theta0=1/1.1 and n=34 I got:
1e+05 simulated studies with “post hoc” power of
  ≥ target    : 59.75%
  ≥ achieved  : 57.18%
  ≥ 0.90      : 31.51%
  ≥ 0.95      : 15.18%
  [0.95, 0.99]: 12.59%
  ≥ 0.9917    :  2.11%


[image]

Hence, such a high power is unlikely but possible.

As usual power is more sensitive to changes in the GMR than in the CV:
pwr.0 <- power.TOST(CV=0.2, theta0=1/1.1, n=34)
cat("Relative increase in power due to",
    "\n  better CV :", sprintf("%5.2f%%",
    100*(power.TOST(CV=0.1836, theta0=1/1.1, n=34)-pwr.0)/pwr.0),
    "\n  better GMR:", sprintf("%5.2f%%",
    100*(power.TOST(CV=0.2, theta0=0.96, n=34)-pwr.0)/pwr.0),
    "\n  both      :", sprintf("%5.2f%%",
    100*(power.TOST(CV=0.1836, theta0=0.96, n=34)-pwr.0)/pwr.0), "\n")

Relative increase in power due to
  better CV :  6.16%
  better GMR: 17.95%
  both      : 19.28%


One question remains open:

❝ Can you explain to us why you performed a “posthoc analysis” at all? What did you want to achieve? To repeat ElMaestro:

❝ ❝ ❝ ❝ Try and ask yourself which question post-hoc power actually answers. Try and formulate it in a very specific sentence.

:confused:

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
DavidManteigas (Portugal)
2017-12-28 17:59 – @ Helmut – Posting: # 18114

 Yes, but why?

Hi all,

I think the answer to the first question of this post is "because you were very pessimistic in your assumptions regarding the sample size", which is something very common in BABE trials (at least, this is my perception).

In this case, given your simulations above and the expected probability of approximately 12% of the studies having power greater than 95% taking into consideration the initial assumptions, post hoc power means nothing. But if you had 100 studies instead and 90% of them had >95% power although the sample size was calculated assuming expected power of 80%, some questions and conclusions might be drawn from those results, don't you think? From my understanding of the initial question, this was the case found. So I think that they should start by reviewing how they define their assumptions for the sample size, namely why they assume GMR=1.10 instead of the "normal" 0.95/1.05.

Regards,
David
Helmut (Vienna, Austria)
2017-12-28 18:33 – @ DavidManteigas – Posting: # 18115

 Optimists and pessimists

Hi David,

❝ I think the answer to the first question of this post is "because you were very pessimistic in your assumptions regarding the sample size", which is something very common in BABE trials (at least, this is my perception).


Ha-ha! I got too many failed studies on my desk and my clients think that I’m Jesus and can reanimate a corpse… In most cases they were overly optimistic in designing their studies. :-D

❝ In this case, given your simulations above and the expected probability of approximately 12% of the studies having power greater than 95% …


~15%!

❝ … taking into consideration the initial assumptions, post hoc power means nothing.


Yep.

❝ But if you had 100 studies instead and 90% of them had >95% power although the sample size was calculated assuming expected power of 80%, some questions and conclusions might be drawn from those results, don't you think?


Agree.

❝ From my understanding of the initial question, this was the case found. So I think that they should start by reviewing how they define their assumptions for the sample size, namely why they assume GMR=1.10 instead of the "normal" 0.95/1.05.


Well, the current GL is poorly written. It talks only about an “appropriate sample size calculation” [sic]. The 2001 NfG was clearer:

The number of subjects required is determined by

  1. the error variance associated with the primary characteristic to be studied as estimated from a pilot experiment, from previous studies or from published data,
  2. the significance level desired,
  3. the expected deviation from the reference product compatible with bioequivalence (∆) and
  4. the required power.
Taking into account that the analytical method used for measuring the content of test- and reference-batches has limited accuracy/precision (2.5% is excellent!) and is never validated for the reference product (you can ask the innovator for a CoA but never ever will get it) 0.95 might be “normal” but IMHO, optimistic even if you measure a content of 100% for both T and R. Given that power is most sensitive to the GMR, I question the usefulness of 0.95.

   I’m not a pessimist,
I’m just a well informed optimist.
    José Saramago
   To call the statistician after the experiment is done
may be no more than asking him to perform a postmortem examination:
he may be able to say what the experiment died of.
    R.A. Fisher


OK, I make money acting as a coroner. Wasn’t really successful in the reanimation-attempts.

[image]

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
d_labes (Berlin, Germany)
2017-12-28 19:57 – @ Helmut – Posting: # 18117

 "normal" GMR setting

Dear Helmut,

❝ ...Taking into account that the analytical method used for measuring the content of test- and reference-batches has limited accuracy/precision (2.5% is excellent!) and the method is not validated for the reference (you can ask the innovator for a CoA but never ever will get it) 0.95 might be “normal” ...


"Normal" is a question one could debate for hours, probably ending in a flame war :-D.
For me 0.95 or 1/0.95 is as "normal" like setting alpha = 0.05.
It's a convention to be used if nothing specific about the GMR is known. Nothing more.
And it seemed mostly to work over the years I have observed the use of this setting.
Of course it is not a natural constant.

Clinically relevant difference (aka GMR in bioequivalence studies): That which is used to justify the sample size but will be claimed to have been used to find it.
Stephen Senn


❝ ... but IMHO, optimistic even if you measure a content of 100% for both T and R. Given that power is most sensitive to the GMR I question the usefulness of 0.95.


Any other suggestion instead of 0.95 :confused:

Regards,

Detlew
mittyri (Russia)
2017-12-28 23:06 – @ d_labes – Posting: # 18120

 Example for discussion

Dear Detlew, Dear Helmut,

Let me play the devil’s advocate.
Let's say you have to register some new generic pharmaceutical product and you have data from a pilot study which say that the GMR is about 115% with a CV of about 20% (n=20). The ISCV is broadly in accordance with literature sources (18–26%).

Again, a man in an Armani suit as your boss says it MUST be registered and you have only the current product to test. So you are interested in registering the drug (showing bioequivalence).
What would be your recommendation for sample size? How would you estimate it?

Kind regards,
Mittyri
Helmut (Vienna, Austria)
2017-12-28 23:33 – @ mittyri – Posting: # 18122

 Example for discussion

Hi mittyri,

❝ Let me play the devil’s advocate.


Love it.

❝ Again, a man in an Armani suit as your boss says it MUST be registered and you have only the current product to test. So you are interested in registering the drug (showing bioequivalence).

❝ What would be your recommendation for sample size? How would you estimate it?


“Must be registered” is a bit thick! Try…

library(PowerTOST)
expsampleN.TOST(CV=0.2, theta0=1.15, prior.type="both",
                prior.parm=list(m=20, design="2x2"),
                details=FALSE)

… and tell Hugo Boss that with 304 subjects there will still be a 20% risk of failure. If he wants a risk of 10%, >10⁷ subjects would be required. Since re-formulation seems not to be an option (which I would strongly recommend; see below why) you will be fired.
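If I read exppower.TOST() correctly (a sketch, not part of the original argument), the ~20% risk at 304 subjects can be cross-checked directly:

library(PowerTOST)
# expected power with the pilot's uncertainty in CV and GMR taken into account;
# should come out close to 0.80 at the 304 subjects quoted above (~20% risk of failure)
exppower.TOST(CV = 0.2, theta0 = 1.15, n = 304,
              prior.type = "both", prior.parm = list(m = 20, design = "2x2"))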

Seriously: First I would check a CI of the CV observed in the pilot. I would use an α of 0.2.

signif(100*CVCL(CV=0.2, df=20-2, side="2-sided", alpha=0.2), 4)
lower CL upper CL
   16.59    25.91

OK, this is sufficiently close to the literature (18–26%). If we want to play it safe, we can use the upper CL with α 0.05 (27.94%). “Must be registered” is not an argument which will convince the IEC and the authority assessing the protocol. You have to take the risk of getting fired and explain to Hugo Boss that a power of >90% will lead to trouble. Hence, with GMR 1.15 “carved in stone”:

CV.upper <- CVCL(CV=0.2, df=20-2, side="2-sided", alpha=0.05)[["upper CL"]] # 27.94%
sampleN.TOST(CV=CV.upper, theta0=1.15, targetpower=0.9)

+++++++++++ Equivalence test - TOST +++++++++++
            Sample size estimation
-----------------------------------------------
Study design:  2x2 crossover
log-transformed data (multiplicative model)

alpha = 0.05, target power = 0.9
BE margins = 0.8 ... 1.25
True ratio = 1.15,  CV = 0.2794244

Sample size (total)
 n     power

188   0.901956

Next I would fire up power analyses:

pa.ABE(CV=signif(CV.upper, 4), theta0=1.15,
       targetpower=0.9, minpower=0.80)
pa.ABE(CV=signif(CV.upper, 4), theta0=1.15,
       targetpower=0.9, minpower=0.85)

If we accept a minimum power of 80% the GMR could rise to 1.165, and with 85% to 1.158. That’s not nice. It must be clear even to the guy in the Armani suit that performing the study, even with a high sample size, will be extremely risky due to the bad GMR. I’m not sure whether a protocol with such a large expected deviation will be welcomed. I know of only one case where an innovator had to change one of the excipients upon the FDA’s demand. A couple of pilots with candidate formulations were performed – it wasn’t possible to come up with anything better than 0.85. OK, it was a blockbuster and at least they tried hard.

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
Helmut (Vienna, Austria)
2017-12-28 23:10 – @ d_labes – Posting: # 18121

 I prefer to play it safe

Dear Detlew,

❝ "Normal" is a question one could debate for hours, probably ending in a flame war :-D.


:lol3:

❝ For me 0.95 or 1/0.95 is as "normal" like setting alpha = 0.05.


Not for me. The former is an assumption whereas the latter is fixed by the authority.

❝ It's a convention to be used if nothing specific about the GMR is known. Nothing more.


Agree.

❝ And it seemed mostly to work over the years I have observed the use of this setting.

❝ Of course it is not a natural constant.


Agree again.

❝ ❝ ... but IMHO, optimistic even if you measure a content of 100% for both T and R. Given that power is most sensitive to the GMR I question the usefulness of 0.95.


❝ Any other suggestion instead of 0.95 :confused:


Communication with the analytical staff & common sense. :smoke:
The GL tells us that the T- and R-batches should not differ by more than 5% in their contents. I’m too lazy to browse through my protocols but IIRC, on average it was 2–3%. Now add the analytical (in)accuracy and – conservatively assuming that the errors may be on opposite sides – you easily end up with a GMR worse than 0.95.
Measuring content is not always trivial. Some MR products are difficult and most topical products are a nightmare. Extracting a lipophilic drug from a cream full of emulsifiers is not exactly “great fun” – it is a mess.
If I have to deal with a simple IR product, the analytical method is very good, and the difference is small, I’m fine with 0.95 as well.
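To put rough numbers on the content argument above (illustrative values only, not taken from any protocol): a 2.5% content difference combined with a 2.5% analytical error in the unfavourable direction already pushes the ratio well below 0.95.

# illustrative arithmetic only: contents differing by 2.5% plus a 2.5%
# analytical (in)accuracy acting against us
content.T <- 0.975 # relative content of the test batch
content.R <- 1.000 # relative content of the reference batch
assay.err <- 0.025 # analytical error, worst case
(content.T * (1 - assay.err)) / (content.R * (1 + assay.err)) # ~0.927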

In general I prefer conservative assumption(s) over optimistic ones. With the former (if they were false) you may have burned money but have a study which passed. With the latter sometimes you have to perform yet another study. Not economical in the long run.

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
d_labes (Berlin, Germany)
2017-12-28 18:41 – @ DavidManteigas – Posting: # 18116

 Full ACK

Dear David,

❝ In this case, given your simulations above and the expected probability of approximately 12% of the studies having power greater than 95% taking into consideration the initial assumptions, post hoc power means nothing.


Full ACK.
But not only in this case – in other cases as well. The concept of post-hoc power is inherently flawed.

❝ But if you had 100 studies instead and 90% of them had >95% power although the sample size was calculated assuming expected power of 80%, some questions and conclusions might be drawn from those results, don't you think? From my understanding of the initial question, this was the case found. So I think that they should start by reviewing how they define their assumptions for the sample size, namely why they assume GMR=1.10 instead of the "normal" 0.95/1.05.


Again: Full ACK.

Regards,

Detlew
kms.srinivas (India)
2017-12-29 14:20 – @ DavidManteigas – Posting: # 18123

 About GMR 1.10

❝ In this case, why they assume GMR=1.10 instead of the "normal" 0.95/1.05.


Yes. This was an example of what I was trying to give to Helmut & ElMaestro. This is a very rare case; in general the GMR would be taken as 0.95/1.05, as you said, but the power still comes out the same, 95% to 99%.
Helmut (Vienna, Austria)
2017-12-29 17:18 – @ kms.srinivas – Posting: # 18127

 Better 0.95 or 0.90

Hi kms,

❝ ❝ In this case, why they assume GMR=1.10 instead of the "normal" 0.95/1.05.


❝ This was […] a very rare case; in general the GMR would be taken as 0.95/1.05, as you said, but the power still comes out the same, 95% to 99%.


Similar, not the same. ;-)

CV 0.20, GMR 1.10, target 80%, n 32 (power 81.01%), no dropouts expected

1e+05 simulated studies with “post hoc” power of
  ≥ target    : 54.04%
  ≥ achieved  : 51.57%
  ≥ 0.90      : 26.98%
  [0.95, 0.99]: 10.41%
  ≥ 0.95      : 12.50%


CV 0.20, GMR 1.05, target 80%, n 18 (power 80.02%), no dropouts expected

1e+05 simulated studies with “post hoc” power of
  ≥ target    : 48.45%
  ≥ achieved  : 48.41%
  ≥ 0.90      : 25.26%
  [0.95, 0.99]:  9.72%
  ≥ 0.95      : 12.92%


I still don’t understand what you wrote above:

❝ […] it almost always comes out in the interval 95 to 99%, and once it was even 100%


BTW, power curves are not symmetric in raw scale but in log-scale (see this post). If you are not – very – confident about the sign of ∆, I recommend using GMR=1–∆ in order to be on the safe side. By this, power is preserved for GMR=1/(1–∆) as well. If you use GMR=1+∆, power for GMR=1–∆ will be insufficient:

cat(paste0("\u2206 with unknown sign: ", 100*delta, "%",
    "\n  based on GMR = 1 \u2013 \u2206 = ",
    sprintf("%5.4f", 1-delta), "      : n ",
    plan.1[["Sample size"]],
    "\n  based on GMR = 1 + \u2206 = ",
    sprintf("%5.4f", 1+delta), "      : n ",
    plan.2[["Sample size"]],
    "\n  based on GMR = 1 / (1 + \u2206) = ",
    sprintf("%5.4f", 1/(1+delta)), ": n ",
    plan.3[["Sample size"]],
    "\n  based on GMR = 1 / (1 \u2013 \u2206) = ",
    sprintf("%5.4f", 1/(1-delta)), ": n ",
    plan.4[["Sample size"]],
    "\n\nTrue GMR = ", sprintf("%5.4f", 1-delta),
    "\n  n ", plan.2[["Sample size"]], ", power ",
    sprintf("%4.2f%%", 100*power.TOST(CV=CV, theta0=1-delta,
    n=plan.2[["Sample size"]])),
    "\n  n ", plan.1[["Sample size"]], ", power ",
    sprintf("%4.2f%%", 100*plan.1[["Achieved power"]]),
    "\nTrue GMR = ", sprintf("%5.4f", 1/(1+delta)),
    "\n  n ", plan.3[["Sample size"]], ", power ",
    sprintf("%4.2f%%", 100*plan.3[["Achieved power"]]),
    "\n  n ", plan.1[["Sample size"]], ", power ",
    sprintf("%4.2f%%", 100*power.TOST(CV=CV, theta0=1/(1+delta),
    n=plan.1[["Sample size"]])),
    "\nTrue GMR = ", sprintf("%5.4f", 1+delta),
    "\n  n ", plan.2[["Sample size"]], ", power ",
    sprintf("%4.2f%%", 100*plan.2[["Achieved power"]]),
    "\n  n ", plan.1[["Sample size"]], ", power ",
    sprintf("%4.2f%%", 100*power.TOST(CV=CV, theta0=1+delta,
    n=plan.1[["Sample size"]])),
    "\nTrue GMR = ", sprintf("%5.4f", 1/(1-delta)),
    "\n  n ", plan.4[["Sample size"]], ", power ",
    sprintf("%4.2f%%", 100*plan.4[["Achieved power"]]),
    "\n  n ", plan.2[["Sample size"]], ", power ",
    sprintf("%4.2f%%", 100*power.TOST(CV=CV, theta0=1/(1-delta),
    n=plan.2[["Sample size"]])), "\n"))

∆ with unknown sign: 10%
  based on GMR = 1 – ∆ = 0.9000      : n 38
  based on GMR = 1 + ∆ = 1.1000      : n 32
  based on GMR = 1 / (1 + ∆) = 0.9091: n 32
  based on GMR = 1 / (1 – ∆) = 1.1111: n 38

True GMR = 0.9000
  n 32, power 75.17%
  n 38, power 81.55%
True GMR = 0.9091
  n 32, power 81.01%
  n 38, power 86.76%
True GMR = 1.1000
  n 32, power 81.01%
  n 38, power 86.76%
True GMR = 1.1111
  n 38, power 81.55%
  n 32, power 75.17%


In simple words, for ∆ 10% assume a GMR of 0.90 (which will preserve power for GMR up to 1.1111) and for ∆ 5% a GMR of 0.95 (which covers up to 1.0526).
If you assume a GMR of 1.10, power will be preserved only down to 0.9091, and with 1.05 only down to 0.9524 – not to 0.90 or 0.95 as many probably expect.

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
Yura (Belarus)
2017-12-29 14:46 – @ Helmut – Posting: # 18125

 Yes, but why?

Hi Helmut,
It's cool how you use the capabilities of R
:ok::clap:
Astea (Russia)
2018-01-21 00:55 – @ Helmut – Posting: # 18234

 Buffon's needle

Dear all!

I was interested in how it works on real data. For this purpose I've calculated the power for ~50 real 2×2 successful studies. Of course the sample size is too small to draw any conclusions but the tendency is pretty similar. The results are below. Please correct me if my reasoning is wrong.

The GMR of Cmax follows the lognormal distribution (p=0.123, Shapiro-Wilk W test), the geometric mean PE was 0.9824, and adding ± SD for the log-transformed data leads to 0.935–1.032. So assuming 94–95% in the sample size calculation seems to reflect the expected ratio well.

Mean CV was 20.5% (median 18.5%). The average a posteriori power was 86.21% (median 97.33%). The distribution was as follows:
  ≥ target    : 73.58%
  ≥ achieved  : 67.92%
  ≥ 0.90      : 62.26%
  [0.95, 0.99]: 16.98%
  ≥ 0.95      : 56.60%
That is, in 26.42% of the successful studies post hoc power was less than 80%. Why not in 50%? I suppose this is connected with two facts: the real number of subjects is usually greater than calculated because researchers dose extra subjects to allow for drop-outs, and there is a regulatory lower limit on the number of subjects in a study.
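For reference, such a survey can be scripted along these lines – the data frame and its values below are hypothetical placeholders, not the real studies:

library(PowerTOST)
# hypothetical input: one row per completed 2x2 study with its observed
# CV, point estimate, and number of eligible subjects
studies <- data.frame(CV  = c(0.185, 0.22, 0.17),
                      GMR = c(0.98, 1.03, 0.95),
                      n   = c(24, 30, 26))
studies$post.hoc <- mapply(function(cv, gmr, n)
  power.TOST(CV = cv, theta0 = gmr, n = n),
  studies$CV, studies$GMR, studies$n)
# share of successful studies with post hoc power below the 80% target
mean(studies$post.hoc < 0.80)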

I've reproduced the calculation performed by Helmut for CV=18.5%, GMR=98.24%, and 10% and 20% drop-out rates.
For 10% I got
  ≥ target    : 66.21%
  ≥ achieved  : 61.56%
  ≥ 0.90      : 40.83%
  [0.95, 0.99]: 16.90%
  ≥ 0.95      : 24.02%

For 20%:
  ≥ target    : 79.65%
  ≥ achieved  : 75.51%
  ≥ 0.90      : 54.21%
  [0.95, 0.99]: 23.58%
  ≥ 0.95      : 34.09%


[image] The blue histogram is for the real data (nbins=15). The higher rate of power close to 1 in the real data is probably connected with the redundant number of subjects in studies with a low CV (for a CV lower than 22% and GMR=0.95 the power would be greater than 80% whenever more than 24 subjects are involved).


P.S.
 if (length(packages[!inst]) > 0) install.packages(packages[!inst])
One needs "s" on the end of "package(s)"?
cat("Results of", nsims, "simulated studies:\n")); summary(res)
extra ")"?

Anticipating the question “but why?”: First, the capabilities of R and of Detlew's and Helmut's code are really impressive! And second, to confirm once more that a posteriori power is a needless thing and that it is a waste of paper to include it in a report.

-Why are you teaching your sister bad words?
-I want her to know them and never repeat.
Oleg777 (Russia)
2018-10-10 00:48 – @ Astea – Posting: # 19421

 Buffon's needle

Dear Astea, would you be so kind… can you explain your message to my colleagues in Russian here?
Helmut (Vienna, Austria)
2018-10-10 15:41 – @ Oleg777 – Posting: # 19428

 0.95 or 1.05

Hi Oleg,

❝ […] can you explain your message to my colleagues in russian here


Interesting thread (as far as machine translation took me). IIRC, some posters were not sure whether to use an assumed T/R-ratio of 0.95 or 1.05. Have a look at this post and that one. In short, if you assume a 5% difference but are not (very!) sure about the direction of the deviation (lower or higher than 100%), a study powered for 0.95 will always be sufficiently powered for 1.05 as well, but not the other way ’round.
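A quick check – the CV of 0.20 is assumed here only for illustration:

library(PowerTOST)
CV   <- 0.20 # illustrative assumption
n.95 <- sampleN.TOST(CV = CV, theta0 = 0.95, print = FALSE)[["Sample size"]]
n.05 <- sampleN.TOST(CV = CV, theta0 = 1.05, print = FALSE)[["Sample size"]]
# a study sized for 0.95 keeps at least its target power at 1.05 ...
power.TOST(CV = CV, theta0 = 1.05, n = n.95)
# ... but one sized for 1.05 has less power at 0.95 (slips below the 80% target)
power.TOST(CV = CV, theta0 = 0.95, n = n.05)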

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
Helmut (Vienna, Austria)
2018-10-10 14:46 – @ Astea – Posting: # 19427

 Buffon's needle

Hi Nastia,

somehow I missed your post. Sorry.

❝ I was interested in how it works on real data. […] That is, in 26.42% of the successful studies post hoc power was less than 80%. Why not in 50%? I suppose this is connected with two facts: the real number of subjects is usually greater than calculated because researchers dose extra subjects to allow for drop-outs, and there is a regulatory lower limit on the number of subjects in a study.


Very interesting!

❝ P.S.

if (length(packages[!inst]) > 0) install.packages(packages[!inst])

❝ One needs "s" on the end of "package(s)"?


Yes. That’s how the function of library utils is called.
Try ?install.packages and ?install.package
Also packages[!inst] not package[!inst]. Corrected in the OP.

cat("Results of", nsims, "simulated studies:\n")); summary(res)

❝ extra ")"?


THX, it was wrong! Corrected in the OP.

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
Astea (Russia)
2018-10-12 01:14 – @ Helmut – Posting: # 19429

 Buffon's needle

Dear Helmut!

❝ somehow I missed your post. Sorry.


Thanks for the reply!

IMHO Russia is suffering from a lack of reliable information and good Russian-language statistical resources. Hope Oleg is trying to improve this situation.

To this day all roads lead to the BEBAC forum – almost every answer to any question begins with a phrase like "See Helmut's post #…" :yes:

"Being in minority, even a minority of one, did not make you mad"
Helmut (Vienna, Austria)
2018-10-12 03:10 – @ Astea – Posting: # 19430

 School maths

Hi Nastia,

I like what you wrote over there:

Если вспомнить школьную математику, формулу ln(x^k)=klnx, тогда становится понятно почему ln(0.9) = –ln(1/0.9) = –1*ln(0.9^(–1)) = 0.10536. Аналогично, ln(0.9500) = –ln(1.052632). Очевидно, что 1.0526 больше чем 1.05.
[If we recall school mathematics, the formula ln(xᵏ) = k·ln(x), then it becomes clear why ln(0.9) = –ln(1/0.9) = –1·ln(0.9⁻¹) = 0.10536. Similarly, ln(0.9500) = –ln(1.052632). Obviously, 1.0526 is greater than 1.05.]

(my emphasis) :-D

BTW, isn’t my name better transliterated with Хельмут Шютц than with Гельмут Шутц?

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
Astea (Russia)
2018-10-12 14:41 – @ Helmut – Posting: # 19439

 School russian

Oh, Helmut!
As I see, you are learning Russian faster than our regulators learn English :clap: Du hast unglaubliche Fähigkeiten! [You have unbelievable abilities!]
I'm looking forward to see your articles on russian ;-)

And logarithms are studied at school (actually I teach children an advanced maths course).

❝ BTW, isn’t my name better transliterated with Хельмут Шютц than with Гельмут Шутц?


As for your name, please don't worry. Russian transcription is awful. We say Юнг instead of Young, Броун instead of Brown… And there is another school rule to write "Жу" and "Шу" with the letter "У" instead of "Ю", except for three words ("жюри", "брошюра", "парашют"). I remember your story about "What a strange name – "Helmut"". :-)

"Being in minority, even a minority of one, did not make you mad"
mittyri (Russia)
2018-10-13 01:25 (edited 2018-10-13 15:04) – @ Helmut – Posting: # 19440

 Offtop: Umschrift der westlichen Eigennamen auf Russisch

Hi Helmut,

❝ BTW, isn’t my name better transliterated with Хельмут Шютц than with Гельмут Шутц?


Such an easy question, and no exact answer!
My colleague and I had a hot discussion about this and found the name transliterated both as 'Хельмут' and as 'Гельмут'.
Both are acceptable, but the latter is the more classical one.

Regarding 'Шутц' and 'Шютц':
it depends on the history of the person and of his name. The most important thing is the presence of the umlaut.
For example, this Schutz is American, so no umlaut is expected or written.
But that Schütz is genuinely German and we'd transliterate it as 'Шютц'.

Kind regards,
Mittyri
mittyri (Russia)
2017-12-28 22:52 – @ Helmut – Posting: # 18119

 EEU?

Hi Helmut,

❝ Fine. Can you explain to us why you performed a “posthoc analysis” at all? What did you want to achieve? To repeat ElMaestro:

❝ ❝ ❝ ❝ Try and ask yourself which question post-hoc power actually answers. Try and formulate it in a very specific sentence.


hmmm...
Maybe the topic starter is preparing the reports for the Eurasian Economic Union? :-D

[image]
" - study power analysis (presenting the results for both Cmax and AUC(0-t) in tabular format);
- summary and conclusion"

It took effect this year. Colleagues will have a lot of fun IMHO.

PS:
The Eurasian Economic Commission – Setting up of Common Market of Pharmaceutical Products link is dead on the Guideline page. I didn't find an alternative. Maybe Yura or Beholder or BE-proff or someone else can help.

Kind regards,
Mittyri
Yura (Belarus)
2017-12-29 14:41 – @ mittyri – Posting: # 18124

 EEU?

Hi mittyri,
As I understand it: present the power analysis and draw the conclusion (even at low power) that, since bioequivalence was established, the a posteriori power does not affect the findings of the study.
A more interesting question: the pharmacokinetic equation and its analysis … :confused:
Kind regards
mittyri (Russia)
2017-12-29 15:11 – @ Yura – Posting: # 18126

 EEU - pharmacokinetic equation???

Hi Yura,

❝ As I understand it: present the power analysis and draw the conclusion (even at low power) that, since bioequivalence was established, the a posteriori power does not affect the findings of the study

Yep. Once again: to be lucky is not a crime :-D
and vice versa: if the post-hoc power is → 100%, you're a hero even if the study was de facto overpowered (e.g., CI 0.98–1.03).

❝ A more interesting question: the pharmacokinetic equation and its analysis … :confused:

Good question! Next question!
Seems like they are interested in a PK model, which is not permitted. No clue so far :-|

Kind regards,
Mittyri
Beholder (Russia)
2018-01-16 16:10 – @ mittyri – Posting: # 18184

 EEU?

Hi Mittyri,

❝ PS: The Eurasian Economic Commission – Setting up of Common Market of Pharmaceutical Products link is dead on the Guideline page. I didn't find an alternative.

❝ … Maybe Yura or Beholder or BE-proff or someone else can help.


Let me try. If you meant this link, then it is still alive and allows one to open the Decisions.

Also, one can use ConsultantPlus in order to find the EEU Decisions, but only after 20.00 (Moscow time). Nevertheless, our neighbours from Belarus have all Decisions uploaded to the Expertise Center site.

Best regards
Beholder
xtianbadillo (Mexico)
2018-01-18 23:22 – @ BE-proff – Posting: # 18219

 Simulations

❝ Is it worth calculating the sample size for 95% power?

❝ Are there any risks?


Too many subjects:
It is unethical to disturb more subjects than necessary.
Some subjects are put at risk although they are not needed.
It is an unnecessary waste of resources ($).
The analytical sample analysis takes more time – time is money <- this does not apply to a rich client.


Too few subjects:
A study unable to reach its objective is unethical.
All subjects are put at risk for nothing.
All resources ($) are wasted when the study is inconclusive.
ElMaestro (Denmark)
2017-12-28 21:13 – @ kms.srinivas – Posting: # 18118

 Numbers don't lie

Dear all,

today I was flipping a coin two times and recording the number of times I got tails.
I defined success as any outcome associated with two tails.

I got two tails and the experiment was a success.
What was the power, given the outcome?

At this point it is clearly experimentally proven that the coin only shows tails, so power was obviously 100%.

Some of you may insist that I was just "lucky", whatever the hell that means. To this backward perception I must clearly refer to the truth illustrated by the numbers: This was not luck, it was meant to happen by way of the nature of the coin.

Numbers don't lie. Humans do. Don't believe all the rubbish from the usual subjects in this thread, like Chewbacca and The Berlin Chimney. They are clearly not so well connected with real life as I am. It is incredible what some people can get away with nowadays.

Pass or fail!
ElMaestro