What the heck‽ [Software]

posted by Helmut – Vienna, Austria, 2014-10-06 21:05 – Posting: # 13657

Hi ElMaestro,

❝ Looks to me like their examples also include a Potvin B run. I am not a veterinarian but ... I think they got it wrong as they plugged in the observed GMR from stage 1 and not 0.95 for stage 2 dimensioning.


Yes, that’s weird. Subsection II.D.3 of the guidance smells of copy-pasting from the EMA’s GL (in contrast to Method C, which the FDA generally suggests). Furthermore:

The plan to use a two-stage approach should be pre-specified in the protocol along with the number of animals to be included in each stage and the adjusted significance levels to be used for each of the analyses.

I beg your pardon?

Figure 1 of the supplement is Potvin’s Method B:

[…] sample size based on variance stage 1…

But in the paragraph below:

[…] sample size based on the information derived at Stage 1.


But let’s continue:

SCENARIO: For the sake of this example, we will use the following Stage 1 assumptions:

Nice to call animals subjects, but these guys are adventurous, aren’t they? Let’s assume they don’t know that with a T/R-ratio of 0.90 an adjusted α of 0.0294 is no longer sufficient (covering the grid of n1 12…60, CV 10…100%). My magic code tells me…

For adj. α of 0.0294 a maximum inflation
of 0.053753 is detected at CV=20% and n1=12.

… and suggests using 0.0272 instead.*
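The ‘magic code’ itself isn’t shown. A minimal sketch of how such a grid scan could be done with Power2Stage (coarser grid and far fewer simulations than the original, purely for illustration; the empirical type I error is assumed to be returned as $pBE):

library(Power2Stage)
# scan n1 × CV for the maximum empirical type I error of Method B
# with GMR=0.90 in the sample-size step (sketch: 1e5 sims per cell)
grid <- expand.grid(n1=seq(12, 60, 12), CV=seq(0.1, 1, 0.1))
grid$TIE <- NA
for (i in seq_len(nrow(grid))) {
  res <- power.2stage(method="B", alpha=c(0.0294, 0.0294),
     n1=grid$n1[i], GMR=0.9, CV=grid$CV[i],
     targetpower=0.8, pmethod="nct",
     usePE=FALSE, Nmax=Inf, theta0=1.25, nsims=1e5)
  grid$TIE[i] <- res$pBE     # fraction of studies passing at theta0=1.25
}
grid[which.max(grid$TIE), ]  # location and size of the maximum inflation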


However, let’s believe in Mr Pocock and stick with 0.0294.

library(PowerTOST)
# sample size for T/R 0.90, CV 15% at Pocock's adjusted alpha
sampleN.TOST(alpha=0.0294, CV=0.15, theta0=0.9, design="2x2x2")
+++++++++++ Equivalence test - TOST +++++++++++
            Sample size estimation
-----------------------------------------------
Study design:  2x2 crossover
log-transformed data (multiplicative model)

alpha = 0.0294, target power = 0.8
BE margins        = 0.8 ... 1.25
Null (true) ratio = 0.9,  CV = 0.15

Sample size (total)
 n     power
26   0.802288

# power actually achieved with the planned 20 animals
power.TOST(alpha=0.0294, CV=0.15, theta0=0.9, n=20)
[1] 0.6850327


Brilliant! They must love a second stage. Some more stuff about inflation & power:

library(Power2Stage)
# empirical type I error: simulate at theta0=1.25 (the upper BE limit)
power.2stage(method="B", alpha=c(0.0294, 0.0294), n1=20,
   GMR=0.9, CV=0.15, targetpower=0.8, pmethod="nct",
   usePE=FALSE, Nmax=Inf, theta0=1.25, nsims=1e6)

Method B: alpha (s1/s2)= 0.0294 0.0294
Futility criterion Nmax= Inf
CV= 0.15; n(stage 1)= 20; GMR= 0.9
BE margins = 0.8 ... 1.25
GMR= 0.9 and mse of stage 1 in sample size est. used

1e+06 sims at theta0= 1.25 (p(BE)='alpha').
p(BE)   = 0.038733
p(BE) s1= 0.029144
pct studies in stage 2= 74.27%

Distribution of n(total)
- mean (range)= 27.4 (20 ... 80)
- percentiles
 5% 50% 95%
 20  26  42


With this CV, T/R-ratio, n1, and 0.0294 there will be no inflation. But that’s not true for other combinations. Overall power ~85% (~27% of studies will proceed to the second stage).
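The power run behind those numbers is not shown above; it would simply be the same call simulating at the assumed T/R-ratio (a sketch, using 1e5 sims as in the runs further down):

# overall power and % of studies in stage 2 (cf. ~85% and ~27% above)
power.2stage(method="B", alpha=c(0.0294, 0.0294), n1=20,
   GMR=0.9, CV=0.15, targetpower=0.8, pmethod="nct",
   usePE=FALSE, Nmax=Inf, theta0=0.9, nsims=1e5)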

What if they want to adapt for the T/R of stage 1 (in the code above switch to usePE=TRUE)?
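That is, presumably the same call with usePE=TRUE, again simulating at theta0=1.25 for the type I error:

power.2stage(method="B", alpha=c(0.0294, 0.0294), n1=20,
   GMR=0.9, CV=0.15, targetpower=0.8, pmethod="nct",
   usePE=TRUE, Nmax=Inf, theta0=1.25, nsims=1e6)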

Method B: alpha (s1/s2)= 0.0294 0.0294
Futility criterion Nmax= Inf
CV= 0.15; n(stage 1)= 20; GMR= 0.9
BE margins = 0.8 ... 1.25
PE and mse of stage 1 in sample size est. used

1e+06 sims at theta0= 1.25 (p(BE)='alpha').
p(BE)   = 0.047555
p(BE) s1= 0.029144
pct studies in stage 2= 36.39%

Distribution of n(total)
- mean (range)= 203753.4 (20 ... 4375926130)
- percentiles
  5%  50%  95%
  20   20 5932


Still no inflation, but the sample sizes ring the alarm bell. Old story: full adaptation rarely ‘works’ in BE. This time the entire output for power:
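The corresponding call would presumably be (per the output header: PE and mse of stage 1 used, 1e5 sims at theta0=0.9):

power.2stage(method="B", alpha=c(0.0294, 0.0294), n1=20,
   GMR=0.9, CV=0.15, targetpower=0.8, pmethod="nct",
   usePE=TRUE, Nmax=Inf, theta0=0.9, nsims=1e5)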

Method B: alpha (s1/s2)= 0.0294 0.0294
Futility criterion Nmax= Inf
CV= 0.15; n(stage 1)= 20; GMR= 0.9
BE margins = 0.8 ... 1.25
PE and mse of stage 1 in sample size est. used

1e+05 sims at theta0= 0.9 (p(BE)='power').
p(BE)   = 0.948
p(BE) s1= 0.68365
pct studies in stage 2= 26.73%

Distribution of n(total)
- mean (range)= 17250.8 (20 ... 1117221800)
- percentiles
 5% 50% 95%
 20  20 160


Oops! In the worst case almost the entire population of India? In the spirit of Karalis/Macheras, let’s add a futility criterion limiting the total sample size. OK, why not 100? No inflation (output not shown); power:
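Same call as before, presumably now with Nmax=100 as the futility criterion:

power.2stage(method="B", alpha=c(0.0294, 0.0294), n1=20,
   GMR=0.9, CV=0.15, targetpower=0.8, pmethod="nct",
   usePE=TRUE, Nmax=100, theta0=0.9, nsims=1e5)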

Method B: alpha (s1/s2)= 0.0294 0.0294
Futility criterion Nmax= 100
CV= 0.15; n(stage 1)= 20; GMR= 0.9
BE margins = 0.8 ... 1.25
PE and mse of stage 1 in sample size est. used

1e+05 sims at theta0= 0.9 (p(BE)='power').
p(BE)   = 0.86378
p(BE) s1= 0.68365
pct studies in stage 2= 18.29%

Distribution of n(total)
- mean (range)= 27.3 (20 ... 100)
- percentiles
 5% 50% 95%
 20  20  68


Might work. Still not what the guidance wants – pre-specified sample sizes in both stages. Will they accept a maximum total sample size as a futility criterion? Does one have to perform the second stage with the pre-specified sample size even if the calculated one is lower?



Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
