cakhatri
★    

India,
2017-11-30 10:44

Posting: # 18024
 

 Sample Size Estimation – Replicate design – BE limits [Power / Sample Size]

Dear All,

I would like to know:

1. What BE limits should be taken into consideration while estimating the sample size for replicate-design studies for the USFDA & EMA?

Should it be 80–125%? We normally use FARTSSIE.

Why the question:

1. For a product with 51% ISCV (from a completed partial replicate study, UK MHRA), using FARTSSIE we arrived at different sample sizes for the EMA:

- Sample size estimated = 27 (80% power, 95–105% T/R, Method C1, BE limits 69.84–143.19%)
- Sample size estimated = 76 (80% power, 95–105% T/R, no reference-scaling, BE limits 80–125%)

Which sample size should we consider for the pivotal study?

Regards
Chirag
Helmut
★★★
Vienna, Austria,
2017-11-30 11:58

@ cakhatri
Posting: # 18025
 

 Reference-scaling: Don’t use FARTSSIE!

Hi Chirag,

❝ What BE limits should be taken into consideration while estimating the sample size for replicate-design studies for the USFDA & EMA?


If you want to go with reference-scaling, none! The limits depend on the CVwR and are estimated in the actual study. Hence, you need simulations, where your assumed CVwR is the average. Either use the tables provided by the two Lászlós* or the package PowerTOST for R. The software is open source and free of charge. :thumb up:

❝ Should it be 80-125% ?


No (see above) – only if you don’t want reference-scaling (i.e., conventional ABE). Actually, reference-scaling was introduced to avoid the extreme sample sizes required for ABE.
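
As a quick self-check – a minimal sketch using the 51% CV of this thread and the 90% T/R recommended below; exact numbers depend on the PowerTOST version – compare conventional ABE against ABEL:

library(PowerTOST)
sampleN.TOST(CV=0.51, theta0=0.90, design="2x2x4")   # conventional ABE, fixed limits 80.00–125.00%
sampleN.scABEL(CV=0.51, theta0=0.90, design="2x2x4") # ABEL, expanded limits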

❝ We normally use FARTSSIE


FARTSSIE cannot perform the necessary simulations and is therefore not suitable for any of the reference-scaling methods.

❝ For a product with 51% ISCV (from a completed partial replicate study, UK MHRA), using FARTSSIE we arrived at different sample sizes for the EMA


❝ - Sample size estimated = 27 (80% Power, 95-105% T/R, Method C1, BE Limits 69.84 to 143.19%)

❝ - Sample size estimated = 76 (80% Power, 95-105% T/R, No reference Scaled BE Limits-80-125%)

  1. Do not assume a T/R of 95% for HVDP(s). The two Lászlós recommend a 10% difference, for good reasons. Therefore, 90% is the default in PowerTOST’s reference-scaling functions.
  2. If ever possible, avoid the partial replicate design. Though the evaluation works for the EMA’s methods, sometimes no convergence is achieved in the FDA’s mixed-effects model (if CVwR <30%) – independent of the software. Study done, no result, you are fired.
    Better to opt for full replicate 2-sequence designs: 4 periods (TRTR|RTRT) or – if you are afraid of dropouts and/or want to avoid large sampling volumes – 3 periods (TRT|RTR). In the latter you need at least twelve eligible subjects in sequence RTR according to the EMA’s Q&A document (not relevant in practice).
After installing R and the package PowerTOST, start the R console and type library(PowerTOST). If you want to make yourself familiar with PowerTOST, type help(package=PowerTOST).
Try these examples, where "2x2x4" denotes the 4-period full replicate, "2x2x3" the 3-period full replicate, and "2x3x3" the 3-sequence 3-period (partial) replicate:
  • The EMA’s ABEL
    sampleN.scABEL(CV=0.51, design="2x2x4")
    sampleN.scABEL(CV=0.51, design="2x2x3")
    sampleN.scABEL(CV=0.51, design="2x3x3")

  • Health Canada’s ABEL
    sampleN.scABEL(CV=0.51, design="2x2x4", regulator="HC")
    sampleN.scABEL(CV=0.51, design="2x2x3", regulator="HC")
    sampleN.scABEL(CV=0.51, design="2x3x3", regulator="HC")

  • The FDA’s RSABE
    sampleN.RSABE(CV=0.51, design="2x2x4")
    sampleN.RSABE(CV=0.51, design="2x2x3")
    sampleN.RSABE(CV=0.51, design="2x3x3")
Sample sizes for the FDA (22, 34, 30) are always smaller than the ones for the EMA (28, 42, 39) because in RSABE there is no 50% CVwR cap for scaling. On the other hand, Health Canada’s ABEL sometimes requires fewer subjects than the EMA’s because its upper cap is at a CVwR of 57.38%. If your boss insists on a T/R of 95%, add the argument theta0=0.95 to the function calls – but don’t blame me if the study fails. If you want 90% power instead of the default 80%, add the argument targetpower=0.90 (see the sketch below).
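
A minimal sketch of these two options (using the 51% CV and the 4-period full replicate from above):

sampleN.scABEL(CV=0.51, design="2x2x4", theta0=0.95)      # optimistic T/R – not recommended
sampleN.scABEL(CV=0.51, design="2x2x4", targetpower=0.90) # 90% instead of 80% target power
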
I suggest performing a power analysis (recommended by ICH E9) assessing the influence of deviations from the assumptions (lower/higher CVwR, a T/R deviating more than expected from 100%, fewer eligible subjects due to dropouts). Try (with the default partial replicate):

pa.scABE(CV=0.51, regulator="EMA")
pa.scABE(CV=0.51, regulator="HC")
pa.scABE(CV=0.51, regulator="FDA")

For the FDA the CVwR can be as high as ~97% before power drops to 70%, though the sample size is smaller (30) compared to the EMA’s and Health Canada’s (39). With the same sample size of 39, the CVwR can rise to ~76% (Health Canada) but only to ~64% (EMA). This is a consequence of the differing upper scaling caps (see the sketch below).
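
To see the caps directly – a small sketch; the CVs are chosen at (EMA) and above (Health Canada) the respective caps:

scABEL(CV=0.50, regulator="EMA") # ~69.84–143.19%; no further expansion beyond CVwR 50%
scABEL(CV=0.58, regulator="HC")  # capped at ~66.7–150.0% (CVwR ~57.4%)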


  • Endrényi L, Tóthfalusi L. Sample Sizes for Designing Bioequivalence Studies for Highly Variable Drugs. J Pharm Pharmaceut Sci. 2011;15(1):73–84. Free resource.

Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Astea
★★  

Russia,
2017-11-30 23:50

(edited by Astea on 2017-12-01 14:18)
@ Helmut
Posting: # 18028
 

 Reference-scaling: Don’t use FARTSSIE?

❝ FARTSSIE cannot perform the necessary simulations and therefore, is not suitable for any of the reference-scaling methods.


Just a few words in defence of FARTSSIE. Maybe the approximations are crude, far from the exact PowerTOST (see #), but overall the results seem to be in reasonable agreement for CV >30%:

design               2x2x4                        2x2x3 (2x3x3 from Table)
theta0 0.9  0.9  0.9  0.95 0.95 0.95|0.9  0.9  0.9  0.95 0.95 0.95
CV,%   F    R    T    F    R    T    F    R    T    F    R     T
30     40   34   35   20   18   19   60   50   53   30   26   27
35     32   34   34   19   20   20   48   50   51   28   28   29
40     27   30   31   18   20   20   41   46   44   27   30   29
45     25   28   29   18   20   21   37   42   40   27   30   29
50     23   28   28   17   22   22   34   42   40   26   32   30
55     27   30   30   20   22   23   40   44   42   30   34   32
60     31   32   32   23   24   25   46   48   46   35   36   36
65     35   36   37   27   28   28   53   54   53   40   40   40
70     40   40   40   30   30   31   60   60   58   45   46   45
75     44   44   45   33   34   34   66   66   67   50   50   50
80     49   50   50   37   38   38   73   74   72   55   56   54

where
F – FARTSSIE22 (note: the sample size has to be rounded up to the next even number)
R – PowerTOST v.1.4.6, sampleN.scABEL()
T – Table from Endrényi L, Tóthfalusi L. Sample Sizes for Designing Bioequivalence Studies for Highly Variable Drugs. J Pharm Pharmaceut Sci. 2011;15(1):73–84. (*edit: the last columns refer to the 2x3x3 design)

"Being in minority, even a minority of one, did not make you mad"
Helmut
★★★
Vienna, Austria,
2017-12-01 12:42

@ Astea
Posting: # 18029
 

 Reference-scaling: Don’t use FARTSSIE!

Hi Astea,

❝ Just a few words in defence of FARTSSIE. Maybe the approximations are crude, far from the exact PowerTOST (see #), but overall the results seem to be in reasonable agreement for CV >30%:


Nope. The EMA’s ABEL not only means extending the limits (based on the observed – not the assumed – CVwR) but also assessing the GMR-restriction. It’s a complex framework and therefore no simple formula exists to calculate power. Try this one:

library(PowerTOST)
CV         <- c(30, 40, 50)
n          <- c(40, 27, 23)
n.FARTSSIE <- n + n %% 2 # round to next even
res        <- data.frame(CV=CV, n.FARTSSIE=n.FARTSSIE, pwr.FARTSSIE=NA,
                         n.ABE=NA, pwr.ABE=NA, n.ABEL=NA, pwr.ABEL=NA)
names(res)[1] <- "CV%"
EL         <- scABEL(CV=CV/100)
for (j in seq_along(CV)) {
  # power for ABEL with n of FARTSSIE
  res[j, "pwr.FARTSSIE"] <- power.scABEL(CV=CV[j]/100, design="2x2x4",
                                         n=n.FARTSSIE[j])
  # PowerTOST's function for ABE mimicking FARTSSIE
  tmp <- sampleN.TOST(CV=CV[j]/100, theta0=0.9, theta1=EL[[j, "lower"]],
                      design="2x2x4", print=FALSE, details=FALSE)
  res[j, "n.ABE"]   <- tmp[["Sample size"]]
  res[j, "pwr.ABE"] <- tmp[["Achieved power"]]
  # PowerTOST's function for ABEL (1e5 simulations)
  tmp <- sampleN.scABEL(CV=CV[j]/100, theta0=0.9, design="2x2x4",
                        print=FALSE, details=FALSE)
  res[j, "n.ABEL"]   <- tmp[["Sample size"]]
  res[j, "pwr.ABEL"] <- tmp[["Achieved power"]]
}
print(signif(res, 5), row.names=FALSE)

 CV% n.FARTSSIE pwr.FARTSSIE n.ABE pwr.ABE n.ABEL pwr.ABEL
  30         40      0.85247    40 0.80999     34  0.80281
  40         28      0.78286    28 0.81790     30  0.80656
  50         24      0.75363    24 0.83042     28  0.81428


For all CVs FARTSSIE’s sample sizes will give the desired power for ABE (column pwr.ABE) but not for ABEL (column pwr.FARTSSIE). For CV 30% the power will be too high (since only 34 subjects are required) and for CV >30% too low (more subjects are required). Only sampleN.scABEL() gives sample sizes (n.ABEL) with at least the desired power (pwr.ABEL).

As expected, FARTSSIE screws up completely if one wants to assess the Type I Error. Set “Method C1” and the 4-period replicate. Try Calculate Power with CV 30%, T/R 125%, and a sample size of 34. I got

version power
  2.2   5%   
  1.8   4.99%
  1.7   4.99%

Sorry, Dave!

library(PowerTOST)
cat(paste0(100*power.scABEL(CV=0.3, design="2x2x4", theta0=1.25,
                            n=34, nsims=1e6), "%\n"))
8.1626%


Astea
★★  

Russia,
2017-12-01 15:16

@ Helmut
Posting: # 18031
 

 Reference-scaling: USE PowerTOST

Dear Helmut!

Thank you for shedding light on this question!

The advantage of R is beyond doubt. I just wanted to show that FARTSSIE is no worse than other crude methods of sample size estimation (remember nomograms and so on).

(Sorry, I made a mistake in the previous post: the data from the paper refer to the partial replicate design 2x3x3.)

By the way, for extreme numbers from the paper by Endrényi L, Tóthfalusi L. we also get a slightly underpowered study:

power.scABEL(CV=0.8,n=54,theta0=0.95,design="2x3x3")
[1] 0.79917
power.scABEL(CV=0.8,n=72,theta0=0.9,design="2x3x3")
[1] 0.79782

"Being in minority, even a minority of one, did not make you mad"
cakhatri
★    

India,
2017-12-04 07:00

@ Astea
Posting: # 18032
 

 Reference-scaling: USE PowerTOST

Dear Helmut & Astea,

Thank you for the response. Honestly, this is too much statistics for me to understand. We are discussing with our statisticians & medical writing team to fully understand the response and implement it accordingly.

I shall revert in case of any queries.

Thank you once again.

Regards
Chirag
Helmut
★★★
Vienna, Austria,
2017-12-04 16:07

@ cakhatri
Posting: # 18034
 

 Reference-scaling: USE PowerTOST

Hi Chirag,

your statisticians should read the relevant papers and register here / ask questions directly. Better than playing Chinese whispers. :-D

cakhatri
★    

India,
2017-12-05 06:33

@ Helmut
Posting: # 18035
 

 Reference-scaling: USE PowerTOST

Dear Helmut,

Very true. The statistician was advised to do so and has already created a login yesterday or the day before.

Regards
Chirag
d_labes
★★★

Berlin, Germany,
2017-12-06 10:52

@ Astea
Posting: # 18037
 

 Use PowerTOST

Dear Astea!

❝ ...

❝ By the way, for extreme numbers from the paper by Endrényi L, Tóthfalusi L. we also get a slightly underpowered study:

power.scABEL(CV=0.8,n=54,theta0=0.95,design="2x3x3")

❝ [1] 0.79917

power.scABEL(CV=0.8,n=72,theta0=0.9,design="2x3x3")

❝ [1] 0.79782

❝ ...


The sample sizes in the paper of the two Lászlós are based on only 10 000 simulations.
Additionally, the seed of the pseudo-random number generator for the sims was presumably different.
Try this:
power.scABEL(CV=0.8, n=54, theta0=0.95, design="2x3x3", nsims=10000)
[1] 0.7976
power.scABEL(CV=0.8, n=54, theta0=0.95, design="2x3x3", nsims=10000, setseed=F)
[1] 0.7995
power.scABEL(CV=0.8, n=54, theta0=0.95, design="2x3x3", nsims=10000, setseed=F)
[1] 0.8029


With nsims=1E6 and setseed=F, on the other hand, I did not succeed in obtaining power >0.8 (20 tries).
One more reason to use PowerTOST’s power.scABEL() / sampleN.scABEL() functions :cool: – also w.r.t. the tables in the paper of the two Lászlós.

Regards,

Detlew
mittyri
★★  

Russia,
2017-12-07 17:03

@ d_labes
Posting: # 18043
 

 Use PowerTOST, not Endrényi L, Tóthfalusi L tables

Dear Detlew,

here comes some magic (citing the paper):
The precision of the estimation was evaluated by running the simulations twenty times at twenty-two different conditions (different CV’s, different GMR’s and different designs).
:-D

Seed was sampled 'unfortunately' about 20 times?

Kind regards,
Mittyri
d_labes
★★★

Berlin, Germany,
2017-12-07 19:21

(edited by d_labes on 2017-12-08 14:38)
@ mittyri
Posting: # 18044
 

 Precision in Endrényi L, Tóthfalusi L tables

Dear mittyri,

❝ here comes a magik (citing the paper):

The precision of the estimation was evaluated by running the simulations twenty times at twenty-two different conditions (different CV’s, different GMR’s and different designs).

:-D


❝ Seed was sampled 'unfortunately' about 20 times?


What do you mean by 'unfortunately'?

This sentence describes an investigation of how precise the simulated power values are.
Read on in that paragraph:
"The standard deviation of the simulated powers (20 values under each combination of CV, GMR and design, each with 10 000 sims; my insertion) was calculated under each condition; the mean of these standard deviations was 0.460. Thus, the precision of the power estimation is about ±0.5%."
They must, of course, always have chosen a different seed for the 20 repetitions under each combination of CV, GMR and design. Otherwise you would always get the same result for each combination.

Of course, they also had the opportunity to obtain a precision estimate via the binomial distribution, without the burden of all those calculations. But then the random number generator would have to be ideal.
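
For illustration, a back-of-the-envelope sketch of that binomial argument:

p     <- 0.8   # simulated power of about 80%
nsims <- 1e4   # 10 000 simulations per run
100 * sqrt(p * (1 - p) / nsims) # ≈0.4 percentage points – close to the reported 0.46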

BTW: Both Lászlós recommend using PowerTOST when asked :thumb up:.

Regards,

Detlew
mittyri
★★  

Russia,
2017-12-08 23:56

@ d_labes
Posting: # 18048
 

 Precision of PowerTOST

Dear Detlew,

thank you for the clarification.
I tried to figure out the precision of PowerTOST’s estimates with the following code:
library("ggplot2")
library("PowerTOST")
reps <- 1E6
scABELdata <- data.frame(rep = 1:reps)
for(i in 1:reps){
  set.seed(i)
  scABELdata$power[i] <- power.scABEL(CV=0.8, n=54, theta0=0.95, design="2x3x3", nsims=10000, setseed=F)
}

ggplot(scABELdata, aes(power))+
  geom_density(fill = 2, alpha = 0.3)+
  theme_bw() +
  ggtitle(sprintf("%d reps: Mean is %.4f, SD is %.4f; %.2f %% obs are less than 0.8", reps, mean(scABELdata$power), sd(scABELdata$power), sum(scABELdata$power<0.8)/length(scABELdata$power)*100))


Please correct me if it is wrong.
The resulting plot:
[image: density plot of the simulated power values]

Based on the mean and the SE (calculated from the SD and n=20) I would say that a sample mean of 80% in PowerTOST would be real misfortune :cool:

❝ BTW: Both Laszlos recommend to use PowerTOST if they were asked :thumb up:.

Fully agree and thanks again for this wonderful package.

Kind regards,
Mittyri
Helmut
★★★
Vienna, Austria,
2017-12-09 18:00

@ mittyri
Posting: # 18049
 

 Precision of PowerTOST

Hi mittyri,

❝ I tried to figure out the precision of PowerTOST estimations with the following code:

  scABELdata$power[j] <- power.scABEL(CV=0.8, n=54, theta0=0.95,

                                      design="2x3x3", nsims=10000,

                                      setseed=F)

❝ please correct if it is wrong.


I think so.
You got the right answer to a wrong question. For theta0=0.95 and targetpower=0.8 the sample sizes are 57 for design="2x3x3" and 56 for design="2x2x3". Therefore, with 54 subjects power will be <0.8 in both designs:

library(PowerTOST)
power.scABEL(CV=0.8, theta0=0.95, n=57, design="2x3x3")
# [1] 0.82231
power.scABEL(CV=0.8, theta0=0.95, n=54, design="2x3x3")
# [1] 0.79917
power.scABEL(CV=0.8, theta0=0.95, n=56, design="2x2x3")
# [1] 0.81545
power.scABEL(CV=0.8, theta0=0.95, n=54, design="2x2x3")
# [1] 0.79883


❝ […] I would say that a sample mean of 80% in PowerTOST would be real misfortune :cool:


On the contrary: fortunately, it works as designed with the right sample sizes. :-D


mittyri
★★  

Russia,
2017-12-09 22:05

@ Helmut
Posting: # 18050
 

 Precision of PowerTOST

Hi Helmut,

sorry if my thoughts were unclear in the previous post.
As Astea noted, the two Lászlós in their paper estimated the required minimum sample size as 54 subjects for the "2x3x3" design (CV=0.8, theta0=0.95). They claimed that "The precision of the estimation was evaluated by running the simulations twenty times".
My question was: is that possible with PowerTOST? Is it possible to get a mean power of 80% as the mean of 20 runs?
library("ggplot2")
library("PowerTOST")
reps <- 1E4
scABELoverall <- data.frame(rep = 1:reps)
scABEL20 <- data.frame(1:20)
for(external in 1:reps){
  for(internal in 1:20){
    scABEL20$power[internal] <- power.scABEL(CV=0.8, n=54, theta0=0.95, design="2x3x3", nsims=10000, setseed=F)
  }
  scABELoverall$meanpower[external] <- mean(scABEL20$power)
}
ggplot(scABELoverall, aes(meanpower))+
  geom_density(fill = 2, alpha = 0.3)+
  theme_bw()+
  ggtitle(sprintf("%d reps: Mean is %.4f, SD is %.4f; %.2f %% of twenties are less than 0.8", reps, mean(scABELoverall$meanpower), sd(scABELoverall$meanpower), sum(scABELoverall$meanpower<0.8)/length(scABELoverall$meanpower)*100))


[image: density plot of the mean power over 20 runs]
So the answer is: yes, that's possible, but you need to be unlucky (the probability is about 7%) :-D

Kind regards,
Mittyri
Astea
★★  

Russia,
2017-12-10 01:10

@ mittyri
Posting: # 18051
 

 Precision of PowerTOST

Dear All!

Thank you for clarifying this question! Interesting consequences!

How much time did you spend on the plot with 1E6 reps? My computer hung when I tried to go above 1E4... :sleeping:

❝ So the answer is: yes, that's possible, but you need to be unlucky (the probability is about 7%)


For 72 subjects and theta0=0.9 it gets even worse: 98.21% are less than 0.8 (checked with 1E4 reps).

"Being in minority, even a minority of one, did not make you mad"
Helmut
★★★
Vienna, Austria,
2017-12-10 21:27

@ Astea
Posting: # 18052
 

 Precision of PowerTOST

Hi Astea,

❝ How much time did you spend on the plot with 1E6 reps? My computer hung when I tried to go above 1E4...


2:15 hours on my 2½-year-old CPU for 1e5 reps × nsims=1e5 – since this is the default in power.scABEL().

Astea
★★  

Russia,
2017-12-10 22:13

@ Helmut
Posting: # 18053
 

 Precision of PowerTOST

Dear Helmut!

❝ 2:15 hours on my 2½-year-old CPU for 1e5 reps × nsims=1e5 – since this is the default in power.scABEL().

:ok: Citius, Altius, Fortius!

"Being in minority, even a minority of one, did not make you mad"
pjs
★    

India,
2018-02-09 14:08

@ Helmut
Posting: # 18388
 

 Reference-scaling: Don’t use FARTSSIE!

Hi All,

❝ If you want to go with reference-scaling, none! They depend on the CVwR and are obtained in the actual study. Hence, you need simulations, where your assumed CVwR is the mean. Either use the tables provided by the two Lászlós* or package PowerTOST for R. The software is open source and comes free of costs. :thumb up:


For highly variable drugs, simulations are required to estimate the sample size.

Just wondering which parameters/variables are applied in the simulations being done, likewise in the sample sizes suggested by the Lászlós or by the package PowerTOST.

Also, what could be the impact of differences in the assumed variables for the simulations in case of borderline high and very large variability (like 30% and 300%), varying T/R ratios (100%, 80.01%), etc.?


Regards
pjs
Helmut
★★★
Vienna, Austria,
2018-02-09 21:15

@ pjs
Posting: # 18391
 

 Reference-scaling: Don’t use FARTSSIE!

Hi pjs,

❝ For highly variable drugs, simulations are required to estimate the sample size.


Correct.

❝ Just wondering which parameters/variables are applied in the simulations being done, likewise in the sample sizes suggested by the Lászlós or by the package PowerTOST.


Try the respective functions: sampleN.RSABE() for the FDA’s reference-scaling and sampleN.scABEL() for the EMA’s and Health Canada’s average bioequivalence with expanding limits, both in PowerTOST. Both functions are extensively documented. In the R console, after typing library(PowerTOST), try help(sampleN.RSABE) or help(sampleN.scABEL).
László Tóthfalusi’s code (in MATLAB) is not available in the public domain. It is slow (therefore, only 10,000 simulations) and for years the two Lászlós have recommended PowerTOST instead. ;-)

❝ Also, what could be the impact of differences in the assumed variables for the simulations in case of borderline high and very large variability (like 30% and 300%),


With a high (say >50%) CVwR the GMR-restriction (80.00% ≤ GMR ≤ 125.00%) takes precedence over the CI decision. At a CVwR of 300% practically only the GMR is important (esp. for RSABE); see the sketch below.
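
A small sketch of why (my own illustration; the FDA’s regulatory constant is log(1.25)/0.25 ≈ 0.893):

swR <- CV2se(3.0)                                       # CVwR 300% on the log-scale
round(100 * exp(c(-1, 1) * log(1.25) / 0.25 * swR), 1)  # implied RSABE limits – extremely wide
round(100 * scABEL(CV=3.0, regulator="EMA"), 2)         # ABEL stays capped at 69.84–143.19%

With such wide implied limits practically any CI will pass and essentially only the GMR-restriction decides.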

❝ varying T/R ratios (100%, …


Never, ever assume a T/R-ratio of 100%. It will strike back when the study fails because of a too small sample size. We recommend 90% at best; see the sketch below.
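
A quick sketch of what can happen (assuming the 51% CV of this thread): plan with an optimistic T/R of 100% and check the power if the true ratio is only 90%.

n.opt <- sampleN.scABEL(CV=0.51, theta0=1.00, design="2x2x4",
                        print=FALSE, details=FALSE)[["Sample size"]]
power.scABEL(CV=0.51, theta0=0.90, design="2x2x4", n=n.opt)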

❝ … 80.01%)


For CVwR ≤30% (no scaling) that is almost simulating the Type I Error (assuming a true null, which would be at 80%).
Try

power.scABEL(CV=0.3, theta0=0.8, design="2x2x4", n=34, nsims=1e6)
power.RSABE(CV=0.3, theta0=0.8, design="2x2x4", n=32, nsims=1e6)

Surprised? :thumb down:
