Reasons to go for Full Replicate [Design Issues]
First of all, please search the forum before posting. There are dozens of threads discussing these issues in the categories RSABE / ABEL and Power / Sample Size. I will extend ElMaestro’s answers a little bit.
❝ 3. Which is better design? Full or Partial replicate? Why?
❝ ❝ Which is better food: chicken or veg?
When it comes to food, both are fine if properly cooked. When it comes to replicate designs, full. Full stop.
❝ ❝ Full replicate is tougher on volunteers, because there will be more periods.
Only in 4-period replicate designs. If you are worried about dropouts or are concerned about blood loss, consider one of the 3-period full replicate designs, i.e., TRT|RTR or TRR|RTT. BTW, the loss in power due to dropouts is small (see the examples below).
❝ ❝ Partial replicate is fine. RTR/RRT/TRR is a very good design.
It is not even a bad design. Partial replicates, i.e., TRR|RTR|RRT and TRR|RTR, are lousy designs. Reasons:
- If we evaluate a study with the FDA’s mixed-effects model for ABE, it might happen that the optimizer fails to converge. This is independent of the software. Search the forum for data sets where both SAS and Phoenix/WinNonlin failed. Then we have a study without a result. Great.
Remedy in SAS lingo: specify the covariance structure with `TYPE=FA0(1)` instead of the FDA-recommended `TYPE=FA0(2)`. State it in the protocol and initiate a controlled correspondence with the FDA or you might end up with an R-t-R.
- RSABE/ABEL was developed entirely based on simulations. The fundamental publications explored full replicates only. Simulations of partial replicate designs with heteroscedasticity (CVwT ≠ CVwR) are very tricky.
- Sample size estimation. If we have data of a pilot study performed in a full replicate design, we can take different CVwT and CVwR into account. In many cases CVwT < CVwR; then we get an incentive (i.e., a smaller sample size of the pivotal study). In partial replicates we have to assume CVwT = CVwR and go full throttle (see the sketch after this list).
- Study cost ≈ fixed costs + number of treatments × (cost / treatment). Furthermore, the sample size of 4-period replicates is roughly ⅔ of that of 3-period replicates. Hence, we also save on the fixed costs (fewer pre-/post-study exams).
From the economic perspective the 4-period full replicate is always the winner. From the ethical perspective I leave it to you to decide whether it is better to torture fewer subjects more often or the other way ’round.
- The so-called “extra-reference design” TRR|RTR is biased in the presence of period effects and should never ever be used.
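To put the sample-size argument into numbers, a minimal sketch with PowerTOST, assuming hypothetical pilot-study estimates CVwT 0.35 and CVwR 0.45, GMR 0.90, the EMA’s ABEL, and the default target power 0.80 (the CVs are assumptions for illustration only):
library(PowerTOST)
CVwT <- 0.35 # intra-subject CV of Test (assumed)
CVwR <- 0.45 # intra-subject CV of Reference (assumed)
# full replicate: heteroscedasticity can be taken into account (CV = c(CVwT, CVwR))
sampleN.scABEL(CV = c(CVwT, CVwR), theta0 = 0.90, design = "2x2x4", details = FALSE)
# partial replicate: CVwT is not estimable; we have to assume CVwT = CVwR
sampleN.scABEL(CV = CVwR, theta0 = 0.90, design = "2x3x3", details = FALSE)
With CVwT < CVwR the full replicate profits twice: from the design itself and from the smaller CVwT. Regarding the costs, take the ABEL examples further below: the 4-period full replicate needs 30 × 4 = 120 treatments, whereas the 3-period designs need 46 × 3 = 138 (TRT|RTR) or 42 × 3 = 126 (partial replicate), on top of the fixed costs for 46 or 42 instead of 30 subjects.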
❝ ❝ Note here are proponents of RTR/TRT design types on this forum, …
Correct.

❝ ❝ … albeit I believe there isn't an explicit solid regulatory support for that type.
Mentioned in the EMA’s Q&A document. Requires that at least twelve eligible subjects are in sequence RTR – a condition which is always fulfilled in practice (see this post).
Backstory: In my presentations I argued against partial replicates and recommended (if only three periods are preferred: dropouts, limited blood samples) the TRT|RTR instead. Then the Ukrainian agency asked the EMA’s PKWP for a clarification. The omniscient oracle has spoken.
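As a quick plausibility check of the twelve-subject condition, a sketch under assumed values (CVwR 0.45, GMR 0.90, EMA defaults, 5% dropouts per washout):
library(PowerTOST)
n <- sampleN.scABEL(CV = 0.45, theta0 = 0.90, design = "2x2x3",
                    print = FALSE, details = FALSE)[["Sample size"]]
# balanced allocation: half of the subjects start in sequence RTR
eligible.RTR <- floor(n / 2 * (1 - 5 / 100)^2) # two washouts in a 3-period design
c(dosed = n, eligible.RTR = eligible.RTR, condition.met = eligible.RTR >= 12)
Since ABEL is relevant for highly variable drugs only, the estimated sample sizes are large enough that the RTR sequence ends up comfortably above twelve eligible subjects.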
❝ ❝ Fitting RTR/RRT/TRR for FDA may be tricky, since the stats model -if I get it right- in SAS's world doesn't work well.
Not only in SAS.
❝ ❝ It has to do with the lack of an intra-subject variance component for Test.
Correct. The model is over-specified (since T is not repeatedly administered). Even if the optimizer converges, the estimate of CVwT is plain nonsense.
❝ 4. What is the SAS code for Randomization of Full and Partial Replicates?
Please do your homework first. Hints: SAS codes for the FDA’s RSABE and ABE are given in the progesterone guidance and for the EMA’s ABEL in the Q&A document. For Health Canada’s ABEL you have to use a mixed-effects model.
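Not SAS, but since the rest of this post uses R anyway, a minimal block-randomization sketch with base R only (sample size, block size, and seed are arbitrary assumptions for illustration; for a 3-period full replicate change the sequences to "TRT" and "RTR"):
# block randomization of subjects to the sequences of a 4-period full replicate
set.seed(123456)                  # fixed seed keeps the list reproducible
seqs      <- c("TRTR", "RTRT")
n         <- 30                   # subjects to be dosed (assumption)
blocksize <- 4                    # must be a multiple of the number of sequences
blocks    <- ceiling(n / blocksize)
rand      <- unlist(lapply(seq_len(blocks), function(i)
               sample(rep(seqs, blocksize / length(seqs)))))
data.frame(subject = seq_len(n), sequence = rand[seq_len(n)])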
- R code to assess the impact of dropouts for various designs. Give the dropout rate in percent.
library(PowerTOST)
power.final <- function(CV, theta0, target, design, scaling = TRUE,
                        regulator, dropout.rate) {
  # parallel (no washout) and 2x2x2r (non-numeric period code) are not
  # supported by the simple dropout adjustment below
  if (design %in% c("parallel", "2x2x2r")) stop("design not supported.")
  if (scaling) { # RSABE or ABEL
    if (regulator == "FDA") {
      res <- sampleN.RSABE(CV = CV, design = design, theta0 = theta0,
                           targetpower = target, print = FALSE, details = FALSE)
    } else {     # EMA or HC
      res <- sampleN.scABEL(CV = CV, design = design, theta0 = theta0,
                            targetpower = target, regulator = regulator,
                            print = FALSE, details = FALSE)
    }
  } else {       # ABE
    res <- sampleN.TOST(CV = CV, design = design, theta0 = theta0,
                        targetpower = target, print = FALSE, details = FALSE)
  }
  # number of washouts = periods - 1 (last character of the design code)
  washouts <- as.integer(substr(design, nchar(design), nchar(design))) - 1
  eligible <- res[["Sample size"]]
  # apply the dropout rate once per washout
  for (j in 1:washouts) {
    eligible <- eligible * (1 - dropout.rate / 100)
  }
  eligible <- round(eligible)
  # power with the eligible subjects (messages about unbalanced n suppressed)
  if (scaling) {
    if (regulator == "FDA") {
      pwr <- suppressMessages(power.RSABE(CV = CV, design = design,
                                          theta0 = theta0, n = eligible))
    } else {
      pwr <- suppressMessages(power.scABEL(CV = CV, design = design,
                                           theta0 = theta0, n = eligible,
                                           regulator = regulator))
    }
  } else {
    pwr <- suppressMessages(power.TOST(CV = CV, design = design,
                                       theta0 = theta0, n = eligible))
  }
  cat("\nDesign", design,
      "\nStudy started with", res[["Sample size"]], "subjects (achieved power",
      sprintf("%.1f%%).", 100 * res[["Achieved power"]]),
      "\nStudy finished with", eligible, "eligible subjects (power",
      sprintf("%.1f%%).", 100 * pwr), "\n\n")
}
Of course we will see more dropouts in 4-period designs, but is this larger loss in power compared to the 3-period designs relevant?
Examples for CV 0.4, GMR 0.9, EMA’s ABEL, dropout-rate 5%. Call the function with
power.final(CV=0.4, theta0=0.9, target=0.8, design=design, scaling=TRUE,
            regulator="EMA", dropout.rate=5)
and change `design` to one of `"2x2x4"`, `"2x2x3"`, `"2x3x3"`.
Gives:
Design 2x2x4
Study started with 30 subjects (achieved power 80.7%).
Study finished with 26 eligible subjects (power 75.9%).
Design 2x2x3
Study started with 46 subjects (achieved power 80.7%).
Study finished with 42 eligible subjects (power 77.7%).
Design 2x3x3
Study started with 42 subjects (achieved power 80.1%).
Study finished with 38 eligible subjects (power 76.3%).
Coming back to one of your questions:
❝ 2. Can we go for 2 way crossover, when ISCV > 30% also?
Sometimes we are not allowed to expand the acceptance range (i.e., if we are not able to provide a clinical justification). Remember that for the EMA scaling would be acceptable only for Cmax and for Health Canada only for AUC… Let’s see (as above, but additionally `design="2x2x2"`, `scaling=FALSE`):
Design 2x2x4
Study started with 68 subjects (achieved power 80.7%).
Study finished with 58 eligible subjects (power 75.0%).
Design 2x2x3
Study started with 100 subjects (achieved power 80.0%).
Study finished with 90 eligible subjects (power 76.2%).
Design 2x3x3
Study started with 102 subjects (achieved power 80.7%).
Study finished with 92 eligible subjects (power 77.0%).
Design 2x2x2
Study started with 134 subjects (achieved power 80.1%).
Study finished with 127 eligible subjects (power 78.2%).
- For the FDA you can use RSABE for any PK metric. Since generally the variability of Cmax is larger than that of AUC, you might end up in a situation where you employ RSABE for Cmax (swR ≥ 0.294) and ABE for AUC (swR < 0.294).
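A sketch of the consequence for planning, with assumed values (CVwR 0.45 for Cmax, i.e., swR ≥ 0.294; CVwR 0.25 for AUC, i.e., swR < 0.294; GMR 0.90; 4-period full replicate):
library(PowerTOST)
n.Cmax <- sampleN.RSABE(CV = 0.45, theta0 = 0.90, design = "2x2x4",
                        print = FALSE, details = FALSE)[["Sample size"]]
n.AUC  <- sampleN.TOST(CV = 0.25, theta0 = 0.90, design = "2x2x4",
                       print = FALSE, details = FALSE)[["Sample size"]]
max(n.Cmax, n.AUC) # power the study for the more demanding metric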
Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked. 🚮