kms.srinivas

India,
2018-03-09 06:38

(edited by kms.srinivas on 2018-03-09 10:15)
Posting: # 18506
 

 Reasons to go for Partial and Full Replicate [Design Issues]

Dear Friends,
I would be very thankful if you could help me with the following:
1. In which circumstances should a partial or full replicate design be used?
2. Can we go for a 2-way crossover even when the ISCV is > 30%?
3. Which is the better design, full or partial replicate? Why?
4. What is the SAS code for randomization of full and partial replicates?

Thanks in advance for your response.
ElMaestro

Denmark,
2018-03-09 11:17

@ kms.srinivas
Posting: # 18507
 

 Reason to go for Partial and Full Replicate

Hello kms.srinivas,


❝ 1. In which circumstances should a partial or full replicate design be used?

"Should" is perhaps slightly wrong. Generally, regulators tend not to make the choice of going replicate or partial replicate for any applicants.
Partial or full replicates can be used all the time. They tend to be advantageous if (and in my opinion only if) the study in which they are used shows a CVintra(ref) of >30%. Here's an element of predicting, guessing, expecting variability. You can often get inspiration from published studies/PARs/FOI information etc.
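(If it helps: the widened limits as a function of CVwR can be looked up with the scABEL() function of the R package PowerTOST. A minimal sketch for the EMA's ABEL; the CV values are arbitrary examples.)
    library(PowerTOST)
    # Expanded acceptance limits (EMA's ABEL) as a function of CVwR:
    # at CVwR <= 30% the conventional 80.00-125.00% applies; above that
    # the limits widen, capped at CVwR = 50%.
    CVs  <- c(0.25, 0.30, 0.40, 0.50, 0.60)   # arbitrary example values
    lims <- sapply(CVs, scABEL, regulator = "EMA")
    colnames(lims) <- sprintf("CVwR %.0f%%", 100 * CVs)
    round(lims, 4)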

❝ 2. Can we go for a 2-way crossover even when the ISCV is > 30%?

Yes. In a sense it might incur a penalty in terms of sample size (study cost), but that will mainly be a hindsight conclusion.
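To put a rough number on that penalty, a minimal sketch with PowerTOST (CVwR 40%, GMR 0.95, and 80% target power are assumed example values; ABEL under EMA conditions for the replicate):
    library(PowerTOST)
    # Assumed example values: CVwR 40%, GMR 0.95, target power 80%
    sampleN.TOST(CV = 0.40, theta0 = 0.95, targetpower = 0.80,
                 design = "2x2x2")                      # conventional ABE, 2x2x2
    sampleN.scABEL(CV = 0.40, theta0 = 0.95, targetpower = 0.80,
                   design = "2x2x4", regulator = "EMA") # ABEL, 4-period full replicate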

❝ 3. Which is the better design, full or partial replicate? Why?

Which is better food: chicken or veg?
A full replicate gives you the opportunity to derive the intra-subject CV for Test. But that quantity isn't much used, generally. If we talk about widening of the acceptance range, then that is done on the basis of the intra-subject CV for Ref.
Full replicate is tougher on volunteers, because there will be more periods. All other factors being equal, you will often see higher drop-out rates towards the late periods.
Partial replicate is fine. RTR/RRT/TRR is a very good design. Note that there are proponents of RTR/TRT design types on this forum, although I believe there isn't explicit, solid regulatory support for that type. This may constitute a type of semi-replicate which will mature with time.
Fitting RTR/RRT/TRR for the FDA may be tricky, since the stats model (if I get it right) doesn't work well in SAS's world. It has to do with the lack of an intra-subject variance component for Test.

❝ 4. What is the SAS code for randomization of full and partial replicates?

Can't help.
But the code will differ depending on the country in which the dossier is submitted.

A much more personal remark: if in doubt (if the CV is expected to be just around 30%, for example), go for a 222BE design.
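As an illustration (a sketch with PowerTOST; GMR 0.95 and 80% target power are assumed example values): at a CVwR of 30% ABEL would not widen the limits anyway, and the conventional 2×2×2 crossover stays at a manageable sample size.
    library(PowerTOST)
    scABEL(CV = 0.30)   # at CVwR = 30% the limits are not widened: 0.80 ... 1.25
    sampleN.TOST(CV = 0.30, theta0 = 0.95, targetpower = 0.80, design = "2x2x2")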

Pass or fail!
ElMaestro
Helmut
Vienna, Austria,
2018-03-09 14:17

@ kms.srinivas
Posting: # 18508
 

 Reasons to go for Full Replicate

Hi kms,

first of all please search the forum before posting. There are dozens of threads discussing these issues in the categories RSABE / ABEL and Power / Sample Size. I will extend ElMaestro’s answers a little bit.

❝ 3. Which is the better design, full or partial replicate? Why?

❝ ❝ Which is better food: chicken or veg?


When it comes to food, both are fine if properly cooked. When it comes to replicate designs, full. Full stop.

❝ ❝ Full replicate is tougher on volunteers, because there will be more periods.


Only in 4-period replicate designs. If you are worried about dropouts or are concerned about blood loss, consider one of the 3-period full replicate designs, i.e., TRT|RTR or TRR|RTT. BTW, the loss in power due to dropouts1 is small.

❝ ❝ Partial replicate is fine. RTR/RRT/TRR is a very good design.


It is not even a bad design. Partial replicates, i.e., TRR|RTR|RRT and TRR|RTR, are lousy designs. Reasons:
  • If we evaluate a study with the FDA’s mixed-effects model for ABE2, it might happen that the optimizer fails to converge. This is independent of the software. Search the forum for data sets where both SAS and Phoenix/WinNonlin failed. Then we have a study without a result. Great.
    Remedy in SAS lingo: Specify the covariance structure with TYPE=FA0(1) – instead of TYPE=FA0(2) as recommended by the FDA. State it in the protocol and initiate a controlled correspondence with the FDA or you might end up with an R-t-R (Refuse-to-Receive).
  • RSABE/ABEL was developed entirely based on simulations. The fundamental publications explored full replicates only. Simulations of partial replicate designs with heteroscedasticity (CVwT ≠ CVwR) are very tricky.
  • Sample size estimation. If we have data of a pilot study performed in a full replicate design, we can take different CVwT and CVwR into account. In many cases CVwT < CVwR. Then we get an incentive (i.e., a smaller sample size of the pivotal study); see the sketch after this list. In partial replicates we have to assume CVwT = CVwR and go full throttle.
  • Study cost ~ fixed cost + number of treatments × (cost / treatment). Furthermore, the sample size of 4-period replicates is roughly ⅔ of that of 3-period replicates. Hence, we will also save on the fixed costs (fewer pre-/post-study exams).
    From the economic perspective the 4-period full replicate is always the winner. From the ethical perspective I leave it to you to decide whether it is better to torture fewer subjects more often or the other way ’round.

  • The so-called “extra-reference design” TRR|RTR is biased in the presence of period effects and should never ever be used.
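To illustrate the sample-size incentive mentioned in the list above, a minimal sketch with PowerTOST (CVwT 35%, CVwR 45%, GMR 0.90, and 80% target power are assumed example values; EMA's ABEL):
    library(PowerTOST)
    # Pilot in a full replicate gave (assumed) CVwT 35% < CVwR 45%:
    sampleN.scABEL(CV = c(0.35, 0.45), theta0 = 0.90, targetpower = 0.80,
                   design = "2x2x4", regulator = "EMA", details = FALSE)
    # Partial replicate: only CVwR is estimable, hence CVwT = CVwR has to be assumed:
    sampleN.scABEL(CV = 0.45, theta0 = 0.90, targetpower = 0.80,
                   design = "2x3x3", regulator = "EMA", details = FALSE)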

❝ ❝ Note that there are proponents of RTR/TRT design types on this forum, …


Correct. :-D But not only here.

❝ ❝ … although I believe there isn't explicit, solid regulatory support for that type.


Mentioned in the EMA’s Q&A document. Requires that at least twelve eligible subjects are in sequence RTR – a condition which is always fulfilled in practice (see this post).
Backstory: In my presentations I argued against partial replicates and recommended the TRT|RTR instead, if only three periods are preferred (dropouts, limited volume of blood samples). Then the Ukrainian agency asked the EMA’s PKWP for a clarification. The omniscient oracle has spoken.

❝ ❝ Fitting RTR/RRT/TRR for the FDA may be tricky, since the stats model (if I get it right) doesn't work well in SAS's world.


Not only in SAS.

❝ ❝ It has to do with the lack of an intra-subject variance component for Test.


Correct. The model is over-specified (since T is not repeatedly administered). Even if the optimizer converges, the estimate of CVwT is plain nonsense.

❝ 4. What is the SAS code for randomization of full and partial replicates?


Please do your homework first. Hints: SAS code for the FDA’s RSABE and ABE is given in the progesterone guidance, and for the EMA’s ABEL in the Q&A document. For Health Canada’s ABEL you have to use a mixed-effects model.


  1. R-code to assess the impact of dropouts for various designs. Give the dropout-rate in percent.
    library(PowerTOST)
    power.final <- function(CV, theta0, target, design, scaling=TRUE,
                            regulator, dropout.rate) {
      if (design %in% c("parallel", "2x2x2r")) stop("design not supported.")
      if (scaling) { # RSABE or ABEL
        if (regulator == "FDA") {
          res <- sampleN.RSABE(CV=CV, design=design, theta0=theta0,
                               targetpower=target, print=FALSE, details=FALSE)
        } else { # EMA or HC
          res <- sampleN.scABEL(CV=CV, design=design, theta0=theta0,
                                targetpower=target, regulator=regulator,
                                print=FALSE, details=FALSE)
        }
      } else {       # ABE
        res <- sampleN.TOST(CV=CV, design=design, theta0=theta0,
                            targetpower=target, print=FALSE, details=FALSE)
      }
      # the last character of the design string is the number of periods;
      # the dropout-rate is applied once per washout interval (periods - 1)
      washouts <- as.integer(substr(design, nchar(design), nchar(design)))-1
      eligible <- res[["Sample size"]]
      for (j in 1:washouts) {
        eligible <- eligible*(1-dropout.rate/100)
      }
      eligible <- round(eligible)
      if (scaling) {
        if (regulator == "FDA") {
          pwr <- suppressMessages(power.RSABE(CV=CV, design=design,
                                              theta0=theta0, n=eligible))
        } else {
          pwr <- suppressMessages(power.scABEL(CV=CV, design=design,
                                              theta0=theta0, n=eligible,
                                              regulator=regulator))
        }
      } else {
        pwr <- suppressMessages(power.TOST(CV=CV, design=design,
                                           theta0=theta0, n=eligible))
      }
      cat("\nDesign", design,
          "\nStudy started with", res[["Sample size"]], "subjects (achieved power",
          sprintf("%.1f%%).", 100*res[["Achieved power"]]),
          "\nStudy finished with", eligible, "eligible subjects (power",
          sprintf("%.1f%%).", 100*pwr), "\n\n")
    }

    Of course we will see more dropouts in 4-period designs but is this larger loss in power compared to the 3-period designs relevant?
    Examples for CV 0.4, GMR 0.9, EMA’s ABEL, dropout-rate 5%. Call the function with
    power.final(CV=0.4, theta0=0.9, target=0.8, design=design, scaling=TRUE,
                regulator="EMA", dropout.rate=5)

    and change design to one of "2x2x4", "2x2x3", "2x3x3".
    Gives:
    Design 2x2x4
    Study started with 30 subjects (achieved power 80.7%).
    Study finished with 26 eligible subjects (power 75.9%).
    Design 2x2x3
    Study started with 46 subjects (achieved power 80.7%).
    Study finished with 42 eligible subjects (power 77.7%).
    Design 2x3x3
    Study started with 42 subjects (achieved power 80.1%).
    Study finished with 38 eligible subjects (power 76.3%).


    Coming back to one of your questions:

    ❝ 2. Can we go for a 2-way crossover even when the ISCV is > 30%?


    Sometimes we are not allowed to expand the acceptance range (i.e., if we are not able to provide a clinical justification). Remember that for the EMA scaling would be acceptable only for Cmax and for Health Canada only for AUC… Let’s see (as above, but additionally design="2x2x2", scaling=FALSE):
    Design 2x2x4
    Study started with 68 subjects (achieved power 80.7%).
    Study finished with 58 eligible subjects (power 75.0%).
    Design 2x2x3
    Study started with 100 subjects (achieved power 80.0%).
    Study finished with 90 eligible subjects (power 76.2%).
    Design 2x3x3
    Study started with 102 subjects (achieved power 80.7%).
    Study finished with 92 eligible subjects (power 77.0%).
    Design 2x2x2
    Study started with 134 subjects (achieved power 80.1%).
    Study finished with 127 eligible subjects (power 78.2%).


  2. For the FDA you can use RSABE for any PK metric. Since generally the variability of Cmax is larger than that of AUC, you might end up employing RSABE for Cmax (swR ≥ 0.294) and ABE for AUC (swR < 0.294).
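    A quick way to check on which side of the cut-off a given CVwR falls (a minimal sketch; the CV values are arbitrary examples):
    library(PowerTOST)
    # FDA's cut-off for RSABE: swR >= 0.294 (corresponds to a CVwR of ~30%);
    # CV2se() converts CVwR to swR = sqrt(log(CVwR^2 + 1))
    CVwR <- c(0.25, 0.35, 0.45)
    swR  <- CV2se(CVwR)
    data.frame(CVwR = CVwR, swR = round(swR, 4), RSABE = swR >= 0.294)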

Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮