balakotu
India, 2022-03-29 13:24
Posting: # 22883
 Adjusting weight to reflect group differences in Model [General Statistics]

Dear All,

Kindly look into the request below and give me your valuable suggestions.

How can a weight factor be included in the statistical BE evaluation to account for differences in group size in a single-dose (equal dose in all healthy subjects) parallel-design study, conducted in different groups on different dates at the same clinical site, with a large difference in the number of subjects per group?

Regards
Kotu.


Edit: Category changed; see also this post #1. [Helmut]
dshah
India/United Kingdom, 2022-03-29 20:40
@ balakotu
Posting: # 22884
 Adjusting weight to reflect group differences in Model

Dear Balakotu!
As per the EMA: In parallel design studies, the treatment groups should be comparable in all known variables that may affect the pharmacokinetics of the active substance (e.g. age, body weight, sex, ethnic origin, smoking status, extensive/poor metabolic status). This is an essential pre-requisite to give validity to the results from such studies.
In general, it is recommended to have balance between the treatment arms.
Regards,
Divyen

Helmut
Vienna, Austria, 2022-03-30 13:07
@ dshah
Posting: # 22886
 Adjusting weight to reflect group differences in Model

Hi Divyen & Kotu,

❝ As per EMA […]



@Divyen: You are absolutely right when it comes to designing a study.

@Kotu: Were you interested in what to do when Murphy’s law hit and it turned out that the groups of eligible subjects differed to a great extent? If yes:
  • First of all, make sure that you use not the simple t-test but the Welch-test. The former is sensitive (specifically: anticonservative) to unequal variances and/or unequal group sizes. Hence, this is important even for equal group sizes, since the variances may still differ.
    • FDA Section VI.B.d:
      ‘[…] equal variances should not be assumed.’
    • EMA Section 4.1.8:
      The statistical analysis should take into account sources of variation that can be reasonably assumed to have an effect on the response variable.
    It is not by chance that the Welch-test is the default in SAS and R. For the setup in Phoenix/WinNonlin see the online User’s Guide. A minimal R sketch follows after this list.
  • You could try to normalize the PK-metrics by body weight, i.e., assess AUC/BW and Cmax/BW, which seemingly is in line with the EMA’s GL:
    The precise model to be used for the analysis should be pre-specified in the protocol.
    If this was not the case, essentially you have two options.
    1. If you did not evaluate the study already, amend the SAP accordingly and cross fingers.
    2. Otherwise, you could only present it as a sensitivity analysis. If the original analysis fails and the sensitivity analysis of BW-adjusted metrics passes, I’m afraid that the cards are stacked against you.
      Regulatory acceptance not guaranteed. At the 2nd GBHI conference (September 2016, Rockville) there was a discussion about adding body weight as a covariate in crossover studies in patients because it may change with time. Response of regulators: No (though apparently the FDA was more open to the idea).
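
Not part of the original post: a minimal R sketch, assuming a hypothetical data frame pk with columns trt, AUC, and BW (all names and numbers invented), of the Welch-test on the log-transformed metric in a parallel design and, as a sensitivity analysis, the same test on the BW-normalized metric.

set.seed(123456)                        # hypothetical example data
pk <- data.frame(trt = factor(rep(c("T", "R"), c(20, 32)), levels = c("T", "R")),
                 AUC = exp(rnorm(52, mean = log(100), sd = 0.25)),
                 BW  = rnorm(52, mean = 75, sd = 12))
# Welch-test (var.equal = FALSE is the default of t.test()) on log(AUC);
# with the level order T, R the CI refers to log(T) - log(R)
res <- t.test(log(AUC) ~ trt, data = pk, conf.level = 0.90)
exp(res$estimate[1] - res$estimate[2])  # point estimate of the GMR (T/R)
exp(res$conf.int)                       # 90% CI of the T/R ratio
# sensitivity analysis with the body-weight normalized metric
res.bw <- t.test(log(AUC / BW) ~ trt, data = pk, conf.level = 0.90)
exp(res.bw$conf.int)                    # 90% CI of the BW-normalized T/R ratio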

❝ In general, it is recommended to have balance between the treatment arms.


Correct – even in a crossover. It is a common misconception that period effects cancel out because T and R are affected to the same degree. That is not correct for unbalanced sequences. However, unless the degree of imbalance is extreme, the bias is small.


Edit: The published Two-Stage Design methods are also correct in the strict sense for balanced sequences only. At the end is an R-script with which you can try to counteract imbalance by intentionally allocating subjects in the second stage in such a way that the sequences in the pooled analysis are as balanced as possible.

Example: Potvin ‘Method B’ (default), 24 subjects dosed in the first stage, 12 eligible in sequence RT and 10 in sequence TR (drop-out rate ≈8.3%), CV 25%, exact sample size re-estimation (default) taking the stage-term in the pooled analysis into account.

TSD(n1 = 24, n1.1 = 12, n1.2 = 10, CV = 25)

 TSD-method: 1 (α = 0.0294, GMR = 95%, power = 80%)
 Sample size re-estimation: exact
 ──────────────────────────────────────────────────
 Stage 1
 ──────────────────────────────────────────────────
 Randomized/dosed subjects             : 24
 Eligible subjects (drop-outs)         : 22 (2)
 Eligible subjects in sequences RT|TR  : 12|10
 Allocation ratio RT/TR                : 1:0.8333
 Drop-out rate                         : 8.333%
 ──────────────────────────────────────────────────
 Interim analysis
 ──────────────────────────────────────────────────
 Relevant PK metrics’ maximum CV       : 25%
 Estimated total sample size           : 34
 ──────────────────────────────────────────────────
 Stage 2
 ──────────────────────────────────────────────────
 Preliminary sample size               : 12
 Expected drop-out rate                : 8.333%
 Final sample size (adj. for drop-outs): 14
 Randomized subjects in sequences RT|TR: 6|8
 Expected eligible subj. in seq. RT|TR : 6|7
 Allocation ratio RT/TR                : 1:1.167
 ──────────────────────────────────────────────────
 Pooled data set
 ──────────────────────────────────────────────────
 Expected eligible subjects            : 35
 Expected eligible subj. in seq. RT|TR : 18|17
 Allocation ratio RT/TR                : 1:0.9444 (imbalanced)


Estimated n2 12. Assuming that we will see the same drop-out rate as in the first stage, adjusted n2 14. Instead of dosing seven subjects per sequence, we dose six in sequence RT and eight in sequence TR. If the drop-out rate is realized, we get an allocation ratio of 1:0.9444, which is not that bad.


library(Power2Stage)
TSD <- function(method1 = 1, method2 = 1, n1, n1.1, n1.2, CV, do.2) {
  up2even <- function(n) {    # round up to the next even integer
    return(as.integer(2 * (n %/% 2 + as.logical(n %% 2))))
  }
  nadj <- function(n, do.r) { # adjust for dropout-rate
    return(as.integer(up2even(n / (1 - do.r))))
  }
  n1.e   <- n1.1 + n1.2           # stage 1: eligible subjects
  n1.ar  <- n1.2 / n1.1           # stage 1: sequence allocation ratio
  do.r   <- abs((n1.e - n1) / n1) # stage 1: drop-out rate
  if(!missing(do.2)) do.2 <- do.2 / 100 # anticipated drop-out rate stage 2
  if(missing(do.2)) do.2  <- do.r       # apply 1st if not given
  CV     <- CV / 100
  if (method1 == 1) {adj <- 0.0294; GMR <- 0.95; pwr <- 0.8}
  if (method1 == 2) {adj <- 0.0280; GMR <- 0.90; pwr <- 0.8}
  if (method1 == 3) {adj <- 0.0284; GMR <- 0.95; pwr <- 0.9}
  if (method1 == 4) {adj <- 0.0274; GMR <- 0.95; pwr <- 0.9}
  if (method1 == 5) {adj <- 0.0269; GMR <- 0.90; pwr <- 0.9}
  if (method2 == 1) me <- "exact"
  if (method2 == 2) me <- "nct"
  if (method2 == 3) me <- "shifted"
  n2.p   <- sampleN2.TOST(alpha = adj, CV = CV, n1 = n1.e, theta0 = GMR,
                          targetpower = pwr, method = me)[["Sample size"]]
  nt     <- n1.e + n2.p             # preliminary total sample size
  n2.1   <- nadj(nt/2-n1.1, do.2)   # adjust for drop-outs
  n2.2   <- nadj(nt/2-n1.2, do.2)   # adjust for drop-outs
  n2     <- n2.1+n2.2               # dosed in stage 2
  n2.1e  <- round(n2.1*(1-do.2), 0) # stage 2: expected elig. subjects in seq. 1
  n2.2e  <- round(n2.2*(1-do.2), 0) # stage 2: expected elig. subjects in seq. 2
  n2.e   <- n2.1e+n2.2e             # stage 2: expected elig. subjects
  n2.ar  <- n2.2e/n2.1e             # stage 2: sequence allocation ratio
  ar     <- (n1.2+n2.2e)/(n1.1+n2.1e) # pooled data’s allocation ratio
  bal    <- ifelse(ar == 1, "(balanced)", "(imbalanced)") # flag pooled (im)balance
  sep    <- paste(paste0(rep("\u2500", 50), collapse=""), "\n")
  if(method2 > 1) me <- c(me, "t-distribution")
  cat("\n TSD-method:", method1,
  paste0("(\u03b1 = ", adj, ", GMR = ", 100*GMR, "%, power = ", 100*pwr, "%)\n"),
  "Sample size re-estimation:", me, "\n", sep,
  "Stage 1\n", sep,
  "Randomized/dosed subjects             :", n1, "\n",
  "Eligible subjects (drop-outs)         :", n1.e, paste0("(", n1-n1.e,")"), "\n",
  "Eligible subjects in sequences RT|TR  :", paste0(n1.1, "|", n1.2), "\n",
  "Allocation ratio RT/TR                :", paste0("1:", signif(n1.ar, 4)), "\n",
  "Drop-out rate                         :", paste0(signif(100*do.r, 4), "%\n"), sep,
  "Interim analysis\n", sep,
  "Relevant PK metrics’ maximum CV       :", paste0(signif(100*CV, 4), "%\n"),
  "Estimated total sample size           :", as.numeric(nt), "\n", sep,
  "Stage 2\n", sep,
  "Preliminary sample size               :", n2.p, "\n",
  "Expected drop-out rate                :", paste0(signif(100*do.2, 4),"%\n"),
  "Final sample size (adj. for drop-outs):", n2, "\n",
  "Randomized subjects in sequences RT|TR:", paste0(n2.1, "|", n2.2), "\n",
  "Expected eligible subj. in seq. RT|TR :", paste0(n2.1e, "|", n2.2e), "\n",
  "Allocation ratio RT/TR                :", paste0("1:", signif(n2.ar, 4)), "\n", sep,
  "Pooled data set\n", sep,
  "Expected eligible subjects            :", n1.e+n2.e, "\n",
  "Expected eligible subj. in seq. RT|TR :", paste0(n1.1+n2.1e, "|", n1.2+n2.2e), "\n",
  "Allocation ratio RT/TR                :", paste0("1:", signif(ar, 4)), bal, "\n\n")
}

method1 <- 1 # select from TSD-Methods
###################################################
#                                     GMR% power% #
# 1 Potvin et al. (2008) Methods B/C:  95    80   #
# 2 Montague et al. (2011) Method D:   90    80   #
# 3 Fuglsang (2013) Method B:          95    90   #
# 4 Fuglsang (2013) Method C1/D1:      95    90   #
# 5 Fuglsang (2013) Method C2/D2:      90    90   #
###################################################

method2 <- 1 # select from power-estimation Methods
#######################################
# 3 shifted t-distribution:    good   #
# 2 noncentral t-distribution: better #
# 1 exact (Owen’s Q-function): best   #
#######################################
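
Calling the function with the numbers of the example reproduces the output shown above (method1 and method2 default to 1 anyway):

TSD(n1 = 24, n1.1 = 12, n1.2 = 10, CV = 25, method1 = method1, method2 = method2)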


Helmut Schütz

ElMaestro
Denmark, 2022-03-30 13:19
@ Helmut
Posting: # 22887
 Adjusting weight to reflect group differences in Model

Hi all,

may I add that while adjustment for BW seems a no-go in 'ordinary' BE disciplines, it is quite the opposite for biosimilars, where you more often than not (said solely on the basis of the biosimilars I am working on) add BW as a covariate in the model (a minimal sketch below). I don't think it was entirely clear what type of product was behind the question in this case.
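
Not from the original post and not any regulator's prescribed model – a minimal sketch, assuming a parallel-design study and a hypothetical data frame pk with columns trt, AUC, and BW (all names and numbers invented), of how BW could enter the analysis as a covariate:

set.seed(42)                            # hypothetical example data
pk <- data.frame(trt = factor(rep(c("R", "T"), c(30, 28)), levels = c("R", "T")),
                 BW  = rnorm(58, mean = 75, sd = 12))
pk$AUC <- exp(log(120) + 0.01 * (pk$BW - 75) + rnorm(58, sd = 0.25))
# ANCOVA on the log-transformed metric with BW as covariate; with R as the
# reference level the coefficient trtT estimates log(T) - log(R)
m <- lm(log(AUC) ~ trt + BW, data = pk)
exp(coef(m)["trtT"])                    # BW-adjusted GMR (T/R)
exp(confint(m, "trtT", level = 0.90))   # 90% CI of the T/R ratio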

At any rate, regardless of whether BW is taken into account one way or another, all this should be established at the time of protocol drafting. If the idea to adjust by BW was a result of a failing BE study then, naturally, the prospects may not be good.

Pass or fail!
ElMaestro