MGR
India, 2022-03-21 08:30

Posting: # 22850

 Kel calculation as per ANVISA guideline [Regulatives / Guidelines]

Hi Everyone,

I have performed the pharmacokinetic analysis in Phoenix WinNonlin 8.3 with the data published in ANVISA’s Manual for Good Bioavailability and Bioequivalence Practices (Vol. 1, Module 3: Statistical step). However, the calculated kel (elimination rate constant) values did not match the values presented in the guideline (results are reported on page 12, Tables 1.3 and 1.4).

[image]

For the kel calculation I used the ‘Best Fit’ option in WinNonlin.

The raw data are available on page 11 (Tables 1.1 and 1.2).

[image]

Could anybody help me with the calculation (formula / logic) of the kel values as per the ANVISA guideline?


Edit: I deleted a later post – you can edit your posts for 24 hours; see also FAQ #3. [Helmut]

Regards,
MGR
Helmut
Vienna, Austria, 2022-03-21 12:12

@ MGR
Posting: # 22853

 Data, please

Hi MGR,

can you mail the data to me? I’m not in the mood to type them in.
I will upload the file to the server for others to check.

Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
Helmut
Vienna, Austria, 2022-03-21 19:38

@ MGR
Posting: # 22857

 ANVISA’s strange examples

Hi MGR,

THX for providing the data! If others are interested, you may download them (in CSV format).

❝ I have done the pharmacokinetic analysis using Phoenix WinNonlin 8.3…

❝ Could anybody help me how to calculate/formula/logic of Kel Value calculations as per ANVISA Regulatory?


No logic, only bad practices.

First of all, from the subjects with relatively high concentrations it is clear that the drug follows a two-compartment model. In subjects with low concentrations only the distribution phase is visible, which also explains the reported half-lives ranging from 0.57 to 9.55 hours.
The default automatic algorithm in Phoenix WinNonlin (and most other software I know of) starts with the last three concentrations and maximizes \(\small{R_\textrm{adj}^2}\) until the improvement gained by adding a data point is ≤ 0.0001 (see the sketch after the list below). So far, so good.
  • It works for two-compartment models reasonably well in most cases:

    [image]

  • However, it might fail in others:

    [image]

    If the distribution and elimination phases are not well separated, the algo is ‘greedy’, i.e., reaches too far. Then the estimated elimination is contaminated by the distribution and looks faster than it really is.
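
A minimal sketch of this rule in R (simplified, not Phoenix’s implementation; est.lambda.z is a hypothetical helper and assumes that t and C hold only the positive concentrations after tmax):

est.lambda.z <- function(t, C, limit = 0.0001) {
  # start with the last three points and add earlier ones as long as
  # R2adj improves by more than 'limit'; lambda.z = negative slope
  r2.best  <- -Inf
  fit.best <- NULL
  for (n in 3:length(C)) {
    m  <- lm(log(tail(C, n)) ~ tail(t, n))
    r2 <- summary(m)$adj.r.squared
    if (r2 - r2.best <= limit) break
    r2.best  <- r2
    fit.best <- m
  }
  -coef(fit.best)[[2]]
}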
It was kind of a scavenger hunt to unveil what ANVISA did; by trial and error I was successful in the end. I don’t know which software was used, but for many years tmax|Cmax has not been included by the algo because absorption is not complete at that point. Another example:
  • Automatic:

    [image]

    Fine with me. Maybe I would have used only the last three values.

  • ANVISA:

    [image]

    Why the heck?
Since the automatic algo might fail (sometimes with two-compartment models; regularly with drugs showing enterohepatic recycling, controlled-release products with flat profiles, and multiphasic-release products), visual inspection of the fits is mandatory.

   The selection of the most suitable time interval cannot be left to a programmed algorithm based on mathematical criteria, but necessitates scientific judgment by both the clinical pharmacokineticist and the person who determined the concentrations and knows about their reliability.1
   It should be emphasised that the TTT method has been introduced in this paper to provide a reasonable tool to support visual curve inspection for reliably identifying the mono-exponential terminal phase. Moreover, the TTT method should not be utilised without visual inspection of the respective concentra­tion-time course. Thus, before using this new approach the monophasic shape post the peak of the curve has to be checked visually by means of a semilogarithmic diagram.2
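
For illustration, a minimal sketch of the TTT rule in R (ttt.lambda.z is a hypothetical helper, not code from ref. 2; it assumes the visual check of a monophasic decline after the peak was already done):

ttt.lambda.z <- function(t, C) {
  # fit the terminal slope only to points sampled at or after 2 * tmax
  tmax <- t[which.max(C)]
  sel  <- t >= 2 * tmax & C > 0
  -coef(lm(log(C[sel]) ~ t[sel]))[[2]] # lambda.z
}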


But what do we see in ANVISA’s examples? In many cases tmax|Cmax was included. Outdated software?3 In some cases a quite reasonable automatic fit was changed for the worse. Very bad: the fits were corrected about three times as often for R as for T. If we did that, an assessor might suspect cherry-picking, i.e., intentionally changing the T/R-ratio of AUC0–∞: since \(\small{AUC_{0-\infty}=AUC_{0-t}+\widehat{C}_\textrm{t}/\widehat{\lambda}_\textrm{z}}\), any change of \(\small{\widehat{\lambda}_\textrm{z}}\) propagates directly into it.

My procedure (of course, outlined in the protocol):
  • Perform NCA in a blinded manner (in Phoenix WinNonlin sort by period and subject). Use an automatic algo.
  • Inspect all fits and correct the start/end-times if deemed necessary in a consistent manner.
  • In newer versions of Phoenix WinNonlin lock the results.
  • Join the randomization (i.e., unblind the data) and continue as usual.


  1. Hauschke D, Steinijans VW, Pigeot I. Bioequivalence Studies in Drug Development: Methods and Applications. New York: Wiley; 2007. p. 20–3.
  2. Scheerans C, Derendorf H, Kloft C. Proposal for a Standardised Identification of the Mono-Exponential Terminal Phase for Orally Administered Drugs. Biopharm Drug Dispos. 2008; 29(3): 145–57. doi:10.1002/bdd.596.
  3. Inclusion of tmax|Cmax was implemented in ‘classical’ WinNonlin until v5.3 of 2009. ANVISA’s manual is from 2002. Maybe that’s the reason.

mittyri
Russia, 2022-03-22 12:31

@ Helmut
Posting: # 22858

 R²adj improvement limit

Hi Helmut,

❝ The default automatic algorithm in Phoenix WinNonlin (and most other software I know of) starts with the last three concentrations and maximizes \(\small{R_\textrm{adj}^2}\) until the improvement gained by adding a data point is ≤ 0.0001.


I have always wondered: why is the limit ≤ 0.0001? Not simply < 0? Or maybe ≤ 0.01?

Kind regards,
Mittyri
Helmut
Vienna, Austria, 2022-03-22 15:04

@ mittyri
Posting: # 22859

 R²adj improvement limit

Hi mittyri,

❝ I have always wondered: why is the limit ≤ 0.0001?


You know that I’m not the right addressee to answer this question. :-D
The simulation script is at the end.

limit ≤ 1e-04
    lambda.z           bias (%)          lz.start          lz.n
 Min.   :0.003178   Min.   :-97.249   Min.   : 3.25   Min.   : 3.000
 1st Qu.:0.107161   1st Qu.: -7.239   1st Qu.: 5.25   1st Qu.: 3.000
 Median :0.118384   Median :  2.475   Median :14.50   Median : 4.000
 3rd Qu.:0.136400   3rd Qu.: 18.070   3rd Qu.:17.25   3rd Qu.:10.000
 Max.   :0.266735   Max.   :130.890   Max.   :17.25   Max.   :13.000
 NA's   :11         NA's   :11        NA's   :11      NA's   :11

limit ≤ 0.01
    lambda.z           bias (%)          lz.start          lz.n
 Min.   :0.006491   Min.   :-94.381   Min.   : 3.75   Min.   : 3.000
 1st Qu.:0.106485   1st Qu.: -7.825   1st Qu.:10.25   1st Qu.: 3.000
 Median :0.119452   Median :  3.400   Median :14.50   Median : 4.000
 3rd Qu.:0.136873   3rd Qu.: 18.479   3rd Qu.:17.25   3rd Qu.: 6.000
 Max.   :0.278439   Max.   :141.022   Max.   :17.25   Max.   :12.000
 NA's   :8          NA's   :8         NA's   :8       NA's   :8

limit ≤ 0
    lambda.z           bias (%)          lz.start          lz.n
 Min.   :0.001576   Min.   :-98.636   Min.   : 3.25   Min.   : 3.000
 1st Qu.:0.107295   1st Qu.: -7.124   1st Qu.: 5.25   1st Qu.: 3.000
 Median :0.118507   Median :  2.582   Median :14.50   Median : 4.000
 3rd Qu.:0.136410   3rd Qu.: 18.078   3rd Qu.:17.25   3rd Qu.:10.000
 Max.   :0.264027   Max.   :128.546   Max.   :17.25   Max.   :13.000
 NA's   :10         NA's   :10        NA's   :10      NA's   :10


❝ Not simply < 0? Or maybe ≤ 0.01?


Look at the bias of \(\small{\widehat{\lambda}_\textrm{z}}\). Given that, I don’t know.
Perhaps the outcome would be different for a two-compartment model.

Since you are a neRd, check out the distribution of aggr$lambda.z. Very strange.
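
For example, after running the script below (the number of breaks is arbitrary):

hist(aggr$lambda.z, breaks = 100, main = "",
     xlab = expression(hat(lambda)[z]))  # distribution of estimates
abline(v = log(2) / t12.e, col = "red")  # true value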


sim.el <- function(D, f, V, t12.a, t12.e, tlag, t,
                   CV0, limit = 0.0001) {
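  # D: dose, f: fraction absorbed, V: volume of distribution,
  # t12.a / t12.e: absorption / elimination half lives, tlag: lag time,
  # t: sampling schedule, CV0: maximum CV (at low concentrations),
  # limit: stopping rule for the R2adj improvement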
  one.comp <- function(f, D, V, k01, k10, tlag, t) {
    # one-compartment model, first order absorption
    # and elimination; optional lag time

    if (!isTRUE(all.equal(k01, k10))) { # common: k01 != k10
      C    <- f * D * k01 / (V * (k01 - k10)) *
              (exp(-k10 * (t - tlag)) - exp(-k01 * (t - tlag)))
      tmax <- log(k01 / k10) / (k01 - k10) + tlag
      Cmax <- f * D * k01 / (V * (k01 - k10)) *
              (exp(-k10 * tmax) - exp(-k01 * tmax))
    } else {                            # flip-flop
      k    <- k10
      C    <- f * D / V * k * (t - tlag) * exp(-k * (t - tlag))
      tmax <- 1 / k
      Cmax <- f * D / V * k * tmax * exp(-k * tmax)
    }
    C[C <= 0] <- 0                     # correct negatives due to lag-time
    res <- list(C = C, Cmax = Cmax, tmax = tmax)
    return(res)
  }
  k01    <- log(2) / t12.a   # absorption rate constant
  k10    <- log(2) / t12.e   # elimination rate constant
  C0     <- one.comp(f, D, V, k01, k10, tlag, t)$C # model without error
  CV     <- CV0 - C0 * 0.005 # noise increases with decreasing C
  varlog <- log(CV^2 + 1)
  C      <- numeric()
  for (j in 1:length(C0)) {
    C[j] <- rlnorm(1, meanlog = log(C0[j]) - 0.5 * varlog[j],
                   sdlog = sqrt(varlog[j]))
  }
  data   <- data.frame(t = t, C = C)
  data   <- data[complete.cases(data), ]    # discard NAs
  data   <- data[data$t > t[C == max(C)], ] # discard tmax and earlier
  lz.end <- tail(data$t, 1)
  tmp    <- tail(data, 3)
  r2     <- a <- b <- numeric()
  m      <- lm(log(C) ~ t, data = tmp)
  a[1]   <- coef(m)[[1]]
  b[1]   <- coef(m)[[2]]
  r2[1]  <- summary(m)$adj.r.squared
  k      <- 1
  for (j in 4:nrow(data)) {
    k         <- k + 1
    tmp       <- tail(data, j)
    m         <- lm(log(C) ~ t, data = tmp)
    a[k]      <- coef(m)[[1]]
    b[k]      <- coef(m)[[2]]
    r2[k]     <- summary(m)$adj.r.squared
    if (r2[k] < r2[k-1] | abs(r2[k] - r2[k-1]) <= limit) break
  }
  loc <- which(r2 == max(r2))
  if (b[loc] >= 0) { # positive slope not meaningful
    intcpt <- lambda.z <- lz.n <- lz.start <- lz.end <- NA
  } else {
    intcpt   <- a[loc]
    lambda.z <- -b[loc]
    lz.start <- tmp$t[2]
    lz.n     <- nrow(tmp) - 1
  }
  res <- data.frame(limit = limit, intcpt = intcpt, lambda.z = lambda.z,
                    lz.start = lz.start, lz.end = lz.end, lz.n = lz.n)
  return(res)
}
sum.simple <- function(x, digits = 4) {
  # nonparametric summary:
  # remove the arithmetic mean whilst keeping any NAs

  res <- summary(x)
  if (nrow(res) == 6) {
    res <- res[c(1:3, 5:6), ]
  } else {
    res <- res[c(1:3, 5:7), ]
  }
  return(res)
}
D      <- 200L   # dose
f      <- 2/3    # fraction absorbed (BA)
V      <- 3      # volume of distribution
t12.a  <- 0.75   # absorption half life
t12.e  <- 6      # elimination half life
tlag   <- 0.5    # lag time
t      <- c(0, 0.75, 1.25, 2, 2.5, 3.25, 3.75, 4.5, 5.25, 6.25,
            7.25, 8.75, 10.25, 12.25, 14.5, 17.25, 20.25, 24)
CV0    <- 0.20   # maximum CV (at low concentrations)
limit  <- 0.0001 # stopping rule for R2adj
nsims  <- 1e5L   # number of simulations
aggr   <- data.frame()
pb     <- txtProgressBar(0, 1, 0, char = "\u2588", width = NA, style = 3)
for (j in 1:nsims) {
  aggr <- rbind(aggr, sim.el(D, f, V, t12.a, t12.e, tlag, t, CV0, limit))
  setTxtProgressBar(pb, j / nsims)
}
close(pb)
aggr$bias <- 100 * (aggr$lambda.z - log(2) / t12.e) / (log(2) / t12.e)
names(aggr)[7] <- "bias (%)"
cat("limit \u2264", limit, "\n"); sum.simple(aggr[, c(3, 7, 4, 6)])

