jag009
NJ,
2017-03-20 05:42
Posting: # 17166

 Highly Variable Drug BE Justification [Regulatives / Guidelines]

Hi all,

Just want to know what your experience is on this matter.

The FDA stated that "Special Considerations: Applicants may consider using a reference-scaled average bioequivalence approach for x drug. If using this approach, the applicant should provide evidence of high variability (i.e., within-subject variability ≥30%) in bioequivalence parameters. Applicants who would like to use this approach are encouraged to submit a protocol for review by the Division of Bioequivalence in the Office of Generic Drugs."

The above implies that one can conduct replicate studies (RSABE) if he/she has supportive data (ISCV ≥ 30%). What if one conducted pilot studies and found that the ISCV is around 28–29%? My view still would be to proceed with the RSABE approach, since it is a mixed approach which allows both RSABE (if the reference's sWR ≥ 0.294) and ABE analysis (if sWR < 0.294).
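For reference, the conversion between CV and the within-subject SD on the log scale behind that 0.294 cutoff is simple. A minimal base-R sketch (PowerTOST provides the same conversions as CV2se()/se2CV(); the function definitions here are just their textbook formulas):

```r
# conversions between a CV and the corresponding SD on the log scale
# (same formulas as PowerTOST's CV2se() / se2CV())
CV2se <- function(CV) sqrt(log(1 + CV^2))
se2CV <- function(se) sqrt(exp(se^2) - 1)

CV2se(0.30)   # ~0.2936: CVwR 30% sits just below the 0.294 switch
se2CV(0.294)  # ~0.3005: the cutoff corresponds to CVwR ~30.05%
```

So an observed CVwR of exactly 30% would still be analyzed by unscaled ABE under the FDA's mixed procedure.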

John
ElMaestro
Denmark,
2017-03-20 07:55
@ jag009
Posting: # 17167

 Highly Variable Drug BE Justification

Hi jag009,

❝ What if one conducted pilot studies and found that the ISCV is around 28–29%? My view still would be to proceed with the RSABE approach since it is a mixed approach which allows both RSABE (if sWR ≥ 0.294) and ABE analysis (if sWR < 0.294)


That's valid, but if you are not really sure whether you are just "borderline", then the additional overhead associated with the replicated design may not be worth it. You find an intra-subject CVwR of 31%, and you scale the limits a wee bit, etc. But the price you paid for this moderate scaling option could be much higher than what you'd pay for a conventional 2×2×2 BE trial with a few extra volunteers. Would be interesting to discuss an objective function here :-)

This isn't the answer, just a view.

Pass or fail!
ElMaestro
Helmut
Vienna, Austria,
2017-03-20 14:12
@ ElMaestro
Posting: # 17169

 Study costs: Replicate vs. 2×2×2

Hi ElMaestro & John,

❝ ❝ My view still would be to proceed with the RSABE approach since it is a mixed approach which allows both RSABE (if sWR ≥ 0.294) and ABE analysis (if sWR < 0.294)


❝ That's valid, but if you are not really sure whether you are just "borderline", then the additional overhead associated with the replicated design may not be worth it. You find an intra-subject CVwR of 31%, and you scale the limits a wee bit, etc. But the price you paid for this moderate scaling option could be much higher than what you'd pay for a conventional 2×2×2 BE trial with a few extra volunteers.


I lean towards John’s idea. Say you estimated the 31% from a previous 2×2×2 study in 40 subjects and assume that CVwR=CVwT=CVw. The 95% CI of the CV is 25.1–40.6%. Hence, in the best case (high CVwR) the FDA’s implied BE-limits would be 70.58–141.69%. That’s rather substantial.
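These numbers can be reproduced with a few lines of base R; a sketch of what PowerTOST's CVCL() does (chi-squared CI of the CV), plus the FDA's implied scaled limits exp(±log(1.25)/0.25 · sWR) evaluated at the upper confidence limit:

```r
# 95% CI of a CV estimated with df degrees of freedom (chi-squared based,
# as in PowerTOST's CVCL())
CVCI <- function(CV, df, alpha = 0.05) {
  s2 <- df * log(1 + CV^2) / qchisq(c(1 - alpha/2, alpha/2), df)
  sqrt(exp(s2) - 1)
}
# FDA 'implied' scaled limits at a given CVwR (regulatory constant log(1.25)/0.25)
fda.limits <- function(CVwR) {
  swR <- sqrt(log(1 + CVwR^2))
  exp(c(-1, +1) * log(1.25) / 0.25 * swR)
}
round(100 * CVCI(0.31, df = 40 - 2), 1)        # ~25.1 ~40.6 (2x2x2, n = 40)
round(100 * fda.limits(CVCI(0.31, 38)[2]), 2)  # ~70.6% to ~141.7%
```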

❝ Would be interesting to discuss an objective function here :-)


Some ideas: The number of treatments in a 2×2×4 study is roughly the same as in a 2×2×2 study – if we don’t plan for reference-scaling (which I likely would do in a borderline case).
  • Equal costs
    • Design & logistics,
    • Ethics committee,
    • Medical writing,
    • Consumables (clinics, bioanalytics),
    • Sample shipment if applicable,
    • Biostatistics.
  • Lower costs
    • Screenings & post-study exams,
    • Subjects’ remuneration (though higher for the individuals due to more samples, extended hospitalization).
  • Higher costs
    • Clinics,
    • Analytics (not necessarily: Roughly the same number of samples but might take longer if a batch size exceeds a working day).
I would not be worried about the higher chance of dropouts in the replicate study. The impact on power is low (unless the drug is nasty and dropouts are caused by AEs).
Calculation of costs is complicated (fixed & variable costs, blahblah). In my CRO we had a spreadsheet with 100 rows… Had a quick look (19 samples / period, last sampling 16 hours; 2×2×2 in 42 subjects vs. 2×2×4 in 22). The replicate design would be ~4% cheaper. ;-) If you hope for RSABE (for CVwR 31% 18 subjects instead of 22 for ABE) the reduction in costs would be ~12%.
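As a sanity check on the "roughly the same number of treatments" argument, simple bookkeeping with the sample sizes quoted above (42 for the 2×2×2, 22 for the 2×2×4, 19 samples per period) gives:

```r
# administrations and plasma samples, using the sample sizes quoted above
n          <- c("2x2x2" = 42, "2x2x4" = 22)
periods    <- c(2, 4)
treatments <- n * periods      # 84 vs. 88 single doses administered
samples    <- treatments * 19  # 1596 vs. 1672 plasma samples to analyze
treatments; samples
```

So the replicate study administers only four more doses and draws ~5% more samples in total, while screening, remuneration, and post-study exams apply to roughly half as many subjects.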

Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
VStus
Poland,
2017-03-21 13:49
@ Helmut
Posting: # 17172

 Study costs: Replicate seems to be cheaper

Hi Helmut,

We were able to get more than a 4% discount for a replicate versus the equivalent '2x2'.

Our internal belief is that the replicate is cheaper, but it takes more time... And time is money...

We would rather do a replicate study instead of a large '2x2' with 2 or more groups due to the capacity of the clinic (if the wash-out is not long).

Regards, VStus
Dr_Dan
Germany,
2017-03-21 14:06
@ VStus
Posting: # 17173

 Study costs: Replicate seems to be cheaper

Dear VStus
For your calculation please keep in mind: the more study periods you have the higher the risk of drop outs.

Kind regards and have a nice day
Dr_Dan
Helmut
Vienna, Austria,
2017-03-21 15:50
@ Dr_Dan
Posting: # 17176

 Study costs: Replicate seems to be cheaper

Hi Dan,

I agree with VStus.

❝ For your calculation please keep in mind: the more study periods you have the higher the risk of drop outs.


As I wrote above:

❝ ❝ I would not be worried about the higher chance of dropouts in the replicate study. The impact on power is low (unless the drug is nasty and dropouts are caused by AEs).


Try:

library(PowerTOST)
CV  <- 0.31
dor <- 5 # dropout-rate (in each washout) in percent
des <- "2x2x2"
res <- sampleN.TOST(CV=0.31, design=des, print=FALSE)
n   <- res[["Sample size"]]
pwr <- res[["Achieved power"]]
wo  <- as.integer(substr(des, nchar(des), nchar(des)))-1
el  <- n # eligible subjects
for (j in 1:wo) {
  el <- el*(1-dor/100)
}
cat("Study started with", n, "subjects (expected power",
    sprintf("%.1f%%).", 100*pwr),
    "\nStudy ended with", as.integer(el), "eligible subjects (expected power",
    sprintf("%.1f%%).", 100*power.TOST(CV=CV, design=des, n=el)), "\n")
# Study started with 42 subjects (expected power 81.1%).
# Study ended with 39 eligible subjects (expected power 79.2%).

des <- "2x2x4"
res <- sampleN.TOST(CV=0.31, design=des, print=FALSE)
n   <- res[["Sample size"]]
pwr <- res[["Achieved power"]]
wo  <- as.integer(substr(des, nchar(des), nchar(des)))-1
el  <- n
for (j in 1:wo) {
  el <- el*(1-dor/100)
}
cat("Study started with", n, "subjects (expected power",
    sprintf("%.1f%%).", 100*pwr),
    "\nStudy ended with", as.integer(el), "eligible subjects (expected power",
    sprintf("%.1f%%).", 100*power.TOST(CV=CV, design=des, n=el)), "\n")
# Study started with 22 subjects (expected power 83.3%).
# Study ended with 18 eligible subjects (expected power 75.1%).

Does it really hurt?

d_labes
Berlin, Germany,
2017-03-21 16:16
@ Helmut
Posting: # 17177

 expected power

Dear All!

Don't confuse Helmut's usage of "expected power" here with the functions in package PowerTOST which estimate the expected power and/or the sample size based on expected power – some sort of Bayesian power, taking into account the uncertainty of an observed point estimate of T vs. R and/or the uncertainty of an observed CV.

See f.i. ?exppower.TOST and references given in that man page.

Regards,

Detlew
VStus
Poland,
2017-03-23 15:24
@ Helmut
Posting: # 17180

 Doesn't hurt!

Dear Helmut,
Dear Colleagues,

It even hurts less (or doesn't hurt at all) in the case of scaled bioequivalence.
Last chunk of code modified:
library(PowerTOST)
CV  <- 0.42
dor <- 5       # dropout-rate (in each washout) in percent
des <- "2x2x4" # scaled + more subjects included for drop-outs
res <- sampleN.scABEL(CV=CV, design=des, print=FALSE)
n   <- res[["Sample size"]] + 8 # let's assume we compensate for possible drop-outs
pwr <- res[["Achieved power"]]
wo  <- as.integer(substr(des, nchar(des), nchar(des)))-1
el  <- n
for (j in 1:wo) {
  el <- el*(1-dor/100)
}
cat("Study started with", n, "subjects (expected power",
    sprintf("%.1f%%).", 100*pwr),
    "\nStudy ended with", as.integer(el), "eligible subjects (expected power",
    sprintf("%.1f%%).", 100*power.scABEL(CV=CV, design=des, n=el, regulator='EMA')), "\n")

#Study started with 38 subjects (expected power 81.9%).
#Study ended with 32 eligible subjects (expected power 84.1%).


Best regards, VStus
mahmoud-teaima
2017-03-23 09:17
@ Helmut
Posting: # 17178

 Study costs: Replicate vs. 2×2×2

Hi Helmut,

❝ Calculation of costs is complicated (fixed & variable costs, blahblah). In my CRO we had a spreadsheet with 100 rows… Had a quick look (19 samples / period, last sampling 16 hours; 2×2×2 in 42 subjects vs. 2×2×4 in 22). The replicate design would be ~4% cheaper. ;-) If you hope for RSABE (for CVwR 31% 18 subjects instead of 22 for ABE) the reduction in costs would be ~12%.


Can you please help me with an approximate estimation of the cost of a bioequivalence study? Would you please send me the template sheet that you use for such calculations.

Greetings.

Mahmoud Teaima, PhD.
M.tareq
2017-05-07 23:21
@ Helmut
Posting: # 17314

 Study costs: Replicate vs. 2×2×2

hi all,

from an ethical view regarding replicate studies vs. the standard crossover:

first case:

the published data suggest a slightly borderline CV of 29–31%

the required sample size will be around 42 volunteers without accounting for dropouts.
however, if the study is replicated, whether partially (around 33 volunteers) or fully (around 22 volunteers), power is maintained at at least 80% without scaling to the reference's variability.

in the standard design a higher number of volunteers will be dosed, but only for 2 periods,
whereas in the replicate design fewer subjects will be dosed, but for more periods.
my question is about the ethical view of such an approach vs. the sponsor's risk due to an inadequate design?

2nd case

regarding drugs which aren't stated to be HVDPs (from previous studies or assessment reports):

suppose the CV was found to be around 20% and the sponsor or CRO wishes to go for a replicate design as safe planning, just in case the drug shows a high CV :confused:

what's the regulatory view on that matter? The EMA guideline regarding replicate designs and widening of the acceptance range for Cmax states that the drug must be known to be an HVDP (CV > 30%) and not as a result of outliers

so from an ethical point of view, given that the drug isn't an HVDP and it's stated in the protocol that if the CV is less than 30% the normal acceptance criteria will be applied, why the need for the extra periods/dosing of volunteers from an ethical/scientific view?

Thanks in advance
Helmut
Vienna, Austria,
2017-05-08 16:21
@ M.tareq
Posting: # 17316

 Replicate vs. 2×2×2 (ethics?)

Hi M.tareq,

❝ first case:

❝ the published data suggest a slightly borderline CV of 29–31%

❝ my question is about the ethical view of such an approach vs. the sponsor's risk due to an inadequate design?


For ABE I don’t see a general advantage of replicate designs, except if one expects low dropout-rates. When the dropout-rate exceeds ~5% the loss in power (more periods) might be substantial. Try this one:

library(PowerTOST)
CV     <- 0.31 # assume CV
theta0 <- 0.95 # assumed GMR
target <- 0.80 # desired power
dor    <- seq(0, 15, by=0.1) # expected dropout-rates (%)
########################################################
des    <- c("2x2x4", "2x2x3", "2x3x3", "2x2x2")
col    <- c("red", "magenta", "blue", "darkgreen")
op     <- par(no.readonly=TRUE) # save graphic defaults
par(pty="s")                    # square plotting region
for (j in seq_along(des)) {     # loop over designs
  # calculate number of washout periods
  wo  <- as.integer(substr(des[j], nchar(des[j]), nchar(des[j]))) - 1
  # sample size at start
  N   <- sampleN.TOST(CV=CV, design=des[j], targetpower=target,
                      print=FALSE)[["Sample size"]]
  pwr <- numeric() # vector of final powers
  for (k in seq_along(dor)) { # loop over dropout-rates
    el <- N # eligible subjects at the end of the study
    for (l in 1:wo) { # loop over washout periods
      el     <- el*(1-dor[k]/100) # reduce sample sizes
      pwr[k] <- 100*suppressMessages(power.TOST(CV=CV, design=des[j],
                                     n=floor(el))) # final powers
    }
  }
  if (j == 1) {
    plot(dor, pwr, type="s", main=sprintf("CV %.1f%%", 100*CV),
         cex.main=0.9, cex.axis=0.85, lwd=2, col=col[j], las=1,
         xlab="dropout-rate (%)", ylab="final power (%)")
    grid()
  } else {
    lines(dor, pwr, type="s", lwd=2, col=col[j])
  }
}
legend("bottomleft", legend=des, col=col, lwd=rep(2, length(des)),
       bg="white", cex=0.8, title="design")
par(op) # restore defaults


[figure: final power (%) vs. dropout-rate (%) for the designs 2x2x4, 2x2x3, 2x3x3, 2x2x2]


There is an exception: If you suspect an HVD(P) you should perform the pilot study in a replicate design to get an estimate of CVwR (and CVwT in the full replicates) which is needed for estimating the sample size of the pivotal study. If the pilot study was a simple crossover you have to assume that CVw = CVwR = CVwT, which might be false.
For borderline cases (no reference-scaling intended but one suspects that the CV might be higher than assumed) a Two-Stage Design can be a good alternative. A while ago I reviewed a manuscript exploring the pros and cons of TSDs vs. ABEL. Was very interesting and I hope that the authors submit a revised MS soon.

❝ 2nd case

❝ regarding drugs which aren't stated to be HVDPs (from previous studies or assessment reports)

❝ suppose the CV was found to be around 20% and the sponsor or CRO wishes to go for a replicate design as safe planning, just in case the drug shows a high CV :confused:


“Just in case” never works.

❝ […] the EMA guideline regarding replicate designs and widening of the acceptance range for Cmax states that the drug must be known to be an HVDP (CV > 30%) …


Exactly. You have to state in the protocol that you intend reference-scaling. Furthermore, you have to give a justification that the widened acceptance range for Cmax is of no clinical relevance. IMHO, that’s bizarre (the FDA for good reasons doesn’t require it). HVD(P)s are safe and efficacious despite their high variability since their dose-response curves are flat. The fact that the originator’s drug was approved (no problems in phase III) and is on the market for years demonstrates that there are no safety/efficacy issues.

❝ … and not as a result of outliers


Nobody knows how to deal with this story. :-( Regulations ≠ science. Not required by the FDA…

❝ so from an ethical point of view, given that the drug isn't an HVDP and it's stated in the protocol that if the CV is less than 30% the normal acceptance criteria will be applied, …


It doesn’t work that way. State in the protocol that you intend to scale and follow the EMA’s conditions. If CVwR ≤ 30% don’t scale; otherwise apply ABEL.
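For completeness, the EMA's widened limits follow exp(±0.76·sWR) for CVwR > 30%, with no further widening beyond CVwR 50%; a minimal base-R sketch (PowerTOST's scABEL() implements the same rule):

```r
# EMA ABEL: conventional limits up to CVwR 30%, widened limits
# exp(+/-0.76*swR) above, capped at CVwR 50%
ABEL <- function(CVwR) {
  if (CVwR <= 0.30) return(c(lower = 0.80, upper = 1.25))
  swR <- sqrt(log(1 + min(CVwR, 0.50)^2))  # cap at CVwR 50%
  setNames(exp(c(-1, +1) * 0.76 * swR), c("lower", "upper"))
}
round(100 * ABEL(0.40), 2)  # 74.62 134.02
round(100 * ABEL(0.60), 2)  # capped: 69.84 143.19
```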

❝ why the need for the extra periods/dosing of volunteers from an ethical/scientific view?


For the decision (ABE or ABEL) based on CVwR a replicate design is required.

nobody
2017-05-08 16:52
@ Helmut
Posting: # 17317

 Replicate vs. 2×2×2 (ethics?)

❝ ❝ … and not as a result of outliers


❝ Nobody knows how to deal with this story. :-( Regulations ≠ science. Not required by the FDA…


... sorry, no.

But this "it's safe because it has been shown to be safe in clinical practice over the last decades" argumentation (question one above) never really went well in the EU, I guess. At least you need to write a new, Chapter 2.5-like overview on the PK, pharmacology, and safety to make the assessor sleep well the night after reading your application for a generic. On the other hand, assessors for originators' MAs have a much, much better sleep (better sleeping pills?), in my experience...


Edit: Please don’t shout! [Helmut]

Kindest regards, nobody
Helmut
Vienna, Austria,
2017-05-10 16:10
@ nobody
Posting: # 17343

 Applicability of reference-scaling

Hi nobody,

❝ ❝ Nobody knows how to deal with this story. :-(


❝ ... sorry, no.


Didn’t mean you. Nobody ≠ nobody (case sensitive). ;-)

❝ […] At least you need to write a new, Chapter 2.5-like overview on the pk, pharmacology, safety to make the assessor sleep well next night after reading your application for a generic. On the other hand, assessors for originator's MAs have a much, much better sleep (better sleeping pills?), in my experience...


Well, maybe assessors should take stimulants instead of sleeping pills. Les Benet’s claim “HVD(P)s are safe drugs” was based on two arguments:
  1. If an HVD(P) does not have a flat dose-response curve, there would have been safety/efficacy issues in phase III and the originator’s product would not get approved.
  2. No safety/efficacy issues were found in clinical practice.
    • OK, we all know that pharmacovigilance isn’t very sensitive and the system is not well implemented in some countries. However, 32% of 222 novel therapeutics were affected by a postmarket safety event.*
Hence, I would rather say:

Reference-scaling for the innovator’s product

  1. Should not be allowed when scaling up to the production batch size.
  2. Should not be allowed for major formulation changes for a certain time after approval (e.g., five years).
  3. If there were no relevant issues, it should be allowed.
    Easy for the FDA because the product-specific guidance is issued fast and updated regularly.
    Takes a while for the EMA, but some already exist (e.g., posaconazole).

Reference-scaling for a generic product

  • Should generally be allowed (#3 for the innovator is applicable).
Just my two ¢ worth.


  • Downing NS, Shah ND, Aminawung JA, Pease AM, Zeitoun J-D, Krumholz HM, Ross JS. Postmarket Safety Events Among Novel Therapeutics Approved by the US Food and Drug Administration Between 2001 and 2010. JAMA. 2017; 317(18): 1854–63. doi:10.1001/jama.2017.5150.

M.tareq
2017-05-15 01:04
@ Helmut
Posting: # 17353

 Replicate vs. 2×2×2

Hi Helmut,

Thanks for your comprehensive reply and explanation,

❝ A while ago I reviewed a manuscript exploring the pros and cons of TSDs vs. ABEL. Was very interesting and I hope that the authors submit a revised MS soon.


Can I get the link or the name of the paper when the authors submit it?

In general, if a drug isn't known to be an HVDP – and by definition HVDPs have a wide therapeutic index – yet the CRO/sponsor went with a replicate design:

How would the assessor consider that? I mean, if the published literature or a pilot study suggests a low CV of the reference/test product, yet the CRO/sponsor went with a replicate design?

Like abusing the use of scABE? :confused:

❝ Nobody knows how to deal with this story. :-( Regulations ≠ science. Not required by the FDA…


Agree, yet I read an ANDA submission – can't find the link now – where the sponsor detected outliers based on the T/R ratio of each subject and re-dosed such subjects along with other subjects who exhibited normal PK profiles; though it was after review with the FDA.

Another published paper about ibandronic acid (https://www.ncbi.nlm.nih.gov/pubmed/24756462): the sponsor/CRO stated the definition of outliers using studentized residuals and boxplots, eliminating subjects with values more than 3 IQR outside the boxplot.

My point is, as you kindly said, it's best to state in the protocol how to deal with outliers, especially regarding the estimation of variability, and to show the assessor the reasons for excluding such outlier(s) from the study, or to review it with the regulator before submission of the data?

Thanks and Best Regards.

P.S:
Apologies for being information/knowledge leecher atm, will try to get my seed/leech ratio up ^^
Helmut
Vienna, Austria,
2017-05-15 21:07
@ M.tareq
Posting: # 17355

 Replicate vs. 2×2×2

Hi M.tareq,

❝ ❝ A while ago I reviewed a manuscript exploring the pros and cons of TSDs vs. ABEL. Was very interesting and I hope that the authors submit a revised MS soon.


❝ Can I get the link or the name of the paper when the authors submit it?


If the revised MS will get accepted and published, for sure.

❝ In general, if a drug isn't known to be an HVDP – and by definition HVDPs have a wide therapeutic index – yet the CRO/sponsor went with a replicate design:


Any replicate design can be assessed for conventional ABE. If the sponsor intends to try RSABE it has to be stated in the protocol. Might be worthwhile in borderline cases (CVwR ~30%).

❝ How would the assessor consider that? I mean, if the published literature or a pilot study suggests a low CV of the reference/test product, yet the CRO/sponsor went with a replicate design?


I think you are mixing two things up. Replicate designs are applicable for ABE as well. If the sponsor aims for RSABE – in contrast to data in the public domain (few exist!) which suggest low variability – as a regulator I would be cautious. Note that regulators see a lot of studies. Was the CVwR inflated by poor study conduct? That’s a gray zone.

Pilot studies can be misleading. The estimated CV is not carved in stone. Try this one:

library(PowerTOST)
design <- "2x2x4" # any of "2x2x4", "2x2x3", "2x3x3"
n   <- 16      # total sample size
CV  <- 0.25    # CV as a ratio
df  <- eval(parse(
         text=known.designs()[which(known.designs()$design==design), "df"],
         srcfile=NULL)
       )       # get the degrees of freedom
CL <- CVCL(CV=CV, df=df, side="2-sided")
cat("\ndesign of pilot:", design,
    "\nsample size    :", n,
    "\nestimated CV   :", sprintf("%.2f%%", 100*CV),
    "\nCL of the CV   :", sprintf("%.2f%% to %.2f%%", 100*CL[[1]], 100*CL[[2]]),
    "\n\n")

Gives:

design of pilot: 2x2x4
sample size    : 16
estimated CV   : 25.00%
CL of the CV   : 20.60% to 31.87%

Might be highly variable, right?

❝ Like abusing the use of scABE? :confused:


Maybe.

❝ ❝ Nobody knows how to deal with this story. :-( Regulations ≠ science. Not required by the FDA…


❝ Agree, yet I read an ANDA submission – can't find the link now – where the sponsor detected outliers based on the T/R ratio of each subject and re-dosed such subjects along with other subjects who exhibited normal PK profiles; though it was after review with the FDA.


Yes, that’s an old story (“re-dose the suspected outliers – both with T and R – together with at least five ‘normal’ subjects or 20% of the sample size, whichever is larger”). IMHO, the individual T/R-ratios are not a particularly good idea for assessing outliers (ignoring period effects).

❝ Another published paper about ibandronic acid https://www.ncbi.nlm.nih.gov/pubmed/24756462


Highly variable like all bisphosphonates. I can’t reproduce the estimated sample size: I got 132 and not 138.

❝ … the sponsor/CRO stated the definition of outliers using studentized residuals and boxplots, eliminating subjects with values more than 3 IQR outside the boxplot.


Not surprising since Susana Almeida was co-chair of the Bioequivalence Working Group, European Generic and Biosimilar Medicines Association (EGA). At the joint EGA/EMA workshop (London, June 2010) I – as a joke! – suggested boxplots, which are nonparametric by nature. The EMA hates nonparametric statistics. But the panelists welcomed the “idea” and my joke made it to the Q&A document. My fault. Never presume humor. Now it’s carved in stone.

This study demonstrated why not accepting reference-scaling for AUC might not be a good idea. Since the sample size is based on AUC, products with extreme T/R-ratios of Cmax would pass ABEL due to the high sample size. Given the results of this study: 90% chance to pass with T/R as low as 84.46% or 80% chance to pass with T/R 82.95%. Technically (i.e., according to the GL) nothing speaks against that. But do we want such products on the market?
Wasn’t the case in this study (T/R for Cmax 102.56%), but…
BTW, for the FDA the study could have been performed in just 45 (!) subjects – without risking extreme Cmax-ratios.

❝ My point is, as you kindly said, it's best to state in the protocol how to deal with outliers, especially regarding the estimation of variability, and to show the assessor the reasons for excluding such outlier(s) from the study, or to review it with the regulator before submission of the data?


I would say so.

M.tareq
2017-05-15 23:02
@ Helmut
Posting: # 17356

 Replicate vs. 2×2×2

Thanks a lot for your time and explanation.

Best Regards
DavidManteigas
Portugal,
2017-05-16 14:41
@ Helmut
Posting: # 17357

 Regulators view of HVD drugs

Hi Helmut,

For the sake of curiosity, in your opinion should a regulator approve a generic submitted as "highly variable" although all the previous trials reported low CVs? Technically, the criteria are well defined and no objection should be raised in principle. However, should a study that reports a high variation, when all the previous information reports otherwise, be considered scientifically sound for the demonstration of bioequivalence? Ideally, regulators would publish in their product-specific guidelines which compounds could be considered "highly variable"...

Regards,
David
Helmut
Vienna, Austria,
2017-05-16 17:01
@ DavidManteigas
Posting: # 17358

 Regulators view of HVD drugs

Hi David,

❝ For the sake of curiosity, in your opinion should a regulator approve a generic submitted as "highly variable" although all the previous trials reported low CVs? Technically, the criteria are well defined and no objection should be raised in principle. However, should a study that reports a high variation, when all the previous information reports otherwise, be considered scientifically sound for the demonstration of bioequivalence?


Is the high variability enough to trigger an inspection? Likely. See Section 2 of the EMA’s CMDh Guidance on triggers for inspections of bioequivalence trials: Quick scan.
  • Question 6.
    Are there any observations which raise concerns about the quality or validity of the reported study data in general? E.g.:
    • study data too clean / too messy;
    • data/results in contradiction to published and known data (e.g. distribution and/or characteristics of volunteers) on this product/active substance;
    • conflicting results between studies regarding pharmacokinetic parameters or overall/intra-subject variability.
  • General considerations
    Although response to these questions may not always be easily found, the issues raised should be taken seriously.
    Issues should generally be judged based on proper knowledge on bioequivalence testing methodologies.
    In case it is known, the conduct and outcome of the study may be compared with previous studies in order to check for potential issues.
Can an inspection reveal bad study conduct increasing the variability? Possibly not.
Regulators should assess studies based on the “whole body of evidence”. Would an assessor have the guts to reject an application without relevant findings in an inspection and risk being overruled by the CHMP in a referral? Dunno.

❝ Ideally, regulators would publish in their product-specific guidelines which compounds could be considered "highly variable"...


True. Takes a while. As of today there are only 36 (adopted + draft). Reference-scaling recommended for capecitabine, levodopa/carbidopa/entacapone, posaconazole. In all other cases applicants are left out in the rain with this footnote:
  • As intra-subject variability of the reference product has not been reviewed to elaborate this product-specific bioequivalence guideline, it is not possible to recommend at this stage the use of a replicate design to demonstrate high intra-subject variability and widen the acceptance range of Cmax. If high intra-individual variability (CVintra > 30 %) is expected, the applicants might follow respective guideline recommendations.
IMHO, this is a vicious circle.

ElMaestro
Denmark,
2017-05-17 05:30
@ DavidManteigas
Posting: # 17361

 What's the problem?

Hi DM,

❝ For the sake of curiosity, in your opinion should a regulator approve a generic submitted as "highly variable" although all the previous trials reported low CVs? Technically, the criteria are well defined and no objection should be raised in principle. However, should a study that reports a high variation, when all the previous information reports otherwise, be considered scientifically sound for the demonstration of bioequivalence?


The CV we observe is influenced by bioanalytical variability and possibly by factors no one understands well. For example, it has not – to the best of my knowledge – been much studied whether certain populations are more within-subject variable than others. It is in my opinion entirely likely that if we go about studying a drug product in different places we will observe different CVs, for some reason or other or by chance, even if we take great care with our experiment.
Of course assessments should reflect the applicant's observations.
Additional factors to consider: with time assays often get better, by and large – a CV obtained from a CRO using the API 3000 ten years ago is in my opinion likely to be higher than if it were obtained today with an API 6500 due to S:N phenomena (but I am not implying anything about "how much"), all other factors equal.
CV for Cmax (possibly also CV for AUC) may perhaps only be justifiably compared if the time points for blood sampling are the same (they rarely are between studies).

❝ Ideally, regulators would publish in their product-specific guidelines which compounds could be considered "highly variable"...


They already did that in the general guideline: those with an observed CV above 30%. Simple as that. :-) But you are right, some degree of common sense is also necessary. If you have 20 reports with a CV of 11% and one with a CV of 41% then you start wondering. A few aberrant subject T/Rs can easily inflate a CV, whether they occur by chance or not.

This is the sort of thing that is not covered by much literature and probably won't be for another 20 years?!
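That last point is easy to demonstrate with a toy simulation (assumptions are mine: 24 subjects in a 2×2×2, true CVw 11%, one aberrant subject with T/R = 4; the within-subject variance is estimated from the period differences via σ²d = 2σ²w, ignoring period/formulation effects for simplicity):

```r
# a single aberrant T/R can inflate the estimated within-subject CV
set.seed(42)
n    <- 24
sw   <- sqrt(log(1 + 0.11^2))        # within-subject SD for CVw 11%
d    <- rnorm(n, sd = sqrt(2) * sw)  # log(T) - log(R) per subject
d[1] <- log(4)                       # one aberrant subject with T/R = 4

CVhat <- function(d) sqrt(exp(var(d) / 2) - 1)  # estimated CVw
CVhat(d[-1])  # without the outlier: close to 11%
CVhat(d)      # with the outlier: far above 11%
```

One subject out of 24 roughly doubles the reported CV in this sketch, without the drug being highly variable at all.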

nobody
2017-05-17 09:48
@ ElMaestro
Posting: # 17362

 What's the problem?

❝ ...If you have 20 reports with a CV of 11% and one with a CV of 41% then you start wondering.


OK, and (as a regulator) what are the options? Start an inspection of the CRO? Find some other "good reason" not to grant MA to the pile of trash high quality generic developed in a scientifically advanced pharmaceutical company with cutting edge technology?

Kindest regards, nobody
nobody
nothing

2017-05-17 11:31
(2497 d 21:36 ago)

@ nobody
Posting: # 17364
Views: 21,969
 

 What's the problem?

PS: No, I don't think that assay performance will do anything to the intra-individual CV of a drug product, as long as

- not practically all samples are around the LLOQ, and
- effects are not random.

Did some sims in the old days™; the effect of the assay, when validated according to current recommendations, is negligible.

Especially for the "11% CV in 5 other trials, suddenly 40% in my trial" example there is no explanation even remotely related to the assay.
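A back-of-the-envelope check (a sketch assuming an independent, log-normal assay error whose variance simply adds to the true within-subject variance on the log scale; all CVs here are hypothetical):

```python
import math

def log_var(cv):
    """Log-scale variance corresponding to a CV."""
    return math.log(1.0 + cv ** 2)

def combined_cv(cv_drug, cv_assay):
    """Observed CV if an independent assay error adds to the true
    within-subject variance on the log scale."""
    return math.sqrt(math.exp(log_var(cv_drug) + log_var(cv_assay)) - 1.0)

for cv_assay in (0.05, 0.10, 0.15):
    obs = combined_cv(0.11, cv_assay)
    print(f"true CV 11% + assay CV {cv_assay:.0%} -> observed ~ {obs:.1%}")
```

Even a worst-case 15% assay imprecision lifts a "true" 11% drug to an observed CV below 19%; nowhere near the 40% in the example.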

Kindest regards, nobody
DavidManteigas
★    

Portugal,
2017-05-17 13:25
(2497 d 19:41 ago)

@ nobody
Posting: # 17366
Views: 21,993
 

 What's the problem?

Thank you all for your feedback.

It would be interesting to study the within-study variation for the same compound in the same unit, to understand what the main source of variability is. Although I understand ElMaestro's point about assay sensitivity, I don't think it could be blamed for a study that showed high variability when other studies reported low variability. I understand the approach to highly variable drugs when the drug is actually highly variable per se, and not due to study conduct or assay sensitivity.
ElMaestro
★★★

Denmark,
2017-05-17 14:07
(2497 d 19:00 ago)

@ DavidManteigas
Posting: # 17367
Views: 21,975
 

 What's the problem?

Hi DM,

❝ It would be interesting to study the within-study variation for the same compound in the same unit, to understand what the main source of variability is. Although I understand ElMaestro's point about assay sensitivity, I don't think it could be blamed for a study that showed high variability when other studies reported low variability. I understand the approach to highly variable drugs when the drug is actually highly variable per se, and not due to study conduct or assay sensitivity.


Then it all comes back to one of Helmut's favourite hobbies: calculating confidence intervals for variabilities. I think the conclusion generally is that perhaps the drug is highly variable, and perhaps it isn't.
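A sketch of that hobby in stdlib-only Python (the chi-squared quantile is found by bisection on a series expansion of its CDF; the 22 degrees of freedom are a hypothetical example, not tied to any particular design):

```python
import math

def chi2_cdf(x, df):
    """CDF of the chi-squared distribution via the power series of the
    regularized lower incomplete gamma function P(df/2, x/2)."""
    if x <= 0.0:
        return 0.0
    a, z = df / 2.0, x / 2.0
    term = 1.0 / a
    total = term
    for n in range(1, 2000):
        term *= z / (a + n)
        total += term
        if term < total * 1e-15:
            break
    return total * math.exp(-z + a * math.log(z) - math.lgamma(a))

def chi2_ppf(p, df):
    """Chi-squared quantile by bisection (the series above converges
    well over the search range used here)."""
    lo, hi = 0.0, 10.0 * df + 100.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if chi2_cdf(mid, df) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def cv_ci(cv_hat, df, alpha=0.05):
    """Classical chi-squared confidence interval for a (within-subject) CV."""
    s2 = math.log(1.0 + cv_hat ** 2)  # point estimate of the log-scale variance
    var_lo = df * s2 / chi2_ppf(1.0 - alpha / 2.0, df)
    var_hi = df * s2 / chi2_ppf(alpha / 2.0, df)
    return (math.sqrt(math.exp(var_lo) - 1.0),
            math.sqrt(math.exp(var_hi) - 1.0))

lo, hi = cv_ci(0.32, df=22)
print(f"observed CV 32%, 22 df: 95% CI {lo:.1%} to {hi:.1%}")
```

For an observed CV of 32% with 22 degrees of freedom the 95% CI runs from roughly 24.5% to 46.4%, i.e. compatible both with a drug that is highly variable and with one that is not.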

Pass or fail!
ElMaestro
kumarnaidu
★    

Mumbai, India,
2017-07-11 16:43
(2442 d 16:23 ago)

@ ElMaestro
Posting: # 17533
Views: 21,029
 

 What's the problem?

Hi all,
Recently we did pilot and pivotal (partial reference-replicate) studies for WHO and found high variability (CV = 32%). In the USFDA product-specific guidance a 2×2 crossover design is suggested for this drug. Based on our prior experience, can we perform a reference-replicated study, or do we need to go as per the guidance? :confused:

Kumar Naidu
ElMaestro
★★★

Denmark,
2017-07-11 16:56
(2442 d 16:11 ago)

@ kumarnaidu
Posting: # 17534
Views: 21,062
 

 What's the problem?

❝ Recently we did pilot and pivotal (partial reference-replicate) studies for WHO and found high variability (CV = 32%). In the USFDA product-specific guidance a 2×2 crossover design is suggested for this drug. Based on our prior experience, can we perform a reference-replicated study, or do we need to go as per the guidance? :confused:


You probably have both options available. However, the gain at a CV of 32% is minute: with a CVwR of 32% in a replicate study you are widening the limits so little that I don't think it is worth the effort. At the end of the day, (semi)replicated designs are still far less routine than the standard 222BE jobs, and IECs can have a lot of funny ideas when they review your protocol.
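For the numbers, a quick sketch (using the published scaling constants, 0.760 for the EMA's ABEL and ln(1.25)/0.25 ≈ 0.8926 implied by the FDA's RSABE criterion; this deliberately ignores the point-estimate constraint and the EMA's 50% CVwR cap):

```python
import math

def widened_limits(cv, k):
    """Implied acceptance limits exp(+/- k * sWR) for a given CVwR."""
    swr = math.sqrt(math.log(1.0 + cv ** 2))
    upper = math.exp(k * swr)
    return 1.0 / upper, upper

for label, k in (("EMA ABEL ", 0.760), ("FDA RSABE", math.log(1.25) / 0.25)):
    lo, hi = widened_limits(0.32, k)
    print(f"{label}: {lo:.2%} to {hi:.2%}")
```

At a CVwR of 32% the ABEL limits widen from the conventional 80.00/125.00% to only about 78.88/126.78%, a minute gain for the extra periods.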

That said, I'd like to know the background better before saying this is my final answer :-)

Pass or fail!
ElMaestro