## Tricky… [RSABE / ABEL]

Hi Elena,

» In the protocol, we stated that first we would try ABE, and if we failed then would go to scaling.

So far, so good (though usually you would check for a CVwR >30% first).

» We passed the BE criteria within 80-125% range for both Cmax and AUC (ABE approach).

OK.

» The expert demanded to calculate RR that we did not do but can do, …

Likely he/she is interested whether the reference is a HVD(P).

» … but also to control TIE that we think is nonsense.

Tricky. In your original post you also stated a decision tree. Any of these decisions can be wrong, which will inflate the TIE. Imagine you failed ABE not because the reference is highly variable but because the study lacked power. Then you would continue to #2. The observed CVwR was >30% by chance, and you expand the limits. However, the true CVwR (in the patient population) is ≤30%, and the decision is wrong – an inflated TIE (see there). More in the answer to #4.
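To see how easily such a wrong decision occurs by chance alone: the estimated within-subject variance follows a scaled χ² distribution, so even with a true CVwR of 25% a fair share of studies will show an observed CVwR above 30%. A minimal Python sketch (the true CV of 25% and the 22 degrees of freedom are arbitrary assumptions for illustration, not Elena’s data):

```python
import math, random

random.seed(1)
cv_true = 0.25                       # true CVwR <= 30%: expansion would not be justified
sigma2  = math.log(cv_true**2 + 1)   # true within-subject variance (log scale)
df      = 22                         # e.g., roughly a 2x2x4 design with 24 subjects
n_sim   = 50_000

wrong = 0
for _ in range(n_sim):
    # variance estimate: s2 ~ sigma2 * chi-square(df) / df
    chi2 = sum(random.gauss(0, 1)**2 for _ in range(df))
    s2 = sigma2 * chi2 / df
    cv_obs = math.sqrt(math.exp(s2) - 1)
    if cv_obs > 0.30:                # observed CVwR > 30% by chance -> limits expanded
        wrong += 1

print(f"P(observed CVwR > 30% | true CVwR = 25%) ~ {wrong / n_sim:.1%}")
```

With these assumptions roughly one study in ten would wrongly expand the limits.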

» As a result, I have a couple of additional questions.
»
» 1. Is calculation RR for the reference drug mandatory …

Quoting the EMA’s GL:

For the acceptance interval to be widened the bioequivalence study must be of a replicate design where it has been demonstrated that the within-subject variability for Cmax of the reference compound in the study is >30%.

Similar wording in other jurisdictions (ASEAN States, Australia, Canada, East African Community, Egypt, Eurasian Economic Union, New Zealand, Russian Federation, the WHO).
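For reference, the EMA’s expansion is exp(±0.760·sWR) with sWR = √ln(CVwR² + 1), applicable for 30% < CVwR ≤ 50% and capped at 69.84–143.19% beyond that. A minimal Python sketch (the function name is mine):

```python
import math

def abel_limits(cv_wr: float) -> tuple[float, float]:
    """Acceptance limits per the EMA's ABEL approach.

    cv_wr: within-subject CV of the reference (e.g., 0.35 for 35%).
    Conventional 80.00-125.00% for CVwR <= 30%; expanded as
    exp(-/+ 0.760 * sWR) for 30% < CVwR <= 50%; capped at
    69.84-143.19% above 50%.
    """
    s_wr = math.sqrt(math.log(cv_wr**2 + 1))  # CV -> within-subject SD (log scale)
    if cv_wr <= 0.30:
        return (0.80, 1.25)
    if cv_wr > 0.50:
        return (0.6984, 1.4319)
    return (math.exp(-0.76 * s_wr), math.exp(0.76 * s_wr))

for cv in (0.30, 0.35, 0.50):
    lo, hi = abel_limits(cv)
    print(f"CVwR {cv:.0%}: {lo:.2%} - {hi:.2%}")
```

E.g., at CVwR 35% the limits widen to 77.23–129.48%.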

» … and used somewhere else, except to widen the bioequivalence interval for Cmax (reference to my first questions)?
• For the Gulf Cooperation Council (Bahrain, Kuwait, Oman, Qatar, Saudi Arabia, United Arab Emirates) and South Africa you can use ABE for Cmax with fixed (!) limits of 75.00–133.33% if CVwR >30%.*
• AUC additionally for the WHO, where you also have to compare within-subject variabilities of T and R.
• For Health Canada AUC only.
• Any PK metric for the FDA and China’s CDE (though by another method called RSABE). In the linked guidance you see that the assessment whether the reference is highly variable sits on top of the decision tree. Only if swR <0.294 (CVwR ~30%), you would assess the study by ABE – not the other way ’round.
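For orientation, the FDA’s cut-off and scaled criterion translate into numbers as follows: sWR relates to CVwR via CV = √(exp(sWR²) − 1), so sWR = 0.294 corresponds to CVwR ≈ 30%, and the implied limits of the scaled criterion follow exp(±(ln 1.25 / 0.25)·sWR). A small Python sketch (the function names are mine):

```python
import math

S_W0       = 0.25    # FDA regulatory standardized variation
CUTOFF_SWR = 0.294   # switch between ABE (below) and RSABE (at/above)

def cv_from_swr(s_wr: float) -> float:
    """Convert within-subject SD (log scale) to CV."""
    return math.sqrt(math.exp(s_wr**2) - 1)

def rsabe_implied_limits(s_wr: float) -> tuple[float, float]:
    """Implied limits of the scaled criterion, exp(+/- (ln 1.25 / sW0) * sWR)."""
    k = math.log(1.25) / S_W0   # ~0.8926
    return (math.exp(-k * s_wr), math.exp(k * s_wr))

print(f"sWR = {CUTOFF_SWR} -> CVwR = {cv_from_swr(CUTOFF_SWR):.2%}")
```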

» 2. Is TIE nominal and cannot be controlled if we do not widen the bioequivalence interval in replicative studies and use ABEL?
» 3. Could you provide us with formula to calculate CI for fully replicative studies in math not computer format?

It’s the same as in ABE.
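In math format: CI = exp((x̄T − x̄R) ± t(1−α, df) · SE(x̄T − x̄R)) with α = 0.05 for the conventional 90% CI; only the standard error and the degrees of freedom come from the replicate model. A toy Python calculation with made-up numbers (not Elena’s data), taking the t-value from tables:

```python
import math

# Hypothetical example values:
pe_log = math.log(1.05)   # point estimate T/R on the log scale
se     = 0.06             # standard error of the T-R difference
t_val  = 1.717            # t(0.95, df = 22) from tables (alpha = 0.05)

lo = math.exp(pe_log - t_val * se)
hi = math.exp(pe_log + t_val * se)
print(f"90% CI: {lo:.2%} - {hi:.2%}")
# BE is concluded if the CI lies entirely within 80.00-125.00%
# (or within the widened limits, where applicable)
```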

» 4. How mathematically (in formula) can the inflation of TIE take place in RBS if we nominally control t-parameter (choose it from tables, for example)?

RBS = replicated biostudy?
The problem with decision trees is that control of the TIE cannot be shown analytically (by a formula) – we need simulations. My gut feeling tells me that in your case (passing ABE in step #1) there should be no problem. But who knows?

I try not to think with my gut.
If I’m serious about understanding the world,
thinking with anything besides my brain, as tempting as that might be,
is likely to get me into trouble.
Carl Sagan

I’m too busy to set up simulations. Ask a statistician (hint: R is much faster than SAS, MATLAB, or GNU Octave).

However, you could check the TIE for the conventional framework in PowerTOST. You need CVwT, CVwR, and the number of subjects / sequence. Example (one million simulated studies by default):

```r
library(PowerTOST)
CVwT   <- 0.20
CVwR   <- 0.20
CV     <- c(CVwT, CVwR)
n      <- c(12, 12) # subjects / sequence (arbitrary order)
theta0 <- scABEL(CV = CVwR)[["upper"]]
# simulations via the key statistics
power.scABEL(CV = CV, n = n, theta0 = theta0, design = "2x2x4")
# simulations based on subject data (slower)
power.scABEL.sdsims(CV = CV, n = n, theta0 = theta0, design = "2x2x4")
```

If the empiric TIE is ≤0.05, you are fine with the usual approach. Whether this is sufficient to convince the expert, dunno. Not what you planned & did.

» 5. What role of fixed or random effects in RBS or ordinary BS?

That’s almost a philosophical question. In the strict sense:
1. If you treat subjects as a fixed effect, you make a statement about the subjects in the study.
2. If you treat them as a random effect, you make a statement about the population of other subjects.
At the end of the day you extrapolate the results of the study to the population of patients. Some statisticians (including ones of the FDA, Health Canada, China’s CDE, and myself) think that #2 is the correct way. Others (of the EMA, …) prefer #1. If the study is balanced and complete (i.e., no missing periods) the outcome is identical.
I performed large simulations, and seemingly the EMA’s ‘Method A’ is slightly more conservative than ‘Method B’. However, both methods assume homoscedasticity (identical within-subject variabilities of T and R) – a rather strong assumption, which was demonstrated to be false in numerous studies… The FDA’s method (termed ‘Method C’ by the EMA) – a full-blown mixed-effects model – is generally more conservative (wider confidence interval). But that’s another story.

» I am not a statistician

I tried to answer in a not too statistical way.

Dif-tor heh smusma 🖖
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes