Hi Manojbob,

I like this question :-) There are plenty of reviews in this area and funnily enough none of them deal well with these aspects.

Excluding an anchor in order to make a calibrator pass: it sounds so wrong, on principle. The assay or the method development may not be very good if this solution is needed.

In all cases I think it has to be done as per SOP. You can have decision trees that mandate an anchor, or mandate none.

Also, think about defining the role of the anchor. Most companies simply use the anchor on the regression curve, which is what you are doing as well. However, the alternative is to use the anchors’ ODs to define the ODmin and ODmax guesses for the optimiser. And this, I think, is what I would often prefer in the cases I have dealt with. Nevertheless, not all software offers this opportunity; e.g., I don't recall if Watson LIMS (which is what most CROs use for the regression of ligand-binding assays, 4PL and 5PL curves) allows the user to set the initial guess of the optimiser to the anchor's reading. Even when an optimiser fails to converge, I assure you, there is a minimum somewhere, and if you have very good anchors then ODmin and ODmax are often well reflected there.
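To illustrate that alternative role of the anchors: a minimal sketch in Python (toy data and parameter names are my own; no LIMS works exactly this way), where the anchors' ODs seed the optimiser's initial guesses for ODmin and ODmax in a 4PL fit:

```python
import numpy as np
from scipy.optimize import curve_fit

def fourpl(x, a, b, c, d):
    """4PL: a ~ ODmin asymptote, d ~ ODmax asymptote, c = EC50, b = slope."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# synthetic calibrators plus a low and a high anchor (first/last points)
conc = np.array([0.5, 1, 2, 5, 10, 20, 50, 200])  # 0.5 and 200 = anchors
true = (0.05, 1.2, 8.0, 2.5)                      # a, b, c, d used to simulate
rng  = np.random.default_rng(1)
od   = fourpl(conc, *true) + rng.normal(0, 0.01, conc.size)

# use the anchors' ODs as the optimiser's starting values for ODmin/ODmax
p0 = (od[0],            # a: low anchor's OD
      1.0,              # b: generic slope guess
      np.median(conc),  # c: EC50 guess near the middle of the range
      od[-1])           # d: high anchor's OD
fit, _ = curve_fit(fourpl, conc, od, p0=p0, maxfev=10000)
print(np.round(fit, 3))
```

With anchors sitting well into the plateaus, the starting values land close to the true asymptotes, which is exactly why good anchors help a wobbly optimiser converge.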

Yet, in real life, I sometimes see just a low anchor but no high anchor because the highest calibrator is not anywhere near the ODmax (meaning: the EC50/ED50 etc. is somewhere close to the highest calibrator); as a result you will have curve fits between runs where the ODmax varies by a factor of 20 or so due to variance. Such ODmax's have nothing to do with real life, but the fits and back-calculations may still fit the actual points well. It all traces back to the affinities associated with the antibodies, and these affinities can vary a lot between batches (notably polyclonals).

All in all, I think you can mask the anchor if your SOPs specifically allow it. If you want to mask because it is convenient and gives nicer data, then quite possibly it is better to just call the plate off and re-analyse (also SOP-driven). Policies for deactivating points on the curve should be defined at the time of method development (or SOP development), not during sample analysis. Real life, however, will very often throw you a curve ball with these LBAs. :crying:

Hi Mahmoud,

Not a Dr ;-)

Truth does not belong to the realm of science. :-D

Correct; sometimes CV

Why, can you elaborate?

I see. Please answer my other question (also in the subject line) as well.

Dear Dr

=======

Thank you very much for your information

In this study I found, for the Cmax PK parameter, that

CVintra = 27.3217 and CVinter = 20.7271; this is not typical in BE studies.

In most studies CVinter is greater than CVintra.

I think that the two types (CSH or FA0(1)) in the FDA code are not suitable to obtain these results.

No

Edit: Full quote removed. Please delete everything from the text of the original poster which is not necessary in understanding your answer; see also this post #5 --> 16205! [Helmut]

Hi Mahmoud,

```
                             GMR     Lower   Upper
FDA code:             Cmax   85.734  79.363  92.617
FDA code, TYPE=FA(1): Cmax   85.925  80.087  92.190
```

Are you trying to “save” a failed study? What is stated in the SAP?

Thank you very much

The BE study is a two-sequence, four-period design (TRRT|RTTR).

```
                             GMR     Lower   Upper
FDA code:             Cmax   85.734  79.363  92.617
FDA code, TYPE=FA(1): Cmax   85.925  80.087  92.190
```

Edit: Full quote removed. Please delete everything from the text of the original poster which is not necessary in understanding your answer; see also this post #5 --> 16205! [Helmut]

Hi Mahmoud,

where does this code come from? It’s not the standard one recommended by the FDA here and there.

`model AUCT=sequence period treat/DDFM=kr;`

I would suggest to use `DDFM=SATTERTH` (as the FDA recommends). The default of `DDFM=KR` uses the observed information matrix (`SCORING=0`), as does JMP. You may run into troubles if your study is re-evaluated in other software (e.g., the R package `replicateBE`). In SAS you can get the expected information matrix by setting `SCORING=1`.

`random treat/type=CSH subject=subject G;`

`TYPE=CSH` could possibly be replaced by `TYPE=FA(1)`

Acc. to the FDA’s guidance, Appendix E:

In the *Random* statement, `TYPE=FA0(2)` could possibly be replaced by `TYPE=CSH`.

`FA(1)` is not the same as `FA0(2)`.

Of course.

`FA(q)` = factor analytic and `FA0(q)` = no-diagonal factor analytic. If this is a full replicate design, I would follow the FDA’s recommendation and use `FA0(2)`. However, in the stupid partial replicate designs (TRR|RTR|RRT or TRR|RTR) the optimizer may fail to converge since the model is over-specified (T not repeated). Then you’ve performed a study and don’t get a result because SAS shows you the finger.* I suggest to specify `FA0(1)` instead. State that already in the SAP.

- If you are lucky SAS throws only a warning:

`Convergence criteria met but final hessian is not positive definite.`

- But if Murphy’s law hits:

`WARNING: Did not converge.`
`WARNING: Output 'Estimates' was not created.`
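For readers wondering what the first warning means: a Hessian that is not positive definite has at least one flat (or negative-curvature) direction at the “optimum”, which is exactly what an over-specified model produces. A toy check in Python (numpy only; the matrices are made up for illustration):

```python
import numpy as np

def is_positive_definite(h):
    """A symmetric matrix is positive definite iff its Cholesky
    factorization succeeds."""
    try:
        np.linalg.cholesky(h)
        return True
    except np.linalg.LinAlgError:
        return False

good = np.array([[2.0, 0.5], [0.5, 1.0]])  # proper curvature at the optimum
flat = np.array([[2.0, 0.0], [0.0, 0.0]])  # a flat direction: not identifiable
print(is_positive_definite(good), is_positive_definite(flat))
```

The `flat` matrix mimics a variance component the data cannot identify (like the within-subject variance of T when T is not repeated).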

Guidelines do not provide any criterion for acceptance of anchor points, which is understandable because these points are not part of the valid calibration range. I would like to seek your suggestions on the exclusion of anchors. This query applies to method validation as well as routine analysis.

STD-01 is the anchor and STD-02 is the LLOQ of the curve. Given a situation:

1) When the OD of STD-01 is equal to or greater than that of STD-02 and all standards of the curve meet acceptance criteria, can we choose to mask the lower anchor since we know that its OD is higher?

2) STD-02 of the curve does not meet criteria, and when the lower anchor is masked it would fall within criteria. In such a case, can the lower anchor be masked, and can this be termed an improvement in curve fit?

3) The method is validated and established with anchors. During routine analysis, if the lower anchor is masked for any reason, then, since the anchor point is included in the regression formula for 4PL or 5PL, would the next point, i.e. STD-02 (LLOQ), be considered the new anchor, and in such a case would the curve be accepted?

Dear All

========

In the following SAS code:

```sas
proc mixed data=d1;
  class treat sequence subject period;
  model AUCT=sequence period treat/DDFM=kr;
  random treat/type=CSH subject=subject G;
  repeated/group=treat subject=subject;
  estimate 'T-R' treat 1 -1 /cl alpha=0.10;
run;
```

In the *Random* statement, `TYPE=CSH` could possibly be replaced by `TYPE=FA(1)`. `FA(1)` is not the same as `FA0(2)`.

Thank You

M.Youseed

Edit: Category and subject line changed; see also this post #1 --> 16205. [Helmut]

You can find more information in:

Schütz H, Labes D, Tomashevskiy M, et al. Reference Datasets for Studies in a Replicate Design Intended for Average Bioequivalence with Expanding Limits. AAPS J. 2020; 22(4): Article 44. doi:10.1208/s12248-020-0427-6.

Approach 2. “The QC samples are analysed against a calibration curve, obtained from freshly spiked calibration standards, and the obtained concentrations are compared to the nominal concentrations. The mean concentration at each level should be within ±15% of the nominal concentration.” (EMA validation guideline: “Re-injection of samples can be made in case of instrument failure if reinjection reproducibility and on-injector stability have been demonstrated during validation.”)

Approach 2. It’s equivalent to on-injector stability, so a different approach should be considered.

Approach 1. You can consider an ANOVA with different injections at different times, or check bias and precision between injections at different times.
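The ±15% rule quoted above for Approach 2 is easy to codify. A hypothetical sketch in Python (the function name and the QC data are mine, not from the guideline):

```python
def qc_level_passes(measured, nominal, tol=0.15):
    """EMA-style rule: the mean back-calculated concentration at a QC
    level must be within ±15% of the nominal concentration."""
    mean = sum(measured) / len(measured)
    return abs(mean - nominal) / nominal <= tol

# low and high QCs re-injected against a fresh calibration curve
low_qc  = [2.9, 3.1, 3.2]     # nominal 3.0
high_qc = [81.0, 78.5, 84.0]  # nominal 80.0
print(qc_level_passes(low_qc, 3.0), qc_level_passes(high_qc, 80.0))
```

The same helper, applied per injection time, also gives the bias part of the Approach 1 check.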

Approach 1 to be followed.

Edit: Full quote removed. Please delete everything from the text of the original poster which is not necessary in understanding your answer; see also this post #5 --> 16205. Please follow the Forum’s Policy! [Helmut]

Dear ksreekanth,

Well, what do you plan to do if you have to re-inject part of the samples during your study because of an instrument failure in the middle of the run: will you refer to the initial calibration curve (Approach 1), or prepare a fresh calibration curve and analyse the old samples against it (Approach 2)?

So how do you think you should proceed during your validation?

Dear ElMaestro,

I am happy to hear that this information was considered as useful.

Regarding missing values in BE trials: you may find this work of interest. Of note, it addresses missing values from a conceptual rather than a technical point of view (such as the impact on the width of a CI).

best regards & hope this helps

Martin

Hi Martin,

Thanks for the reference.

I'd love to see a work where someone not only debates BQLs but also discusses what the various options imply for the residual variability, and thus the confidence interval, in BE trials; the same for missing values. I have a feeling it might not be a big deal, but I dare not say at this point what "big deal" means quantitatively.

Hi Martin,

Yep. The EMA’s BE-GL states:

Non-compartmental methods should be used for determination of pharmacokinetic parameters in bioequivalence studies. The use of compartmental methods for the estimation of parameters is not acceptable.

Why am I not surprised that Thomas recommends kernel density imputation? ;-)

However,

Dear Helmut and Sara,

In statistical language - values

Of course, adequate handling would require some modeling, which seems to be in contradiction to regulatory thinking as NCA is clearly favored in BA/BE studies. However, I would like to bring a recent paper on this topic to your attention: Barnett H, Geys H, Jacobs T, Jaki T (2020). Methods for Non-Compartmental Pharmacokinetic Analysis with Observations below the Limit of Quantification. Statistics in Biopharmaceutical Research, 1–23.

best regards & hope this helps

Martin

Hi, I need clarity regarding the Reinjection Reproducibility (RR) experiment performed during bioanalytical method validation. I have observed two kinds of approaches; which one should be chosen?

Approach 1 – RR with initial CC and RR with reinjected CC:

In this approach, once a run has been acquired and accepted, the same run (with the same set of CC standards as well as low- and high-level QC samples) is reinjected and considered as RR with reinjected CCs; the reinjected low- and high-level QC samples are then evaluated against the initial as well as the reinjected CCs.

Approach 2 – RR with fresh CC:

In this approach, once a run has been acquired and accepted, only the low- and high-level QC samples of this run are reinjected under fresh CC standards and checked for accuracy, and reinjection stability is said to be established.

Kindly clarify whether the check point is the reproducibility of QC results with the initial CC as well as the reinjected CC (Approach 1) or the stability of reinjected QC samples under a fresh CC (Approach 2). Which approach should be followed and established during method validation so as to mimic reinjection conditions due to system stoppages encountered during study sample analysis?

Hi Sara,

You can use the median and quartiles or \(\small{\bar{x}_{geo}\mp SD_{geo}}\) if a certain percentage of samples are measurable (I have seen SOPs with 50%, 67%, and 75%) and nothing (‘not reportable’) otherwise. At the end of the day it’s not important at all (not relevant for the BE assessment). Use whatever you like.
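Such an SOP rule is trivial to implement. A sketch in Python (the 2/3 threshold and the function name are my assumptions, for illustration only):

```python
import math

def summarize(values, lloq, min_frac=2/3):
    """Geometric mean of the measurable values, reported only if the
    fraction of samples >= LLOQ meets the SOP's threshold (2/3 here);
    otherwise 'not reportable' (None)."""
    measurable = [v for v in values if v >= lloq]
    if len(measurable) / len(values) < min_frac:
        return None  # 'not reportable'
    logs = [math.log(v) for v in measurable]
    return math.exp(sum(logs) / len(logs))

print(summarize([0.0, 12.0, 9.0], lloq=1.0))  # 2/3 measurable -> reported
print(summarize([0.0, 0.0, 9.0], lloq=1.0))   # 1/3 measurable -> None
```

BQLs never enter the geometric mean itself; the threshold only decides whether the statistic is reported at all.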

See also this (lengthy) thread --> 20235.

To quote Harold Boxenbaum (Crystal City workshop about bioanalytical method validation, Arlington 1990):

*After a dose we know only one thing for sure: The concentration is not zero.*

Correct, since$$\lim_{x \to 0} \log x=-\infty.$$For simplicity we can say that \(\small{\log 0}\) is undefined. It is reasonable to assume that concentrations (\(\small{x \in \mathbb{R}^+}\)) follow a lognormal distribution, and the geometric mean would be the best estimator of location. Some people chicken out, set BQLs to zero, and present

- Phoenix/WinNonlin by default calculates descriptive statistics only for numeric values. This can lead to strange results. Say, we have \(\small{n-2}\) values which are BQL and two \(\small{C\geq LLOQ}\). Then we end up with \(\small{\bar{C}_{ar}=\tilde{C}=\tfrac{C_1+C_2}{2},\bar{C}_{geo}=\sqrt{C_1\times C_2}}\). Doesn’t make sense. However, we can specify *different* rule sets for descriptive statistics and plots.

- A goody from the FDA’s NDA 204-412 (mesalamine delayed release capsules, n = 238, sampling times: pre-dose, 2, 3, 4, 5, 6, 7, 8, 10, 12, 14, 16, 24, 30, 36, and 48 h post-dose). BQLs were imputed as LLOQ/2.

Splendid, \(\small{\bar{x}\mp SD}\) in bloody Excel. Hey, wait a minute, that’s a fucking *line plot*… Oh dear! One-hour intervals in the beginning are as wide as the 12 hours at the end.

Let’s see the *XY-plot*:

Do these guys and dolls really believe that at seven hours there’s a ~16% chance that concentrations are ≤ –232 and a ~1% chance that concentrations are ≤ –731‽ Any statistic implies an underlying distribution. The arithmetic mean implies a normal distribution with \(\small{x \in \mathbb{R}\:\vee\:x \in \left \{-\infty, +\infty\right \}}\). Fantastic.

Which cult of Pastafarianism do they belong to?

The one holding that negative masses exist or the one believing in negative lengths?

\(\small{\bar{x}_{geo}\mp SD_{geo}}\) reflects the terrible variability of this drug much better and shows that high concentrations are more likely than low ones.
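A quick numerical illustration with made-up, right-skewed data of why \(\small{\bar{x}\mp SD}\) implies impossible (negative) concentrations while \(\small{\bar{x}_{geo}\mp SD_{geo}}\) cannot:

```python
import math

data = [10.0, 20.0, 1000.0]  # heavily right-skewed, like concentrations
n = len(data)
mean = sum(data) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))

logs = [math.log(x) for x in data]
mlog = sum(logs) / n
gmean = math.exp(mlog)                                            # geometric mean
gsd = math.exp(math.sqrt(sum((l - mlog) ** 2 for l in logs) / (n - 1)))

print(mean - sd)    # negative: an 'impossible' concentration
print(gmean / gsd)  # positive by construction
```

The geometric SD is multiplicative, so the lower bound is gmean divided by gsd and can never cross zero.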

Hi everyone,

What is your opinion on below quantification limit (BQL) substitution for concentration data summary statistics, by timepoint?

Zero substitution is the one I have seen the most, however, I don't think it is suitable for the calculation of geometric means.

Could you give me your opinion on this matter?

Many thanks!

Hi ElMaestro & Sveta,

@ElMaestro: Operational – no. Statistical – likely yes.

The current regulatory thinking expressed at numerous conferences (nothing published) is that one still has to adjust α because in the first part one gets

@Sveta: For most drugs it is more difficult to demonstrate BE in the fed state (true food-drug interaction, higher variability) than in the fasted state. Consider switching your approach (fed followed by fasted). I haven’t seen a single case where the fed study passed and the fasted one failed, but a lot of cases the other way ’round – which required reformulation. Unfortunately many companies start with the fasted study (hey, that’s standard) only to be hit in the fed state.

I recommend also to evaluate the first part with `PowerTOST`.

Note that the EMA’s BE-GL states:

In studies with more than two treatment arms (e.g. […] a four period study including test and reference in fed and fasted states), the analysis for each comparison should be conducted excluding the data from the treatments that are not relevant for the comparison in question.

(my emphasis)

- One would have to formalize the decision process in the selection of T₁ or T₂: GMR closer to one; if similar, the one with lower variability, etc. IMHO, not worth the effort, since the average gain in sample size even for an optimistic α 0.0304 over Bonferroni’s 0.025 is just ~5%. Add the given reluctance of assessors towards simulation-based methods…

- Since you will have only two treatments in each analysis, estimate the sample size for a 2×2×2 design and not for 3×3 Latin Squares. This sometimes requires slightly higher sample sizes.

```r
library(PowerTOST)
CV     <- seq(0.15, 0.3, 0.01) # Intra-subject CV
theta0 <- 0.95                 # Assumed T/R-ratio
target <- 0.80                 # Target (desired) power
alpha0 <- 0.05                 # Nominal level
k      <- 2                    # Number of tests
alpha  <- alpha0 / k           # Bonferroni-adjustment
res <- data.frame(CV = CV,
                  design.3 = "3x3", n.3 = NA, power.3 = NA,
                  design.2 = "2x2x2", n.2 = NA, power.2 = NA)
for (j in 1:nrow(res)) {
  res[j, 3:4] <- signif(sampleN.TOST(alpha = alpha, CV = res$CV[j], theta0 = theta0,
                                     targetpower = target, design = res$design.3[j],
                                     details = FALSE, print = FALSE)[7:8], 3)
  res[j, 6:7] <- signif(sampleN.TOST(alpha = alpha, CV = res$CV[j], theta0 = theta0,
                                     targetpower = target, design = res$design.2[j],
                                     details = FALSE, print = FALSE)[7:8], 3)
}
res$change <- sprintf("%+4.2f", 100 * (res[, 6] - res[, 3]) / res[, 3])
res$change[res$change == "+0.00"] <- "±0.00"
names(res)[2:7] <- rep(c("design", "n", "power"), 2)
txt <- paste0("Assumed \u03B8 ", theta0, ", target (desired) power ", target)
if (alpha != 0.05) {
  txt <- paste0(txt, ", adjusted \u03B1 ", alpha, " (", 100 * (1 - 2 * alpha), "% CI), ")
} else {
  txt <- paste0(txt, ", \u03B1 0.05 (conventional 90% CI), ")
}
txt <- paste0(txt, "TIE \u2264", signif(1 - (1 - alpha)^k, 5), "\n")
cat(txt); print(res, row.names = FALSE)
```

Peanuts:

```
Assumed θ 0.95, target (desired) power 0.8, adjusted α 0.025 (95% CI), TIE ≤0.049375
   CV design  n power design  n power change
 0.15    3x3 15 0.857  2x2x2 16 0.855  +6.67
 0.16    3x3 15 0.808  2x2x2 16 0.806  +6.67
 0.17    3x3 18 0.839  2x2x2 18 0.813  ±0.00
 0.18    3x3 21 0.858  2x2x2 20 0.816  -4.76
 0.19    3x3 21 0.817  2x2x2 22 0.817  +4.76
 0.20    3x3 24 0.833  2x2x2 24 0.815  ±0.00
 0.21    3x3 27 0.844  2x2x2 26 0.812  -3.70
 0.22    3x3 27 0.808  2x2x2 28 0.808  +3.70
 0.23    3x3 30 0.817  2x2x2 30 0.802  ±0.00
 0.24    3x3 33 0.824  2x2x2 34 0.823  +3.03
 0.25    3x3 36 0.828  2x2x2 36 0.816  ±0.00
 0.26    3x3 39 0.830  2x2x2 38 0.808  -2.56
 0.27    3x3 39 0.801  2x2x2 40 0.801  +2.56
 0.28    3x3 42 0.803  2x2x2 44 0.813  +4.76
 0.29    3x3 45 0.805  2x2x2 46 0.805  +2.22
 0.30    3x3 48 0.805  2x2x2 50 0.814  +4.17
```

- In the ANOVA you get only one – pooled – residual variance. Apart from problems with potentially biased estimates and an inflated TIE, you could base your decision only on the T/R-ratios. If they are similar, which one will you select? Flip a coin? In the IBD analyses you get two variance estimates, which in such a case would be helpful.