AB
☆

India,
2012-05-04 17:13

Posting: # 8514

## SAS error in 3 way ref replicate study [RSABE / ABEL]

Dear all,

We conducted a BE study in a 3-way reference-replicate design for the FDA and ran into the following error when executing the SAS code for the unscaled 90% bioequivalence confidence intervals given in the "Draft Guidance on Progesterone" for the parameter AUCI:

"NOTE: 20 observations are not included because of missing values.
WARNING: Did not converge."

These 20 missing observations are the AUCI values that were not calculated in the respective treatment groups because their r² values were below 0.80.

However, when we re-ran the analysis including these 20 missing AUCI values (recalculated without applying the r² criterion), there was no error.

Please suggest the correct way to deal with this.

Regards,
AB
jag009
★★★

NJ,
2012-05-04 17:36

@ AB
Posting: # 8515

## SAS error in 3 way ref replicate study

Hi AB,

Sample size n?

John
AB
☆

India,
2012-05-05 08:45

@ jag009
Posting: # 8516

## SAS error in 3 way ref replicate study

Hi Jag009,

❝ Sample size n?

The sample size is 60.

Regards,
AB
d_labes
★★★

Berlin, Germany,
2012-05-06 12:10

@ AB
Posting: # 8518

## FDA's ABE code and partial replicate design

Dear AB,

❝ "NOTE: 20 observations are not included because of missing values.

❝ WARNING: Did not converge."

❝ These 20 missing observations are the AUCI values that were not calculated in the respective treatment groups because their r² values were below 0.80.

❝ However, when we re-ran the analysis including these 20 missing AUCI values (recalculated without applying the r² criterion), there was no error.

Point 1:
The FDA ABE code for replicate crossover studies using SAS PROC MIXED fits a model that is over-specified for data from a partial replicate design: the intra-subject variability of the Test formulation is confounded with the subject-by-formulation interaction within that design and is not identifiable on its own.

This may lead to convergence problems in the REML estimation (see this thread) or to unreliable values of the intra-subject variances, especially for the Test formulation (see this thread).
Your missing values seem to exacerbate the problem, but IMHO they are not its source.

Ways out? I don't know exactly.
The logical way within the method given in the progesterone guidance would be not to use the PROC MIXED code, but instead to use the estimate and its 90% confidence interval obtained from the evaluation of the intra-subject contrasts of T vs. R (step "Intermediate analysis – ilat") as the measure of ABE.
I have always wondered why the PROC MIXED code is recommended at all if the evaluation via intra-subject contrasts has already indicated that scaled ABE is not applicable.
On the other hand, where applicable, the ABE criterion evaluated via intra-subject contrasts is part of the scaled ABE criterion. That is illogical to me.

Another possibility is to reduce the model in the FDA PROC MIXED code. Neglecting the subject-by-formulation interaction (setting it to zero) leads to a somewhat better-behaved model. You can achieve this by setting the covariance structure within
  RANDOM TRT/TYPE=FA0(2) SUB=SUBJ G;
to CS instead of FA0(2) or CSH.
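For what it's worth, the contrast-based evaluation can be sketched in a few lines. The following is an illustrative Python translation, not the guidance's SAS code; the function name and the example data are made up, and scipy is assumed to be available for the t-quantile:

```python
import math
from statistics import mean, stdev

from scipy import stats  # assumed available, for the t-quantile


def abe_by_contrasts(lnT, lnR1, lnR2, alpha=0.10):
    """90% CI for T/R from intra-subject contrasts in a partial
    replicate (TRR-type) design: d_i = lnT_i - (lnR1_i + lnR2_i)/2."""
    d = [t - (r1 + r2) / 2 for t, r1, r2 in zip(lnT, lnR1, lnR2)]
    n = len(d)
    pe = mean(d)                   # point estimate on the log scale
    se = stdev(d) / math.sqrt(n)   # standard error of the mean contrast
    t_crit = stats.t.ppf(1 - alpha / 2, n - 1)
    lo, hi = pe - t_crit * se, pe + t_crit * se
    # back-transform to percent of Reference
    return tuple(100 * math.exp(v) for v in (pe, lo, hi))
```

Because each subject receives T once and R twice, the within-subject contrast removes all between-subject effects; period effects are ignored in this sketch.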

Point 2:
Not calculating the AUC(0-inf) values when the fit of the terminal part of the concentration-time curve has an r² below 0.80 is statistically not very sound, not to say nonsense, IMHO, regardless of the study design used.
Use the forum search and you will find some (partly lengthy) discussions on this subject. See here and here, for instance.

Regards,

Detlew
Helmut
★★★

Vienna, Austria,
2012-05-06 15:29

@ d_labes
Posting: # 8519

## Arbitrary (and unjustified) cut-off of r²

Dear Detlew & AB!

❝ Not calculating the AUC(0-inf) values when the fit of the terminal part of the concentration-time curve has an r² below 0.80 is statistically not very sound, not to say nonsense, IMHO, regardless of the study design used.

Yes! Explained variance (sloppy: information) in regression (strongly!) depends on the sample size. See here and a rather lengthy thread of 2002 at David Bourne’s PKPD-list.
Critical values of $$\small{r}$$ according to Odeh (1982)1 (and modified for $$\small{r^2}$$); one sided, 5%:$$\small{\begin{array}{rcc} n & r & r^2\\\hline 3 & 0.9877 & 0.9755\\ 4 & 0.9000 & 0.8100\\ 5 & 0.805\phantom{0} & 0.6486\\ 6 & 0.729\phantom{0} & 0.5319\\ 7 & 0.669\phantom{0} & 0.4481\\ 8 & 0.621\phantom{0} & 0.3863\\ 9 & 0.582\phantom{0} & 0.3390\\ 10 & 0.549\phantom{0} & 0.3018\\ 11 & 0.521\phantom{0} & 0.2719\\ 12 & 0.497\phantom{0} & 0.2473\\ 13 & 0.476\phantom{0} & 0.2267\\ 14 & 0.457\phantom{0} & 0.2093\\ 15 & 0.441\phantom{0} & 0.1944\\\hline \end{array}}$$ In other words, an $$\small{r^2}$$ of 0.6486 from five data points denotes the same ‘quality of fit’ as an $$\small{r^2}$$ of 0.9755 from three. Searching the forum I get the impression that you (AB) are not alone with a cut-off of 0.80. Justification: nil. Maybe there is some copy-pasting going on? If you really want to use a cut-off (which I don’t recommend, and which is not required in any GL), take the number of data points into account. I strongly suggest revising your SOP.
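These critical values follow directly from the t-distribution: under H0: ρ = 0, the statistic t = r·√(n−2)/√(1−r²) has n−2 degrees of freedom, so the one-sided critical value is r = t/√(t²+n−2). A minimal Python sketch (assuming scipy is available) reproduces the table:

```python
import math

from scipy import stats  # assumed available, for the t-quantile


def critical_r(n, alpha=0.05):
    """One-sided critical value of the correlation coefficient
    for testing rho = 0 with n data points (df = n - 2)."""
    t = stats.t.ppf(1 - alpha, n - 2)
    return t / math.sqrt(t * t + n - 2)


for n in (3, 4, 5, 10, 15):
    r = critical_r(n)
    print(f"n={n:2d}  r={r:.4f}  r^2={r * r:.4f}")
```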
BTW, visual inspection of fits is mandatory (see there with references). Don’t trust in numbers alone. A classical example is Anscombe’s quartet.2

All data sets: $$\small{\bar{x}=9.0,\,s_x^2=11,\,\bar{y}=7.5,\,s_y^2\approx 4.13\rightarrow\widehat{y}=3+0.5\cdot x,\,r_{xy}=0.816\ldots}$$
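These near-identical summary statistics are easy to verify. A small Python sketch with the quartet's data (the same data ship with R as anscombe):

```python
from statistics import mean, variance

x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
quartet = [
    (x123, [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]),
    (x123, [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]),
    (x123, [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]),
    ([8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8],
     [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]),
]


def ols(x, y):
    """Ordinary least-squares slope and intercept."""
    mx, my = mean(x), mean(y)
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx


# every set: x-bar = 9, var(x) = 11, y-bar ~ 7.50, var(y) ~ 4.13,
# fitted line ~ y = 3 + 0.5 x -- yet the four scatter plots look nothing alike
for x, y in quartet:
    slope, intercept = ols(x, y)
    print(round(mean(x), 1), round(variance(x), 1),
          round(mean(y), 2), round(slope, 3), round(intercept, 2))
```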

1. Odeh RE. Critical values of the sample product-moment correlation coefficient in the bivariate distribution. Commun Statist–Simula Computa. 1982; 11(1): 1–26. doi:10.1080/03610918208812243.
2. Anscombe FJ. Graphs in statistical analysis. Am Stat. 1973; 27: 17–21. doi:10.2307/2682899.

Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
d_labes
★★★

Berlin, Germany,
2012-05-07 10:48

@ Helmut
Posting: # 8520

## Anscombe quartet

Dear Helmut!

❝ BTW, visual inspection of fits is mandatory (see here with references). Don’t trust in numbers alone. A classical example is Anscombe’s quartet.2

Very nice illustration!
Thanks for that; I was not aware of it until now, although it was devised as early as 1973 (if Wikipedia is correct).

Regards,

Detlew
Helmut
★★★

Vienna, Austria,
2012-05-07 13:15

@ d_labes
Posting: # 8522

## Anscombe quartet in R

Dear Detlew!

❝ Very nice illustration!

I hijacked the code from Wikimedia Commons and learned that the data are available in R’s standard installation. Give it a try:

?anscombe

Helmut Schütz
AB
☆

India,
2012-05-07 14:07

@ Helmut
Posting: # 8523

## Anscombe quartet in R

Dear Detlew & HS,

Many thanks for your detailed insight.

Regards,
AB
FI
☆

Austria,
2012-10-08 12:39

@ Helmut
Posting: # 9332

## Arbitrary (and unjustified) cut-off of r²

Dear Helmut,

❝ In other words, an $$\small{r^2}$$ of 0.6486 from five data points denotes the same ‘quality of fit’ as an $$\small{r^2}$$ of 0.9755 from three. Searching the forum I get the impression that you (AB) are not alone with a cut-off of 0.80. Justification: nil.

❝ BTW, visual inspection of fits is mandatory... Don’t trust in numbers alone. A classical example is Anscombe’s quartet.2

Small add-on: the adjusted-r² method can be misleading in the sense of "the more (data points) the better"! Considering the PK of azithromycin, "the less the better" might apply, because there are (at least?) three elimination phases for Azi (uptake into white blood cells, rapid distribution into tissue, which would resemble Anscombe's set 2) and a very long t1/2 that depends (!) on the time points used for the calculation. Since the terminal elimination must be estimated, and the adjusted-r² method in a previous study mostly selected 3 to 5 points, but sometimes also 12 (!), should the number of time points be limited (to 3 or 4) to reflect the PK? And what if one concentration looks like an analytical mistake (?) that confounds t1/2 in such a way that the slope increases? Where should the cut-off for the adjusted r² be put?

FI
Helmut
★★★

Vienna, Austria,
2012-10-08 15:45

@ FI
Posting: # 9334

## Predominant half life; exclusions

Servus Franz!

❝ Considering the PK of Azithromycin, "the less the better" could be considered, because there are (at least?) 3 elimination phases […]. As terminal elimination needs to be calculated and the adj r² method from a previous study took mostly 3 to 5 points, but sometimes also 12 (!), should the time points be limited (to 3 or 4) to reflect the PK?

Multiphasic PK can be problematic, especially if the volumes of distribution are variable. I once had to deal with a drug (3 phases) where the terminal half-life was ~3 days and the volume of distribution of the deep compartment was very large. Was this phase important? No. Running a PopPK model, it turned out that this compartment accounted for <1% of the AUC. In my case V2 showed little variability, but with large variability the predominant half-life might be the second phase in some subjects and the third in others… See also Boxenbaum & Battle (1995).*

❝ What if one concentration looks to be an analytical mistake(?), that confounds t1/2 in such a way that the slope increases...?

Since according to the EMA’s bioanalytical GL a blind plausibility review of data leading to confirmation/rejection (aka “pharmacokinetic repeat”) is not acceptable any more – bad luck. If other countries are concerned: Have an SOP in place, repeat the analysis, and cross fingers. Other options (?):
• Exclude the suspect value from the estimation of λz. Keep it in the calculation of AUCt and base the calculation of AUC not on Ct, but on its estimate (in Phoenix/WinNonlin’s terminology: AUCinfpred instead of AUCinfobs).
• Exclude the subject from the comparison of AUC, but keep him/her for the comparison of Cmax. Since the latter is more variable, and the study should be powered to show BE for the most variable metric, this should not hurt too much.
• If you expect problems beforehand – and have an IR formulation – go with AUC72 instead. No more hassle with extrapolations.
Of course everything should be covered by SOPs and – preferably – described in the protocol as well.
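The first option above can be illustrated with a small sketch: fit λz on the selected terminal points only, then extrapolate from the predicted rather than the observed last concentration. Illustrative Python, assuming the linear trapezoidal rule; the function names are made up, not Phoenix/WinNonlin's API:

```python
import math


def lambda_z(t, c):
    """Log-linear regression on the selected terminal points;
    returns (lambda_z, intercept) of ln C = intercept - lambda_z * t."""
    y = [math.log(ci) for ci in c]
    mt, my = sum(t) / len(t), sum(y) / len(y)
    sxy = sum((a - mt) * (b - my) for a, b in zip(t, y))
    sxx = sum((a - mt) ** 2 for a in t)
    slope = sxy / sxx
    return -slope, my - slope * mt


def auc_extrapolated(t, c, terminal_idx):
    """AUC(0-tlast) by linear trapezoids plus both extrapolations.
    terminal_idx selects the points used for lambda_z, so a suspect
    Clast can be excluded from the fit yet kept in AUClast."""
    auc_t = sum((t[i + 1] - t[i]) * (c[i + 1] + c[i]) / 2
                for i in range(len(t) - 1))
    lz, icpt = lambda_z([t[i] for i in terminal_idx],
                        [c[i] for i in terminal_idx])
    clast_pred = math.exp(icpt - lz * t[-1])  # estimate of Clast from the fit
    return {"AUClast": auc_t,
            "AUCinf_obs": auc_t + c[-1] / lz,
            "AUCinf_pred": auc_t + clast_pred / lz}
```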

❝ Where to put the cut-off for adj r²?

Nowhere. Forget it. It doesn't make any sense, IMHO. For a bad example see this thread. I would be happy to see a publication justifying an algorithm that allows automatic selection of the terminal phase in multi-compartment PK. If anybody knows a single one, please let me know. See also Ref. 2 at the end of this thread.

Helmut Schütz
d_labes
★★★

Berlin, Germany,
2012-10-09 11:33

@ Helmut
Posting: # 9352

## Excluding time points for lambdaZ

Dear Helmut, dear FI!

❝ ❝ What if one concentration looks to be an analytical mistake(?), that confounds t1/2 in such a way that the slope increases...?

❝ ...

❝ Other options (?):

• Exclude the suspect value from the estimation of λz. Keep it in the calculation of AUCt and base the calculation of AUC not on Ct, but on its estimate (in Phoenix/WinNonlin’s terminology: AUCinfpred instead of AUCinfobs).

• ...

In this post I claimed: "I have never seen deficiency questions concerning the fit of the terminal phase of concentration-time courses in my ~30-year career. Even if the 'fit' was done with only 2 points."
Never say never.
Quite recently I got:

… The applicant should justify the calculation of the terminal rate constant for patient #xxx, reference/r.1, patient #yyy, test/r.2 …

(It was a replicate BE study in patients).
The questioned cases had in common that the last measured concentration was increased compared to the preceding ones and did not fit the linear part used for the log-linear regression. To avoid grossly overestimating the terminal half-life in such situations, my standard practice is to act according to Helmut's first option above.

It seems some regulators' opinions do not match mine.

Regards,

Detlew
Helmut
★★★

Vienna, Austria,
2012-10-09 16:09

@ d_labes
Posting: # 9356

## Analytical variability

Dear Detlew!

❝ In this post I had claimed "I never have seen deficiency questions concerning the fit of the terminal phase of concentration time courses in my ~30 years career. Even if the 'fit' was done with only 2 points".

I remember this post very well. So far I have received only one such request myself (from the sponsor, not an agency), asking why I selected specific time points and not others. I answered with "visual inspection of the fit" (aka eye-ball PK), as recommended in the literature. BTW, I don't know a single reference suggesting maximum adjusted R² or making any statement about exclusion.

❝ Never say never.

❝ Quite recently I got: … The applicant should justify the calculation of the terminal rate constant for patient #xxx, reference/r.1, patient #yyy, test/r.2 … […].

❝ The questioned cases had in common that the last measured concentration was increased compared to the preceding ones and did not fit the linear part used for the log-linear regression. To avoid grossly overestimating the terminal half-life in such situations, my standard practice is to act according to Helmut's first option above.

Oh no! What will you answer? Maybe this post helps. It seems that (some) regulators are not aware of the consequences of the (acceptable!) limitations of analytical methods (20% inaccuracy, 20% imprecision at the LLOQ) and take results as set in stone. As a finger exercise we once derived the confidence bands of weighted inverse regression (aka calibration), which required some nasty algebra (partial derivatives, etc.).* Then you can come up not only with estimated concentrations but also with their confidence intervals (asymmetric, since the confidence bands of a linear function are two hyperbolas). If the CIs of two concentrations overlap, they are not significantly different…

• If one doesn’t have the knowledge and stamina to step into the algebra, here is an alternative: calculate the confidence band of the regression (available in all statistical packages). To get the lower CL of the estimated concentration (x0|y0), use the bisection algorithm with starting values xlow=0 and xhi=x0, evaluating the upper confidence limit of the regression at these values. To get the upper CL, run the algorithm between x0 and the ULOQ.
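Under stated assumptions (unweighted regression for brevity; scipy assumed available for the t-quantile), this bisection alternative might look like the following sketch: the lower CL of x0 is the x at which the upper confidence band of the calibration line reaches the measured response y0, and vice versa for the upper CL.

```python
import math

from scipy import stats  # assumed available, for the t-quantile


def fit_with_bands(x, y, alpha=0.05):
    """OLS calibration fit plus a function for the confidence band
    of the mean response."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    b = sum((a - mx) * (c - my) for a, c in zip(x, y)) / sxx
    a0 = my - b * mx
    s2 = sum((c - (a0 + b * a)) ** 2 for a, c in zip(x, y)) / (n - 2)
    tq = stats.t.ppf(1 - alpha / 2, n - 2)

    def band(xv, upper=True):
        half = tq * math.sqrt(s2 * (1 / n + (xv - mx) ** 2 / sxx))
        return a0 + b * xv + (half if upper else -half)

    return a0, b, band


def inverse_cl(y0, band, lo, hi, upper_band, tol=1e-10):
    """Bisection: find x in [lo, hi] where the chosen band equals y0."""
    f = lambda xv: band(xv, upper=upper_band) - y0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2
```

For an increasing calibration line, the lower CL of x0 comes from the upper band searched on [0, x0] and the upper CL from the lower band searched on [x0, ULOQ].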

Helmut Schütz
d_labes
★★★

Berlin, Germany,
2012-10-09 17:15

@ Helmut
Posting: # 9357

Dear Helmut!

❝ Oh no! What will you answer? Maybe this post helps. Seems that (some) regulators are not aware about the consequences of (acceptable!) limitations of analytical methods (20% inaccuracy, 20% imprecision at the LLOQ) and take results as set in stone. ...

My answer was something like this (maybe it helps others):
The concentration-time points selected for the calculation of the terminal rate constant (λz) were chosen by visual inspection, as recommended in the literature.
The last concentration of the questioned terminal-rate-constant calculations does not fit the linear part of the concentration-time curve (in the log-linear plot). This behaviour is not uncommon if the concentrations are in the range of the LLOQ. To avoid grossly overestimating the terminal half-life in such cases, it is recommended to leave out such measurement points. See, for instance, the book
Hauschke, Steinijans, Pigeot
"Bioequivalence Studies in Drug Development"
Wiley, Chichester (2007)
Chapter 2 “Metrics to characterize concentration-time profiles in single- and multiple-dose bioequivalence studies” especially Fig. 2.3

On the other hand, the reliable estimation of the terminal rate constant, which is needed for a reliable estimate of AUC(0-inf), has the sole purpose of assuring that the observation time is long enough, i.e., that AUC(0-tlast) covers at least 80% of AUC(0-inf) (or, the other way round, that the residual area AUC(tlast-inf) is ≤20% of AUC(0-inf)).

AUC(0-inf) itself is not a primary endpoint on which the bioequivalence decision is based (neither according to the EMA guideline nor according to the study protocol). The bioequivalence decision taken in the study report is thus in no way affected by how the terminal rate constant was calculated.
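That "sole purpose" amounts to a simple coverage check. An illustrative Python sketch (λz assumed to be estimated elsewhere; linear trapezoidal AUC):

```python
import math


def residual_area_ok(t, c, lz, max_extrapolated=0.20):
    """Check that the residual area AUC(tlast-inf) = Clast / lambda_z
    is at most 20% of AUC(0-inf); returns (ok, extrapolated fraction)."""
    auc_last = sum((t[i + 1] - t[i]) * (c[i + 1] + c[i]) / 2
                   for i in range(len(t) - 1))
    residual = c[-1] / lz
    frac = residual / (auc_last + residual)
    return frac <= max_extrapolated, frac
```

With sampling out to roughly five half-lives the check passes; truncate the profile early and the extrapolated fraction blows up.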

Regards,

Detlew
Helmut
★★★

Vienna, Austria,
2012-10-09 21:43

@ d_labes
Posting: # 9362

## Well done!

Dear Detlew,

Good answer!

❝ On the other hand, the reliable estimation of the terminal rate constant, which is needed for a reliable estimate of AUC(0-inf), has the sole purpose of assuring that the observation time is long enough, i.e., that AUC(0-tlast) covers at least 80% of AUC(0-inf) (or, the other way round, that the residual area AUC(tlast-inf) is ≤20% of AUC(0-inf)).

❝ AUC(0-inf) itself is not a primary endpoint on which the bioequivalence decision is based (neither according to the EMA guideline nor according to the study protocol). The bioequivalence decision taken in the study report is thus in no way affected by how the terminal rate constant was calculated.

Agreed, but some hard-core assessor might tell you that if you cannot reliably estimate AUC(0-inf), you have not demonstrated that AUCt is an acceptable metric for the BE assessment. That's a vicious circle.

BTW, I didn’t want to go too much off-topic (as I did above), so I started another thread about analytical variability.

Helmut Schütz