Bioequivalence and Bioavailability Forum

AB
Junior

India,
2012-05-04 15:13

Posting: # 8514
Views: 7,897
 

 SAS error in 3 way ref replicate study [RSABE / ABEL]

Dear all,

We conducted a BE study in a 3-way reference-replicated design for the FDA and ran into the following error while running the SAS code for the unscaled 90% bioequivalence confidence intervals given in the "Draft Guidance on Progesterone" for the parameter AUCI.

"NOTE: 20 observations are not included because of missing values.
WARNING: Did not converge."

These 20 missing observations are the AUCI values which were not calculated in the respective treatment groups because their r² values were below 80%.

However, when we re-ran the analysis including these 20 missing AUCI values (recalculated without applying the r² criterion), there was no error.

Please suggest the correct way to deal with this. :confused:

Thanks in advance

Regards,
AB
jag009
Hero

NJ,
2012-05-04 15:36
(edited by jag009 on 2012-05-04 16:15)

@ AB
Posting: # 8515
Views: 6,947
 

 SAS error in 3 way ref replicate study

Hi AB,

Give us more info...

Sample size n?

John
AB
Junior

India,
2012-05-05 06:45

@ jag009
Posting: # 8516
Views: 6,878
 

 SAS error in 3 way ref replicate study

Hi jag009,

» Give us more info...
» Sample size n?

The sample size is 60.

Regards,
AB
d_labes
Hero

Berlin, Germany,
2012-05-06 10:10
(edited by d_labes on 2012-05-06 11:04)

@ AB
Posting: # 8518
Views: 7,095
 

 FDA's ABE code and partial replicate design

Dear AB,

» "NOTE: 20 observations are not included because of missing values.
» WARNING: Did not converge."
»
» These 20 missing observations are the AUCI values which were not calculated in the respective treatment groups because their r² values were below 80%.
»
» However, when we re-ran the analysis including these 20 missing AUCI values (recalculated without applying the r² criterion), there was no error.

Point 1:
The FDA ABE code for replicate crossover studies using SAS Proc MIXED fits a model which is overspecified for data from a partial replicate design: the intra-subject variability of the Test formulation is confounded with the subject-by-formulation interaction in that design and is not identifiable on its own.

This may lead to convergence problems in the REML estimation (see this thread) or to unreliable estimates of the intra-subject variances, especially for the Test formulation (see this thread).
Your missing values seem to exacerbate the problem, but they are not its source IMHO :no:.

Ways out? I don't know exactly.
The logical way within the method given in the progesterone guidance would be not to use the Proc MIXED code at all, but to take the estimate and its 90% confidence interval from the evaluation of the intra-subject contrasts of T vs. R (step "Intermediate analysis – ilat") as the measure of ABE.
I have always wondered why the Proc MIXED code is recommended at all if the evaluation via intra-subject contrasts indicates that scaled ABE is not applicable. On the other hand, if applicable, the ABE criterion evaluated via intra-subject contrasts is part of the scaled ABE criterion. That's illogical to me.

Another possibility is to reduce the model in the FDA Proc MIXED code. Neglecting the subject-by-formulation interaction (setting it to zero) leads to a somewhat better-behaved model. You can achieve this by changing the covariance structure in
  RANDOM TRT/TYPE=FA0(2) SUB=SUBJ G;
from FA0(2) (or CSH) to CS:
  RANDOM TRT/TYPE=CS SUB=SUBJ G;

Point 2:
Not calculating the AUC(0-inf) values whenever the fit of the terminal part of the concentration-time curve has an r² below 80% is statistically not very sound, not to say nonsense IMHO :smoke:.
This holds regardless of the study design used.
Search the forum and you will find some (partly lengthy) discussions about that subject. See here and here for instance.

Regards,

Detlew
Helmut
Hero
Homepage
Vienna, Austria,
2012-05-06 13:29

@ d_labes
Posting: # 8519
Views: 6,961
 

 Arbitrary (and unjustified) cut-off of r²

Dear Detlew & AB!

» Not to calculate the AUC(0-inf) values if the fit of the terminal part of the concentration-time curves had an r2 value less than 80% is at least statistically not very sound, not to say nonsense IMHO :smoke:.

Yes! Explained variance (sloppy: information) in regression (strongly!) depends on the sample size. See here and a rather lengthy thread from 2002 at David Bourne's PKPD-list. Critical values of r according to Odeh (1982)1 (and modified for r²); one-sided, 5%:
n     r       r²
————————————————————
 3   0.9877  0.9755
 4   0.9000  0.8100
 5   0.805   0.6486
 6   0.729   0.5319
 7   0.669   0.4481
 8   0.621   0.3863
 9   0.582   0.3390
10   0.549   0.3018
11   0.521   0.2719
12   0.497   0.2473
13   0.476   0.2267
14   0.457   0.2093
15   0.441   0.1944

In other words, an r² of 0.6486 from five data points denotes the same 'quality of fit' as an r² of 0.9755 from three. Searching the forum I get the impression that you (AB) are not alone with a cut-off of 0.80. Justification: nil. Maybe there is some copy-pasting going on? If you really want to use a cut-off (which I don't recommend and which is not required by any guideline), take the number of data points into account. I strongly suggest revising your SOP.
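For the curious: Odeh's critical values need not be looked up — they follow from the t-distribution via r = t/√(t² + df) with df = n − 2. A quick sketch in base R to reproduce the table above (the function name crit.r is mine):

```r
# One-sided 5% critical values of r (and r²) as a function of n,
# derived from the t-distribution: r = t / sqrt(t^2 + df), df = n - 2.
crit.r <- function(n, alpha = 0.05) {
  df <- n - 2
  t  <- qt(1 - alpha, df)
  t / sqrt(t^2 + df)
}
n <- 3:15
r <- crit.r(n)
round(cbind(n = n, r = r, r2 = r^2), 4)
# e.g. n = 3 gives r = 0.9877 (r² = 0.9755); n = 10 gives r = 0.5494
```

This makes the point directly: the same α = 0.05 corresponds to wildly different r² thresholds depending on how many points enter the fit.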
BTW, visual inspection of fits is mandatory (see here with references). Don't trust numbers alone. A classical example is Anscombe's quartet.2

[Figure: Anscombe's quartet – four scatter plots sharing near-identical summary statistics]

All four data sets: x̄ = 9.0, s²x = 11, ȳ ≈ 7.5, s²y ≈ 4.12, rxy ≈ 0.816, regression ŷ = 3 + 0.5 x


  1. Odeh RE. Critical values of the sample product-moment correlation coefficient in the bivariate distribution. Commun Statist–Simula Computa. 1982;11(1):1–26. doi:10.1080/03610918208812243
  2. Anscombe FJ. Graphs in statistical analysis. Am Stat. 1973;27:17–21. doi:10.2307/2682899

All the best,
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. ☼
d_labes
Hero

Berlin, Germany,
2012-05-07 08:48

@ Helmut
Posting: # 8520
Views: 6,782
 

 Anscombe quartet

Dear Helmut!

» BTW, visual inspection of fits is mandatory (see here with references). Don’t trust in numbers alone. A classical example is Anscombe’s quartet.2

Very nice illustration :ok:.
Thanks for that! I was not aware of it until now, although it was invented back in 1973 (if Wikipedia is correct).

Regards,

Detlew
Helmut
Hero
Homepage
Vienna, Austria,
2012-05-07 11:15

@ d_labes
Posting: # 8522
Views: 6,849
 

 Anscombe quartet in R

Dear Detlew!

» Very nice illustration :ok:.

I hijacked the code from Wikimedia Commons and learned that the data are available in R’s standard installation. Give it a try:
help(anscombe)
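A few lines beyond help(anscombe) show what makes the quartet remarkable – near-identical summary statistics and regression lines despite completely different shapes (nothing here is BE-specific; just base R):

```r
data(anscombe)   # four x/y pairs, shipped with base R
# means/variances of x, correlation, and the fitted line for each pair:
stats <- sapply(1:4, function(i) {
  x <- anscombe[[paste0("x", i)]]
  y <- anscombe[[paste0("y", i)]]
  c(mean.x = mean(x), var.x = var(x),
    mean.y = mean(y), r = cor(x, y),
    coef(lm(y ~ x)))
})
round(stats, 3)
# every column: mean.x 9, var.x 11, mean.y ~7.5, r ~0.816,
# intercept ~3.0, slope ~0.5 – yet plotting each pair tells four stories
```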

AB
Junior

India,
2012-05-07 12:07

@ Helmut
Posting: # 8523
Views: 6,808
 

 Anscombe quartet in R

Dear Detlew & HS,

Many thanks for your detailed insight.

Regards,
AB
FI
Junior

Austria,
2012-10-08 10:39

@ Helmut
Posting: # 9332
Views: 6,380
 

 Arbitrary (and unjustified) cut-off of r²

Dear Helmut,

» In other words, an r² of 0.6486 from five data points denotes the same 'quality of fit' as an r² of 0.9755 from three. Searching the forum I get the impression that you (AB) are not alone with a cut-off of 0.80. Justification: nil.
» BTW, visual inspection of fits is mandatory... Don’t trust in numbers alone. A classical example is Anscombe’s quartet.2

Small add-on: the adjusted-r² method can be misleading in the sense of "the more (data points) the better"! For azithromycin, "the less the better" might even apply, because there are (at least?) three elimination phases (uptake into white blood cells, rapid distribution into tissue – which would resemble Anscombe2) and a very long t1/2, depending (!) on the time points used for the calculation. Since the terminal elimination has to be estimated, and the adjusted-r² method in a previous study mostly picked 3 to 5 points but sometimes also 12 (!), should the number of time points be limited (to 3 or 4) to reflect the PK? And what if one concentration looks like an analytical mistake (?) and confounds t1/2 in such a way that the slope increases? Where to put the cut-off for the adjusted r²?

Thanks a lot in advance
FI
Helmut
Hero
Homepage
Vienna, Austria,
2012-10-08 13:45

@ FI
Posting: # 9334
Views: 6,357
 

 Predominant half life; exclusions

Servus Franz!

» Considering the PK of Azithromycin, "the less the better" could be considered, because there are (at least?) 3 elimination phases […]. As terminal elimination needs to be calculated and the adj r² method from previous study took mostly 3 to 5 points, but sometimes also 12(!), should the timepoints be limited (to 4 to 3), to reflect PK?

Multiphasic PK can be problematic, especially if the volumes of distribution are variable. I once had to deal with a drug (3 phases) where the terminal half-life was ~3 days and the volume of distribution of the deep compartment was very large. Was this phase important? No. Running a PopPK model, it turned out that this compartment accounted for <1% of the AUC. In my case V2 showed little variability, but with large variability the predominant half-life might be the second phase in some subjects and the third in others… See also Boxenbaum & Battle (1995).*

» What if one concentration looks to be an analytical mistake(?), that confounds t1/2 in such a way that the slope increases...?

Since according to the EMA’s bioanalytical GL a blind plausibility review of data leading to confirmation/rejection (aka “pharmacokinetic repeat”) is not acceptable – bad luck. If other countries are concerned: Have an SOP in place, repeat the analysis, and cross fingers. Other options (?):
  • Exclude the suspect value from the estimation of λz. Keep it in the calculation of AUCt and base the calculation of AUC not on Ct, but on its estimate (in Phoenix/WinNonlin’s terminology: AUCinfpred instead of AUCinfobs).
  • Exclude the subject from the comparison of AUC, but keep him/her for the comparison of Cmax. Since the latter is more variable and the study should be powered to show BE for the most variable metric, it should not hurt too much.
  • If you expect problems beforehand – and have an IR formulation – go with AUC72 instead. No more hassle with extrapolations.
Of course everything should be covered by SOPs and – preferably – described in the protocol as well.
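The first option above can be sketched in a few lines of R. All numbers below are made up for illustration; AUCinf_obs vs. AUCinf_pred follows the Phoenix/WinNonlin terminology mentioned above:

```r
# Hypothetical profile: the last three points define the terminal phase.
t <- c(0.5, 1, 2, 4, 8, 12, 24)
C <- c(12, 20, 16, 9, 4, 2, 0.6)
# AUC(0-tlast) by the linear trapezoidal rule:
auc.t <- sum(diff(t) * (head(C, -1) + tail(C, -1)) / 2)
# lambda_z from log-linear regression of the last three points:
fit <- lm(log(C[5:7]) ~ t[5:7])
lz  <- -coef(fit)[[2]]
# extrapolated area based on the observed vs. the predicted Clast:
Clast.pred   <- exp(fitted(fit)[[3]])     # Clast estimated from the fit
auc.inf.obs  <- auc.t + C[7] / lz         # AUCinf_obs
auc.inf.pred <- auc.t + Clast.pred / lz   # AUCinf_pred
```

If the last concentration is suspect, excluding it from the λz fit and basing the extrapolation on the predicted Clast (AUCinf_pred) keeps the analytical artefact out of AUC(0-inf) while AUCt stays untouched.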

» Where to put the cut-off for adj r²?

Nowhere. Forget it. Doesn’t make any sense, IMHO. For a bad example see this thread. I would be happy to see a publication justifying an algorithm which would allow automatic selection of the terminal phase in multicompartment PK. If anybody knows a single one, please let me know. See also Ref.#2 at the end of this thread.



d_labes
Hero

Berlin, Germany,
2012-10-09 09:33

@ Helmut
Posting: # 9352
Views: 6,176
 

 Excluding time points for lambdaZ

Dear Helmut, dear FI!

» » What if one concentration looks to be an analytical mistake(?), that confounds t1/2 in such a way that the slope increases...?
» ...
» Other options (?):
»   • Exclude the suspect value from the estimation of λz. Keep it in the calculation of AUCt and base the calculation of AUC not on Ct, but on its estimate (in Phoenix/WinNonlin's terminology: AUCinfpred instead of AUCinfobs).
»   • ...

In this post I had claimed "I never have seen deficiency questions concerning the fit of the terminal phase of concentration time courses in my ~30 years career. Even if the 'fit' was done with only 2 points".
Never say never. :no:
Quite recently I got:

… The applicant should justify the calculation of the terminal rate constant for patient #xxx, reference/r.1, patient #yyy, test/r.2 …

(It was a replicate BE study in patients).
The questioned cases had in common that the last measured concentration increased compared to the preceding ones and did not fit the linear part of the log-linear regression. To avoid grossly overestimating the terminal half-life in such situations, my standard practice is to act according to Helmut's first option above.

It seems some regulators' opinions do not match mine.

Regards,

Detlew
Helmut
Hero
Homepage
Vienna, Austria,
2012-10-09 14:09

@ d_labes
Posting: # 9356
Views: 6,273
 

 Analytical variability

Dear Detlew!

» In this post I had claimed "I never have seen deficiency questions concerning the fit of the terminal phase of concentration time courses in my ~30 years career. Even if the 'fit' was done with only 2 points".

I remember this post very well. ;-) So far I have got only one request myself (by the sponsor, not an agency) asking why I selected specific time points and not others. I answered "by visual inspection of the fit" (aka eye-ball PK), as recommended in the literature. BTW, I don't know a single reference suggesting maximum R²adj or making a statement about exclusion.

» Say never never. :no:
» Quite recently I got: … The applicant should justify the calculation of the terminal rate constant for patient #xxx, reference/r.1, patient #yyy, test/r.2 … […].
» The questioned cases had in common that the last measured concentration was increasing compared to the preceding ones and doesn't fit into the linear part for the log-linear regression. To not grossly overestimate the terminal half-life in such situations it is my standard operation to act according to Helmut's first option above.

Oh no! What will you answer? Maybe this post helps. It seems that (some) regulators are not aware of the consequences of the (acceptable!) limitations of analytical methods (20% inaccuracy, 20% imprecision at the LLOQ) and take results as set in stone. As a finger exercise we once derived the confidence bands of weighted inverse regression (aka calibration), which required some nasty algebra (partial derivatives, etc.).* Then you can come up not only with estimated concentrations but also with their confidence intervals (asymmetric, since the confidence bands of a linear function are two hyperbolas). If the CIs of two concentrations overlap, they are not significantly different…


  • If one doesn't have the knowledge and stamina to step into the algebra, here is an alternative: calculate the CI of the regression (available in all statistical packages). To get the lower CL of the estimated concentration (x0|y0), run the bisection algorithm with starting values xlow=0 and xhi=x0, comparing the estimated upper CL at these values against y0. To get the upper CL, run the algorithm between x0 and the ULOQ.
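Under these assumptions the idea can be sketched in R, with uniroot() doing the root search on the confidence band of a calibration line. The calibration data, the response y0, and the variable names are all hypothetical, and the fit is unweighted for simplicity:

```r
# Hypothetical (unweighted) calibration; nothing here is from a real method.
conc <- c(1, 2, 5, 10, 20, 50)
resp <- c(0.11, 0.19, 0.52, 0.98, 2.05, 4.90)
cal  <- lm(resp ~ conc)
y0   <- 1.5                                      # response of the unknown
x0   <- (y0 - coef(cal)[[1]]) / coef(cal)[[2]]   # point estimate of the conc.
band <- function(x, which)                       # confidence band of the line
  predict(cal, data.frame(conc = x), interval = "confidence")[, which]
# lower CL of x0: where the *upper* band crosses y0, searched in (0, x0);
# upper CL of x0: where the *lower* band crosses y0, searched in (x0, ULOQ).
x.lo <- uniroot(function(x) band(x, "upr") - y0, c(0, x0))$root
x.hi <- uniroot(function(x) band(x, "lwr") - y0, c(x0, 50))$root
c(lower = x.lo, estimate = x0, upper = x.hi)
```

As expected from the hyperbolic bands, the resulting interval around x0 is asymmetric.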

d_labes
Hero

Berlin, Germany,
2012-10-09 15:15
(edited by d_labes on 2012-10-09 16:53)

@ Helmut
Posting: # 9357
Views: 6,239
 

 Answer machine

Dear Helmut!

» Oh no! What will you answer? Maybe this post helps. Seems that (some) regulators are not aware about the consequences of (acceptable!) limitations of analytical methods (20% inaccuracy, 20% imprecision at the LLOQ) and take results as set in stone. ...

My answer is something like this (maybe it helps others):
The concentration time points selected for calculation of the terminal rate constant (lambdaZ) are chosen by visual inspection as recommended in the literature. :-D
The last concentration in the questioned terminal-rate-constant calculations does not fit the linear part of the concentration-time curve (in the log-linear plot). This behaviour is common if concentrations are in the range of the LLOQ. To not grossly overestimate the terminal half-life in such cases it is recommended to leave out such measurement points. See for instance the book
Hauschke, Steinijans, Pigeot
"Bioequivalence Studies in Drug Development"
Wiley, Chichester (2007)
Chapter 2 “Metrics to characterize concentration-time profiles in single- and multiple-dose bioequivalence studies” especially Fig. 2.3

:blahblah:

On the other hand, the reliable estimation of the terminal rate constant, which is needed for a reliable estimate of AUC(0-inf), has the one and only purpose of assuring that the observation period is long enough, i.e., that AUC(0-tlast) covers at least 80% of AUC(0-inf) (or, the other way round, that the residual area AUC(tlast-inf) is ≤20% of AUC(0-inf)).

AUC(0-inf) itself is not a primary endpoint on which the bioequivalence decision is based (neither according to the EMA guideline nor to the study protocol). The bioequivalence decision taken in the study report is thus in no way affected by the way the terminal rate constant is calculated.
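The 80% coverage criterion is easy to make explicit; a minimal sketch with hypothetical numbers:

```r
# Residual-area check: AUC(tlast-inf) should be <= 20% of AUC(0-inf).
auc.t  <- 104.6                     # AUC(0-tlast), hypothetical
clast  <- 0.6                       # last measured concentration
lz     <- 0.114                     # terminal rate constant (1/h)
auc.inf    <- auc.t + clast / lz    # AUC(0-inf)
pct.extrap <- 100 * (1 - auc.t / auc.inf)
pct.extrap                          # ~4.8% here: sampling was long enough
```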

Regards,

Detlew
Helmut
Hero
Homepage
Vienna, Austria,
2012-10-09 19:43

@ d_labes
Posting: # 9362
Views: 6,068
 

 Well done!

Dear Detlew,

Good answer!

» On the other hand the reliable estimation of the terminal rate constant which is needed for a reliable estimate of AUC(0-inf) has the one and only purpose to assure that the observation time is long enough for ensuring that AUC(0-tlast) covers at least 80% of the AUC(0-inf) (or the other way round that the residual area AUC(tlast-inf) is <=20% of AUC(0-inf)).
»
» AUC(0-inf) in itself is not a primary endpoint on which the bioequivalence decision will be based (not according to the EMA guidance and also not according to the Study protocol). The bioequivalence decision taken in the Study Report is thus in no way affected by the way of calculation of the terminal rate constant.

Agree, but some hard-core assessor might tell you that if you cannot reliably estimate AUC you have not demonstrated that AUCt is an acceptable metric for the BE assessment. That’s a vicious circle.

BTW, I didn’t want to go too much off-topic (as I did above), so I started another thread about analytical variability.
