SaraCHenriques
Portugal,
2020-07-01 14:37
Posting: # 21631

Concentration Statistics - BQL substitution [General Statistics]

Hi everyone,

What is your opinion on below quantification limit (BQL) substitution for concentration data summary statistics, by timepoint?

Zero substitution is the one I have seen most often; however, I don't think it is suitable for the calculation of geometric means.

Could you give me your opinion on this matter?

Many thanks!

Sara
Helmut
Vienna, Austria,
2020-07-01 15:27
@ SaraCHenriques
Posting: # 21632


 Concentration Statistics - BQL substitution

Hi Sara,

» What is your opinion on below quantification limit (BQL) substitution for concentration data summary statistics, by timepoint?

You can use the median and quartiles or \(\small{\bar{x}_{geo}\mp SD_{geo}}\) if a certain percentage of samples are measurable (I have seen SOPs with 50%, 67%, and 75%) and nothing (‘not reportable’) otherwise. At the end of the day it’s not important at all (not relevant for the BE assessment). Use whatever you like. 1
See also this (lengthy) thread.
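Such an SOP rule could be sketched like this (a minimal pure-Python sketch; the function name and the 2/3 threshold are my assumptions, not from any particular SOP):

```python
import math

# Hypothetical sketch: summary statistics at a timepoint are reported
# only if enough samples are measurable (>= LLOQ); otherwise 'NR'.
def timepoint_stats(conc, threshold=2/3):
    """conc: concentrations at one timepoint, None marks a BQL value."""
    measurable = [c for c in conc if c is not None]
    if len(measurable) / len(conc) < threshold:
        return "NR"  # not reportable
    logs = [math.log(c) for c in measurable]
    n = len(logs)
    mean_log = sum(logs) / n
    sd_log = (sum((x - mean_log) ** 2 for x in logs) / (n - 1)) ** 0.5
    return {"geo_mean": math.exp(mean_log), "geo_sd": math.exp(sd_log)}

print(timepoint_stats([None, None, None, 1.2, 1.5]))  # 2/5 measurable -> 'NR'
print(timepoint_stats([0.8, 1.2, 1.5, None, 2.0]))    # 4/5 measurable -> stats
```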

» Zero substitution is the one I have seen the most, …

To quote Harold Boxenbaum (Crystal City workshop about bioanalytical method validation, Arlington 1990):

After a dose we know only one thing for sure: The concentration is not zero.

Ended in shouting matches. ;-)

» … I don't think it is suitable for the calculation of geometric means.

Correct, since$$\lim_{x \to 0} \log x=-\infty.$$For simplicity we can say that \(\small{\log 0}\) is undefined. It is reasonable to assume that concentrations (\(\small{x \in \mathbb{R}^+}\)) follow a lognormal distribution, and the geometric mean would be the best estimator of location. Some people chicken out, set BQLs to zero, and present arithmetic means. This leads to funny plots with \(\small{\bar{x}\mp SD,}\) where the lower whisker reaches far below zero. 2 Phew, negative concentrations? Not in this universe.
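A small pure-Python illustration of both points (distribution parameters invented): log(0) is undefined, and for lognormal data the geometric mean, not the arithmetic one, recovers the distribution's median.

```python
import math
import random

# Assumed illustration: zero-substituted BQLs make the geometric mean
# incomputable, since log(0) raises an error (the limit is -infinity).
try:
    math.log(0)
except ValueError as err:
    print("log(0) ->", err)  # math domain error

# For lognormal data the arithmetic mean is pulled up by the right tail,
# while the geometric mean estimates the median of the distribution.
random.seed(1)
conc = [random.lognormvariate(1.0, 0.8) for _ in range(10_000)]
arith = sum(conc) / len(conc)
geo = math.exp(sum(map(math.log, conc)) / len(conc))
print(f"arithmetic mean {arith:.2f}  geometric mean {geo:.2f}  "
      f"theoretical median {math.exp(1.0):.2f}")
```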


  1. Phoenix/WinNonlin by default calculates descriptive statistics only for numeric values. This can lead to strange results. Say, we have \(\small{n-2}\) values which are BQL and two with \(\small{C\geq LLOQ}\). Then we end up with \(\small{\bar{C}_{ar}=\tilde{C}=\tfrac{C_1+C_2}{2},\;\bar{C}_{geo}=\sqrt{C_1\times C_2}}\). Doesn’t make sense. However, we can specify different rule sets for descriptive statistics and plots.
  2. A goody from the FDA’s NDA 204-412 (mesalamine delayed release capsules, n = 238, sampling times: pre-dose, 2, 3, 4, 5, 6, 7, 8, 10, 12, 14, 16, 24, 30, 36, and 48 h post-dose). BQLs were imputed as LLOQ/2.

    [image]
    Splendid, \(\small{\bar{x}\mp SD}\) in bloody Excel. Hey, wait a minute, that’s a fucking line plot… Oh dear! The one-hour intervals at the beginning are plotted as wide as the 12-hour one at the end.

    Let’s see the XY-plot:

    [image]
    [image]
    Do these guys and dolls really believe that at seven hours there’s a ~16% chance that concentrations are −232 and a ~1% chance that concentrations are −731‽ Any statistic implies an underlying distribution. The arithmetic mean implies a normal distribution with \(\small{x \in (-\infty, +\infty)}\). Fantastic.

    Which cult of Pastafarianism do they belong to?
    The one holding that negative masses exist or the one believing in negative lengths?

    [image]
    \(\small{\bar{x}_{geo}\mp SD_{geo}}\) reflects the terrible variability of this drug much better and shows that high concentrations are more likely than low ones.

Dif-tor heh smusma 🖖
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
martin
Austria,
2020-07-02 09:37
@ Helmut
Posting: # 21634


 Concentration Statistics - BQL substitution

Dear Helmut and Sara,

In statistical language: values <LLOQ are informatively censored, and ignoring this fact can lead to biased results.
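A quick simulation of this bias (all numbers are invented for illustration): generate lognormal 'true' concentrations, censor everything below a hypothetical LLOQ, and compare common work-arounds against the uncensored geometric mean.

```python
import math
import random

# Invented-numbers sketch of censoring bias: true geometric mean is
# exp(0) = 1 by construction; see how imputation and exclusion compare.
random.seed(42)
true = [random.lognormvariate(0.0, 1.0) for _ in range(100_000)]
lloq = 0.5  # hypothetical LLOQ

def geo_mean(xs):
    return math.exp(sum(map(math.log, xs)) / len(xs))

full = geo_mean(true)                                          # no censoring
half = geo_mean([x if x >= lloq else lloq / 2 for x in true])  # LLOQ/2 imputation
drop = geo_mean([x for x in true if x >= lloq])                # BQLs excluded
print(f"true {full:.3f}  LLOQ/2 imputed {half:.3f}  BQLs dropped {drop:.3f}")
```

Dropping BQLs truncates the left tail and biases the estimate upward; LLOQ/2 happens to land close to the truth here, but only by luck of the chosen parameters.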

Of course, adequate handling would require some modeling, which seems to contradict regulatory thinking, as NCA is clearly favored in BA/BE studies. However, I would like to bring a recent paper on this topic to your attention: Barnett H, Geys H, Jacobs T, Jaki T. Methods for Non-Compartmental Pharmacokinetic Analysis with Observations below the Limit of Quantification. Stat Biopharm Res. 2020: 1–23.

best regards & hope this helps

Martin
Helmut
Vienna, Austria,
2020-07-02 11:15
@ martin
Posting: # 21635


 Data imputation

Hi Martin,

» […] adequate handling would require some modeling which seems to be in contradiction to regulatory thinking as NCA is clearly favored in BA/BE studies.

Yep. The EMA’s BE-GL states:

Non-compartmental methods should be used for determination of pharmacokinetic parameters in bioequivalence studies. The use of compartmental methods for the estimation of parameters is not acceptable.


» However, I would like to bring a recent paper on this topic to your attention: […]

Why am I not surprised that Thomas recommends kernel density imputation? ;-)
However, PK modeling is not applied. \(\small{\widehat{AUC}}\) is still obtained by NCA, right?

ElMaestro
Belgium?,
2020-07-02 13:48
@ martin
Posting: # 21636


 Concentration Statistics - BQL substitution

Hi Martin,

» In statistical language - values <LLOQ are informatively censored and ignoring this fact can lead to biased results.
»
» Of course, adequate handling would require some modeling which seems to be in contradiction to regulatory thinking as NCA is clearly favored in BA/BE studies. However, I would like to bring a recent paper on this topic to your attention: Barnett H, Geys H, Jacobs T, Jaki T (2020). Methods for Non-Compartmental Pharmacokinetic Analysis with Observations below the Limit of Quantification. Statistics in Biopharmaceutical Research, 1-23

Thanks for the reference.
I'd love to see a work where someone not only debates BQLs but also discusses what the various options imply for the residual variability, and thus the confidence interval, in BE trials; the same for missing values. I have a feeling it might not be a big deal, but I dare not say at this point what "big deal" means quantitatively.

I could be wrong, but...

Best regards,
ElMaestro

No, of course you do not need to audit your CRO if it was inspected in 1968 by the agency of Crabongostan.
martin
Austria,
2020-07-02 14:52
@ ElMaestro
Posting: # 21637


 Concentration Statistics - BQL substitution

Dear ElMaestro,

I am happy to hear that this information was considered useful.

Regarding missing values in BE trials: you may find this work of interest. Of note, it addresses missing values from a conceptual rather than a technical point of view (such as the impact on the width of a CI).

best regards & hope this helps

Martin
Ben
2020-07-07 16:37
@ martin
Posting: # 21655


 Concentration Statistics - BQL substitution

Dear All

» Of course, adequate handling would require some modeling which seems to be in contradiction to regulatory thinking as NCA is clearly favored in BA/BE studies. However, I would like to bring a recent paper on this topic to your attention: Barnett H, Geys H, Jacobs T, Jaki T (2020). Methods for Non-Compartmental Pharmacokinetic Analysis with Observations below the Limit of Quantification. Statistics in Biopharmaceutical Research, 1-23

Well, another approach would be to fight for getting all measured values, including the ones where the lab tells us they are flagged as BLQ (yes, yes, I know they are "not reliable", whatever that means). But I have the feeling it is still better to use them than to ignore them or set them to some fixed value. I had a nice discussion with Helmut about this topic. :-)
Any idea how this approach would behave? (I could not see it in the article; the question is also how to simulate such unreliable data… is it just data with higher variability?)

Best regards,
Ben.
Helmut
Vienna, Austria,
2020-07-08 10:23
@ Ben
Posting: # 21656


 BQL in BE and PK modeling

Hi Ben and all,

» Well, another approach would be to fight for getting all measured values, including the ones where the lab tells us they are flagged as BLQ …

Join “El ingenioso hidalgo Don Quijote de la Mancha” in his tilting at windmills…

» … (yes, yes, I know they are "not reliable", whatever that means).

Easy. At the LLOQ: Accuracy ≤20% and precision ≤20% in chromatography, ≤30% in ligand binding assays. Can be higher, if justified (hard data demonstrating that it is impossible to comply with the rules).
<nitpick>

In chemistry we are bound to the IUPAC’s terminology: inaccuracy, imprecision.
A method with an accuracy of 20% would be useless.* :thumb down:

</nitpick>

» But I have the feeling it is still better to use them than to ignore or set them to some fixed value.

Agree. Gut-feeling as well.

» I had a nice discussion with Helmut about this topic. :-)

A summary: At the first Crystal City meeting about bioanalytical method validation (Arlington 1990) there were heated debates about the topic. Essentially there were two parties: Regulators wanted to have a ‘general rule’ in order to avoid discussions with applicants. Members of the PK modeling community strongly opposed that:
  • Excluding concentrations based on an – arbitrary – cut off leads to truncated distributions and biased estimates.
  • Give us all you have! We incorporate the error already in the model (most used a mixed approach: multiplicative+additive). The more data we have, the better. Yes, we are aware that at the end of the day something below the LOD is not possible.
At the archives of David Bourne’s PKPD-list you find a lot of related discussions (LLOQ, BQL). Experienced modelers like Roger Jelliffe, Nick Holford, and Hans Proost rejected the idea of dropping BQLs. Quoting Nick Holford:

There is no reason not to use these values. It is just silliness that chemists fail to give you the measurements because of an arbitrary cut off that has no real meaning for pharmaco­kinetic analysis. Omitting these values will always cause bias.
One thing is sure about the true concentration – until sufficient time has passed for less than one molecule to be left in the body then the concentration is not 0. This is longer than most people live…


Hence, in my CRO we had an SOP:
  • For BE we reported BQL to comply with the GLs.
  • For PK modeling, we reported LOD ≤ measured < LLOQ with an asterisk and a footnote explaining that for these values A&P >20%. :cool:

» … question is also how to simulate such unreliable data...

By (truncated?) lognormal distributions.

» is it just data with higher variability?

Nope. Higher (in)accuracy as well.
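Such a simulation might be sketched like this (all distribution parameters, CVs, and the bias term below are assumptions of mine, purely for illustration): draw the 'true' LOD–LLOQ concentrations from a truncated lognormal, then apply a larger, biased assay error below the LLOQ than above it.

```python
import math
import random

# Hypothetical sketch: 'unreliable' sub-LLOQ data simulated as a
# truncated lognormal with higher imprecision AND inaccuracy.
random.seed(7)
lod, lloq = 0.05, 0.20  # invented limits

def truncated_lognormal(mu, sigma, lo, hi):
    """Rejection sampling from a lognormal restricted to [lo, hi)."""
    while True:
        x = random.lognormvariate(mu, sigma)
        if lo <= x < hi:
            return x

def measured(true_conc):
    """Assay error: ~15% CV above the LLOQ; +10% bias and ~30% CV below."""
    if true_conc >= lloq:
        return true_conc * math.exp(random.gauss(0.0, 0.15))
    return true_conc * math.exp(random.gauss(0.10, 0.30))

true_bql = [truncated_lognormal(-2.0, 0.8, lod, lloq) for _ in range(5)]
print([round(measured(c), 4) for c in true_bql])
```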


  • There was not a single (‼) chemist in the group writing the EMA’s guideline on bioanalytical method validation… No offense, but the fact that I have a heart does not qualify me to write a guideline for cardiology.

ElMaestro
Belgium?,
2020-07-08 11:35
@ Ben
Posting: # 21657


 Concentration Statistics - BQL substitution

Hi all,

I like the discussion, but I wonder: are BLQs really such an issue?
I mean, we have some rules which we can scientifically debate, but regardless of whether we like them scientifically, do the rules as we know them today lead to actual trouble? :-)

Just think about it from one perspective: regulators mandate the use of the normal linear model in BE, and everyone knows that the model may be right or wrong; probably it is wrong to a varying degree in every dataset. Sometimes we even know, from KS tests or SW tests or God knows what, that the assumption of normality is outright wrong, yet we have to use it anyway. Is it a problem that an unknown proportion of studies definitely does not meet the assumptions, and that those studies fail them with a magnitude and nature whose consequences cannot be assessed?
No, actually BE seems to work rather fine in spite of all this, at least as I see it. Somehow I see the BLQ discussion the same way. Yes, it may not be optimal, but it provides a strict and well-defined way forward where everyone can easily reproduce everything from the raw data. Last time I looked, people taking generics were not dropping dead in the streets. :-)
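As a side note to the normality point, a toy example (assumed parameters, pure Python) of why the analysis is done on the log scale: lognormally distributed data are clearly right-skewed, while their logs are not.

```python
import math
import random

# Toy illustration: sample skewness of lognormal data vs. its logs.
random.seed(3)
x = [random.lognormvariate(0.0, 0.5) for _ in range(5000)]

def skewness(v):
    """Biased sample skewness g1."""
    n = len(v)
    m = sum(v) / n
    s = (sum((t - m) ** 2 for t in v) / n) ** 0.5
    return sum((t - m) ** 3 for t in v) / (n * s ** 3)

print(f"raw skewness: {skewness(x):+.2f}")                          # clearly > 0
print(f"log skewness: {skewness([math.log(t) for t in x]):+.2f}")   # near 0
```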

A biased estimator may indeed be a useful estimator.
A BLQ rule which is not scientifically optimal when looked at through the keyhole may still in a wider perspective be a good BLQ rule.
How about the AUCinf extrapolation rule of 80%? Is it a complete disaster that it isn’t 83%? I mean, at the end of the day, the smaller the extrapolated area, the better we know the profile from the get-go, so 83% must be better than 80%. And so forth.
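For readers unfamiliar with the rule: the extrapolated area is \(\small{C_\textrm{last}/\lambda_z}\), judged relative to \(\small{AUC_{0-\infty}}\). A sketch with invented numbers (the choice of the last four points for the \(\small{\lambda_z}\) fit is my assumption):

```python
import math

# Invented profile: times (h) and concentrations for illustration only.
t = [0.5, 1, 2, 4, 6, 8, 12, 24]
c = [4.0, 7.5, 6.0, 3.8, 2.4, 1.5, 0.6, 0.1]

# linear trapezoidal AUC up to the last measurable concentration
auc_last = sum((t[i+1] - t[i]) * (c[i] + c[i+1]) / 2 for i in range(len(t) - 1))

# lambda_z from an ordinary log-linear fit of the last four points
lt = t[-4:]
lc = [math.log(x) for x in c[-4:]]
n = len(lt)
slope = (n * sum(a * b for a, b in zip(lt, lc)) - sum(lt) * sum(lc)) / \
        (n * sum(a * a for a in lt) - sum(lt) ** 2)
lambda_z = -slope
auc_inf = auc_last + c[-1] / lambda_z
print(f"extrapolated: {100 * (1 - auc_last / auc_inf):.1f}% of AUCinf")
```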

Just trying to put BE things into a perspective here. :-)

Best regards,
ElMaestro
mittyri
Russia,
2020-07-08 12:03
@ Ben
Posting: # 21658


 Roger again

Dear Ben,

I think you'll find this discussion useful, where the anchor points of Roger's position are given.
I'd recommend reading Ohlbe's answer carefully.

Kind regards,
Mittyri
The Bioequivalence and Bioavailability Forum is hosted by BEBAC Ing. Helmut Schütz.