ANVISA's POV on "triangulation points" [Regulatives / Guidelines]
❝ […] acting by your own thinking without questioning it has never been the best way to deal with the unknown.
Enlightenment is man’s emergence from his self-imposed immaturity for which he himself was responsible. Immaturity and dependence are the inability to use one’s own intellect without the direction of another. One is responsible for this immaturity and dependence, if its cause is not a lack of intelligence, but a lack of determination and courage to think without the direction of another. Sapere aude! Have courage to use your own understanding! is therefore the slogan of Enlightenment.
Immanuel Kant, Answering the Question: What is Enlightenment? (1784)
I’m definitely in favor of my own thinking over following guidelines unquestioned.
❝ If we only disregard one or more points of the individual profiles thinking that they’re impossible or unlikely, without the real proof of it, it might be “backfiring” since you don’t have also an assurance that all the other points are free of bias. How would we be sure of it? Critical sense? Experience? The nice shape of the profile?
Almost all of it. The shape of a particular profile doesn’t have to be “nice” but should be consistent with the others.
❝ How would we be sure that in the same way that these points are unlikely a discrepant Cmax would not be? We do know that exclusion of Cmax points is not well seen by regulators and that Cmax is a point like any other in the curve, also susceptible to swap of samples or mistakes in the quantification.
I agree that the area around tmax is problematic, but mostly if we consider the AUC. If you have frequent sampling around the expected tmax, a single BQL should not hurt in the comparison of Cmax.
❝ In neither of those cases we are in favour of excluding time points because a good intention of interpolation of data may result in more bias than before.
Strongly disagree. Example: theoretical profile C(t) = 100·(ℯ^(–0.1155·t) – ℯ^(–0.6931·t)), AUC∞ 721.3; noise added; λz estimated as 0.1093. Original data, and the 8 h sample set either to zero or to BQL. Below is how the data are interpreted by software (Phoenix/WinNonlin, Kinetica):
        ───── original ─────    ── 8 h set to zero ──    ────── 8 h BQL ──────
  t       C    pAUC¹   pAUC²      C    pAUC¹   pAUC²      C     pAUC¹   pAUC²
───────────────────────────────────────────────────────────────────────────────
  0       0      0       0        0      0       0        0       0       0
  1     41.2   20.6    20.6     41.2   20.6    20.6     41.2    20.6    20.6
  2     54.2   68.3    68.3     54.2   68.3    68.3     54.2    68.3    68.3
  3     61.6  126.2   126.2     61.6  126.2   126.2     61.6   126.2   126.2
  4     56.6  185.3   185.3     56.6  185.3   185.3     56.6   185.3   185.3
  6     47.1  289.0   288.7     47.1  289.0   288.7     47.1   289.0   288.7
  8     36.4  372.5   371.7      0    336.1   335.8     BQL   (interpolated)
 12     23.0  491.3   488.5     23.0  382.1   381.8     23.0   499.3   490.4
 17     16.9  591.1   587.4     16.9  481.9   480.7     16.9   599.1   589.4
 24     6.35  672.4   662.9     6.35  563.2   556.2     6.35   680.4   664.8
AUC∞           730.5   721.0           621.3   614.3            738.5   722.9
%RE            +1.27   –0.05          –13.9   –14.9             +2.38   +0.21
───────────────────────────────────────────────────────────────────────────────
¹ linear trapezoidal   ² lin-up / log-down trapezoidal
%RE: relative error of the estimated AUC∞ compared to the theoretical 721.3.
Setting the 8 h value to zero is nonsense (extreme bias). Lin/log interpolation across the BQL is better than linear.
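If you want to reproduce the numbers, below is a minimal Python sketch (mine, for illustration only – not the script behind the table, which came from Phoenix/WinNonlin and Kinetica). The function names and the simple λz-extrapolation of the last measured concentration are assumptions:

```python
import math

# Noisy concentrations from the table; lambda_z = 0.1093 as estimated above.
t = [0, 1, 2, 3, 4, 6, 8, 12, 17, 24]
c = [0, 41.2, 54.2, 61.6, 56.6, 47.1, 36.4, 23.0, 16.9, 6.35]
lambda_z = 0.1093

def auc_last(t, c, linlog=False):
    """AUC(0–tlast): linear trapezoidal, or lin-up / log-down if linlog=True."""
    auc = 0.0
    for (t1, c1), (t2, c2) in zip(zip(t, c), zip(t[1:], c[1:])):
        if linlog and 0 < c2 < c1:
            auc += (t2 - t1) * (c1 - c2) / math.log(c1 / c2)  # log-down segment
        else:
            auc += (t2 - t1) * (c1 + c2) / 2                  # linear segment
    return auc

def auc_inf(t, c, linlog=False):
    # extrapolate beyond tlast with Clast / lambda_z
    return auc_last(t, c, linlog) + c[-1] / lambda_z

scenarios = {
    "original":        (t, c),
    "8 h set to zero": (t, [0 if x == 8 else y for x, y in zip(t, c)]),
    "8 h BQL dropped": ([x for x in t if x != 8],
                        [y for x, y in zip(t, c) if x != 8]),
}
for name, (tt, cc) in scenarios.items():
    print(f"{name:16}  AUC∞ lin: {auc_inf(tt, cc):5.1f}"
          f"   lin/log: {auc_inf(tt, cc, linlog=True):5.1f}")
```

Dropping (i.e., interpolating across) the BQL reproduces the +2.38% / +0.21% of the table, whereas the zero substitution gives roughly –14%.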
❝ […] When you disconsider one or more points of the dataset, you lose information …
Yes, but false “information”. What we measure does not necessarily represent the “truth”. If you find a “BQL” (no matter how often you repeat the analysis) embedded in a profile where the adjacent concentrations are 50× the LLOQ, such a value is physiologically implausible.
❝ … and end up changing the outcome (in this case more the AUC outcome) and consequently the whole inference.
Correct. From a biased one to a plausible one.
❝ Think like this: if we have mistakes in clinical and/or analytical stages and that led to many triangulation points then it would be fair that you be penalized in terms of variability …
It’s not only the variability. At the 3rd EGA Symposium on Bioequivalence (London, June 2010) Gerald Beuerle presented an example where, due to an obvious mix-up of samples between two subjects at Cmax, the study would pass with the original values and fail after exchanging them (or after excluding both subjects). Even in this case (BE likely falsely concluded) members of EMA’s PK group defended their position of keeping the value(s). Excuse me, but that’s bizarre.
See this case study. Don’t worry about the strange profiles: it was a pilot study of biphasic products and the sampling time points were far from optimal. The study was performed at one of the “big three” CROs. The barcode system (scanning sample vials after centrifugation) was out of order before noon in period 1. The SOP covered that, but required only signing a form – no four-eyes principle! Luckily we could confirm the sample mix-up: the anticoagulant was citrate, so the only parameters we could measure from plasma were γ-GT and albumin. The measured values agreed with what we got for these two subjects in the pre-/post-study lab exams, and fortunately the two subjects differed. It was a pilot study, so no big deal. But what if this had been a pivotal one? The drug’s CVintra for Cmax is only 10–15% but the between-subject variability is 50–75%. In other words, a single data point could screw up an entire study.
When I presented this example to Kersti Oselin (at that time with UK’s MHRA) she stated that one can expect such events and should power the study to be protected against them. Well, that would mean 96 subjects instead of 14 (and since the drug is subject to polymorphism, in the worst case of a mix-up between the subjects with the highest/lowest concentrations the sample size would rise to ~200). I don’t like the attitude of “increase the sample size” as the first (and sometimes only) remedy for problems.
I wish a one-month internship at a CRO were mandatory for assessors. Maybe then they would realize that the number 96 (or 200?) means human subjects.
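To put a number on the “just increase the sample size” attitude, here is a rough sketch (my own assumptions: GMR 0.95, 80% power, 80–125% limits – not the calculation behind the 14, 96 or ~200 above) of a TOST sample-size estimation for a 2×2 crossover, using the usual noncentral-t approximation of power. It only shows how steeply n grows with the CV that actually enters the comparison:

```python
from math import log, sqrt
from scipy.stats import nct, t as t_dist

def power_tost(cv, n, gmr=0.95, alpha=0.05, theta1=0.80, theta2=1.25):
    """Approximate power of TOST in a 2x2 crossover with n subjects in total
    (noncentral-t approximation; cv = CV entering the comparison)."""
    sd_w  = sqrt(log(cv ** 2 + 1))        # SD on the log scale
    se    = sd_w * sqrt(2 / n)            # SE of the log treatment difference
    df    = n - 2
    tcrit = t_dist.ppf(1 - alpha, df)
    ncp1  = (log(gmr) - log(theta1)) / se
    ncp2  = (log(gmr) - log(theta2)) / se
    return max(nct.cdf(-tcrit, df, ncp2) - nct.cdf(tcrit, df, ncp1), 0.0)

def sample_size(cv, target=0.80, **kwargs):
    n = 4                                  # smallest balanced 2x2 design
    while power_tost(cv, n, **kwargs) < target:
        n += 2                             # keep the sequences balanced
    return n

# CVs bracketing the within-subject (10-15%) and the between-subject
# (50-75%) variability mentioned above:
for cv in (0.10, 0.15, 0.50, 0.75):
    print(f"CV {cv:4.0%}:  n = {sample_size(cv)}")
```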
❝ … due to your lack of GCP and GLP.
Nope. GxP deals with documentation, not correctness. You might be fully compliant with GxP and produce only rubbish.
❝ On the other hand, if you have triangulation points due to the kinetic of the drug that is highly variable over time, then you are not being penalized since that is the nature of the variable and it must be considered when planning the study (and the statistical analysis, sample size calculation, etc.).
True. That’s why I’m not in favor of “nice” profiles per se, but of consistency.
❝ smoothing of the concentration curves by interpolation may not be the safest way to correct a value that should be already right.
See the example above.
Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked. 🚮