Lucas
★    

Brazil,
2015-03-20 22:05
(3296 d 14:19 ago)

Posting: # 14584
Views: 12,507
 

 ANVISA's POV on "triangulation points" [Regulatives / Guidelines]

Hi everybody.

I'd like to share a doubt I have regarding ANVISA's point of view on triangulation points. I don't know whether the concept is familiar to you: a triangulation point is a null point (concentration <LLOQ) between two non-null points. ANVISA's staff consider such a point to make no sense and hold that, even if it was re-analyzed by the lab, it should be deleted and the value interpolated between the two non-null points. Keep in mind that we could have many triangulation points in sequence. I do not agree with that; in fact, I find it difficult to agree with disregarding data at all. If the data are available and you are certain of their veracity, they should be included in the PK derivation.
In light of a recent BE study that was almost approved when these points were excluded and clearly not approved when they were kept, I would like to know (since I'm no expert in EMA and FDA regulations) whether this is a standard procedure anywhere else.
When we exclude points like those with the intention of eliminating a bias, we may in fact be creating another (big) one, and we'll never know for sure. How would we know that the excluded point was wrong and not the ones surrounding it? That exclusion can cause a massive difference in the AUC, in some cases even changing the study's conclusion.

What do you think of that procedure? :confused:

Tks in advance.

Lucas
Helmut
★★★
Vienna, Austria,
2015-03-24 10:48
(3293 d 01:36 ago)

@ Lucas
Posting: # 14606
Views: 10,849
 

 ANVISA's POV on "triangulation points"

Hi Lucas,

❝ I don't even know if that concept is known by you guys, but a triangulation point is a null point (concentration <LLOQ) between two non-null points.


Never heard this term before. I have some sympathy for ANVISA’s approach. :-D

❝ […] EMA and FDA regulations, if anywhere else this is a standard procedure.


Though they don’t like it, the FDA accepts exclusion of data points for PK reasons. Makes sense to me. Even if you confirmed a concentration to be <LLOQ, such a value might be physiologically impossible – especially if it is embedded between two high concentrations. You should state unambiguously in an SOP (or better, in the protocol) how to deal with these situations. You may opt for the lin-up/log-down trapezoidal method to calculate the AUC. If the value is not close to tmax, another option is to exclude the subject from the comparison of AUCs and keep him/her for the comparison of Cmax.

Where the EMA is concerned, BLQs cannot be removed.

❝ When we exclude points like those with the intention of eliminating a bias, we may be in fact creating another (big) one and we'll never know for sure.


In science we never can be sure. That’s the field of religion(s).

❝ How would we know if the point that was excluded was wrong and not the ones surrounding it?


I don’t like the common practice of repeating only the doubtful value. We always re-analyze the two neighbouring samples.

❝ That exclusion can cause a massive difference in AUC, changing even the study's conclusion in some cases.


Sure; some ideas above. As long as you follow predefined procedures and the study has not been unblinded yet, I don’t see a problem.

Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
felipeberlinski
☆    

Brazil,
2015-03-24 21:37
(3292 d 14:48 ago)

@ Helmut
Posting: # 14611
Views: 10,657
 

 ANVISA's POV on "triangulation points"

Dear all,

It is a grey area.

As I'm not a statistician, it would be difficult for me to make the point with calculations.
There are several members of this forum who could give an opinion based on calculations better than I can.

However, I do agree that no data should be excluded once they were collected correctly. How can we assure that? GCP? GLP? Another grey area.

Dear Helmut, ANVISA does not allow re-analysis for PK reasons. So an unexpected behaviour in the PK curve must not trigger a re-analysis, even if you know that the value found is impossible.
Your idea of analyzing the surrounding points is good, but I cannot foresee it being accepted, since there is no guidance for such a practice.


Solution? :confused:

Lucas: Release two reports, one using all points and another excluding those triangulation points. Go for a meeting and good luck, you'll need it!

Regards
ElMaestro
★★★

Denmark,
2015-03-24 23:16
(3292 d 13:08 ago)

@ felipeberlinski
Posting: # 14612
Views: 10,668
 

 ANVISA's POV on "triangulation points"

Hi fb,

❝ Lucas: Release two reports, one using all points and another excluding those triangulation points. Go for a meeting and good luck, you'll need!


:-D:-D:-D:-D It is comments like this that make it a pleasure to read the BEBAC forum. Thanks for it.

Pass or fail!
ElMaestro
felipeberlinski
☆    

Brazil,
2015-03-25 14:52
(3291 d 21:33 ago)

@ ElMaestro
Posting: # 14617
Views: 10,635
 

 ANVISA's POV on "triangulation points"

Hi ElMaestro

❝ :-D:-D:-D:-D It is comments like this that make it a pleasure to read the BEBAC forum. Thanks for it.


I'm sorry about this comment, but there is no technical way out of regulators' “wishes”.
nobody
nothing

2015-03-25 11:43
(3292 d 00:42 ago)

@ Helmut
Posting: # 14615
Views: 10,662
 

 ANVISA's POV on "triangulation points"

❝ ...

❝ In science we never can be sure. That’s the field of religion(s).

❝ ...


WOW, and I always thought it was completely the other way around! ;-) NICE, EMA, FDA, they all know it's an exact science and are very sure about that. Or might it be that they have their knowledge directly from GOD?

But afaik HE doesn't change his mind that often...

Kindest regards, nobody
Lucas
★    

Brazil,
2015-03-25 16:40
(3291 d 19:45 ago)

@ Helmut
Posting: # 14618
Views: 10,614
 

 ANVISA's POV on "triangulation points"

Thanks everybody for all the responses.

Helmut

❝ [...] such a value might be physiologically impossible – especially if the value is embedded by two high concentrations.


I'm not a pharmacologist, so I don't have much knowledge regarding the impossibility of such a value, but afaik there is the possibility of enterohepatic circulation, for example, which would make such a “triangle” behaviour of the plasma concentrations possible. Also, statisticians are mostly not in favor of outlier exclusion, and such points are outliers themselves. It's more important to understand why it happened than to just exclude it. It could have happened due to an accidental swap of samples, enterohepatic circulation, a high LLOQ, or some other unknown reason... We never get the smooth, beautiful PK profile that we wish for; there are drugs that have this “zig-zag” PK profile. Drugs such as pentoxifylline or loratadine by nature behave in a way that would make such points possible, and we have even more complicated situations, like endogenous compounds that after baseline correction might present lots of triangulation points.

❝ I don’t like the common practice of repeating only the doubtful value. We always re-analyze the two neighbouring samples.


That would be the best way, but ANVISA (as presented by Felipe in this thread) does not allow re-analysis for PK reasons, in order to avoid manipulation of the results.

Felipe and ElMaestro

❝ Release two reports, one using all points and another excluding those triangulation points. Go for a meeting and good luck, you'll need!


That's actually the way it is done around here now, but it's very hard to reach a conclusion for the study if the decision rule is in a “gray area”... They might go for the worst case or not.

tks

Lucas
Helmut
★★★
Vienna, Austria,
2015-03-27 16:03
(3289 d 20:22 ago)

@ Lucas
Posting: # 14626
Views: 10,439
 

 ANVISA's POV on "triangulation points"

Hi Lucas,

❝ ❝ […] such a value might be physiologically impossible – especially if the value is embedded by two high concentrations.


❝ […] afaik there is the possibiility of an enterohepatic circulation, for example, which would make possible for the drug concentrations in plasma to have such "triangle" behaviour.


Correct, but enterohepatic recirculation leads to secondary peaks – not drops. Since we are “talking BE”, such a behavior is known beforehand and, for instance, the sampling schedule should be adjusted accordingly. Isolated peaks are then more unlikely.

❝ Also, statisticians are mostly not in favor of outlier exclusion, and such points are outliers themselves. It's more important to understand why it happened than to just exclude it.


Correct again. It is a pity that some statisticians have limited knowledge of PK.
Statistics is just a tool. Many issues could be avoided if pharmacokineticists, bioanalysts, and statisticians talked to each other more when designing studies.

❝ It could have happened due to an accidental swap of samples, […] high LLOQ or other unknown reason...


IMHO your first case is the most common one. :-(
Remember that the LLOQ is obtained from spiked samples. In a particular sample (co-eluting compounds leading to a different matrix effect) the actual LLOQ might be higher.

❝ […] we have more complicated situations like for endogenous drugs that after baseline correction might present lots of triagulations points.


Yes, but that’s a different story. If the measured concentration equals the baseline, you get a “true” zero after subtraction. Such values are valid and should be used. If C <LLOQ, force it to zero.
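A minimal sketch of that rule (the helper name and the numbers are mine, purely illustrative – not from any guideline):

```python
# Baseline correction for an endogenous compound, following the rule above:
# measured values < LLOQ are forced to zero, and corrected values that
# would go negative are set to a "true" zero as well.
def baseline_correct(measured, baseline, lloq):
    """Return baseline-corrected concentrations (illustrative helper)."""
    corrected = []
    for c in measured:
        if c < lloq:                          # BLQ: force to zero
            corrected.append(0.0)
        else:                                 # subtract, never below zero
            corrected.append(max(c - baseline, 0.0))
    return corrected
```

For example, with a baseline of 5.0 and an LLOQ of 0.5, measured values of 0.3, 5.0 and 12.4 become 0, 0 (a “true” zero) and 7.4.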

❝ ❝ I don’t like the common practice of repeating only the doubtful value. We always re-analyze the two neighbouring samples.


❝ That would be the best way, but ANVISA does not allow reanalysis due to PK reasons, in other to avoid a manipulation of the results.


OK.

Weidson
☆    

Brazil,
2015-03-31 01:00
(3286 d 12:25 ago)

@ Helmut
Posting: # 14635
Views: 10,105
 

 ANVISA's POV on "triangulation points"

Hello Helmut!

❝ It is a pity that some statisticians have limited knowledge of PK.


I really understand your opinion. But I think comments like that are unnecessary, since they only scare away from this discussion the good statisticians who also have PK knowledge. After all, they could equally claim that “some pharmacokineticists also have limited knowledge of statistics”, and I believe that is not our intention in this ever-excellent forum. What I think we need to do is focus on Lucas’ question without any kind of judgement about which professional will be conducting the analysis. The question is only causing controversy because it goes against our personal beliefs as people who know some PK. Around here we do not differentiate much between statisticians and pharmacokineticists, because what really matters is the final result of the study, not a tiny part relating exclusively to the behavior we expect at each time point. We must not forget that each field of knowledge is guided by a set of principles which, when ignored, can put into question the credibility of the whole (in this case the BE study). I think the only reason we involve statistics in confirmatory assays is that we want (and need) a certain level of veracity in the results. Even though we might be well intentioned, acting on our own thinking without questioning it has never been the best way to deal with the unknown. If we disregard one or more points of the individual profiles thinking that they’re impossible or unlikely, without real proof of it, it might backfire, since you also have no assurance that all the other points are free of bias. How would we be sure of it? Critical sense? Experience? The nice shape of the profile? How would we be sure that, in the same way these points are unlikely, a discrepant Cmax would not be?
We do know that exclusion of Cmax points is not well seen by regulators, and Cmax is a point like any other in the curve, also susceptible to swapped samples or mistakes in quantification. In neither of those cases are we in favour of excluding time points, because a good intention of interpolating data may result in more bias than before.

❝ Statistics is just a tool.


I partially agree! But in the hands of those who don’t know how to use it, it ceases to be a tool and becomes a highly destructive weapon. :ok:

❝ Many issues could be avoided if pharmacokineticists, bioanalysts, and statisticians talk more to each other already in designing studies.


Agreed! IMHO Lucas only raised this question because those three professionals probably do talk to each other in his company and think similarly, but ANVISA thinks differently. Crystal-clear rules like “no data must be disregarded in the inference, unless there is a strong, well documented reason for it” always raise the credibility of the final result. When you disregard one or more points of the dataset, you lose information and end up changing the outcome (in this case mostly the AUC) and consequently the whole inference. Think of it like this: if mistakes in the clinical and/or analytical stages led to many triangulation points, then it would be fair for you to be penalized in terms of variability due to your lack of GCP and GLP. On the other hand, if you have triangulation points due to the kinetics of a drug that is highly variable over time, then you are not being penalized, since that is the nature of the variable and it must be considered when planning the study (and the statistical analysis, sample size calculation, etc.). So, as I see it, smoothing the concentration curves by interpolation may not be the safest way to correct a value that should already be right.

Best Regards!
Helmut
★★★
Vienna, Austria,
2015-04-07 17:03
(3278 d 20:22 ago)

@ Weidson
Posting: # 14671
Views: 9,890
 

 ANVISA's POV on "triangulation points"

Hi Weidson,

❝ […] acting by your own thinking without questioning it has never been the best way to deal with the unknown.


Enlightenment is man’s emergence from his self-imposed immaturity for which he himself was responsible. Immaturity and dependence are the inability to use one’s own intellect without the direction of another. One is responsible for this immaturity and dependence, if its cause is not a lack of intelligence, but a lack of determination and courage to think without the direction of another. Sapere aude! Have courage to use your own understanding! is therefore the slogan of Enlightenment.
       Immanuel Kant, Answering the Question: What is Enlightenment? (1784)


I’m definitely in favor of my own thinking over following guidelines unquestioned.

❝ If we only disregard one or more points of the individual profiles thinking that they’re impossible or unlikely, without the real proof of it, it might be “backfiring” since you don’t have also an assurance that all the other points are free of bias. How would we be sure of it? Critical sense? Experience? The nice shape of the profile?


Almost all of it. The shape of a particular profile must not be “nice” but should be consistent with the others.

❝ How would we be sure that in the same way that these points are unlikely and an discrepant Cmax would not be? We do know that exclusion of Cmax points are not well seen by regulators and that Cmax is a point like any other in the curve, also susceptible to swap of samples or mistakes in the quantification.


I agree that the area around tmax is problematic, but mostly if we consider the AUC. If you have frequent sampling around the expected tmax, a single BQL should not hurt in the comparison of Cmax.

❝ In neither of those cases we are in favour of excluding time points because a good intention of interpolation of data may result in more bias than before.


Strongly disagree. Example: theoretical profile C(t) = 100·(ℯ^(–0.1155 t) – ℯ^(–0.6931 t)), AUC∞ 721.3; noise added; λz estimated as 0.1093. Original data, and the 8 h sample set to zero or BQL. Below, how the data are interpreted by software (Phoenix/WinNonlin, Kinetica):

 t  C    pAUC¹ pAUC²   C    pAUC¹ pAUC²   C    pAUC¹ pAUC² 
───────────────────────────────────────────────────────────
 0  0      0     0     0      0     0     0      0     0   
 1 41.2   20.6  20.6  41.2   20.6  20.6  41.2   20.6  20.6 
 2 54.2   68.3  68.3  54.2   68.3  68.3  54.2   68.3  68.3 
 3 61.6  126.2 126.2  61.6  126.2 126.2  61.6  126.2 126.2 
 4 56.6  185.3 185.3  56.6  185.3 185.3  56.6  185.3 185.3 
 6 47.1  289.0 288.7  47.1  289.0 288.7  47.1  289.0 288.7 
 8 36.4  372.5 371.7   0    336.1 335.8   BQL  interpolated
12 23.0  491.3 488.5  23.0  382.1 381.8  23.0  499.3 490.4 
17 16.9  591.1 587.4  16.9  481.9 480.7  16.9  599.1 589.4 
24  6.35 672.4 662.9   6.35 563.2 556.2   6.35 680.4 664.8 
  AUCinf 730.5 721.0        621.3 614.3        738.5 722.9 
  %RE    +1.27 –0.05        –13.9 –14.9        +2.38 +0.21 

¹ linear trapezoidal
² lin-up / log down trapezoidal
Setting to zero is nonsense (extreme bias). Lin-log interpolation of BQL is better than linear.
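The two AUClast columns can be reproduced in a few lines; a sketch (assuming, as footnoted, pAUC¹ = linear and pAUC² = lin-up/log-down trapezoidal):

```python
from math import log

t_h  = [0, 1, 2, 3, 4, 6, 8, 12, 17, 24]                  # sampling times (h)
conc = [0, 41.2, 54.2, 61.6, 56.6, 47.1, 36.4, 23.0, 16.9, 6.35]

def auc_trapz(t, c, linlog=False):
    """AUC(0-tlast); log trapezoid on descending segments if linlog=True."""
    auc = 0.0
    for i in range(1, len(t)):
        dt, c1, c2 = t[i] - t[i - 1], c[i - 1], c[i]
        if linlog and 0 < c2 < c1:
            auc += dt * (c1 - c2) / log(c1 / c2)          # log-down trapezoid
        else:
            auc += dt * (c1 + c2) / 2.0                   # linear trapezoid
    return auc

lz = 0.1093                                               # estimated lambda-z (1/h)
auc_lin = auc_trapz(t_h, conc)                            # linear, all data
auc_ll  = auc_trapz(t_h, conc, linlog=True)               # lin-up/log-down, all data
# "Interpolating" the 8 h BQL simply means dropping the point and letting
# one trapezoid span the 6-12 h gap:
auc_drop = auc_trapz(t_h[:6] + t_h[7:], conc[:6] + conc[7:])
auc_inf  = auc_lin + conc[-1] / lz                        # extrapolate to infinity
```

This reproduces the table’s AUClast values (672.4, 662.9, 680.4) and the linear AUCinf of 730.5, and shows where the –14% bias of the zeroed 8 h sample comes from.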

❝ […] When you disconsider one or more points of the dataset, you lose information …


Yes, but false “information”. What we measure does not necessarily represent the “truth”. If you find a “BQL” (no matter how often you repeat the analysis) embedded in a profile where the adjacent concentrations are 50× the LLOQ, such a value is very, very unlikely (if not physiologically impossible).

❝ … and end up changing the outcome (in this case more the AUC outcome) and consequently the whole inference.


Correct. From a biased one to a plausible one.

❝ Think like this: if we have mistakes in clinical and/or analytical stages and that led to many triangulation points then it would be fair that you be penalized in terms of variability …


It’s not only the variability. At the 3rd EGA Symposium on Bioequivalence (London, June 2010) Gerald Beuerle presented an example where, due to an obvious mix-up of samples between two subjects at Cmax, the study would pass with the original values and fail after exchanging them (or after excluding both subjects as well). Even in this case (BE likely falsely concluded) members of EMA’s PK group defended their position of keeping the value(s). Excuse me, but that’s bizarre.
See this case study. Don’t worry about the strange profiles. It was a pilot study of biphasic products and the sampling time points were far from optimal. The study was performed at one of the “big three” CROs. The barcode system (scanning sample vials after centrifugation) was out of order before noon in period 1. The SOP covered that, but required only signing a form – no four-eye principle! Luckily we could confirm the sample mix-up. The anticoagulant was citrate, so the only parameters we could measure from plasma were γ-GT and albumin. The measured values agreed with what we got for these two subjects in the pre-/post-study lab exams. Luckily the subjects differed. It was a pilot study, so no big deal. But what if this had been a pivotal one? The drug’s CVintra for Cmax is only 10–15% but the between-subject variability is 50–75%. In other words, a single data point could screw up an entire study. When I presented this example to Kersti Oselin (at that time with the UK’s MHRA) she stated that one can expect such events and should power the study to be protected against them. Well, that would mean 96 subjects instead of 14 (and since the drug is subject to polymorphism, in the worst case of a mix-up between the subjects with the highest/lowest concentrations the sample size would rise to ~200). I don’t like the attitude of “increase the sample size” as the first (and sometimes only) remedy for problems.
I wish a one-month internship at a CRO would become mandatory for assessors. Maybe then they would realize that the number 96 (or 200?) means human subjects.

❝ … due to your lack of GCP and GLP.


Nope. GxP deals with documentation, not correctness. You might be fully compliant with GxP and produce only rubbish.

❝ On the other hand, if you have triangulation points due to the kinetic of the drug that is highly variable over time, then you are not being penalized since that is the nature of the variable and it must be considered when planning the study (and the statistical analysis, sample size calculation, etc.).


True. That’s why I’m not in favor of “nice” profiles per se but consistency.

❝ smoothing of the concentration curves by interpolation may not be the safest way to correct a value that should be already right.


See the example above.

Weidson
☆    

Brazil,
2015-04-09 23:58
(3276 d 13:27 ago)

@ Helmut
Posting: # 14679
Views: 9,578
 

 ANVISA's POV on "triangulation points"

Hello Helmut!

❝ Enlightenment is man’s emergence from his self-imposed immaturity for which he himself was responsible. Immaturity and dependence are the inability to use one’s own intellect without the direction of another. One is responsible for this immaturity and dependence, if its cause is not a lack of intelligence, but a lack of determination and courage to think without the direction of another. Sapere aude! Have courage to use your own understanding! is therefore the slogan of Enlightenment.

❝       Immanuel Kant, Answering the Question: What is Enlightenment? (1784)


❝ I’m definitely in favor of my own thinking over following guidelines unquestioned.


First of all, congratulations on your post. This reflection from Kant is indeed very deep and expresses exactly what we both think. It will certainly motivate other members of this forum. Like you, I am also capable of formulating my own thinking, guided by my knowledge, experience and logic, but always considering the fundamental principles that are the pillars of each field of knowledge. I guess that's why I decided to join this discussion and expose my opinion about it all. But now moving on to a few more viewpoints:

❝ Almost all of it.


I understand your point of view. Critical sense, experience and consistency of the PK profiles are indeed relevant factors that can't be detached from the final result, be it positive or negative. However, critical sense and even experience alone, no matter how advanced or well intentioned, would not be the safest way to disqualify one or more points that, until proven otherwise, have been collected at the scheduled times and quantified with a validated bioanalytical method.

❝ The shape of a particular profile must not be “nice” but should be consistent with the others.


True. It would be wonderful if all profiles were consistent with each other. That would be ideal. However, we can't forget that countless factors are involved, from the administration of the drug to the quantification of the samples. One of them, for instance, is the interaction between the release of the drug and its metabolism/absorption. This interaction can't be expected to follow theoretical models, since each subject's organism can behave differently each day. Besides, there are countless factors that have not been identified by the theoretical models, which are mostly univariate. IMHO, speaking of a single expected behavior for each point of the PK profile might not be an easy task and, consequently, not the safest way to disqualify any point in the curve.

❝ I agree that the area around tmax is problematic, but mostly if we consider the AUC. If you have frequent sampling around the expected tmax, a single BQL should not hurt in the com­pa­rison of Cmax.


Well, maybe my comment was not well understood. I'm sorry if it was not clear! What I meant is: it's very common to find a Cmax discrepant from the adjacent points. Since all points in the curve are important for a near-perfect derivation of the primary endpoints (AUC and Cmax), why should we disregard triangulation points and not disregard a discrepant Cmax using the same criterion, i.e. the expected behavior? In your opinion, should we have more than one criterion? Do you think the acceptance criterion should be different even though you know that all points were (theoretically) obtained under the same experimental conditions?

❝ Strongly disagree. Example: theoretical profile C(t) = 100·(ℯ^(–0.1155 t) – ℯ^(–0.6931 t)), AUC∞ 721.3; noise added; λz estimated as 0.1093. Original data, and the 8 h sample set to zero or BQL. Below, how the data are interpreted by software (Phoenix/WinNonlin, Kinetica):


I really understand your opinion, and in fact I'm grateful for it. If I am right, from your example I realize that you intended to show that the error of the model when you interpolate the data, with both the linear trapezoidal rule and the lin-up/log-down trapezoidal rule, is closer to the error of the theoretical model. I also understood that, of the two rules, lin-up/log-down was the one with the smaller error, suggesting that it would be the best rule when interpolation is done. Going by your example, you are correct. But it is always good to emphasize that those results were in fact expected, because when you disregard an outlier before constructing the new model the error will be smaller, especially if it lies in the middle of the curve, as in the example. Please keep in mind that we are discussing whether or not to consider a certain concentration point for which we have no concrete proof that it is not qualified to be “real” or “correct”. The point you made relates more to the improvement of the performance of the model when outliers are disregarded. If you have proof that the null point was caused by a swap of samples or even a failure of the analyst when analyzing it, then there is no doubt that you should exclude it. If for some technical reason you can’t correct its value, and you have proof that the point is wrong, then I agree that the only way out is to exclude the point and interpolate the curve to minimize bias.

❝ Yes, but false “information”.


Once again, what gives you certainty that the information is indeed false, without proof? After all, being improbable (having a low probability of happening) does not mean it will never occur.

❝ What we measure does not necessarily represent the “truth”


Fully agree!! After all, we are talking about samples, not the population. That’s why we must always be alert not only to the quality of the measurements but also to the quality of the whole study.

❝ If you find a “BQL” (no matter how often you repeat the analysis) embedded in a profile where the adjacent concentrations are 50× the LLOQ, such a value is very, very unlikely (if not physiologically impossible).


Ok! I agree in general and find that reasonable. But I would be very careful about saying it would always be very unlikely. After all, we are not talking exclusively about human physiology; we are also talking about interactions between physiology and a variety of pharmaceutical forms. Remember that we are discussing a procedure that is applied to all drugs involved in comparative bioavailability studies. Don't you think there might be exceptions to this rule, and that your strong opinion could be restricted to a group of drugs and not all of them? Please think about it and check real PK profiles of modified-release pentoxifylline; they would look something like this:

[two concentration–time profiles of modified-release pentoxifylline]
PS: Those profiles are consistent with the rest of the subjects! This is “normal” for this drug!

❝ It’s not only the variability. At the 3rd EGA Symposium on Bioequivalence (London, June 2010) Gerald Beuerle presented an example where due to an obvious mix-up of samples between two subjects at Cmax the study would pass with the original values and fail after exchanging them (or excluding both subjects as well). Even in this case (likely BE falsely concluded) members of EMA’s PK group defended their position of keeping the value(s). Excuse me, but that’s bizarre.


I agree with you on this. In that case EMA’s staff were probably suspicious about the veracity of the profiles and then requested the company to present evidence that the samples of those subjects were not swapped (they could have requested a DNA test, for example). On the other hand, if the swap was proven, then the validity of the study would also be questioned, since there would be no assurance that other exchanges had not been made. Unfortunately, professionals with that kind of background are very rare.

❝ See this case study. Don’t worry about the strange profiles. It was a pilot study of biphasic products and the sampling time points were far from optimal. The study was performed at one of the “big three” CROs. The barcode system (scanning sample vials after centrifugation) was out of order before noon in period 1. The SOP covered that, but required only signing a form – no four-eye principle! Luckily we could confirm the sample mix-up.


They were very lucky to detect the error at its origin. It’s not always easy to detect or even trace a sample swap within the same subject.

❝ The anticoagulant was citrate, so the only parameters we could measure from plasma were γ-GT and albumin. The measured values agreed with what we got for these two subjects in the pre-/post-study lab exams. Luckily the subjects differed. It was a pilot study, so no big deal. But what if this had been a pivotal one? The drug’s CVintra for Cmax is only 10–15% but the between-subject variability is 50–75%. In other words, a single data point could screw up an entire study.


I understand your situation perfectly! I was once involved in a similar one, but in my case it was a pivotal study and the swap made the study fail. At that time we did not have a clinical facility in our BE center and the sponsor decided to run that stage in a facility they trusted. When I realized that the CI was outside the acceptance range, I decided to investigate. The situation was this: one sample of a subject with high bioavailability was swapped with one sample of a subject with low bioavailability. When we took the case to ANVISA, arguing that the samples had been swapped and that by inverting them the study would pass, they informed us that the study would fail either way, for lack of GCP. After all, there was no way to assure them that no other samples had been swapped.

❝ I don’t like the attitude of “increase the sample size” as the first (and sometimes only) remedy for problems.


OK, I understand your workflow. Around here we try to be more conservative, because the estimators tend to improve when the sample size is at least reasonable. When the drug is unknown we use at least 24 subjects in pilot studies. Many would consider that a “pricey guess”, but in fact it is not when we look at the drugs currently being studied. A few years ago I searched my database to try to understand what the ideal N for a first study would be for a variety of “groups” of CVintra. What we discovered is that 24 subjects would make it possible to correctly predict 71% of the cases as truly bioequivalent or not bioequivalent. For some sponsors 24 is a high number to invest in an exploratory study. For me it is not, since there is no way to make an omelet without breaking a few eggs.
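Not the database analysis described above, but as a rough illustration of how the chance of success of a given pilot size can be gauged: a sketch of TOST power for a 2×2 crossover via the noncentral-t approximation (the function name, the assumed GMR of 0.95, acceptance range 0.80–1.25 and α = 0.05 are my choices):

```python
from math import log, sqrt
from scipy.stats import nct, t as t_dist

def be_power(cv, n, gmr=0.95, alpha=0.05):
    """Approximate TOST power for a 2x2 crossover with n subjects in total."""
    sw = sqrt(log(cv ** 2 + 1.0))          # within-subject SD on the log scale
    se = sw * sqrt(2.0 / n)                # SE of the treatment difference
    df = n - 2
    tcrit = t_dist.ppf(1.0 - alpha, df)
    ncp_lo = (log(gmr) - log(0.80)) / se   # noncentrality vs the lower limit
    ncp_hi = (log(gmr) - log(1.25)) / se   # noncentrality vs the upper limit
    return max(0.0, nct.cdf(-tcrit, df, ncp_hi) - nct.cdf(tcrit, df, ncp_lo))
```

For example, with CV 25% and 28 subjects this returns the textbook ≈0.81; tabulating it over a grid of CVs shows what a fixed pilot size of 24 can and cannot resolve.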

❝ I wish that a one month internship at a CRO becomes mandatory for assessors. Maybe then they would realize that the number 96 (or 200?) means human subjects.


Agreed! It’s a good point of view based on ethics.

❝ Nope. GxP deals with documentation, not correctness. You might be fully compliant with GxP and produce only rubbish.

I understand its formal concept. But taking it a little further, IMHO GxP is in fact about good practice in execution. If you have tons of documents that do not help you avoid mistakes, then there is no good practice.

❝ True. That’s why I’m not in favor of “nice” profiles per se but consistency.


I am also in favor of consistency of profiles. However, we have to be very careful when judging by models alone, because what may seem inconsistent could be explained by some variable that has not been incorporated into the model. After all, one of the few absolute certainties in philosophy is that we don’t know it all.

Thanks and best regards.
Lucas
★    

Brazil,
2015-03-31 02:27
(3286 d 10:58 ago)

@ Helmut
Posting: # 14636
Views: 10,110
 

 ANVISA's POV on "triangulation points"

Mr Schuetz!

❝ Correct, but enterohepatic recirculation leads to secondary peaks – not drops.


Yes, of course. I meant exactly that… the peaks can create a triangulation point. Imagine that in the elimination phase, when the drug is almost eliminated, reabsorption occurs: that could cause a triangulation.

❝ It is a pity that some statisticians have limited knowledge of PK.


Oh yes, for sure. It is also a pity that pharmacokineticists have limited knowledge of statistics, that clinical experts have limited knowledge of statistics or analytics, and so on. We could go on all day about this… That’s why we assemble a team to plan a study together. :-D

But we have to keep in mind that this is not exclusively a question of PK, since more fields of knowledge are involved here. For instance, as Weidson said in his post, these points could be mistakes from the clinical and/or analytical stages, or a property of the drug itself. I don’t think it is good to “ignore” statistical principles based on something that we think does not make “pharmacokinetic sense”.

❝ Yes, but that’s a different story. If the measured concentration equals the baseline, you get after subtraction a “true” zero. Such values are valid and should be used. If C <LLOQ then force it to zero.


That is in fact another discussion… and a very long one. When the corrected concentration results in a negative value it makes no sense IMHO, and yet they ask us to force this point to zero… That’s a conversation for another time.

Tks for the response!

Lucas
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2015-04-07 17:30
(3278 d 19:55 ago)

@ Lucas
Posting: # 14672
Views: 9,686
 

 Baseline correction

❝ Mr Schuetz!


Mr Teixeira! :-D

❝ ❝ Yes, but that’s a different story. If the measured concentration equals the baseline, you get after subtraction a “true” zero. Such values are valid and should be used. If C <LLOQ then force it to zero.


❝ When the corrected concentration results in a negative value it makes no sense IMHO, …


What do you suggest instead?

❝ … and they ask us to force this point to zero...


I have given up monitoring ANVISA’s website, but we find such a statement in numerous FDA guidances (e.g., ergocalciferol):

We recommend that applicants measure and approximate the baseline endogenous levels in blood (plasma) and subtract these levels from the total concentrations measured from each subject after the drug product is administered. In this way, you can achieve an estimate of BE of the products.
If a baseline correction results in a negative plasma concentration value, the value should be set equal to 0 before calculating the baseline-corrected AUC. […] Determination of BE should be based on the baseline-corrected data.
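For illustration, that rule could be sketched like this: a minimal example with made-up numbers and hypothetical helper names, not code from any guidance.

```python
# Sketch of FDA-style baseline correction (illustrative data, hypothetical names).
# Negative baseline-corrected concentrations are set to 0 before computing AUC.

def baseline_correct(concs, baseline):
    """Subtract the endogenous baseline; clamp negatives to zero."""
    return [max(c - baseline, 0.0) for c in concs]

def auc_linear(times, concs):
    """AUC by the linear trapezoidal rule."""
    return sum((t2 - t1) * (c1 + c2) / 2.0
               for t1, t2, c1, c2 in zip(times, times[1:], concs, concs[1:]))

times = [0, 1, 2, 4, 8, 12]               # h (illustrative)
total = [5.0, 9.0, 7.5, 6.0, 5.2, 4.8]    # ng/mL, total = endogenous + dosed
corrected = baseline_correct(total, baseline=5.0)
# the pre-dose and 12 h values fall at or below baseline and are clamped to 0
auc = auc_linear(times, corrected)
```

Only the subtraction and clamping steps come from the quoted guidance; the data, units, and function names are invented for the example.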


Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
MarceloCosta
☆    

Brazil,
2015-04-06 23:52
(3279 d 13:33 ago)

@ Lucas
Posting: # 14667
Views: 9,711
 

 ANVISA's POV on "triangulation points"

Hello Everybody,

In my point of view there is no doubt about this discussion.

1 - zero is zero.
2 - missing values must be interpolated.
3 - no exclusion of outliers.

Anything different from this is a misinterpretation.
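Point 2 above could be sketched like this: a minimal illustration with made-up numbers, and the helper name is hypothetical.

```python
# Sketch of point 2: linearly interpolating a missing (or excluded) value
# between its two measured neighbours. Illustrative data only.

def interpolate_at(t, t1, c1, t2, c2):
    """Concentration at time t, linearly interpolated between (t1, c1) and (t2, c2)."""
    return c1 + (c2 - c1) * (t - t1) / (t2 - t1)

# a "triangulation point" at t = 4 h, flanked by measured values at 2 h and 6 h
c4 = interpolate_at(4, 2, 8.0, 6, 4.0)   # halfway between 8.0 and 4.0, i.e. 6.0
```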

This must be covered in our SOPs.

Respect all rules and all SOPs. Less manipulation.

For better clinical and analytical practices.


I know a zero point in the middle of the curve is a problem, but what is its cause? Who will decide what to do in each case? I think the Agency will.

PK analysts and statisticians must justify their decisions.

Report the real data!!!


Edit: Full quote removed. Please delete everything from the text of the original poster which is not necessary in understanding your answer; see also this post! [Helmut]
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2015-04-07 03:59
(3279 d 09:26 ago)

@ MarceloCosta
Posting: # 14668
Views: 9,714
 

 ANVISA's POV on "triangulation points"

Hi Marcelo,

a warm welcome to the 12th member of the BEBA-Forum from Brazil!

❝ 1 - zero is zero.


Zero does not exist in practice. Not in bioanalytics, and very rarely in PK (by convention: the pre-dose concentration in the first period of a crossover study; by assumption: the ones in later periods).
  1. After some shouting matches (some people call that “lively discussions”) already at the first Crystal City Conference (Arlington 1990) a consensus was reached that concentrations below the Lower Limit Of Quantification (LLOQ) should be reported with a non-numeric code (i.e., “BLQ”) – not zero.
    a. If someone reports 0 anywhere in the profile, that is either a typing error or proof of a lack of understanding of how calibration works. The LLOQ is the lowest concentration with still acceptable inaccuracy and precision (for chromatographic methods 20%/±20% and for LBAs 30%/±30%). Generally the LLOQ is ~3–5 times the Limit of Detection (LOD). In other words, you see something but should not use it. It’s not zero – look at the chromatograms. The LOD itself is (depending on the definition) 3–6 times the response of the blank.
    b. Imagine: In theory you have a perfectly linear calibration curve through the origin. In 50% of analytical runs you will find a negative intercept and in 50% a positive one. Is it possible to back-calculate a response resulting in exactly zero? Yes, but only if the response equals the intercept. If the chromatographic data system is properly set up, the integration threshold generally is around the LOD. You will never be able to get such a peak area.
  2. Lack of understanding of PK. You have a good bioanalytical method with an LLOQ of 1% of Cmax and a washout of ten half-lives. In the second period of a crossover you have a residual concentration of ~0.1%. Will the analytical method be able to quantify it? No way. Likely it will even be below the LOD (0.2–0.3%). But since the concentration exists it should be reported as “BLQ”, not zero.
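The 50/50 intercept argument is easy to check with a quick simulation; the sketch below uses invented noise parameters and calibration levels, not real assay data.

```python
# Monte-Carlo sketch of the intercept argument: responses are generated from a
# calibration line whose true intercept is exactly zero, plus symmetric noise;
# the fitted (ordinary least-squares) intercept then comes out negative in
# roughly half of the simulated runs. All numbers are invented.
import random

random.seed(42)

def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

concs = [1, 2, 5, 10, 20, 50]      # calibration standards (made up)
runs = 2000
neg = sum(
    1 for _ in range(runs)
    if fit_line(concs, [0.1 * c + random.gauss(0, 0.05) for c in concs])[1] < 0
)
# neg / runs comes out close to 0.5
```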

❝ 2 - missing values must be interpolated.


Agree. But: Throw the linear trapezoidal rule into the waste bin. It gives a positive bias in the distribution/elimination phases. It is a miracle to me why I see it in so many studies. The reign of the pocket calculator is history.
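A small sketch of that bias, using an invented mono-exponential profile (half-life 4 h, arbitrary sampling times):

```python
# Sketch comparing the linear trapezoidal rule with the linear-up/log-down
# rule on a mono-exponential decline; the linear rule overestimates every
# declining segment, the log-down rule reproduces the exact integral here.
import math

def auc_linear(t, c):
    return sum((t2 - t1) * (c1 + c2) / 2
               for t1, t2, c1, c2 in zip(t, t[1:], c, c[1:]))

def auc_lin_log(t, c):
    """Linear trapezoids when concentration rises, log trapezoids when it falls."""
    total = 0.0
    for t1, t2, c1, c2 in zip(t, t[1:], c, c[1:]):
        if c2 < c1 and c2 > 0:
            total += (t2 - t1) * (c1 - c2) / math.log(c1 / c2)
        else:
            total += (t2 - t1) * (c1 + c2) / 2
    return total

# mono-exponential elimination, half-life 4 h (made-up numbers)
k = math.log(2) / 4
t = [0, 2, 4, 8, 12, 24]
c = [100 * math.exp(-k * ti) for ti in t]
true_auc = 100 / k * (1 - math.exp(-k * 24))   # exact integral over 0-24 h
lin = auc_linear(t, c)
loglin = auc_lin_log(t, c)
# lin exceeds true_auc (positive bias); loglin matches it for exponential data
```

With real (noisy) data the agreement is not exact, but the linear rule still tends to overestimate the declining part of the profile.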

❝ 3 - no exclusion of outliers.


Disagree (see 1.a.) and read my posts above. Regulatory views are not necessarily good science.

❝ This must be covered in our SOPs.


Agree. No cherry-picking. Must be done before unblinding.

The Bioequivalence and Bioavailability Forum is hosted by
BEBAC Ing. Helmut Schütz