ANVISA's POV on "triangulation points" [Regulatives / Guidelines]

posted by Weidson – Brazil, 2015-04-09 23:58 – Posting: # 14679

Hello Helmut!

Enlightenment is man’s emergence from his self-imposed immaturity for which he himself was responsible. Immaturity and dependence are the inability to use one’s own intellect without the direction of another. One is responsible for this immaturity and dependence, if its cause is not a lack of intelligence, but a lack of determination and courage to think without the direction of another. Sapere aude! Have courage to use your own understanding! is therefore the slogan of Enlightenment.

❝       Immanuel Kant, Answering the Question: What is Enlightenment? (1784)


❝ I’m definitely in favor of my own thinking over following guidelines unquestioned.


First of all, congratulations on your post. This reflection from Kant is indeed very deep and expresses exactly what we both think. It will certainly motivate other members of this forum. Like you, I am also capable of formulating my own thinking, guided by my knowledge, experience, and logic, but always considering the fundamental principles that are the pillars of each field of knowledge. I guess that's why I decided to join this discussion and share my opinion about it all. But now, moving on to a few more viewpoints:

❝ Almost all of it.


I understand your point of view. Critical sense, experience, and consistency of the PK profiles are indeed relevant factors that can't be detached from the final result, whether positive or negative. However, critical sense and experience alone, no matter how advanced or well intentioned, are not the safest way to disqualify one or more points that, until proven otherwise, have been collected at the scheduled times and quantified with a validated bioanalytical method.

❝ The shape of a particular profile must not be “nice” but should be consistent with the others.


True. It would be wonderful if all profiles were consistent with each other; that would be ideal. However, we can't forget that countless factors are involved between the administration of the drug and the quantification of the samples. One of them, for instance, is the interaction between the release of the drug and its metabolism/absorption. This interaction need not follow theoretical models, since each subject's organism can behave differently from one day to the next. Besides, there are countless factors that have not been identified by theoretical models, most of which are univariate. IMHO, stating the expected behavior of each point in the PK profile is not an easy task and, consequently, not the safest way to disqualify any point on the curve.

❝ I agree that the area around tmax is problematic, but mostly if we consider the AUC. If you have frequent sampling around the expected tmax, a single BQL should not hurt in the comparison of Cmax.


Well, maybe my comment was not well understood; I'm sorry if it was not clear! What I meant is this: it is very common to find a Cmax that is discrepant relative to the adjacent points. Since all points on the curve matter for a near-perfect derivation of the primary endpoints (AUC and Cmax), why should we disregard triangulation points but not disregard a discrepant Cmax using the same criterion, i.e., the expected behavior? In your opinion, should we have more than one criterion? Do you think the acceptance criterion should be different even though all points were (theoretically) obtained under the same experimental conditions?

❝ Strongly disagree. Example: theoretical profile 100(ℯ–0.1155 – ℯ–0.6931), AUC 721.3; noise added. λz estimated with 0.1093. Original data and the 8 h sample set to zero or BQL. Below how the data are interpreted by software (Phoenix/WinNonlin, Kinetica):


I really understand your opinion, and in fact I'm grateful for it. If I am right, your example intended to show that the standard error of the model, when the data are interpolated with the linear trapezoidal rule and also with the lin-up/log-down trapezoidal rule, is closer to the standard error of the theoretical model. I also understood that, of the two rules, lin-up/log-down was the one with the smaller standard error, suggesting it would be the better rule when interpolation is done. Going by your example, you are correct. But it is always good to emphasize that those results were expected: when you disregard an outlying point before fitting the new model, the standard error will be smaller, especially if the point lies in the middle of the curve, as in your example.

Please keep in mind that we are discussing whether to keep a concentration point for which we have no concrete proof that it is not "real" or "correct". The point you made is more related to the improvement of the model's performance when outliers are disregarded. If you have proof that the null point was caused by a sample swap or by an analyst's error during the analysis, then there is no doubt that you should exclude it. If, for any technical reason, you can't correct its value but have proof that the point is wrong, then I agree that the only way out is to exclude the point and interpolate the curve to minimize bias.
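To make the trade-off concrete, here is a minimal Python sketch under my own assumptions (the sampling schedule is mine, not the one from your example). It uses the theoretical profile 100(ℯ^(−0.1155t) − ℯ^(−0.6931t)) and compares the linear trapezoidal AUC when the 8 h sample is kept as zero versus excluded and interpolated across, alongside a lin-up/log-down variant:

```python
import math

def conc(t, f=100.0, ke=0.1155, ka=0.6931):
    """Theoretical profile from the example: C(t) = 100(exp(-0.1155 t) - exp(-0.6931 t))."""
    return f * (math.exp(-ke * t) - math.exp(-ka * t))

def auc_linear(times, concs):
    """Plain linear trapezoidal rule."""
    return sum((times[i + 1] - times[i]) * (concs[i] + concs[i + 1]) / 2.0
               for i in range(len(times) - 1))

def auc_linup_logdown(times, concs):
    """Linear-up/log-down: log trapezoid on strictly descending positive
    segments, linear everywhere else (including segments touching zero)."""
    total = 0.0
    for i in range(len(times) - 1):
        t1, t2, c1, c2 = times[i], times[i + 1], concs[i], concs[i + 1]
        if 0.0 < c2 < c1:
            total += (t2 - t1) * (c1 - c2) / math.log(c1 / c2)
        else:
            total += (t2 - t1) * (c1 + c2) / 2.0
    return total

times = [0, 0.5, 1, 1.5, 2, 3, 4, 6, 8, 12, 16, 24]  # assumed schedule
full = [conc(t) for t in times]

zeroed = list(full)
zeroed[times.index(8)] = 0.0                  # the embedded "BQL" kept as zero

dropped_t = [t for t in times if t != 8]      # point excluded, segment interpolated
dropped_c = [conc(t) for t in dropped_t]

print(round(auc_linear(times, full), 1),
      round(auc_linear(times, zeroed), 1),
      round(auc_linear(dropped_t, dropped_c), 1))
```

Keeping the zero drags the AUC down by the whole area under the 6–12 h segments, while exclusion-plus-interpolation stays close to the complete profile; that is the bias-minimization I mean, provided we have proof the point is wrong.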

❝ Yes, but false “information”.


Once again, what gives you certainty that the information is indeed false, without proof? After all, being improbable (having a low probability of happening) does not mean it will never occur.

❝ What we measure does not necessarily represent the “truth”


Fully agree! After all, we are talking about samples, not the population. That's why we must always be alert not only to the quality of the measurements but also to the quality of the whole study.

❝ If you find a “BQL” (no matter how often you repeat the analysis) embedded in a profile where the adjacent concentrations are 50× the LLOQ such a value is physiologically impossible very, very unlikely.


OK! I agree in general and find that reasonable. But I would be very careful about saying it would always be very unlikely. After all, we are not talking exclusively about human physiology; we are also talking about interactions between physiology and a variety of pharmaceutical forms. Remember that we are discussing a procedure that is used for all drugs involved in comparative bioavailability studies. Don't you think there might be exceptions to this rule, and that your strong opinion could be restricted to a group of drugs rather than all of them? Please think about it and check real PK profiles of modified-release pentoxifylline; they would look something like this:

[Figure: PK profiles of modified-release pentoxifylline]
PS: Those profiles are consistent with the rest of the subjects! This is "normal" for this drug!

❝ It’s not only the variability. At the 3rd EGA Symposium on Bioequivalence (London, June 2010) Gerald Beuerle presented an example where due to an obvious mix-up of samples between two subjects at Cmax the study would pass with the original values and fail after exchanging them (or excluding both subjects as well). Even in this case (likely BE falsely concluded) members of EMA’s PK group defended their position of keeping the value(s). Excuse me, but that’s bizarre.


I agree with you on this. In that case EMA's staff were probably suspicious about the veracity of the profiles and then requested the company to present evidence that the samples of those subjects were not swapped (they could have requested a DNA test, for example). On the other hand, if the swap were proven, the validity of the study would also be questioned, since there would be no assurance that other exchanges had not occurred. Unfortunately, professionals with that kind of background are very rare.

❝ See this case study. Don’t worry about the strange profiles. It was a pilot study of biphasic products and the sampling time points were far from optimal. The study was performed at one of the “big three” CROs. The barcode-system (scanning sample vials after centrifugation) was out of order before noon in period 1. The SOP covered that, but required only signing a form – no four-eye-principle! Luckily we could confirm the sample mix-up.


They were very lucky to detect the error at its origin. It is not always easy to detect or even trace a sample swap within the same subject.

❝ The anticoagulant was citrate, so the only parameters we could measure from plasma were γ-GT and albumin. The measured values agreed with what we got for these two subjects in the pre-/post-study lab exams. Luckily the subjects differed. It was a pilot study, so no big deal. But what if this would have been a pivotal one? The drug’s CVintra for Cmax is only 10–15% but the between-subject variability is 50–75%. In other words a single data point could screw up an entire study.


I understand your situation perfectly! I was once involved in a similar one, but in my case it was a pivotal study and the swap made the study fail. At that time we did not have a clinical facility in our BE center, and the sponsor decided to run the clinical part in a facility they trusted. When I realized that the CI was outside the acceptance range, I decided to investigate. The situation was this: one sample from a subject with high bioavailability was swapped with one sample from a subject with low bioavailability. When we took the case to ANVISA, arguing that the samples were swapped and that by inverting them the study would pass, they responded that the study should fail either way, for lack of GCP. After all, there was no way to assure them that no other samples had been swapped.
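To illustrate how a single swapped sample can move the 90% CI, here is a small simulation sketch. Everything in it is hypothetical: the SDs merely mimic the within-/between-subject CVs you quoted, the t-quantile is hardcoded, and the CI is computed from paired log-differences, ignoring period and sequence effects of a real 2×2 ANOVA:

```python
import math
import random

# Hypothetical data: log Cmax with large between-subject and small
# within-subject spread (roughly the 50-75% / 10-15% CVs quoted above).
random.seed(1)
n = 24
between_sd, within_sd = 0.5, 0.12
subj = [random.gauss(4.0, between_sd) for _ in range(n)]
test = [m + random.gauss(0.0, within_sd) for m in subj]
ref = [m + random.gauss(0.0, within_sd) for m in subj]

def ci90(test, ref, t_crit=1.714):
    """90% CI for the geometric mean ratio (%); t(0.95, df=23) hardcoded.
    Paired-difference shortcut, not the full crossover ANOVA."""
    d = [t - r for t, r in zip(test, ref)]
    mean = sum(d) / len(d)
    var = sum((x - mean) ** 2 for x in d) / (len(d) - 1)
    se = math.sqrt(var / len(d))
    return (math.exp(mean - t_crit * se) * 100.0,
            math.exp(mean + t_crit * se) * 100.0)

ci_before = ci90(test, ref)

# Swap one Test sample between the highest and lowest subject, as in my case.
hi = max(range(n), key=test.__getitem__)
lo = min(range(n), key=test.__getitem__)
test[hi], test[lo] = test[lo], test[hi]
ci_after = ci90(test, ref)

print(ci_before, ci_after)
```

The point estimate barely moves (the swap does not change the sum of the differences), but the residual variance explodes and the CI widens accordingly, which is exactly how a single pair of exchanged samples can sink an otherwise clean study.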

❝ I don’t like the attitude of “increase the sample size” as the first (and sometimes only) remedy for problems.


OK, I understand your workflow. Around here we try to be more conservative, because the estimators tend to improve when the sample size is at least reasonable. When the drug is unknown we use at least 24 subjects in pilot studies. For many that can be considered a "pricey guess", but in fact it is not, when we look at the drugs currently being studied. A few years ago I did some research in my database to try to understand what the ideal N for a first study would be, for a variety of "groups" of CVintra. What we discovered is that 24 subjects would make it possible to predict 71% of the cases of true bioequivalence, or even non-bioequivalence. For some sponsors 24 is a high number to invest in an exploratory study. For me it is not, since there is no way to make an omelet without breaking a few eggs.
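For readers who want to see how N scales with CVintra, the standard normal-approximation formula for a 2×2 crossover can be sketched as below. This is only a rough sketch under my assumptions (GMR 0.95, 80% power, conventional 80.00–125.00% limits); an exact t-based calculation such as R's PowerTOST gives slightly larger numbers, and it does not reproduce the 71% figure from my database:

```python
import math
from statistics import NormalDist

def n_2x2_crossover(cv, gmr=0.95, alpha=0.05, power=0.80, upper_limit=1.25):
    """Approximate total sample size for a 2x2 crossover BE study
    (normal approximation; slightly optimistic vs an exact t-based result)."""
    s2 = math.log(cv ** 2 + 1.0)            # within-subject variance on log scale
    z_a = NormalDist().inv_cdf(1.0 - alpha)
    z_b = NormalDist().inv_cdf(power)
    margin = math.log(upper_limit) - abs(math.log(gmr))
    n = math.ceil((z_a + z_b) ** 2 * 2.0 * s2 / margin ** 2)
    return n + (n % 2)                      # round up to an even total

for cv in (0.15, 0.20, 0.30):
    print(cv, n_2x2_crossover(cv))
```

With these assumptions a CVintra of 20% already lands near the two-dozen mark once a dropout reserve is added, which is one way to rationalize a 24-subject default for an unknown drug.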

❝ I wish that a one month internship at a CRO becomes mandatory for assessors. Maybe then they would realize that the number 96 (or 200?) means human subjects.


Agreed! It’s a good point of view based on ethics.

❝ Nope. GxP deals with documentation, not correctness. You might be fully compliant with GxP and produce only rubbish.

I understand the formal concept. But taking it a little further, IMHO GxP is in fact about good execution practice. If you have tons of documents that do not help you avoid mistakes, then there is no good practice.

❝ True. That’s why I’m not in favor of “nice” profiles per se but consistency.


I am also in favor of consistency of profiles. However, we have to be very careful when judging by models alone, because what may seem inconsistent could be explained by some variable that has not been incorporated into the model. After all, in philosophy one of the few absolute certainties is that we don't know everything.

Thanks and best regards.
