BE-proff ● 2016-07-30 20:21 Posting: # 16529
Hi All,

Say I have conducted a BE study which does not show bioequivalence. Is it possible to understand the reason for this failure: a poor formulation, or incorrect sampling time points in the protocol?

Thank you
ElMaestro ★★★ Denmark, 2016-07-31 01:15 @ BE-proff Posting: # 16530
Hi BE-Proff,

❝ Saying, I have conducted a BE-study which doesn't show bioequivalence.

Yes, that can happen.

❝ Is it possible to understand what was the reason of this failure - poor formulation or incorrect time points in the protocol?

In practice you often can't tell directly what caused the problem. A few rules of thumb:

1. If the CI is completely outside the acceptance range (e.g. 54.66% - 77.19%), then you can say you have shown bioinequivalence. In this case it is, statistically, a formulation problem. At least.

2. If the CI is not within the acceptance range but the point estimate is "close" to 1.0, then your trial was most likely underpowered, and you would in this case be justified in repeating it.

3. If the CI is not within the acceptance range and the point estimate is far from 1.0, then there is really no good way of telling what happened or how to remedy it. Perhaps you can repeat the study, perhaps you shouldn't; it will always be a difficult discussion.

Unfortunately, it is pt. 3 that happens most often. Look also at predose concentrations, subjects lacking AUCinf values, protocol deviations, and failed analytical runs; these may indicate a practical issue. I believe there are people much more experienced than I am who could write entire books about what to do when a study fails and when to repeat it. I hope the few points above are helpful. Let's hear your numbers and some background on your failure, please.

—
Pass or fail!
ElMaestro
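[Editorial illustration, not part of the original post.] The three rules of thumb above can be sketched as a small triage function. The acceptance range 0.80-1.25 is the conventional one; the "close to 1.0" window (0.90-1.11) is an assumption chosen purely for illustration:

```python
def classify_failed_be(ci_lo, ci_hi, pe,
                       accept=(0.80, 1.25), pe_close=(0.90, 1.1111)):
    """Rough triage of a 2x2 BE study, following the rules of thumb above.

    ci_lo, ci_hi : bounds of the 90% CI for the T/R ratio (e.g. 0.5466, 0.7719)
    pe           : point estimate of the T/R ratio
    """
    lo, hi = accept
    if lo <= ci_lo and ci_hi <= hi:
        return "BE shown"                   # not a failure at all
    if ci_hi < lo or ci_lo > hi:
        return "bioinequivalence shown"     # rule 1: a formulation problem
    if pe_close[0] <= pe <= pe_close[1]:
        return "likely underpowered"        # rule 2: repeating may be justified
    return "inconclusive"                   # rule 3: hard to tell
```

For the CI quoted in the post, `classify_failed_be(0.5466, 0.7719, 0.65)` falls under rule 1, since the whole CI lies below 0.80.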
BE-proff ● 2016-08-02 19:06 @ ElMaestro Posting: # 16535
Hi ElMaestro,

Great and really useful information! As for the figures, I will try to look through the available resources.
DavidManteigas ★ Portugal, 2016-08-08 13:18 @ BE-proff Posting: # 16541
I usually look at the individual test-to-reference ratios. They can be very informative regarding the variability of the study drug and help you understand what went wrong. For instance, if almost all your T/R ratios fall below 1, that may indicate that absorption of the test drug is in fact lower than that of the reference product; on the other hand, if they are equally distributed above and below 1 - even if some of the ratios are "outliers" - you have strong evidence that the study was really underpowered and that the drug is possibly highly variable. Post-hoc power calculations may also be useful. In my opinion, using these data is more informative than looking at the CI alone.
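[Editorial illustration, not part of the original post.] The individual-ratio check described above could be sketched like this; the per-subject AUC values are made up for the example:

```python
import math

def summarize_tr_ratios(auc_t, auc_r):
    """Summarize individual T/R ratios from per-subject AUCs (same subject order)."""
    ratios = [t / r for t, r in zip(auc_t, auc_r)]
    below = sum(r < 1 for r in ratios)
    # geometric mean ratio: mean on the log scale, then back-transformed
    gmr = math.exp(sum(math.log(r) for r in ratios) / len(ratios))
    return {"ratios": ratios, "n": len(ratios), "n_below_1": below, "gmr": gmr}

# Hypothetical example: nearly all ratios below 1 hints at lower test absorption,
# whereas a roughly even split around 1 would point towards an underpowered study.
s = summarize_tr_ratios([80, 75, 90, 70, 85, 78], [100, 95, 100, 98, 96, 99])
```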
d_labes ★★★ Berlin, Germany, 2016-08-09 10:31 @ DavidManteigas Posting: # 16543
Dear David,

❝ ... Post-hoc power calculations may also be useful.

Post-hoc power is useless! Search the forum to find numerous discussions on that topic. Have also a look at Helmut's lectures, especially about power and sample-size estimation, for instance this one: "Sample Size Challenges in BE Studies and the Myth of Power"

—
Regards,
Detlew
ElMaestro ★★★ Denmark, 2016-08-09 15:00 @ d_labes Posting: # 16545
Hi Detleffff and David,

❝ Post-hoc power is useless! Search the forum to find numerous discussions on that topic.
❝ Have also a look at Helmut's lectures, especially about power and sample size estimation.
❝ For instance this one: "Sample Size Challenges in BE Studies and the Myth of Power"

Haha, this forum is sometimes a venue for extremists. Upon careful deliberation together with my crew, I have decided to award you both a full point, and neither of you is lined up for keelhauling or flogging at the moment.

a. Detlefff, it is very true that post-hoc power is often useless. It is too often used in a way that is not particularly informative, where clinical managers present the PHP merely to cling on to some dumb argument for why they have conducted a good trial (even if it failed, of course).

b. David, I think you are trying to make a point which was not picked up by Detleffff: if the trial fails, it could be due to chance, and it might suggest that some of our assumptions were not met. In such a situation we could plug the observed CV and observed GMR into the power equation, and I can see your point; it may actually be one little step towards a better understanding. We might achieve the same by looking separately at the CV and the observed GMR, but certainly PHP is better than nothing in this situation.

Having said this, I, too, am not much in favour of PHP; I am only writing this post because PHP could, exceptionally, have some limited use, as David pointed out, even though the alternatives to PHP in that situation would give at least the same information, if not more. But the more ways we can look at failures, the better we will understand them.

A good day to you both. Amen.

—
Pass or fail!
ElMaestro
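[Editorial illustration, not part of the original post.] "Plugging the observed CV and GMR into the power equation" might look like this for a 2x2 crossover, using the common noncentral-t approximation of TOST power (an exact calculation would use Owen's Q function, as e.g. the PowerTOST R package does):

```python
import math
from scipy import stats

def posthoc_power(gmr, cv, n, alpha=0.05, limits=(0.80, 1.25)):
    """Approximate TOST power for a 2x2 crossover (noncentral-t approximation).

    gmr : observed geometric mean ratio (T/R)
    cv  : observed intra-subject CV (e.g. 0.30 for 30%)
    n   : total number of subjects, balanced sequences assumed
    """
    sw = math.sqrt(math.log(cv**2 + 1.0))   # intra-subject SD on the log scale
    se = sw * math.sqrt(2.0 / n)            # SE of the difference of log means
    df = n - 2
    tcrit = stats.t.ppf(1 - alpha, df)
    d1 = (math.log(gmr) - math.log(limits[0])) / se  # noncentrality vs. lower limit
    d2 = (math.log(gmr) - math.log(limits[1])) / se  # noncentrality vs. upper limit
    pwr = stats.nct.cdf(-tcrit, df, d2) - stats.nct.cdf(tcrit, df, d1)
    return max(0.0, pwr)
```

With GMR 0.95 and CV 30%, 40 subjects give roughly 80% power, while 12 subjects are clearly underpowered, which is exactly the kind of reading-off David describes.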
d_labes ★★★ Berlin, Germany, 2016-08-09 16:30 @ ElMaestro Posting: # 16549
Dear ElMaestro,

wise words, as always. But I think you don't have to defend David; he is "Manns genug" (man enough) himself. And suggesting alternatives to PHP, confessing not to be a friend of PHP, but stating

❝ ... but certainly PHP is better than nothing in this situation.

is in my eyes a little bit schizophrenic.

Thanks for your verdict:

❝ ... none of you are lined up for keelhauling or flogging at the moment.

—
Regards,
Detlew
ElMaestro ★★★ Denmark, 2016-08-09 18:17 @ d_labes Posting: # 16550
Dear Detlefff,

❝ And suggesting alternatives to PHP, confessing of being not a friend of PHP but stating
❝ ❝ ... but certainly PHP is better than nothing in this situation.
❝ is in my eyes a little bit schizophrenic.

Yup, that's what years of Schützomycin abuse did to me.

—
Pass or fail!
ElMaestro
DavidManteigas ★ Portugal, 2016-08-09 15:51 @ d_labes Posting: # 16546
Dear Detlew,

If post-hoc power is useless, why are power calculations used in the Potvin methods for sequential designs? There is nothing wrong with post-hoc power calculations, imo. What is wrong is the conclusions you draw from them, and looking at post-hoc power alone to justify a failure. As I stated, for studies that fail to demonstrate bioequivalence, a lot of other methods could be used to "dig" into the failure. As ElMaestro stated, GMRs and CVs may be more informative on their own, but in fact I believe they all lead to a similar conclusion. When I look at post-hoc power, GMRs and CVs, what I am trying to check is whether the problem was in the formulation or in the number of subjects included in the study.

That being said, power calculations or CVs alone are not how I usually check data to explain failures. As I stated, I like to check the individual ratios and to perform sensitivity analyses by removing "extreme" subjects and checking how the overall conclusion is affected. Sometimes the explanation for those failures lies in those extreme subjects, and discussions with clinical operations may reveal problems not evident from the data collected. Pooling data from several studies conducted in the same unit is also useful to check for potential "trends" in outlying values, although I have never had the opportunity to do that.

Regards,
David
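[Editorial illustration, not part of the original post.] The sensitivity analysis described above (removing "extreme" subjects and re-checking the conclusion) could be sketched as a leave-one-out loop over the individual ratios. The data are hypothetical, and a real analysis would refit the full ANOVA model rather than just recompute a geometric mean:

```python
import math

def loo_gmr(ratios):
    """Leave-one-out geometric mean ratios: how much does each subject move the GMR?"""
    logs = [math.log(r) for r in ratios]
    full_gmr = math.exp(sum(logs) / len(logs))
    impact = []
    for i in range(len(logs)):
        rest = logs[:i] + logs[i + 1:]
        gmr_i = math.exp(sum(rest) / len(rest))
        # (subject index, GMR without this subject, shift caused by removal)
        impact.append((i, gmr_i, gmr_i - full_gmr))
    return full_gmr, impact

# Hypothetical individual T/R ratios with one extreme subject (index 3)
full, impact = loo_gmr([0.95, 1.02, 0.98, 0.45, 1.05, 0.99])
most_influential = max(impact, key=lambda t: abs(t[2]))
```

Here removing subject 3 pulls the GMR from well below 1 back to nearly 1, flagging that subject for a discussion with clinical operations.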
d_labes ★★★ Berlin, Germany, 2016-08-09 16:11 @ DavidManteigas Posting: # 16548
Dear David,

❝ If post-hoc power is useless why power calculations are used in Potvin methods for sequential designs?

This is not post-hoc power but a tool guiding us through Potvin's decision schemes. You don't use the GMR from stage 1 in those power calculations, nor in the sample-size estimation for stage 2.

❝ There is nothing wrong with post-hoc power calculations imo.

Sorry, here I don't share your opinion, as exemplified in numerous posts in this forum and in the literature cited within them.

Full ACK with all the other points you made about the post-mortem analysis of a failed BE study.

—
Regards,
Detlew
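[Editorial illustration, not part of the original post.] The distinction made above - power as a navigation tool with a fixed planning GMR of 0.95, never the observed stage-1 GMR - might be sketched like this for a Potvin "Method B"-style second-stage decision. This is a rough sketch only (noncentral-t power approximation, adjusted alpha 0.0294, 80% target power); consult the Potvin et al. papers or the PowerTOST package for the real decision schemes:

```python
import math
from scipy import stats

def power_2x2(gmr, cv, n, alpha):
    """Approximate TOST power for a 2x2 crossover (noncentral-t approximation)."""
    sw = math.sqrt(math.log(cv**2 + 1.0))
    se = sw * math.sqrt(2.0 / n)
    df = n - 2
    tcrit = stats.t.ppf(1 - alpha, df)
    d1 = (math.log(gmr) - math.log(0.80)) / se
    d2 = (math.log(gmr) - math.log(1.25)) / se
    return max(0.0, stats.nct.cdf(-tcrit, df, d2) - stats.nct.cdf(tcrit, df, d1))

def after_failed_stage1(cv_obs, n1, alpha=0.0294, target=0.80):
    """Stage 1 (evaluated at the adjusted alpha) has failed: stop, or go to stage 2?

    Note the planning GMR is FIXED at 0.95; the observed stage-1 GMR is not used,
    which is why this is navigation, not post-hoc power.
    """
    if power_2x2(0.95, cv_obs, n1, alpha) >= target:
        return "stop: fail"          # the study had enough power, yet BE failed
    n = n1 + 2
    while power_2x2(0.95, cv_obs, n, alpha) < target and n < 200:
        n += 2                       # step by 2 to keep sequences balanced
    return f"stage 2: {n - n1} more subjects (total {n})"
```

Only `cv_obs` comes from stage 1; the observed GMR never enters the calculation.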