jag009 ★★★ NJ, 2017-03-20 05:42 (2925 d 10:57 ago) Posting: # 17166 Views: 25,604 |
|
Hi all,
Just want to know what your experience is on this matter. The FDA stated that "Special Considerations: Applicants may consider using a reference-scaled average bioequivalence approach for x drug. If using this approach, the applicant should provide evidence of high variability (i.e., within-subject variability ≥30%) in bioequivalence parameters. Applicants who would like to use this approach are encouraged to submit a protocol for review by the Division of Bioequivalence in the Office of Generic Drugs."
The above implies that one can conduct replicate studies (RSABE) if he/she has supportive data (ISCV ≥ 30%). What if one conducted pilot studies and found that the ISCV is like 28/29%? My consensus still would be to proceed with the RSABE approach since it is a mixed approach which allows both RSABE (if Ref SD ≥ 0.294) and ABE analysis (if Ref SD < 0.294).
John |
ElMaestro ★★★ Denmark, 2017-03-20 07:55 (2925 d 08:44 ago) @ jag009 Posting: # 17167 Views: 24,366 |
|
Hi jag009,
❝ What if one conducted pilot studies and found that the ISCV is like 28/29%? My consensus still would be to proceed with the RSABE approach since it is a mixed approach which allows both RSABE (if Ref SD ≥ 0.294) and ABE analysis (if Ref SD < 0.294).
That's valid, but if you are not really sure whether you are just "borderline" then the additional overhead associated with the replicated design may not be worth it. You find an intra-CVR of 31%, and you scale the limits a wee bit etc. But the price you paid for this moderate scaling option could be much higher than what you'd be paying for a conventional 222BE trial with a few extra volunteers. Would be interesting to discuss an objective function here. This isn't the answer, but just a view.
— Pass or fail! ElMaestro |
Helmut ★★★ Vienna, Austria, 2017-03-20 14:12 (2925 d 02:27 ago) @ ElMaestro Posting: # 17169 Views: 24,628 |
|
Hi ElMaestro & John,
❝ ❝ My consensus still would be to proceed with the RSABE approach since it is a mixed approach which allows both RSABE (if Ref SD ≥ 0.294) and ABE analysis (if Ref SD < 0.294).
❝ That's valid, but if you are not really sure whether you are just "borderline" then the additional overhead associated with the replicated design may not be worth it. You find an intra-CVR of 31%, and you scale the limits a wee bit etc. But the price you paid for this moderate scaling option could be much higher than what you'd be paying for a conventional 222BE trial with a few extra volunteers.
I lean towards John's idea. Say you estimated the 31% from a previous 2×2×2 study in 40 subjects and assume that CVwR = CVwT = CVw. The 95% CI of the CV is 25.1–40.6%. Hence, in the best case (high CVwR) the FDA's implied BE-limits would be 70.58–141.69%. That's rather substantial.
❝ Would be interesting to discuss an objective function here
Some ideas: The number of treatments in a 2×2×4 study is roughly the same as in a 2×2×2 study – if we don't plan for reference-scaling (which I likely would do in a borderline case).
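As a cross-check of those numbers, a sketch with PowerTOST (the degrees of freedom and the use of CVCL/CV2se are assumptions about how the interval and the implied limits were derived):
library(PowerTOST)
# 95% confidence interval of a CV of 31% estimated in a 2x2x2 study
# with 40 subjects (df = 40 - 2)
CVCL(CV = 0.31, df = 40 - 2, side = "2-sided", alpha = 0.05)
# FDA's 'implied' scaled limits if CVwR sat at the upper confidence limit:
# exp(+/- log(1.25)/0.25 * swR), with swR = sqrt(log(1 + CV^2))
swR <- CV2se(0.406)
round(100 * exp(c(-1, +1) * log(1.25) / 0.25 * swR), 2)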
I would not be worried about the higher chance of dropouts in the replicate study. The impact on power is low (unless the drug is nasty and dropouts are caused by AEs).
Calculation of costs is complicated (fixed & variable costs, blahblah). In my CRO we had a spreadsheet with 100 rows… Had a quick look (19 samples / period, last sampling 16 hours; 2×2×2 in 42 subjects vs. 2×2×4 in 22). The replicate design would be ~4% cheaper.
— Dif-tor heh smusma 🖖🏼 Довге життя Україна! Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes |
VStus ★ Poland, 2017-03-21 13:49 (2924 d 02:49 ago) @ Helmut Posting: # 17172 Views: 24,284 |
|
Hi Helmut,
We were able to get more than a 4% discount for a replicate versus an equivalent '2x2'. Our internal belief is that a replicate is cheaper, but will take more time... And time is money... We would rather do a replicate study than a large '2x2' with 2 or more groups, due to the capacity of the clinic (if the wash-out is not long).
Regards, VStus |
Dr_Dan ★★ Germany, 2017-03-21 14:06 (2924 d 02:33 ago) @ VStus Posting: # 17173 Views: 24,111 |
|
Dear VStus,
For your calculation please keep in mind: the more study periods you have, the higher the risk of dropouts.
— Kind regards and have a nice day Dr_Dan |
Helmut ★★★ Vienna, Austria, 2017-03-21 15:50 (2924 d 00:49 ago) @ Dr_Dan Posting: # 17176 Views: 24,116 |
|
Hi Dan,
I agree with VStus.
❝ For your calculation please keep in mind: the more study periods you have, the higher the risk of dropouts.
As I wrote above:
❝ ❝ I would not be worried about the higher chance of dropouts in the replicate study. The impact on power is low (unless the drug is nasty and dropouts are caused by AEs).
Try:
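A sketch of the kind of check meant, using PowerTOST (the CV, the sample size and the number of dropouts are illustrative assumptions based on the figures above):
library(PowerTOST)
CV <- 0.31                                     # assumed, as discussed above
# 2x2x4 full replicate dosed with 22 subjects (cf. the cost example above)
power.TOST(CV = CV, theta0 = 0.95, n = 22, design = "2x2x4")
# two dropouts: the loss in power is modest
power.TOST(CV = CV, theta0 = 0.95, n = 20, design = "2x2x4")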
— Dif-tor heh smusma 🖖🏼 Довге життя Україна! Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes |
d_labes ★★★ Berlin, Germany, 2017-03-21 16:16 (2924 d 00:23 ago) @ Helmut Posting: # 17177 Views: 24,013 |
|
Dear All!
Don't confuse the usage of "expected power" here by Helmut with the functions in package PowerTOST which estimate the expected power and/or the sample size based on expected power: some sort of Bayesian power, taking into account the uncertainty of an observed point estimate of T vs. R and/or the uncertainty of an observed CV. See for instance ?exppower.TOST and the references given in that man page.
— Regards, Detlew |
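For illustration, a minimal sketch of the distinction (all numbers are assumptions; the prior.parm specification follows my reading of the man page and should be checked against ?exppower.TOST):
library(PowerTOST)
# conditional power: the observed CV of 31% is taken as the true value
power.TOST(CV = 0.31, theta0 = 0.95, n = 42, design = "2x2")
# expected power: the uncertainty of that CV (estimated with 38 degrees of
# freedom in the previous 2x2x2 study) is taken into account
exppower.TOST(CV = 0.31, theta0 = 0.95, n = 42, design = "2x2",
              prior.type = "CV", prior.parm = list(df = 38))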
VStus ★ Poland, 2017-03-23 15:24 (2922 d 01:14 ago) @ Helmut Posting: # 17180 Views: 24,056 |
|
Dear Helmut, Dear Colleagues,
It even hurts less (or doesn't hurt at all) in case of scaled bioequivalence. Last chunk of code modified:
CV <- 0.42
Best regards, VStus |
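A sketch of how such a check could look for scaled BE, assuming PowerTOST's power.scABEL, a 2x2x4 design and an illustrative sample size (only CV <- 0.42 is taken from the post above):
library(PowerTOST)
CV <- 0.42
# hypothetical 2x2x4 full replicate dosed with 30 subjects
power.scABEL(CV = CV, theta0 = 0.90, n = 30, design = "2x2x4")
# three dropouts: with the widened (scaled) limits the loss in power is small
power.scABEL(CV = CV, theta0 = 0.90, n = 27, design = "2x2x4")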
mahmoud-teaima ★ 2017-03-23 09:17 (2922 d 07:22 ago) @ Helmut Posting: # 17178 Views: 25,242 |
|
Hi Helmut,
❝ Calculation of costs is complicated (fixed & variable costs, blahblah). In my CRO we had a spreadsheet with 100 rows… Had a quick look (19 samples / period, last sampling 16 hours; 2×2×2 in 42 subjects vs. 2×2×4 in 22). The replicate design would be ~4% cheaper.
Can you please help me with an approximate estimation of the cost of a bioequivalence study? Would you please send me the template sheet that you use for such calculations?
Greetings.
— Mahmoud Teaima, PhD. |
M.tareq ☆ 2017-05-07 23:21 (2876 d 18:18 ago) @ Helmut Posting: # 17314 Views: 23,493 |
|
Hi all,
From an ethical point of view regarding a replicate study vs. a standard crossover:
First case: the published data suggest a slightly borderline CV of 29–31%. The required sample size will be around 42 volunteers without accounting for dropouts. However, if the study is replicated, whether partially (around 33 volunteers) or fully (around 22 volunteers), power is maintained at at least 80% without scaling to the reference's variability. In the standard design a higher number of volunteers will be dosed, but only for 2 periods; in the replicate design fewer subjects will be dosed, but for more periods. My question is about the ethical view of such an approach vs. the sponsor's risk due to an inadequate design.
Second case: regarding drugs which aren't stated to be HVD(P)s (from previous studies or assessment reports): suppose the CV was found to be around 20% and the sponsor or CRO wishes to go for a replicate design as safe planning in case the drug shows a high CV. What are the regulatory views regarding that matter? The EMA guideline, regarding replicate designs and widening of the acceptance range for Cmax, states that the drug must be known to be an HVD(P) with CV > 30%, and not as a result of outliers. So, from an ethical point of view, given the drug isn't an HVD(P) and it's stated in the protocol that if the CV is less than 30% the normal acceptance criteria will be applied, why the need for the extra periods/dosing of volunteers from an ethical/scientific point of view?
Thanks in advance |
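The sample sizes quoted above can be checked along these lines (a sketch with PowerTOST; the T/R ratio of 0.95 and the target power of 80% are assumptions):
library(PowerTOST)
# plain ABE (no scaling), CV 31%
sampleN.TOST(CV = 0.31, theta0 = 0.95, targetpower = 0.80, design = "2x2")    # standard crossover
sampleN.TOST(CV = 0.31, theta0 = 0.95, targetpower = 0.80, design = "2x3x3")  # partial replicate
sampleN.TOST(CV = 0.31, theta0 = 0.95, targetpower = 0.80, design = "2x2x4")  # full replicate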
Helmut ★★★ Vienna, Austria, 2017-05-08 16:21 (2876 d 01:18 ago) @ M.tareq Posting: # 17316 Views: 23,669 |
|
Hi M.tareq,
❝ First case: the published data suggest a slightly borderline CV of 29–31% […]
❝ My question is about the ethical view of such an approach vs. the sponsor's risk due to an inadequate design.
For ABE I don't see a general advantage of replicate designs, except if one expects low dropout-rates. When the dropout-rate exceeds ~5% the loss in power (more periods) might be substantial. Try this one:
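A sketch of such a check with PowerTOST (the CV and the planned sample size are assumptions):
library(PowerTOST)
CV      <- 0.31
n.plan  <- 22                               # 2x2x4 full replicate, dosed
do.rate <- seq(0, 0.20, 0.05)               # anticipated dropout-rates
n.elig  <- floor(n.plan * (1 - do.rate))    # eligible subjects
# PowerTOST notes when n is not balanced between the sequences
pwr     <- sapply(n.elig, function(n)
             power.TOST(CV = CV, theta0 = 0.95, n = n, design = "2x2x4"))
data.frame(do.rate = do.rate, n = n.elig, power = signif(pwr, 4))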
There is an exception: If you suspect an HVD(P) you should perform the pilot study in a replicate design to get an estimate of CVwR (and CVwT in the full replicates), which is needed for estimating the sample size of the pivotal study. If the pilot study was a simple crossover you have to assume that CVw = CVwR = CVwT, which might be false. For borderline cases (no reference-scaling intended but one suspects that the CV might be higher than assumed) a Two-Stage Design can be a good alternative. A while ago I reviewed a manuscript exploring the pros and cons of TSDs vs. ABEL. Was very interesting and I hope that the authors submit a revised MS soon.
❝ Second case: regarding drugs which aren't stated to be HVD(P)s (from previous studies or assessment reports)
❝ suppose the CV was found to be around 20% and the sponsor or CRO wishes to go for a replicate design as safe planning in case the drug shows a high CV
"Just in case" never works.
❝ […] The EMA guideline, regarding replicate designs and widening of the acceptance range for Cmax, states that the drug must be known to be an HVD(P) with CV > 30% …
Exactly. You have to state in the protocol that you intend reference-scaling. Furthermore, you have to give a justification that the widened acceptance range for Cmax is of no clinical relevance. IMHO, that's bizarre (the FDA for good reasons doesn't require it). HVD(P)s are safe and efficacious despite their high variability since their dose-response curves are flat. The fact that the originator's drug was approved (no problems in phase III) and is on the market for years demonstrates that there are no safety/efficacy issues.
❝ … and not as a result of outliers
Nobody knows how to deal with this story.
❝ So, from an ethical point of view, given the drug isn't an HVD(P) and it's stated in the protocol that if the CV is less than 30% the normal acceptance criteria will be applied, …
It doesn't work that way. State in the protocol that you intend to scale and follow the EMA's conditions. If CVwR ≤ 30% don't scale; otherwise apply ABEL.
❝ Why the need for the extra periods/dosing of volunteers from an ethical/scientific point of view?
For the decision (ABE or ABEL) based on CVwR a replicate design is required.
— Dif-tor heh smusma 🖖🏼 Довге життя Україна! Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes |
nobody nothing 2017-05-08 16:52 (2876 d 00:47 ago) @ Helmut Posting: # 17317 Views: 23,369 |
|
❝ ❝ … and not as a result of outliers
❝ ❝ Nobody knows how to deal with this story.
... sorry, no. But this "it's safe because it has been shown to be safe in clinical practice over the last decades" argumentation (question one above) never really went well in the EU, I guess. At least you need to write a new, Chapter 2.5-like overview of the PK, pharmacology and safety to make the assessor sleep well the next night after reading your application for a generic. On the other hand, assessors for originators' MAs have a much, much better sleep (better sleeping pills?), in my experience...
Edit: Please don't shout! [Helmut]
— Kindest regards, nobody |
Helmut ★★★ Vienna, Austria, 2017-05-10 16:10 (2874 d 01:28 ago) @ nobody Posting: # 17343 Views: 23,413 |
|
Hi nobody,
❝ ❝ Nobody knows how to deal with this story.
❝ ❝ ... sorry, no.
Didn't mean you. Nobody ≠ nobody (case sensitive).
❝ […] At least you need to write a new, Chapter 2.5-like overview of the PK, pharmacology and safety to make the assessor sleep well the next night after reading your application for a generic. On the other hand, assessors for originators' MAs have a much, much better sleep (better sleeping pills?), in my experience...
Well, maybe assessors should take stimulants instead of sleeping pills. Les Benet's claim "HVD(P)s are safe drugs" was based on two arguments:
Reference-scaling for the innovator’s product
Reference-scaling for a generic product
— Dif-tor heh smusma 🖖🏼 Довге життя Україна! Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes |
M.tareq ☆ 2017-05-15 01:04 (2869 d 16:35 ago) @ Helmut Posting: # 17353 Views: 23,142 |
|
Hi Helmut,
Thanks for your comprehensive reply and explanation.
❝ A while ago I reviewed a manuscript exploring the pros and cons of TSDs vs. ABEL. Was very interesting and I hope that the authors submit a revised MS soon.
Can I get the link or the name of the paper when the authors submit it?
In general, if a drug isn't known to be an HVD(P), and by the definition of HVD(P)s they have a wide therapeutic index, yet the CRO/sponsor went with a replicate design, how would the assessor consider that? I mean, if the published literature or a pilot study suggests a low CV of the reference/test product, yet the CRO/sponsor went with a replicate design? Like abusing the use of scABE?
❝ Nobody knows how to deal with this story.
Agree, yet I read an ANDA submission (can't find the link now) where the sponsor detected outliers based on the T/R-ratio of each subject and re-dosed such subjects along with other subjects who exhibited a normal PK profile, though it was after reviewing with the FDA.
Another published paper, about ibandronic acid (https://www.ncbi.nlm.nih.gov/pubmed/24756462): the sponsor/CRO stated the definition of outliers using studentized residuals and boxplots, eliminating subjects with values away from the boxplot by more than 3 IQR.
My point is, as you kindly said, it's best to state in the protocol how to deal with outliers, especially regarding the estimation of variability, and to show the assessor the reasons for excluding such outlier(s) from the study, or to review it with the regulator before submission of the data?
Thanks and Best Regards.
P.S.: Apologies for being an information/knowledge leecher atm, will try to get my seed/leech ratio up ^^ |
Helmut ★★★ Vienna, Austria, 2017-05-15 21:07 (2868 d 20:32 ago) @ M.tareq Posting: # 17355 Views: 23,205 |
|
Hi M.tareq,
❝ ❝ A while ago I reviewed a manuscript exploring the pros and cons of TSDs vs. ABEL. Was very interesting and I hope that the authors submit a revised MS soon.
❝ Can I get the link or the name of the paper when the authors submit it?
If the revised MS will get accepted and published, for sure.
❝ In general, if a drug isn't known to be an HVD(P), and by the definition of HVD(P)s they have a wide therapeutic index, yet the CRO/sponsor went with a replicate design.
Any replicate design can be assessed for conventional ABE. If the sponsor intends to try RSABE it has to be stated in the protocol. Might be worthwhile in borderline cases (CVwR ~30%).
❝ How would the assessor consider that? I mean, if the published literature or a pilot study suggests a low CV of the reference/test product, yet the CRO/sponsor went with a replicate design?
I think you are mixing two things up. Replicate designs are applicable for ABE as well. If the sponsor aims for RSABE – in contrast to data in the public domain (few exist!) which suggest low variability – as a regulator I would be cautious. Note that regulators see a lot of studies. Was the CVwR caused by poor study conduct? That's a gray zone. Pilot studies can be misleading. The estimated CV is not carved in stone. Try this one:
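A sketch of how soft a pilot estimate really is (the pilot's size and its CV are assumptions):
library(PowerTOST)
# CV of 25% estimated in a 2x2x2 pilot with 12 subjects (df = 10)
CVCL(CV = 0.25, df = 12 - 2, side = "2-sided", alpha = 0.05)
# the confidence interval is wide; a seemingly 'low' pilot CV does not
# rule out a CVwR above 30% (and vice versa)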
❝ Like abusing the use of scABE?
Maybe.
❝ ❝ Nobody knows how to deal with this story.
❝ ❝ Agree, yet I read an ANDA submission (can't find the link now) where the sponsor detected outliers based on the T/R-ratio of each subject and re-dosed such subjects along with other subjects who exhibited a normal PK profile, though it was after reviewing with the FDA.
Yes, that's an old story ("re-dose the suspected outliers – both with T and R – together with at least five 'normal' subjects or 20% of the sample size, whichever is larger"). IMHO, the individual T/R-ratios are not a particularly good idea for assessing outliers (ignoring period effects).
❝ Another published paper, about ibandronic acid (https://www.ncbi.nlm.nih.gov/pubmed/24756462) …
Highly variable like all bisphosphonates. Can't reproduce the estimated sample size. I got 132 and not 138.
❝ … the sponsor/CRO stated the definition of outliers using studentized residuals and boxplots, eliminating subjects with values away from the boxplot by more than 3 IQR.
Not surprising since Susana Almeida was co-chair of the Bioequivalence Working Group, European Generic and Biosimilar Medicines Association (EGA). At the joint EGA/EMA workshop (London, June 2010) I – as a joke! – suggested boxplots, which are nonparametric by nature. The EMA hates nonparametric statistics. But the panelists welcomed the "idea" and my joke made it to the Q&A document. My fault. Never presume humor. Now it's carved in stone.
This study demonstrated why not accepting reference-scaling for AUC might not be a good idea. Since the sample size is based on AUC, products with extreme T/R-ratios of Cmax would pass ABEL due to the high sample size. Given the results of this study: a 90% chance to pass with a T/R as low as 84.46%, or an 80% chance to pass with a T/R of 82.95%. Technically (i.e., according to the GL) nothing speaks against that. But do we want such products on the market? Wasn't the case in this study (T/R for Cmax 102.56%), but… BTW, for the FDA the study could have been performed in just 45 (!) subjects – without risking extreme Cmax-ratios.
❝ My point is, as you kindly said, it's best to state in the protocol how to deal with outliers, especially regarding the estimation of variability, and to show the assessor the reasons for excluding such outlier(s) from the study, or to review it with the regulator before submission of the data?
I would say so.
— Dif-tor heh smusma 🖖🏼 Довге життя Україна! Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes |
M.tareq ☆ 2017-05-15 23:02 (2868 d 18:36 ago) @ Helmut Posting: # 17356 Views: 22,929 |
|
Thanks a lot for your time and explanation. Best Regards |
DavidManteigas ★ Portugal, 2017-05-16 14:41 (2868 d 02:58 ago) @ Helmut Posting: # 17357 Views: 23,086 |
|
Hi Helmut,
For the sake of curiosity, in your opinion should a regulator approve a generic submitted as "highly variable" although all the previous trials reported low CVs? Technically, the criteria are well defined and no objection should be raised in principle. However, should a study that reports a high variation, when all the previous information reports otherwise, be considered scientifically sound for the demonstration of bioequivalence?
Ideally, regulators would publish in their product-specific guidelines which compounds could be considered "highly variable"...
Regards, David |
Helmut ★★★ Vienna, Austria, 2017-05-16 17:01 (2868 d 00:38 ago) @ DavidManteigas Posting: # 17358 Views: 23,276 |
|
Hi David,
❝ For the sake of curiosity, in your opinion should a regulator approve a generic submitted as "highly variable" although all the previous trials reported low CVs? Technically, the criteria are well defined and no objection should be raised in principle. However, should a study that reports a high variation, when all the previous information reports otherwise, be considered scientifically sound for the demonstration of bioequivalence?
Is the high variability enough to trigger an inspection? Likely. See Section 2 of the EMA's CMDh Guidance on triggers for inspections of bioequivalence trials: Quick scan.
Regulators should assess studies based on the "whole body of evidence". Would an assessor have the guts to reject an application without relevant findings in an inspection and risk being overruled by the CHMP in a referral? Dunno.
❝ Ideally, regulators would publish in their product-specific guidelines which compounds could be considered "highly variable"...
True. Takes a while. As of today there are only 36 (adopted + draft). Reference-scaling is recommended for capecitabine, levodopa/carbidopa/entacapone, and posaconazole. In all other cases applicants are left out in the rain with this footnote:
— Dif-tor heh smusma 🖖🏼 Довге життя Україна! Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes |
ElMaestro ★★★ Denmark, 2017-05-17 05:30 (2867 d 12:08 ago) @ DavidManteigas Posting: # 17361 Views: 23,054 |
|
Hi DM,
❝ For the sake of curiosity, in your opinion should a regulator approve a generic submitted as "highly variable" although all the previous trials reported low CVs? Technically, the criteria are well defined and no objection should be raised in principle. However, should a study that reports a high variation, when all the previous information reports otherwise, be considered scientifically sound for the demonstration of bioequivalence?
The CV we observe is influenced by bioanalytical variability and possibly by factors no one understands well. For example, it has not (to the best of my knowledge) been much studied if certain populations are more within-variable than others. It is in my opinion entirely likely that if we go about studying a drug product in different places we will observe different CVs for some reason or other, or by chance, even if we take great care with our experiment. Of course assessments should reflect the applicant's observations. If you have 20 reports with a CV of 11% and one with a CV of 41% then you start wondering.
Additional factors to consider: With time assays often get better, by and large; a CV obtained from a CRO using the API3000 10 years ago is in my opinion likely to be higher than if it were obtained today with an API6500 due to S:N phenomena (but I am not implying anything about "how much"), all other factors equal. The CV for Cmax (possibly also the CV for AUC) may perhaps only be justifiably compared if the time points for blood sampling are the same (they rarely are between studies).
❝ Ideally, regulators would publish in their product-specific guidelines which compounds could be considered "highly variable"...
They already did that in the general guideline. Those with an observed CV above 30%. Simple as that.
This is the sort of thing that is not covered by much literature and probably won't be for another 20 years?!
— Pass or fail! ElMaestro |
nobody nothing 2017-05-17 09:48 (2867 d 07:51 ago) @ ElMaestro Posting: # 17362 Views: 22,855 |
|
❝ ...If you have 20 reports with a CV of 11% and one with a CV of 41% then you start wondering.
OK, and (as a regulator) what are the options? Start an inspection of the CRO? Find some other "good reason" not to grant the MA to the applicant?
— Kindest regards, nobody |
nobody nothing 2017-05-17 11:31 (2867 d 06:08 ago) @ nobody Posting: # 17364 Views: 23,019 |
|
PS: No, I don't think that assay performance will do anything to the intraindividual CV of a drug product, as long as (1) not practically all samples are around the LLOQ and (2) effects are not random. Did some sims in the old days™: the effect of the assay, when validated according to current recommendations, is negligible. Especially for the "11% CV in 5 other trials, suddenly 40% in my trial" example there is no explanation remotely related to the assay.
— Kindest regards, nobody |
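A back-of-the-envelope version of that argument (a sketch; the assay CV of 7.5% is an assumption, the 11% within-subject CV is taken from the example above):
# independent log-normal components combine as
# CV.total = sqrt((1 + CV.within^2) * (1 + CV.assay^2) - 1)
CVw <- 0.11    # 'true' within-subject CV
CVa <- 0.075   # assumed analytical (assay) CV
sqrt((1 + CVw^2) * (1 + CVa^2) - 1)   # ~0.13, nowhere near 0.40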
DavidManteigas ★ Portugal, 2017-05-17 13:25 (2867 d 04:13 ago) @ nobody Posting: # 17366 Views: 23,004 |
|
Thank you all for your feedback. It would be interesting to study the within-study variation for the same compound in the same unit to understand which is the main source of variability. Although I understand ElMaestro's point about assay sensitivity, I don't think that could be pointed to as a cause for a study which presented high variability when other studies reported low variability. I understand the approach to highly variable drugs when the drug is actually highly variable per se and not due to study conduct or assay sensitivity. |
ElMaestro ★★★ Denmark, 2017-05-17 14:07 (2867 d 03:32 ago) @ DavidManteigas Posting: # 17367 Views: 23,017 |
|
Hi DM,
❝ It would be interesting to study the within-study variation for the same compound in the same unit to understand which is the main source of variability. Although I understand ElMaestro's point about assay sensitivity, I don't think that could be pointed to as a cause for a study which presented high variability when other studies reported low variability. I understand the approach to highly variable drugs when the drug is actually highly variable per se and not due to study conduct or assay sensitivity.
Then it all comes back to one of Helmut's favourite hobbies, the calculation of confidence intervals for variabilities. I think the conclusion generally is that perhaps the drug is highly variable and perhaps it isn't.
— Pass or fail! ElMaestro |
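For instance, a small simulation of how much observed CVs scatter around a fixed true value (a sketch; the true CV of 30% and the study size are assumptions):
# sample CVs from 2x2x2 studies with 24 subjects when the true CV is 30%
set.seed(123456)
CV.true <- 0.30
df      <- 24 - 2
s2      <- log(1 + CV.true^2) * rchisq(1e4, df) / df   # simulated MSEs
CV.obs  <- sqrt(exp(s2) - 1)
quantile(CV.obs, probs = c(0.05, 0.5, 0.95))
# even with a 'true' CV of 30%, single studies will report CVs well
# below and well above 30% just by chance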
kumarnaidu ★ Mumbai, India, 2017-07-11 16:43 (2812 d 00:55 ago) @ ElMaestro Posting: # 17533 Views: 22,064 |
|
Hi all,
Recently we did pilot and pivotal (partial replicate, i.e., reference replicated) studies for the WHO and we got high variability (CV = 32%). In the USFDA product-specific guidance they have suggested a 2x2 crossover design for this drug. Based on our prior experience, can we perform a reference-replicate study or do we need to go as per the guidance?
— Kumar Naidu |
ElMaestro ★★★ Denmark, 2017-07-11 16:56 (2812 d 00:43 ago) @ kumarnaidu Posting: # 17534 Views: 22,095 |
|
❝ Recently we did pilot and pivotal (partial replicate, i.e., reference replicated) studies for the WHO and we got high variability (CV = 32%). In the USFDA product-specific guidance they have suggested a 2x2 crossover design for this drug. Based on our prior experience, can we perform a reference-replicate study or do we need to go as per the guidance?
You probably have both options available. However, the gain with a CV of 32% is minute; I mean, if you have a CVr of 32% in a replicate study you are widening the limits so minutely that I think it isn't worth the effort. At the end of the day those (semi)replicated designs are still far less routine than the standard 222BE jobs. And IECs can have a lot of funny ideas when they review your protocol etc. That said, I'd like to know the background better before saying this is my final answer.
— Pass or fail! ElMaestro |
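To put a number on 'minute', a sketch with PowerTOST's scABEL() helper (shown with the package's default, EMA-type ABEL settings; whether those settings match the intended jurisdiction is an assumption to verify):
library(PowerTOST)
scABEL(CV = 0.32)   # expanded ABEL limits at CVwR = 32%
# roughly 78.9-126.8% instead of 80.00-125.00%: hardly worth the extra
# periods unless CVwR turns out to be clearly higher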