unique_one ☆ India, 2014-07-08 10:58 (3810 d 20:19 ago) Posting: # 13234 Views: 17,046 |
Dear All,

Results obtained from a 2-way crossover study in 28 volunteers are as follows:

90% CI AUC:  lower limit 98.35, upper limit 132.50
90% CI Cmax: lower limit 95.38, upper limit 127.80
T/R ratio of AUC:  112.2
T/R ratio of Cmax: 109.5
Intrasubject CV% AUC:  27.3
Intrasubject CV% Cmax: 28.2

Questions:
1. Primary reason for BE failure?
2. Would re-formulation be needed?
unique_one.

Edit: Category changed. [Helmut]
ElMaestro ★★★ Denmark, 2014-07-08 12:08 (3810 d 19:09 ago) @ unique_one Posting: # 13235 Views: 15,149 |
Hi unique_one,

Excellent example.

❝ Questions:
❝ 1. Primary reason for BE failure?

No one can tell. It might be a bioinequivalent product, or it might be a bioequivalent product. Your result cannot distinguish the two.

❝ 2. Would re-formulation be needed?

If you think the two products are bioinequivalent, then obviously yes. If you think the two products are bioequivalent, with the estimated GMR reflecting true product performance, then you might not be able to afford the sample size necessary for another study, in which case reformulation is necessary. You might opt for a repeat study, too. You can probably assume any GMR within the CIs (although assuming a GMR close to one of the extremes is likely a bit unwise and unethical; Hötzi rants are a possibility). There is little literature covering such situations, but I know people are working on it - publication within 12 months or so...

—
Pass or fail!
ElMaestro
unique_one ☆ India, 2014-07-08 12:42 (3810 d 18:35 ago) @ ElMaestro Posting: # 13236 Views: 15,117 |
Hello ElMaestro,

Thank you very much for the info!

However, you mentioned that, considering the obtained T/R ratio, a higher sample size would be required. Could you please let me know what the approximate sample size would be if the study is repeated?

Secondly, our formulation experts mentioned that since the T/R ratio is close to 110% for both parameters, reformulation might not be required in this case. Please advise.

Thanks,
unique_one.
Helmut ★★★ Vienna, Austria, 2014-07-08 14:00 (3810 d 17:17 ago) @ unique_one Posting: # 13237 Views: 15,258 |
Hi!

❝ However, you mentioned that, considering the obtained T/R ratio, a higher sample size would be required.
❝ Could you please let me know what the approximate sample size would be if the study is repeated?

I strongly suggest not to use values obtained in the study hoping that you get exactly the same ones in a repeated one. They are estimates, not “carved in stone”. Consider reading one of my presentations about sample size estimation. You will see that in your case (similar CVs of AUC and Cmax) the ratio is more important. To give you a first idea (using exactly the values for AUC): 78 subjects for 80% power and 108 for 90%. Get R and the package PowerTOST and run a sensitivity analysis according to ICH E9:

“The method by which the sample size is calculated should be given in the protocol, together with the estimates of any quantities used in the calculations (such as variances, mean values, […] difference to be detected). The basis of these estimates should also be given. It is important to investigate the sensitivity of the sample size estimate to a variety of deviations from these assumptions and this may be facilitated by providing a range of sample sizes appropriate for a reasonable range of deviations from assumptions.”

❝ unique_one.

Always remember that you are absolutely unique. Just like everyone else. (Margaret Mead)

—
Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
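As an illustration of the estimate quoted above, a minimal PowerTOST sketch (assuming the reported AUC values, CV 27.3% and T/R ratio 112.2%, in a conventional 2×2 crossover):

  library(PowerTOST)
  # expected sample sizes for the observed AUC estimates
  sampleN.TOST(CV = 0.273, theta0 = 1.122, targetpower = 0.80, design = "2x2")
  sampleN.TOST(CV = 0.273, theta0 = 1.122, targetpower = 0.90, design = "2x2")
  # sensitivity per ICH E9: power of a planned n under less favourable assumptions
  power.TOST(CV = 0.30, theta0 = 1.13, n = 78, design = "2x2")

The first two calls should reproduce roughly the 78 and 108 subjects mentioned above; the power.TOST() call is one way to explore how quickly power erodes when the CV or the ratio turns out worse than assumed.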
unique_one ☆ India, 2014-07-08 16:01 (3810 d 15:16 ago) @ Helmut Posting: # 13240 Views: 15,095 |
Dear Helmut,

Thanks for the info! I just wanted to add one more piece of information.

In the study, six subjects dropped out for different reasons. Therefore, only 22 evaluable subjects were available for PK calculations and BE evaluation. Could this have resulted in higher intrasubject variability, due to which the study may have failed?

Looking forward to hearing from you on the same.

Regards,
unique_one

Edit: Please don’t open new posts if you want to add something; I merged your follow-up post. You can edit your posts for 24 hours (see also the Forum’s Policy). [Helmut]
Helmut ★★★ Vienna, Austria, 2014-07-08 19:39 (3810 d 11:39 ago) @ unique_one Posting: # 13244 Views: 15,141 |
Hi,

❝ In the study, six subjects dropped out for different reasons.
❝ Therefore, only 22 evaluable subjects were available for PK calculations and BE evaluation.

6/28 – that’s a lot! Which were the “different reasons”? How many subjects completed each sequence? The latter information is only important if n1 ≠ n2 (imbalanced sequences).

❝ Could this have resulted in higher intrasubject variability, due to which the study may have failed?

Possible, but unlikely. With your remaining sample size the CVs and ratios should be pretty “stable”. To give you an impression we can calculate confidence limits of the CV. Let’s assume two cases: a CV of 28% obtained from 28 and from 22 subjects; upper CL, α 0.2:

 n    CL of CV (%)

As expected the estimate from 28 subjects is more precise (lower upper CL), but the gain compared to 22 subjects is not dramatic.

—
Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
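A sketch of this calculation with PowerTOST (assuming a CV of 28% and the n − 2 degrees of freedom of the conventional 2×2 analysis):

  library(PowerTOST)
  # upper confidence limit of the CV (alpha = 0.2) for 28 vs. 22 subjects
  CVCL(CV = 0.28, df = 28 - 2, side = "upper", alpha = 0.2)
  CVCL(CV = 0.28, df = 22 - 2, side = "upper", alpha = 0.2)

Both upper limits come out in the low thirties and are only about one percentage point apart, in line with the comment above that the loss of precision from 28 to 22 subjects is not dramatic.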
unique_one ☆ India, 2014-07-09 08:59 (3809 d 22:19 ago) @ Helmut Posting: # 13246 Views: 14,997 |
Hi Helmut,

Thank you very much for sharing your ideas!

The subject dropout reasons were as follows:
2 subjects withdrew due to personal reasons.
2 subjects dropped out due to protocol non-compliance.
1 subject dropped out due to an adverse event.
1 subject did not report for the second period of the study.

With regard to the imbalanced sequences, the details are:
12 subjects completed the RT sequence.
10 subjects completed the TR sequence.

Looking forward to your feedback.

Regards,
unique_one.
unique_one ☆ India, 2014-07-10 09:09 (3808 d 22:09 ago) @ unique_one Posting: # 13252 Views: 15,015 |
Dear Helmut,

Any update on the above query? I just wanted to know the impact of the dropout rate and the imbalanced sequences on the study.

Thanks and Regards,
unique_one.
d_labes ★★★ Berlin, Germany, 2014-07-08 14:03 (3810 d 17:15 ago) @ unique_one Posting: # 13238 Views: 15,222 |
Dear unique_one.

❝ Could you please let me know what the approximate sample size would be if the study is repeated?

Simple enough using PowerTOST:

Approach: The Believer’s (“carved in stone”) AUC

Approach: The Conservative’s AUC

Thus a relatively conservative estimate (even more conservative assumptions about CV and GMR are imaginable) of the sample size would be 156. Seldom have I seen such a high sample size in BE studies.

—
Regards,
Detlew
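The output of the two approaches is not reproduced above, but they can be sketched with PowerTOST along the following lines; the conservative CV (upper confidence limit) and the conservative GMR of 1.15 are illustrative assumptions, not necessarily the ones Detlew used:

  library(PowerTOST)
  # "Believer": take the observed AUC estimates at face value
  sampleN.TOST(CV = 0.273, theta0 = 1.122, targetpower = 0.80, design = "2x2")
  # "Conservative": inflate the CV to its upper confidence limit and
  # assume a somewhat worse GMR than the observed one
  CV.up <- CVCL(CV = 0.273, df = 28 - 2, side = "upper", alpha = 0.2)[2]
  sampleN.TOST(CV = CV.up, theta0 = 1.15, targetpower = 0.80, design = "2x2")

Whether this reproduces the 156 depends on the exact CV, GMR and target power assumed; the point is simply that conservative assumptions drive the required sample size up steeply.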
unique_one ☆ India, 2014-07-08 16:03 (3810 d 15:14 ago) @ d_labes Posting: # 13241 Views: 15,049 |
Dear d_labes,

Thanks for sharing information!

Regards,
unique_one
Shuanghe ★★ Spain, 2014-07-08 14:57 (3810 d 16:21 ago) @ unique_one Posting: # 13239 Views: 15,112 |
Hi unique_one,

❝ Secondly, our formulation experts mentioned that since the T/R ratio is close to 110% for both parameters, reformulation might not be required in this case.

I don't think going forward with the same formulation is a good idea. Apart from what Helmut and Detlew said, you can bootstrap the data set with a sample size of 28, repeated about 10,000 times, to get a sense of your data (ratio, ISCV, etc.), and also bootstrap the larger sample size estimated by Helmut/Detlew to see your chance of success.

—
All the best,
Shuanghe
unique_one ☆ India, 2014-07-08 16:08 (3810 d 15:09 ago) @ Shuanghe Posting: # 13243 Views: 15,155 |
Dear Shuanghe,

Thanks for sharing info!

Regards,
unique_one
Samaya B ☆ India, 2014-07-09 09:58 (3809 d 21:19 ago) @ unique_one Posting: # 13247 Views: 15,040 |
Dear Shuanghe,

Can you please explain in some more detail how to use a bootstrapped data set to get a sense of the data, and how to bootstrap with a larger sample size?

Thanks in advance.

Regards,
Samaya.
Shuanghe ★★ Spain, 2014-07-09 20:26 (3809 d 10:52 ago) @ Samaya B Posting: # 13250 Views: 15,094 |
Hi Samaya,

❝ Can you please explain in some more detail how to use a bootstrapped data set to get a sense of the data, and how to bootstrap with a larger sample size?

Many programs can perform bootstrapping. If you use SAS, try PROC SURVEYSELECT. Use METHOD=URS for sampling with replacement. You can specify the sample size (say 28) with SAMPSIZE=28 * and the number of repetitions (say 1000) with REPS=1000. Use subject as the selection unit with SAMPLINGUNIT subject;.
* Note that the sample size can be greater than your original data set, since this is sampling with replacement.

Now you have a data set with 1000 sets of data, each set having 28 subjects. You can do the BE analysis as usual (add a BY replicate statement in your SAS code), so for each replicate you'll have the T/R ratio, 90% CI, ISCV, etc. So you'll have 1000 of those now. Obviously the BE result of each replicate will be slightly different. Now you can see how they vary.

However, there are some drawbacks, such as repeated subject numbers, and sometimes the sequences are extremely unbalanced (e.g., for a certain replicate you might have 20 subjects with TR and 8 with RT). You can generate dummy subject numbers to replace the original ones, and remove replicates with extremely unbalanced sequences (whatever you think is unlikely to occur in real life; for example, in a BE study with 100 subjects (50 TR + 50 RT), after 15 dropouts 45 TR + 40 RT might be likely but 50 TR + 35 RT might not) before doing the BE analysis. Also, you can try adding some random numbers to your log-transformed PK data, based on the intra-subject variability of the formulation, before the analysis. But I'll leave it to you to try these. Please read the SAS manual for details.

—
All the best,
Shuanghe
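The same subject-level resampling can be sketched in R as a rough illustration of the idea described above; the data.frame layout (columns subject, sequence, period, treatment, PK) and the function name boot.be are assumptions for illustration only:

  # subject-level bootstrap of a 2x2 crossover (sampling subjects with replacement)
  boot.be <- function(data, n = 28, reps = 1000, seed = 123) {
    set.seed(seed)
    data$treatment <- factor(data$treatment, levels = c("R", "T"))
    data$period    <- factor(data$period)
    subj <- unique(data$subject)
    res  <- data.frame(PE = numeric(reps), lower = numeric(reps), upper = numeric(reps))
    for (i in seq_len(reps)) {
      picked   <- sample(subj, size = n, replace = TRUE)   # URS: a subject may recur
      rep.data <- do.call(rbind, lapply(seq_along(picked), function(j) {
        d <- data[data$subject == picked[j], ]
        d$subject <- j                                     # dummy ID avoids duplicate keys
        d
      }))
      # sequence is absorbed by the subject effects, so it is omitted here
      m  <- lm(log(PK) ~ factor(subject) + period + treatment, data = rep.data)
      ci <- exp(confint(m, "treatmentT", level = 0.90))    # 90% CI of T vs. R
      res[i, ] <- 100 * c(exp(coef(m)[["treatmentT"]]), ci[1], ci[2])
    }
    res
  }
  # e.g. summary(boot.be(mydata, n = 28, reps = 1000)) shows how ratio and CI vary

Filtering out replicates with extremely unbalanced sequences, as suggested above, could be added before the model fit.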
Weidson ☆ Brazil, 2021-07-03 03:36 (1259 d 03:42 ago) @ Shuanghe Posting: # 22455 Views: 6,919 |
Dear Shuanghe,

Would it be possible for you to publish the SAS code (or R code, if available) for your proposal? I would be very grateful. At the moment I am studying bioequivalence with alternative methods such as the bootstrap for the 2x2 crossover design. I wasn't able to find any SAS code available for bootstrapping crossover experiments (I'm not a programmer). I intend to use it to verify the robustness of a conclusion of bioinequivalence. Are there any members of this forum who have experience with bootstrap methods? Do we have any code that can be applied to failed crossover designs?

Best regards.
Helmut ★★★ Vienna, Austria, 2021-07-08 15:50 (1253 d 15:27 ago) @ Weidson Posting: # 22463 Views: 6,703 |
Hi Weidson,

❝ I wasn't able to find any SAS code available for bootstrapping crossover experiments.

No idea about SAS; a lengthy (157 lines) R script at the end. I simulated a failed study (underpowered). Output:
Note that we would need 22 subjects based on the simulation goalposts but 28 would be required based on the simulated data set. Though the PE was slightly ‘better’ (95.76% instead of 95%), the CV was ‘worse’ (25.58% instead of 22%). Hence, the bootstrapped result (with n = 22) is expected to be slightly underpowered (empiric power is the number of passing studies / the number of simulations). Show the first six rows of the simulated data and the bootstrapped studies as well as the structure of the data.frames:

Dunno why we need bootstrapping at all. We could directly use the results of the data set:

95.55% (84.43% – 108.07%). Granted, if real-world data deviate from lognormal (e.g., contain discordant outliers), the bootstrapped results (being nonparametric) may be more reliable.

❝ I intend to use it to verify the robustness of a conclusion of bioinequivalence.

Not sure what you mean by that. For any failed study the bloody post hoc power will be <50%. When you give the PE and CV to the function sampleN.TOST() you immediately get the number of subjects you would have needed to demonstrate BE (see the example above and lines 91–92 of the script).

❝ Are there any members of this forum who have experience with bootstrap methods?

ElMaestro.

❝ Do we have any code that can be applied to failed crossover designs?

I hope the one at the end helps (never done that before, bugs are possible). If you want to use your own data, ignore lines 19–44 and provide a data.frame of results (named data). The column names are mandatory: subject (integer), period (integer), sequence ("RT" or "TR"), treatment ("R" or "T"), PK (untransformed). Once you have that, use this code:

library(PowerTOST)

Edit: Couldn’t resist. I introduced an ‘outlier’ (divided the PK-value of subject 1 after R by three). Homework: Find out where to insert the following lines.

data$PK[1] <- data$PK[1] / 3

I got:

The simple (parametric) approach gave 104.94% (86.93% – 126.69%), which is wider than the bootstrapped 104.50% (87.58% – 124.98%). Reasonable at first sight because ANOVA is sensitive to outliers. Is this what you are interested in?

However, since we want to bootstrap larger studies, we have to sample with replacement (line 66). Some studies contain the outlier as well, but others none or even more than one. In my example:

If we desire a substantially larger sample size, I’m not convinced that bootstrapping is useful at all.

—
Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
ElMaestro ★★★ Denmark, 2021-07-09 16:15 (1252 d 15:02 ago) @ Helmut Posting: # 22464 Views: 6,532 |
Hi Weidson, hi all,

❝ ❝ I intend to use it to verify the robustness of a conclusion of bioinequivalence.

❝ Not sure what you mean by that. For any failed study the bloody post hoc power will be <50%. When you give the PE and CV to the function …

❝ ❝ Are there any members of this forum who have experience with bootstrap methods?

❝ ElMaestro.

I do bootstrapping once in a while. Not often, just sometimes. But I did not quite understand what you were trying to achieve, Weidson. The term "robustness" lost its meaning to me some months ago, following an opinion expressed by a regulator in relation to robustness and simulation.

Anyways, in dissolution trials we can often do a simple f2 comparison and, if it "fails", bootstrapping of the same data is the final attempt, and it may sometimes actually lead to approval. Perhaps your idea is to do something along similar lines with a BE trial? In that case, Helmut's code above goes a long way towards this goal; you can probably easily extend it with BCa-derived intervals. But, let me be frank: in the current regulatory climate here and there, I have no particular reason to think it will lead to regulatory acceptance regardless of the numerical result.

An area where I think bootstrapping is totally OK and very useful is when you want to derive a sample size and you have pilot trial data. If the residual is totally weirdly distributed in the pilot trial, then a sample size calculation in the classical fashion can be wasted time and effort, even though the final study always has to be evaluated in the usual parametric fashion involving the assumption of a normal residual. This is where a bootstrap sample size approach can be very justified and useful. But it, too, has some shortcomings, such as the assumption of the GMR: you can't easily make provisions for assuming something other than what you have seen in the pilot. Nasty. But I digress.

—
Pass or fail!
ElMaestro
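For the dissolution side remark, the ƒ2 statistic and a naïve percentile bootstrap of it can be sketched as follows (a rough illustration only; test and ref are assumed to be matrices of individual profiles with rows = vessels and columns = common time points, and this is not the full regulatory procedure):

  # f2 similarity factor from the mean dissolution profiles
  f2 <- function(test, ref) {
    mdiff <- colMeans(ref) - colMeans(test)
    50 * log10(100 / sqrt(1 + mean(mdiff^2)))
  }
  # naive percentile bootstrap: resample vessels (rows) with replacement
  boot.f2 <- function(test, ref, reps = 5000, seed = 42) {
    set.seed(seed)
    stats <- replicate(reps,
      f2(test[sample(nrow(test), replace = TRUE), , drop = FALSE],
         ref [sample(nrow(ref),  replace = TRUE), , drop = FALSE]))
    quantile(stats, probs = c(0.05, 0.95))   # simple percentile limits, not BCa
  }

As mentioned above, BCa-derived intervals (e.g., via boot::boot.ci) rather than the crude percentile limits shown here would be the more defensible choice.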
Helmut ★★★ Vienna, Austria, 2021-07-09 16:50 (1252 d 14:27 ago) @ ElMaestro Posting: # 22465 Views: 6,446 |
Hi ElMaestro,

❝ The term "robustness" lost its meaning to me some months ago, following an opinion expressed by a regulator in relation to robustness and simulation.

Oh dear! Any details?

❝ […] in dissolution trials we can often do a simple f2 comparison and, if it "fails", bootstrapping of the same data is the final attempt, and it may sometimes actually lead to approval.

Regrettably the other way ’round (see there). An ƒ2 ≥50 does not guarantee acceptance any more; the lower bootstrapped CL has to be ≥50 as well.

❝ An area where I think bootstrapping is totally OK and very useful is when you want to derive a sample size and you have pilot trial data. If the residual is totally weirdly distributed in the pilot trial, then a sample size calculation in the classical fashion can be wasted time and effort, even though the final study always has to be evaluated in the usual parametric fashion involving the assumption of a normal residual.

Agree – in principle.

❝ This is where a bootstrap sample size approach can be very justified and useful. But it, too, has some shortcomings, such as the assumption of the GMR: you can't easily make provisions for assuming something other than what you have seen in the pilot. Nasty.

Yep. Another issue is ‘outliers’ like in my example. Does it make sense to assume we will face them in the pivotal study as well? I hope not. Then what? Drop them from the pilot data and bootstrap that?

—
Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
ElMaestro ★★★ Denmark, 2021-07-09 18:29 (1252 d 12:49 ago) @ Helmut Posting: # 22466 Views: 6,485 |
Hi Hötzi,

❝ Oh dear! Any details?

Read here: "The Applicant should demonstrate that the consumer risk is not inflated above 5% with the proposed design and alpha expenditure rule, taking into account that simulations are not considered sufficiently robust and analytical solutions are preferred."

❝ Yep. Another issue is ‘outliers’ like in my example. Does it make sense to assume we will face them in the pivotal study as well? I hope not. Then what? Drop them from the pilot data and bootstrap that?

If you believe an observation is an outlier, for one reason or another, it probably does not make sense to include that observation in the planning. Or at least this sounds like a healthy argument. On the other hand, if you do take the outliers into consideration for the subsequent steps, then often the sample size just gets larger. At any rate, that aberrant value (whether you call it an outlier or not) is more or less what causes the residual to have a bonkers distribution. There may be several of them in the worst case. And then of course there's the issue of a positive and negative residual of equal magnitude subject-wise in a 2×2×2 BE design. This is more of a triviality-by-design. When it rains, it pours.

—
Pass or fail!
ElMaestro
Helmut ★★★ Vienna, Austria, 2021-07-09 23:04 (1252 d 08:13 ago) @ ElMaestro Posting: # 22467 Views: 6,422 |
Hi ElMaestro,

❝ ❝ Oh dear! Any details?

❝ "The Applicant should demonstrate that the consumer risk is not inflated above 5% with the proposed design and alpha expenditure rule, taking into account that simulations are not considered sufficiently robust and analytical solutions are preferred."

Jesusfuckingchrist!

Full ACK to everything else you wrote.

—
Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
Achievwin ★★ US, 2021-08-17 21:26 (1213 d 09:51 ago) @ unique_one Posting: # 22524 Views: 5,998 |
❝ 90% CI AUC: lower limit 98.35, upper limit 132.50; T/R ratio of AUC: 112.2
❝ 90% CI Cmax: lower limit 95.38, upper limit 127.80; T/R ratio of Cmax: 109.5
❝ Questions:
❝ 1. Primary reason for BE failure?

Looking at your confidence intervals, it looks like your ISCV computation may not be accurate; I am getting 33.6 for AUC. Your study is underpowered! Assuming an upper bound of 34% ISCV and a ratio of 112, you need 136 subjects (completers) for a standard 2x2 study design. To improve the probability of success, go for a 3-way (if your RLD CV is tight) or 4-way crossover replicated design with the RSABE option. You would then have a shot at demonstrating bioequivalence.

Hope this helps
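The back-calculation of the ISCV from the reported confidence interval can be sketched with PowerTOST; note that the answer depends on the number of subjects assumed, so the calls below are an illustration rather than a reconstruction of Achievwin's figure:

  library(PowerTOST)
  # ISCV back-calculated from the 90% CI of AUC; the result depends on n
  CVfromCI(lower = 0.9835, upper = 1.3250, n = 28, design = "2x2", alpha = 0.05)
  CVfromCI(lower = 0.9835, upper = 1.3250, n = 22, design = "2x2", alpha = 0.05)

With all 28 dosed subjects the CV comes out in the region of the 33–34% quoted above; with the 22 completers it is noticeably closer to the originally reported 27.3%, so the apparent discrepancy hinges on which n is assumed.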