Bootstrapping BE: An attempt in 🇷 [Study Assessment]
Hi Weidson, hi all,
❝ ❝ I am intending to use it to verify the robustness of the conclusion of bioinequivalence.
❝
❝ Not sure what you mean by that. For any failed study the bloody post hoc power will be <50%. When you give the PE and CV in the function sampleN.TOST() you get immediately the number of subjects you would have needed to demonstrate BE (see the example above and lines 91–92 of the script).
❝
❝ ❝ Is there any member of this forum who has experience with bootstrap methods?
❝
❝ ElMaestro.
I do bootstrapping once in a while. Not often, just sometimes.
But I did not quite understand what you were trying to achieve, Weidson. The term "robustness" lost its meaning to me some months ago, following an opinion expressed by a regulator in relation to robustness and simulation.
Anyway, in dissolution trials we can often do a simple f2 comparison; if it "fails", bootstrapping the same data is the final attempt, and it may sometimes actually lead to approval (a rough sketch of that idea follows below). Perhaps your idea is to do something along similar lines with a BE trial?
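For reference, a minimal sketch of that bootstrap-f2 idea in R, under my own assumptions: the matrices ref and test (one row per dosage unit, one column per time point, % dissolved) are simulated placeholders, and taking the 5th percentile of the bootstrapped f2 values as the lower bound is the usual convention, not anything prescribed elsewhere in this thread.

# f2 from the mean dissolution profiles of reference and test
f2 <- function(ref, test) {
  d <- colMeans(ref) - colMeans(test)
  50 * log10(100 / sqrt(1 + mean(d^2)))
}

# Bootstrap: resample dosage units with replacement and recompute f2
boot_f2 <- function(ref, test, B = 5000) {
  stat <- replicate(B, {
    i <- sample(nrow(ref),  replace = TRUE)   # resampled reference units
    j <- sample(nrow(test), replace = TRUE)   # resampled test units
    f2(ref[i, , drop = FALSE], test[j, , drop = FALSE])
  })
  c(f2.obs = f2(ref, test),
    f2.lo  = unname(quantile(stat, 0.05)))    # lower 90% percentile bound
}

# Illustrative data only: 12 units, 4 time points
set.seed(1)
ref  <- matrix(rnorm(48, rep(c(35, 55, 75, 90), each = 12), 3), nrow = 12)
test <- matrix(rnorm(48, rep(c(30, 50, 72, 88), each = 12), 3), nrow = 12)
boot_f2(ref, test)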
In that case, Helmut's code above goes a long way towards this goal; you could probably extend it easily with BCa-derived intervals (a sketch follows below). But, let me be frank: in the current regulatory climate here and there, I have no particular reason to think it will lead to regulatory acceptance, regardless of the numerical result.
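In case it helps: once you have a statistic function that returns the point estimate from a resampled data set, BCa intervals come almost for free with the boot package. The statistic and data below are placeholders of my own (a paired layout of log-transformed values per subject, ignoring period and sequence effects), not Helmut's code.

library(boot)

# Placeholder data: one row per subject, log-transformed metric under T and R
set.seed(42)
dta <- data.frame(logT = rnorm(24, mean = log(0.92), sd = 0.25),
                  logR = rnorm(24, mean = 0,         sd = 0.25))

# Statistic for boot(): GMR of the resampled subject set
gmr <- function(data, idx) {
  d <- data[idx, ]
  exp(mean(d$logT) - mean(d$logR))
}

b <- boot(dta, statistic = gmr, R = 5000)
boot.ci(b, conf = 0.90, type = "bca")   # BCa-derived 90% interval for the GMR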
An area where I think bootstrapping is totally fine and very useful is deriving a sample size when you have pilot-trial data. If the residuals in the pilot trial are weirdly distributed, a sample-size calculation in the classical fashion can be wasted time and effort, even though the final study always has to be evaluated in the usual parametric fashion under the assumption of normally distributed residuals. This is where a bootstrap sample-size approach can be well justified and useful (a minimal sketch follows below). But it, too, has some shortcomings, such as the assumption about the GMR: you can't easily make provisions for assuming a GMR other than the one you have seen in the pilot. Nasty.
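Just to make that concrete, a minimal sketch under heavy simplification: the pilot is reduced to per-subject differences of the log-transformed metric (which side-steps period effects), the pilot data are simulated, and the target power of 80% and the step size of 2 subjects are my assumptions. The GMR is, as said, whatever the pilot happened to show.

# Per-subject differences log(T) - log(R) from the pilot; illustrative only
set.seed(123)
pd <- rnorm(16, mean = log(0.95), sd = 0.18)

# Does one resample of size n pass the TOST (90% CI within 80.00-125.00%)?
passes_tost <- function(d, alpha = 0.05, lo = log(0.80), hi = log(1.25)) {
  n  <- length(d)
  ci <- mean(d) + c(-1, 1) * qt(1 - alpha, n - 1) * sd(d) / sqrt(n)
  ci[1] >= lo && ci[2] <= hi
}

# Estimated power at sample size n: fraction of bootstrap resamples passing
boot_power <- function(d, n, B = 2000) {
  mean(replicate(B, passes_tost(sample(d, n, replace = TRUE))))
}

# Step n up (in pairs) until the estimated power reaches the target
boot_n <- function(d, target = 0.80, n = 12, n.max = 120) {
  while (n <= n.max && boot_power(d, n) < target) n <- n + 2
  n
}

boot_n(pd)   # smallest (even) n with estimated power >= 80%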
But I digress.
—
Pass or fail!
ElMaestro