Helmut ★★★ Vienna, Austria, 2021-02-19 13:02 Posting: # 22214
Dear all (and esp. our expert in probability zizou),

say, we have \(\small{n}\) studies, each powered at 90%. What is the probability (i.e., power) that all of them pass BE?

Let’s keep it simple: T/R-ratios and CVs are identical in studies \(\small{1\ldots n}\). Hence, \(\small{p_{\,1}=\ldots=p_{\,n}}\). If the outcomes of studies are independent, is \(\small{p_{\,\text{pass all}}=\prod_{i=1}^{i=n}p_{\,i}}\), e.g., for \(\small{p_{\,1}=\ldots=p_{\,6}=0.90\rightarrow 0.90^6\approx0.53}\)? Or does each study stand on its own and we don’t have to care?

—
Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes
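A minimal numeric check of the product rule asked about above, assuming truly independent studies with identical power \(\small{p=0.90}\) (plain Python; the variable names are only illustrative):

```python
# Probability that ALL of n independent studies pass when each passes
# with probability p (identical T/R-ratio and CV assumed, as above).
p = 0.90
for n in range(1, 7):
    print(f"n = {n}: P(all pass) = {p**n:.4f}")
# n = 6 gives 0.5314, i.e. the ~0.53 in the question.
```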
ElMaestro ★★★ Denmark, 2021-02-19 13:57 @ Helmut Posting: # 22215
Hi Helmut,

❝ say, we have \(\small{n}\) studies, each powered at 90%. What is the probability (i.e., power) that all of them pass BE?
❝
❝ Let’s keep it simple: T/R-ratios and CVs are identical in studies \(\small{1\ldots n}\). Hence, \(\small{p_{\,1}=\ldots=p_{\,n}}\). If the outcomes of studies are independent, is \(\small{p_{\,\text{pass all}}=\prod_{i=1}^{i=n}p_{\,i}}\), e.g., for \(\small{p_{\,1}=\ldots=p_{\,6}=0.90\rightarrow 0.90^6\approx0.53}\)?
❝ Or does each study stand on its own and we don’t have to care?

Yes to 0.53.
The risk is up to you or your client. I think there is no general awareness, but my real worry is the type I error, as I have indicated elsewhere.
"Have to care" really involves the fine print. I think in the absence of further info it is difficult to tell if you should care and/or from which perspective care is necessary.

Related issue, the one that worries me more:
You test one formulation, it fails on the 90% CI, you develop a new formulation, it passes on the 90% CI. What is the type I error? Well, strictly speaking that would be inflated. But no one seems to give a damn.

—
Pass or fail!
ElMaestro
Helmut ★★★ Vienna, Austria, 2021-02-19 14:37 @ ElMaestro Posting: # 22216
Hi ElMaestro,

❝ Yes to 0.53.

Shit, expected that.

❝ The risk is up to you or your client.

Such a case is not uncommon. Say, you have single dose studies (highest strength, fasting/fed) and multiple dose studies with two strengths (fasting/fed). Then you get the n = 6 of my example. If you want a certain overall power, then each study has to be powered to \(p_i=\sqrt[n]{p_\textrm{overall}}\). With n = 6, for 80% → 96.35% and for 90% → 98.26%. Will an IEC accept that?*

❝ I think there is no general awareness, but my real worry is the type I error, as I have indicated elsewhere.

I know.

❝ Related issue, the one that worries me more:
❝ You test one formulation, it fails on the 90% CI, you develop a new formulation, it passes on the 90% CI. What is the type I error?

I’m not concerned about a new formulation. The first one went to the waste bin → zero consumer risk. The study supporting the new formulation stands on its own. Hence, TIE ≤ 0.05.

❝ Well, strictly speaking that would be inflated. But no one seems to give a damn.

I’m indeed concerned about repeating the study (same formulation) with more subjects. Then you get an inflated TIE for sure. Regulators don’t care. They trust the second study more because it is larger and thus the result ‘more reliable’, I guess. We have that everywhere: a value in the post-study exam is clinically significantly out of range, a follow-up is initiated, and now all is good. Did the value really improve? There is always inaccuracy involved. Maybe the first one was correct and the second one not.

* My doctor gave me six months to live, …
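To put rough numbers on the question marked with the asterisk above (will an IEC accept studies powered that highly?), here is a sketch that is not taken from the original post: it combines the \(p_i=\sqrt[n]{p_\textrm{overall}}\) rule with the noncentral-t approximation of TOST power for a balanced 2×2×2 crossover, under purely hypothetical assumptions of CV = 30% and GMR = 0.95 (exact calculations, e.g. with PowerTOST, use Owen’s Q instead):

```python
import math
from scipy import stats

def power_tost_2x2(cv, gmr, n, alpha=0.05, theta1=0.80, theta2=1.25):
    """Approximate power of the TOST in a balanced 2x2x2 crossover
    (noncentral-t approximation on the log scale)."""
    sd_w = math.sqrt(math.log(cv**2 + 1.0))   # within-subject SD, log scale
    se   = sd_w * math.sqrt(2.0 / n)          # SE of the T-R difference
    df   = n - 2
    tval = stats.t.ppf(1 - alpha, df)
    ncp1 = (math.log(gmr) - math.log(theta1)) / se
    ncp2 = (math.log(gmr) - math.log(theta2)) / se
    return max(0.0, stats.nct.cdf(-tval, df, ncp2) - stats.nct.cdf(tval, df, ncp1))

def sample_size(cv, gmr, target):
    """Smallest balanced total n reaching the target power."""
    n = 4
    while power_tost_2x2(cv, gmr, n) < target:
        n += 2
    return n

cv, gmr, n_studies = 0.30, 0.95, 6            # hypothetical example values
for overall in (0.80, 0.90):
    per_study = overall ** (1 / n_studies)    # 96.35 % and 98.26 %, as above
    print(f"overall {overall:.0%}: per-study power {per_study:.2%}, "
          f"n = {sample_size(cv, gmr, per_study)} per study "
          f"(vs. n = {sample_size(cv, gmr, overall)} if each study is "
          f"powered only to {overall:.0%})")
```

The absolute sample sizes depend entirely on the assumed CV and GMR; the only point of the sketch is the direction of the effect: powering every study to ≈96–98% instead of 80–90% drives each sample size up markedly.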
—
Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes
Helmut ★★★ Vienna, Austria, 2022-06-24 16:03 @ ElMaestro Posting: # 23080
Hi ElMaestro and all,

sorry for excavating an old story.

❝ ❝ say, we have \(\small{n}\) studies, each powered at 90%. What is the probability (i.e., power) that all of them pass BE?
❝ ❝
❝ ❝ Let’s keep it simple: T/R-ratios and CVs are identical in studies \(\small{1\ldots n}\). Hence, \(\small{p_{\,1}=\ldots=p_{\,n}}\). If the outcomes of studies are independent, is \(\small{p_{\,\text{pass all}}=\prod_{i=1}^{i=n}p_{\,i}}\), e.g., for \(\small{p_{\,1}=\ldots=p_{\,6}=0.90\rightarrow 0.90^6\approx0.53}\)?
❝ ❝ Or does each study stand on its own and we don’t have to care?
❝
❝ Yes to 0.53.
❝ The risk is up to you or your client. I think there is no general awareness, …
❝ "Have to care" really involves the fine print. I think in the absence of further info it is difficult to tell if you should care and/or from which perspective care is necessary.

I’m pretty sure that we were wrong: we want to demonstrate BE in all studies. Otherwise, the product would not get an approval (based on multiple studies in the dossier). That means we have an ‘AND-composition’. Hence, the Intersection-Union Test (IUT) principle applies [1,2] and each study indeed stands on its own. Therefore, any kind of ‘power adjustment’ I mused about before is not necessary. In my example above one would have to power each of the studies to \(\small{\sqrt[6]{0.90}=98.26\%}\) to achieve ≥ 90% overall power. I cannot imagine that this was ever done.

Detlew and I have some empiric evidence. The largest number of confirmatory studies in a dossier I have seen so far was 12, powered to 80–90% (there were more in the package, but only exploratory ones like comparing types of food, sprinkle studies, …). If overall power in multiple studies had really been that low (say, \(\small{0.85^{12}\approx14\%}\)), I should have seen many more failures – which I didn’t.

❝ … but my real worry is the type I error, as I have indicated elsewhere.

We discussed that above. Agencies accept repeating an inconclusive [3,4] study with a larger sample size. I agree with your alter ego [5] that such an approach may indeed inflate the Type I Error. I guess regulators trust the repeated study more, believing [sic] that its outcome is more ‘reliable’ due to the larger sample size. But that’s – apart from the inflated Type I Error – a fallacy.
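A toy calculation putting numbers on both sides of this argument, using only the figures quoted in this post (12 studies, each powered to about 85%, each tested at α = 0.05) and assuming independent outcomes; it is an illustration of the IUT reasoning, not an analysis of any real dossier:

```python
# 'AND-composition' of k independent studies, all of which must pass.
alpha, power, k = 0.05, 0.85, 12

# Producer's risk: the naive product rule predicts a very low chance
# that all 12 studies pass even though each is powered to 85 %.
print(f"P(all {k} studies pass | product is BE)  ~ {power**k:.3f}")   # ~0.142

# Consumer's risk (the IUT point): approval requires EVERY study to
# pass, so the overall type I error can never exceed the per-study
# alpha; with all studies exactly at the acceptance limit and
# independent outcomes it would even drop to alpha**k.
print(f"upper bound on overall type I error      = {alpha:.3f}")
print(f"all-at-the-limit, independent studies    = {alpha**k:.1e}")
```

In other words, the multiplicity only eats into power (the producer’s risk), never into the consumer’s risk, which is in line with the IUT references cited above.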
—
Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes