Helmut ★★★ Vienna, Austria, 2021-02-19 12:02 (540 d 13:10 ago) Posting: # 22214 Views: 1,957

Dear all (and esp. our expert in probability zizou),

say, we have \(\small{n}\) studies, each powered at 90%. What is the probability (i.e., power) that all of them pass BE?

Let’s keep it simple: T/R-ratios and CVs are identical in studies \(\small{1\ldots n}\). Hence, \(\small{p_{\,1}=\ldots=p_{\,n}}\). If the outcomes of the studies are independent, is \(\small{p_{\,\text{pass all}}=\prod_{i=1}^{i=n}p_{\,i}}\), e.g., for \(\small{p_{\,1}=\ldots=p_{\,6}=0.90\rightarrow 0.90^6\approx0.53}\)? Or does each study stand on its own and we don’t have to care?

—
Dif-tor heh smusma 🖖 Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
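[Not part of the original post — a minimal Python sketch of the arithmetic in the question, assuming independent study outcomes; the function name is mine:]

```python
def p_pass_all(p: float, n: int) -> float:
    """Probability that all n independent studies pass BE,
    each with the same power p (the product rule)."""
    return p ** n

# The example from the post: six studies, each powered at 90%.
print(round(p_pass_all(0.90, 6), 4))  # 0.9^6 = 0.5314
```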
ElMaestro ★★★ Denmark, 2021-02-19 12:57 (540 d 12:16 ago) @ Helmut Posting: # 22215 Views: 1,779

Hi Helmut,

» say, we have \(\small{n}\) studies, each powered at 90%. What is the probability (i.e., power) that all of them pass BE?
» Let’s keep it simple: T/R-ratios and CVs are identical in studies \(\small{1\ldots n}\). Hence, \(\small{p_{\,1}=\ldots=p_{\,n}}\). If the outcomes of studies are independent, is \(\small{p_{\,\text{pass all}}=\prod_{i=1}^{i=n}p_{\,i}}\), e.g., for \(\small{p_{\,1}=\ldots=p_{\,6}=0.90\rightarrow 0.90^6\approx0.53}\)?
» Or does each study stand on its own and we don’t have to care?

Yes to 0.53. The risk is up to you or your client.

I think there is no general awareness, but my real worry is the type I error, as I have indicated elsewhere. "Have to care" really involves the fine print: in the absence of further info it is difficult to tell whether you should care and/or from which perspective care is necessary.

A related issue, the one that worries me more: you test one formulation, and it fails on the 90% CI; you develop a new formulation, and it passes on the 90% CI. What is the type I error? Well, strictly speaking it would be inflated. But no one seems to give a damn.

—
Pass or fail!
ElMaestro
Helmut ★★★ Vienna, Austria, 2021-02-19 13:37 (540 d 11:35 ago) @ ElMaestro Posting: # 22216 Views: 1,650

Hi ElMaestro,

» Yes to 0.53.

Shit, I expected that.

» The risk is up to you or your client.

Such a case is not uncommon. Say, you have single-dose studies (highest strength, fasting/fed) and multiple-dose studies with two strengths (fasting/fed). Then you get the n = 6 of my example. If you want a certain overall power, each study has to be powered to \(p_i=\sqrt[n]{p_\textrm{overall}}\). With n = 6, for 80% → 96.35% and for 90% → 98.26%. Will an IEC accept that?*

» I think there is no general awareness, but my real worry is the type I error, as I have indicated elsewhere.

I know.

» Related issue, the one that worries me more:
» You test one formulation, it fails on the 90% CI, you develop a new formulation, it passes on the 90% CI. What is the type I error?

I’m not concerned about a new formulation. The first one went to the waste bin → zero consumer risk. The study supporting the new formulation stands on its own. Hence, TIE ≤ 0.05.

» Well, strictly speaking that would be inflated. But noone seems to give a damn.

I’m indeed concerned about repeating the study (same formulation) with more subjects. Then you get an inflated TIE for sure. Regulators don’t care. They trust the second study more because it is larger and thus its result is ‘more reliable’, I guess. We see that everywhere: a value in the post-study exam is clinically significantly out of range, a follow-up is initiated, and now all is good. Did the value really improve? There is always inaccuracy involved. Maybe the first one was correct and the second one not. My doctor gave me six months to live, …
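[Not part of the original post — a toy Python sketch of the n-th-root adjustment mentioned above; the function name is mine:]

```python
def per_study_power(p_overall: float, n: int) -> float:
    """Power each of n independent studies needs so that all of them
    pass with probability p_overall (the n-th root of the target)."""
    return p_overall ** (1.0 / n)

# The two targets from the post, with n = 6 studies.
for target in (0.80, 0.90):
    print(f"{target:.0%} overall -> {per_study_power(target, 6):.2%} per study")
# -> 96.35% and 98.26% per study, matching the numbers above
```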
—
Dif-tor heh smusma 🖖 Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
Helmut ★★★ Vienna, Austria, 2022-06-24 14:03 (50 d 12:10 ago) @ ElMaestro Posting: # 23080 Views: 207

Hi ElMaestro and all,

sorry for excavating an old story.

» » say, we have \(\small{n}\) studies, each powered at 90%. What is the probability (i.e., power) that all of them pass BE?
» » Let’s keep it simple: T/R-ratios and CVs are identical in studies \(\small{1\ldots n}\). Hence, \(\small{p_{\,1}=\ldots=p_{\,n}}\). If the outcomes of studies are independent, is \(\small{p_{\,\text{pass all}}=\prod_{i=1}^{i=n}p_{\,i}}\), e.g., for \(\small{p_{\,1}=\ldots=p_{\,6}=0.90\rightarrow 0.90^6\approx0.53}\)?
» » Or does each study stand on its own and we don’t have to care?
» Yes to 0.53.
» The risk is up to you or your client. I think there is no general awareness, …
» "Have to care" really involves the fine print. I think in the absence of further info it is difficult to tell if you should care and/or from which perspective care is necessary.

I’m pretty sure that we were wrong: we want to demonstrate BE in all studies, since otherwise the product would not get an approval. That means we have an ‘AND-composition’. Hence, the Intersection-Union Test (IUT) principle applies^{1,2} and each study indeed stands on its own. Therefore, any kind of ‘power adjustment’ I mused about before is not necessary. In my example above one would have to power each of the studies to \(\small{\sqrt[6]{0.90}=98.26\%}\) to achieve ≥ 90% overall power. I cannot imagine that this was ever done.

I have some empiric evidence: the largest number of confirmatory studies in a dossier I have seen so far was 12, powered to 80–90% (there were more studies in the package, but only exploratory ones, like comparisons of types of food, sprinkle studies, …). If overall power in multiple studies had really been that low, I should have seen many more failures – which I didn’t.

» … but my real worry is the type I error, as I have indicated elsewhere.

We discussed that above. Agencies accept repeating an inconclusive^{3,4} study with a larger sample size.
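[Not part of the original post — a hypothetical Monte Carlo check of the product rule under the independence assumption; all names and the seed are mine:]

```python
import random

def sim_all_pass(p: float = 0.90, n: int = 6,
                 trials: int = 200_000, seed: int = 42) -> float:
    """Simulate `trials` development programs of n independent studies,
    each passing with probability p; return the fraction of programs
    in which all n studies pass."""
    rng = random.Random(seed)
    hits = sum(all(rng.random() < p for _ in range(n)) for _ in range(trials))
    return hits / trials

# Should come out close to 0.9^6 ≈ 0.53, the number discussed in the thread.
print(round(sim_all_pass(), 3))
```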
I agree with your alter ego^{5} that such an approach may indeed inflate the Type I Error. I guess regulators trust more in the repeated study, believing [sic] that its outcome is more ‘reliable’ due to the larger sample size. But that is – apart from the inflated Type I Error – a fallacy.
—
Dif-tor heh smusma 🖖 Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes