Loss in power [Regulatives / Guidelines]
Continuing the evaluation of the data sets of this post.
Background: Some studies are quite dated (the oldest was performed in October 1992!). In those days a pre-specified acceptance range of 75.00–133.33% (or even 70.00–142.86%) was acceptable for Cmax. However, I evaluated all data sets against the common 80.00–125.00%. This explains why more than the expected 20% failed (no, I didn’t screw up the designs).

All data sets were evaluated by model 3 for pooled data (as in the reports – I never cared about groups) and by model 1 to obtain the p-value of the Group-by-Treatment interaction.
- If p (G×T) ≥ 0.1, the pooled data were evaluated by model 2.
- If p (G×T) < 0.1, the largest group(s) were evaluated by model 3.
If there was more than one largest group of equal size, all of them had to pass, since I expect assessors will ask for it.
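The pre-test decision rule above can be sketched as follows. This is only an illustration of the branching logic, not anyone’s validated code; the function name `choose_model` and its return strings are mine, while the α = 0.1 cut-off is the one used in the procedure described.

```python
def choose_model(p_GxT: float, alpha: float = 0.1) -> str:
    """Pre-test on the Group-by-Treatment interaction (sketch).

    p_GxT : p-value of the G x T interaction from model 1 (all effects fixed).
    alpha : cut-off of the pre-test (0.1 as described above).
    """
    if p_GxT >= alpha:
        # interaction not significant: evaluate pooled data by model 2
        return "model 2 (pooled)"
    # interaction significant: evaluate the largest group(s) by model 3
    return "model 3 (largest group)"


# examples
print(choose_model(0.25))  # model 2 (pooled)
print(choose_model(0.05))  # model 3 (largest group)
```

Note that p = 0.1 exactly falls into the “≥ 0.1” branch, i.e., pooled evaluation by model 2.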
86 studies, 60 analytes, data sets: 85 (AUC), 86 (Cmax).
Evaluated by model 1 (all effects fixed); p (G×T) <0.1:
AUC : 8.24% ( 7/85)
Cmax: 12.79% (11/86)
Summary of passing results:

```
AUC : model 2 (pooled)                           : 84.62% (66/78)
      model 2 (pooled, without pre-test)         : 84.71% (72/85)
      loss (compared to pooled model 3)          :  1.18% ( 1/85)
      model 3 (largest group)                    : 85.71% ( 6/ 7)
      model 2 (pooled) or model 3 (largest group): 84.71% (72/85)
      loss (compared to pooled model 3)          :  1.18% ( 1/85)
      model 3 (pooled)                           : 85.88% (73/85)
      CV (range)                                 : 21.36% (4.59–61.73%)
Cmax: model 2 (pooled)                           : 62.67% (47/75)
      model 2 (pooled, without pre-test)         : 63.95% (55/86)
      loss (compared to pooled model 3)          :  0.00% ( 0/86)
      model 3 (largest group)                    : 27.27% ( 3/11)
      model 2 (pooled) or model 3 (largest group): 58.14% (50/86)
      loss (compared to pooled model 3)          :  5.81% ( 5/86)
      model 3 (pooled)                           : 63.95% (55/86)
      CV (range)                                 : 27.88% (6.82–76.99%)
```
On another note: If we apply model 2 without a pre-test (maybe the best way to go for regulators insisting on a group term), the loss in power compared to the pooled model 3 is negligible. Reasonable, since we lose only a few residual degrees of freedom:
pooled model 3: DF = n1 + n2 – 2
pooled model 2: DF = n1 + n2 – (Ngroups – 1) – 2
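A quick numeric check of these two formulas, assuming a hypothetical study with 24 subjects in two sequences (n1 = n2 = 12) dosed in two groups; the function names and example sizes are mine, the formulas are the ones above.

```python
def df_model3_pooled(n1: int, n2: int) -> int:
    """Residual DF of the pooled model 3: n1 + n2 - 2."""
    return n1 + n2 - 2

def df_model2_pooled(n1: int, n2: int, n_groups: int) -> int:
    """Residual DF of the pooled model 2: the extra group term
    costs (n_groups - 1) degrees of freedom."""
    return n1 + n2 - (n_groups - 1) - 2

# hypothetical example: 12 subjects per sequence, 2 groups
print(df_model3_pooled(12, 12))     # 22
print(df_model2_pooled(12, 12, 2))  # 21
```

With two groups the difference is a single degree of freedom, which is why the loss in power is negligible.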
Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked. 🚮