## AND or OR, that’s the question [Power / Sample Size]

Thanks again for your help on this, Helmut.

❝ Oh dear, that one! Opening Pandora’s box. However, only relevant if one would assess the study with the ‘All at Once’ approach (i.e., an ANOVA of pooled data). The jury left the courtroom 2½ years ago and hasn’t returned ever since…

And my apologies for opening Pandora's box! I will pretend that I never wrote this.

❝ In the ‘Two at a Time’ approach T1 and T2 are treated independently. Hence, you have two separate distributions. You see that also in the evaluation, where you obtain two residual variance estimates.

I am still a bit confused here. Since my alternative hypothesis is T1 = R AND T2 = R, I need to carry out two tests. Both null hypotheses have to be rejected at a given significance level (without adjustment, say 0.05), and jointly the two tests have to achieve a given power (say 0.8). This is where I am confused:

1) The two tests must jointly have a power ≥ 0.8. If we assume independence of the two tests (which is clearly not the case, but for the sake of illustration), we want to achieve the following:
power(test1 AND test2) = power(test1) × power(test2) = 0.8

If we further assume equal power for test 1 and test 2, then we need sqrt(0.8) ≈ 0.894 for each test. When computing the sample size in PowerTOST, I am wondering whether the power argument refers to the joint power or to the power of an individual test. If it is specified at the individual level, the joint power would be only 0.8 × 0.8 = 0.64.
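Just to make the arithmetic above explicit, here is a minimal sketch (under the same illustrative independence assumption; it does not call PowerTOST, only reproduces the two numbers in question):

```python
import math

joint_target = 0.8  # desired power for rejecting BOTH null hypotheses

# Under independence with equal per-test power, each test must hit
# sqrt(0.8), i.e. roughly 0.894 -- noticeably more than 0.8.
per_test = math.sqrt(joint_target)
print(f"per-test power needed for joint 0.8: {per_test:.4f}")  # 0.8944

# Conversely, if each test is sized individually for 0.8, the joint
# power under independence would shrink to 0.8 * 0.8 = 0.64.
joint_if_individual = 0.8 * 0.8
print(f"joint power if each test sized for 0.8: {joint_if_individual:.2f}")  # 0.64
```

So under independence the per-test target would have to be inflated to ≈ 0.894 to preserve a joint power of 0.8.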

2) I understand that T1 and T2 are treated independently. But are they assumed to be independent of R as well?
Even if T1, T2, and R are pairwise independent, R appears in both test statistics, so the two tests will be correlated. Is this correlation taken into account when computing the sample size?
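To illustrate the point about the shared reference arm, here is a toy Monte Carlo sketch. It is not a TOST on log-transformed data: the means, standard error, and the "success if |estimated difference| < margin" decision rule are all hypothetical, chosen only to show that two decisions sharing the same draw of R are positively correlated, so the joint success probability exceeds the product of the marginal ones:

```python
import random

random.seed(1)

# Hypothetical setup: true BE holds (all true means equal), each arm's
# estimated mean has standard error 0.1, and each comparison "succeeds"
# when the estimated difference to R is within +/- 0.15.
n_sim = 100_000
se, margin = 0.1, 0.15

both = s1 = s2 = 0
for _ in range(n_sim):
    t1, t2, r = (random.gauss(0.0, se) for _ in range(3))
    ok1 = abs(t1 - r) < margin  # decision for T1 vs R
    ok2 = abs(t2 - r) < margin  # decision for T2 vs R -- same r!
    s1 += ok1
    s2 += ok2
    both += ok1 and ok2

p1, p2, p_both = s1 / n_sim, s2 / n_sim, both / n_sim
print(f"P(test1 ok)   = {p1:.3f}")
print(f"P(test2 ok)   = {p2:.3f}")
print(f"P(both ok)    = {p_both:.3f}")
print(f"product p1*p2 = {p1 * p2:.3f}")
# P(both ok) comes out larger than p1 * p2: conditional on r the two
# decisions are independent, but averaging over r induces positive
# correlation (Jensen's inequality on the conditional success probability).
```

In this toy setting the independence multiplication from point 1) is therefore conservative: the shared R makes the joint power higher than power(test1) × power(test2).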

In my naive opinion, under the alternative hypothesis (which stipulates BE), the PK parameters should be correlated between treatments, and this correlation should be taken into account.