## Great post! [Outliers]

Hi Zizou,

At first look, your arguments are indeed convincing.

A basic assumption in BE is that what we observe in the study is an unbiased sample of the population, i.e., that the true (but unknown) distributions of the sample and the population are identical. Only then can we extrapolate the study’s result to the patient population (BE as a substitute for therapeutic equivalence). As stated in HC’s guidance, parametric methods are not robust against extreme values. My example:

| n  | method        | PE      | 90% CI        | CVintra | CVinter | note                          |
|---:|---------------|--------:|---------------|--------:|--------:|-------------------------------|
| 12 | LME           | 110.15% | 77.60–156.36% | 50.1%   | NA      | negative variance component   |
| 12 | ANOVA         | 110.15% | 77.60–156.36% | 50.1%   | NA      |                               |
| 11 | LME           |  92.45% | 78.34–109.11% | 21.3%   | 22.2%   |                               |
| 12 | nonparametric |  98.95% | 76.24–130.06% | NA      | NA      | 91.78% CI                     |

Observations:
• The mixed-effects model (according to the guidance) converged, but with a negative variance component for the random effect. Tweaks weren’t successful (singularity tolerance ↓, convergence criterion ↓, maximum number of iterations ↑).
• Excluding the outlier I got a pretty narrow CI (similar to the ones of the other PK metrics). However, the CI of the nonparametric method on the full dataset is much wider. Is the distribution of n = 11 really close to the true one?
• In a strict sense the nonparametric method gives a correct measure of shift only if the distributions are similar. I don’t know whether that is the case here. I haven’t tried the Brunner–Munzel test yet (which doesn’t require this assumption).
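For the record, the Brunner–Munzel test is available in SciPy as `scipy.stats.brunnermunzel`. A minimal sketch on made-up placeholder data (not the study data) with one extreme value in the test arm:

```python
# Brunner-Munzel test: compares two samples without assuming equal
# variances or similar distribution shapes (unlike Wilcoxon-type tests).
# The values below are made-up placeholders, NOT the study data.
from scipy.stats import brunnermunzel

test = [0.91, 0.95, 1.02, 1.10, 1.15, 0.88,
        0.97, 1.05, 1.21, 0.93, 1.08, 2.60]   # one extreme value
ref  = [0.94, 0.99, 1.01, 1.07, 1.12, 0.90,
        0.96, 1.03, 1.09, 0.98, 1.04, 1.06]

stat, p = brunnermunzel(test, ref)
print(f"W = {stat:.3f}, p = {p:.4f}")
```

Whether its small-sample behaviour at n = 12 is acceptable is another question; the default uses a t-approximation for the statistic.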
Time allowing (ha-ha) I will try some simulations where I replace one value with the true x ± 6σ. Asymptotics are generally fine if n ≥ 30. Hence your suggestion of CV 30% makes sense (GMR 0.95, power 80% → n 40). But I will also try one with a lower CV of 15% (n 12).
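A rough sketch of the kind of simulation I have in mind, reduced to a one-sample setup on the log scale rather than the full crossover/LME model (the t-quantile and the contamination scheme are my simplifications, not a validated design); it shows how much a single value at the true mean + 6σ inflates the parametric 90% CI:

```python
# Sketch only: lognormal PK data on the log scale, one value replaced by
# the true mean + 6 sigma; compare the parametric 90% CI width of the
# contaminated sample with the clean one. One-sample setup for
# simplicity, NOT the full crossover / mixed-effects analysis.
import numpy as np

rng = np.random.default_rng(1)
n, cv = 12, 0.15                       # the low-CV case (n 12) from above
sigma = np.sqrt(np.log(cv**2 + 1))     # log-scale SD corresponding to the CV
t90 = 1.796                            # t quantile (0.95, df = 11)

widths = []
for _ in range(1000):
    x = rng.normal(0.0, sigma, n)      # clean log-scale data, true mean 0
    y = x.copy()
    y[0] = 6 * sigma                   # replace one value with mean + 6 sigma
    w_clean, w_dirty = (2 * t90 * d.std(ddof=1) / np.sqrt(n) for d in (x, y))
    widths.append(w_dirty / w_clean)   # CI-width inflation factor

print(f"median 90% CI-width inflation: {np.median(widths):.2f}x")
```

With these settings the contaminated sample's CI is typically about twice as wide, which matches the pattern in the table above: the extreme value blows up the within-sample variability far more than it shifts the point estimate.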

Dif-tor heh smusma 🖖
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes