## Bad science (statistics is taboo) [Study Assessment]

Hi Loky do,

» » ... it is considered acceptable that low-lier profiles can be excluded from statistical analysis […] if they occur with the same or lower frequency in the test product compared to the reference product.

»

» Is this case-specific for this product, and its formula?

Yes. It’s based on numerous replicate design studies reviewed by the PKWP. AFAIK, in all of them more outliers were seen after the reference. It’s a terrible product, also showing funky batch-to-batch variability.

» or it could be followed for other products with the same cases?

If it’s a delayed-release product, see my previous post. However, note the subtly different wording: “number” vs. “frequency”.

In the draft MR-GL “comparable frequency” was used as well. From a statistical point of view that calls for a two-sided test. When I showed an example at the GL meeting in Bonn (2013) I stirred up a hornets’ nest. Imagine: 64 subjects, four outliers after T and three after R. Is the frequency comparable? Yes, the *p*-value is 0.7139!^{1} The members of the PKWP did not like that at all. Oops, one (!) more. Changed in the final GL: not acceptable. You can have it even more extreme: in a 4-period full replicate study with one outlier (0.78%) after T and none after R, the *p*-value is 0.5^{2} – not acceptable, because the bloody *number* is higher!

Lesson learned: Bad science (statistics is taboo). If you have one more outlier after T than after R – independent of the sample size – you’re dead.

1. Two-sided test (is T = R?)

```r
subj <- 64
per  <- 2
seq  <- 2
OL.R <- 3
OL.T <- 4
n.T  <- n.R <- subj * per / seq
outliers <- matrix(c(OL.T, n.T - OL.T, OL.R, n.R), nrow = 2,
                   dimnames = list(Guess = c("T", "R"),
                                   Truth = c("T", "R")))
fisher.test(outliers, alternative = "two.sided")
```

```
        Fisher's Exact Test for Count Data

data:  outliers
p-value = 0.7139
alternative hypothesis: true odds ratio is not equal to 1
95 percent confidence interval:
  0.2296229 10.0829801
sample estimates:
odds ratio
  1.418408
```

2. One-sided test (is T > R?)

```r
subj <- 64
per  <- 4
seq  <- 2
OL.R <- 0
OL.T <- OL.R + 1
n.T  <- n.R <- subj * per / seq
outliers <- matrix(c(OL.T, n.T - OL.T, OL.R, n.R), nrow = 2,
                   dimnames = list(Guess = c("T", "R"),
                                   Truth = c("T", "R")))
fisher.test(outliers, alternative = "greater")
```

```
        Fisher's Exact Test for Count Data

data:  outliers
p-value = 0.5
alternative hypothesis: true odds ratio is greater than 1
95 percent confidence interval:
 0.05262625        Inf
sample estimates:
odds ratio
       Inf
```
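The one-sided case generalizes: with no outlier after R and exactly one after T, the one-sided Fisher *p*-value is 0.5 no matter how large the study is, because the single outlier falls in the T column with conditional probability n / (2n) = 0.5. A minimal sketch (the sample sizes below are arbitrary, not taken from any reviewed study):

```r
# One-sided Fisher's exact test for one outlier after T vs. none after R,
# repeated for increasing per-treatment sample sizes n. Conditioning on
# the margins, the lone outlier lands in T with probability n/(2n) = 0.5,
# so the p-value is 0.5 regardless of n.
for (n in c(12, 24, 64, 128, 1024)) {
  outliers <- matrix(c(1, n - 1, 0, n), nrow = 2,
                     dimnames = list(Outlier   = c("yes", "no"),
                                     Treatment = c("T", "R")))
  p <- fisher.test(outliers, alternative = "greater")$p.value
  cat(sprintf("n = %4i: p = %.4f\n", n, p))
}
```

With a two-sided alternative the *p*-value would even be 1, since the observed table is (jointly with its mirror image) the most probable one under the null.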

—

*Dif-tor heh smusma* 🖖 Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮

Science Quotes

### Complete thread:

- Subject non compliance Loky do 2020-07-15 12:57 [Study Assessment]
  - Subject non compliance (or product failure?) Helmut 2020-07-16 14:30
    - Subject non compliance (or product failure?) Loky do 2020-07-16 22:29
      - Bad science (statistics is taboo) Helmut 2020-07-17 01:01
        - unexpected behavior for test product Loky do 2020-07-22 11:29
          - Subject-by-Formulation interaction? Helmut 2020-07-22 12:03
            - Subject-by-Formulation interaction? Loky do 2020-07-26 11:30
