## I'm seeking to understand the math behind our current regulation [General Statistics]

» Hi Victor, I tried to reconstruct your original post as well as I could. Since it was broken before the first “\(\mathcal{A}\)”, I guess you used a UTF-16 character whereas the forum is coded in UTF-8.


Hi Helmut, Thanks for helping me out :)

Edit: after a quick experiment (click here to see screenshot), it seems that the “\(\mathcal{A}\)” I used was a UTF-8 character after all? ⊙.☉

» Please don’t link to large images breaking the layout of the posting area and forcing us to scroll our viewport. THX.

Noted, and thanks for downscaling my original image :)

» I think that your approach has some flaws.

» - You shouldn’t transform the profiles but the PK metrics AUC and C_{max}.

I see; I thought it would make sense for T_{max} to also be transformed, because of stuff I googled, coupled with the fact that the population distribution being analyzed looks a lot like a log-normal distribution; so normalizing T_{max} just made sense to me, since almost all distributions studied at the undergraduate level (e.g. the F-distribution used by ANOVA) are ultimately transformations of one or more standard normals. With that said, is the stuff I googled wrong?
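A quick numerical sanity check of that log-normal intuition (toy simulated data, not a real PK set; `sample_skewness` is my own helper): the raw values are strongly right-skewed, while their logs are nearly symmetric.

```python
# Toy check: if values are log-normal, their skewness is large, but the
# skewness of their logs is near zero, i.e. the log-transform recovers
# (approximate) normality. Purely illustrative, not regulatory data.
import numpy as np

def sample_skewness(a):
    """Plain moment-based skewness estimate."""
    a = np.asarray(a, dtype=float)
    d = a - a.mean()
    return (d ** 3).mean() / a.std() ** 3

rng = np.random.default_rng(42)
x = rng.lognormal(mean=1.0, sigma=0.4, size=5000)  # simulated log-normal data

print(sample_skewness(x))          # clearly positive (right-skewed)
print(sample_skewness(np.log(x)))  # close to 0 (symmetric, normal-looking)
```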

» - The null hypothesis is bioinequivalence, *i.e.*, $$H_0:\mu_T/\mu_R\not\in \left [ \theta_1,\theta_2 \right ]\:vs\:H_1:\theta_1<\mu_T/\mu_R<\theta_2$$ where \([\theta_1,\theta_2]\) are the limits of the acceptance range. Testing for a statistically significant difference is futile (*i.e.*, asking whether treatments are equal). We are interested in a clinically relevant difference \(\Delta\). With the common 20% we get back-transformed \(\theta_1=1-\Delta,\:\theta_2=1/(1-\Delta)\) or 80–125%.

Thanks for enlightening me; I can now restate the current standard's hypothesis in a "more familiar (undergraduate-level)" form:

$$H_0: \ln(\mu_T) - \ln(\mu_R) \notin \left [ \ln(\theta_1), \ln(\theta_2) \right ]\:vs\:H_1: \ln(\theta_1) < \ln(\mu_T) - \ln(\mu_R) < \ln(\theta_2)$$
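Restated this way, the decision rule is the familiar TOST (two one-sided tests) / 90% confidence-interval inclusion procedure. A minimal sketch (function names are my own, and I use a large-sample normal approximation where a real analysis would use a Student's t quantile):

```python
# TOST sketch on the log scale: conclude equivalence iff the (1 - 2*alpha)
# CI for ln(mu_T) - ln(mu_R) lies entirely inside [ln(theta1), ln(theta2)].
import math
from statistics import NormalDist

def tost_decision(diff_log, se, theta1=0.8, theta2=1.25, alpha=0.05):
    z = NormalDist().inv_cdf(1 - alpha)   # normal approx.; real analyses use t
    lo, hi = diff_log - z * se, diff_log + z * se
    return math.log(theta1) < lo and hi < math.log(theta2)

# Hypothetical numbers: geometric mean ratio T/R = 0.95
print(tost_decision(math.log(0.95), se=0.05))  # True  (precise study passes)
print(tost_decision(math.log(0.95), se=0.12))  # False (noisy study fails)
```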

I now realize that I was actually using the old standard's hypothesis (whose null tested for bioequivalence, instead of the current standard's null for bioinequivalence), which had problems with its **α** & **β** (highlighted in red, cropped from this paper), thus rendering my initial question pointless, because I was analyzing an old problem, i.e. one from before Hauck and Anderson's 1984 paper.

» - *Nominal* \(\alpha\) is *fixed* by the regulatory agency (generally at 0.05). With low sample sizes and/or high variability the *actual* \(\alpha\) can be substantially lower.
» - Since you have to pass *both* AUC and C_{max} (each tested at \(\alpha\) 0.05), the intersection-union tests keep the familywise error rate at ≤0.05.
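That intersection-union property can be sanity-checked with a toy Monte-Carlo simulation (my own simplified known-variance setting, with one endpoint placed exactly on the boundary of the null; all numbers are illustrative):

```python
# Under the global null at least one endpoint is truly inequivalent, and the
# IUT requires BOTH endpoints to pass, so the chance of a false "bioequivalent"
# verdict cannot exceed the single-endpoint alpha.
import math
import random
from statistics import NormalDist

random.seed(1)
z = NormalDist().inv_cdf(0.95)             # one-sided 5% critical value
ln_lo, ln_hi = math.log(0.8), math.log(1.25)

def passes(true_diff, se):
    """Simulate one endpoint: is the 90% CI of the log-scale estimate
    inside the acceptance limits?"""
    est = random.gauss(true_diff, se)
    return ln_lo < est - z * se and est + z * se < ln_hi

n = 100_000
false_pass = 0
for _ in range(n):
    auc_ok  = passes(0.0, 0.05)     # AUC: truly equivalent (diff = 0)
    cmax_ok = passes(ln_hi, 0.05)   # Cmax: exactly on the inequivalence boundary
    if auc_ok and cmax_ok:          # IUT verdict: both must pass
        false_pass += 1

print(false_pass / n)               # stays around 0.05, not inflated
```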

With that said, regarding the old standard's hypothesis (whose null tested for bioequivalence), I was originally curious (it may be a meaningless problem now, but I'm still curious) about how they bounded the family-wise error rate (FWER) if **α** = 5% for each hypothesis test, since the probability of committing one or more type I errors when performing three hypothesis tests is 1 − (1 − **α**)^3 = 1 − (1 − 0.05)^3 ≈ 14.26% (if those three hypothesis tests *were* actually independent).

The same question, more importantly, applied to **β**, since in the old standard's hypothesis (whose null tested for bioequivalence), "the consumer’s risk is defined as the probability (**β**) of accepting a formulation which is bioinequivalent, i.e. accepting H_{0} when H_{0} is false (Type II error)" (as quoted from page 212 of the same paper).

Do you know how the FDA bounded the "global" **α** & **β** before 1984? I am curious what kind of "secret math technique" was happening behind the scenes that allowed 12 random samples to be considered "good enough" by the FDA; i.e.:

- How do you calculate the probability of committing one or more type I errors when performing three hypothesis tests, when the null tested for bioequivalence (before 1984)?

- How do you calculate the probability of committing one or more type II errors when performing three hypothesis tests, when the null tested for bioequivalence (before 1984)?
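For concreteness, the naive independence bound quoted above is a one-liner to verify:

```python
# FWER under the (strong) assumption of three independent tests at alpha each.
alpha, k = 0.05, 3
fwer = 1 - (1 - alpha) ** k
print(round(fwer * 100, 2))  # 14.26 (%)
```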

Thanks in advance :)

ଘ(੭*ˊᵕˋ)੭* ̀ˋ

### Complete thread:

- What is the largest α (Alpha) & β (Beta) allowed by FDA? victor 2019-11-16 21:57 [General Statistics]
- What do you want to achieve? Helmut 2019-11-17 01:26
- I'm seeking to understand the math behind our current regulation victor 2019-11-17 10:53
- Some answers Helmut 2019-11-17 14:35
- Wow! Amazing answers! victor 2019-11-18 08:26
- More answers Helmut 2019-11-18 15:09
- Wow! More amazing answers! victor 2019-11-18 20:16
- Books & intersection-union Helmut 2019-11-19 12:01
- My progress on IUT so far victor 2019-11-22 01:28
- Update: Counterexamples victor 2019-11-23 09:05
