I'm seeking to understand the math behind our current regulation [General Statistics]

posted by victor – Malaysia, 2019-11-17 11:53

❝ Hi Victor, I tried to reconstruct your original post as well as I could. Since it was broken before the first “\(\mathcal{A}\)”, I guess you used a UTF-16 character whereas the forum is coded in UTF-8.


Hi Helmut, Thanks for helping me out :)

Edit: after a quick experiment (click here to see screenshot), it seems that the “\(\mathcal{A}\)” I used was a UTF-8 character after all? ⊙.☉

❝ Please don’t link to large images breaking the layout of the posting area and forcing us to scroll our viewport. THX.


Noted, and thanks for downscaling my original image :)

❝ I think that your approach has some flaws.


I see; I thought it would make sense for Tmax to also be transformed after googling stuff like this:
[image]
coupled with the fact that the population distribution being analyzed looks a lot like a log-normal distribution; so I thought normalizing Tmax just made sense, since almost all distributions studied at the undergraduate level (e.g. the F-distribution used in ANOVA) are ultimately transformations of one or more standard normals. With that said, is the stuff I googled above wrong?
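(For context, this is the intuition I was running with. A quick synthetic check, nothing to do with real Tmax data, and all the numbers are made up:)

```python
# Sketch of the "log of a log-normal is normal" intuition, on made-up data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# hypothetical right-skewed, Tmax-like sample (purely synthetic)
tmax_like = rng.lognormal(mean=np.log(1.5), sigma=0.4, size=1000)

for label, sample in [("raw", tmax_like), ("log-transformed", np.log(tmax_like))]:
    w, p = stats.shapiro(sample)  # Shapiro-Wilk normality test
    print(f"{label:16s} skewness = {stats.skew(sample):+.2f}   Shapiro-Wilk p = {p:.3g}")
# The raw sample is right-skewed and fails the normality test;
# its logarithm is symmetric and consistent with a normal distribution.
```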


Thanks for enlightening me; I can now restate the current standard's hypotheses in a more familiar (undergraduate-level) form:
$$H_0: \ln(\mu_T) - \ln(\mu_R) \notin \left[\ln(\theta_1), \ln(\theta_2)\right] \quad \text{vs.} \quad H_1: \ln(\theta_1) < \ln(\mu_T) - \ln(\mu_R) < \ln(\theta_2)$$
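To check my own understanding, here is a rough sketch of the two one-sided tests (TOST) these hypotheses imply. I have simplified to a parallel design with log-normal data (a real BE study would use the crossover ANOVA), and the function name, data and numbers are all mine, so treat it only as an illustration of the hypotheses, not as how the analysis is actually done:

```python
# TOST sketch: reject H0 (bioinequivalence) if the 90% CI of the difference
# of log-means lies entirely inside [ln(theta1), ln(theta2)].
import numpy as np
from scipy import stats

def tost_log_scale(test, ref, theta1=0.80, theta2=1.25, alpha=0.05):
    x, y = np.log(test), np.log(ref)
    nx, ny = len(x), len(y)
    diff = x.mean() - y.mean()
    s2 = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    se = np.sqrt(s2 * (1 / nx + 1 / ny))
    df = nx + ny - 2
    t_crit = stats.t.ppf(1 - alpha, df)
    lo, hi = diff - t_crit * se, diff + t_crit * se   # 100*(1-2*alpha)% = 90% CI on the log scale
    be = (np.log(theta1) < lo) and (hi < np.log(theta2))
    return np.exp(lo), np.exp(hi), be                 # CI back-transformed to the ratio scale

rng = np.random.default_rng(1)
ref  = rng.lognormal(np.log(100), 0.25, size=12)      # hypothetical AUC values
test = rng.lognormal(np.log(105), 0.25, size=12)
print(tost_log_scale(test, ref))                      # (lower, upper, bioequivalent?)
```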

I now realize that I was actually using the old standard's hypothesis (whose null tested for bioequivalence, instead of the current standard's null of bioinequivalence), which had problems with its α and β (highlighted in red below, cropped from this paper), thus rendering my initial question pointless because I was analyzing an old problem, i.e. one from before Hauck and Anderson's 1984 paper.

[image]


With that said, regarding the old standard's hypothesis (whose null tested for bioequivalence), I was originally curious (it may be a moot problem now, but I'm still curious) how the family-wise error rate (FWER) was bounded if α = 5% was used for each hypothesis test, since the probability of committing one or more type I errors across three independent hypothesis tests is 1 − (1 − α)³ = 1 − (1 − 0.05)³ = 14.26%.
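Just to make that arithmetic explicit (the Bonferroni line is only my own illustration of how one could cap the FWER, not a claim about what was actually done pre-1984):

```python
# Family-wise error rate for k independent tests at per-test level alpha,
# plus the Bonferroni-adjusted per-test level that would cap the FWER at 5%.
alpha, k = 0.05, 3
fwer = 1 - (1 - alpha) ** k
print(f"FWER with {k} independent tests at alpha = {alpha}: {fwer:.4%}")   # 14.2625%
print(f"Bonferroni per-test alpha for a 5% FWER: {alpha / k:.4f}")         # 0.0167
```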

The same question applies even more to β, since under the old standard's hypothesis (whose null tested for bioequivalence), "the consumer's risk is defined as the probability (β) of accepting a formulation which is bioinequivalent, i.e. accepting H0 when H0 is false (Type II error)" (quoted from page 212 of the same paper).

Do you know how the FDA bounded the "global" α and β before 1984? I am curious what kind of "secret math technique" was happening behind the scenes that allowed 12 random samples to be considered "good enough" by the FDA; i.e.
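(This isn't the historical answer I'm after, but to convince myself why n = 12 can look adequate under the *current* TOST framework, I put together a rough simulation. The within-subject CV of 20% and the true T/R ratio of 0.95 are my own assumptions, not the FDA's, and the paired-difference analysis is a simplification of the crossover ANOVA.)

```python
# Simulated probability of passing TOST (90% CI within 0.80-1.25) for n subjects,
# using within-subject period differences of log-values as a crossover-like stand-in.
import numpy as np
from scipy import stats

def simulated_power(n=12, cv=0.20, true_ratio=0.95, alpha=0.05, nsim=20_000, seed=7):
    rng = np.random.default_rng(seed)
    sigma_w = np.sqrt(np.log(1 + cv ** 2))   # within-subject SD on the log scale
    sd_diff = np.sqrt(2) * sigma_w           # SD of log(T) - log(R) per subject
    t_crit = stats.t.ppf(1 - alpha, n - 1)
    passes = 0
    for _ in range(nsim):
        d = rng.normal(np.log(true_ratio), sd_diff, size=n)
        half_width = t_crit * d.std(ddof=1) / np.sqrt(n)
        lo, hi = d.mean() - half_width, d.mean() + half_width
        passes += (np.log(0.80) < lo) and (hi < np.log(1.25))
    return passes / nsim

print(f"Simulated power with n = 12: {simulated_power():.1%}")
```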


Thanks in advance :)
ଘ(੭*ˊᵕˋ)੭* ̀ˋ
