My progress on IUT so far [General Statistics]

posted by victor – Malaysia, 2019-11-22 02:28 – Posting: # 20857

❝ ❝ Honestly, I feel like a caveman […]


❝ C’mon! Your skills in maths are impressive.


Thanks Helmut :-) I appreciate it a lot.
(*^‿^*)
I'm also grateful that you shared that excerpt (more on this later, because Chapter 7 wasn't available on books.google.com).


Thankfully, I found enough free time to analyze the following IUT theorem, using the notes I made while studying Theorem 8.3.4 on page 122 of these lecture notes. (I don't have access to any of your recommended textbooks yet, though.)

[image]

P.S. I intentionally used < instead of ≤ because I feel that in practice, αY will always be less than one. Besides that, my Gaussian level sets didn't scale correctly against my hand-drawn background, so their shapes are compromised; they should actually have the same covariance matrix.
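To convince myself that the "level α no matter how the component tests are correlated" part really holds, I also ran a quick Monte Carlo sketch. Everything in it is my own assumption (bivariate-normal test statistics, one-sided z-tests, the particular means and correlations); it is only meant to echo the theorem, not to reproduce the lecture notes:

```python
import numpy as np

# Minimal Monte Carlo sketch (my own toy setup, not from the lecture notes):
# two one-sided z-tests are combined by the IUT rule "reject H0 only if both
# component tests reject".  On the boundary of the union null -- one component
# null true, the other grossly false -- the rejection rate never exceeds the
# component level 0.05, whatever the correlation between the statistics.

rng = np.random.default_rng(1)
n_sim = 200_000
z_crit = 1.6448536269514722          # one-sided 5% critical value of N(0, 1)

def iut_rejection_rate(mu1, mu2, rho):
    """P(Z1 > z_crit and Z2 > z_crit) for a bivariate normal (Z1, Z2)."""
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([mu1, mu2], cov, size=n_sim)
    return np.mean((z[:, 0] > z_crit) & (z[:, 1] > z_crit))

# H01 true (mu1 = 0), H02 grossly false (mu2 = 5): least favourable for the IUT.
for rho in (-0.9, 0.0, 0.9):
    print(f"rho = {rho:+.1f}:  size ~ {iut_rejection_rate(0.0, 5.0, rho):.4f}")
```

Whatever ρ I plug in, the empirical size stays at or below 0.05 (up to simulation noise), which is exactly the point of the theorem as I understand it.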



I'm now thinking about all the cases in which H0's covariance matrix differs from the true covariance matrix (e.g. what β would look like) to see how the IUT really deals with dependencies. A preliminary glance seems to suggest that the above theorem is violated by the following cases, so I definitely need to think more deeply about what the theorem actually states (i.e. what it means for the IUT to be a level-α test of H0 versus H1) to truly understand how these cases affect the "global" α and β:
[image]
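While chewing on these cases I sketched the mismatch numerically as well. Again the whole setup is my own toy assumption: each component test keeps the critical value it would use for independent unit-variance statistics, while the data are generated with a different true correlation:

```python
import numpy as np

# Toy sketch of the "assumed vs. true covariance" worry (entirely my own setup):
# the per-test critical value is fixed as if the statistics were independent
# standard normals, but the data are drawn with a different true correlation.
# Where both component nulls hold, the joint rejection rate moves with the true
# correlation yet never exceeds 0.05; under the alternative the mismatch shows
# up in the power, i.e. in beta.

rng = np.random.default_rng(2)
n_sim = 200_000
z_crit = 1.6448536269514722          # one-sided 5% critical value

def both_reject_rate(mu, rho):
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal(mu, cov, size=n_sim)
    return np.mean((z[:, 0] > z_crit) & (z[:, 1] > z_crit))

for rho in (-0.5, 0.0, 0.5, 0.9):
    size  = both_reject_rate([0.0, 0.0], rho)   # both component nulls true
    power = both_reject_rate([3.0, 3.0], rho)   # a point inside H1
    print(f"rho = {rho:+.1f}:  size ~ {size:.4f}   power ~ {power:.4f}")
```

At least in this toy version the dependence changes how conservative the "global" α is and how large β turns out, but it never pushes the size above the nominal level.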

With that said, now that I'm starting to get a feel for the IUT, I think I'm getting closer to truly understanding the following facts you shared :-)



As of now, though, I haven't managed to see the "new" geometric interpretation of correlation yet, so the following facts are still not within my grasp:

❝ The FWER gets more conservative the more the PK metrics differ.

❝ The Euclidean distance between centers gives the correlation of PK metrics (here they are identical as well). The FWER is given by the area of the intersection, which in any case will be ≤ the nominal α.


[image]

In reality the correlation of AUC0–∞ (green) with AUC0–t (blue) is higher than the correlation of both with Cmax (red). If we tested only the AUCs, the FWER would again be given by the intersection, which is clearly lower than the individual type I errors. If we add Cmax, the FWER decreases further.
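Out of curiosity I tried to mimic this picture numerically too. The correlations below (0.95 between the two AUCs, 0.60 of each with Cmax) are pure inventions of mine to echo the figure, and the one-sided z-tests are only stand-ins for the real per-metric TOSTs:

```python
import numpy as np

# Toy illustration (correlation values invented): each PK metric gets its own
# level-0.05 test and the product is accepted only if *all* of them pass, so
# the family-wise error is the probability of the intersection of the
# individual errors.  Adding a less correlated metric (Cmax) to the two highly
# correlated AUCs can only shrink that intersection.

rng = np.random.default_rng(3)
n_sim = 500_000
z_crit = 1.6448536269514722          # one-sided 5% critical value

corr = np.array([[1.00, 0.95, 0.60],    # AUC0-t
                 [0.95, 1.00, 0.60],    # AUC0-inf
                 [0.60, 0.60, 1.00]])   # Cmax
z = rng.multivariate_normal([0.0, 0.0, 0.0], corr, size=n_sim)
err = z > z_crit                        # individual false positives, each ~ 0.05

print("single metric:", err[:, 0].mean())
print("both AUCs    :", (err[:, 0] & err[:, 1]).mean())
print("AUCs + Cmax  :", err.all(axis=1).mean())
```

The three printed rates decrease in exactly the order you describe, which finally made the "area of the intersection" picture click for me.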





❝     The proof of the result is almost trivial, at least if one is willing to adopt some piece of the basic formalism customary in expositions of the abstract theory of statistical hypothesis testing methods. […] The condition we have to verify reads […] as follows:

❝ $$E_{(\eta_1,\ldots,\eta_q)}(\phi)\leq\alpha\;\textrm{for all}\;(\eta_1,\ldots,\eta_q)\in H\tag{7.3}$$ where \(E_{(\eta_1,\ldots,\eta_q)}(\cdot)\) denotes the expected value computed under the parameter constellation \((\eta_1,\ldots,\eta_q)\). […]


❝     In order to apply the result to multisample equivalence testing problems, let \(\theta_i\) be the parameter of interest (e.g., the expected value) for the ith distribution under comparison, and require of a pair \((i,j)\) of distributions equivalent to each other that the statement $$K_{(i,j)}:\,\rho(\theta_i,\theta_j)<\epsilon,\tag{7.4}$$ holds true, with \(\rho(\cdot,\cdot)\) denoting a suitable measure of distance between parameters. Suppose furthermore that for each \((i,j)\) a test \(\phi_{(i,j)}\) of \(H_{(i,j)}:\,\rho(\theta_i,\theta_j)\geq \epsilon\) versus \(K_{(i,j)}:\,\rho(\theta_i,\theta_j)< \epsilon\) is available whose rejection probability is \(\leq \alpha\) at any point \((\theta_1,\ldots,\theta_k)\) in the full parameter space such that \(\rho(\theta_i,\theta_j)\geq \epsilon\). Then, by the intersection-union principle, deciding in favour of “global equivalence” if and only if equivalence can be established for all \(\binom{k}{2}\) possible pairs yields a valid level-\(\alpha\) test for $$H:\,\underset{i<j}{\max}\{\rho(\theta_i,\theta_j)\}\geq \epsilon\;\textrm{vs.}\;K:\,\underset{i<j}{\max}\{\rho(\theta_i,\theta_j)\}<\epsilon\tag{7.5}$$
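Just to make sure I'm reading (7.4)–(7.5) correctly, here is a literal little sketch of the rule in code. The ordinary two-sample TOST from scipy is only my stand-in for the generic pairwise test \(\phi_{(i,j)}\), and the margin, the group sizes and the simulated data are all assumptions of mine:

```python
from itertools import combinations
import numpy as np
from scipy import stats

# Sketch of the intersection-union rule from the excerpt: declare "global
# equivalence" iff every one of the C(k, 2) pairwise equivalence tests rejects
# its pairwise null H_(i,j) at level alpha.  Here phi_(i,j) is an ordinary
# additive-scale TOST on the difference of means (my own choice).

def pair_equivalent(x, y, eps, alpha=0.05):
    """TOST for |mean(x) - mean(y)| < eps: both one-sided tests must reject."""
    p_lower = stats.ttest_ind(x + eps, y, alternative='greater').pvalue
    p_upper = stats.ttest_ind(x - eps, y, alternative='less').pvalue
    return max(p_lower, p_upper) <= alpha

def globally_equivalent(samples, eps, alpha=0.05):
    """Decision (7.5): equivalence must hold for every pair (i, j), i < j."""
    return all(pair_equivalent(samples[i], samples[j], eps, alpha)
               for i, j in combinations(range(len(samples)), 2))

rng = np.random.default_rng(4)
groups = [rng.normal(loc=mu, scale=1.0, size=50) for mu in (0.00, 0.05, -0.05)]
print(globally_equivalent(groups, eps=0.5))
```

What I take away from this is that no multiplicity adjustment is applied anywhere: each pairwise test runs at the full α, and the "only if all pairs pass" rule is what keeps the global level at α.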

:-D



I'm thinking that the excerpt you shared contains the crucial info for me to see the "new" geometric interpretation of correlation, because of the following statement:

❝ […] with \(\rho(\cdot,\cdot)\) denoting a suitable measure of distance between parameters.


but I'm very confused by the excerpt's notation, because I couldn't find the corresponding notation in the aforementioned lecture notes (nor by googling); in particular:
【・ヘ・?】

Thanks in advance for the clarification :-)



❝ Do you know Anscombe’s quartet?


Nope :-P Thanks for sharing :) This is the first time I've heard of Anscombe's quartet, and I found it pretty interesting as a possible example for introducing other statistics (e.g. skewness and kurtosis) – see the quick check after the figure below.

[image]
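Just to see it in numbers, I recomputed the usual summaries plus skewness and kurtosis for the four data sets (values as tabulated in Anscombe, 1973; using scipy's skew/kurtosis was simply my choice of "other statistics"):

```python
import numpy as np
from scipy import stats

# Anscombe's quartet: the first two moments and the correlation practically
# coincide across the four data sets, but higher-order summaries such as
# skewness and (excess) kurtosis already tell them apart.

x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
quartet = {
    "I":   (x123, [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]),
    "II":  (x123, [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]),
    "III": (x123, [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]),
    "IV":  ([8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8],
            [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]),
}

for name, (x, y) in quartet.items():
    x, y = np.asarray(x, float), np.asarray(y, float)
    print(f"{name:>3}: mean_y = {y.mean():.2f}  var_y = {y.var(ddof=1):.2f}  "
          f"r = {np.corrcoef(x, y)[0, 1]:.3f}  "
          f"skew_y = {stats.skew(y):+.2f}  kurtosis_y = {stats.kurtosis(y):+.2f}")
```

The means, variances and correlations come out practically identical, while the skewness and kurtosis columns are the first ones that differ noticeably.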

For some reason, though, the Raven Paradox came to mind while I was reading about Anscombe's quartet. Maybe because both raise the question of what actually constitutes evidence for a hypothesis?
