## Worded differently [General Statistics]

Hi Hötzi,

» Completely confused. Can you try again, please?

OK.
1. Let us look at the Wikipedia page for the t-test:
"Most test statistics have the form t = Z/s, where Z and s are functions of the data."
2. For the one-sample t-statistic, Z = sample mean - hypothesized mean and s = sd/sqrt(n).
3. Why are Z and s independent in this case? Or, more generally and for me much more importantly: if we have two functions (f and g, or Z and s), which properties of such functions or of their input would render them independent?
Wikipedia links to a page about independence; the key passage is: "Two events are independent, statistically independent, or stochastically independent if the occurrence of one does not affect the probability of occurrence of the other (equivalently, does not affect the odds). Similarly, two random variables are independent if the realization of one does not affect the probability distribution of the other."
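To make the notation concrete, here is a minimal sketch of the Wikipedia form t = Z/s for the one-sample case in Python. The helper name `t_statistic` and the parameter `mu` (the hypothesized mean) are my own, purely illustrative:

```python
import math
import statistics

def t_statistic(sample, mu):
    """One-sample t in the Wikipedia form t = Z/s,
    with Z = xbar - mu and s = sd/sqrt(n)."""
    n = len(sample)
    xbar = statistics.fmean(sample)
    sd = statistics.stdev(sample)      # sample sd (n-1 in the denominator)
    Z = xbar - mu
    s = sd / math.sqrt(n)
    return Z / s

# Example: a sample whose mean equals mu gives t = 0 exactly
t0 = t_statistic([1, 2, 3, 4, 5], mu=3)
```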

I am fully aware that when we simulate a normal distribution with some mean and some variance, those parameters define the expected estimates in a sample. That is, if a sample happens to have a mean higher than the simulated mean, this does not necessarily mean the sampled sd is higher (or lower, for that matter; that was where I was going with "perturbation"). In that case it sounds right to think of the two as independent. Now, how about the general case, for example if we know nothing about the nature of the sample but just look at any two functions of it? What property would we look for in those two functions to think they are independent?
What I am looking for is a general understanding of when any two quantities derived from a sample are independent; point #3 above defines my question.
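The simulation idea above can at least be checked empirically. The sketch below draws many small samples, records each sample's mean and sd, and measures their Pearson correlation: near zero for normal samples (where independence of mean and sd is a known special property of the normal distribution), but clearly positive for exponential samples, where a large mean tends to come with a large sd. Note the hedge: zero correlation is necessary but not sufficient for independence, so this is an illustration, not a proof. All names (`pairs`, `pearson`, the sample size and repetition counts) are my own choices:

```python
import math
import random
import statistics

random.seed(42)  # fixed seed so the run is reproducible

def pairs(draw, n=5, reps=20000):
    """Simulate `reps` samples of size n from `draw`; return their means and sds."""
    means, sds = [], []
    for _ in range(reps):
        x = [draw() for _ in range(n)]
        means.append(statistics.fmean(x))
        sds.append(statistics.stdev(x))
    return means, sds

def pearson(a, b):
    """Plain Pearson correlation (avoids relying on Python 3.10's statistics.correlation)."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

# Normal samples: mean and sd uncorrelated (and in fact independent)
m, s = pairs(lambda: random.gauss(0.0, 1.0))
r_norm = pearson(m, s)

# Exponential samples: sd scales with the mean, so the two are positively correlated
m, s = pairs(lambda: random.expovariate(1.0))
r_expo = pearson(m, s)
```

The contrast between `r_norm` and `r_expo` is the point: the "two functions of the data" are the same in both runs, so whether they are independent depends on the distribution of the input, not on the functions alone.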

Pass or fail!
ElMaestro