## Wow! Amazing answers! [General Statistics]

Since it will take me time to properly digest and reply to all the details you just shared (especially on IUT, which is new to me and might be the missing link I was searching for to deal with dependencies in multiple hypothesis testing), I decided to respond to specific concepts, one at a time :)

Also, thanks for confirming which concepts I grasped correctly; as someone new to pharmacokinetics, it definitely helps me eliminate logic errors.
(｡♥‿♥｡)

For now, I'd like to dig a little deeper into the logic errors(?) I made regarding tmax.

»
1. Tmax is poor terminology. “T” denotes the absolute temperature in Kelvin whereas “t” stands for time. Hence, tmax should be used.

Good point :) I initially chose to use Tmax because I'm used to writing random variables in uppercase, to help myself catch syntax errors if I ever write something silly like E[t] when t is not a random variable, etc.

With that said...(I rearranged your reply below to fit the flow better)

» » … coupled with the fact that the population distribution that is being analyzed looks a lot like a Log-normal distribution; so I thought normalizing Tmax just made sense, since almost all distributions studied in undergraduate (e.g. F-distribution used by ANOVA) are ultimately transformations of one or more standard normals.
»
» Yes it is because such a transformation for a discrete distribution (like tmax) is not allowed.
»
» The distribution strongly depends on the study’s sampling schedule but is definitely discrete. Given, the underlying one likely is continuous but we simply don’t have an infinite number of samples in NCA. A log-transformation for discrete distributions is simply not allowed. Hence, what is stated in this slide is wrong. Many people opt for one of the variants of the Wilcoxon test to assess the difference. Not necessarily correct. The comparison of the shift in locations is only valid if distributions are equal. If not, one has to opt for the Brunner-Munzel test (available in the R package nparcomp).

Does this mean that the Nyquist criterion is not satisfied in pharmacokinetics (in terms of the sampling interval)? Couldn't we design the study’s sampling schedule so that the Nyquist criterion is satisfied, letting us perfectly reconstruct the original continuous-time function from the samples?
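In the meantime, I tried to convince myself of the discreteness point with a quick Python sketch. The one-compartment model, the sampling schedule, and every parameter value here are my own invented illustrations (not anything from your post): whatever the underlying continuous tmax is, the *observed* tmax can only land on the scheduled times, and the two groups can then be compared with the Brunner-Munzel test you mentioned (SciPy also ships one):

```python
import numpy as np
from scipy.stats import brunnermunzel

# Hypothetical one-compartment oral-absorption model
# (parameters invented purely for illustration)
def conc(t, ka, ke=0.2):
    return ka / (ka - ke) * (np.exp(-ke * t) - np.exp(-ka * t))

# A made-up NCA sampling schedule (hours)
schedule = np.array([0.25, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0, 8.0, 12.0, 24.0])

rng = np.random.default_rng(42)

def observed_tmax(ka_typical, n=24):
    """tmax as actually observed: the *scheduled* time of the highest sample."""
    tmax = []
    for _ in range(n):
        ka_i = ka_typical * rng.lognormal(0.0, 0.3)  # between-subject variability
        tmax.append(schedule[np.argmax(conc(schedule, ka_i))])
    return np.array(tmax)

ref = observed_tmax(1.2)
test = observed_tmax(0.9)

# However continuous the underlying tmax is, the observed one is discrete:
# it can only take values from the sampling schedule
assert set(ref) | set(test) <= set(schedule)

# Nonparametric comparison; Brunner-Munzel does not assume equal distributions
stat, p = brunnermunzel(test, ref)
print(f"Brunner-Munzel: stat = {stat:.3f}, p = {p:.4f}")
```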

» Wait a minute. It is possible to see concentration-time profiles as distributions resulting from a stochastic process. […] I know only one approach trying to directly compare profiles based on moment-theory (the Kullback-Leibler information criterion).

Yes, I originally viewed the population distribution being analyzed as a stochastic process, but I didn't dive in deep enough to check whether the moment-generating function of a log-normal distribution "matches" the concentration-time-profile model used in pharmacokinetics (I'm still learning the models used in pharmacokinetics). So it was more of a "visually looks a lot like a log-normal distribution" on my part :p
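If I understood the Kullback-Leibler idea correctly, each profile is normalized to unit area so it can be treated like a density, and the divergence between the normalized profiles is then computed. Here's a minimal sketch of that (both profiles and all parameters are invented for illustration):

```python
import numpy as np
from scipy.stats import entropy  # entropy(p, q) = KL divergence D(p || q)

# Two hypothetical profiles on a fine, uniform time grid (hours)
t = np.linspace(0.1, 24.0, 200)

def conc(t, ka, ke):
    return ka / (ka - ke) * (np.exp(-ke * t) - np.exp(-ka * t))

c_ref = conc(t, ka=1.2, ke=0.20)
c_test = conc(t, ka=1.0, ke=0.25)

# Normalize each profile to unit mass: this is the sense in which a
# concentration-time profile "looks like" a probability distribution
p = c_ref / c_ref.sum()
q = c_test / c_test.sum()

# KL information between the discretized profiles (nats); 0 iff they coincide
kl = entropy(p, q)
print(f"KL(ref || test) = {kl:.4f} nats")
```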

With that said, this specific fact you shared intrigues me...

» Les Benet once stated (too lazy to search where and when) that for a reliable estimate of AUC one has to sample in such a way that the extrapolated fraction is 5–20%. For AUMC one would need 1–5%. No idea about VRT but in my experience its variability is extreme.

I'm familiar with Taylor series approximations, Fourier series approximations, etc., but I never really thought about how the error terms of a moment-generating function are controlled if I use, for example, a log-normal distribution to describe the concentration-time profile. It might be obvious (though I don't recall learning it in undergrad), but I'd rather spend my time learning about IUT :p. So I was wondering: do you have any good reference material on how to correctly use moments to model a real-life distribution (e.g. a concentration-time profile)?
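Meanwhile, I tried to see Benet's rule of thumb in action with a crude NCA sketch. Everything here (the one-compartment profile, the schedule, fitting lambda_z on the last three points, extrapolating AUC as Clast/lambda_z) is my own invented illustration, not a claim about how it's actually done in practice:

```python
import numpy as np

# Invented one-compartment profile sampled at a made-up NCA schedule (hours)
ka, ke = 1.2, 0.2
t = np.array([0.25, 0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0, 12.0])
c = ka / (ka - ke) * (np.exp(-ke * t) - np.exp(-ka * t))

# AUC(0 - tlast) by the linear trapezoidal rule
auc_last = np.sum((c[1:] + c[:-1]) / 2 * np.diff(t))

# lambda_z from a log-linear fit of the last three points
# (assumed to lie in the terminal phase)
slope, _ = np.polyfit(t[-3:], np.log(c[-3:]), 1)
lambda_z = -slope

# Extrapolate to infinity: AUC(tlast - inf) = Clast / lambda_z
auc_extra = c[-1] / lambda_z
auc_inf = auc_last + auc_extra
frac_extra = auc_extra / auc_inf

print(f"lambda_z = {lambda_z:.3f} 1/h, extrapolated fraction = {frac_extra:.1%}")

# The quoted rule of thumb: 5-20% extrapolated for a reliable AUC estimate
assert 0.05 < frac_extra < 0.20
```

With this schedule the fit recovers lambda_z close to the true ke and the extrapolated fraction lands inside the quoted 5–20% window; truncating the schedule earlier pushes the fraction up.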

Fun fact: as I was briefly reading about Kullback–Leibler divergence, I thought it looked familiar, then I realized I first encountered it when reading about IIT

Thanks once again for all your amazing replies!
ଘ(੭*ˊᵕˋ)੭* ̀ˋ