And the Oscar goes to: Replicates! [General Statistics]

posted by Helmut – Vienna, Austria, 2022-11-17 11:19 – Posting: # 23371

Hi ElMaestro,

❝ The estimates are unbiased estimators of the effects you are extracting. That's how it works, so there is no surprise here. Now consider the quality of the estimates, for example if you want to know how well a mean (or slope or intercept or …) is estimated. Look, for example, at the SE of the estimates, easily extractable from an lm via e.g.

summary(M)$coeff[,2]


Your wish is my command.

x = 1, 2, 3, 4, 5
 estimate    a.hat     SE  b.hat     SE
     mean +0.00001 0.4836 1.9999 0.1458
x = 1, 1, 3, 5, 5
 estimate    a.hat     SE  b.hat     SE
     mean -0.00003 0.4024 2.0000 0.1152
x = 1, 1, 1, 5, 5
 estimate    a.hat     SE  b.hat     SE
     mean +0.00011 0.3423 1.9999 0.1051
x = 1, 1, 1, 1, 5
 estimate    a.hat     SE  b.hat     SE
     mean +0.00012 0.3102 1.9999 0.1288
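
The pattern in the SEs of b.hat above also has a closed-form explanation: in simple linear regression SE(b.hat) = σ/√Sxx with Sxx = Σ(x − x̄)², so pushing points toward the extremes increases Sxx and shrinks the SE of the slope, until the design becomes as lopsided as the last one. A small sketch (x-values taken from the table above):

```r
# Sxx = sum((x - mean(x))^2) drives SE(b.hat) = sigma / sqrt(Sxx):
# the larger Sxx, the smaller the SE of the slope.
designs <- list(c(1, 2, 3, 4, 5),
                c(1, 1, 3, 5, 5),
                c(1, 1, 1, 5, 5),
                c(1, 1, 1, 1, 5))
Sxx <- sapply(designs, function(x) sum((x - mean(x))^2))
# SE(b.hat) relative to the equally spaced design
data.frame(x        = sapply(designs, paste, collapse = ", "),
           Sxx      = Sxx,
           rel.SE.b = round(sqrt(Sxx[1] / Sxx), 4))
```

The relative SEs come out as 1, 0.7906, 0.7217, 0.8839, matching the ratios of the simulated SEs of b.hat above (e.g., 0.1152/0.1458 ≈ 0.790) and showing why the heavily unbalanced last design loses precision again.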


❝ More replicates, better estimates.


You are right! I always preferred calibration curves with duplicates. ;-)

❝ Or, expressed in clear BE terms, higher sample size gives narrower confidence intervals, but the expected value of the point estimate is unchanged.


Not only that. Power depends roughly on the total number of administered treatments. Moreover, replicate designs give more degrees of freedom than a 2×2×2 crossover.

library(PowerTOST)
des1  <- c("2x2x2", "2x2x4", "2x2x3", "2x3x3")
des2  <- c("2×2×2 crossover",
           "2-sequence 4-period full replicate",
           "2-sequence 3-period full replicate",
           "3-sequence 3-period partial replicate")
CV    <- 0.35
cross <- sampleN.TOST(CV = CV, design = des1[1], print = FALSE)[7:8]
full1 <- sampleN.TOST(CV = CV, design = des1[2], print = FALSE)[7:8]
full2 <- sampleN.TOST(CV = CV, design = des1[3], print = FALSE)[7:8]
part  <- sampleN.TOST(CV = CV, design = des1[4], print = FALSE)[7:8]
x     <- data.frame(design = des2,
                    n = c(cross[["Sample size"]], full1[["Sample size"]],
                          full2[["Sample size"]], part[["Sample size"]]),
                    df = c(cross[["Sample size"]] -  2,
                           3 * full1[["Sample size"]] - 4,
                           2 * full2[["Sample size"]] - 3,
                           2 * part[["Sample size"]] - 3),
                    power = c(cross[["Achieved power"]], full1[["Achieved power"]],
                              full2[["Achieved power"]], part[["Achieved power"]]))
print(x, row.names = FALSE, right = FALSE)

design                                n  df power   
2×2×2 crossover                       52 50 0.8074702
2-sequence 4-period full replicate    26 74 0.8108995
2-sequence 3-period full replicate    38 73 0.8008153
3-sequence 3-period partial replicate 39 75 0.8109938
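
To see the "total number of treatments" point in the output above: multiplying each design's sample size by its number of periods (administrations per subject, read off the design codes) gives roughly comparable totals. A quick check with the numbers from the table:

```r
# Sample sizes from the output above times periods per design
n       <- c(52, 26, 38, 39)  # 2x2x2, 2x2x4, 2x2x3, 2x3x3
periods <- c(2, 4, 3, 3)      # administrations per subject
n * periods                   # total administered treatments
```

This gives 104, 104, 114, 117 treatments for ≈ 81% power in all four designs, despite the sample sizes ranging from 26 to 52 subjects.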


Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz

