And the Oscar goes to: Replicates! [General Statistics]
❝ The estimates are unbiased estimators of the effects you are extracting. That's how it works, so there is no surprise here. Now consider the quality of the estimates, for example if you want to know how well a mean (or slope or intercept or ...) is estimated. Look for example at the SE of the estimates, easily extractable from an lm via e.g.
❝ summary(M)$coeff[,2]
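For readers who want to see that extraction in context, a minimal sketch (the data are made up for illustration; only the indexing into the coefficient matrix is the point):

```r
# Fit a simple linear model and pull the standard errors of the
# coefficients: column 2 of summary()'s coefficient matrix
# (column 1 holds the estimates themselves).
set.seed(123)
x <- c(1, 2, 3, 4, 5)
y <- 2 * x + rnorm(length(x), mean = 0, sd = 0.5)
M <- lm(y ~ x)
summary(M)$coeff[, 2]  # SEs of intercept and slope
```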
Your wish is my command.
Mean estimates and standard errors:

x               a.hat     SE      b.hat   SE
1, 2, 3, 4, 5   +0.00001  0.4836  1.9999  0.1458
1, 1, 3, 5, 5   -0.00003  0.4024  2.0000  0.1152
1, 1, 1, 5, 5   +0.00011  0.3423  1.9999  0.1051
1, 1, 1, 1, 5   +0.00012  0.3102  1.9999  0.1288
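These figures are consistent with repeatedly simulating y = a + b·x + ε, fitting an lm, and averaging the estimates and their SEs over the runs. A sketch of such a simulation; the true values (a = 0, b = 2, σ = 0.5) and the number of runs are my assumptions, not stated above:

```r
# Sketch of the presumed simulation: draw data from
# y = a + b*x + e, e ~ N(0, sigma), fit lm(), and average the
# estimates and their standard errors over nsims runs.
# a = 0, b = 2, sigma = 0.5 are assumed values.
sim <- function(x, a = 0, b = 2, sigma = 0.5, nsims = 1e4) {
  res <- t(replicate(nsims, {
    y  <- a + b * x + rnorm(length(x), sd = sigma)
    cf <- summary(lm(y ~ x))$coeff
    c(a.hat = cf[1, 1], SE.a = cf[1, 2],
      b.hat = cf[2, 1], SE.b = cf[2, 2])
  }))
  round(colMeans(res), 5)
}
set.seed(42)
sim(x = c(1, 2, 3, 4, 5))  # SEs close to 0.4836 / 0.1458
sim(x = c(1, 1, 1, 1, 5))  # intercept improves, slope worsens
```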
❝ More replicates, better estimates.
You are right! I always preferred calibration curves with duplicates.
❝ Or, expressed in clear BE terms, higher sample size gives narrower confidence intervals, but the expected value of the point estimate is unchanged.
Not only that. Power depends roughly on the total number of administered treatments. Moreover, in replicate designs we have more degrees of freedom than in a 2×2×2 crossover.
library(PowerTOST)
des1 <- c("2x2x2", "2x2x4", "2x2x3", "2x3x3")
des2 <- c("2×2×2 crossover",
          "2-sequence 4-period full replicate",
          "2-sequence 3-period full replicate",
          "3-sequence 3-period partial replicate")
CV    <- 0.35
# Columns 7:8 of sampleN.TOST()'s result: sample size, achieved power
cross <- sampleN.TOST(CV = CV, design = des1[1], print = FALSE)[7:8]
full1 <- sampleN.TOST(CV = CV, design = des1[2], print = FALSE)[7:8]
full2 <- sampleN.TOST(CV = CV, design = des1[3], print = FALSE)[7:8]
part  <- sampleN.TOST(CV = CV, design = des1[4], print = FALSE)[7:8]
x <- data.frame(design = des2,
                n      = c(cross[["Sample size"]], full1[["Sample size"]],
                           full2[["Sample size"]], part[["Sample size"]]),
                # df: n - 2 (2x2x2), 3n - 4 (2x2x4), 2n - 3 (2x2x3, 2x3x3)
                df     = c(cross[["Sample size"]] - 2,
                           3 * full1[["Sample size"]] - 4,
                           2 * full2[["Sample size"]] - 3,
                           2 * part[["Sample size"]] - 3),
                power  = c(cross[["Achieved power"]], full1[["Achieved power"]],
                           full2[["Achieved power"]], part[["Achieved power"]]))
print(x, row.names = FALSE, right = FALSE)
design n df power
2×2×2 crossover 52 50 0.8074702
2-sequence 4-period full replicate 26 74 0.8108995
2-sequence 3-period full replicate 38 73 0.8008153
3-sequence 3-period partial replicate 39 75 0.8109938
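The "total number of treatments" argument can be checked directly against the table: subjects × periods is almost constant across designs. A back-of-the-envelope check, plugging in the sample sizes from the run above:

```r
# Total treatment administrations = subjects x periods,
# using the sample sizes from the PowerTOST run above.
n       <- c(52, 26, 38, 39)  # 2x2x2, 2x2x4, 2x2x3, 2x3x3
periods <- c(2, 4, 3, 3)
setNames(n * periods, c("2x2x2", "2x2x4", "2x2x3", "2x3x3"))
# 104 administrations in both the 2x2x2 crossover and the
# 4-period full replicate; slightly more (114, 117) in the
# 3-period designs.
```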
Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked. 🚮
Complete thread:
- Observations in a linear model Helmut 2022-11-15 10:55 [General Statistics]
- Observations in a linear model ElMaestro 2022-11-16 08:44
- Observations in a linear model Helmut 2022-11-16 09:49
- Observations in a linear model ElMaestro 2022-11-16 20:50
- And the Oscar goes to: Replicates! Helmut 2022-11-17 10:19
- It's all in the df's ElMaestro 2022-11-18 17:28