The n ext crackpot iteration [Two-Stage / GS Designs]
Further:
When we initiate a Potvin run, the assumed GMR (0.95, or whatever is known at the time the iterations are started) does not change as we go. The same goes for the target power.
Note this:
CV = c(100:1000)/1000
N  = rep(0, length(CV))
for (i in 1:length(N))
  N[i] = GetNps(0.95, CV[i], 0.8) ## Yes, Hötzi, plug in your own function here
plot(CV, N)
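GetNps() is my own function and is not shown here; a minimal stand-in sketch, assuming the PowerTOST package is available and that Nps means subjects per sequence of a 2×2×2 crossover, might look like this:
library(PowerTOST)
## hypothetical stand-in for GetNps(): subjects per sequence in a 2x2x2
## crossover for a given GMR, CV and target power, via sampleN.TOST()
GetNps = function(GMR, CV, T.Pwr)
{
  n = sampleN.TOST(theta0 = GMR, CV = CV, targetpower = T.Pwr,
                   design = "2x2", print = FALSE)[["Sample size"]]
  return(n/2) ## total sample size is balanced over two sequences, so halve it
}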
Aha, within reason the necessary number of subjects per sequence (given the GMR and the target power) varies in a simple manner with CV; perhaps this is well modeled with a polynomial?
M=lm(N~poly(CV, 4, raw=TRUE))
points(CV, fitted(M), col="red")
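The fitted coefficients, which get hard-coded in the approximating function below, can be read off directly, for example:
signif(coef(M), 7) ## intercept plus raw polynomial coefficients of CV^1 ... CV^4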
Oh how wonderful!! The polynomial provides a fantastic approximation within our entire interval of interest.
Let us create an approximating function:
GuessNps = function(CV, GMR, T.Pwr)
{
  if ((GMR == 0.95) && (T.Pwr == 0.8))
    Rslt = 6 - 43.3322*CV + 363.4835*CV*CV - 238.5417*CV*CV*CV + 62.7520*CV*CV*CV*CV
  ## get your coefficients from summary(M) or coef(M)
  ## note: we might not want to write blah^3 etc. if we optimize for speed, not sure.
  ## fill in the other GMR, T.Pwr scenarios here

  return(round(Rslt, 0)) ## perhaps ceiling() would be better?
}
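If we are really optimizing for speed, the polynomial can also be evaluated in Horner form (nested multiplication), which gets by with four multiplications and four additions and avoids any powers; the same GMR = 0.95, target power 0.8 case would read:
Rslt = 6 + CV*(-43.3322 + CV*(363.4835 + CV*(-238.5417 + CV*62.7520)))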
And this one is really, really fast! The initial guess may be as accurate as Zhang's (I haven't checked because I don't speak Zhango), but it does not rely on pnorm, qnorm, pt, or qt, and there is no division, sqrt, exp, or log. Therefore this is blazing fast.
All it takes is a split second of numerical gymnastics before a Potvin run (and you do not need c(100:1000)/1000; we can even use c(10:100)/100 and get the same quality of estimate, cheaper).
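As a sketch, the cheaper fit on the coarser grid (re-using the GetNps stand-in above) would be:
CV.c = c(10:100)/100
N.c  = sapply(CV.c, function(x) GetNps(0.95, x, 0.8))
M.c  = lm(N.c ~ poly(CV.c, 4, raw = TRUE))
coef(M.c) ## compare with coef(M) from the fine grid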
Actually, we might even abolish the sample-size iteration altogether, because:
for (i in 1:length(N))
  cat("i=", i, "Diff=", GetNps(0.95, CV[i], 0.8) - GuessNps(CV[i], 0.95, 0.8), "\n")
You see, the sample-size estimates are sort of almost perfect already. If you want to remove the very few differences of +1 and −1, just increase the polynomial degree above.
The implementation of Potvin et al. is hereby sped up by a factor of 10 gazillion.





—
Pass or fail!
ElMaestro